This blog post is the answer to a common request we get from people learning how to use Views.
The question is: "How do I automatically add a link to a field?"
The answer is straightforward ... once you know how.
Our old training site was looking a bit long in the tooth. It was not only Drupal 6, but also had an old Acquia design several versions behind the current main site. It was time for a major update.
Dave Myburgh, lead developer for Acquia.com, recently gave two webinars about the experience. He shares specific tips on which modules he used to keep the development lightweight and flexible.
Besides fixing Debian Bug #744018, the release fixes the following two vulnerabilities (as mentioned in the bug report):
- CVE-2014-0165 WordPress privilege escalation: prevent contributors from publishing posts
- CVE-2014-0166 WordPress potential authentication cookie forgery
If you use the Debian package, I recommend upgrading as soon as it is available.
- WordPress 3.8.2 Addresses 2 Vulnerabilities, Includes 3 Security Hardening Changes (news.softpedia.com)
- WordPress 3.8.2 now available to download and install (thewayoftheweb.net)
Today, I decided to set my X230 back to UEFI-only boot, after having changed that for a BIOS upgrade recently (to fix a resume bug). I then chose to save the settings and received several error messages telling me that the system had run out of resources (probably storage space for UEFI variables).
I rebooted my machine, and saw no logo appearing. Just something like an underscore on a text console. The system appears to boot normally otherwise, and once the i915 module is loaded (and we’re switching away from UEFI’s Graphical Output Protocol [GOP]) the screen works correctly.
So it seems the GOP broke.
What should I do next?
Filed under: General
For a while now, I have been looking for a sensible offsite backup solution for use at home. My requirements are simple: it must be cheap and locally encrypted (in other words, I keep the encryption keys, and the storage provider does not have access to my private files). One idea my friends and I had many years ago, before the cloud storage providers showed up, was to use Google Mail as storage, writing a Linux block device that stores blocks as emails in the mail service provided by Google, and thus get heaps of free space. On top of this one can add encryption, RAID and volume management to get lots of (fairly slow, I admit that) cheap and encrypted storage. But I never found time to implement such a system. Over the last few weeks, however, I have looked at a system called S3QL, a locally mounted, network-backed file system with the features I need.
S3QL is a FUSE file system with a local cache and cloud storage, handling several different storage providers, any with an Amazon S3, Google Drive or OpenStack API. There are heaps of such storage providers. S3QL can also use a local directory as storage, which combined with sshfs allows for file storage on any ssh server. S3QL includes support for encryption, compression, de-duplication, snapshots and immutable file systems, allowing me to mount the remote storage as a local mount point and look at and use the files as if they were local, while the content is stored in the cloud as well. This allows me to have a backup that should survive a fire. The file system can not be shared between several machines at the same time, as only one can mount it at a time, but any machine with the encryption key and access to the storage service can mount it if it is unmounted.
It is simple to use. I'm using it on Debian Wheezy, where the package is already included. To get started, run apt-get install s3ql. Next, pick a storage provider. I ended up picking Greenqloud, after reading their nice recipe on how to use S3QL with their Amazon S3 service, because I trust the laws in Iceland more than those in the USA when it comes to keeping my personal data safe and private, and thus would rather spend money on a company in Iceland. Another nice recipe is available in the article "S3QL Filesystem for HPC Storage" by Jeff Layton in the HPC section of Admin magazine. When the provider is picked, figure out how to get the API key needed to connect to the storage API. With Greenqloud, the key did not show up until I had added payment details to my account.
Armed with the API access details, it is time to create the file system. First, create a new bucket in the cloud. This bucket is the file system storage area. I picked a bucket name reflecting the machine that was going to store data there, but any name will do. I'll refer to it as bucket-name below. In addition, one needs the API login and password, and a locally created password. Store it all in ~root/.s3ql/authinfo2 like this:

[s3c]
storage-url: s3c://s.greenqloud.com:443/bucket-name
backend-login: API-login
backend-password: API-password
fs-passphrase: local-password
I create my local passphrase using pwget 50 or similar, but any sensible way to create a fairly random password should do it. Armed with these details, it is now time to run mkfs, entering the API details and password to create it:

# mkdir -m 700 /var/lib/s3ql-cache
# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl s3c://s.greenqloud.com:443/bucket-name
Enter backend login:
Enter backend password:
Before using S3QL, make sure to read the user's guide, especially
the 'Important Rules to Avoid Loosing Data' section.
Enter encryption password:
Confirm encryption password:
Generating random encryption key...
Creating metadata tables...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.00 MB of compressed metadata.
#
The next step is mounting the file system to make the storage available.

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 4 upload threads.
Downloading and decompressing metadata...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Mounting filesystem...
# df -h /s3ql
Filesystem                              Size  Used Avail Use% Mounted on
s3c://s.greenqloud.com:443/bucket-name  1.0T     0  1.0T   0% /s3ql
#
The file system is now ready for use. I use rsync to store my backups in it, and as the metadata used by rsync is downloaded at mount time, no network traffic (and storage cost) is triggered by running rsync. To unmount, one should not use the normal umount command, as it will not flush the cache to the cloud storage; instead, run the umount.s3ql command like this:

# umount.s3ql /s3ql
#
There is a fsck command available to check the file system and correct any problems detected. This can be used if the local server crashes while the file system is mounted, to reset the "already mounted" flag. This is what it looks like when processing a working file system:

# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name
Using cached metadata.
File system seems clean, checking anyway.
Checking DB integrity...
Creating temporary extra indices...
Checking lost+found...
Checking cached objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking objects (reference counts)...
Checking objects (backend)...
..processed 5000 objects so far..
..processed 10000 objects so far..
..processed 15000 objects so far..
Checking objects (sizes)...
Checking blocks (referenced objects)...
Checking blocks (refcounts)...
Checking inode-block mapping (blocks)...
Checking inode-block mapping (inodes)...
Checking inodes (refcounts)...
Checking inodes (sizes)...
Checking extended attributes (names)...
Checking extended attributes (inodes)...
Checking symlinks (inodes)...
Checking directory reachability...
Checking unix conventions...
Checking referential integrity...
Dropping temporary indices...
Backing up old metadata...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.89 MB of compressed metadata.
#
Thanks to the cache, working on files that fit in the cache is very quick, about the same speed as local file access. Uploading large amounts of data is, for me, limited by the bandwidth out of and into my house. Uploading 685 MiB with a 100 MiB cache gave me 305 kiB/s, which is very close to my upload speed, and downloading the same Debian installation ISO gave me 610 kiB/s, close to my download speed. Both were measured using dd. So for me, the bottleneck is my network, not the file system code. I do not know what a good cache size would be, but suspect that the cache should be larger than your working set.
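Measurements like the ones above can be reproduced with plain dd; here is a sketch of that kind of test. The file names are placeholders, and you would point them at a file under the S3QL mount point (such as /s3ql) to measure the real cloud throughput rather than local disk speed:

```shell
# Write test: push 8 MiB through dd and let it report the rate on
# stderr; conv=fsync makes dd flush before reporting, so the number
# is not just the speed of the page cache.
dd if=/dev/zero of=/tmp/s3ql-speedtest bs=1M count=8 conv=fsync
# Read test: pull the same file back and note the reported rate.
dd if=/tmp/s3ql-speedtest of=/dev/null bs=1M
rm -f /tmp/s3ql-speedtest
```

For a meaningful read measurement against the cloud, the file should be larger than the local cache, or the read will be served from the cache at local speed.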
I mentioned that only one machine can mount the file system at a time. If another machine tries, it is told that the file system is busy:

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 8 upload threads.
Backend reports that fs is still mounted elsewhere, aborting.
#
The file content is uploaded when the cache is full, while the metadata is uploaded once every 24 hours by default. To ensure the file system content is flushed to the cloud, one can either unmount the file system, or ask S3QL to flush the cache and metadata using s3qlctrl:

# s3qlctrl upload-meta /s3ql
# s3qlctrl flushcache /s3ql
#
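To avoid having to remember this, one could let cron do the flushing. A sketch of a root crontab entry (the mount point and schedule here are just examples, adjust to taste):

# Flush dirty blocks and upload fresh metadata every night at 03:00.
0 3 * * * s3qlctrl flushcache /s3ql && s3qlctrl upload-meta /s3ql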
If you are curious about how much space your data uses in the cloud, and how much compression and de-duplication cut down on the storage usage, you can use s3qlstat on the mounted file system to get a report:

# s3qlstat /s3ql
Directory entries:    9141
Inodes:               9143
Data blocks:          8851
Total data size:      22049.38 MB
After de-duplication: 21955.46 MB (99.57% of total)
After compression:    21877.28 MB (99.22% of total, 99.64% of de-duplicated)
Database size:        2.39 MB (uncompressed)
(some values do not take into account not-yet-uploaded dirty blocks in cache)
#
I mentioned earlier that there are several possible suppliers of storage. I did not try to locate them all, but am aware of at least Greenqloud, Google Drive, Amazon S3 web services, Rackspace and Crowncloud. The latter even accepts payment in Bitcoin. Pick one that suits your needs. Some of them provide several GiB of free storage, but the price models are quite different, and you will have to figure out what suits you best.
While researching this blog post, I had a look at research papers and posters discussing the S3QL file system. There are several, which told me that the file system is getting critical scrutiny from the scientific community, and that increased my confidence in using it. One nice poster is titled "An Innovative Parallel Cloud Storage System using OpenStack's SwiftObject Store and Transformative Parallel I/O Approach" by Hsing-Bung Chen, Benjamin McClelland, David Sherrill, Alfred Torrez, Parks Fields and Pamela Smith. Please have a look.
Given my problems with different file systems earlier, I decided to check out the mounted S3QL file system to see if it would be usable as a home directory (in other words, whether it provides POSIX semantics when it comes to locking, umask handling, etc.). Running my test code to check file system semantics, I was happy to discover that no errors were found. So the file system can be used for home directories, if one chooses to do so.
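The test code mentioned above is the author's own and is not shown, but a minimal probe of one of those POSIX properties (advisory locking) can be sketched with flock from util-linux; /s3ql is the mount point used in this text:

```shell
# Hypothetical, minimal probe of POSIX advisory locking (not the
# author's actual test suite). Run it with the current directory on
# the file system under test, e.g. /s3ql.
touch locktest
# Hold an exclusive lock, then try to take it again, non-blocking,
# from a child process. On a POSIX-behaving file system the second
# attempt must fail immediately, so flock exits with status 1.
flock -n locktest -c 'flock -n locktest -c true; echo "second lock rc=$?"'
rm -f locktest
```

If the file system handles locking correctly, the script prints "second lock rc=1"; a broken implementation would let the second lock succeed and print rc=0.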
If you do not want a locally mounted file system, and want something that works without the Linux FUSE file system, I would like to mention the Tarsnap service, which also provides locally encrypted backup using a command line client. It has a nicer access control system, where one can split out read and write access, allowing some systems to write to the backup and others to only read from it.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
The Form API has a form element called managed_file. It uploads a file and adds it to the managed files table. This way Drupal has knowledge about and control over it. But then I ran into a situation where, after a certain amount of time, the image got removed. It just disappeared. What is happening here?
Well, the managed_file element works with Ajax. To smooth the process, it adds the managed file and leaves its status set to temporary until someone specifies "this is my file, it is managed". You do that by adding this snippet of code to your submit handler:

$file = file_load($form_state['values']['file_element_name']);
// Change status to permanent.
$file->status = FILE_STATUS_PERMANENT;
// Save the change; without this the file stays temporary and will
// eventually be garbage-collected.
file_save($file);

If you have your form managed by system_settings_form(), you want to add an extra submit handler. You can do that this way:

$form['#submit'][] = 'extra_admin_submit';
If you are using OpenSSL (or ever did use it with any of your current keypairs in the last 3-4 years), you are probably in a rush to upgrade all your systems and replace all your private keys right now.
If your certificate authority is CACert.org then there is an extra surprise in store for you. CACert.org has changed their hash to SHA-512 recently and some client/server connections silently fail to authenticate with this hash. Any replacement certificates you obtain from CACert.org today are likely to be signed using the new hash. Amongst other things, if you use CACert.org as the CA for a distributed LDAP authentication system, you will find users unable to log in until you upgrade all SSL client code or change all clients to trust an alternative root.
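To see which hash a given certificate is signed with (for example, to check whether a replacement certificate from CACert.org uses the new SHA-512 hash), openssl can print the signature algorithm. The sketch below generates a throwaway self-signed SHA-512 certificate just to have something to inspect; the file names are placeholders, and in practice you would point the second command at your real certificate:

```shell
# Generate a throwaway self-signed certificate signed with SHA-512
# (placeholder paths; skip this step and use your real certificate).
openssl req -x509 -newkey rsa:2048 -nodes -sha512 -days 1 \
    -subj "/CN=example" -keyout /tmp/test-key.pem -out /tmp/test-cert.pem
# Print the signature algorithm; for a SHA-512-signed RSA certificate
# this reports "sha512WithRSAEncryption".
openssl x509 -in /tmp/test-cert.pem -noout -text | grep "Signature Algorithm"
```

Running the same x509 command against each certificate in a chain is a quick way to find out which clients will choke: anything older that only understands SHA-1 signatures will fail to validate a sha512WithRSAEncryption certificate.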
Modules Unraveled: 103 Content Branching and Static Site Generation Using Zariz with Amitai Burstein - Modules Unraveled Podcast
- What is Zariz?
- How did this come about?
- How does it help content creators?
- How is this different from Workbench Moderation, and the default revisioning system?
- You mentioned that it duplicates nodes; how do the URLs stay intact?
- Talk a bit about how you can create static site from a Drupal site.
- Content staging
- Static site generation
- What about authenticated users?
- How does this help performance and scalability?
- Is Zariz an alternative to drupal.org/project/sps?
Screencast demo starts at about 40:23.
Episode Links:
- Amitai on drupal.org
- Amitai on Twitter
- Zariz Repo
Zoe slept in even later this morning. I'm liking this colder weather. We had nothing particular happening first thing today, so we just snuggled in bed for a bit before we got started.
Tumble Tastics were offering free trial classes this week, so I signed Zoe up for one today. She really enjoyed going to Gold Star Gymnastics in the US, and has asked me about finding a gym class over here every now and then.
Tumble Tastics is a much smaller affair than Gold Star, but at 300 metres from home on foot, it's awesomely convenient. Zoe scootered there this morning.
It seems to be physically part of what I'm guessing used to be the Church of Christ's church hall, so it's not big at all, but the room that Zoe had her class in still had plenty of equipment in it. There were 8 kids in her class, all about her size. I peeked around the door and watched.
Most of the class was instructor led and mainly mat work, but then part way through, the parents were invited in, and the teacher walked us all through a course around the room, using the various equipment, and the parents had to spot for their kids.
The one thing that cracked me up was when the kids were supposed to be tucking into a ball and rocking on their backs. Zoe instead did a Brazilian Jiu-Jitsu break-fall and fell backwards slapping the mat instead. It was good to see that some of what she learned in those classes has kicked in reflexively.
She really enjoyed the rope swing and hanging upside down on the uneven bars.
The class ran for 50 minutes (I was only expecting it to last 30 minutes) and Zoe did really well straight off. I think we'll make this her 4th term extra-curricular activity.
We scootered home the longer way, because we were in no particular hurry. Zoe did some painting when we got home, and then we had lunch.
After lunch we goofed off for a little bit, and then we did quiet time. Zoe napped for about two and a half hours, and then we did some plaster play.
I'd picked up a fish ice cube tray from IKEA on the weekend for 99 cents (cue Thrift Shop), and I bought a bag of plaster of Paris a while back, but hadn't had a chance to do anything with it yet. I bribed Zoe into doing quiet time by telling her we'd do something new with the ice cube tray I'd bought.
We mixed up a few paper cups with plaster of Paris in them and then I squirted some paint in. I'm not sure if the paint caused a reaction, or the plaster was already starting to set by the time the paint got mixed in, but it became quite viscous as soon as the paint was mixed in. We did three different colours and used tongue depressors to jam it into the tray. Zoe seemed to twig that it was the same stuff as the impressions of her baby feet, which I thought was a pretty clever connection to make.
After that, there was barely enough time to watch a tiny bit of TV before Sarah arrived to pick Zoe up. I told her that her plaster would be set by the time she got dropped off in the morning.
I procrastinated past the point of no return and didn't go for a run. Instead I decided to go out to Officeworks and print out some photos to stick in the photo frame I bought from IKEA on the weekend.
Upgraded to Boost 1.54.0
Adjusted build script local/script/CreateBoost.sh accordingly
Renamed generation_runge_kutta_cash_karp54_classic.hpp to generation_runge_kutta_cash_karp54_cl.hpp to remain within the 100-character limit for tar
Fuse Interactive: Watch as I try to upgrade this module to Drupal 8. What happens next you won't BELIEVE!
I recently spoke at the Drupal Melbourne meetup about running Puppet and Docker to increase security when running multiple sites on the one host. It's a lot of work to get set up properly for a remote speaker, so I would like to thank the organisers for allowing me to present.
Welcome back to the pond! Last week we touched on the importance of mentoring juniors and GitHub best practices. In this week's episode, we'll be following up on the junior workflow from last week by discussing two tools you should definitely have and how to install them, exploring new ground by touching on some entry-level SCSS techniques, sharing my AHA! and FAIL moments of the week, and lastly, our weekly query for you good people out there to ponder. So let's jump right into it, shall we?
With the rapidly approaching release of Drupal 8, many Symfony developers may be considering going to Austin for DrupalCon in June. Our advice? Do it!