by Jeff AmaralThe Challenge:
Display the second level menu items of the currently active page, fully expanded, in the left sidebar of every page but the node edit form. And have this configuration saved in code so that every member of your development team can have it all work automatically.
Drupal menus allow you to hierarchically organize the content on your site. In Drupal 7, there are four system menus available by default: Navigation, Management, User menu, and Main menu. The Main menu is generally used for primary navigation (AKA the links generally shown at the top of every page).
All menus in Drupal can be displayed in one of two ways: as a variable in your theme settings or as a block. These blocks are limited. They will display the full menu (including top-level items) and, by default, only expand one of those top-level items if you are on a child page.
Want to show only the second level of the menu with the current page activated? Or the third? Want to expand all of the menu items so you can suckerfish them? You can’t.
Because of these deficiencies, lots of Drupal site builders turn to Menu Block, an incredibly popular and useful module....
In my new position at Drupalize.Me I have the luxury of helping a lot of projects in little ways. Being able to context switch quickly helps a lot. This means I've put a lot of time into how my workstation is setup so that I can easily move from one project to another. With the new job I also decided to add OSX to the mix of computers that I use on a daily basis.
While working for Acquia in the last few months – helping them maintain Drupal Gardens – I had two tasks that required to write small modules. In this blog post, I would like to introduce these modules.Safer Permissions
The first one is Safer Permissions.
First of all: I'm a featured speaker. I'll be hosting a session called 'Drupal 8 for Site Builders'. Come and watch to get an overview of all the wonders and power Drupal 8 has for creating a site. However, there are other reasons to attend DrupalCon Prague, and they are not Drupal related at all.Kutná Hora
Kutná Hora is a little town, about 80 kilometers away from Prague, world known for its Sedlec ossuary which contains a lot of human bones, used for decorating the inside. While this may sound luguber, a visit to this small chapel is not something you will ever forget. And if you don't go inside, the top of the chapel has a skull, they really thought about everything. The easiest way to get there is by train and even arriving in the small station is worth travelling. I have seen it already, but will be going again, so feel free to join me.CERN opendays
CERN, the European Organization for Nuclear Research, is opening its doors on 28th and 29th of september for the public for it so called CERN opendays. CERN lies in Geneva, Switserland, about 8 hours driving from Prague, or maybe 1 hour flying by plane. You'll be able to go down 100 meters underground and look at the Large Hadron collider and all other amazingly cool experiments scientists and engineers perform. Unless you are Daniel "dawehner" Wehner, you don't get that many chances to visit this place in your lifetime, especially since they only open it up publicly every 4 years. Tickets are for sale starting August, so keep an eye on the website if you want to go. That means I won't be attending the post-sprints, but honestly, I can live with that.
Many people are taking a fresh look at IT security strategies in the wake of the NSA revelations. One of the issues that comes up is the need for stronger encryption, using public key cryptography instead of just passwords. This is sometimes referred to as certificate authentication, but certificates are just one of many ways to use public key technology.
One of the core decisions in this field is the key size. Most people have heard that 1024 bit RSA keys have been cracked and are not used any more for web sites or PGP. The next most fashionable number after 1024 appears to be 2048, but a lot of people have also been skipping that and moving to 4096 bit keys. This has lead to some confusion as people try to make decisions about which smartcards to use, which type of CA certificate to use, etc. The discussion here is exclusively about RSA key pairs, although the concepts are similar for other algorithms (although key lengths are not equivalent)The case for using 2048 bits instead of 4096 bits
- Some hardware (many smart cards, some card readers, and some other devices such as Polycom phones) don't support anything bigger than 2048 bits.
- Uses less CPU than a longer key during encryption and authentication
- Using less CPU means using less battery power (important for mobile devices)
- Uses less storage space: while not an issue on disk, this can be an issue in small devices like smart cards that measure their RAM in kilobytes rather than gigabytes
So there are some clear benefits of using 2048 bit keys and not just jumping on the 4096 bit key bandwagonThe case for using 4096 bits
- For some types of attack, security is not just double, it is exponential. 4096 is significantly more secure in this scenario. If an attack is found that allows a 2048 bit key to be hacked in 100 hours, that does not imply that a 4096 bit key can be hacked in 200 hours. The hack that breaks a 2048 bit key in 100 hours may still need many years to crack a single 4096 bit key
- Some types of key (e.g. an OpenPGP primary key which is signed by many other people) are desirable to keep for an extended period of time, perhaps 10 years or more. In this context, the hassle of replacing all those signatures may be quite high and it is more desirable to have a long-term future-proof key length.
Many types of public key cryptography, such as X.509, offer an expiry feature. This is not just a scheme to force you to go back to the certificate authority and pay more money every 12 months. It provides a kind of weak safety net in the case where somebody is secretly using an unauthorised copy of the key or a certificate that the CA issued to an imposter.
However, the expiry doesn't eliminate future algorithmic compromises. If, in the future, an attacker succeeds in finding a shortcut to break 2048 bit keys, then they would presumably crack the root certificate as easily as they crack the server certificates and then, using their shiny new root key, they would be in a position to issue new server certificates with extended expiry dates.
Therefore, the expiry feature alone doesn't protect against abuse of the key in the distant future. It does provide some value though: forcing people to renew certificates periodically allows the industry to bring in new minimum key length standards from time to time.
In practical terms, content signed with a 2048 bit key today will not be valid indefinitely. Imagine in the year 2040 you want to try out a copy of some code you released with a digital signature in 2013. In 2040, that signature may not be trustworthy: most software in that era would probably see the key and tell you there is no way you can trust it. The NIST speculates that 2048 bit keys will be valid up to about the year 2030, so that implies that any code you sign with a 2048 bit key today will have to be resigned with a longer key in the year 2029. You would do that re-signing in the 2048 bit twilight period while you still trust the old signature. Fortunately, there are likely to be few projects where such old code will be in demand.4096 in practice
One of the reasons I decided to write this blog is the fact that some organisations have made the 4096 bit keys very prominent (although nobody has made them mandatory as far as I am aware).
Debian's guide to key creation currently recommends 4096 bit keys (although it doesn't explicitly mandate their use)
Fedora's archive keys are all 4096 bit keys.
These developments may leave people feeling a little bit naked if they have to use a shorter 2048 bit key for any of the reasons suggested above (e.g. for wider choice of smart cards and compatibility with readers). It has also resulted in some people spending time looking for 4096 bit smart cards and compatible readers when they may be better off just using 2048 bits and investing their time in other security improvements.
In fact, the "risk" of using only 2048 rather than 4096 bits in the smartcard may well be far outweighed by the benefits of hardware security (especially if a smartcard reader with pin-pad is used)
My own conclusion is that 2048 is not a dead duck and using this key length remains a valid decision and is very likely to remain so for the next 5 years at least. The US NIST makes a similar recommendation and suggests it will be safe until 2030, although it is the minimum key length they have recommended.
My feeling is that the Debian preference for 4096 bit PGP keys is not based solely on security, rather, it is also influenced by the fact that Debian is a project run by volunteers. Given this background, there is a perception that if everybody migrates from 1024 to 2048, then there would be another big migration effort to move all users from 2048 to 4096 and that those two migrations could be combined into a single effort going directly from 1024 to 4096, reducing the future workload of the volunteers who maintain the keyrings. This is a completely rational decision for administrative reasons, but it is not a decision that questions the security of using 2048 bit keys today. Therefore, people should not see Debian's preference to use 4096 bit keys as a hint that 2048 bit keys are fundamentally flawed.
Unlike the Debian keys (which are user keys), the CACert.org roots and Fedora archive signing keys are centrally managed keys with a long lifetime and none of the benefits of using 2048 bit keys is a compelling factor in those use cases.Practical issues to consider when choosing key-length
Therefore, the choice of using 2048 or 4096 is not pre-determined, and it can be balanced with a range of other decisions:
- Key lifetime: is it a long life key, such as an X.509 root for an in-house CA or an OpenPGP primary key? Or is it just for a HTTPS web server or some other TLS server that can be replaced every two years?
- Is it for a dedicated application (e.g. a closed user group all using the same software supporting 4096 bit) or is it for a widespread user base where some users need to use 2048 bit due to old software/hardware?
- Is it necessary to use the key(s) in a wide variety of smartcard readers?
- Is it a mobile application (where battery must be conserved) or a server that is likely to experience heavy load?
New hardware architectures and custom co-processor extensions are introduced to the market on a regular basis. While it is relatively easy to port a proprietary software stack to a new platform, FOSS distributions face major challenges. Bootstrapping distributions proved to be a yearlong manual process in the past due to a large amount of dependency cycles which had to be broken by hand.
In this paper we propose an heuristic-based algorithm to remove build dependency cycles and to create a build order for automatically bootstrapping a binary based software distribution on a new platform.
Recently, there's been discussions on IRC and the debian-devel mailing list about how to notify users, typically from a cron script or a system daemon needing to tell the user their hard drive is about to expire. The current way is generally "send email to root" and for some bits "pop up a notification bubble, hoping the user will see it". Emailing me means I get far too many notifications. They're often not actionable (apt-get update failed two days ago) and they're not aggregated.
I think we need a system that at its core has level and edge triggers and some way of doing flap detection. Level interrupts means "tell me if a disk is full right now". Edge means "tell me if the checksums have changed, even if they now look ok". Flap detection means "tell me if the nightly apt-get update fails more often than once a week". It would be useful if it could extrapolate some notifications too, so it could tell me "your disk is going to be full in $period unless you add more space".
The system needs to be able to take in input in a variety of formats: syslog, unstructured output from cron scripts (including their exit codes), snmp, nagios notifications, sockets and fifos and so on. Based on those inputs and any correlations it can pull out of it, it should try to reason about what's happening on the system. If the conclusion there is "something is broken", it should see if it's something that it can reasonably fix by itself. If so, fix it and record it (so it can be used for notification if appropriate: I want to be told if you restart apache every two minutes). If it can't fix it, notify the admin.
It should also group similar messages so a single important message doesn't drown in a million unimportant ones. Ideally, this should be cross-host aggregation. The notifications should be possible to escalate if they're not handled within some time period.
I'm not aware of such a tool. Maybe one could be rigged together by careful application of logstash, nagios, munin/ganglia/something and sentry. If anybody knows of such a tool, let me know, or if you're working on one, also please let me know.
The DrupalCon Prague team welcomes you to submit to our call for content for our September conference.
Why content, not papers? Well, the DrupalCon program has changed since we last did the call. We've listened to feedback from DrupalCon attendees, and we're hoping our new direction will really resonate with our audience. In addition to our regular great offerings of Sessions, BoFs, CXO, Keynotes and Training, we're excited to roll out a few new initiatives.
Recently, I had a need to be able to backup a Drupal database but to skip a number of tables that I didn’t care about; mostly tables related to caching. Given most of these tables are prefixed with cache*, I was hoping for a way to specify tables to ignore using a wildcard character like ‘%’ or ‘*’.
Chromatic: Responsive Grid Building with Sass and Zen Grids: The Tale of the Breakpoint Grid Breakdown mixin
A discussion on responsive Sass strategy and how to solve the common problem of numerous grids needing varying numbers of columns across many breakpoints. Can we accomplish this with one mixin?
Building sites using Drupal Commerce is something we often do at Marzee Labs, but when EnjoyThis approached us to build an e-commerce site for The London Distillery Company featuring a “design your own whisky cask” part, we immediately seized that opportunity to do something different. In this post, I’ll review the architecture of the project.Challenges
The new site for The London Distillery Company had to appeal to a young urban crowd. EnjoyThis took on the challenge of creating a visually appealing design using big images, bold typography & plenty of videos (which they shot themselves).
Drupal Commerce was chosen to build the site, which needed a lot of customization that would have been beyond most open-source ecommerce platforms. We needed multi-country & multi-continent shipping which influences shipping costs, delivery times & taxes. We also needed to offer customers the possibility to use coupons, so they’d get free shipping, receive a percentage off their purchase, or get a free bottle for every three bottles they buy.
The most challenging part of the project was to allow visitors to design their own bespoke casks, choosing from options such as barrel size (40 liters, 180 liters, or 220 liters, for the very thirsty ones!), wood type and barley. Every one of these options has a different price and attributes, and some of the options would in turn enable more options. For example, if you pick the Maris Otter barley type, you might want to take the peated or the non-peated version.
After the user has customized his or her own cask, we allow them to share their configuration via mail, Twitter or Facebook, so we needed unique URLs for every cask combination.
The first step in the “design your own whisky cask” process. Selecting a different option triggers an AJAX request that loads a different product combination. Try it out yourself.UX and Front-end
The secret of marrying a good UX implementation to the one-pager “design your own cask” is very simple: relying on what Drupal Commerce gives us. The danger would be to sink in heavy template usage to accommodate the markup. Instead we used a couple of preprocess functions, as well as the standard and almost untouched commerce HTML.
All in all, the best decision we made for this uncommon commerce page was to keep most of what Drupal Commerce would give us out of the box and do a make up with jQuery rather than reinventing the wheel.Under the hood
To build out the “design your own cask” tool, we started from a description and a price for each of the attributes that would made up the final cask: a 20-liter barrel costs that much, adding the peated option would add that much, etc.
We made the maths and found that a user can chose between roughly 200 different cask combinations. Each combination is built out as a separate product and bundled in one single product display (see the bespoke page), taking advantage of Drupal Commerce’s flexible product / product display separation. We built a script to generate the different combinations, and used Commerce Feeds to get that data into Drupal. Future price changes are then easily synced using the built-in synchronization of Commerce Feeds.
Each combination also shows a breakdown of the costs of each selected attribute. Selecting the “peated” option for the barley type would add an additional 200 pounds for example. We store that data in a separate node that is referenced from the product entity. Every time an attribute is selected by the user, we receive a correct reference to the price breakdown node of that particular combination and extract these components using jQuery.
The third step in the cask configuration. The visitor can choose the type of barley, which in turn triggers new choices.
We are very happy with the final site, especially the "Bespoke Tool" which we recommend you try out. Drupal Commerce proved to be a very flexible framework, even for a use case that requires more than just the typical product pages.
Disclaimer: our friends at EnjoyThis designed the whole site, including beautifully shot images and videos to promote the whisky distillery. Marzee Labs architected and implemented the e-commerce part using Drupal Commerce and implemented the User Experience of the “design your own cask” part.
Drupal makes it easy to add fields to your site, which you can use to trigger custom functionality in your modules and themes. It's nice, but there's also a dark side. As sites grow more complex, these fields can get out of hand, resulting in one or more of these outcomes:
- An utterly exhausting number of fields, without any way to see which ones are important
- Fields without any logical grouping
- Fields that do nothing
- Fields where certain combinations of choices result in poorly displayed content or broken functionality
- Fields more suitable for admins than content creators
These problems grow out of fairly innocuous roots. Let me use a simple example:
You've added a field to your page content type called "layout" which lets you choose between two options: a page layout that prominently displays the image you've uploaded to the image field, and one that emphasizes text without displaying any image (you use some CSS being the scenes to do this magic).
Naturally, choosing the "Strong Text w/out Image" option, turns the image field into a "field that does nothing." It's easy to justify, but the more you let this, and other field mismanagement creep into your UI's, the more frustrated your new users will become.
Fortunately, we have some options for managing these kinds of issues:
- The Conditional Fields module provides an admin UI for letting you set up dependencies and conditions for fields without any programming. One great use is to hide "fields that do nothing" based on the contents of previous fields. It's like usability gold.
- The Field Permissions module lets you hide fields on a user permissions basis, which is great if you've got advanced functionality in fields, and you really want to simplify things for content creators.
- The Field Group module lets you combine fields into groups with a variety of UI options, like collapsible containers, vertical tabs, horizontal tabs, and others. This is great for providing whatever form interface is most intuitive for your users. You can also look into the Field Collection module for a additional options.
- The Computed Field module lets you dynamically pre-populate fields with whatever values make sense. Smart default values = big win for users.
If you'd rather make changes in the code than the UI, the Form API in Drupal 7 has a lot of the features that these modules provide. Form states allow developers to programmatically trigger behaviors like hiding, showing, or populating fields based on the states of other fields. Collapsible fieldsets allow for logical grouping of fields and you can set whether their collapsed by default or not.
And being sensitive to the default state of fields makes a huge difference. Here's my usability rule of thumb, regardless of whether I'm using contributed modules with admin UI's or the Form API that comes with Drupal:
Required fields (or those of major importance): Expanded by default
Optional fields (or those of minor importance): Collapsed by default
Fields that do nothing: Hidden
What advice do you have for managing fields in Drupal?
So, regarding my cry for help...
I did get several replies and did more research on my own. The TL;DR up to now is "I have a fully functioning device with no input method and my data may well die on it":
- The device is passphrase-protected and encrypted so I can't simply connect an USB cable and use MTP.
- I can't connect a mouse or keyboard as LG, in their endless wisdom, didn't design the USB port with enough power in mind so it can't support USB OTG on its own.
- Google then removed USB OTG support from the Nexus 4's kernel. It's not as if powered USB hubs existed so this is obviously the correct path of action.
- While I can install new programs via Google Play, Android 4.0 and above prevents newly installed programs to start without user interaction.
- LG points towards a third-party service for out-of-warranty repairs and as part of their Terms of Service, you have to forfeit all data as they "always update the software", i.e. they will prolly ship random other devices to you on a regular basis instead of what you sent in.
- The Nexus 4 is running stock Android, locked bootloader and all
The last two options I see are
- Try to find a way to get a custom ROM onto the device with the help of USB cable and physical buttons only without destroying the encrypted data (yeah, right...)
- Try and source a display so I can repair the device myself. But as not even ifixit.com offers a howto or parts... I suspect this may fail.
And I can not even be reached under my normal number as I don't dare turning the device off and/or removing the SIM as that may prevent me from recovering with the running device, somehow.
At this year’s GPN13 I gave a talk about Debian Code Search. It was in German, so I spent a few hours creating english subtitles.
Get the video at http://ftp.ccc.de/events/gpn/gpn13/gpn13-debian-code-search.mp4 (84 MiB) and the corresponding subtitle file at http://t.zekjur.net/gpn13-debian-code-search.srt. Drop both files in the same directory, run mplayer gpn13-debian-code-search.mp4 and press v to enable subtitles. I intend to eventually put the (subtitled) video on YouTube and refer to it from codesearch.debian.net, but I wanted to post the video in its current form already.
The presentation itself explains the motivation behind Debian Code Search and how it works. You don’t need any knowledge of the system in order to understand the talk. Enjoy!
I can show you plenty of hardware that is perfectly 64 bit capable but probably never will run Ubuntu and/or Unity.
First, what is 64 bit for you? Looking at ubuntu.com/download and getting images from there, one gets the impression, that 64 bit is amd64 (also called x86_64). If one digs deeper to cdimage.ubuntu.com, one will find non-Intel images too: PowerPC and amrhf. As the PowerPC images are said to boot on G3 and G4 PowerPCs, these are 32 bit. Armhf is 32 bit too (arm64/aarch64 support in Linux is just evolving). So yes, if 64 bit means amd64, I do have hardware that can run Unity.
But you asked if I have hardware that is 64 bit capable and can run Ubuntu/Unity, so may I apply my definiton of 64 bit here? I have an old Sun Netra T1-200 (500MHz UltraSPARC IIe) running Debian’s sparc port, which has a 64 bit kernel and 32 bit userland. Unity? No wai.
Sorry for ranting like this, but 64 bit really just means that the CPU can handle 64 bit big addresses etc. End even then, it not always will do so ;)
We have rolled out an experimental feature to export tours (in Inline Manual terminology Topics) that are compatible with Drupal 8 tour module. You can now use the Authoring tool and Inline Manual infrastructure to manage your tours. These exported tours you can play out of the box in Drupal 8. If you are after more advanced solution, that can be used for your clients, check out the Inline Manual module.
First I must admit: I am a big fan of Panels. I know, I know... something must be wrong with me; It is slow, it is for site builders, not for real programmers, and all kinds of other arguments... I say: blah, blah, blah. I am not here to argue about Blocks vs Panes or Display Suite and Context. Been there, done that (And believe me, we do a lot of Drupal module development)... I just prefer Panels and Panelizer for most use cases; period.
With that out of the way, I can come to the point; Why develop a new module, when there are already good Dashboard modules? The reason is simple. It is not 'only' a dashboard module, users can change the way they see a page. They don't have to be owner of the content, they just drag-n-drop the order in the display and show or hide stuff to their liking. Another user can drag-n-drop the order of the same content to his or her liking. So... how to use it, and what is so cool?How I came to development of this module.
First forget Panelizer for a moment. We start with Panels and the Page manager of ctools. It is standard behaviour of Panels that you can override a node view page, so that you can use Panels to construct the node page.
Suppose there is a node type called 'issue'. That node type holds all issues (dûh). The node type has a lot of fields like detailed description, assignee, project, status, attachments and more... We need all those fields.. they have a purpose, but it also gives us a display nightmare. Different users find different stuff important. Account management only cares about the hours and how fast things are picked up, project leads want a decent way to estimate and relate stuff and the developer need to access all the fine details, including the attachments.
So what I did was overriding the default node view with a Panel and started playing around. Assignee at the top, description lower.. etc, etc. It took me a while to figure out a way to display everything and to have a display that I liked. First my buddy from account management showed up in my office... "Listen, I realy like your hard work (typically a sales guy), but I want to see also all previous issues for this project, or maybe even every issue from the same client, and can client name go on top, together with the reporter?". Luckely I did build it all with Panels, so 15 minutes and some Views magic later, I did my job... Happy sales guy!
Next in line was the Project lead. "Can't we show related issues? Issues might have something in common because of a particular modules or so.." No, is not an answer for a devoted developer, so again some views, search api thingies... and woot! I did have a cool pane with related stuff. Then a developer showed up. "What is all this stuff on my issue pages? It is way to crowded... Just give me clear instructions, don't bother me!" Woops... developers... they have their own way of saying things.. All I understood that something was completely wrong.
It was clear that there never could be an agreement on what field on top, what is important and what not to display on that page. So I developed a module called Panels User Override. The administrator provides a 'sane' layout of the page. He even provides some extra stuff that is not in the normal display. A user then can drag-n-drop the way he/she sees the page. So every user can have the issues his or her way, without me doing anything... fantastic! This module also opend the way for a new frontpage of our issue tracker. It will be (in the near feature) a Panel page, with way more possibilities then our friend 'Dashboard', that only works with 'Blocks'.What this module is not.
It is not Panelizer. It cannot be compared to Panelizer. I see no reason why it cannot work together with Panelizer from a technical perspective, but I cannot come up with a use case. I think you will create a nightmare for your users. You then can override displays that are already overriden per node... See a whole thread on this topic in the issue tracker of this module.
I already did make the module available it in my sandbox on drupal.org, but it is not ready to promote it to a full project. It might never see the full project status. One of the main problems is the way it does access checks now... it simply doesn't do any access checks. So you can also override Panels, that you have no access to, and drag-n-drop panes that are not available for you. Not a real problem... it does no harm that you can change the order on things you will never see, but it is weird for the user. I do have some ideas how to fix that.
Problem is that I don't have a context on edit. Possible ways to solve this are:
- The edit/override can only be started from a context (e.g. edit THIS layout). Thing is that this doesn't make it clear that you also edit the same display for all other contexts (nodes).
- Leave all intact as it is now and load a default context on edit. But I simply cannot figure out a way... how to do this? I see that the developers of Panels have the same issue with this, because the preview is also loaded without a default context....
Well.. test it, review it, patch it and come with better ideas! But don't depend on it. It might change, dissapear or never be touched again, without notice. So use this custom developed module at your own risk!
Adding color support to a Drupal theme allows site owners/administrators to modify the theme's color scheme directly from a settings page, rather than having to edit any CSS.
I'm currently assessing color integration options for Neptune and have found DesignKit to be a quick, easy, and flexible alternative to direct color module itegration (as used by Garland and Bartik). On the flip side, there are a couple of caveats (see below).
This tutorial will focus on implementing basic color configuration, although DesignKit also supports image configuration and more advanced color configuration (including color blending/shifting).
The purpose of a backup is to allow you to recover from a disaster with reasonable cost and effort. If you delete a file you shouldn't have, or make changes that you shouldn't have, backups are meant to save you from having to re-create the file, or undo a large amount of steps.
Speaking very broadly, any copy of your live data is a backup, but this is a uselessly broad definition. For example, if you use an automatic synchronisation system such as Dropbox or git-annex, to keep your live data in sync between two computers, you could pretend they're backups of each other. However, unless the synchronisation also allows you to keep a history of file versions, it's not a very good backup. If you delete your precious file on one computer, and it gets then deleted on the other computer as well, automatically, perhaps in seconds, then the backup is not of much use.
Another common assumption is that a RAID array works as a backup. RAID is an excellent technology that allows you to combine several hard disks so that they protect you against loss of data in case of disk failure. If one disk fails, the others contain enough information to re-create the data on the failed disk, using either full copies (RAID-1) or error correction codes (RAID-5, RAID-6). But RAID is not a backup. It doesn't protect you against accidental file deletions. There is also no backup history.
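The "enough information to re-create the data" part rests on simple arithmetic. In a RAID-5 style scheme, the parity block is the XOR of the data blocks, so XOR-ing the survivors with the parity reproduces a lost block. A toy sketch (block sizes and disk counts invented for illustration):

```python
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """XOR all data blocks together to form the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def reconstruct(surviving: list[bytes], parity_block: bytes) -> bytes:
    """Recover one lost block: XOR the parity with the surviving blocks."""
    return parity(surviving + [parity_block])

disks = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(disks)
# Lose disk 1; the other disks plus the parity block give it back.
assert reconstruct([disks[0], disks[2]], p) == disks[1]
```

Note what this recovers from: a dead disk. An `rm -rf` is dutifully written to all disks at once, and there is no history to go back to.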
A version control system is very much like a backup. It stores copies of many of the versions of your project. However, in most version control systems it's fairly easy to make changes that lose history. Ask anyone who has used git reset to change the tip of the master branch to undo a wrong commit or merge, and then accidentally force-pushed that to the server. This is arguably a normal, if uncommon, use of the version control system. A good backup system will protect you from your own mistakes, when you do the kinds of things you're expected to do. Version control systems also rarely capture all your data.
When you were five, and made some stuff on the family computer, and saved it on a floppy, and then drew a cute little picture of yourself on the floppy to make it clear to everyone it was your floppy, and not anyone else's, certainly not your bully of a brother's, and your mother kept the floppy for decades because of the cute picture, then that is also not a backup. You didn't even know your Mom had kept it.
A reasonable backup is one from which you can restore a working copy of your data, when you need to, without too much effort or expense, compared to the disaster you're experiencing. If the disaster is that you deleted a one-page draft outline of the book you want to write someday, the disaster is not very severe. The cost of restoring should be low.
If the disaster is that your plans to become the supreme emperor of the world, and make all people your slaves, are in a spreadsheet on your laptop, and your minions accidentally drove a car over your laptop, and you had accidentally not used a Thinkpad as your laptop, the disaster is quite severe. Unless you recover the spreadsheet, you'll never be able to tell apart the buttons to launch the Moon rocket, to self-destruct your HQ, and to switch channels on your TV, and all your work will be in vain, and you'll never, ever, ever convince the pretty girl with red hair living in the house opposite that she should be interested in you. Also, you'll never be able to move away from your parents' house. So, quite severe. It will be acceptable to go to quite some effort and expense to recover that spreadsheet. It's better if you don't need to, but you will, if you have to.
Your backup should also be reasonably up to date. Backing up every Christmas is a fine family tradition, but if you don't also make a backup on Easter, Midsummer, and Aunt Agatha's birthday (sometime in September, was it, or maybe October), you'll risk losing a whole year's worth of work. A year is a long time, and you might never be able to re-do all the work.
Personally, I back up my personal laptop every day to a file server at home, and less often to an online backup server. My work laptop gets backed up once an hour to the company file server, which gets backed up to two backup servers about once a day.
You need to balance the risk of losing data and work against the expense and effort of backing up your data. How much is a day's work worth to you, or your employer? How much does a backup system cost?
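That trade-off can be made concrete with some back-of-the-envelope numbers (all of them invented for illustration): on average a disaster strikes halfway through a backup interval, so you lose about half an interval's worth of work each time.

```python
def expected_yearly_loss(backup_interval_days: float,
                         disasters_per_year: float,
                         hourly_rate: float,
                         work_hours_per_day: float = 8.0) -> float:
    """Rough expected cost of lost work per year, assuming a
    disaster destroys, on average, half a backup interval's work."""
    hours_lost_per_disaster = (backup_interval_days / 2) * work_hours_per_day
    return disasters_per_year * hours_lost_per_disaster * hourly_rate

# One disaster a year, work valued at $50/hour:
daily = expected_yearly_loss(1, 1, 50)     # half a day lost -> $200
yearly = expected_yearly_loss(365, 1, 50)  # half a year lost -> $73,000
```

If a daily backup system costs less than the difference between those two numbers, it pays for itself; the same arithmetic works for hourly backups of a busy work machine.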
In the next episode, I'll ponder how many backups are enough.