In a series of blog posts I am going to present our new tool for doing Drupal deploys. It was developed internally by the ops team at Wunderkraut Sweden: when we started doing Drupal 8 deploys we tried to rethink how we had mostly done Drupal deploys before, because we had some issues with our existing setup. This is part 2.
The idea with dropcat is that you use it with options and/or with configuration files. I would recommend using it with config files, passing only minor settings as options.
You could use just a default settings file, which should be named dropcat.yml, or – as in most cases – one config file for each environment you have: dev, stage, prod etc.
You can use an environment variable, DROPCAT_ENV, to set which environment to use. To use the prod environment you set that variable in the terminal with:
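In a POSIX shell that looks like this:

```shell
# Select the prod environment for all subsequent dropcat commands
# in this shell session; dropcat will then read dropcat.prod.yml.
export DROPCAT_ENV=prod
```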
Normally we set this environment variable in our Jenkins build, but you could also pass it as a parameter to dropcat, like:
dropcat backup --env=prod
That will use the dropcat.prod.yml file.
By default, dropcat uses dropcat.yml if you don't set an environment.
More on this will come in the next blog posts, but first let us look at a minimal config file. In our root dir we could have a dropcat.yml file with this config:

app_name: mysite
local:
  environment:
    tmp_path: /tmp
    seperator: _
    drush_folder: /home/myuser/.drush
remote:
  environment:
    server: mytarget.server.com
    ssh_user: myuser
    ssh_port: 22
    identity_file: /home/myuser/.ssh/id_rsa
    web_root: /var/www/webroot
    temp_folder: /tmp
    alias: mysite_latest_stage
site:
  environment:
    drush_alias: mysitestage
    backup_path: /backup
    original_path: /srv/www/shared/mysite_stage/files
    symlink: /srv/www/mysite_latest_stage/web/sites/default/files
    url: http://mysite.com
    name: mysitestage
mysql:
  environment:
    host: mymysql.host.com
    database: my_db
    user: my_db_user
    password: my_db_password
    port: 3306
The settings are grouped in a way that should explain what they are used for – local.environment is where we deploy from, remote.environment is where we deploy to, site.environment is for drush and symlinks (we use them for the files folder), and mysql.environment is for… yeah, you guessed correctly – MySQL/MariaDB.

app_name
The application name, used for naming the tar file that is created (together with some more information, like build date and build number).

local
Settings for where we deploy from; it could be locally, or a build server such as Jenkins.

tmp_path
Where we temporarily store stuff.

seperator
Used as a separator in the name of the folder to deploy, like myapp_DATE.

drush_folder
Where drush settings live on the machine you deploy from, normally in your home folder (for Jenkins normally /var/lib/jenkins/.drush). This is also the path the drush alias is saved to on dropcat prepare.

remote

server
The server you deploy your code to.

ssh_user
User to use for ssh to your remote server.

ssh_port
Port to use for ssh to your server.

identity_file
Which private ssh key to use to log in to your remote server.

web_root
Path your site is going to be deployed to.

temp_folder
Temp folder on the remote server, used for unpacking the tar file.

alias
Symlink alias for your site.

site

drush_alias
Name of your drush alias, used from the 'local' server. The drush alias is created as part of dropcat prepare.

backup_path
Backup path on the 'local' server. Used by dropcat backup.

original_path
Existing path to point a symlink to – we use it for the files folder.

symlink
Symlink path that points to original_path.

url
URL for your site, used in the drush alias.

name
Name of the site in the drush alias.

mysql

host
Name of the db host.

database
Database to use.

user
Database user.

password
Password for the db user on the host.

port
Port to use with MySQL.
We are still at a very abstract level; next time we will go through what is needed in a normal Jenkins build.
After getting complaints from apt and from users, I've finally decided to upgrade the signing key on my Debian repository to something more decent than DSA. If you are using that repository, you will now have to fetch the new key to make it work again.
The old DSA key was really there because of my laziness, as I didn't want to make users reimport the key, but I think it's really good that apt started to complain about it (it doesn't complain about DSA itself, but rather about the use of SHA1 signatures, which is the most you can get out of a DSA key).
Anyway, the new key ID is DCE7B04E7C6E3CD9 and the fingerprint is 4732 8C5E CD1A 3840 0419 1F24 DCE7 B04E 7C6E 3CD9. It's signed by my GPG key, so you can verify it that way. Of course, the instructions on my Debian repository page have been updated as well.
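As a sketch of how a user could fetch and check the new key (the keyserver is my assumption – the repository page has the authoritative instructions):

```shell
# Fetch the key by its ID from a public keyserver (keyserver choice is
# an assumption, not from the post):
gpg --keyserver keyserver.ubuntu.com --recv-keys DCE7B04E7C6E3CD9
# Print the fingerprint and compare it against the one published above:
gpg --fingerprint DCE7B04E7C6E3CD9
```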
Today, after many years of hard work by many people, ZFS on Linux finally entered Debian. The package status can be seen on the package tracker for zfs-linux, and on the team status page. If you want to help out, please join us. The source code is available via git on Alioth. It would also be great if you could help out with the dkms package, as it is an important piece of the puzzle to get ZFS working.
In this tutorial or guide, I will share the best solutions I found for two basic Drupal Commerce use-cases and delve into their respective setup.
Commerce Kickstart 2 (CK2) is a great distribution for setting up an online store; it packs a lot of goodies out-of-the-box. But it can't have them all. Printing an order to PDF is not included. So one has to do some R&D for that.
Hey! So you are here on this page trying to find out or learn something about Git! Have you used a source code management system to synchronize your local code with a remote before? Did you know that Git is the most powerful SCM? I was convinced, and yes, it is!
I actually started SCM with SVN (Apache Subversion) – in fact, with TortoiseSVN, a GUI tool for Windows. There are no commands to remember, so there is nothing to worry about: just right-click on your web root folder and choose whichever option you need! Sounds easy?
If you want to go with SVN, you can refer to these links.
I've just received my laptop sticker from the GnuPG crowdfund http://goteo.org/project/gnupg-new-website-and-infrastructure: it is of excellent quality, but comes with HOWTO-like detailed instructions to apply it in the proper way.
This strikes me as oddly appropriate.
The GPG subkey http://www.trueelena.org/about/gpg.html I keep for daily use was going to expire, and this time I decided to create a new one instead of changing the expiration date.
Doing so I found out that gnupg does not support importing just a private subkey for a key it already has (on IRC I've heard that there may be more information about it on the gpg-users mailing list), so I've written a few notes on what I had to do on my website http://www.trueelena.org/computers/howto/gpg_subkeys.html, so that I can remember them next year.
The short version is:
* Create your subkey (in the full keyring, the one with the private master key)
* Export every subkey (including the expired ones, if you want to keep them available), but not the master key
* (Copy the exported key from the offline computer to the online one)
* Delete your private key from your regular-use keyring
* Import back the private keys you exported before
I recently found out that I have access to a 1 TB cloud storage drive from 1&1, so I decided to start taking off-site backups of my $HOME (well, backups at all – previously I only mirrored the latest version from my SSD to an HDD).
I initially tried obnam. Obnam seems like a good tool, but is insanely slow. Unencrypted it can write about 3 MB/s, which is somewhat OK, but even then it can spend hours forgetting generations (1 generation takes probably 2 minutes, and there might be 22 of them). In encrypted mode, the speed reduces a lot, to about 500 KB/s if I recall correctly, which is just unusable.
I found borg backup, a fork of attic. Borg backup achieves speeds of up to 15 MB/s which is really nice. It’s also faster with scanning: I can now run my bihourly backups in about 1 min 30s (usually backs up about 30 to 80 MB – mostly thanks to Chrome I suppose!). And all those speeds are with encryption turned on.
Both borg and obnam use some form of chunks from which they compose files. Obnam stores each chunk in its own file, borg stores multiple chunks (even from different files) in a single pack file which is probably the main reason it is faster.
So how am I backing up? My laptop has an internal SSD and an HDD. I back up every 2 hours (at 09:00, 11:00, 13:00, 15:00, 17:00, 19:00, 21:00, 23:00, and 01:00) using a systemd timer, from the SSD to the HDD. The backup includes all of $HOME except for Downloads, .cache, the trash, the Android SDK, and the Eclipse and IntelliJ IDEA IDEs.
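As a sketch, such a bihourly schedule could be expressed as a systemd user timer like the following (the unit name is hypothetical, not from the post; the matching .service unit would run the borg create command):

```ini
# ~/.config/systemd/user/home-backup.timer (hypothetical name)
[Unit]
Description=Bihourly home backup

[Timer]
# Fire at the odd hours listed above, every day
OnCalendar=*-*-* 01,09,11,13,15,17,19,21,23:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enabled with `systemctl --user enable --now home-backup.timer`, this triggers a `home-backup.service` unit on the same schedule.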
Now the magic comes in: The backup repository on the HDD is monitored by git-annex assistant, which automatically encrypts and uploads any new files in there to my 1&1 WebDAV drive and registers them in a git repository hosted on bitbucket. All files are encrypted and checksummed using SHA256, reducing the chance of the backup being corrupted.
I’m not sure how the WebDAV thing will work once I want to prune things, I suspect it will then delete some pack files and repack things into new files which means it will spend more bandwidth than obnam would. I’d also have to convince git-annex to actually drop anything from the WebDAV remote, but that is not really that much of a concern with 1TB storage space in the next 2 years at least…
I also have an external encrypted HDD which I can take backups on, it currently houses a fuller backup of $HOME that also includes Downloads, the Android SDK, and the IDEs for quicker recovery. Downloads changes a lot, and all of them can be fairly easily re-retrieved from the internet as needed, so there’s not much point in painfully uploading them to a WebDAV backup site.
After finishing Monument Valley and some spin-offs, Google Play suggested the The Room series of games to me (The Room, The Room II, Room III), classical puzzle games with a common theme – one needs to escape from some confinement.
I have finished all three games; gameplay was very nice and smooth on my phone (Nexus 6P). The graphics and level of detail are often astonishing, and everything is well made.
But there is one drop of vermouth: you need a strong finger-tapping muscle! I really love solving the puzzles, but most of them were not really difficult. The real difficulty is finding everything by touching each and every knob and looking from all angles at all times. This latter part – the tedious hunt for things by tapping on strange, often illogical places until you realize "ahh, there is something that turns" – is what I do not like.
I had the feeling that more than 60% of the game play is searching for things. Once you have found them, their use and the actual riddle is mostly straightforward, though.
The Room series somehow reminded me of the Myst series (Myst, Riven, Myst III etc.), but as far as I recall the Myst series had more involved, more complicated riddles, and less searching. Also, the recently reviewed Talos Principle and the Portal series have clearly set problems that challenge your brain, not your finger-tapping muscle.
But all in all a very enjoyable series of games.
Final remark: I learned recently that there are real-world games like this, called "Escape Rooms". Somehow tempting to try one out …
Again, in the interests of timeliness I'll stick to a simple chronological wrap-up of the day. And in the interests of of-course-everyone-cares-what-Mike-eats, I will continue subjecting you to my culinary adventures: breakfast at the Clover Grill in the midst of tourist land (Bourbon Street). Good, basic diner food – eggs over easy with bacon and hash browns. The primary goal here was to make it quick and get to the convention center in time for the prenote (which I have somehow never managed to rouse myself in time for at previous DrupalCons).
And, as always (by reputation), the prenote was an extravaganza hosted by jam. So much energy on the stage, so many songwriters calling their lawyers... Highlights were Gábor unveiling a sweet, soulful voice, and the epic Code of Conduct song (performed 1.5 times, so no excuses for not getting it down).
That brings us to – ta-da! – the DriesNote. As always, a lot of information presented succinctly. I'm sure others will cover many of his points, so I'll focus on my special interest: migration. In Dries' annual survey, site builders identified migration tools as their biggest need for Drupal 8, and he called out the Friday migration sprint.

Sprint all the migrates!
So... let's see how much progress we can make on core migration issues this week! Important things to note:
- You don't have to wait for Friday. The Sprint Lounge (rooms 275-277) is open every day. And, while as usual I checked off many, many sessions I'd like to attend, after sitting in a couple today where (through no fault of the presenters) I was mainly thinking about migration, I'm going to try to spend significant time every day (right up through Sunday morning) sprinting.
- You don't have to be in New Orleans! You can help remotely - drop into the #drupal-migrate IRC channel, or just pick issues from the core queue and dive in on your own.
- You don't have to know the migration framework - there are various ways you can help out (see below).
We already have 10 people officially signed up for migration sprinting (between the core and multilingual lists), so (particularly with more people joining) we can afford to split into multiple sprint teams:
- Backwards-compatibility breakers - try to address any issues that may affect backwards compatibility, so migration implementors will be able to count on a stable API from 8.2.x forward. This was my priority coming in, and you'll find triaged issues on the Sprint triage tab of the Migration sprint spreadsheet.
- I18n issues - penyaskito is already leading a migration sprint in this area - it overlaps with the BC-breakers on the epic Migrate D6 i18n nodes issue.
- Migrate criticals - note that this overlaps some with the BC-breakers (the BC-breaker list has its migrate-criticals listed first), so look for issues not already covered there.
- UI issues - Abhishek Anand, who did some of the work on the UI in contrib, will lead efforts to clean up remaining issues in core. He'll be in the sprint room Wednesday morning, as well as most of the day Friday, and you can also coordinate with him outside of those times (or if you're not here).
- We have a lot of issues at the Needs review stage - let's see how many we can get to RTBC, or give constructive feedback, so we can move forward on stuff like node and user references.
- If you're at DrupalCon NOLA, come to the sprint room (275-277) any time Wednesday-Friday - I'll try to get there early and reserve a table just for migration. There are a couple of sessions I definitely want to catch, but I should be there for most non-lunch time, and there should generally be others there (especially Friday) when I'm not.
- If you're remote, you can announce your presence in #drupal-migrate on IRC. Or just pick an issue to work on.
Either way, please put your name under "Who's working on it" in the spreadsheet so we don't duplicate effort (multiple people can be involved in one patch, but should coordinate).
Ways to help on a specific issue:
- Write a patch (or discuss approaches to a patch) where there is none yet.
- Review an existing "needs review" patch.
- Manually test a "needs review" patch - set up a patched D8 environment and try running your site through the migration process (we'll give some help on setup here).
- Add tests to a patch tagged "Needs tests".
- Help solve any outstanding issues on a "needs work" patch.
- Any other ideas you might have...
mikeryan Tue, 05/10/2016 - 20:33