Feed aggregator
We had a great discussion about how different companies and individuals are using Ansible for Drupal infrastructure management and deployments at DrupalCon LA, and I wanted to post some slides from my (short) intro to Ansible presentation here, as well as a few notes from the presentation.
The slides are below:

Notes from the BoF
I first gave an overview of the basics of Ansible, demonstrating some ad-hoc commands on my Raspberry Pi Dramble (a cluster of six Raspberry Pi 2 computers running Drupal 8), then we dove headfirst into a great conversation about Ansible and Drupal.
Live (almost) from Los Angeles, Ryan, Ted, and Mike are joined by a few familiar voices for a quick recap of day 1 of DrupalCon. We talk highlights, songs from the prenote, special Drupal moments, and Ryan interviews Rob Loach from Kalamuna about Kalabox 2.0.
I followed my own advice from my DrupalCon for n00bs post, and took part in some great sessions, checked out a BoF, ran a BoF of my own, and hung out a bit in the Exhibition Hall. I loved finding out about Symfony and Drupal 8 and am super excited about what the combination has to offer. I was pleasantly surprised by the attendance in my BoF. And I picked up some great swag in the Exhibition Hall and had some nice conversations.
Let's just hope that they get the lunch situation sorted out better today!
I've still got it!
Recovered floppy disks
My first computer was an Amiga A500, and my brother and I spent a fair chunk of our childhoods creating things with it. Those creations are locked away on 3.5" floppy disks, which were lost a long time ago.
A few weeks ago my dad found them in a box in his loft, so a disk-reading project is now on the horizon! Step one is to catalogue what we've got, which I've done here. Step two is to check which, if any, of these are not already in circulation amongst archivists. Thanks to Matthew Garrett for pointing me at the Software Preservation Society, which is a good first place to check.
When we get to the reading step, there are quite a few approaches I could take. Which one to use depends to some extent on which disks we need to read, and whether they employ any custom sector layout or other copy protection schemes. I think the easiest method using equipment I already have is probably Amiga Explorer and a null-modem cable, as this approach will work on an A500 with Workbench 1.3.
There are a variety of hardware tools and projects for reading Amiga floppies on a PC, but the most interesting one to me is DiscFerret, which is open hardware and software.
We're proud to say DrupalCamp Bristol 2015 is taking shape nicely; tickets are selling well, sessions are being submitted and we already have a number of Sponsors on board!
With just under 2 months to go, we're now keen to get more sessions submitted and the remaining sponsor spaces filled up. If you wish to propose a session for the Business day on Friday 3rd July, please get in touch with Rick Donohoe. If you wish to submit a session for the Saturday conference day, please use the form on the website to submit your idea.
Looking to Sponsor the event? Get in touch here.
Lastly, thanks to everyone who has purchased a ticket so far and a big thanks to all the committee members for their hard work to date.

Rick Donohoe
As a web development firm scales, it will inevitably run into a complicated dilemma: whether or not to productize its services. Particularly with a complex and labor-intensive content management system like Drupal, turning away from a business model built around providing specialized service to each individual client becomes increasingly logical as a web development firm expands and builds a reputation.
But, as you may have already noted, this comes at a cost—productizing your Drupal services means that each client ostensibly receives less individual attention from its web vendor. However, if done correctly, productizing your Drupal services will not only improve a web development firm's services, but will streamline the design and development process and ensure that clients consistently receive excellent customer service, support, and, ultimately, superior web products.
So contemplating productizing your Drupal services doesn't need to run hand in hand with your company having an existential crisis. Here's why:
Productizing your web services means that you'll only need to develop a platform once, instead of starting from scratch each time you take on a project. It also means that a development firm only needs to maintain one system instead of several, which allows software engineers and customer support managers to focus all their energy on that particular system, thereby improving the overall experience for all of a firm's clients collectively.
So in short, what does productizing your web development services achieve? It reduces the total cost of ownership; reduces operational costs for the vendor; makes higher quality standards easier to attain; and finally, delivers a dependably high-quality product to clients.
Check out this presentation assembled by Vardot's CEO, Mohammed Razem, which outlines the product vs. service debate by breaking down the phases of developing a website using Drupal. It details how consolidating these practices into a streamlined product provides a more refined development and implementation process, and how it allows a team to produce more consistent products by cutting out the unnecessary and time-consuming aspects of a service-focused model, ultimately leading to better overall results.
This article is a note to self, so that I don't forget the specifics again. The last time I did the same setup, it wasn't very important in terms of security. gitolite(3) + gitweb can give you an impressive git tool with very simple user ACLs. After you set up gitolite, ensure that the umask value in gitolite is appropriate, i.e. that the gitolite group has r-x privileges. This is needed for the web view. Add your Apache user to the gitolite group. With the umask change and the group association, Apache's user will now be able to read gitolite repos.
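The effect of the umask change is easy to demonstrate in isolation. The snippet below is an illustration, not gitolite itself: with a umask of 0027, newly created directories come out mode 750 (rwxr-x---), which is exactly what lets group members such as www-data read the repositories.

```shell
# Illustration of why umask 0027 makes repos group-readable:
# directories default to mode 777, masked by 027 -> 750 (rwxr-x---).
umask 0027
mkdir demo-repo.git
stat -c '%a' demo-repo.git   # prints: 750
```

In gitolite3 the umask is typically set in ~/.gitolite.rc (e.g. `UMASK => 0027,`), and something like `usermod -a -G gitolite3 www-data` would add Apache's user to the group; the `gitolite3` and `www-data` names follow the transcript below and may differ on your system.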
Now, imagine a repo setting like the following:

repo virtualbox
    RW+ = admin
    R   = gitweb
This allows 'R'ead access for gitweb. But by Unix permissions, even www-data now has 'r-x' on all the repositories (the ones created after the umask change):

rrs@chutzpah:~$ sudo ls -l /var/lib/gitolite3/repositories/
[sudo] password for rrs:
total 20
drwxr-x--- 7 gitolite3 gitolite3 4096 May 12 17:13 foo.git
drwx------ 8 gitolite3 gitolite3 4096 May 13 12:06 gitolite-admin.git
drwxr-x--- 7 gitolite3 gitolite3 4096 May 13 12:06 linux.git
drwx------ 7 gitolite3 gitolite3 4096 May 12 16:38 testing.git
drwxr-x--- 7 gitolite3 gitolite3 4096 May 12 17:20 virtualbox.git
But only www-data; no other users, because the 'other' permission bits grant nothing. And gitolite's own ACL is still enforced on top, as the following shows:

test@chutzpah:~$ git clone gitolite3@chutzpah:virtualbox
Cloning into 'virtualbox'...
Enter passphrase for key '/home/test/.ssh/id_rsa':
FATAL: R any virtualbox test DENIED by fallthru
(or you mis-spelled the reponame)
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
(This came up in a mailing-list discussion.)
When converting from sysvinit scripts to systemd init files, the default practice seems to be to start services without forking, and to use Type=simple in the service description.
What Type=simple does is, well, simple. From systemd.service(5):
If set to simple (the default value if neither Type= nor BusName= are specified), it is expected that the process configured with ExecStart= is the main process of the service. In this mode, if the process offers functionality to other processes on the system, its communication channels should be installed before the daemon is started up (e.g. sockets set up by systemd, via socket activation), as systemd will immediately proceed starting follow-up units.
In other words, systemd just runs the command described in ExecStart=, and it’s done: it considers the service started.
Unfortunately, this causes a regression compared to the sysvinit behaviour, as described in #778913: if there’s a configuration error, the process will start and exit almost immediately. But from systemd’s point-of-view, the service will have been started successfully, and the error only shows in the logs:

root@debian:~# systemctl start ssh
root@debian:~# echo $?
0
root@debian:~# systemctl status ssh
● ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled)
   Active: failed (Result: start-limit) since mer. 2015-05-13 09:32:16 CEST; 7s ago
  Process: 2522 ExecStart=/usr/sbin/sshd -D $SSHD_OPTS (code=exited, status=255)
 Main PID: 2522 (code=exited, status=255)
mai 13 09:32:16 debian systemd: ssh.service: main process exited, code=exited, status=255/n/a
mai 13 09:32:16 debian systemd: Unit ssh.service entered failed state.
mai 13 09:32:16 debian systemd: ssh.service start request repeated too quickly, refusing to start.
mai 13 09:32:16 debian systemd: Failed to start OpenBSD Secure Shell server.
mai 13 09:32:16 debian systemd: Unit ssh.service entered failed state.
With sysvinit, this error is detected before the fork(), so it shows during startup:

root@debian:~# service ssh start
[....] Starting OpenBSD Secure Shell server: sshd
/etc/ssh/sshd_config: line 4: Bad configuration option: blah
/etc/ssh/sshd_config: terminating, 1 bad configuration options
 failed!
root@debian:~#
It’s not trivial to fix that. The implicit behaviour of sysvinit is that fork() sort-of signals the end of service initialization. The systemd way to do that would be to use Type=notify, and have the service signal that it’s ready using systemd-notify(1) or sd_notify(3) (or to use socket activation, but that’s another story). However, that requires changes to the service. Returning to the sysvinit behaviour by using Type=forking would help, but is not really a solution either: what if some of the initialization happens *after* the fork? This is actually the case for sshd, where the socket is bound after the fork (see strace -f -e trace=process,network /usr/sbin/sshd), so if another process were listening on port 22 and preventing sshd from starting successfully, it would not be detected.
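For reference, here is a minimal sketch of what a notify-style unit would look like, assuming a hypothetical sshd patched to speak the readiness protocol (the stock daemon does not):

```ini
[Service]
Type=notify
NotifyAccess=main
ExecStart=/usr/sbin/sshd -D $SSHD_OPTS
# The daemon itself would have to call sd_notify(0, "READY=1") only
# after parsing its configuration and binding port 22; systemd would
# then consider the service started at that point, and startup
# failures would be reported synchronously.
```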
I wonder if systemd shouldn’t do more to detect problems during service initialization, as the transition to proper notification using sd_notify will likely take some time. A possibility would be to wait 100 or 200 ms after the start to ensure that the service doesn’t exit almost immediately. But that’s not really a solution, for several obvious reasons. A more hackish, but still less dirty, solution could be to poll the state of processes inside the cgroup, and assume that the service is started only when all processes are sleeping. Still, that wouldn’t be entirely satisfying…
I tried to install Jessie on a brand-new virtual machine (KVM), but hit a problem with serial console login.

The wheezy installer worked fine because it adds a getty entry to /etc/inittab. Jessie uses systemd, which does not set up a getty service for the serial console; the problem is reported as #769406.

My solution was to invoke “systemctl enable serial-getty@ttyS0.service” via ssh.
After taking the trifecta of Acquia Developer Certification exams (General, Back-end, Front-end) and earning a new black 'Grand Master' sticker, I decided to complete the gauntlet and take the Acquia Certified Drupal Site Builder exam at DrupalCon LA.
I have spent a fair amount of time over the years exploring how to give great answers to generic questions (technical ones included), and I can say without doubt that it is a fairly simple thing to learn. First of all, when answering a question via email or in person, it is very important that people feel that the person answering is present and really "gets" their question. So a personal note at the beginning or at the end is not necessary, but it is a big plus. The core part of the answer should have 3 phases: a straight yes or no answer, a brief explanation of why yes or no, and then an explanation of the alternative solution. It is very important to keep it as precise and simple as possible while explaining everything the person who asked needs.
For example, Joe asks:
Can I install library libfoo1.2 without breaking software foo1.1?
And Jane would answer:
Very good question, as people often try things like that and can end up in a complicated situation.
So the answer is NO.
Pulling in libfoo1.2 would break foo1.1: there were numerous changes since libfoo1.1 that break backward compatibility, and the library was also rewritten and ported to a newer version of the language.
Now, with that out of the way: you can safely pull in foo1.2 as well and install it alongside libfoo1.2; that combination is tested and should work for you without any problems.
And that's it. Lean, clean, cyborg.
Well before "DevOps" was a thing, and long before DevShop existed, there was "CI". Continuous Integration is a critical part of successful software development. As a web CMS, Drupal has lagged a bit behind in joining this world of CI.
One of the reasons that we don't see CI being used as much as we'd like is that it is hard to set up, and even harder to maintain long term. Wiring up your version control systems to your servers and running tests on a continual basis takes some serious knowledge and experience. There are a lot of tools that try to make it easier, like Jenkins, but there is still a lot of setup and jerry-rigging needed to make everything flow smoothly from git push to test runs to QA analysis and acceptance.
Setup is one thing, keeping things running smoothly is another entirely. With a home-spun continuous integration system, the system operators are usually the only ones who know how it works. This can become a real challenge as people move to new jobs or have other responsibilities.
This is one of the reasons why we created DevShop: to make it ridiculously easy to set up and keep up a CI environment. DevShop's mission is to have everything you need out of the box, with as little setup, or at least as simple a setup process, as possible.

Continuous Deployment
DevShop has always had Continuous Deployment: when a developer pushes code to the version control repository, it is deployed to any environment configured to follow that branch. This is usually done on the main development branch, typically called master, and the environment is typically called dev.
However, for the last few years, DevShop has had the ability to host unlimited "branch environments". This means that individual developers can work on their code on separate branches, and each one can get its own copy of the site up and running on that branch. This reduces the chance of conflicts between multiple developers and helps reduce the time needed to debug problems in the code, because if you find a problem, you know which branch it came from.
We've found that having branch environments is a much more productive way to code than a shared dev environment on the master branch.

Pull Request Environments
Since last year, DevShop has been able to react to GitHub Pull Requests by creating a new environment for each one. Each project can be configured to either clone an environment or run a fresh install every time a Pull Request is created. It will even tear down the environment when the Pull Request is closed.
Developers have less management to do using Pull Request environments: They don't need access to DevShop at all. Everything is fully automated.
This method is even better than setting up branch environments, since Pull Requests are more than just code: anyone on the team can comment on every Pull Request, offering advice or asking questions about the proposed code. Users can even comment on individual lines of code, making the peer review process smoother than ever by letting teams communicate directly on the code.

Continuous Testing
Recently we've added built-in Behat testing to DevShop: when a "Test" task is triggered, the logs are saved to the server and displayed to users through the web interface in real time. The pass or fail result is then displayed in the environment user interface as green or red, respectively, along with the full test results, with each step highlighted as Pass, Fail, or Skipped.
This gives developers instant feedback on the state of their code, and, because it is running in an online environment, others can review the site and the test results along with them.
The future of DevShop testing is to incorporate even more tests, like PHPUnit, CodeSniffer, and PHP Mess Detector. Having a common interface for all of these tests will help teams detect problems early and efficiently.

Continuous Integration
Continuous Integration can mean different things to different people. In this context I am referring to the full integration of version control, environments, tests, and notifications to users. By tying all of these things together, you can close the feedback loop and accelerate software development dramatically.
GitHub, the most popular git host in the world, got to be that way in part by providing an extremely robust API that can be used to set up continuous integration systems. Each repository can have "post-receive" webhooks set up that will notify various systems that new code is available.
The "Deployments" API allows your systems to notify GitHub (and other systems) that code was deployed to certain environments. The "Commit Status" API can be used to flag individual commits with a Pass, Fail, or Pending status. This shows up in the web interface as a green check, a red X, or an orange circle, both on the commit and on each open Pull Request in the repository. A failing commit will notify developers that they should "Merge with Caution", making it much less likely that code reviewers will merge bad code.
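To make the Commit Status API concrete, here is a hedged sketch of what such a call looks like. The OWNER/REPO/SHA placeholders, the token variable, and the "devshop/test" context are illustrative assumptions, not DevShop's actual values:

```shell
# Build the JSON payload for a commit status (state is one of:
# pending, success, error, failure).
STATE=success
PAYLOAD=$(printf '{"state":"%s","description":"Behat tests","context":"devshop/test"}' "$STATE")
echo "$PAYLOAD"

# The actual API call (commented out here; needs a real token and SHA):
# curl -H "Authorization: token $GITHUB_TOKEN" \
#      -d "$PAYLOAD" \
#      https://api.github.com/repos/OWNER/REPO/statuses/SHA
```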
DevShop now leverages both the Deployments and Commit Status APIs for Pull Request environments.
Deployments are listed right in line with the commit logs of a pull request, and give the team direct links to the environments created by devshop.
Commit Statuses display not only a pass or fail status, but also link directly to test results, giving developers the instant feedback needed to respond quickly to issues.

Continuous Notification
An important part of any CI system is the notifications. Without them, developers and their teams don't know that there is anything for them to do. GitHub has great integration with just about any chat service, and now so does DevShop.
When your CI system is integrated with a chat service, the entire team gets visibility into the progress and status of the project. New commits pushed notify others that there is new work to pull, and test notifications alert everyone to the passing or failing of those code pushes. Having immediate feedback on these types of things is crucial for maintaining a speedy development pace.

Continuous Delivery
With all of the pieces in place, you can start to think about Continuous Delivery. Continuous Delivery is the act of "going live" with your code on a continuous basis, or at least very often.
DevShop allows your live environment to track a branch or a tag. If tracking a tag, you must manually go in and deploy a new tag when you are ready to release your code.
If tracking a branch, however, your code will deploy as soon as it is pushed to that branch. Deploy hooks ensure all of the appropriate things run after the code drop, such as update.php and cache clearing. This is what makes continuous delivery possible.
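The deploy-hook idea can be sketched as a tiny script. The `@site` alias and the exact commands are assumptions from the Drush 6/7 era, not DevShop's literal hook:

```shell
# Write a hypothetical post-deploy hook: database updates
# (the update.php equivalent), then a full cache clear.
cat > deploy-hook.sh <<'EOF'
#!/bin/sh
set -e
drush @site updatedb -y
drush @site cache-clear all
EOF
chmod +x deploy-hook.sh

# Syntax-check the hook without executing it.
sh -n deploy-hook.sh && echo "deploy hook syntax OK"   # prints: deploy hook syntax OK
```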
If using Pull Requests for development, you can use GitHub's Merge button to deploy to live if you setup your production environment to track your main branch. With the Commit Status and Deployment APIs, you can be sure that not only did the tests pass but the site looks good with the new code.
Merging to deploy to live is a great way to work. You no longer need to access devshop to deploy a new tagged release, and you no longer need to manually merge code, hoping that everything works.
If it's green, it's working. If your tests are robust enough, you can guarantee that the new code works.
If your tests are so complete that you start to reach 100% code coverage, you can start thinking about true continuous delivery: Tests Pass? Automatically merge to live and deploy. This requires that your tests are amazing and reliable, but it is possible.
DevShop doesn't have the ability out of the box to setup true continuous delivery, but it would not take too much work. You could use the hosting task status hook to fire off a merge using GitHub's API.
As more people start to use Continuous Delivery, we can work on incorporating that into the devshop process.

All wrapped up in a Bow
With DevShop, you will spend (much) less time on your support systems so you can focus on your websites. We hope to continue to find ways to improve the development process and incorporate them directly into the platform.
We encourage everyone to embrace continuous integration principles on their projects, whether it is using DevShop or not. Not only does efficiency go up, but code quality and morale do too.
If you are a software developer, having tests in place will change your life. Everyone involved in the project, from clients to quality assurance folks to the dev team, will sleep better at night.
The next beta release for Drupal 8 will be beta 11! (Read more about beta releases.) The beta is scheduled for Wednesday, May 27, 2015.
To ensure a reliable release window for the beta, there will be a Drupal 8 commit freeze from 00:00 to 23:30 UTC on May 27.
Since I had a long rant about Lenovo customer service a while back:
My laptop died again during travels; at first, it was really unstable (whenever I'd hold it slightly wrong, it would instantly crash), then later, it would plain refuse to boot (not even anything on the display).
So I called Lenovo, and after some navigating of phone menus I got to someone who took my details, checked my warranty (“let's see, you have warranty until 2018”—no months of arguing this time!), opened a case, and sent me on to tech support. Tech support said most likely the motherboard was broken, and that a technician would call me; today, the tech called, and arrived at work to swap my motherboard. 30 minutes on the phone, 20 minutes waiting for the technician to switch the motherboard (I would probably have used more than an hour myself). And voila, working laptop. (Hope it's stable from now on.)
My only gripe is that I forgot to remind him after the repair to give me new rubber feet—he'd already said it wouldn't be a problem, but we both forgot about it. But overall, this is exactly how it should be—quite unlike last time.
Yesterday, shortly after the sun sprang up and sparked southern California’s beautiful coastline, the doors of LA’s Convention Center opened, welcoming a first wave of eager Drupalistas and surrounding them with its air-conditioned walls. For the days that follow, it will continue to receive and house them, transforming itself into the home of DrupalCon 2015.
To some of us, this day began just like the two previous iterations, with the Drupal 8 Training for Drupalistas. And like the ones before, the turnout was again lovely. A fresh batch of ambitious new students took their seats and embarked on a cinematic journey, led by our resident training director, Diana Montalion.
From lively exchanges of know-how to focused, almost silent moments, the classroom experienced a day of captivating performances. In between, pupils were given hourly breaks to take a breath and pick from a variety of delicious beverages (there were cookies!) and the oh-so-essential, mandatory coffee.
After a wholesome feast around noon-ish, the reins were passed on to Jason, who expertly guided the keen students through the inner workings of Drupal 8’s translation system. Shortly thereafter, Kathryn took over to introduce them to the beautiful side of Drupal 8 (theming!) before finally ending again within Diana’s experienced hands.
Meanwhile, the lower floor saw a much more hands-on development: the large exhibition hall, empty at first, slowly built up into a small but respectable miniature city of brands, replacing the muffled sounds of classroom keyboards with the repeated cracks of rising booths, broken only by the occasional clamor of busy staff members.
Thus the day drew to an end, and our students were being led on their way, charged with knowledge and filled with cookies. The building emptied its walls again to prepare for tomorrow, when at dawn, the drupalistas will rise again.
The first line of the description of the Drupal Theme Developer module says that it is "Firebug for Drupal theming". I couldn't agree more. This is the ultimate tool when you need to find out which theme hook or which template file to modify based on your design and layout needs.
It is a finicky module, though: it doesn't work with the latest version of one of its dependencies, the SimpleHTMLDOM API, and when turned on it can break your layout.
Note that this module injects markers into the DOM to do its magic. This may cause some themes to behave erratically, and less capable browsers may make it worse (especially IE). Enable it when needed, and disable it afterwards.
To ensure I install the correct branch of the modules, and to speed up enabling and disabling, I set up some aliases in my local/development computer's .bash_profile:
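The aliases themselves are not shown in this excerpt; a hypothetical pair along those lines (assuming drush is installed and using the devel_themer project machine name) might be:

```shell
# Hypothetical ~/.bash_profile aliases; 'dton'/'dtoff' are invented names.
alias dton='drush -y en devel_themer'    # enable Theme Developer
alias dtoff='drush -y dis devel_themer'  # disable it again when done
```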
If you haven't heard yet, we announced earlier that Moldcamp 2015 is on its way! It's the second edition of the DrupalCamp hosted by a small country in Eastern Europe: Moldova.
Running XMPP over TLS is a good idea. So I need an X.509 PKI for this purpose. I don’t want to use a third-party Certificate Authority, since that would give them the ability to man-in-the-middle my XMPP connection. Therefore I want to create my own CA. I prefer tightly scoped (per-purpose or per-application) CAs, so I will set up a CA purely to issue certificates for my XMPP server.
One complication is the requirement to include an AIA for OCSP/CRLs — fortunately, it is not a strict “MUST” requirement but a weaker “SHOULD”. I note that checking revocation using OCSP and CRL is a “MUST” requirement for certificate validation — some specification language impedance mismatch at work there.
The specification demands that the CA certificate MUST have a keyUsage extension with the digitalSignature bit set. This feels odd to me, and I’m wondering if keyCertSign was intended instead. Nothing in the XMPP document, nor in any PKIX document as far as I am aware, will verify that the digitalSignature bit is asserted in a CA certificate. Below I will assert both bits, since a CA needs the keyCertSign bit, and the digitalSignature bit seems unnecessary but mostly harmless.
My XMPP/Jabber server will be “chat.sjd.se” and my JID will be “firstname.lastname@example.org”. This means the server certificate needs to include references to both these domains. The relevant DNS records for the “josefsson.org” zone are as follows; see section 3.2.1 of RFC 6120 for more background.

_xmpp-client._tcp.josefsson.org. IN SRV 5 0 5222 chat.sjd.se.
_xmpp-server._tcp.josefsson.org. IN SRV 5 0 5269 chat.sjd.se.
The DNS records for the “sjd.se” zone are as follows:

chat.sjd.se. IN A ...
chat.sjd.se. IN AAAA ...
The following commands will generate the private key and certificate for the CA. In a production environment, you would keep the CA private key in a protected offline environment. I’m asserting an expiration date ~30 years in the future. While I dislike arbitrary limits, I believe this will be many times longer than the anticipated lifetime of this setup.

openssl genrsa -out josefsson-org-xmpp-ca-key.pem 3744
cat > josefsson-org-xmpp-ca-crt.conf << EOF
[ req ]
x509_extensions = v3_ca
distinguished_name = req_distinguished_name
prompt = no
[ req_distinguished_name ]
CN=XMPP CA for josefsson.org
[ v3_ca ]
subjectKeyIdentifier=hash
basicConstraints = CA:true
keyUsage=critical, digitalSignature, keyCertSign
EOF
openssl req -x509 -set_serial 1 -new -days 11147 -sha256 -config josefsson-org-xmpp-ca-crt.conf -key josefsson-org-xmpp-ca-key.pem -out josefsson-org-xmpp-ca-crt.pem
Let’s generate the private key and server certificate for the XMPP server. The wiki page on XMPP certificates is outdated wrt PKIX extensions. I will embed a SRV-ID field, as discussed in RFC 6120 section 22.214.171.124.1 and RFC 4985. I chose to skip the XmppAddr identifier type, even though the specification is somewhat unclear about it: section 126.96.36.199.1 says that it “is no longer encouraged in certificates issued by certification authorities” while section 188.8.131.52 says “Use of the ‘id-on-xmppAddr’ format is RECOMMENDED in the generation of certificates”. The latter quote should probably have been qualified to say “client certificates” rather than “certificates”, since the latter can refer to both client and server certificates.
Note the use of a default expiration time of one month: I believe in frequent renewal of entity certificates, rather than use of revocation mechanisms.

openssl genrsa -out josefsson-org-xmpp-server-key.pem 3744
cat > josefsson-org-xmpp-server-csr.conf << EOF
[ req ]
distinguished_name = req_distinguished_name
prompt = no
[ req_distinguished_name ]
CN=XMPP server for josefsson.org
EOF
openssl req -sha256 -new -config josefsson-org-xmpp-server-csr.conf -key josefsson-org-xmpp-server-key.pem -nodes -out josefsson-org-xmpp-server-csr.pem
cat > josefsson-org-xmpp-server-crt.conf << EOF
subjectAltName=@san
[san]
DNS=chat.sjd.se
otherName.0=1.3.6.1.5.5.7.8.7;UTF8:_xmpp-server.josefsson.org
otherName.1=1.3.6.1.5.5.7.8.7;UTF8:_xmpp-client.josefsson.org
EOF
openssl x509 -sha256 -CA josefsson-org-xmpp-ca-crt.pem -CAkey josefsson-org-xmpp-ca-key.pem -set_serial 2 -req -in josefsson-org-xmpp-server-csr.pem -out josefsson-org-xmpp-server-crt.pem -extfile josefsson-org-xmpp-server-crt.conf
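A quick way to sanity-check a CA/server chain like the one above is openssl verify. The snippet below builds a throwaway chain for illustration (shorter keys, generic names, and no XMPP extensions — all assumptions, not the real setup):

```shell
# Throwaway self-signed CA and a server cert signed by it.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Test XMPP CA" \
    -keyout ca-key.pem -out ca-crt.pem -days 1
openssl req -new -newkey rsa:2048 -nodes -subj "/CN=chat.example.org" \
    -keyout srv-key.pem -out srv-csr.pem
openssl x509 -req -in srv-csr.pem -CA ca-crt.pem -CAkey ca-key.pem \
    -set_serial 2 -days 1 -out srv-crt.pem

# Verify the server cert against the CA trust anchor.
openssl verify -CAfile ca-crt.pem srv-crt.pem
```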
With this setup, my XMPP server can be tested by the XMPP IM Observatory. You can see the c2s test results and the s2s test results. Of course, there are warnings regarding the trust anchor issue. It complains about a self-signed certificate in the chain. This is permitted but not recommended — however when the trust anchor is not widely known, I find it useful to include it. This allows people to have a mechanism of fetching the trust anchor certificate should they want to. Some weaker cipher suites trigger warnings, which is more of a jabberd2 configuration issue and/or a concern with jabberd2 defaults.
My jabberd2 configuration is simple — in c2s.xml I add an <id> entity with the “require-starttls”, “cachain”, and “pemfile” fields. In s2s.xml, I have the <pemfile>, <resolve-ipv6>, and <require-tls> entities.
Some final words are in order. While this setup will result in use of TLS for XMPP connections (c2s and s2s), other servers are unlikely to find my CA trust anchor, let alone be able to trust it for verifying my server certificate. I’m happy to read about Peter Saint-Andre’s recent SSL/TLS work, and in particular I will follow the POSH effort.
With DrupalCon Los Angeles underway we thought it might be a good time to introduce (or reintroduce) folks to [Dreditor](https://dreditor.org/) (short for "Drupal editor"). Dreditor is a collection of user scripts, which alter browser behavior on specific pages on the drupal.org domain. The features of dreditor are mostly helpful in the issue queue and during the patch review process.
We have a video, [Installing and using Dreditor](https://drupalize.me/videos/installing-and-using-dreditor), if you'd like to follow along, but since that video was recorded, installing Dreditor has become even easier. Let's take a look at the changes, and at how we can use this powerful tool to make interacting with the issue queue easier.
When setting up a MySQL Server there are a lot of things to consider. Most requirements depend on the intended usage of the system.