We already edit /etc/tomcat7/server.xml after installing the tomcat7 Debian package, to get it to talk AJP instead of HTTP (so we can use libapache2-mod-jk to put it behind an Apache 2 httpd, which also terminates SSL):
We already comment out the block…

    <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" URIEncoding="UTF-8" redirectPort="8443" />

… and remove the comment chars around the line…

    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

… so all we need to do is edit that line to make it look like…

    <Connector address="127.0.0.1" port="8009" protocol="AJP/1.3" redirectPort="8443" />
… and we’re all set.
(Your apache2 vhost needs a line

    JkMount /?* ajp13_worker

and everything Just Works™ with the default configuration.)
Now, tomcat7 is only accessible from localhost (Legacy IP), and we don’t need to firewall the AJP (or HTTP/8080) port. Do make sure your Apache 2 access configuration works, though ☺
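For reference, a minimal sketch of such a vhost; the ServerName and certificate paths are placeholders, and ajp13_worker is the worker name from libapache2-mod-jk's default workers.properties:

```apache
<VirtualHost *:443>
    ServerName example.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.org.pem
    SSLCertificateKeyFile /etc/ssl/private/example.org.key
    # Hand every request over to Tomcat via AJP
    JkMount /?* ajp13_worker
</VirtualHost>
```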
Probably many of you are facing the same issue: keeping a consistent UNIX identity across multiple machines. While in an ideal world LDAP would be a perfect choice, leaving LDAP open to the wild Internet is not a great idea.
So, how do we solve this issue while staying secure? The trick is to use the new NSS module for SecurePass.
While SecurePass has traditionally been used in the operating system just for two-factor authentication, the new beta release is capable of holding “extended attributes”, i.e. arbitrary information for each user profile.
We will use SecurePass to authenticate users and store Unix information with this new capability. In detail, we will:
- Use PAM to authenticate the user via RADIUS
- Use the new NSS module for SecurePass to have a consistent UID/GID/….
The next generation of SecurePass (currently in beta) is capable of storing arbitrary data for each profile. This is called “Extended Attributes” (or xattrs) and, as you can imagine, is organized as key/value pairs.
You will need the SecurePass tools to be able to modify users’ extended attributes. The new releases of Debian Jessie and Ubuntu Vivid Vervet have a package for it, just:

    # apt-get install securepass-tools
For other distributions or previous releases, there’s a Python package (via pip) available. Make sure that you have pycurl installed and then:

    # pip install securepass-tools
While the SecurePass tools allow a local configuration file, for this tutorial we highly recommend creating a global /etc/securepass.conf, so that it can also be used by the NSS module. The configuration file looks like:

    [default]
    app_id = xxxxx
    app_secret = xxxx
    endpoint = https://beta.secure-pass.net/
Where app_id and app_secret are valid API keys to access the SecurePass beta.
Through the command line, we will be able to set the UID, GID and all the required Unix attributes for each user:

    # sp-user-xattrs firstname.lastname@example.org set posixuid 1000
While posixuid is the bare minimum attribute to have a Unix login, the following attributes are valid:
- posixuid → UID of the user
- posixgid → GID of the user
- posixhomedir → Home directory
- posixshell → Desired shell
- posixgecos → Gecos (defaults to username)
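Putting it together, provisioning a full Unix identity for one user might look like this (the values here are only examples; posixuid is the bare minimum):

```
# sp-user-xattrs firstname.lastname@example.org set posixuid 1000
# sp-user-xattrs firstname.lastname@example.org set posixgid 100
# sp-user-xattrs firstname.lastname@example.org set posixhomedir /home/firstname
# sp-user-xattrs firstname.lastname@example.org set posixshell /bin/bash
# sp-user-xattrs firstname.lastname@example.org set posixgecos "Firstname Lastname"
```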
As with the tools, Debian Jessie and Ubuntu Vivid Vervet have a native package for SecurePass:

    # apt-get install libnss-securepass
Previous releases of Debian and Ubuntu, as well as CentOS and RHEL, can still run the NSS module. Download the sources from:
Then:

    ./configure
    make
    make install    (Debian/Ubuntu only)
For CentOS/RHEL/Fedora you will need to copy the files into the right place:

    /usr/bin/install -c -o root -g root libnss_sp.so.2 /usr/lib64/libnss_sp.so.2
    ln -sf libnss_sp.so.2 /usr/lib64/libnss_sp.so
The /etc/securepass.conf configuration file should be extended to hold defaults for NSS by creating an [nss] section as follows:

    [nss]
    realm = company.net
    default_gid = 100
    default_home = "/home"
    default_shell = "/bin/bash"
These provide defaults for users whose extended attributes other than posixuid have not been set. We now need to configure the Name Service Switch (NSS) to use SecurePass. We will change /etc/nsswitch.conf by adding “sp” to the passwd entry as follows:

    $ grep sp /etc/nsswitch.conf
    passwd: files sp
Double check that NSS is picking up our new SecurePass configuration by querying the passwd entries as follows:

    $ getent passwd user
    user:x:1000:100:My User:/home/user:/bin/bash

    $ id user
    uid=1000(user) gid=100(users) groups=100(users)
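Since getent resolves through NSS, any NSS-aware program sees the same data. As a quick sanity check, a few lines of Python perform the same lookup (the pwd module queries NSS too); "root" is used here only because it exists on every system, so substitute your SecurePass user:

```python
import pwd

# pwd.getpwnam() resolves a user through the Name Service Switch, so once
# "sp" is listed in /etc/nsswitch.conf this will also find SecurePass users.
entry = pwd.getpwnam("root")  # substitute a SecurePass user, e.g. "user"
print(entry.pw_uid, entry.pw_gid, entry.pw_dir, entry.pw_shell)
```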
Using this setup by itself wouldn’t allow users to log in to a system because the password is missing. We will use SecurePass’ authentication to access the remote machine.

Configure PAM for SecurePass
On Debian/Ubuntu, install the RADIUS PAM module with:

    # apt-get install libpam-radius-auth
If you are using CentOS or RHEL, you need to have the EPEL repository configured. In order to activate EPEL, follow the instructions on http://fedoraproject.org/wiki/EPEL
Be aware that this has not been tested with SELinux enabled (set it to off or permissive).
On CentOS/RHEL, install the RADIUS PAM module with:

    # yum -y install pam_radius
Note: at the time of writing, EPEL 7 is still in beta and does not contain the RADIUS PAM module. A request has been filed through Red Hat’s Bugzilla to include this package in EPEL 7 as well.
Configure SecurePass with your RADIUS device. We only need to set the public IP address of the server, a fully qualified domain name (FQDN), and the secret password for the RADIUS authentication. If the server is behind NAT, specify the public IP address that is translated to it. After completion we get a small recap of the newly created device. For the sake of example, we use “secret” as our secret password.
Configure the RADIUS PAM module accordingly, i.e. open /etc/pam_radius.conf and add the following lines:

    radius1.secure-pass.net secret 3
    radius2.secure-pass.net secret 3
Of course the “secret” is the same one we set up in the SecurePass administration interface. Beyond this point we need to configure PAM to correctly manage the authentication.
In CentOS, open the configuration file /etc/pam.d/password-auth-ac; in Debian/Ubuntu open /etc/pam.d/common-auth, and make sure that pam_radius_auth.so is in the list:

    auth required      pam_env.so
    auth sufficient    pam_radius_auth.so try_first_pass
    auth sufficient    pam_unix.so nullok try_first_pass
    auth requisite     pam_succeed_if.so uid >= 500 quiet
    auth required      pam_deny.so

Conclusions
Handling many distributed Linux systems poses several challenges, from software updates to identity management and central logging. In a cloud scenario, traditional enterprise solutions are not always applicable, but new tools might become very handy.
Two weeks ago I left NYC for a day trip to Norfolk, Virginia, to attend xTupleCon 2014. For those who don’t know, xTuple is a widely renowned open source Enterprise Resource Planning (ERP) software. If you’ve been following this blog, you might recall that during my participation in the Google Summer of Code 2014, I wrote a beta JSCommunicator extension for xTuple (to see how I went about doing that, look up Kickstarting the JSCommunicator-xTuple extension).
Now, I wasn’t flying down to Virginia on the eve of my first grad school midterms (gasp!) for fun - although I will admit, I enjoyed myself a fair amount. My GSoC mentor, Daniel Pocock, was giving a talk about WebRTC and JSCommunicator at xTupleCon and invited me to participate. So that was the first perk of going to xTupleCon: I finally got to meet my GSoC mentor in person!
During the presentation, Daniel provided a high-level explanation of WebRTC and how it works. WebRTC (Web Real Time Communications) enables real time transmission of audio, video and data streams through browser-to-browser applications. One of the many perks of WebRTC is that it doesn’t require the installation of plugins, making its use less vulnerable to security breaches. WebSockets are used for the signalling channel, and SIP or XMPP can be used as the signalling protocol - in JSCommunicator, we used SIP. What was also done in JSCommunicator, and can be done in other applications, is to use a library to implement the signalling and SIP stack. JSCommunicator uses (and I highly recommend) JsSIP.
The basic SIP architecture is comprised of peers and a SIP Proxy Server that supports SIP over Websockets transport:
If you have users behind NAT, you’ll also need a TURN server for relay:
In both cases, setup is not too difficult, particularly if using reSIProcate which offers both the SIP proxy and the TURN server. Daniel Pocock has an excellent post on how to setup and configure your SIP proxy and TURN server.
With regard to JSCommunicator, it is a generic telephone in HTML5 which can easily be embedded in any web site or web app. Almost every aspect of JSCommunicator is easily customizable. More about JSCommunicator setup and architecture is detailed in a previous post.
The JSCommunicator-xTuple extension can be installed in xTuple as an npm package (xtuple-jscommunicator). It is still at a beta - or even pre-beta - stage and there are various limitations: the configuration must be hard-coded and dialing is done manually as opposed to clicking on a contact. Some of these ‘limitations’ are features on the wish list for future work. For example, some ideas for the next version of the extension are click-to-dial from CRM records and bringing up CRM records for an incoming call. Additionally, the SIP proxy could be automatically installed with the xTuple server installation if desired.
We closed the presentation with a live demo during which I made a video call from a JSCommunicator instance embedded in freephonebox.net to the JSCommunicator xTuple extension running on Daniel’s laptop. Despite the occasionally iffy hotel Wi-Fi, the demo was a hit - at one point I even left the conference room, took a quick walk around the hotel, and invited other xTupleCon attendees to say hello to those in the room. The audience’s reception was more enthusiastic than I anticipated, giving way to a pretty extensive Q&A session. It’s great to see more and more people interested in WebRTC; I can’t emphasize enough what a useful and versatile tool it is.
Here’s an ‘action shot’ of part of the xTuple WebRTC presentation:
Before I start really digging in to reworking the Render support in Glamor, I wanted to take a stab at cleaning up some cruft which has accumulated in Glamor over the years. Here's what I've done so far.

Get rid of the Intel fallback paths
I think it's my fault, and I'm sorry.
The original Intel Glamor code has Glamor implement accelerated operations using GL, and when those fail, the Intel driver would fall back to its existing code, either UXA acceleration or software. Note that it wasn't Glamor doing these fallbacks, instead the Intel driver had a complete wrapper around every rendering API, calling special Glamor entry points which would return FALSE if GL couldn't accelerate the specified operation.
The thinking was that when GL couldn't do something, it would be far faster to take advantage of the existing UXA paths than to have Glamor fall back to pulling the bits out of GL, drawing to temporary images with software, and pushing the bits back to GL.
And, that may well be true, but what we've managed to prove is that there really aren't any interesting rendering paths which GL can't do directly. For core X, the only fallbacks we have today are for operations using a weird planemask, and some CopyPlane operations. For Render, essentially everything can be accelerated with the GPU.
At this point, the old Intel Glamor implementation is a lot of ugly code in Glamor without any use. I posted patches to the Intel driver several months ago which fix the Glamor bits there, but they haven't seen any review yet and so they haven't been merged, although I've been running them since 1.16 was released...
Getting rid of this support let me eliminate all of the _nf functions exported from Glamor, along with the GLAMOR_USE_SCREEN and GLAMOR_USE_PICTURE_SCREEN parameters, as well as the GLAMOR_SEPARATE_TEXTURE pixmap type.

Force all pixmaps to have exact allocations
Glamor has a cache of recently used textures that it uses to avoid allocating and de-allocating GL textures rapidly. For pixmaps small enough to fit in a single texture, Glamor would use a cache texture that was larger than the pixmap.
I disabled this when I rewrote the Glamor rendering code for core X; that code used texture repeat modes for tiles and stipples; if the texture wasn't the same size as the pixmap, then texturing would fail.
On the Render side, Glamor would actually reallocate pixmaps used as repeating texture sources. I could have fixed up the core rendering code to use this, but I decided instead to just simplify things and eliminate the ability to use larger textures for pixmaps everywhere.

Remove redundant pixmap and screen private pointers
Every Glamor pixmap private structure had a pointer back to the pixmap it was allocated for, along with a pointer to the Glamor screen private structure for the related screen. There's no particularly good reason for this, other than making it possible to pass just the Glamor pixmap private around a lot of places. So, I removed those pointers and fixed up the functions to take the necessary extra or replaced parameters.
Similarly, every Glamor fbo had a pointer back to the Glamor screen private too; I removed that and now pass the Glamor screen private parameter as needed.

Reducing pixmap private complexity
Glamor had three separate kinds of pixmap private structures: one for 'normal' pixmaps (those allocated by themselves in a single FBO), one for 'large' pixmaps, where the pixmap was tiled across many FBOs, and a third for 'atlas' pixmaps, which presumably would be a single FBO holding multiple pixmaps.
The 'atlas' form was never actually implemented, so it was pretty easy to get rid of that.
For large vs normal pixmaps, the solution was to move the extra data needed by large pixmaps into the same structure as that used by normal pixmaps and simply initialize those elements correctly in all cases. Now, most code can ignore the difference and simply walk the array of FBOs as necessary.
The other thing I did was to shrink the number of possible pixmap types from 8 down to three. Glamor now exposes just these possible pixmap types:
GLAMOR_MEMORY. This is a software-only pixmap, stored in regular memory and only drawn with software. This is used for 1bpp pixmaps, shared memory pixmaps and glyph pixmaps. Most of the time, these pixmaps won't even get a Glamor pixmap private structure allocated, but if you use one of these with the existing Render acceleration code, that will end up wanting a private pointer. I'm hoping to fix the code so we can just use a NULL private to indicate this kind of pixmap.
GLAMOR_TEXTURE. This is a full Glamor pixmap, capable of being used via either GL or software fallbacks.
GLAMOR_DRM_ONLY. This is a pixmap based on an FBO which was passed from the driver, and for which Glamor couldn't get the underlying DRM object. I think this is an error, but I don't quite understand what's going on here yet...
- Deal with X vs GL color formats
- Finish my new CompositeGlyphs code
- Create pure shader-based gradients
- Rewrite Composite to use the GPU for more computation
- Take another stab at doing GPU-accelerated trapezoids
Bluespark Labs: Follow the readiness of the top 100 modules for Drupal 8 with our automatically updated tool
With the first Drupal 8 beta having been released at DrupalCon Amsterdam, we thought this would be a good time to take a look at the top 100 projects on drupal.org to see just how far along the process of preparing for Drupal 8 is. However, given that there's a lot of progress to be made and I don't feel like manually updating a long list of modules, I decided to make a small tool to get the status of these modules and keep the data up to date.
This turned out to be a fun little project, and slightly more involved than I anticipated at first. (Isn't it always the case!) However, at its heart it's a bone-simple Drupal project - one content type for the Drupal projects (and their metadata) we're interested in, and a few views to show them as a table and calculate simple statistics. The work of updating the metadata from drupal.org is handled in 85 lines of code, using hook_cron to add each project to a Queue to be processed. The queue callback borrows code from the update module and simply gets release data, parses it, and updates the metadata on the project nodes. In the end, the most work was doing the research to determine which projects are already in core, and adding notes about where to find D8 upgrade issues and so on.
So, how did it all turn out? Using the current top 100 projects based on the usage statistics on drupal.org, our tool tells us that as of today, out of the 100 most popular projects:
Thanks for reading, and be sure to keep an eye on the status page to see how the most used contrib modules are coming along!
With the Drupal Security team's release of a public service announcement, the infamous security update known as 'SA-005' is back in the news. Even though it's old news, we've been fielding a new round of questions, so we thought we'd try to clear up some of the confusion.
Modules Unraveled: 124 Creating Drupal Configuration in Code Using CINC with Scott Reynen - Modules Unraveled Podcast
- What is CINC?
- How is it different from Features or Configuration Management?
- Is it something you use on an ongoing basis? Or is it just for the initial site setup?
- What types of configuration can you manage with CINC?
- What if you already have a content type created, and you want to add a field to the content type?
- How does that affect existing content, and new content?
- What about the reverse? Can you remove a field?
- What happens to the data that is already in the database?
- Can you undo configuration that you’ve created with CINC?
- How do you prevent site admins from disabling the module and deleting their content types?
- CINC YAML
- CINC & Features
- CINC & Drupal 8 Config API
- How do you see CINC working in a headless Drupal setting?
- Create dozens of fields quickly.
- Add a field to a content type after an existing field.
- Update configuration only if it still matches the default settings.
- How do you use this in a dev/staging/production workflow?
- Have you noticed any improved feedback, improvements to your workflow while using CINC?
- If people want to jump in and help development or work on new features what should they do?
DrupalCon is a great place to enhance your Drupal skills, learn about the latest modules, and improve your theming techniques. Sure, there are sessions, keynotes, vendor displays, and parties... like trivia night!
But... there is also the opportunity to look behind the curtain and see how the software really gets made. And, more importantly, to lend your hand in making it. For six days (three before and three after DrupalCon), there are dedicated sprint opportunities where you can hang out with other Drupalistas testing, summarizing issues, writing documentation, working on patches, or generally contributing to the development of Drupal and the Drupal community.
We want to share some details about the DrupalCon Amsterdam Sprints (and pictures to reminisce about the good times) and mention some upcoming sprints that you can hopefully attend.
- Sponsors supporting the sprinters
- Pre-con extended sprints (60 Saturday, 100 Sunday, 180 Monday)
- During the con
- Friday Sprint (450 people)
- Post-con Extended sprints on Saturday and Sunday (80 Saturday, 60 Sun)
- Feedback about the sprints
- Upcoming sprints
- Drupal Association, @DrupalAssoc
- Acquia (Large Scale Drupal), @Acquia
- Open8, @open8roger
- Bluehost, @Bluehost
- David Hernandez, @davidnarrabilis
- Wunderkraut, @Wunderkraut
Our sponsors helped us have:
- Co-working space Saturday and Sunday before the con.
- Sprint space at the venue Monday-Thursday.
- Big sprint space Friday.
- Co-working space Saturday and Sunday after the con.
- Food and coffee all of the days.
- Sprint supplies: task cards, stickers, markers, signs, flip charts.
- Mentor thank you dinner.
The outside of the Berlage co-working space (castle) with the Drupal Association banner.
Sprinters sprinting inside the cool looking Berlage.
marthinal, franSeva, estoyausente, YesCT, Ryan Weal
We had lots of rooms for groups to gather at the Berlage.
pwolanin, dawehner, wimleers, Hydra, swentel
Sutharsan, yched, Berdir
On Monday sprint attendance grew to 180 sprinters. We moved to the conference venue, Amsterdam RAI. Other pre-conference events taking place included trainings, the Community Summit, and the Business Summit. At this particular DrupalCon there was much excitement about the anticipated beta release of Drupal. Many people did a lot of testing to make sure that the beta would be ready.
Discussing a beta blocker issue they found.
lauriii, sihv, Gábor Hojtsy, lanchez
Lots of people sprinting and testing the beta candidate, with support from experienced core contributors walking around and helping.
Sprinting continued during the conference, Tuesday through Thursday. And, to prepare for Friday's mentored sprint, the core mentoring team scheduled a series of 8 BOFs (‘Birds of a Feather’ or informal sessions). Preparations included mentor orientation, setting up local environments, and reading, updating, and tagging issues in the Drupal issue queue. Mentoring BoFs were open to all conference participants.
YesCT, sqndr, -, -, lazysoundsystem, neoxavier, Mac_Weber, patrickd, roderik, jmolivas, marcvangend, -, realityloop, rteijeiro
To promote contribution sprints, mentors volunteered at the mentoring booth in the exhibition hall during all three days of DrupalCon. Conference attendees who visited the booth learned about the Friday sprints. Mentors also recruited additional mentors, and encouraged everyone to get involved in contributing to Drupal.
The mentor booth with lots of signage, and welcoming people.
(photo: stpaultim )
At the booth, conference attendees were able to pick up our new contributor role task cards, which outlined some of the various ways that people can contribute to Drupal, and stickers given as recognition for the specific roles that they already play.
In Amsterdam, 450 people showed up to contribute to Drupal on Friday.
People gathered in groups to work on issues together.
-, -, -, -, -
For many people the highlight of the week is the large “mentored” sprint on Friday. 180 of the 450 participated in our First-time sprinter workshop designed to help Drupal users and developers better understand the community, the issue queues, and contribution. The workshop helped people install the tools they would use as contributors. Another 100 were ready to start work right away with our 50 mentors. Throughout the day people from the first-time sprinter workshop transitioned to contributing with other sprinters and mentors. Sprinters and mentors helped people identify issues that had tasks that aligned with their specific skills and experience.
The workshop room.
Mentors (in orange shirts): rachel_norfolk, roderik
Hand written signs were everywhere!
A group picture of some of the mentors.
mradcliffe, Aimee Degnan, alimac, kgoel, rteijero, Deciphered, emma.maria, mon_franco, patrickd, 8thom, -, lauriii, marcvangend, ceng, Ryan Weal, YesCT, realityloop, -, lazysoundsystem, roderik, Xano, David Hernández, -, -, -, -
Near the end of the day, over 100 sprinters (both beginners and veterans) gathered to watch the work of first time contributors get committed (added) to Drupal core. Angie Byron (webchick) walked the audience through the process of evaluating, testing, and then committing a patch to Drupal core.
Live commit by webchick
webchick, -, -, marcvangend
(photo: Pedro Lozano)
On Saturday after DrupalCon 80 dedicated contributors moved back to the Berlage to continue the work on Drupal core. 60 people came to contribute on Sunday. During these final days of extended sprints, Drupal beginners and newcomers had the chance to exercise their newly acquired skills while working together with some of the smartest and most experienced Drupal contributors in the world. The value of the skills exchanges and personal relationships that come from working in this kind of environment cannot be overstated. While there is an abundance of activity during Friday’s DrupalCon contribution sprints, the atmosphere during extended sprints is a bit more relaxed. Attending the pre- and post-con sprints gives sprinters time to dive deep into issues and tie up loose ends. After a number of hallway and after-session conversations, contributors working on specific Drupal 8 initiatives met to sketch out ideas, using whiteboards or any other means of note-taking to make plans for the future.
LoMo, Outi, pfrenssen, lauriii, mortendk, emma.maria, lewisnyman
Aimee Degnan, Schnitzel, dixon, -, Xano, alimac, boris, Gábor Hojtsy, realityloop, YesCT, justafish, eatings, fgm, penyaskito, pcambra, -
-, jthorson, opdavies, drumm, RuthieF, -, -, killes, dasrecht
- Sprinting for the First Time - Blog post by AdamEvertsson
- From Rookie to Drupal Core Contributor in One Day - Blog post by @dmsmidt
- DrupalCon Amsterdam, 2014 - Blog post by @valvalg
- “Mentoring at #DrupalCon sprints is the most rewarding and enjoyable part of the week :) <3 @drupalmentoring #DrupalSprint - Original Tweet from @emma_maria88
- “One hour at the #DrupalCon code sprint and I've already submitted my first patch. It is going to be a good week.” - Original Tweet from @skwashd
- Hi, I'm George! I'm your mentor! - Blog post by Thamas (@eccegostudio)
Please contact me to get your DrupalCon Amsterdam sprint related blog added to the list here.

Upcoming sprints
- BADCamp (sprint details November 5 - 10 2014)
- Global Sprint Weekend January 17, 18 2015
- DrupalCon Latin America in Bogota (sprint details Feb 8 - 13 2015)
- lots of camps, check druplical.com (the Drupal event location visualization tool)
- Drupal Dev Days April 2015
- DrupalCon North America in Los Angeles (sprint May 9 - 17 2015)
- DrupalCon Europe in Barcelona (sprint Sept 19 - 27 2015)
Plan your travel for the next event so you can sprint with us too!
It's impossible to overstate how important free software is. A movement that began with a quest to work around a faulty printer is now our greatest defence against a world full of hostile actors. Without the ability to examine software, we can have no real faith that we haven't been put at risk by backdoors introduced through incompetence or malice. Without the freedom to modify software, we have no chance of updating it to deal with the new challenges that we face on a daily basis. Without the freedom to pass that modified software on to others, we are unable to help people who don't have the technical skills to protect themselves.
Free software isn't sufficient for building a trustworthy computing environment, one that not merely protects the user but respects the user. But it is necessary for that, and that's why I continue to evangelise on its behalf at every opportunity.
Free software has a problem. It's natural to write software to satisfy our own needs, but in doing so we write software that doesn't provide as much benefit to people who have different needs. We need to listen to others, improve our knowledge of their requirements and ensure that they are in a position to benefit from the freedoms we espouse. And that means building diverse communities, communities that are inclusive regardless of people's race, gender, sexuality or economic background. Free software that ends up designed primarily to meet the needs of well-off white men is a failure. We do not improve the world by ignoring the majority of people in it. To do that, we need to listen to others. And to do that, we need to ensure that our community is accessible to everybody.
That's not the case right now. We are a community that is disproportionately male, disproportionately white, disproportionately rich. This is made strikingly obvious by looking at the composition of the FSF board, a body made up entirely of white men. In joining the board, I have perpetuated this. I do not bring new experiences. I do not bring an understanding of an entirely different set of problems. I do not serve as an inspiration to groups currently under-represented in our communities. I am, in short, a hypocrite.
So why did I do it? Why have I joined an organisation whose founder I publicly criticised for making sexist jokes in a conference presentation? I'm afraid that my answer may not seem convincing, but in the end it boils down to feeling that I can make more of a difference from within than from outside. I am now in a position to ensure that the board never forgets to consider diversity when making decisions. I am in a position to advocate for programs that build us stronger, more representative communities. I am in a position to take responsibility for our failings and try to do better in future.
People can justifiably conclude that I'm making excuses, and I can make no argument against that other than to be asked to be judged by my actions. I hope to be able to look back at my time with the FSF and believe that I helped make a positive difference. But maybe this is hubris. Maybe I am just perpetuating the status quo. If so, I absolutely deserve criticism for my choices. We'll find out in a few years.
As extremely pedantic developers we take documenting our APIs very seriously. It's not rare to see a good patch rejected in code review just because the PHPdocs weren't clear enough, or a @param wasn't declared properly.
In fact, I often explain to junior devs that the most important part of a function is its signature, and the PHPdocs. The body of the function is just "implementation details". How it communicates its meaning to the person reading it is the vital part.
But where does this whole pedantic mindset go when we open up our web services?
I would argue that at least 95% of the developers who expose their web service simply enable RESTws without any modifications. And here's what a developer implementing your web service will see when visiting /node.json:
Last Wednesday I had the pleasure and honor to have a great guest again at my class: José María Serralde, talking about real time scheduling. I like inviting different people to present interesting topics to my students a couple of times each semester, and I was very happy to have Chema come again.
Chema is a professional musician (formally, a pianist, although he has far more skills than what a title would confer to him, skills that go way beyond just music), and he had to learn the details of scheduling due to errors that appear when recording and performing.
The audio could use some cleaning, and my main camera (the only one that lasted for the whole duration) was by a long shot not professional grade, but the video works and is IMO quite interesting and well explained.
They say that hindsight is 20/20. With the many advances that have happened in the Drupal community recently, we asked our team "What is the one thing you wish you knew about Drupal two years ago?"
If someone had told me that I would attend three feminist events this year, I would have slowly nodded at them and responded with "yeah, sure..." not believing it. But sometimes things take their own turns.
It all started with the Debian Women Mini-Debconf in Barcelona. The organizers asked me how they should word the call for papers so that I would feel invited to give a speech, which felt very welcoming and nice. So we settled for "people who identify themselves as female". Due to private circumstances I didn't prepare well for my talk, but I hope it was still worth it. The next interesting part, though, happened later during the lightning talks. Someone on IRC asked why there were men giving lightning talks, which were in fact the only talks explicitly open to them. This also felt very, very nice, to be honest: my talk wasn't questioned. Those are amongst the reasons why I wrote My place is here, my home is Debconf.
The second event I went to was FemCamp Wien. It was my first barcamp, so I didn't know what to expect organization-wise. Topic-wise it was about queer feminism. And it was the first event I attended that had a policy. Granted, one part of it was extremely badly written, which naturally ended up in a shitstorm on Twitter (which people from both sides managed very badly, and which disappointed me). Denying that there is sexism against cis males is just a bad idea, but the point was that this simply wasn't the topic of this event. The background of the policy was that barcamps, and events in general, usually aren't considered that safe a place for certain people, and this barcamp wanted to make clear that people who usually shy away from such events for fear of harassment could feel at home there.
And what can I say, this absolutely was the right thing to do. I never felt more welcomed and included at any event, including Debian events—sorry to say that so frankly. Making it clear through the policy that everyone is in the same boat about addressing each other respectfully totally managed to do exactly that. The first session of the event, about dominant talk patterns and how to work around or against them, also made sure that the rest of the event gave shy people a chance to speak up and feel comfortable, too. And the range of sessions that were held was simply great. This was the event where I came up with the idea of judging the quality of an event by the sessions I'm unable to attend. The thing that hurt me most in hindsight was that I couldn't attend the session about minorities within minorities. :/
Last but not least I attended AdaCamp Berlin. This was a small unconference/barcamp dedicated to increasing women's participation in open technology and culture, named after Ada Lovelace, who is considered the first programmer. It was a small event with only 50 slots for people who identify as women, so I was totally hyper when I received the mail that I was accepted. It was another event with a policy, and at first reading it looked strange. But given that some people are allergic to the ingredients of scents, it made sense to raise awareness of that topic. And given that women face a fair amount of harassment in IT and at events, it also makes sense to remind people to behave. After all, it was a general policy for all AdaCamps, not just for this specific one with only women.
I enjoyed the event. Totally. And that's not only because I was able to meet up with a dear friend who I haven't talked to in years, literally. I enjoyed the environment and the sessions that were going on. And quite similar to the FemCamp, it started off with a session that helped a lot for the rest of the event. This time it was about the impostor syndrome, which is extremely common for women in IT. And what can I say, I found myself in one of the slides, given that I had tweeted just the day before that I doubted I belonged there. Frankly speaking, it even crossed my mind that I was only accepted so that at least one trans person would be there. Which is pretty much what the impostor syndrome is all about, isn't it? But when I was there, it did feel right. And we had great sessions that I truly enjoyed. And I have to thank one lady once again for the great definition of feminism that she brought up during one session, which is roughly that feminism for her isn't about gender but about the equality of all people, regardless of their sex or gender definition. It's about dropping this whole binary thinking. I couldn't agree more.
All in all, I totally enjoyed these events, and I hope that I'll be able to attend more next year. From what I grasped, all three of them are thinking of doing it again; FemCamp Wien already announced the date at the end of this year's event, so I am looking forward to meeting most of these fine ladies again, if fate permits. And keep in mind: there will always be critics and haters out there, but given that they wouldn't think of attending such an event in the first place anyway, don't get wound up about it. They are just trying to talk you down.
P.S.: Ah, I almost forgot one thing worth mentioning, which also helps a lot to lower the barrier for some people to attend: the catering during the day and at lunch at both FemCamp and AdaCamp (there was no organized catering at the Debian Women Mini-DebConf) removed the need for people to ask whether there would be food without meat and dairy products, by simply offering mostly vegan food in the first place, without even having to query the participants. Often enough people otherwise choose to leave the event or bring their own food instead of asking, so this is an extremely welcoming move, too. Way to go!
I've spent the past thirty minutes installing FreeBSD as a KVM guest. This mostly involved fetching the ISO (I chose the latest stable release 10.0), and accepting all the defaults. A pleasant experience.
As I'm running KVM inside screen I wanted to see the boot prompt, etc, via the serial console, which took two distinct steps:
- Enabling the serial console - which lets boot stuff show up
- Enabling a login prompt on the serial console in case I screw up the networking.
To configure boot messages to display via the serial console, issue the following command as the superuser:

# echo 'console="comconsole"' >> /boot/loader.conf
To get a login: prompt you'll want to edit /etc/ttys and change "off" to "on" and "dialup" to "vt100" for the ttyu0 entry. Once you've done that, reload init via:

# kill -HUP 1
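For illustration, here is roughly what that edit amounts to. The sample ttyu0 line below is an assumption (the exact getty arguments differ between FreeBSD versions), and the sed command is run against a throwaway copy rather than the real /etc/ttys:

```shell
# Hypothetical ttyu0 entry as it might appear in /etc/ttys
echo 'ttyu0 "/usr/libexec/getty std.9600" dialup off secure' > /tmp/ttys.sample

# Flip "dialup" to "vt100" and "off" to "on", as described above
# (GNU sed syntax; on FreeBSD use: sed -i '' ...)
sed -i 's/dialup off/vt100 on/' /tmp/ttys.sample

# The entry should now end in: ... vt100 on secure
cat /tmp/ttys.sample
```

In practice you would make the same two word-swaps by hand in your editor of choice.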
Enable remote root logins, if you're brave, or disable PAM and password authentication if you're sensible:

vi /etc/ssh/sshd_config
/etc/rc.d/sshd restart
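For reference, the sshd_config knobs in question look like this — a sketch of the two approaches the post mentions (pick one, not both), not a complete configuration:

```
# Brave: allow root to log in over SSH
PermitRootLogin yes

# Sensible: key-based authentication only
PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM no
```

Remember to restart sshd after editing, as shown above, and test a new connection before closing your existing session.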
Configure the system to allow binary package installation - to be honest I was hazy on why this was required, but I ran the two commands and it all worked out:

pkg
pkg2ng
Now you may install a package via a simple command such as:

pkg add screen
Removing packages you no longer want is as simple as using the delete option:

pkg delete curl
You can see installed packages via "pkg info", and there are more options to be found via "pkg help". In the future you can apply updates via:

pkg update && pkg upgrade
Finally I've installed 10.0-RELEASE, which can be upgraded in the future via "freebsd-update" - this seems to boil down to "freebsd-update fetch" and "freebsd-update install", but I'm hazy on that just yet. For the moment you can see your installed version via:

uname -a ; freebsd-version
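Echoing the post's own hedging, a patch-level update would look roughly like the following transcript (the release name in the upgrade example is illustrative):

```
# freebsd-update fetch
# freebsd-update install
```

Moving to a newer release uses the upgrade subcommand with an explicit target, e.g. "freebsd-update -r 10.1-RELEASE upgrade", followed by the install step (and a reboot in between) — but check the freebsd-update(8) man page before trusting my sketch.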
Expect my future CPAN releases, etc, to be tested on FreeBSD too now :)
Much like an evolutionary tree, our goal in technology adoption is to continue to move forward and evolve, rather than getting caught in a dead end. In the natural world, becoming bigger can be good but can lead to extinction events should the environment or food source change. Right now we are in a technology Jurassic...
geoip version 1.6.2-2 and geoip-database version 20141027-1 are now available in Debian unstable/sid, with some news of more free databases available :)
geoip changes:
* Add patch for geoip-csv-to-dat to add support for building the GeoIP city DB. Many thanks to Andrew Moise for contributing!
* Add and install geoip-generator-asn, which is able to build the ASN DB. It is a modified version of the original geoip-generator. Much thanks for contributing also to Aaron Gibson!
* Bump Standards-Version to 3.9.6 (no changes required).
geoip-database changes:
* New upstream release.
* Add the new databases GeoLite city and GeoLite ASN to the new package geoip-database-extra. Also bump build depends on geoip to 1.6.2-2.
* Switch to xz compression for the orig tarball.
So much thanks to both contributors!
If you have been around CodeKarate.com for awhile you have noticed that our branding has been, we
TL;DR; Those of you who are not able to join "X2Go: The Gathering 2014"... Join us on IRC (#x2go on Freenode) over the coming weekend. We will provide information, URLs to our TinyPads, etc. there. Spontaneous visitors are welcome during the working sessions (please let us know if you plan to come around), but we don't have spare beds anymore for accommodation. (We are still trying hard to set up some sort of video coverage--be it live streaming or recorded sessions, this is still open; people who can offer help, see below).
Our event "X2Go: The Gathering 2014" is approaching quickly. We will meet with a group of 13-15 people (the number is still slightly fluctuating) at Linux Hotel, Essen. Thanks to the generous offerings of the Linux Hotel to FLOSS community projects, the costs of food and accommodation could be kept really low and affordable for many people.
We are very happy that people from outside Germany are coming to that meeting (Michael DePaulo from the U.S., Kjetil Fleten (http://fleten.net) from Denmark / Norway). And we are also proud that Martin Wimpress (Mr. Ubuntu MATE Remix) will join our gathering.
In advance, I want to send a big THANK YOU to all the people who will sponsor our weekend, either by sending gift items, covering travel expenses, or providing help and knowledge to make this event a success for the X2Go project and the community around it.
"Like a lot of people, I did both sides of technology; working on paid, proprietary systems [and open source]. There is a big difference. I can't imagine myself going back to any proprietary system where I have to pay; I can't share the code I am doing with anyone; I have to ask a company about the right tool to use. I love the way that everybody contributes to the same piece of code, trying to make it the best ... and for free!"