This month has been the month before the freeze. Lots of people uploaded a package at the last moment and wanted to have it in testing before the window closed. This resulted in even more processed packages than in September. I was able to accept 407 packages and had to reject 77. The whole FTP team managed to bring the NEW queue below 40 waiting packages. As the Release team doesn’t like to see binary-NEW packages appearing in unstable (at least those which change the soname of a lib), this number will increase again. But that’s life …
I am glad that a freeze happens only every few years. So I would particularly like to thank my dear wife for her patience when she saw me sitting in front of that damned computer again and again.
This month I got assigned a workload of 13.75h and I spent these hours uploading new versions of:
- [DLA 72-1] rsyslog security update
- [DLA 72-2] rsyslog regression update
- [DLA 78-1] torque security update
- [DLA 80-1] libxml2 security update
I also prepared a new upload of wget and am still waiting for some feedback. In this case some default values had to be changed, so I’d better wait a bit before I break somebody’s scripts.
Moreover, five CVEs have accumulated for php5, so I guess another upload has to be done for this package. This will be ready in the next few days …
I also tried to work on libtasn1-3 and librack-ruby. There were no DSAs for these packages, so I tried to dig into the upstream repositories. Unfortunately I failed to find the correct patches. Kudos to the Security Team, who have to struggle with all kinds of commit messages on a daily basis.
I didn’t have time to do any work on my own packages. But during my ftp-time I saw the occasional package that deals with some kind of home automation. Up to now there doesn’t seem to be a Debian group that deals with this topic. Maybe it is time to start one?
If you would like to support my Debian work you could either be part of the Freexian initiative (see above) or consider sending some bitcoins to 1JHnNpbgzxkoNexeXsTUGS6qUp5P88vHej. Contact me at email@example.com if you prefer another way to donate. Every kind of support is most appreciated.
For background, qemu/kvm support a few ways of providing networking to guests. The default is user networking, which requires no privileges but is slow and based on ancient SLIRP code. The other common option is tap networking, which is fast but complicated to set up. It turns out that with networkd and the qemu bridge helper, tap is easy to set up.
$ for file in /etc/systemd/network/*; do echo $file; cat $file; done
Diverging from Joachim’s simple example, we replaced "DHCP=yes" with "Bridge=br0". Then we proceed to define the bridge (in kvm.netdev) and give it an IP via DHCP in kvm.network. On the kvm side, if you haven’t used the bridge helper before, you need to give the helper permissions (setuid root or cap_net_admin) to create a tap device to attach to the bridge. The helper needs a configuration file telling it which bridge it may meddle with.
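The networkd files themselves are short. Here is a minimal sketch of what the three of them might look like; the uplink file name and the eth0 interface name are illustrative assumptions, and the sketch writes into a scratch directory rather than /etc/systemd/network:

```shell
# Scratch directory standing in for /etc/systemd/network on a real system.
NETDIR=$(mktemp -d)

# Physical uplink: instead of DHCP=yes, enslave it to the bridge.
# (File name and interface name are illustrative.)
cat > "$NETDIR/kvm-uplink.network" <<'EOF'
[Match]
Name=eth0

[Network]
Bridge=br0
EOF

# kvm.netdev: define the bridge device itself.
cat > "$NETDIR/kvm.netdev" <<'EOF'
[NetDev]
Name=br0
Kind=bridge
EOF

# kvm.network: give the bridge an IP via DHCP.
cat > "$NETDIR/kvm.network" <<'EOF'
[Match]
Name=br0

[Network]
DHCP=yes
EOF
```

With files like these in place, systemd-networkd creates br0, enslaves the uplink, and runs DHCP on the bridge.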
# cat > /etc/qemu/bridge.conf <<__END__
allow br0
__END__
# setcap cap_net_admin=ep /usr/lib/qemu/qemu-bridge-helper
Now we can start kvm with bridge networking as easily as with user networking:
$ kvm -m 2048 -drive file=jessie.img,if=virtio -net bridge -net nic,model=virtio -serial stdio
The manpages systemd.network(5) and systemd.netdev(5) do a great job explaining the files. Qemu/kvm networking docs are unfortunately not as detailed.
We discussed the status of the LibreOffice package in Debian and shared our views on it as downstream and upstream. I hope the LibO folks will investigate the diffs under the debian/patches directory and pull some of them upstream (that would also help Debian and other downstream distros).
We also held a hands-on installation event with debian-installer 8.0 beta2: we found an issue with an Acer laptop, which will probably be reported to the BTS.
The next, 120th meeting will be on 29th November at the same place - see you there! :)
The UDD bugs interface currently knows about the following release critical bugs:

- In Total:
- Affecting Jessie: 274 That's the number we need to get down to zero before the release. They can be split in two big categories:
  - Affecting Jessie and unstable: 224 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
    - 30 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
    - 12 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
    - 182 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
  - Affecting Jessie only: 50 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
The Verge has an interesting article about Tim Cook (Apple CEO) coming out. Tim says “if hearing that the CEO of Apple is gay can help someone struggling to come to terms with who he or she is, or bring comfort to anyone who feels alone, or inspire people to insist on their equality, then it’s worth the trade-off with my own privacy”.
George Monbiot wrote an insightful article for The Guardian about the way that double-speak facilitates killing people. He is correct that the media should hold government accountable for such use of language instead of perpetuating it.
Shweta Narayan wrote an insightful article about Category Structure and Oppression. I can’t summarise it because it’s a complex concept; read the article.
Some Debian users who don’t like systemd have started a “Debian Fork” project, which so far just has a web site and nothing else. I expect that they will never write any code. But it would be good if they did; they would learn about how an OS works and maybe they wouldn’t disagree so much with the people who have experience in developing system software.
A GamerGate terrorist in Utah forced Anita Sarkeesian to cancel a lecture. I expect that the reaction will be different when (not if) an Islamic group tries to get a lecture cancelled in a similar manner.
Katie McDonough wrote an interesting article for Salon about Ed Champion and what to do about men who abuse women. It’s worth reading that while thinking about the FOSS community…
-  http://www.theverge.com/2014/10/30/7130899/tim-cook-proud-to-be-gay
-  http://graydon2.dreamwidth.org/193575.html
-  http://tinyurl.com/nretmhm
-  http://www.vice.com/en_ca/read/on-jian-ghomeshi-and-the-presumption-of-innocence-830
-  http://www.doctornerdlove.com/2014/08/the-extinction-burst-of-gaming-culture/
-  http://shweta-narayan.livejournal.com/204154.html
-  http://debianfork.org/
-  http://www.theverge.com/2014/10/14/6978809/utah-state-university-receives-shooting-threat-for-anita-sarkeesian-visit
-  http://modelviewculture.com/pieces/autistics-in-the-silicon-valley-2
-  http://tinyurl.com/o6d8yu8
In June last year I bought a Samsung Galaxy Note 2. Generally I was very happy with that phone; one problem I had is that less than a year after purchasing it the Ingress menus burned into the screen.
2 weeks ago I bought a new Galaxy Note 3. One of the reasons for getting it is the higher resolution screen; I never realised the benefits of a 1920*1080 screen on a phone until my wife got a Nexus 5. I had been idly considering a Galaxy Note 4, but $1000 is a lot of money to pay for a phone and I’m not sure that a 2560*1440 screen will offer much benefit at that size. Also the Note 3 and Note 4 both have 3G of RAM; as some applications use more RAM when you have a higher resolution screen, the Note 4 will effectively have less usable RAM than the Note 3.
My first laptop cost me $3,800 in 1998; that’s probably more than $6,000 in today’s money. The benefits that I receive now from an Android phone are in many ways greater than I received from that laptop, and that laptop was definitely good value for money for me. If the cheapest Android phone cost $6,000 then I’d pay that, but given that the Note 3 is only $550 (including postage) there’s no reason for me to buy something more expensive.
Another reason for getting a new phone is the limited storage space in the Note 2: 16G of internal storage is a real limit when you have some big games installed, and the recent Android update which prevented apps from writing to the SD card meant that it was no longer convenient to put TV shows on my SD card (the Note 2’s 8G card couldn’t be fully used anyway due to those Android limitations). The 32G of internal storage in the Note 3 allows me to fit everything I want, including the music video collection I downloaded with youtube-dl (the 64G version wasn’t on sale at any of the cheap online stores). The Note 3 also supports an SD card, which will be good for my music video collection at some future time; this is a significant benefit over the Nexus 5.
In the past I’ve written about Android service life and concluded that storage is the main issue. So it is a bit unfortunate that I couldn’t get a phone with 64G of storage at a reasonable price. But the upside is that getting a cheaper phone allows me to buy another one sooner and give the old phone to a relative who has less demanding requirements.
In the past I wrote about the warranty support for my wife’s Nexus 5. I should have followed up on that before: 3 days after that post we received a replacement phone. One good thing that Google does is to reserve money on a credit card to buy the new phone and then send you the new phone before you send the old one back. So if the customer doesn’t end up sending the broken phone, they just get billed for the new phone; that avoids excessive delays in getting a replacement phone. So overall the process of Google warranty support is really good; if 2 products are equal in other ways then it would be best to buy from Google to get that level of support.
I considered getting a Nexus 5 as the hardware is reasonably good (not the greatest but quite good enough) and the price is also reasonably good. But one thing I really hate is the way they do the buttons. Having the home button appear on the main part of the display is really annoying. I much prefer the Samsung approach of having a hardware button for home and touch-screen buttons outside the viewable area for settings and back. Also the stylus on the Note devices is convenient on occasion.
The Note 3 has a fake-leather back. The concept of making fake leather is tacky; I believe that it’s much better to make honest plastic that doesn’t pretend to be something that it isn’t. However the texture of the back improves the grip, and the fake stitches around the edge help with the grip too. It’s tacky but utilitarian.
The Note 3 is slightly smaller and lighter than the Note 2. This is a good technical achievement, but I’d rather they just gave it a bigger battery.

Update: USB 3
One thing I initially forgot to mention is that the Note 3 has USB 3. This means that it has a larger socket, which is less convenient when you try to plug it in at night. USB 3 seems unlikely to provide any benefit for me, as I’ve never had any of my other phones transfer data at rates more than about 5MB/s. If the Note 3 happens to have storage that can handle speeds greater than the 32MB/s a typical USB 2 storage device can handle, then I’m still not going to gain much benefit. USB 2 speeds would allow me to transfer the entire contents of a Note 3 in less than 20 minutes (if I needed to copy the entire storage contents). I can’t imagine myself having a real-world benefit from that.
The larger socket means more fumbling when charging my phone at night and it also means that the Note 3 cable can’t be used in any other phone I own. In a year or two my wife will have a phone with USB 3 support and that cable can be used for charging 2 phones. But at the moment the USB 3 cable isn’t useful, as I don’t need a phone charger that can only charge one phone.

Conclusion
The Note 3 basically does everything I expected of it. It’s just like the Note 2 but a bit faster and with more storage. I’m happy with it.
-  http://etbe.coker.com.au/2013/07/10/samsung-galaxy-note-2/
-  http://etbe.coker.com.au/2014/07/29/android-screen-saving/
-  http://etbe.coker.com.au/2013/12/23/nexus-5/
-  http://etbe.coker.com.au/2014/06/04/android-device-service-life/
-  http://etbe.coker.com.au/2014/06/10/google-hardware-support/
I just read the Mozilla announcement on SIMD.js, and I can say I have mixed feelings about it.
I don't usually comment on other news/blogs/announcements, but this is an exception.
So, the latest trend of moving everything to the browser and JS means that instead of optimizing my apps to run great natively, instead of making stuff run faster on my 5W big.LITTLE 8-core ARM SoC, I have to get a much more power-hungry CPU to see the same performance. I'm totally against that. I want my newer CPUs, which are more energy-efficient and faster, to actually feel like THAT. I don't want to upgrade just to experience the performance of a 486 from 20 years ago!
I could go on for a long time, but I have an actual SIMD-related bug to fix, cheers.
Note: I used to have comments enabled on my blog, but moderating spam was too time consuming, even with CAPTCHAs, so I disabled them entirely. If anyone can suggest a better method, I'd gladly take advice (I have been thinking about Disqus, but I'm not sure it's actually a good solution).
If you look at the idea of "The Kitchen of Tomorrow" as IKEA thought about it, the core idea is that cooking is slavery.
It's the idea that technology can free us from making food. It can do it for us. It can recognise who we are, we don't have to be tied to the kitchen all day, we don't have to think about it.
Now, an anthropologist would tell you that cooking is perhaps one of the most complicated things you can think about when it comes to the human condition. If you think about your own cooking habits, they probably come from your childhood, the nation you're from, the region you're from. It takes a lot of skill to cook. It's not so easy.
And actually, it's quite fun to cook. There's also a lot of improvisation. I don't know if you've ever come home and just looked into the fridge: oh, there's a carrot and some milk and some white wine, and you figure it out. That's what cooking is like – it's a very human thing to do.
Therefore, if you think about it, having anything that automates this for you or decides for you or improvises for you is actually not doing anything to help you with what you want to do, which is that it's nice to cook.
More generally, if you make technology—for example—that has at its core the idea that cooking is slavery and that idea is wrong, then your technology will fail. Not because of the technology, but because it simply gets people wrong.
This happens all the time. You cannot swing a cat these days without hitting one of those refrigerator companies that make smart fridges. I don't know if you've ever seen them – the "intelligent fridge". There are so many of them that there is actually a website called "Fuck your internet fridge", run by a guy who tracks failed prototypes of intelligent fridges.
Why? Because the idea is wrong. Not the technology, but the idea about who we are - that we do not want the kitchen to be automated for us.
We want to cook. We want Japanese knives. We want complicated cooking. And so what we are saying here is not that technology is wrong as such. It's just you need to base it—especially when you are innovating really big ideas—on something that's a true human insight. And cooking as slavery is not a true human insight and therefore the prototypes will fail.
(I hereby nominate "internet fridge" as the term to describe products or ideas that—whilst technologically sound—are based on fundamentally flawed anthropology.)
Hearing "I hate X" and thinking that simply removing X will provide real value to your users is short-sighted, especially when you don't really understand why humans are doing X in the first place.
But as a good data-driven person, wouldn't it be nice to have numbers rather than just handwaving? In the absence of a good public dataset, I scraped Hacker Slide to get just over two months of data in the form of hourly snapshots of stories, their age, their score and their position. I then applied a trivial test:
- If the story is younger than any other story
- and the story has a higher score than that other story
- and the story has a worse ranking than that other story
- and at least one of these two stories is on the front page
- then I count the younger story as penalised.
(note: "penalised" can have several meanings. It may be due to explicit flagging, or it may be due to an automated system deciding that the story is controversial or appears to be supported by a voting ring. There may be other reasons. I haven't attempted to separate them, because for my purposes it doesn't matter. The algorithm is discussed here.)
Now, ideally I'd classify my dataset based on manual analysis and classification of stories, but I'm lazy (see ) and so just tried some keyword analysis:
A few things to note:
- Lots of stories are penalised. Of the front page stories in my dataset, I count 3240 stories that have some kind of penalty applied, against 2848 that don't. The default seems to be that some kind of detection will kick in.
- Stories containing keywords that suggest they refer to issues around social justice appear more likely to be penalised than stories that refer to technical matters
- There are other topics that are also disproportionately likely to be penalised. That's interesting, but not really relevant - I'm not necessarily arguing that social issues are penalised out of an active desire to make them go away, merely that the existing ranking system tends to result in it happening anyway.
This clearly isn't an especially rigorous analysis, and in future I hope to do a better job. But for now the evidence appears consistent with my innate prejudice - the Hacker News ranking algorithm tends to penalise stories that address social issues. An interesting next step would be to attempt to infer whether the reasons for the penalties are similar between different categories of penalised stories, but I'm not sure how practical that is with the publicly available data.
(Raw data is here, penalised stories are here, unpenalised stories are here)
 Moving to San Francisco has resulted in it making more sense, but really that just makes me even more depressed.
 Ha ha like fuck my PhD's in biology
 Perhaps stories about startups tend to get penalised because of voter ring detection from people trying to promote their startup, while stories about social issues tend to get penalised because of controversy detection?
We already edit /etc/tomcat7/server.xml after installing the tomcat7 Debian package, to get it to talk AJP instead of HTTP (so we can use libapache2-mod-jk to put it behind an Apache 2 httpd, which also terminates SSL):
We already comment out the block…

<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" URIEncoding="UTF-8" redirectPort="8443" />

… and remove the comment chars around the line…

<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

… so all we need to do is edit that line to make it look like…

<Connector address="127.0.0.1" port="8009" protocol="AJP/1.3" redirectPort="8443" />
… and we’re all set.
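Those manual edits can also be scripted. Here is a rough sed sketch over an abridged sample of the two relevant connector lines; the real server.xml wraps the attributes across several lines, so treat this as illustrative only:

```shell
# Abridged sample of the two relevant lines from server.xml
# (the real Debian file spreads the attributes over multiple lines).
cat > server.xml <<'EOF'
<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" URIEncoding="UTF-8" redirectPort="8443" />
<!--<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />-->
EOF

# Comment out the HTTP connector; uncomment the AJP connector and bind
# it to localhost only.
sed -i \
  -e 's|^<Connector port="8080" .*/>|<!--&-->|' \
  -e 's|<!--<Connector port="8009" \(.*\) />-->|<Connector address="127.0.0.1" port="8009" \1 />|' \
  server.xml

grep 8009 server.xml
```

After the edit, only the AJP connector is active, and only on 127.0.0.1.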
(Your apache2 vhost needs a line

JkMount /?* ajp13_worker
and everything Just Works™ with the default configuration.)
Now, tomcat7 is only accessible from localhost (Legacy IP), and we don’t need to firewall the AJP (or HTTP/8080) port. Do make sure your Apache 2 access configuration works, though ☺
Probably many of you face the same issue: keeping a consistent UNIX identity across multiple machines. While in an ideal world LDAP would be a perfect choice, leaving LDAP open to the wild Internet is not a great idea.
So, how do we solve this issue while staying secure? The trick is to use the new NSS module for SecurePass.
While SecurePass has traditionally been used in the operating system just as a two-factor authentication method, the new beta release is capable of holding “extended attributes”, i.e. arbitrary information for each user profile.
We will use SecurePass to authenticate users and store Unix information with this new capability. In detail, we will:
- Use PAM to authenticate the user via RADIUS
- Use the new NSS module for SecurePass to have a consistent UID/GID/….
The next generation of SecurePass (currently in beta) is capable of storing arbitrary data for each profile. This is called “Extended Attributes” (or xattrs) and -as you can imagine- is organized as key/value pair.
You will need the SecurePass tools to be able to modify users’ extended attributes. The new releases of Debian Jessie and Ubuntu Vivid Vervet have a package for it, just:

# apt-get install securepass-tools

For other distributions or previous releases, there’s a Python package available via pip. Make sure that you have pycurl installed and then:

# pip install securepass-tools
While the SecurePass tools allow a local configuration file, we highly recommend for this tutorial to create a global /etc/securepass.conf, so that it will also be usable by the NSS module. The configuration file looks like:

[default]
app_id = xxxxx
app_secret = xxxx
endpoint = https://beta.secure-pass.net/
Where app_id and app_secret are valid API keys to access the SecurePass beta.
Through the command line, we will be able to set UID, GID and all the required Unix attributes for each user:

# sp-user-xattrs firstname.lastname@example.org set posixuid 1000
While posixuid is the bare minimum attribute to have a Unix login, the following attributes are valid:
- posixuid → UID of the user
- posixgid → GID of the user
- posixhomedir → Home directory
- posixshell → Desired shell
- posixgecos → Gecos (defaults to username)
In a similar way to the tools, Debian Jessie and Ubuntu Vivid Vervet have a native package for SecurePass:

# apt-get install libnss-securepass
Previous releases of Debian and Ubuntu, as well as CentOS and RHEL, can still run the NSS module. Download the sources from:
Then:

./configure
make
make install (Debian/Ubuntu only)
For CentOS/RHEL/Fedora you will need to copy the files into the right place:

/usr/bin/install -c -o root -g root libnss_sp.so.2 /usr/lib64/libnss_sp.so.2
ln -sf libnss_sp.so.2 /usr/lib64/libnss_sp.so
The /etc/securepass.conf configuration file should be extended to hold defaults for NSS by creating an [nss] section as follows:

[nss]
realm = company.net
default_gid = 100
default_home = "/home"
default_shell = "/bin/bash"
This provides defaults in case attributes other than posixuid are not set. We now need to configure the Name Service Switch (NSS) to use SecurePass. We change /etc/nsswitch.conf by adding “sp” to the passwd entry as follows:

$ grep sp /etc/nsswitch.conf
passwd: files sp
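The nsswitch.conf change can itself be scripted. Here is a sketch operating on a scratch copy with illustrative contents; on a real system you would of course edit /etc/nsswitch.conf in place:

```shell
# Scratch copy of nsswitch.conf with illustrative contents.
cat > nsswitch.conf <<'EOF'
passwd:         files
group:          files
shadow:         files
EOF

# Append the SecurePass "sp" service to the passwd line only.
sed -i 's/^passwd:.*/& sp/' nsswitch.conf

grep '^passwd' nsswitch.conf
```

Only the passwd database needs the sp service; group and shadow lookups stay with files.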
Double check that NSS is picking up our new SecurePass configuration by querying the passwd entries as follows:

$ getent passwd user
user:x:1000:100:My User:/home/user:/bin/bash

$ id user
uid=1000(user) gid=100(users) groups=100(users)
Using this setup by itself wouldn’t allow users to log in to a system because the password is missing. We will use SecurePass’ authentication to access the remote machine.

Configure PAM for SecurePass
On Debian/Ubuntu, install the RADIUS PAM module with:

# apt-get install libpam-radius-auth
If you are using CentOS or RHEL, you need to have the EPEL repository configured. In order to activate EPEL, follow the instructions on http://fedoraproject.org/wiki/EPEL
Be aware that this has not been tested with SELinux enabled (switch it off or set it to permissive).
On CentOS/RHEL, install the RADIUS PAM module with:

# yum -y install pam_radius
Note: as of the time of writing, EPEL 7 is still in beta and does not contain the RADIUS PAM module. A request has been filed through Red Hat’s Bugzilla to include this package in EPEL 7 as well.
Configure SecurePass with your RADIUS device. We only need to set the public IP Address of the server, a fully qualified domain name (FQDN), and the secret password for the radius authentication. In case of the server being under NAT, specify the public IP address that will be translated into it. After completion we get a small recap of the already created device. For the sake of example, we use “secret” as our secret password.
Configure the RADIUS PAM module accordingly, i.e. open /etc/pam_radius.conf and add the following lines:

radius1.secure-pass.net secret 3
radius2.secure-pass.net secret 3
Of course the “secret” is the same one we set up in the SecurePass administration interface. Beyond this point we need to configure PAM to correctly manage the authentication.
In CentOS, open the configuration file /etc/pam.d/password-auth-ac; in Debian/Ubuntu open /etc/pam.d/common-auth and make sure that pam_radius_auth.so is in the list:

auth required pam_env.so
auth sufficient pam_radius_auth.so try_first_pass
auth sufficient pam_unix.so nullok try_first_pass
auth requisite pam_succeed_if.so uid >= 500 quiet
auth required pam_deny.so

Conclusions
Handling many distributed Linux systems poses several challenges, from software updates to identity management and central logging. In a cloud scenario, traditional enterprise solutions are not always applicable, but new tools can become very handy.
Two weeks ago I left NYC for a day trip to Norfolk, Virginia, to attend xTupleCon 2014. For those who don’t know, xTuple is a well-known open source Enterprise Resource Planning (ERP) suite. If you’ve been following this blog, you might recall that during my participation in the Google Summer of Code 2014, I wrote a beta JSCommunicator extension for xTuple (to see how I went about doing that, look up Kickstarting the JSCommunicator-xTuple extension).
Now, I wasn’t flying down to Virginia on the eve of my first grad school midterms (gasp!) for fun - although I will admit, I enjoyed myself a fair amount. My GSoC mentor, Daniel Pocock was giving a talk about WebRTC and JSCommunicator at xTupleCon and invited me to participate. So that was the first perk of going to xTupleCon, I got to finally meet my GSoC mentor in person!
During the presentation, Daniel provided a high-level explanation of WebRTC and how it works. WebRTC (Web Real-Time Communications) enables real-time transmission of audio, video and data streams through browser-to-browser applications. This is one of the many perks of WebRTC: it doesn’t require installation of plugins, making its use less vulnerable to security breaches. WebSockets are used for the signalling channel, and SIP or XMPP can be used as the signalling protocol - in JSCommunicator, we used SIP. What was also done in JSCommunicator, and can be done in other applications, is to use a library to implement the signalling and SIP stack. JSCommunicator uses (and I highly recommend) JsSIP.
The basic SIP architecture is comprised of peers and a SIP Proxy Server that supports SIP over Websockets transport:
If you have users behind NAT, you’ll also need a TURN server for relay:
In both cases, setup is not too difficult, particularly if using reSIProcate which offers both the SIP proxy and the TURN server. Daniel Pocock has an excellent post on how to setup and configure your SIP proxy and TURN server.
With regard to JSCommunicator, it is a generic telephone in HTML5 which can easily be embedded in any web site or web app. Almost every aspect of JSCommunicator is easily customizable. More about JSCommunicator setup and architecture is detailed in a previous post.
The JSCommunicator-xTuple extension can be installed in xTuple as an npm package (xtuple-jscommunicator). It is still at a very beta - or even pre-beta - stage and there are various limitations; the configuration must be hard-coded and dialing is done manually as opposed to clicking on a contact. Some of these ‘limitations’ are features on the wish list for future work. For example, some ideas for the next version of the extension are click-to-dial from CRM records and bringing up CRM records for an incoming call. Additionally, the SIP proxy could be automatically installed with the xTuple server installation if desired.
We closed the presentation with a live demo during which I made a video call from a JSCommunicator instance embedded in freephonebox.net to the JSCommunicator xTuple extension running on Daniel’s laptop. Despite the occasionally iffy hotel Wifi, the demo was a hit - at one point I even left the conference room and took a quick walk around the hotel and invited other xTupleCon attendees to say hello to those in the room. The audience’s reception was more enthusiastic than I anticipated, giving way to a pretty extensive Q&A session. It’s great to see more and more people interested in WebRTC, I can’t emphasize enough what a useful and versatile tool it is.
Here’s an ‘action shot’ of part of the xTuple WebRTC presentation:
Before I start really digging in to reworking the Render support in Glamor, I wanted to take a stab at cleaning up some cruft which has accumulated in Glamor over the years. Here's what I've done so far.

Get rid of the Intel fallback paths
I think it's my fault, and I'm sorry.
The original Intel Glamor code has Glamor implement accelerated operations using GL, and when those fail, the Intel driver would fall back to its existing code, either UXA acceleration or software. Note that it wasn't Glamor doing these fallbacks, instead the Intel driver had a complete wrapper around every rendering API, calling special Glamor entry points which would return FALSE if GL couldn't accelerate the specified operation.
The thinking was that when GL couldn't do something, it would be far faster to take advantage of the existing UXA paths than to have Glamor fall back to pulling the bits out of GL, drawing to temporary images with software, and pushing the bits back to GL.
And, that may well be true, but what we've managed to prove is that there really aren't any interesting rendering paths which GL can't do directly. For core X, the only fallbacks we have today are for operations using a weird planemask, and some CopyPlane operations. For Render, essentially everything can be accelerated with the GPU.
At this point, the old Intel Glamor implementation is a lot of ugly code in Glamor without any use. I posted patches to the Intel driver several months ago which fix the Glamor bits there, but they haven't seen any review yet and so they haven't been merged, although I've been running them since 1.16 was released...
Getting rid of this support let me eliminate all of the _nf functions exported from Glamor, along with the GLAMOR_USE_SCREEN and GLAMOR_USE_PICTURE_SCREEN parameters, along with the GLAMOR_SEPARATE_TEXTURE pixmap type.

Force all pixmaps to have exact allocations
Glamor has a cache of recently used textures that it uses to avoid allocating and de-allocating GL textures rapidly. For pixmaps small enough to fit in a single texture, Glamor would use a cache texture that was larger than the pixmap.
I disabled this when I rewrote the Glamor rendering code for core X; that code used texture repeat modes for tiles and stipples; if the texture wasn't the same size as the pixmap, then texturing would fail.
On the Render side, Glamor would actually reallocate pixmaps used as repeating texture sources. I could have fixed up the core rendering code to use this, but I decided instead to just simplify things and eliminate the ability to use larger textures for pixmaps everywhere.

Remove redundant pixmap and screen private pointers
Every Glamor pixmap private structure had a pointer back to the pixmap it was allocated for, along with a pointer to the Glamor screen private structure for the related screen. There's no particularly good reason for this, other than making it possible to pass just the Glamor pixmap private around a lot of places. So, I removed those pointers and fixed up the functions to take the necessary extra or replaced parameters.
Similarly, every Glamor fbo had a pointer back to the Glamor screen private too; I removed that and now pass the Glamor screen private parameter as needed.

Reducing pixmap private complexity
Glamor had three separate kinds of pixmap private structures: one for 'normal' pixmaps (those allocated by themselves in a single FBO), one for 'large' pixmaps, where the pixmap was tiled across many FBOs, and a third for 'atlas' pixmaps, which presumably would be a single FBO holding multiple pixmaps.
The 'atlas' form was never actually implemented, so it was pretty easy to get rid of that.
For large vs normal pixmaps, the solution was to move the extra data needed by large pixmaps into the same structure as that used by normal pixmaps and simply initialize those elements correctly in all cases. Now, most code can ignore the difference and simply walk the array of FBOs as necessary.
The other thing I did was to shrink the number of possible pixmap types from eight down to three. Glamor now exposes just these possible pixmap types:
- GLAMOR_MEMORY. This is a software-only pixmap, stored in regular memory and only drawn with software. This is used for 1bpp pixmaps, shared memory pixmaps and glyph pixmaps. Most of the time, these pixmaps won't even get a Glamor pixmap private structure allocated, but if you use one of these with the existing Render acceleration code, that will end up wanting a private pointer. I'm hoping to fix the code so we can just use a NULL private to indicate this kind of pixmap.
- GLAMOR_TEXTURE. This is a full Glamor pixmap, capable of being used via either GL or software fallbacks.
- GLAMOR_DRM_ONLY. This is a pixmap based on an FBO which was passed from the driver, and for which Glamor couldn't get the underlying DRM object. I think this is an error, but I don't quite understand what's going on here yet...
Future work:
- Deal with X vs GL color formats
- Finish my new CompositeGlyphs code
- Create pure shader-based gradients
- Rewrite Composite to use the GPU for more computation
- Take another stab at doing GPU-accelerated trapezoids
It's impossible to overstate how important free software is. A movement that began with a quest to work around a faulty printer is now our greatest defence against a world full of hostile actors. Without the ability to examine software, we can have no real faith that we haven't been put at risk by backdoors introduced through incompetence or malice. Without the freedom to modify software, we have no chance of updating it to deal with the new challenges that we face on a daily basis. Without the freedom to pass that modified software on to others, we are unable to help people who don't have the technical skills to protect themselves.
Free software isn't sufficient for building a trustworthy computing environment, one that not merely protects the user but respects the user. But it is necessary for that, and that's why I continue to evangelise on its behalf at every opportunity.
Free software has a problem. It's natural to write software to satisfy our own needs, but in doing so we write software that doesn't provide as much benefit to people who have different needs. We need to listen to others, improve our knowledge of their requirements and ensure that they are in a position to benefit from the freedoms we espouse. And that means building diverse communities, communities that are inclusive regardless of people's race, gender, sexuality or economic background. Free software that ends up designed primarily to meet the needs of well-off white men is a failure. We do not improve the world by ignoring the majority of people in it. To do that, we need to listen to others. And to do that, we need to ensure that our community is accessible to everybody.
That's not the case right now. We are a community that is disproportionately male, disproportionately white, disproportionately rich. This is made strikingly obvious by looking at the composition of the FSF board, a body made up entirely of white men. In joining the board, I have perpetuated this. I do not bring new experiences. I do not bring an understanding of an entirely different set of problems. I do not serve as an inspiration to groups currently under-represented in our communities. I am, in short, a hypocrite.
So why did I do it? Why have I joined an organisation whose founder I publicly criticised for making sexist jokes in a conference presentation? I'm afraid that my answer may not seem convincing, but in the end it boils down to feeling that I can make more of a difference from within than from outside. I am now in a position to ensure that the board never forgets to consider diversity when making decisions. I am in a position to advocate for programs that build us stronger, more representative communities. I am in a position to take responsibility for our failings and try to do better in future.
People can justifiably conclude that I'm making excuses, and I can make no argument against that other than to ask to be judged by my actions. I hope to be able to look back at my time with the FSF and believe that I helped make a positive difference. But maybe this is hubris. Maybe I am just perpetuating the status quo. If so, I absolutely deserve criticism for my choices. We'll find out in a few years.
Last Wednesday I had the pleasure and honor of having a great guest in my class again: José María Serralde, talking about real-time scheduling. I like inviting different people to present interesting topics to my students a couple of times each semester, and I was very happy to have Chema come again.
Chema is a professional musician (formally, a pianist, although he has far more skills than a title would confer on him — skills that go way beyond just music), and he had to learn the details of scheduling due to errors that appear when recording and performing.
The audio could use some cleaning, and my main camera (the only one that lasted for the whole duration) was by a long shot not professional grade, but the video works and is IMO quite interesting and well explained.
If someone had told me that I would visit three feminist events this year, I would have slowly nodded at them and responded with "yeah, sure..." not believing it. But sometimes things take their own turns.
It all started with the Debian Women Mini-Debconf in Barcelona. The organizers asked me how they should word the call for papers so that I would feel invited to give a speech, which felt very welcoming and nice. We settled on "people who identify themselves as female". Due to private circumstances I didn't prepare well for my talk, but I hope it was still worth it. The next interesting part happened later, during the lightning talks: someone on IRC asked why there were male speakers, as the lightning talks were the only part explicitly open to them. That my own talk wasn't questioned at all also felt very, very nice, to be honest. Those are among the reasons why I wrote My place is here, my home is Debconf.
The second event I went to was FemCamp Wien. It was my first barcamp, so I didn't know what to expect organization-wise. Topic-wise it was about queer feminism. And it was the first event I attended that had a policy. Granted, one part of it was worded extremely clumsily, which naturally ended up in a shitstorm on Twitter (which people from both sides managed very badly, which disappointed me). Denying that there is sexism against cis males is just a bad idea, but the point behind it was that this wasn't the topic of this event. The background of the policy was that barcamps, and events in general, usually aren't considered that safe a place for certain people, and that this barcamp wanted to make it clear that people who usually shy away from such events for fear of harassment could feel at home there.
And what can I say, this absolutely was the right thing to do. I have never felt more welcome and included at any event, including Debian events—sorry to say that so frankly. Making it clear through the policy that everyone is in the same boat, addressing each other respectfully, managed to do exactly that. The first session of the event, about dominant talk patterns and how to work around or against them, also made sure that the rest of the event gave shy people a chance to speak up and feel comfortable, too. And the range of sessions was simply great. This was the event where I came up with the rule that I have to judge the quality of an event by the sessions I'm unable to attend. What hurt me most in hindsight was that I couldn't attend the session about minorities within minorities. :/
Last but not least I attended AdaCamp Berlin, a small unconference/barcamp dedicated to increasing women's participation in open technology and culture, named after Ada Lovelace, who is considered the first programmer. It was a small event with only 50 slots, for people who identify as women. So I was totally hyper when I received the mail saying I was accepted. It was another event with a policy, and at first reading it looked strange. But given that there are people who are allergic to the ingredients of scents, it made sense to raise awareness of that topic. And given that women face a fair amount of harassment in IT and at events, it also makes sense to remind people to behave. After all, it was a general policy for all AdaCamps, not just for this specific one with only women.
I enjoyed the event. Totally. And that's not only because I was able to meet up with a dear friend whom I hadn't talked to in years, literally. I enjoyed the environment and the sessions that were going on. And quite like FemCamp, it started off with a session that helped a lot for the rest of the event. This time it was about impostor syndrome, which is extremely common among women in IT. And what can I say, I found myself in one of the slides, given that I had tweeted just the day before that I doubted I belonged there. Frankly speaking, it even crossed my mind that I was only accepted so that at least one trans person would be there. Which is pretty much what impostor syndrome is all about, isn't it? But when I was there, it did feel right. And we had great sessions that I truly enjoyed. And I have to thank one lady once again for the great definition of feminism that she brought up during one session, which is, roughly, that feminism for her isn't about gender but about the equality of all people, regardless of their sex or gender identity. It's about dropping this whole binary thinking. I couldn't agree more.
All in all, I totally enjoyed these events and hope that I'll be able to attend more next year. From what I gathered, all three of them are thinking of doing it again; FemCamp Wien already announced the date at the end of this year's event, so I am looking forward to meeting most of these fine ladies again, if fate permits. And keep in mind: there will always be critics and haters out there, but given that they wouldn't think of attending such an event in the first place, don't get wound up about it. They are just trying to talk you down.
P.S.: Ah, I almost forgot to mention one thing which also helps a lot to lower the barrier for people to attend: the catering during the day and for lunch at both FemCamp and AdaCamp (there was no organized catering at the Debian Women Mini-Debconf) removed the need for people to ask whether there could be food without meat and dairy products, by offering mostly vegan food in the first place, without even having to query the participants. Often enough people otherwise choose to leave the event or bring their own food instead of asking, so this is an extremely welcoming move, too. Way to go!
I've spent the past thirty minutes installing FreeBSD as a KVM guest. This mostly involved fetching the ISO (I chose the latest stable release 10.0), and accepting all the defaults. A pleasant experience.
As I'm running KVM inside screen I wanted to see the boot prompt, etc, via the serial console, which took two distinct steps:
- Enabling the serial console - which lets boot stuff show up
- Enabling a login prompt on the serial console in case I screw up the networking.
To configure boot messages to display via the serial console, issue the following command as the superuser:

# echo 'console="comconsole"' >> /boot/loader.conf
To get a login: prompt you'll want to edit /etc/ttys and change "off" to "on" and "dialup" to "vt100" for the ttyu0 entry. Once you've done that, reload init via:

# kill -HUP 1
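For reference, the modified ttyu0 line in /etc/ttys would look something like the following (a sketch; the exact getty argument on your install may differ):

```shell
# /etc/ttys -- ttyu0 enabled with a vt100 terminal type
ttyu0  "/usr/libexec/getty 3wire.9600"  vt100  on  secure
```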
Enable remote root logins, if you're brave, or disable PAM and password authentication if you're sensible:

# vi /etc/ssh/sshd_config
# /etc/rc.d/sshd restart
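If you go the sensible route, the relevant sshd_config settings would look roughly like this (a sketch using standard OpenSSH options; adjust to taste):

```shell
# /etc/ssh/sshd_config fragment -- key-based logins only
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM no
```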
Configure the system to allow binary package installation - to be honest I was hazy on why this was required, but I ran the two commands and it all worked out:

# pkg
# pkg2ng
Now you may install a package via a simple command such as:

# pkg add screen
Removing packages you no longer want is as simple as using the delete option:

# pkg delete curl
You can see installed packages via "pkg info", and there are more options to be found via "pkg help". In the future you can apply updates via:

# pkg update && pkg upgrade
Finally I've installed 10.0-RELEASE, which can be upgraded in the future via "freebsd-update" - this seems to boil down to "freebsd-update fetch" and "freebsd-update install", but I'm hazy on that just yet. For the moment you can see your installed version via:

# uname -a ; freebsd-version
Expect my future CPAN releases, etc, to be tested on FreeBSD too now :)
geoip version 1.6.2-2 and geoip-database version 20141027-1 are now available in Debian unstable/sid, with some news of more free databases available :)
geoip changes:
- Add patch for geoip-csv-to-dat to add support for building the GeoIP city DB. Many thanks to Andrew Moise for contributing!
- Add and install geoip-generator-asn, which is able to build the ASN DB. It is a modified version of the original geoip-generator. Many thanks also to Aaron Gibson for contributing!
- Bump Standards-Version to 3.9.6 (no changes required).
geoip-database changes:
- New upstream release.
- Add the new databases GeoLite city and GeoLite ASN to the new package geoip-database-extra. Also bump build depends on geoip to 1.6.2-2.
- Switch to xz compression for the orig tarball.
So much thanks to both contributors!
TL;DR: Those of you who are not able to join "X2Go: The Gathering 2014"... join us on IRC (#x2go on Freenode) over the coming weekend. We will provide information, URLs to our TinyPads, etc. there. Spontaneous visitors are welcome during the working sessions (please let us know if you plan to come around), but we don't have any spare beds left for accommodation. (We are still trying hard to set up some sort of video coverage--be it live streaming or recorded sessions, this is still open; people who can offer help, see below.)
Our event "X2Go: The Gathering 2014" is approaching quickly. We will meet with a group of 13-15 people (the number is still fluctuating slightly) at Linux Hotel, Essen. Thanks to the generous offerings of the Linux Hotel to FLOSS community projects, the costs of food and accommodation could be kept really low and affordable for many people.
We are very happy that people from outside Germany are coming to that meeting (Michael DePaulo from the U.S., Kjetil Fleten (http://fleten.net) from Denmark / Norway). And we are also proud that Martin Wimpress (Mr. Ubuntu MATE Remix) will join our gathering.
In advance, I want to send a big THANK YOU to all the people who will sponsor our weekend, whether by sending gift items, covering travel expenses, or providing help and knowledge to make this event a success for the X2Go project and the community around it.
I've been wanting to set up DANE for my domain, but I seem unable to find a provider that offers DNSSEC and can also serve TLSA records. I've contacted several companies, and most don't even seem to offer DNSSEC; those that do can't do TLSA records. I would like to avoid running my own nameservers.
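For what it's worth, the record itself is easy to compute; the hard part is finding someone to serve it. A sketch with OpenSSL (file names and the domain are placeholders), computing the payload for a "3 1 1" TLSA record, i.e. the SHA-256 digest of the certificate's SubjectPublicKeyInfo:

```shell
# A sketch, assuming OpenSSL is available; key.pem/cert.pem are
# placeholders. For illustration, create a throwaway self-signed cert:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example.org" -keyout key.pem -out cert.pem 2>/dev/null

# Compute the "3 1 1" payload: SHA-256 over the certificate's public
# key (SubjectPublicKeyInfo in DER form):
openssl x509 -in cert.pem -noout -pubkey \
    | openssl pkey -pubin -outform DER \
    | openssl dgst -sha256 \
    | awk '{print $NF}'

# The DNS record would then look like:
# _443._tcp.example.org. IN TLSA 3 1 1 <64-hex-digit digest>
```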
So if someone knows someone that can provide that, please contact me at email@example.com.