One final update on my work for UEFI improvements in Jessie!
All of my improvements have been committed into the various Debian packages involved, and the latest release candidate for Jessie's debian-installer build (RC2) works just as well as my test builds on the Bay Trail system I've been using (Asus X205TA). Job done! :-)
I'm still hoping to get better support for this particular hardware included in Jessie, but I can't promise anything. The mixed-EFI work has also improved things for a lot of Mac users, and for now I'm planning to write up a more comprehensive list of supported machines in the Debian wiki.
There's now no need to use any of the older test installer images - please switch to RC2 for now. See http://cdimage.debian.org/cdimage/jessie_di_rc2/ for the images. If you want to install a 64-bit system with the 32-bit UEFI support, make sure you use the multi-arch amd64/i386 netinst or DVD. Otherwise, any of the standard i386 images should work for a 32-bit-only system.

Upstreaming
My kernel patch to add the new /sys file was accepted upstream a while back, and has been in Linus' master branch for some time. It'll be in 4.0 unless something goes horribly wrong, and as it's such a tiny piece of code it's trivial to backport to anything remotely recent too.
I've also just seen that my patch for grub2 to use this new /sys file has been accepted upstream this week. Again, the change is small and self-contained so should be easy to copy across into other trees too.
Mixed EFI systems should now have better support across all distros in the near future, I hope.
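For illustration, here is a small sketch (not the author's code) of how a tool might consume the new file. The path /sys/firmware/efi/fw_platform_size is the one exposed by this kernel patch; it reports whether the running firmware is 32-bit or 64-bit:

```python
# Sketch: read the EFI firmware word size from the new /sys file.
# On a non-EFI system (or a kernel without the patch) the file is absent.

def parse_fw_platform_size(text):
    """Parse the contents of /sys/firmware/efi/fw_platform_size ("32" or "64")."""
    size = int(text.strip())
    if size not in (32, 64):
        raise ValueError("unexpected EFI platform size: %r" % text)
    return size

def efi_platform_size(path="/sys/firmware/efi/fw_platform_size"):
    try:
        with open(path) as f:
            return parse_fw_platform_size(f.read())
    except FileNotFoundError:
        return None  # not booted via EFI, or kernel lacks the patch
```

A bootloader installer could use the return value to decide whether to install a 32-bit or 64-bit EFI binary, which is exactly the mixed-EFI case described above.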
I usually use rdiff-backup to backup several of my systems. One is a workstation which goes to sleep after some time of idling around. Now having a user logged in running rdiff-backup (or rsync, rsnapshot etc for that matter) won't prevent the system from being put to sleep. Naturally this happens before the backup is complete. So some time ago I was looking for a resolution and recieved a suggestion to use a script in /etc/pm/sleep.d/. I had to modify the script a bit, because the query result always was true. So this is my solution in /etc/pm/sleep.d/01_prevent_sleep_on_backup now:
command_exists rdiff-backup || exit $NA
case "$1" in
if ps cax | grep -q rdiff-backup
Currently testing ...
The percentage of women among Debian Developers is ~2%. Yes, just ~2% of DDs are ladies! So that means ~98% of DDs are gentlemen.
I know there are more women in Debian, so first I urge you, for the love of Debian: apply if you are contributing to this project, love its community and want to see Debian taking over the universe (okay, it seems we conquered outer space, so we need help on Earth).
So why is the number this low? Well, maybe what we have is so precious to those currently inside that we want to prevent it being spoiled from outside. There also don't seem to be that many younger DDs. Why is that important? Well, young people like to do things rather than think about them. Many times they just break things, but many times they also make a breakthrough. Why is difference important and why should we embrace it? Because it breaks a monopoly on views and behavior. It brings views not just from a larger number of people, but from people with different backgrounds, and in constructive conversation it can either reinforce the current workflow or counter it with good arguments. In a project of this size, with its developers spread all over the world, this is true for Debian more than for any other project I know. We need more women so we can balance our inner workings and have a better understanding of humanity - how it is moving, what it needs and why, and where it is heading. That way we can produce a community which will improve the quality of the OS we produce, through the sheer number of different people working on the same thing, each bringing their own personal touch. So, ladies and youth all over the world, unite and join Debian, because without diversity Debian can't grow beyond its current size. And no, Debian is not about code only: it needs painters, musicians, people who want to talk about Debian, people who share love and happiness, people who want to build better communities, UI/UX designers, makers, people who know how to repair a bike, athletes, homebrew beer producers, lawyers (just until the world gets rid of laws - then we won't need you), actors, writers... Why? Because the world and its communities are made up of all that diversity, and that's what makes it a better and not a monotone place.
But I just use Debian. Well, do you feel love towards Debian and its work? Would you like to feel more like an integral part of the community? If the answer is a big fat YES, then you should be a DD too. Every person who shares Debian's philosophy about freedom and behaving in good manner should join Debian. Every person who feels touched and enhanced by Debian's work should become part of the community and share their experience of how Debian touched their soul and impacted their life. If you love Debian, you should be free to contribute to it in whatever manner you like, and free to express your love towards it. If you think lintian is sexy, or shebang is a good friend of yours, or you enjoy talking to MadameZou about Debian and zombies (yeah, we have all kinds here), or you like Krita, or you hate the look of the default XFCE theme, or you can prove that you are a crazier developer than paultag - just hop into the community and try to integrate into it. You will meet great folks, have a lot of conversations about wine and cheese, play some dangerous card games and even learn about things like bokononism (yeah, I am looking at you, dkg!).
Now for the current Debian community - what the hell are packaging and non-packaging Debian Developers? Is one better than the other? Do the others stink? Do they not know how to hug? WHAT? Yes, I know that an inexperienced person shouldn't have permission to access the Debian packaging infrastructure, but I have the feeling that even that person knows that. Every person should have a place in Debian and acknowledge the other fields. So yes, software developers need access to the Debian packaging infrastructure, painters don't. I think we can agree on this. So let's abolish the stupid term and remove the difference in our community. Let's embrace the difference, because if someone writes a good poem about Debian heroism, I might like it more than flashplugin-nonfree! Yep, I made that comparison on purpose, so give it a thought.
Debian has an excellent community around the operating system it produces. And it's not going away, at least not anytime soon. But it will not go forward if we don't give it an additional push as human beings, as people who care about their fellow Debianites. And we do care, I know that - we just need to make it more public. We don't hide bugs; we certainly shouldn't hide features. This will probably bring in bad seeds too, but we have the mechanisms and the will to counter that. If for every 10 bad seeds we get some crazy good hacker or a crazy lovely positive person like this lady, we will be on the right path. Debian is a better place; it should lead the effort to bring more people into the FLOSS world, and it should allow people to bring more diversity into Debian.
arm-none-eabi-objdump -D -b binary -m arm -EB dump.bin | less

The options mean:
- -D - disassemble
- -b binary - input file is a raw file
- -m arm - arm architecture
- -EB - big endian
After a way too long hiatus, I finally got back to working on some side-projects and wrote a small go library for solving linear programming problems. Say hi to golp!
Since I’m no LP expert, golp makes use of GLPK to do the actual heavy lifting. Unfortunately, GLPK currently isn’t reentrant, so it can’t really be used with go’s great goroutines. Still, it works well enough to be used for a next little project.
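For readers unfamiliar with linear programming, here is a toy illustration (not golp's or GLPK's API) of the kind of problem such a solver handles: maximise 3x + 4y subject to a handful of linear constraints. For two variables, brute-force enumeration of constraint intersections suffices; GLPK's simplex method does the real work in golp:

```python
# Toy 2-variable LP solver by vertex enumeration (illustration only).
from itertools import combinations

def solve_2var_lp(c, constraints):
    """Maximise c[0]*x + c[1]*y subject to rows (a1, a2, b): a1*x + a2*y <= b."""
    best = None
    # Every vertex of the feasible region lies on two constraint boundaries.
    for (a1, a2, b1), (a3, a4, b2) in combinations(constraints, 2):
        det = a1 * a4 - a2 * a3
        if abs(det) < 1e-12:
            continue  # parallel boundaries never intersect
        x = (b1 * a4 - b2 * a2) / det
        y = (a1 * b2 - a3 * b1) / det
        # keep the point only if it satisfies every constraint
        if all(a * x + b * y <= rhs + 1e-9 for a, b, rhs in constraints):
            value = c[0] * x + c[1] * y
            if best is None or value > best[0]:
                best = (value, x, y)
    return best

# Constraints in a1*x + a2*y <= b form (">=" rows are negated):
constraints = [
    (1, 2, 14),    # x + 2y <= 14
    (-3, 1, 0),    # 3x - y >= 0
    (1, -1, 2),    # x - y <= 2
    (-1, 0, 0),    # x >= 0
    (0, -1, 0),    # y >= 0
]
best_value, x, y = solve_2var_lp((3, 4), constraints)  # → (34.0, 6.0, 4.0)
```

This only scales to toy problems; a real solver like GLPK handles thousands of variables, which is why golp wraps it instead of reimplementing the algorithm.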
Now, if only I could get back to working on Debian…
Only parts of us will ever
touch o̶n̶l̶y̶ parts of others –
one’s own truth is just that really — one’s own truth.
We can only share the part that is u̶n̶d̶e̶r̶s̶t̶o̶o̶d̶ ̶b̶y̶ within another’s knowing acceptable t̶o̶ ̶t̶h̶e̶ ̶o̶t̶h̶e̶r̶—̶t̶h̶e̶r̶e̶f̶o̶r̶e̶ so one
is for most part alone.
As it is meant to be in
evidently in nature — at best t̶h̶o̶u̶g̶h̶ ̶ perhaps it could make
our understanding seek
another’s loneliness out.
– unpublished poem by Marilyn Monroe, via berlin-artparasites
This poem inspired me to put some ideas into words this morning, an attempt to summarize my current working theory of consciousness.
Ideas travel through space and time. An idea that exists in my mind is filtered through my ability to express it somehow (words, art, body language, …), and is then interpreted by your mind and its models for understanding the world. This shifts your perspective in some way, some or all of which may be unconscious. When our minds encounter new ideas, they are accepted or rejected, reframed, and integrated with our existing mental models. This process forms a sort of living ecosystem, which maintains equilibrium within the realm of thought. Ideas are born, divide, mutate, and die in the process. Language, culture, education and so on are stable structures which form and support this ecosystem.
Consciousness also has analogues of the immune system, for example strongly held beliefs and models which tend to reject certain ideas. Here again these can be unconscious or conscious. I’ve seen it happen that if someone hears an idea they simply cannot integrate, they will behave as if they did not hear it at all. Some ideas can be identified as such a serious threat that ignoring them is not enough to feel safe: we feel compelled to eliminate the idea in the external world. The story of Christianity describes a scenario where an idea was so threatening to some people that they felt compelled to kill someone who expressed it.
A microcosm of this ecosystem also exists within each individual mind. There are mental structures which we can directly introspect and understand, and others which we can only infer by observing our thoughts and behaviors. These structures communicate with each other, and this communication is limited by their ability to “speak each other’s language”. A dream, for example, is the conveyance of an idea from an unconscious place to a conscious one. Sometimes we get the message, and sometimes we don’t. We can learn to interpret, but we can’t directly examine and confirm if we’re right. As in biology, each part of this process introduces uncountable “errors”, but the overall system is surprisingly robust and stable.
This whole system, with all its many minds interacting, can be thought of as an intelligence unto itself, a gestalt consciousness. This interpretation leads to some interesting further conclusions:
- The notion that an individual person possesses a single, coherent point of view seems nonsensical
- The separation between “my mind” and “your mind” seems arbitrary
- The attribution of consciousness only to humans, or only to living beings, seems absurd
Yesterday, which happened to be my 30th birthday, a small package got delivered to my office: The printed proceedings of last year's “Trends in Functional Programming” conference, where I published a paper on Call Arity (preprint). Although I doubt the usefulness of printed proceedings, it was a nicely timed birthday present.
Looking at the rather short table of contents – only 8 papers, after 27 presented and 22 submitted – I thought that this might mean that, with some luck, I might have a chance at the “Best student paper award”, which I presumed would be announced at the next iteration of the conference.
For no particular reason I was leisurely browsing through the book, and started to read the preface. And what do I read there?
Among the papers selected for these proceedings, two papers stood out. The award for Best Student Paper went to Joachim Breitner for his paper entitled Call Arity, and the award for Best Paper Overall went to Edwin Brady for his paper entitled Resource-dependent Algebraic Effects. Congratulations!
Now, that is a really nice birthday present! I'm not sure I would even have found out about it, had I not thrown a quick glance at page V...
I hope that it is a good omen for my related ICFP'15 submission.
The UDD bugs interface currently knows about the following release critical bugs:
- In Total: 155 bugs.
- Affecting Jessie: 97 (key packages: 65). That's the number we need to get down to zero before the release. They can be split in two big categories:
  - Affecting Jessie and unstable: 77 (key packages: 51). Those need someone to find a fix, or to finish the work to upload a fix to unstable:
    - 13 bugs are tagged 'patch'. (key packages: 9) Please help by reviewing the patches, and (if you are a DD) by uploading them.
    - 4 bugs are marked as done, but still affect unstable. (key packages: 1) This can happen due to missing builds on some architectures, for example. Help investigate!
    - 60 bugs are neither tagged patch, nor marked done. (key packages: 41) Help make a first step towards resolution!
  - Affecting Jessie only: 20 (key packages: 14). Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
How do we compare to the Squeeze and Wheezy release cycles?

| Week | Squeeze | Wheezy | Jessie |
|------|---------|--------|--------|
| 43 | 284 (213+71) | 468 (332+136) | 319 (240+79) |
| 44 | 261 (201+60) | 408 (265+143) | 274 (224+50) |
| 45 | 261 (205+56) | 425 (291+134) | 295 (229+66) |
| 46 | 271 (200+71) | 401 (258+143) | 427 (313+114) |
| 47 | 283 (209+74) | 366 (221+145) | 342 (260+82) |
| 48 | 256 (177+79) | 378 (230+148) | 274 (189+85) |
| 49 | 256 (180+76) | 360 (216+155) | 226 (147+79) |
| 50 | 204 (148+56) | 339 (195+144) | ??? |
| 51 | 178 (124+54) | 323 (190+133) | 189 (134+55) |
| 52 | 115 (78+37) | 289 (190+99) | 147 (112+35) |
| 1 | 93 (60+33) | 287 (171+116) | 140 (104+36) |
| 2 | 82 (46+36) | 271 (162+109) | 157 (124+33) |
| 3 | 25 (15+10) | 249 (165+84) | 172 (128+44) |
| 4 | 14 (8+6) | 244 (176+68) | 187 (132+55) |
| 5 | 2 (0+2) | 224 (132+92) | 175 (124+51) |
| 6 | release! | 212 (129+83) | 161 (109+52) |
| 7 | release+1 | 194 (128+66) | 147 (106+41) |
| 8 | release+2 | 206 (144+62) | 147 (96+51) |
| 9 | release+3 | 174 (105+69) | 152 (101+51) |
| 10 | release+4 | 120 (72+48) | 112 (82+30) |
| 11 | release+5 | 115 (74+41) | 97 (68+29) |
| 12 | release+6 | 93 (47+46) | 87 (71+16) |
| 13 | release+7 | 50 (24+26) | 97 (77+20) |
| 14 | release+8 | 51 (32+19) | |
| 15 | release+9 | 39 (32+7) | |
| 16 | release+10 | 20 (12+8) | |
| 17 | release+11 | 24 (19+5) | |
| 18 | release+12 | 2 (2+0) | |
Over time I started to get more and more requests to make python-gammu work with Python 3. Of course the request makes sense, but I somehow failed to find time for it.
Also, for quite some time python-gammu has been distributed together with the Gammu sources. This was another struggle to overcome when supporting Python 3, as in many cases users will want to build the module for both Python 2 and 3 (at least most distributions will want to do so), and with the current CMake-based build system this did not seem easy to achieve.
So I've decided it's time to split the Python module out of the library. The reasons for keeping them together are no longer valid (libGammu has a quite stable API these days), and having a standard module which can be installed by pip is a nice thing.
Once the code had been put into a separate git repository, I slowly progressed with porting to Python 3. Most of the problems were on the C side of the code, where Python really does not make it easy to support both Python 2 and 3. So the code ended up with many #ifdefs, but I see no other way. While doing these changes, many points in the API were fixed to accept unicode strings in Python 2 as well.
Anyway, today we have the first successful build of python-gammu working on both Python 2 and 3. I'm afraid there is still some bug leading to occasional segfaults on Travis, which I cannot reproduce locally. But hopefully this will be fixed in the upcoming weeks and we can release the separate python-gammu module again.
Olivier Berger: New short paper : “Designing a virtual laboratory for a relational database MOOC” with Vagrant, Debian, etc.
Here’s a short preview of our latest accepted paper (to appear at CSEDU 2015) about the construction of VMs for the Relational Database MOOC using Vagrant, Debian, PostgreSQL (previous post), etc.:

Designing a virtual laboratory for a relational database MOOC
Olivier Berger, J Paul Gibson, Claire Lecocq and Christian Bac
Keywords: Remote Learning, Virtualization, Open Education Resources, MOOC, Vagrant
Abstract: Technical advances in machine and system virtualization are creating opportunities for remote learning to provide significantly better support for active education approaches. Students now, in general, have personal computers that are powerful enough to support virtualization of operating systems and networks. As a consequence, it is now possible to provide remote learners with a common, standard, virtual laboratory and learning environment, independent of the different types of physical machines on which they work. This greatly enhances the opportunity for producing re-usable teaching materials that are actually re-used. However, configuring and installing such virtual laboratories is technically challenging for teachers and students. We report on our experience of building a virtual machine (VM) laboratory for a MOOC on relational databases. The architecture of our virtual machine is described in detail, and we evaluate the benefits of using the Vagrant tool for building and delivering the VM.
- A brief history of distance learning
- Virtualization : the challenges
- The design problem
- The virtualization requirements
- Scenario-based requirements
- Related work on requirements
- Scalability of existing approaches
- The MOOC laboratory
- Exercises and lab tools
- From requirements to design
- Making the VM as a Vagrant box
- Portability issues
- Delivery through Internet
- Availability of the box sources
- Reliability Issues with VirtualBox
- Student feedback and evaluation
- Future work
- Laboratory monitoring
- More modular VMs
Finally winter seems to be over and it's time to take out the camera and take some pictures. Out of the many areas where you can see spring snowflakes, we've chosen the area Čtvrtě near Mcely, a village which is less famous, but still very nice.
The original DruCall was based on SIPml5 and released in 2013 as a proof-of-concept.
It would be great to take DruCall further in 2015, here are some of the possibilities that are achievable in GSoC:
- Updating it for Drupal 8
- Support for logged-in users (currently it just makes anonymous calls, like a phone box)
- Support for relaying shopping cart or other session cookie details to the call center operative who accepts the call
My background is in real-time and server-side infrastructure and I'm providing all the WebRTC SIP infrastructure that the student may need. However, for the project to have the most impact, it would also be helpful to have some input from a second mentor who knows about UI design, the Drupal way of doing things and maybe has some Drupal 8 experience. Please contact me ASAP if you would be keen to participate either as a mentor or as a student. The deadline for student applications is just hours away, but there is still more time for potential co-mentors to join in.

WebRTC at mini-DebConf Lyon in April
The next mini-DebConf takes place in Lyon, France on April 11 and 12. On the Saturday morning, there will be a brief WebRTC demo and there will be other opportunities to demo or test it and ask questions throughout the day. If you are interested in trying to get WebRTC into your web site, with or without Drupal, please see the RTC Quick Start guide.
Today I installed Gogs and configured it with mysql (yes, yes, I know - use postgres, you punk!). I will not post details of how I did it because:
- It still has "weird" coding as pointed already by others
- It doesn't have fork and pull request ability yet
And that was the end of the journey. When they implement fork/PR support, I will close my eyes to the other coding issues and try it again, because Gitlab is not close to my heart and installing their binary takes ~850MB of space, which means a lot of Ruby code that could go the wrong way.
It would be really awesome to have something in the archive that you could apt install to get a github-like place. It would be great if the Debian infrastructure had the possibility to host that.

Diaspora*
Although I am thrilled about it finally reaching the Debian archive, it still isn't ready. Not even close. I couldn't even finish installing it, and it's not suitable for the main archive as it takes files from the github repo of diaspora. Maybe I should poke the Bitnami folks about how they did it.

The power of Free software
TextSecure was a mobile app that I thought could take on Viber or WhatsApp. Besides all its goodies, it had the ability to send encrypted SMS to other TextSecure users. Not anymore. Fortunately, there is a fork called SMSSecure which still has that ability.

Trolls
So there is this Allwinner company that produces crap after crap. Their latest violation will reach a wider audience, and I hope it gets resolved - imagine how they would react if some big proprietary company were stealing their code. It seems Allwinner is a pseudonym for Alllooser. Whoa, that was fun!

A year old experiment
So I had a bet with a friend that I would run Debian Unstable, mixed with some packages from experimental, for a year, and do some random testing on packages of interest to them. I also promised to update aggressively - twice a day. This was my only machine, so the bet was really good, as in theory it could break very often. Well, on behalf of the Debian community, I can say that Debian hasn't had a single big breakage. Yay!
The good side: on average I had ~3000 packages installed (the number ranged from 2500 to 3500). I had, for example, xmonad, e17, gnome, cinnamon, xfce, systemd from experimental, kernels from experimental, nginx, apache, a lot of heavy packages, and mixed packages from pip, npm, gems etc. That makes it even more incredible that it stayed stable. There is no bigger kudos to the people working on Debian than when some sadist tries countless ways to break it and Debian just keeps running. I mean, I was doing my $PAID_WORK on this machine!
The bad side: there were small breakages. It seems that polkit and the systemd side of gnome were going through a lot of changes, because sometimes the system would ask for a password for every action (logout, suspend, poweroff, connect to network etc), audio would work and then not work, would often mute itself on every play or jump to 100% (which would blow my head off when I had earplugs in), and bluetooth is de facto not working in gnome (my bluetooth mouse worked without a single problem in lenny and squeeze, had maybe one or two problems in wheezy, but in this year-long test it was almost useless). The system would also hang randomly from time to time.
The test: in the beginning my radeon card was too new and not supported by the FLOSS driver, so I ended up using fglrx, which caused me a lot of annoyance (no brightness control, screen flickering), but once the FLOSS driver got support I switched to it, and it performed more fluidly (no glitches while moving windows). As my friends knew that I have a radeon and they want to play games on their machines (I play my Steam games on the FLOSS driver), they set me the task of trying the fglrx driver every now and then. End result: there was no stable fglrx driver for almost a year; it broke the graphical interface, so I didn't even log into a DE with it for at least 8 months, if not more. On the good side, my excursions into fglrx were quick: install it, boot into disaster, remove it, boot into freedom. The downside seems to be that removing the fglrx driver leaves a lot of its own crap on the system (I may be mistaken, but it seems I am not).
Well, that's all for today. I think so. You can never be sure.
now you can install the following package versions from wheezy-backports:
- apt-dater-host (Source split, 0.9.0-3+wheezy1 => 1.0.0-2~bpo70+1)
- glusterfs (3.2.7-3+deb7u1 => 3.5.2-1~bpo70+1)
- geoip-database (20141009-1~bpo70+1 => 20150209-1~bpo70+1)
geoip-database introduces a new package geoip-database-extra, which includes the free GeoIP City and GeoIP ASNum databases.
glusterfs will get an update in a few days to fix CVE-2014-3619.
- It's not hugely well tested on a wide range of hardware
- The interface is not yet guaranteed to be stable
- You'll also need this module if you want to deal with IBM (well, Lenovo now) servers
- The IBM support is based on reverse engineering rather than documentation, so who really knows how good it is
There's documentation in the README, and I'm sorry for the API being kind of awful (it suffers rather heavily from me writing Python while knowing basically no Python). Still, it ought to work. I'm interested in hearing from anybody with problems, anybody who's interested in getting it on Pypi and anybody who's willing to add support for new HP systems.
So I started migrating some of my LXCs to Jessie, to test the migration in advance. The upgrade itself was easy (the LXC is mostly empty and only runs radicale), but after the upgrade I couldn't login anymore (using lxc-console since I don't have lxc-attach, the host is on Wheezy). So this is mostly a note to self.
auth.log was showing:

Mar 25 22:10:13 lxc-sync login: pam_loginuid(login:session): Cannot open /proc/self/loginuid: Read-only file system
Mar 25 22:10:13 lxc-sync login: pam_loginuid(login:session): set_loginuid failed
Mar 25 22:10:13 lxc-sync login: pam_unix(login:session): session opened for user root by LOGIN(uid=0)
Mar 25 22:10:13 lxc-sync login: Cannot make/remove an entry for the specified session
The last message isn't too useful, but the first one gave the answer. Since LXC isn't really ready for security stuff, I have some hardening on top of that, and one measure is to not have rw access to /proc. I don't really need pam_loginuid there, so I just disabled that. I just need to remember to do that after each LXC upgrade.
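For reference, disabling the module amounts to commenting out one line in the container's PAM configuration. A sketch, assuming Debian's default layout (verify the exact file and line against your container):

```
# /etc/pam.d/login inside the container: disable the module that needs
# writable access to /proc/self/loginuid.
#session    required     pam_loginuid.so
```

With the line commented out, login no longer tries to write the loginuid, so the read-only /proc restriction is not hit.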
Other than that, I have to boot using SystemV init, since apparently systemd doesn't cope too well with the various restrictions I enforce on my LXCs:

lxc-start -n sync
Failed to mount sysfs at /sys: Operation not permitted
(which is expected, since I drop CAP_SYS_ADMIN from my LXCs). I didn't yet investigate how to stop systemd doing that, so for now I'm falling back to SystemV init until I find the correct customization:

lxc-start -n sync /lib/sysvinit/init
INIT: version 2.88 booting
[info] Using makefile-style concurrent boot in runlevel S.
hostname: you must be root to change the host name
mount: permission denied
mount: permission denied
[FAIL] udev requires a mounted sysfs, not started ... failed!
failed!
mount: permission denied
[info] Setting the system clock.
hwclock: Cannot access the Hardware Clock via any known method.
hwclock: Use the --debug option to see the details of our search for an access method.
[warn] Unable to set System Clock to: Wed Mar 25 21:21:43 UTC 2015 ... (warning).
[ ok ] Activating swap...done.
mount: permission denied
mount: permission denied
mount: permission denied
mount: permission denied
[ ok ] Activating lvm and md swap...done.
[....] Checking file systems...fsck from util-linux 2.25.2
done.
[ ok ] Cleaning up temporary files... /tmp.
[ ok ] Mounting local filesystems...done.
[ ok ] Activating swapfile swap...done.
mount: permission denied
mount: permission denied
[ ok ] Cleaning up temporary files....
[ ok ] Setting kernel variables ...done.
[....] Configuring network interfaces...RTNETLINK answers: Operation not permitted
Failed to bring up lo.
done.
[ ok ] Cleaning up temporary files....
[FAIL] startpar: service(s) returned failure: hostname.sh udev ... failed!
INIT: Entering runlevel: 2
[info] Using makefile-style concurrent boot in runlevel 2.
dmesg: read kernel buffer failed: Operation not permitted
[ ok ] Starting Radicale CalDAV server : radicale.

Yes, there are a lot of errors, but they seem to be handled just fine.
I've happily been using my Akonadi setup (see 2015/akonadi-install) for my calendars, and yesterday I added an .ics feed exported from Google as a URL file source. It is a link of the form: https://www.google.com/calendar/ical/person%40gmail.com/private-12341234123412341234123412341234/basic.ics
After doing that, I noticed that the fan in my laptop was on more often than usual, and that akonadi-server and postgres were running very often and doing quite a lot of processing.

The evil
I investigated and realised that Google seems to be doing everything they can to make their ical feeds hard to sync against efficiently. This is the list of what I have observed Gmail doing to an unchanged ical feed:
- Date: headers in HTTP replies are always now
- If-Modified-Since: is not supported
- DTSTAMP of each element is always now
- VTIMEZONE entries appear in random order
- ORGANIZER CN entries randomly change between full name and plus.google.com user ID
- ATTENDEE entries randomly change between having a CN or not having it
- TRIGGER entries change spontaneously
- CREATED entries change spontaneously
This causes akonadi to download and reprocess the entire ical feed at every single poll, and I can't blame akonadi for it: in effect, Google is presenting a feed with several years' worth of daily appointments that all appear to change all the time.

The work-around
As a work-around, I have configured the akonadi source to point at a local file on disk, and I have written a script to update the file only if the .ics feed has actually changed.
Have a look at the script: I consider it far from trivial, since it needs to do a partial parse of the .ics feed to throw away all the nondeterminism that Google pollutes it with.

The setup
To reload the configuration after editing: systemctl --user daemon-reload.

Further investigation
I wonder whether ConditionACPower needs to be in the .timer or in the .service, since there is a [Unit] section in both. Update: I have been told it can go in the .timer.
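To make the question concrete, a hypothetical sketch of such a user timer/service pair (the unit names and script path are placeholders, not my actual units):

```ini
# ~/.config/systemd/user/update-calendar.service (hypothetical name)
[Unit]
Description=Refresh local copy of the Google .ics feed
ConditionACPower=true

[Service]
Type=oneshot
ExecStart=%h/bin/update-calendar

# ~/.config/systemd/user/update-calendar.timer
[Unit]
Description=Periodically refresh the calendar feed
# Per the update above, ConditionACPower may also live in this
# [Unit] section instead of the service's.

[Timer]
OnCalendar=hourly

[Install]
WantedBy=timers.target
```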
I also wonder if there is a way to have the timer trigger only when online. There is a network-online.target, but I do not know if it is applicable here. I also do not know how to ask systemd whether all the preconditions for a .service/.timer to run are currently met.
Finally, I especially wonder if it is worth hoping that Google will ever make their .ics feeds play nicely with calendar clients.
We are very pleased to announce that HP has committed support of DebConf15 as Platinum sponsor.
"The hLinux team is pleased to continue HP's long tradition of supporting Debian and DebConf," said Steve Geary, Senior Director at Hewlett-Packard.
Hewlett-Packard is one of the largest computer companies in the world, providing a wide range of products and services, such as servers, PCs, printers, storage products, network equipment, software, and cloud computing solutions.
Hewlett-Packard has been a long-term development partner of Debian, and provides hardware for port development, Debian mirrors, and other Debian services (HP hardware donations are listed in the Debian machines page).
With this additional commitment as Platinum Sponsor, HP helps make our annual conference possible and directly supports the progress of Debian and Free Software, strengthening the community that continues to collaborate on Debian projects throughout the rest of the year.
Thank you very much, Hewlett-Packard, for your support of DebConf15!

Become a sponsor too!
DebConf15 is still accepting sponsors. Interested companies and organizations may contact the DebConf team through firstname.lastname@example.org, and visit the DebConf15 website at http://debconf15.debconf.org.
TSDgeos had a good idea:
Lazyweb travel recommendations.
So, dear lazyweb: What are things to do or avoid in Hong Kong and Shenzhen if you have one and a half weeks of holiday before and after work duties? Any hidden gems to look at? Which electronics markets are good? Should I take a boat trip around the waters of Hong Kong?
If you have any decent yet affordable sleeping options for 2-3 nights in Hong Kong, that would also be interesting, as my "proper" hotel stay does not start immediately. Not much in the way of comfort is needed as long as I have a safe place to lock up my belongings.
In somewhat related news, this Friday's bug report stats may be early or late as I will be on a plane towards China on Friday.
I follow a few blog aggregators (or "planets") and it's always a struggle to keep up with the number of posts some of them get. The best strategy I have found so far is to filter them so that I remove the blogs I am not interested in, which is why I wrote PlanetFilter.

Other options
In my opinion, the first step in starting a new free software project should be to look for a reason not to do it. So I started by looking for another approach and by asking people around me how they dealt with the firehoses that are Planet Debian and Planet Mozilla.
It seems like a lot of people choose to "randomly sample" planet feeds and only read a fraction of the posts that are sent through there. Personally however, I find there are a lot of authors whose posts I never want to miss so this option doesn't work for me.
A better option that other people have suggested is not to subscribe to the planet feeds at all, but rather to subscribe to each author's feed separately and prune them as you go. Unfortunately, this whitelist approach is high-maintenance, since planets constantly add and remove feeds. I decided that I wanted to follow a blacklist approach instead.

PlanetFilter
PlanetFilter is a local application that you can configure to fetch your favorite planets and filter the posts you see.
You can either:
- add file:///var/cache/planetfilter/planetname.xml to your local feed reader
- serve it locally (e.g. http://localhost/planetname.xml) using a webserver, or
- host it on a server somewhere on the Internet.
The software will fetch new posts every hour and overwrite the local copy of each feed.
A basic configuration file looks like this:

  [feed]
  url = http://planet.debian.org/atom.xml

  [blacklist]

Filters
There are currently two ways of filtering posts out. The main one is by author name:

  [blacklist]
  authors =
    Alice Jones
    John Doe
and the other one is by title:

  [blacklist]
  titles =
    This week in review
    Wednesday meeting for
In both cases, if a blog entry contains one of the blacklisted authors or titles, it will be discarded from the generated feed.

Tor support
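The filtering logic itself is conceptually simple. A rough sketch of what dropping blacklisted entries from an Atom feed looks like (an illustration only, not PlanetFilter's actual implementation):

```python
#!/usr/bin/env python3
"""Drop Atom entries whose author matches a blacklisted name or whose
title contains a blacklisted string, then re-serialize the feed."""
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"


def filter_feed(xml_text, authors=(), titles=()):
    """Return the feed serialized with blacklisted entries removed."""
    ET.register_namespace("", ATOM)  # keep Atom as the default namespace
    root = ET.fromstring(xml_text)
    for entry in list(root.findall(f"{{{ATOM}}}entry")):
        name = entry.findtext(f"{{{ATOM}}}author/{{{ATOM}}}name", "")
        title = entry.findtext(f"{{{ATOM}}}title", "")
        if name in authors or any(t in title for t in titles):
            root.remove(entry)
    return ET.tostring(root, encoding="unicode")
```

Author names are matched exactly, while titles are matched as substrings, mirroring how the blacklist examples above ("Wednesday meeting for") read like title prefixes.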
Since blog updates happen asynchronously in the background, they can work very well over Tor.
In order to set that up in the Debian version of planetfilter:
- Install the tor and polipo packages.
- Set the following in /etc/polipo/config:

    proxyAddress = "127.0.0.1"
    proxyPort = 8008
    allowedClients = 127.0.0.1
    allowedPorts = 1-65535
    proxyName = "localhost"
    cacheIsShared = false
    socksParentProxy = "localhost:9050"
    socksProxyType = socks5
    chunkHighMark = 67108864
    diskCacheRoot = ""
    localDocumentRoot = ""
    disableLocalInterface = true
    disableConfiguration = true
    dnsQueryIPv6 = no
    dnsUseGethostbyname = yes
    disableVia = true
    censoredHeaders = from,accept-language,x-pad,link
    censorReferer = maybe
- Tell planetfilter to use the polipo proxy by adding the following to /etc/default/planetfilter:

    export http_proxy="localhost:8008"
    export https_proxy="localhost:8008"
The source code is available on repo.or.cz.
I've been using this for over a month and it's been working quite well for me. If you give it a go and run into any problems, please file a bug!
I'm also interested in any suggestions you may have.