If you've forgotten your Drupal password and try unsuccessfully to login, you may get this message:
Sorry, there have been more than 5 failed login attempts for this account. It is temporarily blocked.
The image below shows how the message appears. I'm going to show you how you can fix this error.
Debian/sid is going through a big restructuring with the switch to a new gcc and libstdc++. Furthermore, libcec3 is now the default. So I have updated my PHT builds for Debian/sid so that they build and install in the current state of the archive, both for amd64 and i386.
Add the following lines to your sources.list:

deb http://www.preining.info/debian/ sid pht
deb-src http://www.preining.info/debian/ sid pht
The release file and changes file are signed with my official Debian key 0x860CDC13.
For Debian/testing I am waiting until the transition has settled. Please wait a bit more.
Now get ready to enjoy the next movie!
Well, that was quite some feedback to my last post; via blog, email, irc, and in person. I actually think this may be the most feedback I ever got to any single blog post. If you are still waiting for a reply after this new post, I will get back to you.
To handle common questions/information at once:
- It was the first download from an official Tor-enabled mirror; I know people downloaded updates via Tor before
- Yes, having this in the Debian installer as an option would be very nice
- Yes, there are ways to load balance Tor hidden services these days, and the prerequisites are being worked on already
- Yes, that load balanced setup will support hardware key tokens
- A natively hidden service is more secure than accessing a non-hidden service via Tor because there is no way for a third-party exit node to mess with your traffic
- apt-get and friends will leak information about your architecture, release, suites, desired packages, and package versions. That can't be avoided, but otherwise nothing is leaked to the server. And even if it were… see above
- Using Tor is also more secure than normal ftp/http/https as you don't build up an IP connection so the server can not get back to the client other than through the single one connection the client built up
- noodles Tor-enabled his partial debmirror as well: http://earthqfvaeuv5bla.onion/
- It took him 14322255 tries to get a private key which produced that address
- He gave up to find one starting with earthli after 9474114341 attempts
- I have been swamped with queries asking whether I had tried apt-transport-tor instead of torify
- I had forgotten about it, re-reading the blog post reminded me about apt transports
- Tim even said in his post that Tor hidden mirror services would be nice
- Try it yourself before you ask ;)
- Yes, it works!
So this whole thing is a lot easier now:

# apt-get install torsocks apt-transport-tor
# mv /etc/apt/sources.list /etc/apt/sources.list--backup2
# cat > /etc/apt/sources.list << EOF
deb tor+http://vwakviie2ienjx6t.onion/debian/ unstable main contrib non-free
deb tor+http://earthqfvaeuv5bla.onion/debian/ unstable main contrib non-free
EOF
# apt-get update
# apt-get install vcsh
Wight & Company (Wight) is an integrated architecture, engineering, and construction services firm with offices in Chicago and Darien, Illinois. Wight has expertise in key markets including corporate, commercial, federal government, higher education, local government, PK-12 education, and transportation and infrastructure.
TOKY Branding + Design created a website that sets Wight apart from the all-too-common aesthetic and functionality of competing firms. TOKY specializes in digital and print work for clients in architecture, building, and design, as well as the arts, education, and premium consumer products.

Key modules/themes/distributions used: Advanced Menu, APC - Alternative PHP Cache, Entity API, Entity cache, Field collection, ImageAPI Optimize (or Image Optimize), Memcache API and Integration, Metatag, Remote stream wrapper, Speedy, Taxonomy access fix, Taxonomy display

Organizations involved: TOKY Branding + Design

Team members: Daniel Korte
This would be my last weekly update as far as Google Summer of Code 2015 is concerned. The long road is coming to an end as the season closes on Friday, 28th August 2015. This week I tackled a bug in core of Drupal which I discussed in my last week’s update.
This issue is #2553531 on the Drupal bug tracker. Previously, when a user accessed an area which required them to be logged in without logging in, Drupal would call the authentication providers for a “challenge”. This challenge allows Basic Auth to specify its WWW-Authenticate header and send an HTTP 401 Unauthorized error telling the user that they need to be logged in and can use Basic Auth as a means to log in. This was fine while Basic was the only protocol communicating via WWW-Authenticate, until Hawk came along.
WWW-Authenticate can have multiple values; a server sending WWW-Authenticate: Hawk, Basic, for example, is saying that the client can use either the Hawk or the Basic auth protocol. This wasn't possible in the existing code base, as Drupal did not allow multiple auth providers to specify a challenge. I modified the code to allow multiple auth providers to send their challenges, which the authentication provider manager then compiles into a single exception. Previously, the auth provider would throw an exception itself, which is why multiple auth providers could not specify their own challenges.
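The idea of collecting one challenge per provider and merging them into a single header can be sketched as follows. This is an illustrative Python sketch only; the class and function names are hypothetical and are not Drupal's actual services or API:

```python
class BasicAuth:
    """Hypothetical provider advertising HTTP Basic authentication."""
    def challenge(self):
        return 'Basic realm="example"'

class HawkAuth:
    """Hypothetical provider advertising the Hawk scheme."""
    def challenge(self):
        return "Hawk"

def compile_challenges(providers):
    # Ask every enabled provider for its challenge, then merge them
    # into one comma-separated WWW-Authenticate header value, instead
    # of letting the first provider short-circuit with its own error.
    challenges = [provider.challenge() for provider in providers]
    return "WWW-Authenticate: " + ", ".join(challenges)

print(compile_challenges([HawkAuth(), BasicAuth()]))
# WWW-Authenticate: Hawk, Basic realm="example"
```

A client receiving this combined header may then pick whichever listed scheme it supports.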
This fix has yet to be accepted into Drupal core, but I hope it will be accepted soon.
Concluding Summer of Code
This will probably be the last coding I do during Summer of Code, but it is not the last work related to Drupal or my project, as I plan to continue its development after GSoC; hopefully I get to stick around the Drupal community for a long time.
I had a lot of fun during the summer, learned a lot of new things, and got introduced to Drupal and its community. I worked on implementing a new protocol within PHP, developing a general-purpose library which can be used by anyone wanting to use the protocol with PHP, and implementing the protocol as a Drupal module. These were all things I had never done before, and things I struggled with at times, but I ultimately learned them and succeeded to the best of my abilities. I also improved my understanding of concepts such as dependency injection, unit testing, Composer, authentication and authorization (along with the security concepts related to them), encryption, hashing, and general Drupal architecture and development.
For students participating in the future: don't hesitate to ask around the Drupal community via the forums or IRC if you get stuck, as people are very helpful. Drupal is a complicated beast, and there are a lot of people apart from your mentor who are willing to help; this can also be faster at times when your mentor is not available. I received a lot of help from the community during my project.
I’m glad to have taken part in this year’s summer of code and I will remember this experience forever. A big thanks to my mentor Jingsheng Wang (skyred) and the Drupal community for their support as well as Avantika Agarwal for proofreading my blog and documents related to Summer of Code. I will continue with what I started this summer of code and try to learn and share as many things as I can.
Hugging people with whom one has been working tirelessly for months gives a lot of warm fuzzy feelings. Several recorded and hallway discussions paved the way to solving the remaining issues needed to make “reproducible builds” part of Debian proper. Both talks from the Debian Project Leader and the release team mentioned the effort as important for the future of Debian.
A forty-five minute talk presented the state of the “reproducible builds” effort. It was followed by an hour-long “roundtable” to discuss the current blockers regarding dpkg, .buildinfo, and their integration in the archive.
- Kenneth J. Pronovici uploaded epydoc/3.0.1+dfsg-12 which makes class and modules ordering predictable (#795835) and fixes __repr__ so memory addresses don't appear in docs (#795826). Patches by Val Lorentz.
- Sergei Golovan uploaded erlang/1:18.0-dfsg-2 which adds support for SOURCE_DATE_EPOCH to erlc. Patch by Chris West (Faux) and Chris Lamb.
- Stéphane Glondu uploaded ocaml/4.02.3-2 to experimental, making startup files and native packed libraries deterministic. The patch adds deterministic .file to the assembler output.
- Enrico Tassi uploaded lua-ldoc/1.4.3-3 which now passes the -d option to txt2man and adds the --date option to override the current date.
Reiner Herrmann submitted a patch to make rdfind sort the processed files before doing any operation. Chris Lamb proposed a new patch for wheel implementing support for SOURCE_DATE_EPOCH instead of the custom WHEEL_FORCE_TIMESTAMP. akira sent one making man2html SOURCE_DATE_EPOCH aware.
Stéphane Glondu reported that dpkg-source would not respect tarball permissions when unpacking under a umask of 002.
After hours of iterative testing during the DebConf workshop, Sandro Knauß created a test case showing how pdflatex output can be non-deterministic with some PNG files.

Packages fixed
The following 65 packages became reproducible due to changes in their build dependencies: alacarte, arbtt, bullet, ccfits, commons-daemon, crack-attack, d-conf, ejabberd-contrib, erlang-bear, erlang-cherly, erlang-cowlib, erlang-folsom, erlang-goldrush, erlang-ibrowse, erlang-jiffy, erlang-lager, erlang-lhttpc, erlang-meck, erlang-p1-cache-tab, erlang-p1-iconv, erlang-p1-logger, erlang-p1-mysql, erlang-p1-pam, erlang-p1-pgsql, erlang-p1-sip, erlang-p1-stringprep, erlang-p1-stun, erlang-p1-tls, erlang-p1-utils, erlang-p1-xml, erlang-p1-yaml, erlang-p1-zlib, erlang-ranch, erlang-redis-client, erlang-uuid, freecontact, givaro, glade, gnome-shell, gupnp, gvfs, htseq, jags, jana, knot, libconfig, libkolab, libmatio, libvsqlitepp, mpmath, octave-zenity, openigtlink, paman, pisa, pynifti, qof, ruby-blankslate, ruby-xml-simple, timingframework, trace-cmd, tsung, wings3d, xdg-user-dirs, xz-utils, zpspell.
The following packages became reproducible after getting fixed:
- apr/1.5.2-3 by Stefan Fritsch.
- aprx/2.08.svn593+dfsg-2 uploaded by Colin Tuckley, original patch by Chris Lamb.
- blkreplay/1.0-3 uploaded by Andrew Shadura, patch by Dhole.
- cal3d/0.11.0-6 uploaded by Manuel A. Fernandez Montecelo, patch by akira.
- cgsi-gsoap/1.3.8-1 by Mattias Ellert.
- eyefiserver/2.4+dfsg-1 by Jean-Michel Vourgère.
- gnujump/1.0.8-3 uploaded by Evgeni Golov, original patch by Chris Lamb.
- hkgerman/1:2-29 by Roland Rosenfeld.
- jove/220.127.116.11-4 by Cord Beermann.
- libevhtp/1.2.10-3 by Vincent Bernat.
- libmkdoc-xml-perl/0.75-4 uploaded by gregor herrmann, original patch by Niko Tyni.
- libparse-debianchangelog-perl/1.2.0-6 by Niko Tyni.
- librostlab-blast/1.0.1-5 uploaded by Andreas Tille, original patch by Chris Lamb.
- libxray-absorption-perl/3.0.1-3 uploaded by gregor herrmann, original patch by Niko Tyni.
- lua-penlight/1.3.2-2 by Enrico Tassi.
- mosquitto/1.4.3-1 by Roger A. Light.
- nagios-plugins-contrib/15.20150818 by Jan Wagner and Bernd Zeimetz.
- nn/6.7.3-10 uploaded by Cord Beermann, original patch by Chris Lamb.
- pybik/2.1-1 by B. Clausius.
- pyepr/0.9.3-1 uploaded by Antonio Valentino, original patch by Juan Picca.
- python-xlrd/0.9.4-1 by Vincent Bernat.
- transmissionrpc/0.11-2 uploaded by Vincent Bernat, original patch by Juan Picca.
- unoconv/0.7-1.1 sponsored by Vincent Bernat, fix by Dhole.
- vim-latexsuite/20141116.812-1 uploaded by Johann Felix Soden, original patch by Chris Lamb.
- volk/1.0.2-2 by A. Maitland Bottoms.
- xbmc/2:13.2+dfsg1-5 by Balint Reczey.
- xdotool/1:3.20150503.1-2 uploaded by Daniel Kahn Gillmor, initial patch by Chris Lamb.
- xfig/1:3.2.5.c-5 by Roland Rosenfeld.
- xfireworks/1.3-10 uploaded by Yukiharu YABUKI, original patch by Chris Lamb.
- xul-ext-monkeysphere/0.8-2 uploaded by Daniel Kahn Gillmor, original patch by Dhole.
Uploads that might have fixed reproducibility issues:
- brian/1.4.1-3 uploaded by Yaroslav Halchenko, original patch by Juan Picca.
- opennebula/4.12.3+dfsg-1 by Dmitry Smirnov.
- pcsx2/1.3.1-1008-g9f291a6+dfsg-1 by Miguel A. Colón Vélez.
- webassets/3:0.11-1 uploaded by Agustin Henze, original patch by Reiner Herrmann.
Some uploads fixed some reproducibility issues but not all of them:
- apache2/2.4.16-3 uploaded by Stefan Fritsch, original patch by Jean-Michel Vourgère.
- gerris/20131206+dfsg-6 uploaded by Anton Gladky, original patch by Reiner Herrmann.
- kodi/15.1+dfsg1-1 by Balint Reczey.
- zshdb/0.05+git20101031-4 uploaded by Iain R. Learmonth, original patch by Chris Lamb.
Patches submitted which have not made their way to the archive yet:
- #795861 on fakeroot by Val Lorentz: set the mtime of all files to the time of the last debian/changelog entry.
- #795870 on fatresize by Chris Lamb: set build date to the time of the latest debian/changelog entry.
- #795945 on projectl by Reiner Herrmann: sort with LC_ALL set to C.
- #795977 on dahdi-tools by Dhole: set the timezone to UTC before calling asciidoc.
- #795981 on x11proto-input by Dhole: set the timezone to UTC before calling asciidoc.
- #795983 on dbusada by Dhole: set the timezone to UTC before calling asciidoc.
- #795984 on postgresql-plproxy by Dhole: set the timezone to UTC before calling asciidoc.
- #795985 on xorg by Dhole: set the timezone to UTC before calling asciidoc.
- #795987 on pngcheck by Dhole: set the date in the man pages to the latest debian/changelog entry.
- #795997 on python-babel by Val Lorentz: make build timestamp independent from the timezone and remove the name of the build system locale from the documentation.
- #796092 on a7xpg by Reiner Herrmann: sort with LC_ALL set to C.
- #796212 on bittornado by Chris Lamb: remove umask-varying permissions.
- #796251 on liblucy-perl by Niko Tyni: generate lib/Lucy.xs in a deterministic order.
- #796271 on tcsh by Reiner Herrmann: sort with LC_ALL set to C.
- #796275 on hspell by Reiner Herrmann: remove timestamp from aff files generated by mk_he_affix.
- #796324 on fftw3 by Reiner Herrmann: remove date from documentation files.
- #796335 on nasm by Val Lorentz: remove extra timestamps from the build system.
- #796360 on libical by Chris Lamb: remove randomness caused by Perl in the generated icalderivedvalue.c.
- #796375 on wcd by Dhole: set the date in the man pages to the latest debian/changelog entry.
- #796376 on mapivi by Dhole: set the date in the man pages to the latest debian/changelog entry.
- #796527 on vserver-debiantools by Dhole: set the date in the man pages to the latest debian/changelog entry.
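Several of the patches above set LC_ALL to C before sorting so that results follow plain byte order rather than a locale-dependent collation that varies between build machines. A minimal illustration of the difference (Python compares strings by code point, which matches the C locale's byte order for ASCII):

```python
words = ["apple", "Banana"]

# Code-point order, as with LC_ALL=C sort: every uppercase letter
# sorts before every lowercase one, giving the same result on any
# machine regardless of its configured locale.
print(sorted(words))                     # ['Banana', 'apple']

# A locale-style, case-insensitive collation orders them the other
# way round; because real locales differ between systems, leaving
# the locale unpinned makes build output nondeterministic.
print(sorted(words, key=str.casefold))   # ['apple', 'Banana']
```

Pinning the locale, like the patches do, picks the first behavior everywhere.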
Package pages on reproducible.debian.net now have a new layout improving readability designed by Mattia Rizzolo, h01ger, and Ulrike. The navigation is now on the left as vertical space is more valuable nowadays.
armhf is now enabled on all pages except the dashboard. Actual tests on armhf are expected to start shortly. (Mattia Rizzolo, h01ger)
The limit on how many packages people can schedule using the reschedule script on Alioth has been bumped to 200. (h01ger)
Following the rename of the software, “debbindiff” has mostly been replaced by either “diffoscope” or “differences” in generated HTML and IRC notification output.
Connections to UDD have been made more robust. (Mattia Rizzolo)

diffoscope development
New command line options are available: --max-diff-input-lines and --max-diff-block-lines to override limits on diff input and output (Reiner Herrmann), --debugger to dump the user into pdb in case of crashes (Mattia Rizzolo).
jar archives should now be detected properly (Reiner Herrmann). Several general code cleanups were also done by Chris Lamb.

strip-nondeterminism development
During the “reproducible builds” workshop at DebConf, participants identified that we were still short of a good way to test variations in filesystem behavior (e.g. file ordering or disk usage). Andrew Ayer took a couple of hours to create disorderfs. Based on FUSE, disorderfs is an overlay filesystem that will mount the content of a directory at another location. For this first version, it makes the order in which files appear in a directory random.

Documentation update
Dhole documented how to implement support for SOURCE_DATE_EPOCH in Python, bash, Makefiles, CMake, and C.
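In Python, honoring the variable boils down to preferring SOURCE_DATE_EPOCH over the current time when stamping generated output, and formatting in UTC. A minimal sketch of that pattern (following the general idea of the specification, not Dhole's exact snippet):

```python
import os
import time

# Use SOURCE_DATE_EPOCH when the build system exports it, falling
# back to the current time otherwise, so normal interactive use is
# unaffected but builds become reproducible.
build_time = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))

# Always format in UTC; a timezone-dependent string would
# reintroduce nondeterminism between build machines.
print(time.strftime("%Y-%m-%d", time.gmtime(build_time)))
```

With SOURCE_DATE_EPOCH exported (typically derived from the latest debian/changelog entry), two builds of the same source emit the same date string.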
Chris Lamb started to convert the wiki page describing SOURCE_DATE_EPOCH into a Freedesktop-like specification in the hope that it will convince more upstreams to adopt it.

Package reviews
44 reviews have been removed, 192 added and 77 updated this week.
New issues identified this week: locale_dependent_order_in_devlibs_depends, randomness_in_ocaml_startup_files, randomness_in_ocaml_packed_libraries, randomness_in_ocaml_custom_executables, undeterministic_symlinking_by_rdfind, random_build_path_by_golang_compiler, and images_in_pdf_generated_by_latex.
117 new FTBFS bugs have been reported by Chris Lamb, Chris West (Faux), and Niko Tyni.

Misc.
Some reproducibility issues might only surface very late. Chris Lamb noticed that the test suite for python-pykmip was now failing because its test certificates have expired. Let's hope no packages are hiding a certificate valid for 10 years somewhere in their source!
Pictures courtesy and copyright of Debian's own paparazzi: Aigars Mahinovs.
I've spent a fair amount of time thinking about how to win back the Open Web, but in the case of digital distributors (e.g. closed aggregators like Facebook, Google, Apple, Amazon, Flipboard) superior, push-based user experiences have won the hearts and minds of end users, and enabled them to attract and retain audience in ways that individual publishers on the Open Web currently can't.
In today's world, there is a clear role for both digital distributors and Open Web publishers. Each needs the other to thrive. The Open Web provides distributors content to aggregate, curate and deliver to its users, and distributors provide the Open Web reach in return. The user benefits from this symbiosis, because it's easier to discover relevant content.
As I see it, there are two important observations. First, digital distributors have out-innovated the Open Web in terms of conveniently delivering relevant content; the usability gap between these closed distributors and the Open Web is wide, and won't be overcome without a new disruptive technology. Second, the digital distributors haven't provided the pure profit motives for individual publishers to divest their websites and fully embrace distributors.
However, it begs some interesting questions for the future of the web. What does the rise of digital distributors mean for the Open Web? If distributors become successful in enabling publishers to monetize their content, is there a point at which distributors create enough value for publishers to stop having their own websites? If distributors are capturing market share because of a superior user experience, is there a future technology that could disrupt them? And the ultimate question: who will win, digital distributors or the Open Web?
I see three distinct scenarios that could play out over the next few years, which I'll explore in this post.
This image summarizes different scenarios for the future of the web. Each scenario has a label in the top-left corner which I'll refer to in this blog post. A larger version of this image can be found at http://buytaert.net/sites/buytaert.net/files/images/blog/digital-distrib....

Scenario 1: Digital distributors provide commercial value to publishers (A1 → A3/B3)
Digital distributors provide publishers reach, but without tangible commercial benefits, they risk being perceived as diluting or even destroying value for publishers rather than adding it. Right now, digital distributors are in early, experimental phases of enabling publishers to monetize their content. Facebook's Instant Articles currently lets publishers retain 100 percent of revenue from the ad inventory they sell. Flipboard, in efforts to stave off rivals like Apple News, has experimented with everything from publisher paywalls to native advertising as revenue models. Expect much more experimentation with different monetization models and deal-making between publishers and digital distributors.
If digital distributors like Facebook succeed in delivering substantial commercial value to publishers, they may fully embrace the distributor model and even divest their own websites' front ends, especially if publishers could make the vast majority of their revenue from Facebook rather than from their own websites. I'd be interested to see someone model out a business case for that tipping point. I can imagine a future upstart media company either divesting its website completely or starting from scratch to serve content directly to distributors (and being profitable in the process). This would be unfortunate news for the Open Web and would mean that content management systems need to focus primarily on multi-channel publishing, and less on their own presentation layer.
As we have seen in other industries, decoupling production from consumption in the supply chain can redefine industries. We also know that it introduces major risks, as it puts a lot of power and control in the hands of a few.

Scenario 2: The Open Web's disruptive innovation happens (A1 → C1/C2)
For the Open Web to win, the next disruptive innovation must focus on narrowing the usability gap with distributors. I've written about a concept called a Personal Information Broker (PIM) in a past post, which could serve as a way to responsibly use customer data to engineer similarly personal, contextually relevant experiences on the Open Web. Think of this as unbundling Facebook: you separate the personal information management system from the content aggregation and curation platform, and make it available for everyone on the web to use. First, it would help us close the user experience gap, because you could broker your personal information with every website you visit, and every website could instantly provide you a contextual experience regardless of prior knowledge about you. Second, it would enable the creation of more distributors. I like the idea of a PIM making the era of a handful of closed distributors as short as possible. In fact, it's hard to imagine the future of the web without some sort of PIM. In a future post, I'll explore in more detail why the web needs a PIM, and what it may look like.

Scenario 3: Coexistence (A1 → A2/B1/B2)
Finally, in a third combined scenario, neither publishers nor distributors dominate, and both continue to coexist. The Open Web serves as both a content hub for distributors, and successfully uses contextualization to improve the user experience on individual websites.

Conclusion
Right now, since distributors are out-innovating on relevance and discovery, publishers are somewhat at their mercy for traffic. However, whether there is a significant enough profit motive to divest websites completely remains to be seen. I can imagine that we'll continue in a coexistence phase for some time, since it's unreasonable to expect either the Open Web or digital distributors to fail. If we work on the next disruptive technology for the Open Web, it's possible that we can shift the pendulum in favor of “open” and narrow the usability gap that exists today. If I were to guess, I'd say that we'll see a move from A1 to B2 in the next 5 years, followed by a move from B2 to C2 over the next 5 to 10 years. Time will tell!
At Annertech, there are three things we take very seriously: website/server security, accessibility, and website load times/performance. This article will look at website performance with metrics from recent work we completed for Oxfam Ireland.
We use a suite of tools for performance testing. Some of these include Apache Benchmark, Yahoo's YSlow, and Google's PageSpeed Insights. Our favourite at the moment is NewRelic, though this does come at a cost.
During Jacob Applebaum's talk at DebConf15, he noted that Debian should TLS-enable all services, especially the mirrors.
His reasoning was that when a high-value target downloads a security update for package foo, an adversary knows that the target is still running a vulnerable version of foo and can try to attack before the security update has been installed.
In this specific case, TLS is not of much use though. If the target downloads 4.7 MiB right after a 4.7 MiB security update has been released, or downloads from security.debian.org, it's still obvious what's happening. Even padding won't help much, as a 5 MiB download would also be suspicious. The mere act of downloading anything from the mirrors after an update has been released is reason enough to attempt an attack.
The solution, is, of course, Tor.
weasel was nice enough to set up a hidden service on Debian's infrastructure; initially we agreed that he would just give me a VM and I would do the actual work, but he went the full way on his own. Thanks :) This service is not redundant, it uses a key which is stored on the local drive, the .onion will change, and things are expected to break.
But at least this service exists now and can be used, tested, and put under some load:

http://vwakviie2ienjx6t.onion/
I couldn't get apt-get to be content with a .onion in /etc/apt/sources.list and Acquire::socks::proxy "socks://127.0.0.1:9050"; in /etc/apt/apt.conf, but the torify wrapper worked like a charm. What follows is, to the best of my knowledge, the first ever download from Debian's "official" Tor-enabled mirror:

~ # apt-get install torsocks
~ # mv /etc/apt/sources.list /etc/apt/sources.list.backup
~ # echo 'deb http://vwakviie2ienjx6t.onion/debian/ unstable main non-free contrib' > /etc/apt/sources.list
~ # torify apt-get update
Get:1 http://vwakviie2ienjx6t.onion unstable InRelease [215 kB]
Get:2 http://vwakviie2ienjx6t.onion unstable/main amd64 Packages [7548 kB]
Get:3 http://vwakviie2ienjx6t.onion unstable/non-free amd64 Packages [91.9 kB]
Get:4 http://vwakviie2ienjx6t.onion unstable/contrib amd64 Packages [58.5 kB]
Get:5 http://vwakviie2ienjx6t.onion unstable/main i386 Packages [7541 kB]
Get:6 http://vwakviie2ienjx6t.onion unstable/non-free i386 Packages [85.4 kB]
Get:7 http://vwakviie2ienjx6t.onion unstable/contrib i386 Packages [58.1 kB]
Get:8 http://vwakviie2ienjx6t.onion unstable/contrib Translation-en [45.7 kB]
Get:9 http://vwakviie2ienjx6t.onion unstable/main Translation-en [5060 kB]
Get:10 http://vwakviie2ienjx6t.onion unstable/non-free Translation-en [80.8 kB]
Fetched 20.8 MB in 2min 0s (172 kB/s)
Reading package lists... Done
~ # torify apt-get install vim
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  vim-common vim-nox vim-runtime vim-tiny
Suggested packages:
  ctags vim-doc vim-scripts cscope indent
The following packages will be upgraded:
  vim vim-common vim-nox vim-runtime vim-tiny
5 upgraded, 0 newly installed, 0 to remove and 661 not upgraded.
Need to get 0 B/7719 kB of archives.
After this operation, 2048 B disk space will be freed.
Do you want to continue? [Y/n]
Retrieving bug reports... Done
Parsing Found/Fixed information... Done
Reading changelogs... Done
(Reading database ... 316427 files and directories currently installed.)
Preparing to unpack .../vim-nox_2%3a7.4.826-1_amd64.deb ...
Unpacking vim-nox (2:7.4.826-1) over (2:7.4.712-3) ...
Preparing to unpack .../vim_2%3a7.4.826-1_amd64.deb ...
Unpacking vim (2:7.4.826-1) over (2:7.4.712-3) ...
Preparing to unpack .../vim-tiny_2%3a7.4.826-1_amd64.deb ...
Unpacking vim-tiny (2:7.4.826-1) over (2:7.4.712-3) ...
Preparing to unpack .../vim-runtime_2%3a7.4.826-1_all.deb ...
Unpacking vim-runtime (2:7.4.826-1) over (2:7.4.712-3) ...
Preparing to unpack .../vim-common_2%3a7.4.826-1_amd64.deb ...
Unpacking vim-common (2:7.4.826-1) over (2:7.4.712-3) ...
Processing triggers for man-db (18.104.22.168-5) ...
Processing triggers for mime-support (3.58) ...
Processing triggers for desktop-file-utils (0.22-1) ...
Processing triggers for hicolor-icon-theme (0.13-1) ...
Setting up vim-common (2:7.4.826-1) ...
Setting up vim-runtime (2:7.4.826-1) ...
Processing /usr/share/vim/addons/doc
Setting up vim-nox (2:7.4.826-1) ...
Setting up vim (2:7.4.826-1) ...
Setting up vim-tiny (2:7.4.826-1) ...
~ #
More services will follow. noodles, weasel, and I agreed that the project as a whole should aim to Tor-enable the complete package lifecycle, package information, and the website.
Maybe a more secure install option on the official images which, amongst other things, sets up apt, apt-listbugs, dput, reportbug, et al. to use Tor without further configuration could even be a realistic stretch goal.
I have just released a vastly improved new version of the Kobo Japanese Dictionary Enhancer. It allows you to enhance the Kobo Japanese dictionary with English translations.
The new version now provides 326064 translated entries, which cover most non-compound words, including hiragana. In my daily life reading Harry Potter and some other books in Japanese, I haven't found many untranslated words so far.
Please head over to the main page of the project for details and download instructions. If you need my help in creating the updated dictionary, please feel free to contact me.
I therefore spent some time to finish a couple of features in the editor for sources.debian.net. Here are some of the changes:
- Compare the source file with that of another version of the package
- And in order to present that: tabs! editor tabs!
- At the same time, generated diffs are now presented in a new editor tab, from where you can download or email them
Get it for chromium, and iceweasel.
If your browser performs automatic updates of the extensions (the default), you should soon be upgraded to version 0.1.0 or later, bringing all those changes to your browser.
Want to see more? multi-file editing? in-browser storage of the editing session? that and more can be done, so feel free to join me and contribute to the Debian sources online editor!
Moodle is a free and open-source software learning management system written in PHP and distributed under the GNU General Public License. Moodle is used for blended learning, distance education, flipped classroom and other e-learning projects in schools, universities, workplaces and other sectors.
Our main objective is to manage all users from Drupal, i.e. to use Drupal as the front end for managing users. For this purpose there are a Moodle plugin and a Drupal module. Drupal Services is a Moodle authentication plugin that allows for SSO between Drupal and Moodle. Moodle SSO provides the Drupal functionality required to allow the Moodle learning management system to share Drupal sessions via SSO.
In order to make SSO work, we need to ensure that the sites can share cookies, so the Drupal and Moodle sites should have URLs like drupal.example.com and moodle.example.com. To make the sites use a shared cookie, we need to set the value of $cookie_domain in the settings.php file on the Drupal site. In our case, the site URLs were drupal.example.com and moodle.example.com. For this type of sub-domain setup, the value can be set as follows:

$cookie_domain = ".example.com";
Note: The dot before "example.com" is necessary.
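To see what that setting amounts to at the HTTP level, here is a small illustration using Python's standard http.cookies module (not Drupal itself; the cookie name is hypothetical) of a session cookie scoped to .example.com, which browsers will send to both drupal.example.com and moodle.example.com:

```python
from http.cookies import SimpleCookie

# Hypothetical session cookie name; Drupal generates its own names.
cookie = SimpleCookie()
cookie["SESSexample"] = "abc123"
# The leading dot scopes the cookie to example.com and all of its
# subdomains, which is what lets the two sites share one session.
cookie["SESSexample"]["domain"] = ".example.com"

print(cookie.output())
# Set-Cookie: SESSexample=abc123; Domain=.example.com
```

Without the Domain attribute, the cookie would be host-only and the other subdomain would never receive it.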
Let's go through the steps that need to be followed to achieve SSO between Drupal and Moodle:
1. Moodle site
This post explains how to create a slideshow in Drupal. There are many ways and plugins available to create a slideshow in Drupal, and I am going to discuss some methods which are very efficient and useful.
1) Using Views slideshow module
2) Using jQuery cSlider plugin
3) Using Bootstrap carousel
1. Using Views slideshow module:
The modules required for this method are:
3) jQuery cycle plugin ( Download here and place it at sites/all/libraries/jquery.cycle/)
Enable the added modules. To create a views slideshow, create a new content type, for instance "Slideshow", with an image field which can be used as the slideshow image.
Add multiple slideshow nodes with images. Then create a view block listing slideshow content, select "Slideshow" as the format, and configure the transition effect under the Settings link.
After saving this view, place the view block in the necessary region at admin/structure/blocks.
2. Using jQuery cSlider plugin:
1) You can download the plugin here. It also ships with a demo file which can be used as a reference.
One of our key values at the Drupal Association is communication:
We value communication. We seek community participation. We are open and transparent.
One of the ways that we try to live this value is by making our numbers -- both operating targets and financial -- available to the public. The monthly board reports share basic financial numbers and all our operational metrics. Once a quarter, we release full financial reports for the previous quarter. You can access all this information at any time on the Association web site.
At the close of each year, we take the opportunity to have our financials reviewed (and sometimes audited). The review process ensures that we've represented our financials properly. This work takes some time. Though our fiscal year closes on 31 December, it takes six to eight weeks to get the final bits and pieces handled in our financial systems. The independent review or audit adds another 8+ weeks to the process of closing out our year. Then we have to review the findings with the Finance Committee and the full Board before we share them publicly. That's why it's August and we're just now sharing the 2014 reviewed financial statements with you.
In 2014 we also began tracking our progress towards several operational goals for the first time. Though we share those numbers every month in the board report, we pulled some of our favorite stats and stories together into an Annual Report to share the work that our financials represent.
What happened in 2014?
2014 was an investment year. Per our Leadership Plan and Budget for the year, our key focus was building an engineering team to first address technical debt on Drupal.org and then take on actual improvements to the site. We purposely built a budget that anticipated a deficit spend in order to fully fund the team. The intent was to also build some new revenue programs (like Drupal Jobs) that would ramp up and eventually allow us to fund the new staff without deficit spending. And that's what we did. We went from two full time staff focused on Drupal.org to ten.
The investment has been paying off. We spent a lot of 2014 playing catch up with technical debt, but also managed to improve site performance markedly while also increasing the portability of our infrastructure. On top of that, staff worked with community volunteers to release new features related to commit messages, profiles, and Drupal 8 release blockers. Most importantly, staff and the working groups prioritized upcoming work and published a strategic roadmap for improvements to Drupal.org.
We held two huge DrupalCons, one in Austin and one in Amsterdam, and planned for a third. Our very small team of events staff and a crew of remarkable volunteers hosted over 5,500 people across our two events, all while planning our first Con in Latin America. We had some stumbling blocks and learning opportunities, and have been applying what we learned to the three 2015 DrupalCons.
We launched Drupal Jobs. This was something the community asked for very clearly when we conducted a 2013 study. We’ve seen steady growth in usage since our quiet launch and will continue to refine the site, including our pricing models, so that it is accessible to Drupalers around the world.
We diversified our revenue streams. DrupalCons used to be 100% of our funding. Not only is this a risky business strategy, it puts undue pressure on the Cons to perform financially, leaving us little room to experiment or make decisions that may be right for attendees, but could negatively impact the bottom line. As we increase the funding sources for the Association, we can make more and different choices for these flagship programs and also grow new programs with the community.
We introduced branded content including white papers, infographics, and videos. These materials have been widely used by the community and have helped us understand the Drupal.org audience in a better way. You can see a lot of this work on the Drupal 8 landing pages, where the key content pieces were downloaded thousands of times in 2014.
We released new vision, mission, and values statements for the Association. These tools are really useful in defining the focus of the organization and helping to guide how we get our work done. Working in a community of this size and diversity is extremely challenging. There is no choice we can make that will include everyone’s ideals, but our values help us make those decisions in a way that allows for transparency and open dialogue with the community. It’s something that we try to honor every day.
What about money in 2014?
As anticipated, we ran a deficit in 2014. However, we did manage to grow our overall revenue by about 6% from 2013 to 2014. This trend has continued into 2015, though not at the rate we had hoped. Still, we are now on track to support the investment we made in 2014 into the future. Another key win in 2014 is that we grew non-DrupalCon revenue to 21% of our total revenue. Diversifying our revenue streams reduces our financial risk and takes the pressure off of Cons, allowing us to experiment more.
I want all the details
Excellent! You can check out:
Even though the week of DebCamp took its toll and the stress level will not go down any time soon...
...DebConf15 has finally started! :)
Even though Debian has moved to systemd as default a long while ago now, I've stayed with sysv as I have somewhat custom setups (self-built trimmed down kernels, separate /usr not pre-mounted by initrd, etc.).
After installing a new system with Jessie and playing a bit with systemd on it a couple of months ago, I decided it was finally time to upgrade. Easier said than done ☹.
The first system I upgraded was a recent (~1 year old) install. It was a trimmed-down system with Debian's kernel, so everything went smoothly. So smoothly that I soon forgot I made the change, and didn't do any more switches for a while.
Systemd was therefore out of my mind until this recent Friday, when I got a bug report about mt's rcS init script and shipping a proper systemd unit. The first step should be to actually start using systemd, so I said: let's convert some more things!
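For context, converting an rcS-style init script usually means shipping a small unit file in its place. A minimal sketch of what such a unit could look like (the service name, description, and path here are illustrative, not mt's actual unit):

```ini
# /lib/systemd/system/example.service (illustrative)
[Unit]
Description=Example one-shot service replacing an rcS init script
# rcS scripts run early in boot, before the normal dependency set:
DefaultDependencies=no
Before=sysinit.target

[Service]
# One-shot: run once at boot, then consider the service "active":
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/example-setup

[Install]
WantedBy=sysinit.target
```

The unit would then be enabled with `systemctl enable example.service`; the exact dependencies depend on what the original script did.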
During the weekend I upgraded one system, still a reasonably small install, but older, probably 6-7 years. The first reboot into systemd flagged the fact that I had some forced-load modules which no longer exist, something that was all too easy to miss with sysv. Nice! The only downside was that there seems to be some race condition involving ntp, as it fails to start on boot (port listen conflict). I'll see if it repeats. Another small issue is that systemd doesn't like duplicate fstab entries (i.e. two devices which both refer to the same mount point), while this works fine for mount itself (when specifying the block device).
I said that after that system, I'd wait a while before upgrading the next. But it so happened that today another system had an issue and I had to reboot it (damn lost uptimes!). The kernel was old, so I booted into a newer one (this time compiled with the required systemd options), and then I had a thought: what if I take the opportunity and also switch to systemd on this system?
Caution said to wait, since this was the oldest system - installed sometime during or before 2004. Plus it doesn't use an initrd (long story), and it has a split /usr. Caution… excitement… caution lost ☺ and I proceeded.
It turns out that systemd does warn about the split /usr but otherwise has no problem with it. I learned that I also had very old sysfs entries that no longer exist, which I didn't know about as sysv doesn't make this obvious. I also had an obsolete crypttab entry which I had forgotten about, until I met the nice red moving ASCII bar, which fortunately had a timeout.
To be honest, I expected I'd have to rescue-boot and fix things on this "always-unstable" machine, on which I install and run random things, and which has a hackish /etc/fstab setup. I'm quite surprised it just worked. On unstable.
So thanks a lot to the Debian systemd team. It was much simpler than I thought, and now, on to exploring systemd!
P.S.: the sad part is that usually I'm a strong proponent of declarative configuration, but for some reason I was reluctant to migrate to systemd, also on account of losing the "power" of shell scripts. Humans…
Periodically, there is a complaint that PHP conferences are just "the same old faces". That the PHP community is insular and is just a good ol' boys club, elitist, and so forth.
It's not the first community I've been part of that has had such accusations made against it, so rather than engage in such debates I figured, let's do what any good scientist would do: Look at the data!