In the next couple of weeks we'll be launching a new sponsorship opportunity for Drupal Supporters on the homepage of Drupal.org. The following is background information and a proposal for the program. We would like a period of public community feedback, which is open until the 6th of July. At that time, we will incorporate the feedback into the sponsorship program plan.

Background
The Drupal Association has been creating advertising programs on Drupal.org in an effort to do more to serve our mission, and to take the pressure off of DrupalCons to perform financially. We’ve been working to develop advertising products that are meaningful for advertisers, deliver value to the community, and are respectful of users contributing to the project.

About the Program
The Homepage Sponsorship will highlight partners who support the community through Drupal Supporter Programs. This includes Supporting Partners, Hosting Supporters, and Tech Supporters. The sponsorship will display in the 300 x 250 ad block that already exists on the Drupal.org homepage. The creative template is designed and maintained by the Association. The featured supporter will provide a logo, body copy, button copy, and a link that will direct to their website. We will display the partner’s supporter badge and, eventually, pass in any applicable organization credits.
The idea for the Homepage Sponsorship originates from the rewards mechanism that Dries discussed in his DrupalCon Amsterdam 2014 Keynote. His vision involves building a system that creates an incentive for Drupal companies to contribute to the project by rewarding them with benefits and giving recognition.
There is a larger project in motion, which includes the Drupal Association building commit credits for organizations and developing the algorithm that applies a value to those credits. The Homepage Sponsorship is one potential reward that will eventually feed into that system. Until the larger project is complete, the Homepage Sponsorship will be available for purchase by Drupal Supporters. It will be sold in one-week increments, giving the partner 100% of the page views during the campaign. The program will expand recognition for those organizations who already give back, and will encourage more organizations to participate in Supporter Programs.

[Image: Homepage Sponsorship mock]

Advertising Guidelines for Drupal.org
The Drupal Association interviewed representatives of the Drupal Community to help guide our advertising strategy and ensure a positive advertising experience on Drupal.org. We developed informal guidelines; for example, advertising is not appropriate in issue queues, and when possible, products should monetize users who are logged out and not contributing to the Project. After we received feedback on our most recent program, Try Drupal, we started work on formalizing these guidelines for advertising on Drupal.org.
We created an issue to share a draft advertising policy developed by the Association and Drupal.org Content Working Group. The policy will set guidelines for how we advertise - addressing issues like the labeling of ads, content guidelines, etc. - with the aim of providing an advertising experience that complements Drupal.org and supports our community values. Whatever decisions are made in that policy will be applied to existing programs, including the Homepage Sponsorship and Try Drupal program.

Talk To Us
We want your input about the Homepage Sponsorship. Please comment on this post, or in the issue, with your questions and insights.
Commerce Guys has been promoting the value of content-driven commerce for many years, and we are thrilled to see more and more people talking about this continued transformation in the eCommerce market. One company that has recognized this important trend is Forrester Research, who makes a strong and compelling case in their report "Content And Commerce: The Odd Couple Or The Power Couple?" In particular, they point out that companies who differentiate themselves by providing a unified user experience to tell their story should consider a tightly integrated solution that provides both a rich Content Management System (CMS) and a flexible eCommerce transactional engine.
Today there is almost no barrier to selling online, making it increasingly difficult for companies to differentiate themselves online, create a strong web presence, and attract customers. The solution for many will be to focus more on creating unique user experiences, supported by interesting content, which allows their users to execute transactions anywhere along the buying journey within the context of that information. The challenge today is that this experience requires CMS and eCommerce to work together seamlessly. Unfortunately most companies manage these two functions separately with two distinct systems. This approach results in added complexity and a disjointed and inconsistent user experience that is confusing to users and damages their brand.
According to Forrester, "the convergence of content and commerce platforms is already well underway. [They] expect these two solution categories to be foundational elements in digital customer experience management" [1]. They go on to say that "In an ideal world, commerce and content platforms would have fully converged into customer experience management platforms, with commerce services seamlessly exposed through best-in-class digital engagement tools and supported by social, testing, and content management services." - "But this ideal isn’t likely to exist in the near future" [1].
The future is NOW - and the reality is that Drupal + Drupal Commerce is the only platform with commerce natively embedded in a CMS, offering a seamless digital experience management solution with a single code base, administration, and database.
Why is this not more widely known?
While this may be news to many, Drupal Commerce has been around for over 5 years and has over 57k active sites. It consists of core and contributed modules, supported by Commerce Guys and the broader community, that can be dropped into Drupal (which itself has been around for 10+ years and has over 5 million active sites), allowing transactions to occur anywhere within the user experience you create. Contextual relationships between content and products are extremely easy to create - something that is hard to do when you bolt together separate CMS and eCommerce platforms. A great example of the power of Drupal + Drupal Commerce is www.lush.co.uk, which helps Lush in the UK tell their story, engage their customers, and sell more product.
Who Benefits from a Content & Commerce Solution?

Potentially everyone, but in particular brands that benefit from a differentiated user experience - one that enables them to tell their story through interesting content and community engagement, driving sales within the context of that experience. Existing Drupal sites looking to add transactional capabilities are another obvious fit: with an existing investment in technology, skills, and content, there is no better choice than to "drop in" commerce functionality, through Drupal Commerce modules, anywhere. Integrating a separate eCommerce solution and bolting it onto Drupal is a common approach and certainly possible, but the result is added complexity, added cost, and valuable customer information spread out across multiple systems. Two systems make it harder to create the level of contextualization and the unified experience that buyers are looking for. Given the increasing importance of targeting and personalizing content and offers, and of knowing your customer, having customer information in one place allows companies to merchandise more effectively.

What Should You Do?

1. Read the Forrester report. They get it right, and they are one of a growing number of analysts talking about the value of content-driven commerce.

2. Don't get stuck on features. Yes, they are important, but they will also change, and you need a solution that will adapt and allow you to take advantage of new ideas quickly. Instead, consider how your business will benefit by creating an experience that keeps your customers coming back and makes it easy for them to buy.

3. If you think your business would benefit from a richer user experience, or if you just want to simplify your infrastructure with a single platform that can serve both your content and commerce needs, take a look at Drupal Commerce - you will be pleasantly surprised by what you see.

1. Stephen Powers, Peter Sheldon, with Zia Daniell Wigder, David Aponovich, and Rebecca Katz, "Content And Commerce: The Odd Couple Or The Power Couple? How To Choose Between Using A Web Content Management Solution, An eCommerce Platform, Or Both" (Forrester, November 19, 2013), pp. 11, 14.
I've been a proponent of CAcert.org for a long time and I'm still using those certificates in some places, but lately I gave in and searched for something that validates even on iOS. It's not that I strictly need it; it's more a favour to make life easier for friends and family.
I turned down startssl.com because I always manage to somehow lose the client certificate for the portal login. Plus I failed to generate several certificates for subdomains within the primary domain. I want to use different keys on purpose, so SANs are not helpful, and neither are wildcard certs, for which you have to pay anyway. Another point against a wildcard cert from startssl is that I'd like to refrain from sending in my scanned papers for verification.
On a side note, I'm also not a fan of random email address extraction from whois to send validation codes to. I just don't see why the abuse desk of a registrar should be able to authorize DV certificates for a domain under my control.
So I decided to pay the self-proclaimed leader of the snakeoil industry (Comodo) via cheapsslshop.com. That came to 12 USD for a 3-year Comodo DV certificate. Fair enough for the mail setup I share with a few friends, and the cheapest one I could find at that time. There was actually no hassle with logins or verification. It looks a bit like a scam, but the payment is done via 2checkout, if I remember correctly, and the certificate got issued via a voucher code by Comodo directly. Drawback: credit card payment.
That now provides a free and validating certificate for sven.stormbind.net, in case you'd like to check out the chain. The validation chain is even one certificate shorter than the chain for the certificate I bought from Comodo. So in case anyone else is waiting for letsencrypt to start, you might want to check wosign until Mozilla et al are ready.
From my point of view, the only reason to pay one of the major CAs is for the service of running a reliable OCSP system. I also pointed that out here. It's more and more about the service you buy, and no longer just money for a few ones and zeroes.
Today I will show how to authorize a user in your mobile app against a Drupal website. First of all, we should configure Drupal to allow REST authentication: go to /admin/structure/services and click the “Edit” link next to your service name. Here, check the “Session authentication” option and save. Next, go to the Resources tab, edit the user resource alias, and enable the following operations for it: login, logout, token, and register.
Also, we should check the user registration settings at /admin/config/people/accounts to allow new user registration by visitors, and account activation without email or admin confirmation.
To log in a user with the Services module, we must follow these steps:

1. Send a GET request to /services/session/token.
2. Get the response with a CSRF token.
3. Send a POST request with the username and password, with the previously received token in an X-CSRF-Token header, to the /user/login endpoint.
4. On success, receive an object with user data and a new CSRF token; on failure, receive an error code with a message.
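The four-step flow above can be sketched as plain request descriptions. This is only a sketch: the helper names are mine, and baseUrl is assumed to be your site (or service endpoint) root, which the article does not spell out.

```javascript
// Step 1: describe the GET request that fetches the CSRF token.
function buildTokenRequest(baseUrl) {
  return { method: 'GET', url: baseUrl + '/services/session/token' };
}

// Step 3: describe the POST login request; the token obtained in step 2
// travels in the X-CSRF-Token header. In the app itself these requests
// are performed with AngularJS's $http service.
function buildLoginRequest(baseUrl, username, password, csrfToken) {
  return {
    method: 'POST',
    url: baseUrl + '/user/login',
    headers: {
      'Content-Type': 'application/json',
      'X-CSRF-Token': csrfToken
    },
    data: { username: username, password: password }
  };
}
```

On success, the response body carries the user object and a fresh token, which should replace the one used for the request.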
To log out a user, we have to send a POST request to /user/logout with an X-CSRF-Token header containing the token we received on login.
And to register a new user, we send a POST request with user data to the /user/register API URL. As a response, we should get a new user object on success, or an error status message on failure. The minimum data required for user registration is a username, e-mail address, and password, but we should also set status equal to 1 to immediately make the new account active and ready for use.

In-app integration
It is good practice to save some data on the device so the user does not have to manually enter the information needed every time we run the application. We should use Local Storage to store the user's login status, the tokens from the last login, and the user data. AngularJS has some modules that add Local Storage support to an application; I chose angularLocalStorage. It also has a cookie fallback, using the ngCookies module.
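A minimal, framework-free sketch of that caching idea (the key prefix and the shape of the stored object are assumptions; in the app this is handled by angularLocalStorage):

```javascript
// Persist the session after a successful login, and restore it on startup
// so the user does not have to log in again. `storage` is anything with
// the localStorage interface (setItem/getItem).
var STORAGE_PREFIX = 'myapp.';  // hypothetical prefix constant

function saveSession(storage, token, user) {
  storage.setItem(
    STORAGE_PREFIX + 'session',
    JSON.stringify({ loginSuccess: true, token: token, user: user })
  );
}

function restoreSession(storage) {
  var raw = storage.getItem(STORAGE_PREFIX + 'session');
  // Default to a logged-out state when nothing has been stored yet.
  return raw ? JSON.parse(raw) : { loginSuccess: false, token: null, user: null };
}
```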
So we should download these two modules and plug them into our index.html file.
Next, we should define the UI Router state for the account tab and the angularLocalStorage module dependency in our app.js file, and add a local storage prefix constant to our config constant.
In the services.js file we will create a new factory called User. This will contain a list of methods for user operations against the REST server, and for saving and deleting user data in local storage. We use the $rootScope service to have access to the login status and user information from any part of the application.
In tabs.html we add an Account tab link.
Let’s create a tab-account template. Here we should show the following: for an unauthorized user, login and register buttons that open a popup form for each action; and for a logged-in (authorized) user, information about the user and a logout button. To show these parts of the template conditionally, we use the ng-show directive and the loginSuccess variable that is stored globally in $rootScope.
To show popups with the login/register forms we should use the $ionicModal service, which comes with the Ionic Framework core. We must create a template for each modal window: login.html and register.html.
Finally, we should define a controller for each (login and register) popup. We initialize a new $ionicModal instance and set a template for it, creating methods to open and close the modals, and doLogin, doLogout and doRegistration actions. It will be easy to handle any error message because our User Factory methods return promises. Also, we should save user information to Local Storage and send requests to the server only if it is necessary.
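To illustrate why returning promises from the User factory keeps error handling simple, here is a framework-free sketch of a doLogin action; the function and property names are assumptions, not the repository's actual code.

```javascript
// userService.login(credentials) is assumed to return a promise that
// resolves with user data on success and rejects with a server message
// on failure. `scope` stands in for $rootScope.
function makeDoLogin(userService, scope) {
  return function doLogin(credentials) {
    return userService.login(credentials)
      .then(function (userData) {
        scope.loginSuccess = true;  // drives ng-show in the account tab
        scope.user = userData;
        return userData;
      })
      .catch(function (err) {
        scope.loginSuccess = false;
        scope.loginError = err;     // surface the message in the modal
        throw err;                  // let callers react as well
      });
  };
}
```

Because both outcomes flow through the same promise chain, the controller only has to attach one then/catch pair instead of passing separate success and error callbacks around.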
You can clone and try all this code from my GitHub repository; to get the code for this part, check out the part5 branch (just run “git checkout -f part5”). Now we can test the application in a browser - run the “ionic serve” command from the project directory and see the result of our work.
Tomorrow, I will add comments to articles and the ability for logged-in users to post a comment.
A very common use-case for Maestro is to launch a workflow in order to moderate some piece of content. You may have an expense form as a content type and you wish to have a manager review and approve it before handing it off to other departments for processing.
This post will show you how to fire off a moderation workflow after saving content with Rules.
Step 1: Create a simple test flow
I know you have a super-ultra-complex workflow, but it is best to get off the ground with a simple 1-step flow for the time being!
The content that you display on your Drupal site doesn’t necessarily have to be content that you own or store in your own database. In addition to your own content, there could be various use cases and methods for dynamically displaying data from outside sources and displaying it on your site.
In the Drupal modules that I inherit or review, I see a lot of different ways of factoring code out of the main module file into separate files. This can be useful for performance (to a limited extent) and legibility, but depending on how you do it, you might end up ironically spoiling both.
How should you break down your Drupal module files? Well, I'm not here to tell you the perfect file breakdown. Matching the architecture is good, although what "the architecture" means in Drupal 7 isn't clear. Outside of a Drupal 8/Symfony-style architectural model, there's a limit to how much the file breakdown really needs to match the architecture, and a limit to how useful doing so would be.
Sprints are times dedicated to focused work on a project or topic. People in the same location (physical space, IRC, certain issue pages) work together to make progress, remove barriers to completion, and try to get things to "done".
This summer, there are many Drupal 8-focused sprint opportunities before DrupalCon Barcelona.
Some of these are open to everyone, some have mentors or workshops to help new contributors, and some have limited space, depending on the event.

Earlier this summer
DrupalCon Los Angeles had very productive extended sprints, and the main sprint on Friday was huge! Since then, many sprints have continued the progress, including: Drupal Camp Spain, DrupalCamp Wroclaw, Moldcamp in Chișinău, Moldova, and Frontend United.
And, we had two sprints (New Hampshire and New Jersey) aided by Drupal 8 Accelerate. If you have more money than time for sprinting or resources for planning or hosting a sprint, and you want to help get Drupal 8 out, giving to D8 Accelerate really helps.

New Hampshire, USA
Jersey Shore Sprint, New Jersey, USA
Twin Cities Drupal Camp, Minneapolis, Minnesota, USA
Sprints are concurrent with the training day on Thursday, and concurrent with sessions Friday and Saturday. The dedicated sprint day is on Sunday. We are expecting about 10 people on Thursday and 60 people on Sunday. Sunday will have a workshop for new contributors, Core/Contrib sprints, and a Drupal 8 Manual sprint.
See the sprint page on the tcdrupal.org site for details.

June 25-28, 2015
Drupal North, Toronto, Canada
Sunday is a dedicated Drupal 8 sprint day. There will be a small unofficial sprint on Thursday.
The whole 4-day camp is focused on Drupal 8.

June 28, 2015
This is a dedicated one day sprint to help get Drupal 8 out.
See the announcement for details.

July 2-8, 2015
D8 Accelerate critical issue sprint, London
This 7 day sprint will be focused on Drupal 8 critical issues and space is limited.
See the groups.drupal.org post for details.

July 4, 2015
DrupalCamp Bristol, United Kingdom
Sprints will run concurrent with sessions on Saturday.

July 16-19, 2015
NYC Camp, New York, USA
Monday to Wednesday are sprint only days, with Drupal 8 Core and Media for Drupal 8 scheduled. Panopoly is scheduled for Tuesday and Wednesday. Sprints will also be concurrent with trainings, summits, and sessions on Thursday through Sunday.
DrupalCamp North, Sunderland, United Kingdom
GovCon, Bethesda, Maryland, USA
The GovCon conference is July 22-24, with sprints concurrent with sessions, but there will be two dedicated sprint days following on Saturday and Sunday, July 25 and 26, in Washington, DC at the ForumOne offices.

August 6-9, 2015
Sprints will be concurrent with the camp all days.
Last year was super focused and relaxing.

August 12-15, 2015
MidWest Developers Summit (MWDS), Chicago, Illinois, USA
4 days of only sprinting, hosted in the Palantir.net offices.
Details still to be announced. See the groups.drupal.org event page for more details.

September 11-18, 2015
Montréal to Barcelona Sprint, Montréal, Canada
For some people in North America, Montréal could be on the way to Barcelona, where DrupalCon extended sprints start September 19.
See the groups.drupal.org event page for more details.

P.S.
I will be at Twin Cities, GovCon, and MWDS.
If you want pizza you can either go to a cafe or order delivery, right? So why should Drupal be different? :) We made Drupal delivery possible to any Ukrainian city with DrupalTour!
Debian now has over 22,000 source packages and 45,500 binary packages. To counter that, the FTP masters and I have created a dak tool to automatically remove packages from unstable! This is also much more efficient than only removing them from testing! :)
The primary goal of the auto-decrufter is to remove a regular manual work flow from the FTP masters. Namely, the removal of the common cases of cruft, such as “Not Built from Source” (NBS) and “Newer Version In Unstable” (NVIU). With the auto-decrufter in place, such cruft will be automatically removed when there are no reverse dependencies left on any architecture and nothing Build-Depends on it any more.
Despite the implication in the “opening” of this post, this will in fact not substantially reduce the numbers of packages in unstable. :) Nevertheless, it is still very useful for the FTP masters, the release team and packaging Debian contributors.
The reason why the release team benefits greatly from this tool, is that almost every transition generates one piece of “NBS”-cruft. Said piece of cruft currently must be removed from unstable before the transition can progress into its final phase. Until recently that removal has been 100% manual and done by the FTP masters.
The restrictions on the auto-decrufter mean that we will still need manual decrufts. Notably, the release team will often complete transitions even when some reverse dependencies remain on non-release architectures. Nevertheless, it is definitely an improvement.
Omelettes and eggs: As an old saying goes “You cannot make an omelette without breaking eggs”. Less so when the only “test suite” is production. So here are some of the “broken eggs” caused by implementation of the auto-decrufter:
- A window of about 30 minutes during which “dak rm” (without --no-action) would unconditionally crash.
- A broken dinstall when “dak auto-decruft” was run without “--dry-run” for the first time.
- A boolean condition inversion causing removals to remove the “override” for partial removals (and retain it for “full” removals).
- As a side effect, this broke Britney a couple of times, because dak now produced some “unexpected” Packages files for unstable.
- Not to mention the “single digit bug closure” bug.
Of these, the boolean inversion was no doubt the worst. By the time we had it fixed, at least 50 (unique) binary packages had lost their “override”. Fortunately, it was possible to locate these issues using a database query, and they have now been fixed.
Before I write any more non-trivial patches for dak, I will probably invest some time setting up a basic test framework for dak first.
What happened in the reproducible builds effort this week:

Toolchain fixes
- Brendan O'Dea uploaded help2man/1.47.1, which adds support for setting the date of the generated pages via SOURCE_DATE_EPOCH.
- Emmanuel Bourg uploaded maven-debian-helper/1.6.12 which sets the locale to en_US when generating the javadoc.
- Emmanuel Bourg uploaded javatools/0.51 which sets the locale to en_US when generating the javadoc.
- Joachim Breitner uploaded haskell-devscripts/0.9.10 which will always run sorts in LC_ALL=C.
The following 12 packages became reproducible due to changes in their build dependencies: collabtive, eric, file-rc, form-history-control, freehep-chartableconverter-plugin, jenkins-winstone, junit, librelaxng-datatype-java, libwildmagic, lightbeam, puppet-lint, tabble.
The following packages became reproducible after getting fixed:
- acorn/0.12.0-1 uploaded by Bas Couwenberg, original patch by Reiner Herrmann.
- avarice/2.13+svn347-3 by Tobias Frost.
- br.ispell/3.0~beta4-19 by Agustin Martin Domingo.
- cpp-netlib/0.11.1+dfsg1-3 by Ximin Luo.
- device3dfx/2013.08.08-3 by Guillem Jover.
- dnsmasq/2.73-1 uploaded by Simon Kelley, original patch by Chris Lamb.
- eigen3/3.2.5-4 by Anton Gladky.
- eo-spell/3.0~beta4-19 by Agustin Martin Domingo.
- espa-nol/3.0~beta4-19 by Agustin Martin Domingo.
- firejail/0.9.26-1 by Reiner Herrmann.
- gcc-mingw-w64/15.2 by Stephen Kitt.
- geoip-database/20150616-1 uploaded by Patrick Matthäi, original patch by Reiner Herrmann.
- ikiwiki-hosting/0.20150614 by Simon McVittie.
- inkscape/0.91-5 by Mattia Rizzolo.
- jtreg/4.1-b12-1 by Emmanuel Bourg.
- libfile-scan-perl/1.43-3 uploaded by gregor herrmann, original patch by Niko Tyni.
- libgpiv/0.6.1-4.2 by akira.
- libnet-twitter-lite-perl/0.12006-4 by Niko Tyni.
- mingw-w64/4.0.2-5 by Stephen Kitt.
- oslo.messaging/1.8.3-3 uploaded by Thomas Goirand, original patch by Juan Picca.
- seabios/1.8.2-1 by Michael Tokarev. Suggested fix by Lunar based on recent upstream changes.
- thuban/1.2.2-7 uploaded by Bas Couwenberg, original patch by Reiner Herrmann.
- tiptop/2.2-3 by Tomasz Buchert.
- ucl/1.03+repack-3 by Robert Luberda.
Some uploads fixed some reproducibility issues but not all of them:
- amsynth/1.5.1-2 uploaded by Alessio Treglia, original patch by Dhole.
- brickos/0.9.0.dfsg-12 uploaded by Michael Tautschnig, original patch by akira.
- cucumber/2.0.0-1 by Cédric Boutillier; currently FTBFS.
- netbeans/8.0.2+dfsg1-2 by Markus Koschany.
- pyopencl/2015.1-2 uploaded by Tomasz Rybak, original patch by Tomasz Rybak.
Patches submitted which have not made their way to the archive yet:
- #788747 on 0xffff by Dhole: allow embedded timestamp to be set externally and set it to the time of the debian/changelog.
- #788752 on analog by Dhole: allow embedded timestamp to be set externally and set it to the time of the debian/changelog.
- #788757 on jacktrip by akira: remove $datetime from the documentation footer.
- #788868 on apophenia by akira: remove $date from the documentation footer.
- #788920 on orthanc by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
- #788955 on rivet by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
- #789040 on liblo by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
- #789049 on mpqc by akira: remove $datetime from the documentation footer.
- #789071 on libxkbcommon by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
- #789073 on libxr by akira: remove $datetime from the documentation footer.
- #789076 on lvtk by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
- #789087 on lmdb by akira: pass HTML_TIMESTAMP=NO to Doxygen.
- #789184 on openigtlink by akira: remove $datetime from the documentation footer.
- #789264 on openscenegraph by akira: pass HTML_TIMESTAMP=NO to Doxygen.
- #789308 on trigger-rally-data by Mattia Rizzolo: call dh_fixperms even when overriding dh_fixperms.
- #789396 on libsidplayfp by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
- #789399 on psocksxx by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
- #789405 on qdjango by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
- #789406 on qof by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
- #789428 on qsapecng by akira: pass HTML_TIMESTAMP=NO to Doxygen.
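Most of the Doxygen-related patches listed above boil down to the same one-line setting; in a project's Doxyfile it looks like this:

```
# Omit the generation timestamp from HTML footers; the timestamp changes
# on every build and makes the generated documentation unreproducible.
HTML_TIMESTAMP = NO
```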
Bugs with the ftbfs usertag are now visible on the bug graphs. This explains the recent spike. (h01ger)
Andreas Beckmann suggested a way to test building packages using the “funny paths” that one can get when they contain the full Debian package version string.

debbindiff development
Lunar started an important refactoring, introducing abstractions for containers and files in order to make file type identification more flexible, enable fuzzy matching, and allow parallel processing.

Documentation update
Ximin Luo detailed the proposal to standardize environment variables that pass a reference source date to tools that need one (e.g. documentation generators).

Package reviews
41 obsolete reviews have been removed, 168 added and 36 updated this week.
Some more issues affecting packages failing to build from source have been identified.

Meetings
Minutes have been posted for the Tuesday, June 16th meeting.
The next meeting is scheduled for Tuesday, June 23rd at 17:00 UTC.

Presentations
Annertech is #1 on Google for a number of key search phrases and when we're not, we're usually only beaten by the Drupal Ireland page from g.d.o. (groups.drupal.org/ireland). How did we get to the top of Google? How do we stay there? Two words: hard work - but it really revolves around two other words: content strategy. Let's get down to the details.
DrupalCamp St. Louis 2015 was held this past weekend, June 20-21, 2015, at SLU LAW in downtown St. Louis. We had nine sessions and a great keynote on Saturday, and a full sprint day on Sunday.
The view coming off the elevators at SLU LAW.
Every session was recorded (slides + audio), and you can view all the sessions online:
- Keynote - Finding the entrance: Why and how to get involved with the Drupal community
- .git basics
- Empowering Content Creators Through Better UX
- Out on a TWIG: custom theme development
- Securing Drupal
- Age of Content: Building Meaningful Content Strategies
- Scale up your Drupal Search
- High Performance Drupal
- Challenges of designing for a CMS
- Status of Migrate in 8
- Closing Session
The Camp went very well, with almost sixty participants this year! We had a great time, learned a lot together, and enjoyed some great views of downtown St. Louis (check out the picture below!), and we can't wait until next year's DrupalCamp St. Louis (to be announced)!
While walking, I started listening to Jeff Eaton’s Insert Content Here podcast episode 25: Noz Urbina Explains Adaptive Content. People must’ve looked strangely at me because I was smiling and nodding — while walking :) Thanks Jeff & Noz!
Jeff Eaton explained how the web world looks at and defines the term WYSIWYG. It turns out that in the semi-structured, non-web world that Noz comes from, WYSIWYG has a totally different interpretation. And they ended up renaming it to what it really was: WYSIWOO.
Jeff also asked Noz what “adaptive content” is exactly. Adaptive content is a more specialized/advanced form of structured content, and in fact “structured content”, “intelligent content” and “adaptive content” form a hierarchy:
- structured content
  - intelligent content
    - adaptive content
In other words, adaptive content is also intelligent and structured; intelligent content is also structured, but not all structured content is also intelligent or adaptive, nor is all intelligent content also adaptive.
Basically, intelligent content better captures the precise semantics (e.g. not a section, but a product description). Adaptive content is about using those semantics, plus additional metadata (“hints”) that content editors specify, to adapt the content to the context it is being viewed in. E.g. different messaging for authenticated versus anonymous users, or different nuances depending on how the visitor ended up on the current page (in other words: personalization).
Noz gave an excellent example of how adaptive content can be put to good use: he described how he had arrived in Utrecht in the Netherlands after a long flight, “checked in” to Utrecht on Facebook, and then Facebook suggested 3 open restaurants to him, including cuisine type and walking distance relative to his current position. He felt like thanking Facebook for these ads — which obviously is a rare thing, to be grateful for ads!
Finally, a wonderful quote from Noz Urbina that captures the essence of content modeling:
How descriptive do we make it without making it restrictive?
If it isn’t clear by now — go listen to that podcast! It’s well worth the 38 minutes of listening. I only captured a few of the interesting points, to get more people interested and excited.[1]

What about adaptive & intelligent content in Drupal 8?
First, see my closely related article Drupal 8: best authoring experience for structured content?.
Second, while listening, I thought of many ways that Drupal 8 is well-prepared for intelligent & adaptive content. (Drupal already does structured content by means of Field API and the HTML tag restrictions in the body field.) Implementing intelligent & adaptive content will surely require experimentation, and different sites/use cases will prefer different solutions, but:
- An intelligent_content module for Drupal 8: allow site builders/content strategists to define custom HTML tags (e.g. <product_description>) to capture site-specific semantics. A CKEditor Widget could hugely simplify the authoring experience for creating intelligent content, by showing a specific HTML representation while editing (WYSIWOO!), thanks to HTML (Twig) templates associated with those custom HTML tags.
- An adaptive_content module for Drupal 8: a text filter that allows any tag to be wrapped in an <adaptive_content> tag, which specifies the context in which the wrapped content should be shown/hidden.
- The latter leads to cacheability problems, because the same content may be rendered in a multitude of different ways. But thanks to cache contexts in Drupal 8, and the fact that text filters can specify cache contexts, adaptive content that is still cacheable is perfectly possible. (This is in fact exactly what cache contexts were intended for!)
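To make the adaptive_content idea concrete, here is a minimal Python sketch (not Drupal code; the tag attributes and function names are made up for illustration) of a text filter that shows or hides wrapped content depending on the viewing context, and reports which cache contexts the output depended on, so a cache could store one variant per context value:

```python
import re

# Hypothetical sketch: an <adaptive_content context="..." value="...">
# wrapper is kept or dropped depending on the viewing context, and the
# filter records which contexts the result varies by (its cache contexts).
TAG_RE = re.compile(
    r'<adaptive_content\s+context="(?P<ctx>[^"]+)"\s+value="(?P<val>[^"]+)">'
    r'(?P<body>.*?)</adaptive_content>',
    re.DOTALL,
)

def filter_adaptive(html, viewing_context):
    """Return (filtered_html, cache_contexts_used)."""
    used = set()

    def replace(match):
        ctx, val, body = match.group('ctx'), match.group('val'), match.group('body')
        used.add(ctx)  # the output now varies by this context
        return body if viewing_context.get(ctx) == val else ''

    return TAG_RE.sub(replace, html), used

html = ('<p>Hello!</p>'
        '<adaptive_content context="role" value="authenticated">'
        '<p>Welcome back.</p></adaptive_content>')

out, contexts = filter_adaptive(html, {'role': 'anonymous'})
# out is '<p>Hello!</p>'; contexts is {'role'}
```

A real implementation would of course hook into the render pipeline rather than regex over markup, but the shape — filter plus declared cache contexts — is the point.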
I think that those two modules would be very interesting, useful additions to the Drupal ecosystem. If you are working on this, please let me know — I would love to help!
That’s right, this is basically voluntary marketing for Jeff Eaton — you’re welcome, Jeff! ↩
In my long quest towards closing #540218, I have uploaded a new libept to experimental. Then I tried to build debtags on a sid+experimental chroot and the result runs but has libc's free() print existential warnings about whatevers.
At a quick glance, there are now things around like a new libapt, gcc 5 with ABI changes, and who knows what else. I estimated how much time it'd take me to debug something like that, and used that time instead to rewrite debtags in python3. It took 8 hours: 5 of pleasant programming and the usual tax of another 3 of utter frustration packaging the results. I guess I came out ahead of the risk of spending an unspecified number of hours of pure frustration.
So from now on debtags is going to be a pure python3 package, with dependencies on only python3-apt and python3-debian. 700 lines of python instead of several C++ files built on 4 layers of libraries. Hopefully, this is the last of the big headaches I get from hacking on this package. Also, one less package using libept.
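The core of such a tool is mostly text munging, which is where python3 shines. As a rough illustration (a toy sketch, assuming the simple "package: tag, tag, ..." line format of a debtags-style tag database; the real tool does far more):

```python
from collections import defaultdict

def parse_tags(lines):
    """Map each package name to its set of tags."""
    pkg_tags = {}
    for line in lines:
        line = line.strip()
        if not line or ':' not in line:
            continue
        # Split on the first colon only: tags like devel::lang:python
        # contain colons themselves.
        pkg, _, tags = line.partition(':')
        pkg_tags[pkg.strip()] = {t.strip() for t in tags.split(',') if t.strip()}
    return pkg_tags

def reverse_index(pkg_tags):
    """Map each tag to the set of packages carrying it."""
    by_tag = defaultdict(set)
    for pkg, tags in pkg_tags.items():
        for tag in tags:
            by_tag[tag].add(pkg)
    return by_tag

db = parse_tags([
    'debtags: devel::packaging, role::program',
    'python3-apt: devel::lang:python, role::program',
])
```

A couple of dozen lines like these replace whole C++ translation units, which is how 700 lines of python can stand in for several layers of libraries.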
Following a successful MidCamp and with some new ideas how to improve the kit, I was eager to hit the road for more testing. Problem is, I'm a freelancer with a limited budget, and getting to camps comes out of my own pocket. On a lark, I tweeted the following:
— Kevin Thull (@kevinjthull) April 8, 2015
To my delight, both Twin Cities and St. Louis camps took me up on my offer. Of course, the stakes are even higher now, because it's no longer my own money on the line.
But I'm also feeling more confident about this solution, and I improve on the process with each camp. Connecting to non-HDMI-capable laptops remains the biggest challenge overall. I've added a couple of full-size DisplayPort to HDMI converters and even successfully tested a new VGA to HDMI converter that got my ancient Sony VAIO to display on my home flatscreen:
— Kevin Thull (@kevinjthull) June 16, 2015
And at DrupalCamp STL I finally got the 100% success rate that I've been shooting for! Three sessions needed fixing in post, but overall, this camp went very smoothly. A huge bonus was the fact that the two rooms were next to each other, minimizing the distance to cover when trying to coordinate laptop hookups and verify timely starts and stops of the recordings.
Twin Cities is next week, with a much more challenging schedule: five concurrent sessions across two buildings and multiple floors. My Fitbit will likely hit a new high. That, and I need to finally get down to some documentation and podium signage. It's time to share the knowledge I've gained and get more hands and minds involved.
And now for the learnings from DCSTL:
- swapping thumb drives throughout the day means recordings can be posted during camp
- well-timed presenter starts/stops means no trimming, which means more recordings can be posted during camp
- one room had screen flicker and setting the PVR resolution to 1080 helped (typically, the resolution needs to come down to 720 for this, as well as fixing color shifts)
- having extra SD cards means bad audio can be fixed during down times, which means more recordings can be posted during camp
- power strips at the podium shouldn't be assumed, and the powered USB hub and voice recorder both have short plugs
- never plug the powered USB hub into the laptop, because that can kill your recording if the resolution changes or the laptop goes to sleep
- taping down individual components means less cord chaos throughout the day
- access to an ethernet port with a reasonably large upstream pipe will get videos posted faster
Recently I've been experimenting with camlistore, which is yet another object storage system.
Camlistore is designed exactly how I'd like an object-storage system to be designed - each server allows you to:
- Upload a chunk of data, getting an ID in return.
- Download a chunk of data, by ID.
- Iterate over all available IDs.
It should be noted that more is possible (there's a pretty web UI, for example), but I'm simplifying. Do your own homework :)
With those primitives you can allow a client-library to upload a file once, then in the background a bunch of dumb servers can decide amongst themselves "Hey I have data with ID:33333 - Do you?". If nobody else does they can upload a second copy.
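The three primitives and the "do you have this ID?" chatter can be sketched in a few lines of Python (a toy model, not camlistore's actual API; camlistore's blob refs are hash-based, and this sketch just uses SHA-256 hex digests as IDs):

```python
import hashlib

class BlobServer:
    """A dumb server offering the three primitives: upload, download, iterate."""

    def __init__(self):
        self.blobs = {}

    def upload(self, data: bytes) -> str:
        # Content-addressed: the ID is derived from the data itself,
        # so the same chunk always gets the same ID on every server.
        blob_id = hashlib.sha256(data).hexdigest()
        self.blobs[blob_id] = data
        return blob_id

    def download(self, blob_id: str) -> bytes:
        return self.blobs[blob_id]

    def ids(self):
        return set(self.blobs)

def replicate(source, target):
    """Copy over any blobs the target is missing."""
    for blob_id in source.ids() - target.ids():
        target.upload(source.download(blob_id))

a, b = BlobServer(), BlobServer()
blob_id = a.upload(b'hello object storage')
replicate(a, b)  # b now holds a copy under the same ID
```

Because IDs are content-derived, servers can compare ID sets to negotiate replication without any coordinator, which is exactly what decouples replication from storage.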
In short this kind of system allows the replication to be decoupled from the storage. The obvious risk is obvious though: if you upload a file the chunks might live on a host that dies 20 minutes later, just before the content was replicated. That risk is minimal, but valid.
There is also the risk that sudden rashes of uploads leave the system consuming all the internal bandwidth, constantly comparing chunk-IDs to see whether data that has already been copied numerous times needs replicating, or trying to play "catch-up" if the new content arrives faster than the replication bandwidth can handle. I guess it should be possible to detect those conditions, but they're things to be concerned about.
Anyway the biggest downside with camlistore is the lack of documentation about rebalancing, replication, or anything other than simple single-server setups. Some people have blogged about it, and I got it working between two nodes, but I didn't feel confident it was as robust as I wanted it to be.
I have a strong belief that Camlistore will become a project of joy and wonder, but it isn't quite there yet. I certainly don't want to stop watching it :)
On to the more personal .. I'm all about the object storage these days. Right now most of my objects are packed in a collection of boxes. On the 6th of next month a shipping container will come pick them up and take them to Finland.
For pretty much 20 days in a row we've been taking things to the skip, or the local charity-shops. I expect that by the time we've relocated the amount of possessions we'll maintain will be at most a fifth of our current levels.
We're working on the general rule of thumb: "If it is possible to replace an item we will not take it". That means chess-sets, mirrors, etc, will not be carried. DVDs, for example, have been slashed brutally such that we're only transferring 40 out of a starting collection of 500+.
Only personal, one-off, unique, or "significant" items will be transported. This includes things like personal photographs, family items, and similar. Clothes? Well I need to take one jacket, but more can be bought. The only place I put my foot down was books. Yes I'm a kindle-user these days, but I spent many years tracking down some rare volumes, and though it would be possible to repeat that effort I just don't want to.
I've also decided that I'm carrying my complete toolbox. Some of the tools I took with me when I left home at 18 have stayed with me for the past 20+ years. I don't need this specific crowbar, or axe, but I'm damned if I'm going to lose them now. So they stay. Object storage - some objects are more important than they should be!
The main competitor of these two compilers, and the most promising one, is GHCJS. Back then, it was too annoying to install. But after two years, things have changed, and it only takes a few simple commands to get GHCJS running, so I finally created the circle packing demo in a GHCJS variant.
With GHCJS now available at my fingertips, maybe I will produce some more Haskell to be run in your browser. For example, I could port FrakView, a GUI program to render, explore and explain iterated function systems, from GTK to HTML.