Weblog
Presentations at Zest
This Friday we had various presentations and food at the Zest Software headquarters.
Every few months at Zest Software we have our so-called "eten en weten" meetings; literally: eating and knowing, but in Dutch it rhymes. Basically it means: presentations by colleagues at the end of the afternoon, then food, then more presentations, with some discussions sprinkled around. The later the evening, the more technical the presentations get, so the less technical colleagues can flee. :-) It is also a nice way to catch up with former colleagues who are invited; this time we were joined by Joris Slob; always a pleasure. Meet our strong team here.
Coaching
It started off with Esther and Jean-Paul introducing new coaching options for the team members. Everyone gets the chance to have a few talks per year with a coach, paid for by Zest. The coaches were also there to introduce themselves. A very nice option.
Fred: breathe in, breathe out
Then colleague Fred happily surprised us with a workshop about breathing techniques. In his spare time he sings in a choir, so he has experience here. Sorry, I don't think any pictures were taken. :-)
Laurens: featured features
Our CSS specialist Laurens showed some nice features he found on several websites. Ask him for some links. :-)
Joris: the semantic web
Now the floor was for former colleague Joris Slob, currently working at the Leiden Institute of Advanced Computer Science. He talked about the semantic web and the technologies that float around in that area. Web 1.0 can be summarised as: smart people put things on the web; less smart people only look at it. With web 2.0 lots more people can dump content, whether it is smart or not. But there is nothing fundamentally new about this web: it is just a different use of the same old techniques. For that you can look at what some call web 3.0: the semantic web. Computers currently do not understand the semantics of web pages. They fetch an html page and see something like this:
blah blah blah blah blah blah
blah blah important blah blah blah
Let Google search for images of jaguars and you will get both cars and animals. But try Google's Wonder wheel: it realises you could be looking for jaguar cars, jaguar parts, jaguar animals and more.
As human you have knowledge. What do you do with that knowledge? Where do you store it? Can you put it in a database? Our databases do not fit this reality very well.
In comes RDF, the Resource Description Framework. RDF offers user-defined relations, instead of a rigid database where all possible relations between items in tables have to be defined beforehand. RDF consists of triples: subject + predicate + object = fact. For example: Joris (subject) likes (predicate) Python (object). So the predicate is the relation between subject and object.
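In code terms, a triple is nothing more than a three-element tuple, and a knowledge base is a collection of them. A minimal sketch in plain Python (toy data, not real RDF syntax):

```python
# A fact is a (subject, predicate, object) triple;
# a knowledge base is simply a collection of them.
facts = {
    ("Joris", "likes", "Python"),
    ("Joris", "works_at", "LIACS"),
}

# The predicate is the relation between subject and object:
assert ("Joris", "likes", "Python") in facts
```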
But how do you know who this "Joris" is? Lots of people are called Joris, at least in The Netherlands. So you use a URI, a Uniform Resource Identifier, like this: http://www.liacs.nl/~jslob/foaf.rdf#me
This URI uses the Friend Of A Friend (foaf) vocabularies. These describe relations between persons. In this example we get an xml representation of the RDF triples, but they can also be stored in the native RDF non-xml format.
Another part of the semantic web is OWL, the Web Ontology Language. (You may think this three letter acronym is in the wrong order, but the French would disagree with you; just ask our French colleague Vincent. Update: actually, see the section on the acronym in the encyclopedia for the real reason.) OWL gives greater expression possibilities. You also need reasoners. Reasoners can go through your knowledge base and find inconsistencies, or infer new relations. Fact 1: Joris is a human. Fact 2: humans have 2 legs. So we infer a third fact: Joris has 2 legs. Some inconsistencies can be resolved: "Obama is a good president" versus "Obama is a bad president" could become: 'John thinks "Obama is a good president"' and 'Lucy thinks "Obama is a bad president"'.
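A reasoner of the "Joris has 2 legs" kind can be sketched in a few lines of Python. This toy version applies one hard-coded inference rule over a set of triples; it is in no way a real OWL reasoner, just an illustration of the inference step:

```python
def infer(facts):
    """If (X, is_a, C) and (C, has_legs, N), infer (X, has_legs, N)."""
    inferred = set()
    for (x, p, c) in facts:
        if p != "is_a":
            continue
        for (c2, p2, n) in facts:
            if c2 == c and p2 == "has_legs":
                inferred.add((x, "has_legs", n))
    # Only return facts we did not already know.
    return inferred - facts

facts = {("Joris", "is_a", "human"),
         ("human", "has_legs", 2)}
facts |= infer(facts)
print(("Joris", "has_legs", 2) in facts)  # True
```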
Then there are ontologies. Ontologies define possible relations in a knowledge domain. Examples are SIOC, Dublin Core, Foaf, Good Relations and Basic Geo Vocabulary. If you can reuse an existing one, that is very good, as you do not want to create your own if you can avoid it.
To get information out of a knowledge base, you can use SPARQL, which is a query language that looks a bit like SQL. To get everything, you would write:
SELECT ?s ?p ?o WHERE { ?s ?p ?o }

Another part of the semantic web is formed by microformats. These embed meaning in html. Examples are hCard, hCalendar, RDFa (pushed by Google, blocked by Adobe for HTML5). Also: Google Rich Snippets. This can for example be used to have rating information (4 out of 5 stars) show up in Google search results.
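The idea behind that query, where every unbound variable matches anything, can be mimicked in a few lines of plain Python. This is a toy pattern matcher, not SPARQL; `None` plays the role of a `?variable`:

```python
def match(graph, s=None, p=None, o=None):
    """SPARQL-like pattern matching: None acts as a ?variable."""
    return [(ts, tp, to) for (ts, tp, to) in graph
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

graph = [("Joris", "likes", "Python"),
         ("Joris", "likes", "RDF"),
         ("Maurits", "likes", "Plone")]

# Everything, like SELECT ?s ?p ?o WHERE { ?s ?p ?o }:
everything = match(graph)
# Just the objects, like SELECT ?o WHERE { Joris likes ?o }:
print([o for (_, _, o) in match(graph, s="Joris", p="likes")])
```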
For the python programming language you want to look at the rdflib library. Most readers of this blog will likely be familiar with Zope, so I'll mention that you can also use the ZODB (Zope Object DataBase) as a back end.
Finally, if you want to learn more, you should read this book: Programming the Semantic Web, by Toby Segaran, Colin Evans and Jamie Taylor.
Vincent: jQuery, plus libraries, plus Proxy
At this time the non-technical people made good their escape, and the rest buckled up to learn about the wonderful world of jQuery, several jQuery libraries, and the soon to be released jQuery Proxy, made by Vincent. He made the presentation with live demos using Django; there was also a pony somewhere...
The jQuery principle is: write less, do more. You usually do not have to care about differences between browsers. It has got great documentation.
Some libraries are:
- jQuery UI: the official UI for jQuery. It has themes, widgets, interactions (like draggable, droppable, resizable).
- jQuery Tools: less effects than jQuery UI, more web oriented, with tabs, tooltip, overlay, expose, scrollable.
You can find lots more on http://plugins.jquery.com/ which is basically a pypi or CPAN for jQuery, including a demo for each plugin. Some that Vincent uses are cookie, color and lightbox.
And now there is jQuery Proxy, written by Vincent. This gives easy integration of AJAX in python sites. At least it works in Django and Plone. You get jQuery syntax in python code and can send this back to the browser. It also allows using jQuery libraries/plugins. It could eventually replace KSS. The way you use it is for example this:
from jquery.pyproxy.django import JQueryProxy, jquery

# The @jquery decorator handles the transformation of your results
# into JSON so we can decode it on the client side.
@jquery
def ajax_add_comment(request):
    # The JQueryProxy object helps us to manipulate the page the user sees.
    jq = JQueryProxy()
    # The data/form sent with Ajax appears like a classical POST form.
    form = request.POST
    # We do some validation of the form.
    ...
    if errors:
        # We display an error message.
        jq('#my_error_message').show()
        return jq
    ...
    # We display a success message.
    jq('#my_success_message').show()
    return jq
The source is available on github and we will hopefully see a release next weekend. Very interesting!
Mark: DVCS
Next up was Mark, with a talk about Distributed Version Control Systems (DVCS), or as he liked to call it: git. :-) See his weblog for the slides. At Zest we are using subversion to store our code and share it among the developers. With subversion you have a central server that every commit goes to. But some would prefer doing local commits, so as not to disturb others with experiments or with a temporarily broken code state, or just because they are in a train and have no network connection. This is where distributed systems like git and mercurial or bazaar shine. You can still use a central repository if you want to though.
With git you have a complete local repository, which makes it much faster, as you need less network traffic: basically the only times you need to connect to a different server are when you pull code (get new code from trunk or a branch) and when you push code (send your changes to a server). Checking in code is fast, because you do not need to get fresh code first (svn up) to see if someone else changed something, so you do not get clashes when checking in; and the commit is local, which makes it extra fast.
Not everyone at Zest is sold on the idea of using git yet. For now we will keep using subversion. Those wanting to use git can use git-svn, and some have been doing that already. If possible we'll try to keep our buildouts git-friendly; we have already created some client buildouts with the client packages directly in the buildout src directory instead of including them using infrae.subversion, mr.developer or svn:externals (though the last one is apparently less of a problem).
Maurits: collective.watcherlist
I was the last presenter of the evening. I talked about a client project, but there was a Non-Disclosure Agreement involved, so I have probably already said too much. :-) So we will skip that and jump straight to my presentation about a new package I have created, called collective.watcherlist.
collective.watcherlist is a package that enables you to keep a list of people who want to receive emails when an item gets updated. The main use case is something like Products.Poi, an issue tracker for Plone. In such a tracker people can add issues. The tracker has managers. Every time a new issue is posted, the managers should receive an email. When a manager responds to an issue, the original poster (and the other managers) should get an email. And anyone interested in following an issue should be able to add themselves to the list of people who get an email.
The origins of collective.watcherlist also lie in the mentioned Products.Poi package, first created by Martin Aspeli and now maintained by me. A while ago I fixed some bugs in the email sending part of Poi, as sending international emails can be tricky. After this, I thought this part of the code was quite solid (the rest as well, actually :-)) and could be useful for other packages that needed to send out email. So I decided to factor this code out into a separate package. This also made some parts of Poi cleaner and simpler.
There is no release as of this writing, but the source code is in the collective. You may want to grab the buildout of a branch of Poi that uses it. That branch of Poi is meant for Plone 4. I should create a proper alpha release for that soon. The collective.watcherlist package itself works just fine on both Plone 3.3 and 4, though there are some minor test failures on Plone 3. The test coverage is at a solid 98 percent, so I am quite sure most bugs have been ironed out.
I showed what the package does by going through the code in the sample directory. For this blog entry I will resort to some copy-pasting from the readme file.
It is not a package for end users. Out of the box it does nothing. It is a package for integrators or developers. You need to write some python and zcml in your own package (like Poi now does) to hook collective.watcherlist up in your code.
collective.watcherlist might also be usable as a basis for a newsletter product. If you feel Singing and Dancing is overkill for you, or too hard to adapt to your specific needs, you could try writing some code around collective.watcherlist instead.
In its simplest form, the needed integration is this:
- Register an adapter from your content type to collective.watcherlist.interfaces.IWatcherList. In a lot of cases using the default implementation as factory for this adapter is fine: collective.watcherlist.watchers.WatcherList
- Create an html form where people can add themselves to the watcher list.
- Register a BrowserView for your content type, inheriting from collective.watcherlist.browser.BaseMail and override its properties subject, plain and/or html.
- Create an event handler or some other code that gets the adapter for your content type and uses that to send an email with the subject and contents defined in the browser view you created.
That is it. There are currently no viewlets, portlets or other templates that you need to override, so it should be easy to fit into the theme of your website, provided you do not mind coming up with a UI yourself. The sample directory is a good spot to look for the basis of that though.
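Stripped of all the Plone machinery (adapters, zcml, browser views), the pattern behind those steps can be sketched as a framework-free toy in plain Python. The class and method names here are illustrative only, not the actual collective.watcherlist API:

```python
class ToyWatcherList:
    """Minimal stand-in for a watcher list: who gets mail on updates."""

    def __init__(self):
        self.watchers = set()

    def subscribe(self, email):
        self.watchers.add(email)

    def unsubscribe(self, email):
        self.watchers.discard(email)

    def send(self, subject, body):
        # In the real package the email would be composed from a browser
        # view's subject/plain/html properties; here we just collect the
        # messages that would be sent, one per watcher.
        return [(email, subject, body) for email in sorted(self.watchers)]


# An "event handler" reacting to an issue change:
issue_watchers = ToyWatcherList()
issue_watchers.subscribe("manager@example.org")
issue_watchers.subscribe("poster@example.org")
mails = issue_watchers.send("Issue updated", "There is a new response.")
```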
Future plans include:
- an option to send to all members (though this should already be very easy to do yourself),
- allow setting a different from-address,
- personalised emails, even if just with a footer to point to an unsubscribe option,
- optionally storing more info per subscriber, like a preference for plain text or html,
- more newsletter-like functionality, for example double-opt-in (similar to what the PasswordResetTool does when creating a Plone member).
A first look at Erik Rose: Plone 3 for Education
Erik Rose has written a book about: Plone 3 for Education. Here are my first impressions.
Plone 3 for Education is targeted at people working in education or in other larger organizations. You are the webmaster responsible for one or more websites of the organization. You are comfortable using Plone daily, but there just never is time to do everything that is needed. You want to delegate responsibilities to other users and wonder what is the best way to do that. The website needs some extra functionality and you would like to use an existing product for that; but which one is the safe bet for the future? And the website looks a bit outdated and could use some visual freshness. You have done a few tweaks in the Zope Management Interface, but have also heard that this is not the best way. So how do you make a nice theme these days?
If you recognize yourself in that description, then this book is for you. Hardened Plone programmers will not find much news here; still, most chapters give info about how best to use some third party products, like Plone4Artists Calendar and Faculty/Staff Directory. If you know what these products can do, you become a better consultant; and you know when to pick one of these products off the shelf instead of programming something totally new.
As the preface says, most chapters stand on their own. So the first thing I did, was to take on chapter 7: Creating Forms Fast. This is about PloneFormGen. My experience with PloneFormGen was mostly how to add it in a buildout, as at Zest Software we use this for quite a lot of clients. I do not think I have ever actually used it myself. So this looked like an interesting chapter to start with.
PloneFormGen is well maintained by Steve McMahon, who was one of the reviewers of this book, so you can be pretty sure the information in this chapter is correct.
The chapter starts out by telling you to "install PloneFormGen by adding Products.PloneFormGen to your buildout, as per usual." So you are expected to know a bit about buildout already. Earlier chapters may explain a bit more about this. The chapter then continues with a few very practical steps to take in the css and javascript registry when you want to support adding Rich Text or Date/Time fields on forms. Good to know.
Erik then takes you through your first steps with PloneFormGen, adding a FormFolder in the site and doing a bit of editing there. He presents all form fields that you can add. He explains that you should edit the default Mailer form action and set a recipient email address there, otherwise form submission will fail. When you add a Save Data Adapter, to store the submissions in the zodb, you get two valuable tips. Always keep a mailer adapter as backup in case something goes wrong and you lose the saved data; and do not remove or reorder fields when the form is already live, as the saved data will not get changed to fit.
The chapter gives a short recommendation on when to use PloneFormGen and when to create an Archetypes content type. Then it ends with giving you a taste of the flexibility of PloneFormGen. You can use it to create online tests for your students. You can use it to create a simple form (or a complex one if your organization needs that) as front end for creating news items (or other content items).
I'll write another review with a look at the other chapters later. If those chapters are similar to this one (and I have peeked already), then this looks like a very practical book. It presents clear goals, with step-by-step instructions to reach them, without magically sounding jargon, and with some hard earned wisdom so you can step around the common pitfalls. I think a lot of people could benefit from this.
Disclaimer: I got this book for free from Packt Publishing in exchange for a review; and ordering the book via one of the links in this article will land me some money.
Indexes in catalog.xml considered harmful
Do not add indexes in catalog.xml. Do that in a separate import or upgrade step. Read on for how to do that; I will throw in some general GenericSetup best practices along the way.
Basic use of catalog.xml
Using GenericSetup you can add indexes and metadata columns to the portal_catalog with a catalog.xml file like this:
<?xml version="1.0"?>
<object name="portal_catalog">
  <index name="getSomething" meta_type="KeywordIndex">
    <indexed_attr value="getSomething" />
  </index>
  <column value="getSomething" />
</object>
Specifying an index will add an index for getSomething in the portal_catalog so you can search on it. Specifying a column will add getSomething to the metadata of the catalog brains, so you can ask a brain what its value is for getSomething. These are very different use cases, so before you add both an index and a column you may want to consider whether you really need both or whether one of them is enough.
Anyway, specifying a column here is fine. Nothing wrong with it. Do note that when you add a column here this does not make getSomething available in the current brains in the catalog. You will need to do a reindex; a clear and rebuild of the catalog would do it, but it may be enough to find and reindex items of one specific content type that has this field. Depending on your specific situation this may or may not be an issue.
What happens with indexes?
What is almost never a good idea, however, is specifying an index here. This creates the index in the portal_catalog. The index is not filled automatically, so you will have to reindex it manually (or write some code for that). But what happens the next time you reinstall your product or reapply your profile? The index gets removed and recreated. So the index is empty and you will need to reindex it manually again! That is not very handy.
This might be fixable in the GenericSetup import handler for catalog.xml. But this is hard to do as it is currently not possible to verify without a doubt that the index that is currently in the portal_catalog has the same configuration as specified in the catalog.xml. For example, the id might be the same but the existing index might be a FieldIndex and catalog.xml might specify a KeywordIndex. This specific check might be doable, but there are other indexes for which this is not so simple.
Import handler
So, what do you do instead? You add an import handler. I have done that in several products, so instead of copy-pasting code from one of those products I might as well copy-paste it from my weblog. :-)
Write an import step in setuphandlers.py:
import logging

from Products.CMFCore.utils import getToolByName

# The profile id of your package:
PROFILE_ID = 'profile-your.product:default'


def add_catalog_indexes(context, logger=None):
    """Method to add our wanted indexes to the portal_catalog.

    @parameters:

    When called from the import_various method below, 'context' is
    the plone site and 'logger' is the portal_setup logger.  But
    this method can also be used as an upgrade step, in which case
    'context' will be portal_setup and 'logger' will be None.
    """
    if logger is None:
        # Called as upgrade step: define our own logger.
        logger = logging.getLogger('your.package')

    # Run the catalog.xml step as that may have defined new metadata
    # columns.  We could instead add <depends name="catalog"/> to
    # the registration of our import step in zcml, but doing it in
    # code makes this method usable as an upgrade step as well.  Note
    # that this silently does nothing when there is no catalog.xml, so
    # it is quite safe.
    setup = getToolByName(context, 'portal_setup')
    setup.runImportStepFromProfile(PROFILE_ID, 'catalog')

    catalog = getToolByName(context, 'portal_catalog')
    indexes = catalog.indexes()
    # Specify the indexes you want, with ('index_name', 'index_type'):
    wanted = (('getSomething', 'FieldIndex'),
              ('getAnother', 'KeywordIndex'),
              )
    indexables = []
    for name, meta_type in wanted:
        if name not in indexes:
            catalog.addIndex(name, meta_type)
            indexables.append(name)
            logger.info("Added %s for field %s.", meta_type, name)
    if len(indexables) > 0:
        logger.info("Indexing new indexes %s.", ', '.join(indexables))
        catalog.manage_reindexIndex(ids=indexables)


def import_various(context):
    """Import step for configuration that is not handled in xml files.
    """
    # Only run step if a flag file is present.
    if context.readDataFile('your_package-default.txt') is None:
        return
    logger = context.getLogger('your.package')
    site = context.getSite()
    add_catalog_indexes(site, logger)
If you need to replace an existing FieldIndex with a KeywordIndex this code is not enough, but we ignore that possibility here.
The rest should be nothing new, but let's make it clear and explicit by showing everything here.
Register your GenericSetup code
I usually end up moving the registration of GenericSetup profiles, import steps and upgrade steps into a separate zcml file called profiles.zcml. We need to include that in our configure.zcml:
We register our profile and our steps in profiles.zcml:
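As a sketch of what that registration looks like (package names like your.package and your.product are placeholders here; the directives are the ones provided by Products.GenericSetup, and configure.zcml includes this file with a single `<include file="profiles.zcml" />` line), profiles.zcml looks roughly like:

```xml
<configure
    xmlns="http://namespaces.zope.org/zope"
    xmlns:genericsetup="http://namespaces.zope.org/genericsetup"
    i18n_domain="your.package">

  <genericsetup:registerProfile
      name="default"
      title="your.package"
      directory="profiles/default"
      description="Installs your.package"
      provides="Products.GenericSetup.interfaces.EXTENSION"
      />

  <genericsetup:importStep
      name="your_package-default"
      title="your.package setup"
      description=""
      handler="your.package.setuphandlers.import_various"
      />

  <!-- The source version is an example; the destination must match
       the version in profiles/default/metadata.xml. -->
  <genericsetup:upgradeStep
      title="Add catalog indexes"
      source="1000"
      destination="1001"
      handler="your.package.setuphandlers.add_catalog_indexes"
      profile="your.product:default"
      />

</configure>
```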
metadata.xml
Create a profiles/default directory if this does not exist yet. This must have a metadata.xml file like this:
<?xml version="1.0"?>
<metadata>
  <version>1001</version>
</metadata>
The version number should be an integer. This profile version has nothing at all to do with the number in our version.txt or setup.py, but that is a different discussion. The destination number in our last upgrade step registration must match this metadata version.
Flag file
When you apply a GenericSetup profile or (re)install a product, every import step defined by any package is called. One step looks for a catalog.xml file within the profile directory of the profile that is being applied and exits if it is not there; another looks for a skins.xml and exits if it is not there. Our own import step must do the same, otherwise our code is executed far too often, even when our product is not installed.
As seen above, our import_various import handler starts with this check:
if context.readDataFile('your_package-default.txt') is None: return
So we must add a file with the name your_package-default.txt in profiles/default. The contents don't really matter; it can be something like this:
Flag file for the import handler of your.package
Sprint report Sunday
- People have been signing contributor agreements. We got 23 new core Plone contributors this weekend. Fantastic!
- Deco: testing javascript stuff (30 percent coverage currently), describing use cases of deco UI, see what differences will be with Plone 4 and 5.
- slc.linguatools: interface improvements, rewrite from scratch, zc3 based, making sure functionalities are working.
- funkload buildout, measure nightly performance testing: it works, except creation of a Plone Site currently fails (please help fix this).
- collective.hostout: worked on multiple python versions, completed plugin for mr.developer integration, database integration, fixed bugs, helped out video guys.
- amberjack: show to user which tours are completed and which not, translation work, completed some of the tours, talked about using amberjack in third party projects.
- Trying to revitalize the Italian Plone community: talking, a user map on Google.
- Content import and export, transmogrifier: using zope.brokenfile, dependency graphs, store everything on portal.
- mr.git: commands, detailed readme.
- testing crawler: create a very small tests.py file that finds all tests
- QA: testing packages against different Plone versions with buildbot.
- fluxbin application/tool: routing, website
- blob support: CMFEditions, fixed bugs, 1 line of zcml and 3 lines of code will get you blob support.
- Singing and Dancing: Italian translation, discussion on users handling.
- Plone Marketing material: get existing material and links.
- Video: annotations for storing height, width, etc. We think various sizes can be stored, collective.flowplayer work. We will be sprinting at the open society, near the Basilica, with food. Contact us. Send mail to participants list.
- Roadrunner: working now with dexterity cts, working with z3c.autoinclude, preparing 0.3 release. Check out the dev version and try it out.
- Banjo: point and click improvements.
- Limi: Plone now has fewer css files; moved files to the Plone Classic Theme. Some strategic talking: how to improve the templating story and the add-on story, which things to move to WSGI, what we can kill off in Zope 2, how to handle AT versus Dexterity references, what parts of CMF to keep.
- 3rd party products: PloneSoftwareCenter, ZopeSkel, Scrawl, PressRelease, several others.
First day sprint report
Plone conference 2009
- ZopeSkel ui has 100% test coverage now
- folder based folder view has improved, should help get CMF 2.2 closer
- OTTO: new logo, example package
- getpaid: testing new branch, select currency in setup panel, got more ideas
- roadrunner: test coverage added, newer zope.testing (need more info, please let me know), if you care about Plone 2.5 let me know; tomorrow will work on z3c.autoinclude.
- hostout: getting tests complete, installing and initializing server, work on supervisor
- LinguaTools: improving test coverage and fixed some bugs because of that, jquery integration.
- Third party products for Plone 4: mr.parker to check if a package has only one owner on pypi; updating CacheSetup, Poi, AddRemoveWidget, DataGridField, collective.ads, collective.classifieds, collective.discuss, Scrawl, Collage, ref browser widget, Maps, uploadify, flash upload, image editor, ploneboard.
- Social media: moving plone.org to WordPress (joke), brainstorming, writing.
- Versioning and CMFEditions for dexterity and East Asian Language.
- AGX: added UML 2.2 profile support, needed for generation chain, work on transformation.
- Singing and Dancing newsletter package: closing bugs, blueprinting some new features, refactoring for features that we want, work on letting portal users and groups be subscribers.
- Video sprint: cleaned up the Plumi buildout, translated into Indonesian (also Plone 3 core), blob work (come hang out with us if you know about blobs), plonevideosuite buildout (in the collective), uploadify integration, TinyMCE integration to render as a player, Creative Commons licenses. Tomorrow: podcasting, metadata extraction, nginx.
- Banjo: getting up to speed getting Deliverance installed and comfortable with it, looking at jqGrid for better UI integration.
- Amberjack: finished tour number 9, fixed problems with kupu, great new things, graphical elements to make it more usable, translations for Slovenian, Italian, Spanish and Polish.
- Plone social RPX platform for SSO, like with google, facebook. Profile editing view. Code is at bit bucket.
- Integration of git for Plone dev tool chain. Svn upstream, git locally, caching, concept is finished and we started working, shorter-named repositories in mr.developer.
- Limi: I have written zero lines of css today, helped people get Plone 4, lots of discussions. I released Firefox 3.6 beta 1; taking an image and dropping it into Deco is now possible with that; the ZMI security page does not lock up anymore. xdv, theme discussions.
- xdv fixed on Plone 4.
- Deco: some dexterity issues, fixes.
- Blob types, LinguaPlone adding tests, ATContentTypes can remain unchanged to keep working in Plone 3 and 4.
- Funkload used for load testing of core Plone.
Tomorrow we start at 9:00.