Plone

Érico Andrei: The case for Plone Distributions

published Oct 30, 2014, last modified Nov 16, 2014

Talk by Érico Andrei at the Plone Conference 2014 in Bristol.

In some cases there are many sites that are almost the same, for example universities using Plone EDU. Different look and feel, different content, same base.

At Simples Consultoria we have been successful in dealing with this market.

Marketing Plone is very hard. It can do so much and is so flexible; how are you going to market that to a specific audience?

Marketing a solution is easier. If a client wants an intranet, I am not going to tell him that we made a portal for the Brazilian government.

Customizing Plone should not be hard. Distributions should be first-class citizens. Offer a download specifically for education or for an intranet on plone.org. The code we use to make the standard installers, VMs, and packages should be there for distributions too.

We need documentation, tests. We would love to have help with Jenkins and stuff.

Talk the customer's language, know their needs. For example PloneGov and local initiatives.

Something like the Plone Intranet Consortium is the very best thing that happened to Plone in a long time. We need to work like this. Companies should act together. Plone and Intranet are a perfect match. Bring new people to Plone. Companies will save Plone. We love Plone, but Plone needs customers and companies.

Watch the video of this talk.

Eric Bréhault: Running a Plone product on Substance D

published Oct 30, 2014, last modified Nov 16, 2014

Talk by Eric Bréhault at the Plone Conference 2014 in Bristol.

Why should you want to create or run a Plone product on Substance D? Because it is fun. It might be a good experience for the future of Plone.

Substance D has all the good things from Pyramid, plus it stores data in the ZODB. It is a CMS.

Rapido is the next Plomino version. Plomino started in 2006, is still based on Archetypes, and stores data in CMF objects. It makes extensive use of ZCatalog and PythonScripts.

I turned it into Rapido: Plone 5, based on Dexterity.

  • rapido.core, totally independent of Plone.
  • Storage service: rapido.souper provides a storage service based on souper. Souper works on both Plone and Pyramid, so I chose it.
  • rapido.plone: standard Dexterity content types, adapted via rapido.core, ideally using nothing but plone.api (see the adapter sketch after this list).
  • rapido.substanced: standard substanced.content classes, using nothing but the Substance D API.
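This layering maps naturally onto the ZCA: rapido.core can define an interface describing what it needs, and each integration package adapts its own content class to that interface. A minimal sketch, with hypothetical names like IRapidoContext and DexterityRapidoContext:

from zope.component import adapter, getGlobalSiteManager
from zope.interface import Interface, implementer

class IRapidoContext(Interface):
    """What rapido.core needs from the host system (hypothetical)."""

class IDexterityContent(Interface):
    """Stand-in for the real Dexterity content marker interface."""

@implementer(IRapidoContext)
@adapter(IDexterityContent)
class DexterityRapidoContext(object):
    """Adapts a Dexterity object so rapido.core can work with it."""

    def __init__(self, context):
        self.context = context

# Normally registered in ZCML; done in Python here to keep the sketch short.
getGlobalSiteManager().registerAdapter(DexterityRapidoContext)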

[Demo of Plone part and Substance D part.]

So how is this done?

  • TTW scripting is what Rapido is about. I could not use PythonScripts, but I used zope.untrustedpython.
  • Catalog: repoze.catalog is just fine, and works on both systems (see the sketch after this list).
  • Content persistence: souper, created by Blue Dynamics, designed to work on both Pyramid and Plone.
  • Settings persistence: annotations. Very basic, but it just works; content in both systems can be marked IAttributeAnnotatable (see the annotation sketch after this list).
  • Forms and widgets: Substance D has Deform, but it is not rich enough. Porting z3c.form to Substance D... maybe not. So: client-side rendering with Angular Schema Form.
  • Access control: both systems have a granular ACL service. It is probably possible to support both transparently, but for now I created a custom security implementation.
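A minimal repoze.catalog sketch, independent of either framework; the index name and the toy document class are made up:

from repoze.catalog.catalog import Catalog
from repoze.catalog.indexes.field import CatalogFieldIndex
from repoze.catalog.query import Eq

def get_author(obj, default):
    return getattr(obj, 'author', default)

# One field index; repoze.catalog also ships text, keyword and path indexes.
catalog = Catalog()
catalog['author'] = CatalogFieldIndex(get_author)

class Doc(object):
    def __init__(self, author):
        self.author = author

catalog.index_doc(1, Doc('eric'))
numdocs, results = catalog.query(Eq('author', 'eric'))  # numdocs == 1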
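And a sketch of the annotation-based settings persistence, which looks identical on both systems; the annotation key is made up:

from persistent.mapping import PersistentMapping
from zope.annotation.interfaces import IAnnotations

def save_settings(context, settings):
    # context can be any object marked with IAttributeAnnotatable.
    annotations = IAnnotations(context)
    annotations['rapido.settings'] = PersistentMapping(settings)  # hypothetical key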

My experience with Substance D. Pros:

  • Fun.
  • Happy to find all the good ingredients.
  • Fast testing.

Cons:

  • Not 100 percent ZCA-ready: you need to call config.hook_zca(). It works fine, no problem; I am just not comfortable with the 'hook' term here. Also, we would probably need a local registry.

Conclusions for me about Plone future:

  • ZCA plus buildout plus ZODB make up our identity, and we must preserve it. It sets us apart; it is a strength.
  • We can find clever approaches to avoid a full rewrite. For example do more in plone.api instead of relying on what Zope does for us.
  • Can we easily migrate to Substance D? No.
  • Should we migrate to something else? No.

Watch the video and slides of this talk.

Laurence Rowe: Layering web applications on web services with JSON Linked Data

published Oct 30, 2014, last modified Nov 16, 2014

Talk by Laurence Rowe at the Plone Conference 2014 in Bristol.

This talk is about the ENCODE portal, an encyclopedia of DNA elements.

I want to talk here about a pattern, not really about the specific technologies involved.

I work at the data coordination center, generating and organizing data. We get metadata submissions and store them in a metadata database. It is really a knowledge management system. It could have been built in Plone, but it would not be a great fit, so I started from scratch based on Pyramid and ReactJS. Nowadays more and more services have a JavaScript UI, where the JavaScript talks to the backend.

Embrace JavaScript. We used to do progressive enhancement, but single-page web apps really need JavaScript. For building a portal, I have been looking at isomorphic web applications, an idea originally from the Rendr framework. It is important that pages load quickly: the exit rate for visitors rises just as quickly as the loading time.

JSON is the lowest common denominator for data. XML is more flexible, but more complex. In Python it is much easier to use JSON.

JSON-LD: JSON Linked Data, recently adopted by the W3C. It is partly about semantic data, but we are not using that part yet.

At first we needed to duplicate the routing information on the server and the client side. JSON-LD allows us to express type information, which avoids the duplication.

You can have JSON-LD like this [in "pseudo JSON" here, just for the idea, Maurits]:

{
  "@context": "context/jsonld",
  "@id": "...",
  "@type": ["biosample", "item"],
  "submitted_by": {
    "@id": "/users/lrowe",
    "@type": ["user", "item"],
    "name": "Laurence Rowe"
  }
}

Our data is defined using JSON Schema, an IETF draft: schema version, field definitions, types, validators. It is an extensible format.
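A minimal sketch of what validating against such a schema can look like in Python, using the jsonschema package; the field names are made up, and unknown keywords (like the hypothetical "linkTo" below) are simply ignored by validators, which is what makes the format extensible:

import jsonschema  # pip install jsonschema

schema = {
    "$schema": "http://json-schema.org/draft-04/schema#",
    "title": "Biosample",
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "submitted_by": {"type": "string", "linkTo": "user"},  # hypothetical extension keyword
    },
    "required": ["name"],
}

jsonschema.validate({"name": "sample-1"}, schema)  # passes silently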

All our normalized data is stored in Postgres as JSON. JSON-LD has the concept of framing, which determines which objects should be embedded (mapping to a view in Plone terms), possibly involving an Elasticsearch query. Above it sits server rendering (with NodeJS) that creates the HTML and sends it to the browser. After the first page load, the browser uses JavaScript to talk to the JSON-LD backend directly, instead of via the NodeJS server, letting the ReactJS code do its own rendering.
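A toy sketch of what such embedding amounts to, independent of the real framing machinery; the tiny in-memory "database" and field names are made up:

db = {
    "/users/lrowe": {"@id": "/users/lrowe", "name": "Laurence Rowe"},
}

def embed(doc, fields):
    # Replace link paths with copies of the linked objects themselves.
    out = dict(doc)
    for field in fields:
        link = out.get(field)
        if isinstance(link, str) and link in db:
            out[field] = dict(db[link])
    return out

sample = {"@id": "/biosamples/1", "submitted_by": "/users/lrowe"}
print(embed(sample, ["submitted_by"]))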

The browser opens a page and gets HTML back. Then it does a query for JSON data, gets that back, and shows it on the page.

Indexing linked data: updating one item may require a reindex of other items that link to it. We have written code for that, using Elasticsearch as a cache.
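A toy sketch of the idea (not the actual ENCODE code): keep a reverse map of which items embed which, and walk it transitively when something changes:

from collections import defaultdict

embedded_by = defaultdict(set)  # uuid -> uuids of the items embedding it

def record_embedding(parent_uuid, child_uuid):
    embedded_by[child_uuid].add(parent_uuid)

def items_to_reindex(changed_uuid):
    # Everything that directly or indirectly embeds the changed item.
    pending, seen = [changed_uuid], set()
    while pending:
        uuid = pending.pop()
        if uuid in seen:
            continue
        seen.add(uuid)
        pending.extend(embedded_by[uuid])
    return seen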

Come work with us, we are hiring.

See code at https://github.com/ENCODE-DCC

Watch the video and slides of this talk.

Guido Stevens: The Plone Intranet Consortium

published Oct 30, 2014, last modified Nov 16, 2014

Talk by Guido Stevens at the Plone Conference 2014 in Bristol.

This is a big picture presentation. But don't worry, there will be a demo at the end.

United we stand, divided we fall. Plone is in a rough spot. Let's design and build some solutions. Yes, WordPress is eating our lunch and our cake, but we should not sit back.

Code: https://github.com/ploneintranet/ploneintranet.suite

I have my own company, Cosent.

The Plone community is good. The backend code is good. The user interface was good, but has fallen behind. Good enough is not good enough anymore. We need to do better. Only an excellent user experience will win us customers.

When you as a Plone company want to sell Plone as an intranet, you have a good starting point, but you are missing lots of pieces. You would have to build those yourself and have one customer pay for it. That is not going to happen, or at least it is a hard sell.

In the Plone community, the number of commits per month is rising, and the number of committers per month is also rising. Doing pretty well.

So what is wrong? We need to evolve or we will die out, like the dinosaurs. Plone core is currently Web 1.0. We need a Plone Social, Web 2.0: read/write, social networking, activity streams, time-centric, personal perspectives, bottom-up sharing.

Roadmap: http://cosent.nl/roadmap

In 2010 I had the choice to ditch Plone or fix Plone. I chose to put all my eggs in one basket: Plone. Is Plone Social good enough? No. Look at Jive; that is some serious competition. We do not need to beat such a SaaS product, but we need to be close. Then customers will still prefer the local company that is closer to them and more attuned to their needs.

I think the "spare time" model of development is broken. It has brought us this far, but it is not enough anymore. Stuff gets nearly finished at sprints, and then lingers too long.

We need a new model. We have the talent, the technology. We can do it. We need to invest in a high quality, out-of-the-box product baseline. Low purchase cost, immediate sales appeal, fast delivery, shared maintenance across the community, new community ethos in collaborating together.

As Plone Intranet Consortium we band together and want to work on this. We had a meeting after the Plone conference last year. Every company there had their own workspace solution, everyone was maintaining their own stack. But they did not have enough resources to generalize it for the community.

Design first. Design is not about eye candy. You start with a decent vision of what your strategy is, what your project is trying to solve.

Roadmap-driven Scrum development. Normal working day, in company time. Legitimate leadership serves the community. The consortium board funds the roadmap. Investment per company: 1000 euro per month, plus one developer day per week. Cash is used to hire people to help with the design. Sprint every Wednesday.

It is 100 percent open source. It is not a product that we will make money on. We will make money on the service we deliver. We want to move the license to the Plone Foundation, we will talk about that.

What we are developing are add-ons, not a core fork. Plone 5 compatible. We will port to Mockup. You are welcome to join the consortium.

Cornelis has made a Patternslib-based clickable prototype that needs no backend to operate.

Demo by Alexander Pilz.

User experience sells. We showed this demo to a client last week and he thought it was an impressive preview of social functions in future Plone.

Roadmap. Phase one: activity streams, team spaces, dashboards, document structures/wiki. Phase two: calendaring, search, news hub.

We are pioneering a new business model for open source.

  1. Dream a vision.
  2. Combine investment.
  3. Design first! Use dedicated designers.
  4. Develop and release.
  5. (or really 3.1 already) Win customers.

We can boldly go where no one has gone before. We are Plone, we can do anything.

We have an open space tomorrow. Welcome! Sprint on Saturday and Sunday.

Code: https://github.com/ploneintranet/ploneintranet.suite

Watch the video and slides of this talk.

Jens W. Klein: Big Fat Fast Plone - Scale Up, Speed Up.

published Oct 30, 2014, last modified Nov 16, 2014

Talk by Jens W. Klein at the Plone Conference 2014 in Bristol.

I am the owner of Klein & Partner, a member of the Blue Alliance, Austria, and I have been doing Plone since version 1.0.

Default Plone is not so fast. It scales great horizontally (adding machines), but there are still bottlenecks, primarily loading stuff from the ZODB.

First customer: Noeku, over 30 Plone sites, high availability, low to mid budget, self-hosted on hardware and VMs. The pipeline is: nginx, Varnish, Pound, several Plone instances, databases (ZODB, MySQL, Samba).

Second customer: Zumtobel, brand-specific international product portals, customer extranets, B2B e-shop, hosted on dedicated machines.

Third customer: HTU Graz, one Plone site with several subsites (with Lineage), and lots of students looking at it, so there are peak loads.

The main Plone database is ZEO or PostgreSQL, plus blobstorage (NFS, NAS). Load balancer: haproxy or Pound. Caching proxy: Varnish (don't use Squid, please). Web server: nginx (better not use Apache).

The Plone instances and the client connection pool (between the Plone instances and the database) can use memcached (maybe multiple instances), LDAP, and other third-party services.

If you want to improve things, you must measure: use Munin everywhere you can. fio is a simple but powerful tool to take measurements of your I/O. Read up on how Linux manages disk and RAM. Know your hardware and your VMs (if any).

Database level

  • Noeku: ZEO server and blobstorage, both replicated with DRBD.
  • Zumtobel: RelStorage on PostgreSQL, blobs from NAS over NFS.
  • HTU Graz: RelStorage on PostgreSQL, all on one machine.

First things first: never store blobs in the ZODB; use blobstorage. In standard Plone 4.3, images from news items are stored in the ZODB. You can change that. Check your code and add-ons.

ZEO server plus blobstorage: ensure fast I/O to the hard disk or RAM, and have enough RAM for disk buffering.

Blobstorage on NFS/NAS: share the blobs and mount them on each node. Mount read-only on the web server node and use collective.xsendfile (X-Accel-Redirect) to make it faster.

RelStorage plus blobstorage: never store blobs in the SQL database (same as zodb). No MySQL if you can avoid it. Configure your SQL database.

Connection pool: ZEO versus RelStorage. The ZEO server pushes invalidations to the clients; with RelStorage, each client polls for invalidated objects. There is a disk cache of pickled objects per Zope instance. On the RelStorage side you can use memcached, which is a big advantage, reducing load on the database.

  • Noeku: ZEO.
  • Zumtobel: RelStorage, history-free, 2 VMs, 16 instances plus some worker instances for asynchronous work, each with 2 or 4 threads. RAM cache of 30,000 or 100,000 objects, memcached as a shared connection cache. If packing takes too long, try relstorage_packer.
  • HTU: RelStorage, history-free, 6 instances, each with 1 thread. RAM cache of 30,000 objects; this is something you need to tweak: try out some values and measure the effect. Memcached. Poll interval of 120 seconds. Blobstorage: shared folder on the same machine.

The above is not specific to Plone. The below is.

Plone

  • Turn off debug mode, logging, and deprecation warnings.
  • Configure plone.app.caching, even if you are not using Varnish. Browsers cache things too, and it can really help.
  • Multiple instances: use memcached instead of the standard RAM cache.
  • Know plone.memoize and use it (see the sketch after this list).
  • Never calculate a search twice. Check your Python and template code to avoid things that boil down to: if expensive_operation(): expensive_operation().
  • Use the catalog.
  • Do not overuse metadata: if you add too much metadata to the catalog brains, they may become bigger than the actual objects, slowing your site down.
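A minimal plone.memoize sketch for the "never calculate twice" advice; the cached function, its helper, and the five-minute lifetime are made up:

from time import time
from plone.memoize import ram

def _search_cachekey(method, query):
    # Cache key: one cached result per query, refreshed every five minutes.
    return (query, time() // 300)

@ram.cache(_search_cachekey)
def expensive_search(query):
    # Imagine a costly catalog query or external lookup here.
    return do_expensive_operation(query)  # hypothetical helper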

Write conflicts:

  • 90% of write conflicts happen in the catalog.
  • To avoid them, try to reduce the time of the transaction. That is hard in standard situations, but you may be able to first prepare some data and only later commit it to the database (see the sketch after this list).
  • Use collective.solr or collective.indexing. I hope that in Plone 6 we will no longer have our own catalog, but use Solr.
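A hedged sketch of that "prepare first, commit late" idea, using the standard transaction package; compute_expensive_payload and the attribute being written are hypothetical:

import transaction

def update_portal(portal):
    # Slow, read-only work first: it does not touch persistent state,
    # so it cannot cause a write conflict.
    data = compute_expensive_payload()  # hypothetical slow step
    # Then write and commit quickly, keeping the conflict window short.
    portal.cached_payload = data
    transaction.commit()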

Lots of objects, hundreds of thousands? Are catalog queries slow? Use a separate mount point for the portal_catalog, with higher cache sizes.

Archetypes versus Dexterity: with Archetypes you should avoid waking up the object and ask the catalog instead. With Dexterity it is sometimes cheaper to wake up the object: if objects are small and you iterate over a folder or subtree, or if it would otherwise require adding lots of metadata to the catalog.
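A small sketch of the difference between catalog metadata and woken objects; the folder path is made up:

from plone import api

catalog = api.portal.get_tool(name='portal_catalog')
brains = catalog(path={'query': '/plone/some-folder', 'depth': 1})

titles = [brain.Title for brain in brains]         # cheap: catalog metadata only
objects = [brain.getObject() for brain in brains]  # expensive: wakes every object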

Third party services, like LDAP and other databases, need to be cached. Talking to external systems over the network is slow.

In case of serious trouble: measure! munin, fio, collective.traceview, Products.ZopeProfiler, haufe.requestmonitoring, Products.LongRequestLogger. Change one thing at a time and measure it. Really important!

plone.app.caching: always install it. For custom add-ons with their own types and templates, you need extra configuration for each type and template. Do this! Budget some time for this task; it is some work, but it is well documented.

On high-traffic sites, introduce a new caching rule for one/two/five-minute caches; it really helps against peak load.

Load balancer: Pound is stable but old, and it is difficult to get measurements out of it. Haproxy is not that simple, but newer, with a nice web UI for stats. You should point the same request type to the same instance.

Web server: nginx. Set the proxy_* options to the recommended values.

Read more on docs.plone.org.

Looking up attributes in Dexterity can be slow; we need to fix that.

Watch the video of this talk.