betabug... Sascha Welter


Entries : Category [ zope ]
All around the Zope application server

13 October 2006

Writing Zope and RewriteRule documentation

Meanwhile, over at

Yesterday evening I made an effort to rewrite the Zope and Apache page, something I had intended to do for a long time. The old version of the page had always seemed confusing to me, especially in mentioning stuff like ProxyPass that zopistas don't really use much nowadays. I hope "my" version is better now.

Today I noticed some things were still missing, so I went a step further and added a section about "Debugging, Common Pitfalls, Problems": in short, the things that can go wrong and where to look for them.

Usually I would have written this kind of guide or how-to somewhere on my own site. But on the one hand the basis had been there before (and I left in much more than I added or rewrote), and on the other hand sharing in a community project like this is an experiment that gave me some good vibes.

Posted by betabug at 18:50 | Comments (0) | Trackbacks (0)
22 October 2006

One Rule to RewriteRule them all

New version of the witch out

This very moment I've uploaded a new version of the RewriteRule Witch, the RewriteRule generator that helps you get proper RewriteRules for Zope VirtualHosting through Apache. So, what's new? The witch now outputs only one rule. We still cover both cases that the old "two rule" version did, but with an "or" inside the regular expression we need only one rule now...

Umm, you might ask, which two cases? The problem arises mostly in "inside out hosting", i.e. when you have most of your site as static content in Apache, but host only one URL path (for example /zope) dynamically in Zope. That case is easy to cover in a regex, but something overly simplified, like ^/zope(.*), will get you into trouble the next day. Why? On the next day you might decide to publish your own version of the zopelist archives. To do this you decide to use some wonderful PHP application, which you place into a URL that also begins with /zope. See how you get bitten? ^/zope(.*) matches that too, and your shiny PHP archives will not work, because users keep getting sent to Zope.

The old version of the witch covered, in two rules, the simple case where there is nothing after /zope and the normal case where a slash and more of the URL follow. The "new" witch groups this into one rule. I've tested both the functioning of the witch and the functionality of the new rules, but if you run into any problems, please let me know!
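The effect of the "or" can be illustrated with Python's re module (the exact regex the witch emits may differ; /zope and the sample paths are just the example from above):

```python
import re

# Naive pattern: matches anything that merely starts with "/zope"
naive = re.compile(r'^/zope(.*)')

# Combined "one rule" pattern: either nothing after /zope, or a slash
# plus the rest of the URL -- the "or" lives inside the group
combined = re.compile(r'^/zope($|/.*)')

for path in ('/zope', '/zope/news', '/zopelist-archive/index.php'):
    print(path, bool(naive.match(path)), bool(combined.match(path)))
```

The naive rule also grabs /zopelist-archive/..., while the combined one leaves it to Apache.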

Posted by betabug at 00:04 | Comments (3) | Trackbacks (0)
24 October 2006

Automatic Menus on the Nautica Project

The easiest CMS on Zope just got easier

Yesterday evening I spent some "quality time" online with sm (this morning for him) to improve further on the ZWiki Nautica Project. In case you missed that, it's a site that skins a ZWiki with a good looking template, thereby trying to prove the point that a ZWiki is really the easiest CMS on Zope. There are still some things missing, but we improved a lot on how the menu is done...

The template we chose has those four big menu points with subtitles, right under a big decorative image. This is appropriate for a small company site or so, but for a wiki it poses the question: where do those links (and their text) get set? My first rough setup had those links in a static HTML file. Not a nice setup, since in the "small company" example you can't be certain there is someone confident enough to do raw HTML editing when the "offer of the month" changes. A more wiki-like approach was needed, and sm found one.

What we do now is generate the menu from the subtopics of the wiki's start page. We limit that to only the first four (using Python's min(x, 4) construct), apply some zwiki API magic to get a list of objects (happy to have had sm's help at that point), and then use the summary() function to get the subtitles. The summary() function basically takes the first paragraph of a page and cuts it down if it is too long. Our pages get nice headlines that are now reflected in the subtitles of the menu, which is a Good Thing(TM), as it gives continuity to the visitor. As this is all inside a wiki, the documentation was updated right after the code.
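In plain Python the menu logic sketches out roughly like this (the page class and its method names are stand-ins, not the actual zwiki API):

```python
class Page:
    """Stand-in for a Zwiki page object; names here are assumptions."""
    def __init__(self, title, text):
        self._title, self._text = title, text

    def title(self):
        return self._title

    def summary(self, maxlen=60):
        # Like summary() described above: first paragraph, cut if too long
        first = self._text.split('\n\n')[0]
        return first if len(first) <= maxlen else first[:maxlen - 3] + '...'


def build_menu(subtopics, limit=4):
    """Return (title, subtitle) pairs for at most `limit` subtopics."""
    count = min(len(subtopics), limit)  # the min(x, 4) trick from the post
    return [(p.title(), p.summary()) for p in subtopics[:count]]
```

Called on a front page with five subtopics, build_menu would hand back only the first four title/subtitle pairs for the template to render.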

Next step will probably be to bring the wiki administration pages into some shape, either right inside the styling of the template, or in something that references the design a bit.

Posted by betabug at 09:22 | Comments (0) | Trackbacks (0)
13 November 2006

More ZWiki Skins and Skin Know-How

Redesigned my own wiki, updated the docs
Screenshot website

Friday evening I finally took the (seemingly) big step and slapped some new paint on my own little wiki site. Just like with the nautica project, I took a template and created a zwiki skin out of it. The template needed only a few changes, the biggest of which was the replacement of the main deco image with one of my sketches. I like the result so far, even though the content of the site may look Greek to you. It's the fourth ZWiki skin I've built, and I'm getting faster with each one. I'm down to a couple of hours for a site with a reduced set of wiki functionality.

Since I've learned a lot in the process, I was able to update the HowTo describing it. At first there were some problems with a full disk, so I stored the howto right there on my own zwiki. Now that the database has been packed, the HowTo found its home at CreateZWikiSkinsFromOpenWebTemplates. The other places (nautica and papaki) still have it, but they point to the new home, where I plan to keep the howto updated. (Reminder for those coming in late: the ideas behind this are explained in The Easiest CMS on Zope... Zwiki and Skinning a ZWiki.)

Given the latest trolling on the zope mailing list, I think it's interesting to note that Simon provides one of the biggest documentation projects for the Zope community, yet works on shoestring hosting. Maybe some people could contribute something to that instead of spending hours on a useless and irresponsible trolling project.

Posted by betabug at 23:38 | Comments (0) | Trackbacks (0)
16 November 2006

Searching ZCTextIndex in Greek, properly

Treat those accent marks right with GRSplitter in Zope

A long time ago I made my own Greek Unicode splitter for ZCTextIndex. That worked fine, but it didn't take the accent marks into consideration (so searching for "ελληνικα" didn't find "ελληνικά"). Today, through the Greek Plone forum, I found the GRSplitter, which came out a few months ago. I've set it up with this blog as a guinea pig, and the search works even better now. No need for me to fix my own splitter any more. Thanks go to George Gozadinos!

Setup: untar the archive and place it in your Zope instance's "Products" folder. Even though GRSplitter is a Zope Product, it won't show up in your "Add" menu in the ZMI. Instead you will have to add (or replace) your "ZCTextIndex Lexicon", which usually lives inside your ZCatalog and is usually named "lexicon" or so. When you add the new Lexicon, you specify the GRSplitter as its splitter and off you go. If this is a replacement, you will have to recatalog.

Posted by betabug at 12:32 | Comments (2) | Trackbacks (0)
12 December 2006

Category RSS feeds for COREBlog

A 5 minute change

It's so easy to make RSS feeds for all categories in a COREBlog, it took me only a couple of minutes. Here is the recipe:

  1. Copy the DTML method rdf_10_xml, rename the copy to rdf_10_category_xml.
  2. In rdf_10_category_xml look for every mention of rev_day_entry_items(...) - there are three of them. The first one gets changed to category_entry_items(category_id=category_id, count=1) the two other ones to category_entry_items(category_id=category_id, count=15).
  3. In the "modules" folder in the ZMI, edit the "categories" DTML method. Add links to your_blog_url/rdf10_category_xml?category_id=<dtml-var id> inside the loop for each category - I've got them on the same line, along with the item count.

...and that's it. Oh yeah, and the announcement: Dear visitors, please enjoy the category RSS feeds. Get updates only on your specific interests!

Posted by betabug at 14:23 | Comments (0) | Trackbacks (0)
20 December 2006

The Garage Site using ZWiki

Taking ZWiki Design for a spin!
Screenshot Graphics Garage Website

For a while now I've been talking about skinning a ZWiki and using it as the easiest CMS on Zope. I've done a demo site with Simon (of ZWiki fame), done my own wiki site this way, and we've done a couple of client demos at my workplace. Now we did for our own site what we recommend to others, and it rocks: looking at the new Graphics Garage website (it went online yesterday evening) you wouldn't guess it to be a ZWiki; it's a pure design site. But there is some nice tech behind it which results in a very simple user interface...

The first step was to shut the whole world out of editing; sorry folks, but a company site isn't a "free for all" :-). For logged in users the wiki interface stays almost the same. Once logged in, a small "control form" is displayed on the page; some other Zwiki functionality has been removed since we don't need it. We added a very simple custom image container product. This does simplified uploading, editing, and searching of images. Most important: it displays the markup needed to put the image into reStructuredText, ready for copy and paste. It also associates those images with an "Accelerated HTTP Cache Manager" so they get some caching love from proxies and browsers.

wiki controls for logged in users

Looking at the site you will notice that the pages aren't uniform at all. Almost every page looks different. My goal was that pages should need no special treatment through the ZMI to be different. It would be "easy" in the first place to set properties on pages and then change stuff around in the ZPT. But that is not transparent to users; in my opinion it's a recipe for later disasters when editors want to change things around. What we needed was something that gives designers the freedom to style pages while retaining technical boundaries.

So instead we standardized on reStructuredText, using some "compound" directives to build blocks of content and marking things with :class:. That stuff is then picked up by the CSS, where all the styling happens. Editors don't need to be too proficient with reSTX; most of the time, looking up how it was done before or on some other page is enough. Of course the ZWiki "Preview" function helps. What we don't have, though, is some kind of "Workflow"... mess up the page and it really is messed up. For playing around, editors can use the wiki "Sandbox", which is open only to logged in users.
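As an illustration, a styled content block in reSTX might look like this (the class name offer-box is made up; the real site's class names will differ), with all the actual styling living in the CSS:

```rst
.. compound::
   :class: offer-box

   **Offer of the month:** everything in this block ends up wrapped
   in an element with class "offer-box", which the stylesheet targets.
```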


For the "Nuts and Bolts" section I used a python script and ZPT to build an automatic sub menu. This gets included into the pages using the reSTX "raw directive" and some DTML. It's not the best solution for me, because it binds us to reStructuredText and we have to enable DTML (not that much of a security problem in a closed site, but still). Also it inserts an extra "span" into the code, which breaks validation. Yes, apart from those pages, all the site validates as XHTML 1.0 strict. It wasn't even difficult.

It also displays fine in lynx. Some pictures don't have alt attributes. It would be great if images inserted into reSTX from Zope automatically used the "title" attribute of Zope's "Image" object as "alt". That would make it much easier for editors (in reSTX you can use :alt: with the image directive, but it's not as obvious as uploading the image with a description right away).

My only other grief (so far, knock on wood) is the images on the "Portfolio" page. That page contains a lot of images, and they have to be of reasonably good quality, being a showcase of our company's work. Since our line here doesn't have much bandwidth, the download takes quite long, and I'm still thinking about good solutions.

Posted by betabug at 11:25 | Comments (0) | Trackbacks (0)
08 January 2007

gzip encoding/compressing ZWiki pages

Smaller is beautiful!

I had been gzip encoding/compressing my COREBlog pages for some time now, and then I thought about doing the same for my ZWiki pages at the papaki. There are two approaches: mod_gzip in Apache, or Zope's own gzip support.

Likely using mod_gzip would be considered the better solution. For some reason I just never got around to installing and using it. I guess the reason is just what brought me to use the gzip support in Zope in the first place: It's easy to set up. It works only on the pages you tell it to (now that could be considered a downside too), but it's easy to assign to those pages you want, no messing with regular expressions in httpd.conf to try to "catch" zope pages. I imagine that using mod_gzip would be easier on the server's RAM usage, but I have no hard figures to prove or disprove that. I'll keep it on my to-do list to play around with.
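The in-Zope approach boils down to something like this sketch (plain Python, not the actual Zope implementation): compress the body only when the client advertises gzip support.

```python
import gzip

def maybe_gzip(body, accept_encoding):
    """Return (body, extra_headers), gzipping when the client allows it.

    A rough sketch of per-page gzip support; real code would also send
    "Vary: Accept-Encoding" and skip bodies too small to be worth it.
    """
    if 'gzip' not in (accept_encoding or ''):
        return body, {}
    return gzip.compress(body), {'Content-Encoding': 'gzip'}
```

Repetitive HTML compresses very well, which is why the "usual huge wiki pages" stand to profit the most.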

For now my wiki pages are still quite small. But I imagine that the usual huge wiki pages would profit a lot from gzip compression. Next on my list is If-modified-since support for ZWiki pages, another fun project with bandwidth payoff. sm mentioned that there is already some setup for that for ZWiki stylesheets (and some other parts) in the code.

Posted by betabug at 10:06 | Comments (2) | Trackbacks (0)
31 January 2007

online

Another ZWiki website
Screenshot Website

We had been preparing for a few weeks, and now, with my freshly moved server, it has worked out: the website of the Restaurant Drahtseilbahn in St.Gallen is online. The site already has a small stock of pages, laid out to grow bit by bit. Technically it is based on Zope and ZWiki. That means I can hand it over without hesitation to Ruth and Urs (my father) for further content maintenance. In fact my father has already entered the current content; ZWiki is just pleasant and simple to use.

The ZWiki solution is good for me because it addresses a few of the classic problems that come up when you build a website "for someone"...

Also not bad, of course: I only need a few hours to turn one of the templates into a ZWiki site. Less is more, and I've always been lazy anyway.

Posted by betabug at 21:33 | Comments (0) | Trackbacks (0)
14 February 2007

ZWiki Code Contributions

Making a start

Yesterday I started to look more into contributing patches and bugfixes to ZWiki. As I start to feel more confident in Zope hacking, I feel I can maybe contribute something for the first time. So yesterday I went through the steps described on the contribution page, installed darcs, and set it up. Easy. I sent in my first patches too.

Instant gratification: My patches appear on the "zwiki repo summary page". Browsing through that commit list made me want to contribute even more. ZWiki is such a great product and so far it's all been Simon's work, blood and sweat! He really could use more contributions!

Posted by betabug at 21:15 | Comments (2) | Trackbacks (0)
15 February 2007

offline, awaiting backup

Crossing fingers, looking for future solutions

The host (and with it all the wikis on it) is currently offline due to technical problems. To me it looks like filesystem corruption (about the only thing that can bring a ZODB to its knees). After trying all kinds of things, Simon requested a restoration from backup from his provider. We'll have to wait for that now...

In the meantime we can meditate on how much of a community service these wiki sites are providing. Simon currently has a very small hosting plan, so I'm thinking about finding ways to get together the means to upgrade to a bigger one (see his provider's VPS plans). Simon currently has VPS-1, which is really tight for such a big and busy Zope site. Getting him onto VPS-2 would be the fastest and simplest solution. Ideas?

Posted by betabug at 10:12 | Comments (0) | Trackbacks (0)

is back!

Only a few edits lost

Thanks to Simon and his provider (and thanks to regular backups - you're doing yours, right?) the site is back online. We've lost a few edits that happened in the hours between the last backup and the problems. That's bad, but not as bad as a full disaster would have been. Thank you, Simon, for all the work you are putting into this community thing!

Posted by betabug at 21:58 | Comments (1) | Trackbacks (0)
22 February 2007

Zwiki Bugday this Friday?

Anybody in to smoke out those little buggers?

Got some "free to play" time at work (yay my boss!). This Friday (tomorrow) would be a nice chance to make a Zwiki bugday, like Simon used to organize a few times. Anybody in for it? I would be at it from 10 in the morning till 18:30 in the Afternoon (GMT+2).

The issue tracker is full of stuff that needs sorting out. Even if you don't know much Zope/Python programming, you could help by going through the issues, trying to reproduce them, and adding instructions on how to reproduce them plus feedback in the comments. If you *do* know Python/Zope hacking, patches, comments, and hints are of course welcome!

So, what do you think? Join us at #zwiki, or leave a comment here!

Posted by betabug at 11:05 | Comments (0) | Trackbacks (0)
23 February 2007

Zwiki Bugday Fun

...but tired

I'm done with my part of the BugDay, and Frank went home too. So far we have closed 13 issues and opened 1 new one. A lot of "oldies" were closed after we failed to reproduce them, but we also got patches for some issues into the darcs repository.

I started the day with something a little bigger (#1299, getting Zwiki-in-Plone to do mailout again) and rounded it out with smaller stuff. Quite often all I had to do was implement and test existing patches, or the patches Frank gave me, and put them into darcs. Zope hacking is fun.

Now I'm really tired but happy! Good bughunting to Simon and whoever lends a hand to continue the bugday!

Posted by betabug at 18:42 | Comments (0) | Trackbacks (0)
28 February 2007

Zwiki "Conditional HTTP GET" handling

On on

Today a new development version of the Zwiki code went online on the main wiki site (and related sites). This version has the ability to handle "If-Modified-Since" based "conditional HTTP GET" requests. The result? A bunch of much, much snappier Zwiki sites.

Based on my previous work on my COREBlog, I had hacked 304 handling (as "conditional HTTP GET" is often referred to, after the 304 status response the server may send back) into the Zwiki code. There are even setup instructions online, with links to more explanation about "conditional HTTP GET" at the end of that page. It's really easy to do in Zope; standard "Image" objects even do this out of the box.

The RFC says that the server "SHOULD" implement "conditional HTTP GET" using ETags together with If-Modified-Since. But I believe for Zwiki it doesn't make sense to implement that, since calculating a checksum of the page would require the page to be rendered for each request. That's exactly what we try to avoid, to steer clear of the limited RAM problem on the server. The "Last-Modified" time is cheap to get and works fine most of the time.
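The cheap Last-Modified variant amounts to something like this (a sketch, not the actual Zwiki code):

```python
from email.utils import parsedate_to_datetime, format_datetime
from datetime import datetime, timezone

def conditional_get(last_modified, if_modified_since):
    """Return (status, headers) for cheap If-Modified-Since handling.

    No ETag/checksum: comparing timestamps avoids rendering the page,
    as argued above. `last_modified` is an aware UTC datetime.
    """
    if if_modified_since:
        try:
            client_copy = parsedate_to_datetime(if_modified_since)
        except (TypeError, ValueError):
            client_copy = None
        if client_copy is not None and last_modified <= client_copy:
            return 304, {}  # client's copy is current: send no body
    return 200, {'Last-Modified': format_datetime(last_modified, usegmt=True)}
```

On a 304 the server skips rendering and sending the page entirely, which is where the snappiness comes from.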

Simon also made some changes to the ZEO and ZODB cache setup on the site. All together, the result is that the sites feel much faster. This is partly due to the browser taking pages from its cache, and partly due to the overall load on the server being reduced.

Posted by betabug at 23:25 | Comments (0) | Trackbacks (0)
24 March 2007

A Story of Workarounds and Bugfixes

Footnote here

I like bugfixing. No idea if it's my nickname that's at fault (the nickname was there before my appetite for bugfixing), but I also seem to have some success with it. Sometimes it takes a nice twist. Take for example this problem Zwiki had in rendering footnotes in RST pages: it looked like we would need a workaround in the code to make the problem go away, a special case with extra checks, bells, and whistles. I took the plunge into the code and started looking for where to apply that...

(This part always reminds me of Diomidis Spinellis' book "Code Reading".) Using "tags" in vi is the first weapon of the bug hunter, sprinkling the code with "print" statements the second. Pretty quickly I found the spot to apply the "special case": somewhere within a function called markLinksIn. There I started to build my special case. This function identifies wiki links inside the page text; it already has special treatment e.g. for wiki links that are inside other links. As I was trying to understand the logical flow of the code, I looked through the method within_literal, which does part of those checks, something similar to what I was building.

My "special check" was trying to detect RestructuredText footnote or citation markup around a zwiki freeform link, for example [1]_ would be the marker for footnote 1, while .. [1] would start the actual footnote display. The general code gave me the "coordinates" of the found link (the position of the '[' and ']' in the overall text). I tried to "look back" and "look forward" from those positions to detect the special RST markup. But it just didn't work, it wouldn't detect the footnote markup. "print" to the rescue, I started to print out what I had "seen looking back" and surprise! surprise! at this point in the code the RST footnote had already been transformed to an html link element, so looking for .. couldn't possibly have found anything.

Now I hope I haven't lost you yet... because if there was a link around the wiki freeform link, then that should already have been detected by the existing code, specifically the within_literal method. But why wasn't it detected? Some more friendly and helpful "print" statements later (and while chatting with Simon on #zwiki) I spotted the problem: the find function was looking for <a href= while RST generates tags of the form <a name="..." href="..." (which is perfectly legal, just looks unusual). It took me a little more time to actually fix the bug, but at that point the poor critter was already lost.
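The heart of the bug can be shown in a few lines (my reconstruction, not the literal Zwiki code; the attribute values are made up):

```python
import re

# What RST actually generates for a footnote reference: name before href
html = '<a name="fn-ref" href="#footnote-1">[1]</a>'

# The buggy check: a literal substring search for one spelling of a link tag
print(html.find('<a href='))       # -1, the link is not found at all

# The fix: tolerate other attributes between "<a" and "href"
link_re = re.compile(r'<a\s[^>]*href=')
print(bool(link_re.search(html)))  # True, now the link is detected
```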

Now the funny thing about this story is how everybody was shouting "we need a special case, oh my, it will clutter up our code, but we need it". (I'm not exempting myself from this; in fact I was halfway into actually writing that special case code.) But in the end it was just a small bug, stemming from the difference between how we usually see HTML links and what HTML links can really look like.

Posted by betabug at 10:57 | Comments (0) | Trackbacks (0)