betabug... Sascha Welter


Entries : Category [ zope ]
All around the Zope application server

25 December 2005

Unit Testing the Witch

Split and test

After the last few weeks at work, where we worked hard and fast to finish part of a project, with the holidays and free time on my hands, I had nothing better to do than to code at home... I finally got around to writing some unit tests for the RewriteRule witch (a script to generate apache RewriteRules for Zope). The code of the witch stayed the same as far as actual output is concerned.

There is one difference in the code, something the last few weeks of working with ZopeTestCase unit tests taught me: split up the code on the boundary that touches the web browser. The witch consists mainly of one simple method. It picks up the parameters from the http request, shows the form (if no input is given), or shows the form together with the result (if form input was provided).

Since unit testing form input is complicated, what I usually do is split request processing from the actual logic work. So there is now a public method to handle the request, and a private method to do the calculation. And the private method is written in a way that it can be tested easily, without having to rely on form handling. That makes testing at least the important stuff possible. I can make changes to the code and be sure that the most basic variants of output stay the same.

Posted by betabug at 23:15 | Comments (0) | Trackbacks (0)
04 January 2006

Spirited Platforms for Running Zope

#zope humour

IRC has its moments of joy, even #zope (on freenode) is sometimes funny (not just helpful and interesting):

vermoos> error: invalid Python installation: unable to open
    /usr/lib/python2.3/config/Makefile (No such file or directory)
betabug> you're on some funny platform that requires installing a
                 python-dev package?
philiKON> probably
vermoos> ubuntu breezy badger
betabug> sounds like one of them kiddy booze drinks
philiKON> lol
vermoos> its no alcopop - better than xandros which isnt free

All references to existing operating systems are purely coincidental and mean no endorsement by the author, rather the author would endorse OpenBSD to run any server out there.

Posted by betabug at 17:04 | Comments (0) | Trackbacks (0)
02 February 2006

Μαθαίνοντας Zope στην Ελλάδα

Learning Zope in Greece - Weblog

At work I'm a team lead now. Mary and Andreas are helping me and learning Zope. An interesting situation. On the one hand I've never had an "apprentice" in programming before, and it's nice to watch them learn, making progress every day. On the other hand, on many days their constant questions get a bit on my nerves. To keep them busy, I had them start a weblog, Learning Zope in Greece. So...

Officially I've told them to read the Zope mailing list every day and to write something about what they read there. Unfortunately that hasn't happened (except for once). Maybe some day they'll feel like doing it; it would certainly be useful. But even until then, their "blog" is quite interesting. Good luck to it!

Posted by betabug at 13:21 | Comments (6) | Trackbacks (0)
09 March 2006

The Easiest CMS on Zope... Zwiki

Install Zwiki, style it, lock it down, done

Remember, Zope is an application server, not a CMS (content management system). A typical beginner's mistake while learning Zope is to use only the built-in elements (for example Folders and Page Templates) and push content into these. That is "wanting it the easy way", which is a good recipe for ending up head-first against a brick wall at a velocity above the usual rotation speed of this planet. "Zope is an application server, not a CMS" means: you need an application to run on Zope. A proper application consists of a bunch of Python filesystem-based products. One very typical application of that kind is a CMS. There are some well-known CMS for Zope out there, but I don't like them so much. One easy, fast, and simple alternative is a closed Zwiki...

The easiest and cleanest CMS on Zope is a Zwiki: Install it, style it, close it to allow editing/commenting only for authorized users, educate those users how to use it. Done. The content model of that is pretty simple. And you might have to follow some guidelines to get a consistent navigation and styling. But for a small company, a personal site, or a small event site (what I would call "a bunch of pages" sites), a zwiki based site is fine. If the users are clever enough to be able to learn structured text or html editing you are rolling real fast. I think some people have even integrated Epoz or Kupu into Zwiki (though I didn't try, stx/html is good enough for me). Want an example? My Greek site in the works is being done this way.

But won't people just change the wiki?

One objection you could have here is "but it's a wiki, people from all over the place will go in and change my pages". Which is the reason why we remove the editing and commenting privileges for anonymous users. That way, the wiki part ("go in and edit any page") is only for our own editors. The wiki turns into a CMS. Anonymous visitors can only see the pages, not change them.

You likely don't need the workflow anyway

A lot of sites don't need big CMS machinery like workflows. If you don't know what a workflow is: it's a way to specify that changes to a page have to go through different levels of approval. A writer submits changes to a page. An editor has to check those changes and give his OK. In large organizations maybe a lawyer has to give her OK too. You likely don't need this nightmare. But once you have one of those big systems, you face a huge learning curve explaining to your users why they have to click through 7 checkboxes and dialogs to put some small change online (or you might spend the time to switch off the built-in workflow). Workflows are not for the small folks.

Mix and match with other Zope products

Some CMS also have lots of add-on tools, like weblogs, a user forum, or image galleries. That is a more valid reason to choose a big CMS solution over a "closed" Zwiki solution. Zwiki has some things built in, but not all of those. But we should not forget that Zwiki is just one application running on Zope. It can easily be combined with COREBlog (for weblogs), or one of the Zope gallery products, etc. Admittedly, the work to integrate the layout of these solutions is larger than plugging a module into a "big" CMS. But the hard part -- integration of user accounts -- is taken care of by the Zope security and authentication machinery.


Example sites:,,, 2 more in the works.

HowTo:, that site is also "open", so you can look at the user interface, also on

Posted by betabug at 11:02 | Comments (4) | Trackbacks (0)
24 March 2006

Beefed Up Trackback Notification

Looking at Zope sending email

This morning I needed an example of how to send a simple plaintext mail notification from Zope. So... [think, think], COREBlog has a notification example, which I had beefed up a bit at some point. I used that one as my example and went on to augment the corresponding Trackback-Notification script....

My problem with the Trackback-Notification method out of the box is that it doesn't show which post the trackback is attached to. With more than 300 posts so far, searching is a bit of a drag, especially since trackback spam arrives for any random post. I changed the script to include the URL of the post's trackback list in the ZMI. Saves a lot of time. So, in front of the line:

Title      : %s
I include a line with the URL of the post's trackback list, and then change the substitution line to add another substitution in the right place, like this:
%s""" % (to_addr,from_addr,d['parent_id'],d["title"],d["url"],\

A long time ago, in order to get mails with Greek content to work properly, I also changed the headers of the mail to look like this:

Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
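For reference, Python's standard library email package produces an equivalent UTF-8 plaintext message with all of these headers set for you (it picks base64 as the transfer encoding rather than 8bit, which any mail reader decodes just the same). A generic sketch, not COREBlog's actual code:

```python
from email.header import Header
from email.mime.text import MIMEText

def build_notification(to_addr, from_addr, subject, body):
    # Build a UTF-8 plaintext mail; the MIME headers discussed above
    # (MIME-Version, Content-Type, Content-Transfer-Encoding) are set
    # automatically, so Greek text survives intact.
    msg = MIMEText(body, "plain", "utf-8")
    msg["To"] = to_addr
    msg["From"] = from_addr
    msg["Subject"] = Header(subject, "utf-8")
    return msg.as_string()
```

The resulting string can be handed straight to a MailHost or an SMTP connection.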

Posted by betabug at 10:25 | Comments (0) | Trackbacks (0)

COREBlog Existences

Look who's using COREBlog

It's sometimes just astonishing how fast Open Source developers are. It took Atsushi Shibata (famous author of COREBlog) only about an hour to implement a new feature. I had complained on the mailing list that "existences" - the ping page of that site - is overrun by spammers, and suggested as a feature that only COREBlog trackbacks would be listed. Now we have a new ping tracker that lists pings from COREBlog users only, so we can have a look at what other COREBlog users are up to. Great job!

Posted by betabug at 13:35 | Comments (0) | Trackbacks (0)
27 March 2006

Hax0ring around with COREBlog again

Added If-Modified-Since Headers to some methods

I've been playing around with COREBlog this evening again. Managed to get processing of If-Modified-Since Headers [1] working for my RSS feeds and the weblog entries. Hopefully this will save me some bandwidth and my server some work. If things break for some reason, please give me a shout. It looks like Safari just ignores this, but apparently Firefox uses it and Google/Yahoo use it too. I'll publish the patch and description soonish I hope.

[1]: aka "Conditional HTTP GET", see for example HTTP Conditional Get for RSS Hackers from The Fishbowl.
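In plain Python terms, the core of a conditional GET check boils down to comparing the client's If-Modified-Since header against the resource's modification time. A minimal stand-alone sketch (not the COREBlog patch itself):

```python
from email.utils import parsedate_to_datetime

def client_copy_is_fresh(if_modified_since, last_modified):
    # Return True when a 304 Not Modified may be sent, i.e. the
    # resource hasn't changed since the date the client sent.
    if not if_modified_since:
        return False
    try:
        since = parsedate_to_datetime(if_modified_since)
    except (TypeError, ValueError):
        return False  # unparsable header: play it safe, send the page
    return last_modified <= since
```

The server then answers 304 with an empty body when this returns True, and serves the full page (with a fresh Last-Modified header) otherwise.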

Posted by betabug at 23:58 | Comments (0) | Trackbacks (0)
28 March 2006

How-To For Conditional HTTP GET For COREBlog

Patch for handling the If-Modified-Since header

Last night's hack seems to work just fine. Nobody has complained yet and the log shows some successful entries. As seen in the previous post, conditional HTTP GET saves bandwidth by giving the browser (or RSS reader) the page content only when it has changed since a given date. So, time to publish the patch. Use at your own risk, best if you know what you are doing, no warranty, it works for me, YMMV, etc., but here it is...

Last night I put it up on my weblog (for the RSS feeds and the entries pages). There might still be nasty bugs around, but I got those heartwarming entries in my log file: - - [28/Mar/2006:06:14:55 +0200] "GET /blogs/ch-athens/288
HTTP/1.0" 304 1 "-" "Googlebot/2.1 (+
83.171.XXX.YYY - - [28/Mar/2006:09:19:35 +0200] "GET /blogs/ch-athens/rdf91_xml 
HTTP/1.1" 304 0 "-" "NetNewsWire/2.1b17 (Mac OS X;"
62.1.XXX.YYY - - [28/Mar/2006:09:14:39 +0200] "GET /blogs/ch-athens/rdf91_xml 
HTTP/1.1" 304 0 "-" "Akregator/1.2.1; librss/remnants"
... - - [28/Mar/2006:08:33:31 +0200] "GET /blogs/ch-athens/rdf91_xml 
HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; U; PPC Mac OS X Mach-O; de; 
rv:1.8.0.1) Gecko/20060111 Firefox/"

As I already noticed yesterday, my own Safari (on Mac OS X 10.3.9) doesn't bother. I've seen Safari handle 304s in my log, but only on images. Either I'm doing something wrong or Safari isn't up to it.

Anyway, enough of the bragging, here is the patch for conditional HTTP GET for COREBlog. Apply the patch to, restart Zope (or refresh the COREBlog product if you run in debug mode). That's not enough yet, as the code still has to be called by the methods in question. The patch adds two new methods.

How we use these methods

In our standard COREBlog DTML methods (rdf10_xml) we wrap all the content with a DTML-IF like this:

<dtml-if "handle_modified_headers(last_mod=None, REQUEST=REQUEST)">

<dtml-else>
<!-- normal dtml-code here: -->
... etc. etc. ...
</dtml-if>

Notice how we have only an empty line inside the dtml-if and all the code is inside the dtml-else? When the page data hasn't changed, we get away with sending only those empty lines to the client and don't need to process all the other code.

We use the same code to apply this to the other RSS feed (and any other homebrew feeds we may have). We need slightly different code for the single entry pages. I want to do these, since Googlebot regularly swipes them and they rarely change. This is how I do the entry_html:


<!-- normal dtml-code here: -->
<dtml-comment>* Don't show header when noheader is set.</dtml-comment>
... etc. etc. ...
If you are using one of the ZPT skins for COREBlog (lucky you), you will have to adapt this DTML - but I don't think it will be difficult.

Posted by betabug at 10:21 | Comments (0) | Trackbacks (0)
15 April 2006

Pushing The Limit On BTreeFolder2

Who needs SQL anyway?

The last few days I've been working on one part of our app that holds lots of small information bits. They are structured very uniformly, so there was the thought of taking them out of the ZODB and putting them into some SQL database. But so far our app has no need for SQL anywhere, and adding the installation of an RDBMS to our servers and developers' machines would add a lot of complexity to the setup. With the BTree data structures available in Python, Zope and the ZODB offered some options that I wanted to test and explore first...

Holding on to an object oriented structure and the ZODB, I can see some main strategies for implementing things:

  • Use BTrees in a main "bucket" style object and keep our bits of information contained in one or several BTrees.
  • Use BTreeFolder2 as a "bucket" to store other Zope objects (in this case based on SimpleItem), which in fact boils down to the first strategy behind the scenes, but the hard parts are already being taken care of.
  • Spread out a structure of folderish Zope objects so we do not have an overload of many objects inside one other object. This works really well if there is some inherently parent-child like structure between your objects, but it doesn't work out in this part of our app.

What do we expect?

From this point of "brain storming" on, I gathered some facts, mainly "how many objects are we expecting?", "how can they be structured?" and "how do they need to be accessed, reported on, etc.?" It turns out we do not have that many objects to expect, especially if we implement some kind of housekeeping. Most of these objects have to hang around for about a year and a half; after that they are old news. If we do not fall into the old "let's ignore that our db will grow forever" trap and plan for removal of our old objects, we can assume a maximum of about 8000 - 10000 main objects, each with at most 15 - 20 subobjects. Not that much, but we still have to deal with a. a lot of objects for a simple Zope folder and b. a lot of objects in total.

My goal was to build the stupidest solution I could get away with.

That sounds dumb, but in real life it's a good plan. A "stupid" solution for me involves reusing as many things as possible from standard Zope products, using normal Zope object behavior, and refraining from coding "clever" hacks and special stuff nobody understands after 6 months. So doing things the "normal" Zope way would be my choice unless failing performance forced me otherwise. Stress testing and mass testing would have to show if I was on the right track.

The outline of the implementation

I went on and hacked my initial code together. Got it running both in some unit tests and in a minimal version of a user interface. Basically it's a class based on BTreeFolder2 as a main container. This one contains by default a ZCatalog. Inside the container is the great mass of objects, based on OrderedFolder. And in them are the smaller subobjects, which are based on SimpleItem. Both the main mass of objects and all the subobjects are cataloged in that ZCatalog.

Something is wrong

Then I wrote another method. This "mass_bash" method would create 1000 objects, each with between 0 and 28 subobjects. There is some play in that, so the contents of the fields are not all the same, but I did not care much about real randomness. My first tests started well enough: the mass_bash method ran through and my main bucket still worked. But with each run of mass_bash (and another 1000 objects), the time required to add new objects increased a lot. At first mass_bash ran really fast, after a while it took half an hour, then an hour, later even one and a half hours to run! Something was clearly wrong.
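A stand-alone sketch of such a stress-test method, with a plain dict standing in for the BTreeFolder (the real mass_bash creates actual Zope objects, of course):

```python
import random

def mass_bash(bucket, n=1000):
    # Add n fake "objects", each with a random number (0-28) of fake
    # subobjects, to exercise mass-insertion behaviour.
    for _ in range(n):
        oid = "obj%d" % len(bucket)
        bucket[oid] = ["sub%d" % j for j in range(random.randint(0, 28))]
    return bucket
```

Running it repeatedly against the same bucket mimics the repeated 1000-object runs described above.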

At first I had suspected my extensive use of metadata columns in the catalog. The general wisdom is that metadata columns in ZCatalogs make retrieval of object attributes a lot faster, but writes to the catalog take a lot longer too. So out went the metadata. To my dismay that did not make much of a difference. The particular workday came to an end and I had to sleep on it. Which was a good idea, as with the dawn my mistake also dawned on me. I had made a stupid coding mistake.

Conclusion: Who needs SQL anyway?

My code for assigning IDs to new objects went through all possible integer IDs till it found a "free" one. A nice strategy when you expect 20 objects (or even 100) in a container. But looping through 5000 numbers each time you add another object can't really be called optimal. I'd call that "fucking stupid" on my part. It took me a short while to devise a new strategy. My first idea was to ask the catalog for the highest ID, i.e. make a simple query, sort by ID, convert to integer and take the next one. That didn't work, because the catalog's idea of sorting isn't the same as an integer's idea of sorting. But given that I allocate all my IDs in the same way, even through the same method, I had another option. I asked the catalog for the count of objects of that given meta_type in my "bucket". That is my starting point to find the next "free" integer to be turned into an ID. That worked fine.
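The two strategies can be contrasted in a few lines of plain Python (catalog and container replaced by simple data structures for illustration): scanning from zero costs one probe per existing object, while starting from the catalog's count usually hits a free id on the first probe:

```python
def next_id_scan(existing_ids):
    # The original, slow strategy: probe every integer from 0 upwards.
    n = 0
    while "item%d" % n in existing_ids:
        n += 1
    return "item%d" % n

def next_id_counted(existing_ids, catalog_count):
    # The fixed strategy: start probing at the number of objects the
    # catalog already knows about; since all ids are allocated through
    # the same method, the first probe nearly always succeeds.
    n = catalog_count
    while "item%d" % n in existing_ids:
        n += 1
    return "item%d" % n
```

With 5000 existing objects, the first function loops 5000 times per insert; the second returns immediately.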

Update: d2m informed me on #zope that BTreeFolder2 offers its own method generateId, which should do what I need. Well, yeah, reading the API is always a good idea :-). Thanks d2m!

The result? Even with 12000 or 13000 objects, the script that adds another 1000 objects takes about 1.5 minutes - instead of 1.5 hours. Optimizing stupid code gives instant gratification. I also learned that a BTreeFolder and ZCatalog combination doesn't break a sweat with 15000 objects in the BTreeFolder and about 80000 - 100000 in the ZCatalog. The user interface that displays 50 items in batches, adds and edits such objects is responsive even when both the Zope server and the browser live on my own workstation (a dual G4 1GHz, with a constantly overfilled, fragmented HD). I expect acceptable performance on our production server, though I might still do some testing with ab or siege. But SQL is out of the game for now.

Posted by betabug at 17:33 | Comments (4) | Trackbacks (0)
25 May 2006

Technorati Tags working with COREBlog Categories

Making links to categories valid for tags

Currently when I look at my Technorati profile, I see my "Top Tags" listed as stupid stuff like "categorylist_html?cat_id=2"... which is the result of me trying to use the COREBlog category links for Technorati tagging. As described on the Using Technorati Tags page, all one has to do is insert rel="tag" into the link and link to any web page whose URL ends in that tag. Normal COREBlog category links end in "categorylist_html?cat_id=#", which gives those stupid tags. Now I have fixed that for my site, so hopefully my stuff will be properly tagged in Technorati soon...

What I did is make a little Python Script that allows me to put the name of the tag last in the URL, e.g. /...somepath.../athens - see the current links to the categories for an example. This works using traverse_subpath and redirecting to the proper category page. There are only a few changes to make in the ZMI. Operation for users should stay pretty much the same. The redirect isn't the prettiest solution (a redirect always means more load on the server and more delay for the user), but on a normal small blog this shouldn't matter too much.

Another solution I implemented is Luistxo's description of how to put Technorati tags into your COREBlog's RSS feeds. With both of these, I'll have to wait and see if it works.

Now for the code, we need a new "Script (Python)" in our COREBlog contents. Paste this code in there:

url = context.blogurl()
if len(traverse_subpath) == 2:
    # URL looks like .../categoryname/<cat_id>/<category_name>;
    # the name part is only there to give Technorati a readable tag.
    category_id = traverse_subpath[0]
    category_name = traverse_subpath[1]
    url = url + '/categorylist_html?cat_id=' + category_id
return context.REQUEST.RESPONSE.redirect(url)

Next we change one single line in the entry_body DTML method (which lives in the COREBlog's contents in the ZMI as well). Change this line:

<a href="<dtml-var blogurl missing="">/categorylist_html?cat_id=<dtml-var id>" rel="tag">[<dtml-var name missing="category name is missing">]</a> 

to look like this:

<a href="<dtml-var blogurl missing="">/categoryname/<dtml-var id>/<dtml-var name missing="none">" rel="tag">[<dtml-var name missing="category name is missing">]</a> 

That should do the trick. Use at your own risk, your mileage may vary, etc. etc.

Posted by betabug at 12:24 | Comments (0) | Trackbacks (0)
16 June 2006

Reject No-Reference Trackback

You don't link to me? Don't trackback to me!

To help me against the plague of trackback spam on this server, I've adapted my COREBlog to check for links to my blog on pages that want to trackback to me. The patch is just one changed line. COREBlog is so great, it already has code in there that checks for such links, and one can have such link-less trackbacks moderated (the "Moderate No-Reference Trackback" option in the "Comment,Trackback" settings). With my patch, such trackbacks will now simply be rejected...

Trackback spam is a problem. It's such a big problem for weblog operators that many have decided to turn off trackback reception entirely. The problem is much bigger than comment spam, because comment spam is more easily filtered and (in the worst case) commenters can be ushered through a CAPTCHA procedure - bad as that is by itself. But trackbacks are meant to be automated, so we can only filter some words and moderate. Myself, I have trackbacks moderated and I filter IPs of trackback-spamming machines on my firewall. But I still got the annoying trackback notifications, and I still had to delete the trackbacks from the weblog (even though they would never appear).

The patch changes just one line: it raises an exception instead of setting moderation on. It will only work when the "Moderate No-Reference Trackback" option in the "Comment,Trackback" settings is checked. But beware, this patch is drastic. There might be legitimate trackbacks without a reference to you; those trackbacks would never go through and you wouldn't even get a notification. If there is a human doing the trackback, he/she will get an error "Link required, suspect spam." So use at your own risk, your mileage may vary, etc. etc. You have been warned.

RCS file: /home/betabug/work/cvs/COREBlog/,v
retrieving revision 1.3
diff -u -r1.3
---    20 Mar 2006 10:50:19 -0000      1.3
+++    16 Jun 2006 08:52:02 -0000
@@ -1038,7 +1038,7 @@
                 #Check property for trackback_moderation
                 if"moderate_noreference_trackback") and \
                    not link_to_my_blog(, val['url']):
-                        post_moderation = 1
+                        raise RuntimeError,"Link required, suspect spam."

Posted by betabug at 11:11 | Comments (0) | Trackbacks (1)

07 July 2006

How To Associate Objects to a Cache Manager Programmatically

Got objects?

Zope offers a caching framework with some cache managers available "out of the box". Your friendly Zope documentation tells you how to add a cache manager in the ZMI and associate objects with it. Since the site setup for our application is done from Python code, I want this automated. All I needed was a tiny bit of digging around in the Zope source code, and here we go...

First step: Add a cache manager object. Depending on what we need (HTTP Cache Manager or RAM Cache Manager), our "add" method is a bit different. In this example we are going to use an HTTP Cache Manager to set the caching headers on uploaded images. Here is what we use ('assets' is the object where we add our stuff):

cache_manager_id = 'HTTPCacheManager'
if cache_manager_id not in assets.objectIds():
    # the standard Zope 2 "add" call for an Accelerated HTTP Cache Manager
    assets.manage_addProduct['StandardCacheManagers'].manage_addAcceleratedHTTPCacheManager(cache_manager_id)

We throw the id of our cache manager into a variable, then check for its existence. If it's not around, we add it. Easy enough. The next step is to configure it:

settings = {'anonymous_only':0, 'interval':360000, 'notify_urls':()}
assets[cache_manager_id].manage_editProps('Cache Headers', settings=settings)

The "settings" dictionary is the same stuff that you can edit in the "Properties" tab of the cache manager in the ZMI. We use the manage_editProps method to set this stuff (along with the title). After this we simply want to associate all objects in this folder with this cache manager:

for ob in assets.objectValues():
    if getattr(ob.aq_explicit, '_isCacheable', 0) \
    and ob.getId() not in ['assets', cache_manager_id]:
        # standard ZCacheable API call to set the association
        ob.ZCacheable_setManagerId(cache_manager_id)

This code gets all the objects in our folder (if you need another set of objects, this is obviously the place to change that). It then iterates over them and tests whether the object can be cached and is not the cache manager itself. Then it sets the association with the cache on each object. And with that we're done.

Posted by betabug at 16:27 | Comments (0) | Trackbacks (0)

08 August 2006

Looks like my Zope was down

Obviously restarted now

I noticed just now that my Zope server process had been producing errors for a while. Unfortunately I didn't notice earlier, so two posts I had sent in only went online just now. Looks like I have a hidden problem somewhere; I saw a bit of a traceback about a "CacheException: The data for the cache is not pickleable." I'll have to investigate when I'm back in full Internet civilization, as it would get a bit expensive over the GPRS connection here. Sorry for the inconvenience and I hope the service will work for a while now!
    05 September 2006

    Rewriting All... but a few

    More Zope RewriteRule fun

    The RewriteRule Witch has eliminated a lot of questions about "apache and zope" on the #zope channel. One use case is not really covered yet though: The case where one wants to host everything in Zope, except for a few directories. This is easy to do though...

First step: We get a simple RewriteRule from the witch. We pretend for these examples that Zope serves internally on port 8080. This rule rewrites all requests to Zope:

RewriteRule ^$ \
http://localhost:8080/VirtualHostBase/http/%{SERVER_NAME}:80/VirtualHostRoot/ [L,P]
RewriteRule ^/(.*) \
http://localhost:8080/VirtualHostBase/http/%{SERVER_NAME}:80/VirtualHostRoot/$1 [L,P]

In its current incarnation the witch produces a rule which does too much (it's not doing any harm though). I'll likely update the witch for this special case Real Soon Now(TM). But for the moment, it can be shortened to this one rule:

RewriteRule ^(.*) \
http://localhost:8080/VirtualHostBase/http/%{SERVER_NAME}:80/VirtualHostRoot$1 [L,P]

Second step: Exclude access to some top-level directories (and their content), as these will be served by Apache:

RewriteCond %{REQUEST_URI} !^/(stats|manual)

This is really child's play, but if you weren't brought up on regular expressions instead of mother's milk, you might want a closer look.

Put the RewriteCond line in front of the RewriteRule and Apache will happily hand off everything but these directories to Zope to serve. The opposite case (only a few directories handled by Zope, everything else by Apache, aka "inside out hosting") is covered well by the witch.
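Put together (assuming, as above, Zope listening internally on port 8080 on the same machine), the combined configuration might look like this:

```apache
# Let Apache serve /stats and /manual itself; hand everything else to Zope.
RewriteCond %{REQUEST_URI} !^/(stats|manual)
RewriteRule ^/(.*) \
http://localhost:8080/VirtualHostBase/http/%{SERVER_NAME}:80/VirtualHostRoot/$1 [L,P]
```

The RewriteCond only applies to the RewriteRule immediately following it, so it must sit directly above that rule.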

Posted by betabug at 09:24 | Comments (2) | Trackbacks (0)

29 September 2006

Password Protected RSS Feeds and CookieCrumbler

Disable the cookie login for RSS!

Another one of those "remind myself" posts. I just tried to get past a problem with our "extranet" feeds, which are password protected and served through SSL. The problem was that they are embedded in our Zope site, which uses CookieCrumbler for cookie-based authentication. The feedreaders would always get the login page instead of the feed, even when using proper authentication. Solution: disable cookie login by appending "?disable_cookie_login__=1" to the URL. Of course the state of feedreaders was part of the problem...

The RSS feedreader in widest use on Mac OS X appears to be NetNewsWire Lite. It's actually fine and free, and advertised as "Mac like". My biggest gripe with this thing is the almost total lack of feedback when something goes wrong. There sure is an error log, but it doesn't always log errors. Most of the time the feed just displays an old listing of entries, without even an indication that something went wrong. I had a lot of pain with that back when I tried to get the encoding on my own weblog feeds right.

Other RSS readers don't support password protected feeds at all (as far as I can see), for example Vienna or Shrook. I didn't try some of the "for pay" ones, and maybe putting the login credentials in the URL would work for these too (like "http://username:secretpass@domain.tld/rss_10.xml").

One that works with passwords and is really quite funny for its minimalistic style is RSS Menu. RSS Menu puts another menu item in the Mac's menu bar and gives you a quick and low profile indication of what's new. You need the free menu bar space though, nothing for my 1024x768 at home :-).
    01 October 2006

    Skinning a ZWiki

    Coding at home for the "nautica" ZWiki project

    This weekend I wasn't content with relaxing and enjoying going out with friends for coffee and with the HelMUG guys for... eh... coffee too. I worked on a little coding project with ZWiki. Hacking in my spare time today was fun. Even though I did nearly the same stuff I do at work, I feel relaxed. It seems that because I did something new, was on a "discovery" tour, there was something playful to it, like playing a game. Now, what was it all about, this ZWiki and Skins thing?...

    Screenshot nautica05 design on ZWiki

    You know, I have had this idea that a ZWiki (or any Wiki for that matter) is really a simple little Content Management System, in fact I wrote once about the easiest CMS on Zope. But to make good use of a CMS, the result has to look good and people don't want to spend weeks to restyle the basic ZWiki look. That's what templates are for. At, lot's of free and good looking web templates are available. My little project was to pick one of those and make it "dynamic" with a ZWiki.

    I'm not going into the details here, that will have to wait for a How-To somewhere on But basically the work consisted of dropping the HTML files, images, and CSS files into the ZWiki folder. Then adapting some of the ZWiki templates to produce HTML that "fits" with the openwebdesign template. Even allowing for some time where I had to orientate myself in the ZWiki code base and template system, it took just a couple of hours. There are still some rough corners and some things not yet done. But the intermediary result at is pretty nice (if I may say so myself, and I didn't do the design anyway :-). Motivation enough to write out the procedure in more detail soon.

    Posted by betabug at 21:00 | Comments (2) | Trackbacks (0)