MichaelSamuels


My interest at the moment is in taking wikis to a distributed model, similar in spirit to Usenet: local editing of content, with changes propagated by a distribution system. I've got this partially working at the moment using a CVS-based approach.
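To give a flavour of the CVS approach - a minimal sketch, assuming each page is a flat file inside a checked-out CVS working directory; the function name is mine, not real code from my tree:

 import subprocess

 def save_and_sync(path, text, message="wiki edit"):
     # Write the local edit first; the page is just a flat file
     # inside a checked-out CVS working directory.
     with open(path, "w") as f:
         f.write(text)
     # "cvs update" merges any remote edits into the modified
     # file (clashes show up as the usual conflict markers).
     subprocess.check_call(["cvs", "update", path])
     # Commit pushes the merged result back out, where other
     # nodes pick it up on their next update.
     subprocess.check_call(["cvs", "commit", "-m", message, path])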

I also "run" the development of the O'Wiki wiki clone. See http://owiki.org/FeatureTracking, http://owiki.org/CHAOS, http://owiki.org/OffTopic if you're curious. O'Wiki is a fork of the TWiki project (after 4 years I got bored with the slow development cycle), in case it looks familiar. I'm currently throwing feature after feature into O'Wiki to find out what happens when you do this - what you get out the other side - since I think the standard wiki model is just one step on a route to somewhere very interesting. (TWiki:Main.GrantBow has a lot to answer for - in a good way - in opening my eyes to some very interesting ideas - thanks Grant!)

Currently I'm interested in getting collaboration going between different wiki implementations, rather than hashing out a standard markup. (I view the latter as almost akin to telling the French that they must speak English and nothing else.)

I'm attacking this on two fronts: 1) by creating a BorgWiki, and 2) by designing a workable WikiInterchangeFormat that can be generated by all wikis and can gracefully degrade into wiki markup.

I normally sign my name using my initials on many other systems: MS on owiki.org, MS- on IRC. Other contact routes are [email protected], and the #wiki and #owiki IRC channels.

-- MichaelSamuels

To everyone who's been welcoming - thanks!


Interesting topics: WikiMarkupStandard, WikiInterchangeFormat, InfiniteMonkey. (Anyone else reading: please feel free to add stuff you think I might find interesting :)

Michael, you did a great job with the CDML-Parser on WikiInterchangeFormat. It's much more elegant than my own parser and I like it a lot, although maybe there are a few features missing (but maybe they aren't needed anyway). I never thought about CDML as an exchange format, which makes a lot of sense. If you continue this work, I'll try to contribute useful parts of my codebase and give any support I can. -- HelmutLeitner

Thanks Helmut - much appreciated. I'd be interested in hearing which features you currently use that are missing from the parser. I must say I'm quite impressed with it in many respects - a nice, simple idea. What I'll probably do next is implement the parser in a couple of different languages to see how well it translates between them. (I suspect quite well.) I also want to try a traditional lex/yacc style implementation as well, to see how well that copes. That would be particularly nice, since it would mean the serialised parse tree could be parsed in a single pass - making it particularly effective. One possible glitch here might come from nested elements - we'll see! (Either way, I prefer to play with code to see what works, and CDML certainly seems good so far :) -- MichaelSamuels
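To make the nesting question concrete, here's a minimal sketch of the kind of recursive parser I have in mind - assuming a CDML-like form where elements are bracket-delimited, e.g. [tag ... [inner ...] ...]; the exact syntax here is my illustration, not Helmut's actual CDML:

 def parse(text, pos=0):
     # Returns (nodes, position). A node is either a plain string
     # or a ("tag", children) tuple; recursion handles nesting.
     nodes, buf = [], ""
     while pos < len(text):
         ch = text[pos]
         if ch == "[":
             if buf:
                 nodes.append(buf)
                 buf = ""
             children, pos = parse(text, pos + 1)
             nodes.append(("tag", children))
         elif ch == "]":
             if buf:
                 nodes.append(buf)
             return nodes, pos + 1
         else:
             buf += ch
             pos += 1
     if buf:
         nodes.append(buf)
     return nodes, pos

So parse("a [b [c] d] e")[0] gives ['a ', ('tag', ['b ', ('tag', ['c']), ' d']), ' e'] - nested elements fall out naturally.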

Ok, looking at [1] I notice that the parser ...

Currently the ProWiki parser supports recursive CDML but it is not generally enabled.

-- HelmutLeitner

Thanks for the feedback. After spending some time with CDML, I'm now pretty certain it's the kind of approach that's worth exploring and working on further, so I'll probably spend some time producing a better parser.

-- MichaelSamuels

OK, I got distracted by other things for several months, but I've now written a simple lex/yacc-based CDML parser. It handles nested CDML tags, and allows parameters to be mixed with the text. The current parser is mainly a proof of concept, and only lives in my CVS tree here: http://cerenity.org/viewcvs/viewcvs.cgi/Scratch/CDML/Lexer.py

It's implemented using lex & yacc, and since I'm expecting it to be either an on-disk format or an interchange format, the fact that it does NOT allow square brackets to be used outside their syntactic role doesn't strike me as a problem. (Obviously this isn't the case for things like ProWiki, but for an interchange or intermediate format it strikes me as fine - you just provide a means of encoding [ and ] in the text.)

The benefit here of course is that the syntax is clearly context-free, and can be parsed in a single pass rather than with the usual N-rules, N-passes approach.
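For the curious, the shape of it is roughly this - a trimmed sketch using PLY (a lex/yacc for Python), with token and rule names that are illustrative rather than lifted from Lexer.py:

 import ply.lex as lex
 import ply.yacc as yacc

 tokens = ("LBRACKET", "RBRACKET", "TEXT")
 t_LBRACKET = r"\["
 t_RBRACKET = r"\]"
 t_TEXT = r"[^\[\]]+"   # any run of non-bracket characters

 def t_error(t):
     t.lexer.skip(1)

 def p_items(p):
     """items : items item
              | item"""
     p[0] = p[1] + [p[2]] if len(p) == 3 else [p[1]]

 def p_item_text(p):
     "item : TEXT"
     p[0] = p[1]

 def p_item_tag(p):
     "item : LBRACKET items RBRACKET"
     # Nesting lives in the grammar itself, so nested tags
     # come out as nested tuples in a single pass.
     p[0] = ("tag", p[2])

 def p_error(p):
     raise SyntaxError("parse error at %r" % (p,))

 lexer = lex.lex()
 parser = yacc.yacc()

Then parser.parse("a [b [c] d] e", lexer=lexer) yields the nested list/tuple structure in one pass.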

-- MichaelSamuels

That's great. Thank you for sharing. Do you mean that [ and ] in text could, with your parser, be escaped like \[ and \]? Or encoded like &#91; and &#93;? In fact there are some examples where this turns out to be a PITA, for example programming code that uses [...] for indexed arrays, or (recently) a dot/neato interface for graph generation where "node [fontcolor=red]" is allowed (but avoidable) syntax. -- HelmutLeitner
They would need to be encoded as something like &#91; and &#93;. This would be a real PITA for me too if I were intending to use this as user-entered syntax, I agree, but I'm mainly thinking of it as an on-disk and/or interchange format, and in that scenario you would clearly escape user-level "[" and "]"s into their encoded forms when stored, and convert them back when retrieved. (Currently a lot of my documents are about Python programs, so I'm naturally wary of breaking usage of [ and ] - this is strictly backend :-) ) -- MichaelSamuels
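In other words, something like this round trip - a minimal sketch, where the numeric references &#91;/&#93; are my assumed choice of encoded form:

 def encode(text):
     # Store user-level brackets in an inert form so the
     # format's own [ and ] stay unambiguous on disk.
     # (A real version would also escape bare "&" first,
     # to make the round trip fully lossless.)
     return text.replace("[", "&#91;").replace("]", "&#93;")

 def decode(text):
     return text.replace("&#91;", "[").replace("&#93;", "]")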

A small note: I once implemented a prototype lex/yacc-style wiki parser that could do lists and paragraphs. It worked by lexing '\n*' as a single token, and similar hacks. While quite complex, it did single-pass parsing. -- DavidSchmitt

Hi David - I've done this in the past as well - before I came across wikis, in fact. I was converting a tool, "dokkenou", that used texinfo-like syntax, from Amiga E into SML using lex & yacc. (Actually it took the human readable/writeable syntax and converted it into LaTeX, which I then used for HTML and PostScript generation.) I've also written up in ConsumeParseRenderVsMatchTransform what I think are the key differences between lex/yacc-style parsing and traditional wiki-style parsing. It really boils down (IMO) to this: if a wiki engine fails to parse the text and throws errors (the most common outcome with the lex/yacc approach), you end up with something you can't read. With wiki-style parsing you always end up with something readable, since the original text as edited was itself readable.
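To illustrate the wiki-style end of that spectrum: the match/transform approach is essentially a pile of regex substitutions, one pass per rule, and anything no rule matches passes through untouched - so rendering never fails outright. A minimal sketch, with two example rules of my own choosing:

 import re

 def render(text):
     # One pass per rule (the classic N-rules, N-passes shape).
     # Unmatched text survives as-is, so the output is always at
     # least as readable as the input.
     rules = [
         (re.compile(r"'''(.+?)'''"), r"<b>\1</b>"),   # bold
         (re.compile(r"''(.+?)''"), r"<i>\1</i>"),     # italics
     ]
     for pattern, replacement in rules:
         text = pattern.sub(replacement, text)
     return text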

Essentially, from my perspective, see the paragraph near the end there beginning "An interesting side effect is that if you are able to extract the AST from a wiki through appropriate repeated ..." - I'm using this as a very simple on-disk AST. (And hence lex/yacc is IMO appropriate there, but not at the UI level.) -- MichaelSamuels

Yes, it was quite an ugly hack (and didn't work very well). Probably better to use wiki transformations to insert context-free parsable tokens and use that as the on-disk format? -- David


distributed wiki

Is there anything I could do to help get a distributed wiki online?

Have you seen [WikiFeatures:FailSafeWiki] and Wiki:DistributedWiki?

At CommunityWiki:CommunityMayNotScale, I speculate that perhaps a wiki could be almost entirely hosted on users' computers -- using them as a form of grid computing. (Another social software application claims it is already doing just that.) This naturally leads to good scalability: the more users, the more resources available.

-- CommunityWiki:DavidCary


CategoryHomePage
