The ZZ-collection -- by Eugene Reimer circa 1986...1996

These C-programs for MS-DOS were developed with Borland's Turbo-C and with the Watcom 32-bit C-compiler plus its runtime "DOS-Extender", and were tested on PC-hardware running MS-DOS and IBM-DOS, between 1986 and 1996.  I acquired my first PC in 1986 while living in Seal Beach CA.  I also lived in Winnipeg and near St-Jean-Baptiste MB (on the Red River) during this time-period.

My first PC was a Kaypro laptop, from the days when a PC had no hard-drive.  That Kaypro 2000 laptop was the first laptop PC according to wikipedia.org/wiki/History_of_laptops;  it was introduced in 1985, and I managed to buy a used one only a year later.  After acquiring a new PC in 1987 I only rarely used the Kaypro:  having once used a PC with a hard-drive makes a floppy-only machine almost unbearably cumbersome.  In mid-1987 my employer supplied a new 286-powered PC with a 40MB hard-drive;  for home use the 8088-powered Kaypro soldiered on until Christmas-time of 1987, when I bought a 386-powered PC -- so I had a cpu with proper 32-bit addressing well before either of the rival operating-systems, OS/2 from IBM and Windows from Microsoft, had even finished implementing Intel-286-style addressing with its nightmarish gluing-together of 64KB chunks.  A wise friend (Peter Buhr) once remarked that the silicon in 286 cpus would have been better left as sand on the beach -- but that's a subject for a separate rant.  In 1989 my home-office got an employer-provided Intel-386-powered IBM PS/2 model 70 with 8514/A video-adapter and 8507 monitor -- bleeding-edge stuff -- and I recall being shocked by the price-tag:  the extra-memory option for the video-adapter alone cost roughly a year's pay for a typical working bloke.  Clearly the choice was not mine (some of it was, but not its having to be by IBM).

The family-tree (genealogy) programs are non-C, written in 1986 in Seal Beach California.  These programs are really one program translated into different languages, a new version in a new language being written whenever I acquired a compiler or interpreter for my first PC.  The program uses a true tree to represent a family-tree, which isn't a tree (not as mathematics defines one).  The trick is to reverse the direction of the links:  we think of a family-tree as having Adam & Eve at the root (convenient even for a non-creationist:) and youngsters at the leaves, but that does not a mathematician's tree make, since it lacks unique parentage;  reverse the links, though, and each person has exactly two links, to mother and father, giving a true binary tree, as sketched below.  The program (written before I'd heard of GEDCOM files) reads a flat-file GEDCOM-like representation of person-info (ie: birth records), builds the tree form, then answers how-related questions just as my Lifelines "related" program does.  I wrote it first in Intel 8086 Assembly language, and in Pascal (Turbo-Pascal), then a few more times in other languages including a Prolog version (it's a stretch to call that one identical:), since it was a fine way to learn a new programming-language as well as the associated tools.  After acquiring a C-compiler in 1987, I don't recall using any of the other translators again.  I also tried to get Del Plett (as president of the Hanover Steinbach Historical Society, and board-member of the Manitoba Mennonite Historical Society) interested in assembling a collection of family-tree books in electronic form;  what I had in mind is exactly what CalMenno.Org has since created as GRANDMA;  however, such "vision" was lacking and my program went nowhere;  I first heard of the CalMenno project several years later.
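A minimal sketch in C of the reversed-links idea (my reconstruction for this page -- the originals were in Assembly, Pascal, and so on, and the names here are invented):

    /* each person carries exactly two upward links -- to mother and
       father -- so the ancestry structure is a true binary tree
       (NULL standing in for an unknown parent) */
    typedef struct Person {
        const char    *name;
        struct Person *mother, *father;
    } Person;

    /* depth of an occurrence of anc among p's ancestors, mother's side
       searched first:  0 = self, 1 = parent, 2 = grandparent, ...;
       -1 means anc is not an ancestor of p */
    int ancestor_depth(const Person *p, const Person *anc)
    {
        int d;
        if (p == NULL) return -1;
        if (p == anc)  return 0;
        d = ancestor_depth(p->mother, anc);
        if (d < 0) d = ancestor_depth(p->father, anc);
        return d < 0 ? -1 : d + 1;
    }

A how-related question then reduces to finding a person who is an ancestor of both parties and comparing the two depths:  equal depths give sibling and cousin relations, unequal depths the uncle/aunt and "removed" variants.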

I went out of my way to make the test-data absurdly incestuous (the "I am my own grandpa" sort of thing), since that provides a lot of different relatedness results with only a small number of persons -- though it was probably also inspired by the Adam & Eve story.

My "revolving dice" graphics experiment from 1989 provides a demonstration of how to use Intel-386 32-bit-arithmetic instructions within the 16-bit Turbo-C Borland IDE  (worth talking about in 1989;  laughable in 2002;  ancient history when PCs went to 64-bits).  Getting the redrawing code fast enough for movie-like smoothness was a challenge on 1980's hardware.  The program includes a well-developed package of graphics primitives making transparent the different low-level code for CGA, EGA, VGA, or 8514 video-harware  (which is more "ancient history").

Also of interest is the camera-like user-interface, where one of the things the user can tweak is the "perspective effect" -- what the reader will remember from grade-school art class as "perspective", the same thing that changes when zooming the lens on a camera, except that this lens can go further into fish-eye territory than typical zoom-lenses can.  Here the size is "normalized", so that what's most noticeable when zooming a real lens (the subject getting bigger) does not happen -- only the distortion known as "perspective effect" changes in degree.  Staying with the camera-analogy, adjusting this setting does 2 things simultaneously:  the camera-operator moves physically further away from the subject as she adjusts the lens toward longer focal-length, so that the size of the subject remains constant (film-makers know this combined move as the dolly-zoom).  You may prefer the other-way-round description:  you move closer as you unzoom.  Besides being easier to say, it's nicer to think of increasing the perspective "distortion", since that's what this adjustment is about.  Incidentally the math, although mildly mind-boggling, is a lot easier than this description will lead you to think.  The math reveals that the more you crank up this perspective effect, the closer to the screen your eyes need to be for the picture to look realistic, until you reach the point where your nose prevents getting close enough;  the same is true of extremely wide-angle (fish-eye lens) photos.
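The normalization amounts to one line of code.  A minimal sketch of the idea (my illustration for this page, not the program's code):  it comes from the usual projection x' = x*f/(d+z) after tying the camera-distance d to the focal-length f so that the scale at the subject-plane stays fixed:

    #include <stdio.h>

    /* p is the tweakable "perspective effect":  p == 0 gives an
       orthographic view (infinitely long lens at infinite distance);
       larger p means a wider-angle lens moved in closer.  A point in
       the subject-plane (z == 0) always projects at the same size;
       only points in front of or behind that plane move. */
    void project(double x, double y, double z, double p,
                 double *sx, double *sy)
    {
        double scale = 1.0 / (1.0 + p*z);   /* == 1 at the subject-plane */
        *sx = x * scale;
        *sy = y * scale;
    }

    int main(void)
    {
        double p, sx, sy;
        for (p = 0.0; p <= 1.0; p += 0.5) {       /* crank up the effect */
            project(1.0, 1.0, 0.5, p, &sx, &sy);  /* a corner behind the plane */
            printf("p=%.1f -> (%.3f, %.3f)\n", p, sx, sy);
        }
        return 0;
    }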

The reader may be surprised to learn that this little moving-picture program was actually a first step toward writing a CAD/CAM system, which remains unfinished.  Research for this system included learning the math for making 2-dimensional "photos" (projections) of a 3-dimensional virtual world, and other topics in pure and applied mathematics.  The project began with my thinking about the best shape for a canoe.  Whence came my interest in curves that behave exactly like the physical splines used by someone designing a canoe, rather than the cubic splines or Bezier curves typically used by computer-software as a substitute for those physical splines.  This brought me to something much more difficult than I had expected:  the need for a new transcendental function, since the shapes taken by the draftsman's spline cannot be described using the trig-functions sine and cosine, nor the hyperbolic-functions sinh and cosh.  Note that I did this when my only access to the internet was email;  the world-wide-web did not yet exist;  Google didn't exist.  (Nor was there anything google-like, although I've since heard that full-text or indexed searching of files available by FTP, of Gopher-space, and perhaps of Usenet articles had been possible.)  My research involved spending time at the U of Manitoba libraries (Science and Engineering), and saw me learning more about how to find recent articles pertaining to a given subject than I had learned during all my years as a student.  However I have no software to show for it:-)
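The mathematical heart of it, for the curious (standard theory, reconstructed here rather than taken from my old notes):  the physical spline settles into the shape minimizing its bending energy -- Euler's so-called elastica -- and with \theta(s) the tangent-angle along the curve and \kappa(s) = \theta'(s) its curvature, the problem is

    minimize  E = \int_0^L \kappa(s)^2 \, ds

whose Euler-Lagrange equation (after a rotation of coordinates) is

    \theta''(s) + \lambda \sin\theta(s) = 0

-- the pendulum equation.  The small-swing approximation \sin\theta \approx \theta gives back sines and cosines, but the full equation is classically solved by Jacobi's elliptic functions:  transcendental functions genuinely beyond the trig and hyperbolic families.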

(An aside: during this research in 1991, in the early days of my semi-retirement, when commuting from St-Jean to Winnipeg I would occasionally manage to join a bunch of former fellow U-of-Manitoba grad-students & profs for Friday happy-hour at a Winnipeg bar, and on one such occasion I first heard of HTML, and about how this simple markup-language with hyperlinks, and these triple-double-you things called "home pages", were going to be the next big thing.  I also remember a 3-way pun in which a "user" is someone addicted to Usenet;  even the pre-web internet had its enslaved users:-)

My Unix-tools-for-DOS include grep and d (an ls-like or dir-like program).  My D.EXE has globbing that goes beyond Unix-globbing (beyond what Unix-globbing did at that time) by having a meta-char that matches a path-segment containing slashes, with a notation that's still worth looking at.  My grep supports more-extended-than-extended regexes (regular-expressions), having AND, NOT, and SUBTRACTION (set-difference) as well as OR.  It also offers alternatives to the line as the unit of matching:  file-at-a-time matching is self-explanatory, although why it's useful may not be immediately obvious;  the third choice of matching-unit is not easily explained, but is useful when searching non-line-oriented files such as machine-language programs -- given an unanchored regex (one containing neither "^" nor "$" metachars), it treats each occurrence as a match, while displaying a specifiable amount of surrounding context.
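To give the flavor of segment-crossing globbing, a minimal sketch (borrowing the now-familiar "**" as the cross-slash metachar -- D.EXE's actual notation differs):

    /* "*" matches within one path-segment;  "**" also crosses "/"
       boundaries;  "?" matches any single char except "/".
       Returns 1 on a match, 0 otherwise. */
    static int glob_match(const char *pat, const char *str)
    {
        const char *s;
        if (pat[0] == '\0')
            return str[0] == '\0';
        if (pat[0] == '*' && pat[1] == '*') {
            for (s = str; ; s++) {        /* try every suffix, "/" included */
                if (glob_match(pat + 2, s)) return 1;
                if (*s == '\0') return 0;
            }
        }
        if (pat[0] == '*') {
            for (s = str; ; s++) {        /* every suffix up to the next "/" */
                if (glob_match(pat + 1, s)) return 1;
                if (*s == '\0' || *s == '/') return 0;
            }
        }
        if (str[0] != '\0' && (pat[0] == '?' ? str[0] != '/' : pat[0] == str[0]))
            return glob_match(pat + 1, str + 1);
        return 0;
    }

With that, a pattern like src/**/x.c matches src/a/x.c and src/a/b/x.c alike -- one metachar spanning any number of directory levels.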

My Unix-for-DOS-tools do their own low-level IO, with caching that replaces the Microsoft code for locating a file as well as for reading it;  this provides more than an order-of-magnitude performance-improvement over the operating-system code.  Grep runs in less than one-tenth the (clock-on-the-wall) time with my IO-routines versus the ones provided by the operating-system -- dramatic on my large test-case, searching every file on my hard-drive.  I had noticed that their routines were bad, but was still astounded by how much they could be improved upon with a very simple strategy.  I'm curious whether present-day Windows-users are still handicapped by this hideously bad performance;  however I presently have no Windows system to investigate.
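The outer shape of that simple strategy, as a sketch (my reconstruction for this page;  the real code went further, replacing the DOS directory-search for locating files as well):  one big transfer per buffer-refill instead of hundreds of small operating-system reads:

    #include <fcntl.h>
    #include <io.h>              /* Borland/Watcom:  open(), read() */

    #define CHUNK 16384u

    typedef struct {
        int      fd;             /* handle from open() */
        unsigned pos, len;       /* read-cursor and fill-level of buf */
        char     buf[CHUNK];
    } CFile;

    /* next byte of the file, refilling the buffer with one large read
       when it runs dry;  returns -1 at end-of-file */
    static int cgetc(CFile *f)
    {
        if (f->pos >= f->len) {
            int n = read(f->fd, f->buf, CHUNK);
            if (n <= 0) return -1;
            f->len = (unsigned)n;
            f->pos = 0;
        }
        return (unsigned char)f->buf[f->pos++];
    }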

My grammar and parsing tools include the unfinished DOG and PUP yacc-like parser-generators (named for downward and upward, downward being commonly known as Top-down...).  DOG has innovations that will make Top-down methods superior to LALR (bottom-up) methods, in terms of enabling a programming-language to be defined with a grammar that's easy to read and that yields parse-trees in a form that programmers will prefer, whether they be compiler-writers, writers of other transforming tools, or users of such tools.  I'm getting at something subtle yet important here:  how a grammar is written has profound effects on the sort of trees one gets as parse-trees;  and I'm envisioning a world where parse-trees are much more important than they have been so far.
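A small example of the tree-shape point (my example here, not taken from DOG):  a bottom-up (yacc-style) grammar wants left-recursion for a list,

    list : item
         | list ',' item ;

which hands every tool a lopsided left-spine -- ((a,b),c) for the input a,b,c -- whereas a top-down grammar can use repetition instead,

    list : item (',' item)* ;

and deliver the flat n-ary node (a b c) that compiler-writers, tool-writers, and tool-users actually want.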

My Cobol-transforming tools include a (vaguely) sed-like way to express systematic transformations in parse-tree terms.  This is not easily described in prose;  the example below may help.  Using this tool provided the biggest "Eureka" moment of my entire life.  The second of these programs includes a botched implementation of the Aho-Sethi-Ullman algorithm for data-flow analysis, in which a minor flaw became a major one when I failed to find it in time and settled for an ill-considered workaround;  it led to bizarre results, much head-scratching, and wasted time for the programmers involved in that legacy-Cobol-code conversion project, and it is what I'm most ashamed of in my computer-programming career.
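As for that not-easily-described transformer, an invented illustration (in an invented notation -- not the tool's actual syntax) may convey the flavor:  a rule such as

    match:    ADD <x> TO <y>
    replace:  COMPUTE <y> = <y> + <x>

is applied at statement-nodes of the parse-tree rather than to lines of text, so it fires regardless of line-breaks, column-positions, continuation-lines, or interleaved comments -- exactly what a textual sed cannot promise.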

In 1989 I turn 40, notice my ability to program is beginning to deteriorate, and realize I'll never ride through Paris in a sports car with the warm wind in my hair  (meaning I'll never get to write that operating-system -- Ms Faithfull surely intended the metaphor I find in her poetry).

These programs only cover the latter part of my programming career;  programs from the "mainframe era" are probably lost forever, although I have some paper listings and an old reel of tape that could be read only by hardware one might find in a museum.  The highlights of my pre-PC-era computer-programming career are described in rants/programming-highlights.htm.

Links to these programs:
...  [coming soon]



Send your questions, suggestions, requests to ereimer@shaw.ca.

[the programs were written between 1986 and 1996;  a rough-draft of this description was written in 2002;  it went "up" in 2011-03.]
[why are they called the ZZ-collection?  obviously because their directory was to be last in an alphabetically-ordered list:-]