Intro, Part II.

The moment came about two years ago, in the middle of a lecture on biologically-inspired algorithms (evolutionary computation, ant-colony optimization, and the like). My attention had strayed so very briefly - and yet the material immediately ceased to make sense. It seemed obvious that continuing to follow the proof on the board was futile - the house of cards inside my head had tumbled down. Dark thoughts came: if my train of thought is so easily derailed, what am I doing in the thinking business? The answer "nothing else has come remotely close to making me happy" won't pay the bills. Floor, ceiling, and proofy whiteboard swirled together as I continued in this misery. It was then that I suddenly realized exactly what had led me to pick up programming when I was young, and to continue studying every aspect of computing I could lay my hands on. It was the notion that a computer could make me smarter. Not literally, of course - no more than a bulldozer is able to make me stronger. I thirsted for a machine which would let me understand and create more complex ideas than my unassisted mind is capable of, in the same way that heavy construction equipment lets mediocre biceps move mountains.

What, exactly, has the personal computer done to expand my range of thinkable thoughts? Enabling communication doesn't count - it makes rather light use of the machine's computational powers. From the dawn of programmable computing, AI has bounced back and forth between scapegoat and media darling, while IA - intelligence amplification - has languished in obscurity. It is very difficult to accurately imagine being more intelligent than one already is. What would such a thing feel like? It is easier to picture acquiring a specific mental strength, such as a photographic memory. The latter has been a fantasy of mine since early childhood. Improving long-term memory would give me a richer "toybox" for forming associations and ideas, whereas a stronger short-term memory might feel like an expanded cache.

Before the lecture was through, I had formed a very clear picture of the mythical photographic memory simulator. With only a few keystrokes, it would allow me to enter any thought which occurs to me, along with any associations. The latter set would be expanded by the program to include anything which logically relates to the entry in question. As the day went on, the idea became less and less clear in my mind, until what once appeared to be a thorough understanding of the problem and its solution had mostly vanished. All that remained was a set of clumsily scribbled notes.
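
Stripped of every ambition, the kernel of that picture fits in a few lines of toy Scheme. The names below (make-skrode, remember!, recall) are invented here purely for illustration, and the real thing would have to infer associations rather than merely chase the ones it is handed; but the shape of the data is roughly this:

    ;; Toy sketch only: a store of thoughts plus explicit associations,
    ;; and a "recall" that chases association chains transitively.
    (define (make-skrode) (list '()))        ; a one-slot box holding an alist

    (define (remember! store thought associations)
      ;; Record THOUGHT together with the things it is associated with.
      (set-car! store (cons (cons thought associations) (car store))))

    (define (recall store key)
      ;; Breadth-first walk of the association graph starting from KEY.
      (let loop ((frontier (list key)) (seen '()))
        (cond ((null? frontier) (reverse seen))
              ((member (car frontier) seen) (loop (cdr frontier) seen))
              (else
               (let ((entry (assoc (car frontier) (car store))))
                 (loop (append (cdr frontier) (if entry (cdr entry) '()))
                       (cons (car frontier) seen)))))))

    ;; (define s (make-skrode))
    ;; (remember! s 'ant-colony '(optimization pheromones))
    ;; (remember! s 'pheromones '(stigmergy))
    ;; (recall s 'ant-colony)  =>  (ant-colony optimization pheromones stigmergy)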

Later, I discovered that the idea was not new at all. This did not surprise me. What surprised me was the fact that none of the attempted solutions had caught on. What I found ranged from the hopelessly primitive to the elephantine-bloated, with some promising-but-unfinished and promising-but-closed-source ones mixed in. None of the apps featured expandability in a homoiconic language, even by virtue of being written entirely in one. From my point of view, the lack of this feature is a deal-killer. I must be able to define programmatic relationships between datums, on a whim - plus new syntaxes for doing so, also on a whim. There must be no artificial boundary between code and data, for my thoughts are often best expressed as executable code - even when entirely unrelated to programming.
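
To make the point concrete, here is the sort of thing I mean, in toy form; the macro name and the registry below are invented for this post and belong to no existing system:

    ;; Toy sketch: a new relationship between datums, and a new syntax for
    ;; declaring it, both conjured up "on a whim".  Portable Scheme, meant
    ;; to be pasted at a REPL's top level.
    (define *relations* '())                 ; registry of named relations

    (define (add-relation! name pred)
      (set! *relations* (cons (cons name pred) *relations*)))

    (define-syntax define-relation
      (syntax-rules ()
        ;; (define-relation (name a b) body ...) defines the predicate
        ;; AND registers it as a datum in its own right.
        ((_ (name a b) body ...)
         (begin
           (define (name a b) body ...)
           (add-relation! 'name name)))))

    (define-relation (same-length? x y)
      (= (string-length x) (string-length y)))

    ;; The relation is data as well as code:
    ;; ((cdr (assq 'same-length? *relations*)) "skrode" "scheme")  =>  #t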

Thus I began work on Skrode - named after the combination of go-cart and artificial short-term memory used by a race of sentient plants in a well-known novel. The choice of languages came down to Common Lisp vs. Scheme, as the Lisp family is the only environment where I do not feel caged by a stranger's notions of what programming should be like. I've always felt CL to be ugly and bloated with non-orthogonal features. Scheme, on the other hand, is minimal to the point of uselessness unless augmented with non-standard libraries. Neither CL nor any of the existing Schemes seemed appetizing. What I needed was a Lisp system which I could squeeze into my brain cache in its entirety - thus, practically anything written by other people would not fit the bill.

By this time, I had come to believe that every piece of information stored on my computer should be a first-class citizen of the artificial memory. The notion of separate applications in which arbitrarily divided categories of data are forever trapped seemed more and more laughable. Skrode would have to play nicely with the underlying operating system in order to display smooth graphics, manage files, and talk TCP/IP. Thus I set to work on a Scheme interpreter, meant to be as simple as possible while still capable of these tasks. This proved to be a nightmarish job, not because of any intellectual difficulty but because of the extreme tedium. None of the mature cross-platform GUI libraries play nicely with the Lispy way of thinking about graphics. (Warning: don't read Henderson's paper if you are forced to write traditional UI code for a living. It might be hazardous to your mental health.) I learned OpenGL, and found it to be no solution at all, for the same reasons.
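
For anyone who has not read Henderson: the whole approach boils down to pictures being ordinary procedures, combined by ordinary procedures. The toy rendition below is my own, not LoperOS or Skrode code, and the representation of points and segments is invented for the example:

    ;; Toy functional geometry: a picture is a procedure from a frame to a
    ;; list of line segments, so combining pictures is plain composition.
    (define (vec x y) (cons x y))
    (define (vx v) (car v))
    (define (vy v) (cdr v))
    (define (v+ a b) (vec (+ (vx a) (vx b)) (+ (vy a) (vy b))))
    (define (v* s a) (vec (* s (vx a)) (* s (vy a))))

    (define (frame origin e1 e2) (list origin e1 e2))   ; origin + two edges
    (define (origin f) (car f))
    (define (edge1 f) (cadr f))
    (define (edge2 f) (caddr f))

    (define (coord-map f)              ; map a unit-square point into frame F
      (lambda (p)
        (v+ (origin f) (v+ (v* (vx p) (edge1 f)) (v* (vy p) (edge2 f))))))

    (define (segments->picture segs)   ; picture from unit-square segments
      (lambda (f)
        (map (lambda (s) (cons ((coord-map f) (car s)) ((coord-map f) (cdr s))))
             segs)))

    (define (beside a b)               ; A in the left half of F, B in the right
      (lambda (f)
        (let ((half (v* 0.5 (edge1 f))))
          (append (a (frame (origin f) half (edge2 f)))
                  (b (frame (v+ (origin f) half) half (edge2 f)))))))

    ;; (define diag (segments->picture (list (cons (vec 0 0) (vec 1 1)))))
    ;; ((beside diag diag) (frame (vec 0 0) (vec 1 0) (vec 0 1)))
    ;;   =>  two segments, one spanning each half of the frame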

My concept of the system expanded from that of a hierarchical notebook to a complete programming system. Yet I knew that it could not satisfy me if it contained any parts not modifiable from within itself. Emacs would stand a chance, were it not for its C-language core. Then I discovered that system-level reflection is not an unreasonable demand. I found out that instant documentation and source access for any object known to the user/programmer is a reasonable thing to expect. That it is possible to halt, repair, and resume a running program. And, depressingly, that one could do these things on 1980s vintage hardware but not on modern machines.
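
To give a taste of what "halt, repair, resume" even means to someone raised on segfaults: the toy below (plain portable Scheme, nothing like the real condition and restart machinery of a Lisp Machine or of Common Lisp) suspends a computation at the point of trouble and lets the user re-enter it with a corrected value instead of killing it.

    ;; Toy sketch of halt/repair/resume using first-class continuations.
    (define resume #f)                 ; will hold the suspended computation

    (define (checked-div a b)
      (if (zero? b)
          (call-with-current-continuation
            (lambda (k)
              (set! resume k)          ; halt: hand the continuation to the user
              'halted))
          (/ a b)))

    (define (demo)
      (list 'result (checked-div 10 0)))

    ;; At a REPL:
    ;;   (demo)       =>  (result halted)   ; suspended, not dead
    ;;   (resume 5)   =>  (result 5)        ; re-entered at the point of the
    ;;                                      ; halt, as if checked-div returned 5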

The wonders of the Lisp Machine world and the causes of its demise have been discussed at great length by others. The good news is that CPU architectures have advanced to the point where the fun can start again, on commodity hardware. I have worked out some interestingly efficient ways to coax an AMD Opteron into becoming something it was never meant to be.

Skrode remains in cryostasis, and awaits the completion of Loper - an effort to (re)create a sane computing environment.

Learning an API does not have to feel like dealing with a Kafkaesque bureaucracy.

And that is enough hot air for the time being. The next post will speak only of usefulness. Such as how one might build an OS where yanking and re-inserting the power cord results in no tears. And other outlandish yet very practical things.

This entry was written by Stanislav, posted on Tuesday, November 27, 2007, filed under Hot Air, LoperOS, Philosophy.

31 Responses to “Intro, Part II.”

  • j says:

    Sounds like you might be kind of a douche. Ever thought about redesigning the car? Because driving with my feet always made more sense than these hands on a clumsy wheel.

  • j says:

    Hey, on the off chance you're the next Linus Torvalds though, more power to you. Just don't burn yourself out before you make it to the final test: was the redesign worth it?

  • Darkstar says:

    I lol'd hard at this:

    "Scheme, on the other hand, is minimal to the point of uselessness unless augmented with non-standard libraries."

    If ever there was a language with standardized libraries, it is Scheme (google for SRFI).

    I stopped reading after that comment.

    -Darkstar

    • Myaushka says:

      And yet, http://www.r6rs.org/final/r6rs.pdf states:

      'While it was possible to write portable programs in Scheme as described in Revised⁵ Report on the Algorithmic Language Scheme, and indeed portable Scheme programs were written prior to this report, many Scheme programs were not, primarily because of the lack of substantial standardized libraries and the proliferation of implementation-specific language additions.'

  • rm says:

    better?, suck?

    hrrm, language design is pretty subjective in this area. people still use fortran for a reason: it was designed to do what it does. you should really take a look at squeak's design if you're interested in an OS design that is centered entirely around a single language. this has been the paradigm in smalltalk since its inception, and as you pointed out symbolics used to produce hardware, so it's not entirely new. but i would suggest that UNIX is the most popular OS design atm, and i suspect that most people at some point conclude that UNIX is also a very decent language design (assuming you look at it from this viewpoint).
    it sounds like your main focus is on an Intelligence Amplification environment, and i would first and foremost strive not to lose focus of that goal while delving into the details of implementation. stick your head up every once in a while and make sure you're heading in the direction you want to be. good luck 🙂

  • A says:

    Please change your links to something that doesn't require payment. For example the Henderson papers can be found on his own site:

    http://users.ecs.soton.ac.uk/ph/papers/funcgeo2.pdf
    http://users.ecs.soton.ac.uk/ph/funcgeo.pdf

  • Ingar Roggen says:

    So, what you hoped for was "that a computer could make me smarter. Not literally, of course – no more than a bulldozer is able to make me stronger. I thirsted for a machine which would let me understand and create more complex ideas than my unassisted mind is capable of, in the same way that heavy construction equipment can let mediocre biceps move mountains." To achieve this, you must first study the Gestalt laws and learn to Gestalt-program your own mind. For, we are self-programming Gestalt-computers, Stanislaw. Relax and look at these five patterns while sensing their perceptual force. Note how they are self-defining in any language. Through perceptual induction they govern all the logical deductions that you will ever be able to make:
    SIMILARITY | | | | | | |
    PROXIMITY || || || |
    CLOSURE ][ ][ ][ ]
    DESTINY >>>>>>>
    CONTINUITY —|—|—|—
    Modus Ponens is made of this… To achieve mastery of Gestalt self-programming you must start using the method of mentat computation (after Frank Herbert, of course) that I describe in the paper in this link: http://iro.on-rev.com/Statistical_masking.htm. Then you can return to using electronic computer programming languages, after having established your own Gestalt logical basis.
    Regards
    Ingar

  • Feliksas says:

    Greetings,
    I've found your article about interfacing to the SID 6581 (http://www.loper-os.org/vintage/parallelsid/parasid.html) and I liked the sample you played: http://www.loper-os.org/vintage/parallelsid/mp3/ussr.mp3 Could you be so kind as to email me the original .sid file or the raw SID command dump (address-data) of this melody, if that's possible?
    Thanks in advance,
    Andrew

  • BlindMind says:

    Have you heard of Anki? It's a flashcard program that presents you with cards just as you are about to forget them. It's super useful for remembering anything you want to know.

    • Stanislav says:

      Dear BlindMind,

      I have no problem remembering formulae, quotes, and other tidbits quite well after seeing them just once. The objective described in this post was the creation of an automated system for storing and inferring connections between a set of items measured in the gigabytes. Flash cards and the like are of no use here.

      Yours,
      -Stanislav

  • Ivan says:

    Stanislav,

    Where are you with this goal today? I thirst for the same thing, as I want the machine to tell me what I *could* have associated if I had enough operational memory capacity. And not even on the global level... just on the personal level of the things I've read and seen in the past.

    Will I remember that I wrote this and the accompanying thoughts so poorly expressed in these words? Will I remember enough of your writings that I read today to consciously (and thus, maybe, more effectively) inform my thoughts in the future? And why would I need to read all of them, instead of simply including Stanislav's output in my association machine? RSS is such a poor substitute.

    Thanks,
    Ivan

  • Nilern says:

    Have you looked at TiddlyWiki? It's been good enough for me. It's in JavaScript, though.

    • Stanislav says:

      Dear Nilern,

      Yes, I did, some years ago. Vaguely interesting, and perhaps more than adequate for storing plain/hyperlinked text. But - browser apps do have a tendency to break and malfunction in annoying ways over time as browsers 'progress.'

      Yours,
      -Stanislav

  • datenwolf says:

    Hi, I think this might be very much in line with LoperOS: a Lisp implementation at the metal level. Still just Verilog, but it could be implemented in silicon any time now:

    http://www.mail-archive.com/picolisp@software-lab.de/msg04823.html

  • Sean says:

    Seems like you might like Urbit: http://urbit.org/

  • Toafan says:

    Just now, after reading all the comments, I'm beginning to wonder if maybe I want the same thing, just coming from another direction. For some time I've wanted a web browser that would remember what I read, when I read it, and how I found it; would keep track of anything I said in response; would let me search through all of that; and would be usable from wherever.

    I'm interested in this. I'll read your progress reports. Do you have pages of things others could potentially do to help?

  • Julian says:

    You should also have a look at the Forth systems around. The power of Lisp, but without all the ugly parentheses...
    Make sure to have a look at ColorForth and this one here:
    http://www.ultratechnology.com/aha.htm

  • winter says:

    Firstly, please excuse any typos, as I'm typing this one-handed after a shoulder injury.

    To my comment: have you looked at Plan 9, and more specifically, Inferno running on the Dis virtual machine? I'm quite interested in your thoughts on this, as I've considered a similar idea from a different angle, and as far as I managed (not very experienced in this realm) it seemed like one of the best-engineered and most modifiable systems out there. Oberon is also very interesting in this regard and fits the entire system, including graphics, text, and networking, in less than 10k LOC. It also doesn't consider code or text or anything as separate entities; code is merely text that gets executed.

    I'm mainly interested in your thoughts on Inferno and Dis, but I've noticed a common thread running through all of these systems.

  • Bobbart says:

    I think it's time for you to seriously consider the possibility that you're not going to usher in a profound renaissance in computing, and further, the possibility that you're not nearly as smart or creative as you currently think you are.

    My expectation is that the level of utility you'll actually provide through releases (as opposed to grandiose statements of intent) will be less than, say, a totally average Javascript library. And that's only because this hypothetical Javascript library has the important advantage of existing.

  • Cleverson says:

    Hello Stanislav, I came across your blog yesterday and couldn't stop reading. I'm mostly a hobbyist programmer without much practice. Once in a while, I read someone saying how wonderful Lisp languages are, but I never got quite interested. Now your articles seem quite clarifying and convincing in terms of identifying what is wrong with modern personal computing and how to fix it, so I'm interested again in giving Lisp a try. What Lisp language would you recommend nowadays, especially for a beginner? I already know a little Common Lisp and Racket, and upon searching a bit, I came across this article; I would like to know whether you agree with him:
    http://fare.livejournal.com/188429.html

    In case I go for Common Lisp, are there good learning materials besides http://www.gigamonkeys.com/book/ ?

    Greetings.

  • Cleverson says:

    Have you considered opening a crowd-funding campaign to sponsor the manufacturing of a chip more suitable to your ideas, e.g. one that processes high-level instructions? I would donate a little. 🙂

    • Stanislav says:

      Dear Cleverson,

      If somewhere there's a concept that's a poorer candidate for the crowd funding circus than a ~$billion ASIC that ~ten people want, I should like to know what it is.

      Yours,
      -S

  • Cleverson says:

    Hi, well, maybe there are many people who want it, but haven't yet manifested that publicly, so we simply haven't met them, e.g. you didn't meet me until I posted here...

    Yours,

  • Bystroushaak says:

    I feel your pain. Inspired by a mix of science fiction and old research papers (Engelbart, Bush, Licklider, Kay, ...), I too wanted an intelligence amplifier in the form of a personal wiki, an information manager, and later a whole programming environment.

    I am walking in a similar direction, but on the path of a Smalltalk-like, or more precisely Self-like, system. You should check it out. It is not in good shape, but it is definitely worthy inspiration.

  • Mike says:

    Here is a ResearchGate link to a free copy of the paper on the Lisp-based microprocessor:


    https://www.researchgate.net/publication/220421958_Design_of_a_LISP-based_microprocessor

    I generally agree with most of the sentiments expressed in this post. Modern computing environments feel supremely unsatisfactory on a great number of levels and perhaps a virtual Lisp processor running atop a lightweight, easily-retargeted hardware abstraction layer (for modern iron and beyond) represents a viable path forward (for those that seek it, not the world--alas, much of that is already lost). If nothing else, it represents a solitary beacon of sanity in the otherwise bleak and unrelenting darkness... such a shame most of the hardware available to run it ships with Fritz and flaws; perhaps generic ARM64 or RISC-V will eventually have its day to shine. Otherwise, the only kit you'll ever be able to trust will be cobbled together from TTL or 70s/80s era processors (Z80, M68k, etc) and lacks the spare cycles needed for many modern computing tasks (encryption, compression, video, audio, etc) without a handful of dedicated subordinate processors (Amiga, etc). I suppose it all comes down to whether one values trust and understanding over literally every other consideration or convenience. It's a fairly categorical proposition. It's good to hear that at least one person is trying; whether he succeeds or fails, perhaps progress can still be obtained by stacking the corpses of each attempt--you may never reach the moon but you still stand well above those resigned to swimming in their own digital faeces.

    -M

    • Stanislav says:

      Dear Mike,

      >...perhaps generic ARM64 or RISC-V will eventually have its day to shine...

      ARM64 currently "shines" to the very limited extent that I was able to turn up one model that can boot a reasonable Linux without a "blob".

      Aside from this -- ARM is arguably one of the ugliest, "hairiest" (and least-documented) CPU archs ever marketed. AFAIK all, including the cheapest Chinesium ARMs, are blind copies of a massive, unauditable British gate-list "blob", such as is supplied to "licensed" vendors.

      As for RISC-V, I'm currently unaware of anyone selling a workstation-grade board with it. It seems to be confined to the microcontroller ghetto.

      >...Otherwise, the only kit you'll ever be able to trust will be cobbled together from TTL or 70s/80s era processors (Z80, M68k, etc) and lacks the spare cycles needed...

      A non-Xilinx-Altera duopoly FPGA with documented, homogeneous internals that could drive a bank of DRAMs would more than handily suffice for building an experimental sane computer.

      Presently no such thing is known to exist on the open market. And not because it is hanging on some titanic open problem in electronic engineering, but simply because no single literate person has to date had, simultaneously, the required money bag, the courage (to piss on patents; the Chinese do piss on them, but are happy to make Altera clones for the mass market, so as to use a "warez" Verilog compiler from the USA), and the necessary ideological alignment.

      Hypothetically, a method of manufacturing VLSI-equiv. semiconductors without the massive NRE cost associated with traditional photolithography would also do the trick. But AFAIK this is somewhere between "star flight engine" and "immortality pill" crackpot terrain, as the state of the art presently stands.

      Yours,
      -S

  • John Doe says:

    I just want to thank you for your site. I have visited sporadically since I was a young angsty teen, and I have always appreciated your posts and insight. You've led me in many different directions, regardless of what I actually understood of what you've written.

    Just wanted to let you know that I have always had a great fondness for your thoughts.

    Take care.
