Planet Croquet

blogs about Squeak, Pharo, Croquet and family
planet squeak - planet squeak es - planet squeak jp - planet croquet - planet squeak code - planet smalltalk

September 08, 2021

Historical - Vanessa Freudenberg

Deconstructing Floats: frexp() and ldexp() in JavaScript

While working on my SqueakJS VM, it became necessary to deconstruct floating-point numbers into their mantissa and exponent parts, and to assemble them again. Peeking into the C sources of the regular VM, I saw they use the frexp() and ldexp() functions found in the standard C math library.

Unfortunately, JavaScript does not provide these two functions. But surely someone must have needed them before me, right? Sure enough, a Google search turned up a few implementations. However, an hour later I was convinced that none of them was fully equivalent to the C functions. They were imprecise: deconstructing a float using frexp() and reconstructing it with ldexp() did not result in the original value. But that is the basic use case: for all float values, if

[mantissa, exponent] = frexp(value)

then

value = ldexp(mantissa, exponent)

even if the value is subnormal. None of the implementations (not even the complex ones) really worked.

I had to implement it myself, and here is my implementation (also as JSFiddle):

function frexp(value) {
    if (value === 0) return [value, 0];
    var data = new DataView(new ArrayBuffer(8));
    data.setFloat64(0, value);
    var bits = (data.getUint32(0) >>> 20) & 0x7FF;
    if (bits === 0) { // denormal
        data.setFloat64(0, value * Math.pow(2, 64)); // exp + 64
        bits = ((data.getUint32(0) >>> 20) & 0x7FF) - 64;
    }
    var exponent = bits - 1022;
    var mantissa = ldexp(value, -exponent);
    return [mantissa, exponent];
}

function ldexp(mantissa, exponent) {
    var steps = Math.min(3, Math.ceil(Math.abs(exponent) / 1023));
    var result = mantissa;
    for (var i = 0; i < steps; i++)
        result *= Math.pow(2, Math.floor((exponent + i) / steps));
    return result;
}
My frexp() uses a DataView to extract the exponent bits of the IEEE-754 float representation. If those bits are 0 then the value is subnormal. In that case I normalize it by multiplying with 2^64, getting the bits again, and subtracting 64. After applying the bias, the exponent is ready, and is used to get the mantissa by canceling out the exponent from the original value.
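For illustration, here is the exponent-field extraction on its own (a small sketch using the same DataView trick; 0.75 is 1.5 × 2^-1, so its biased exponent field is 1022, and frexp's convention of a mantissa in [0.5, 1) shifts the bias by one):

```javascript
// Peek at the raw IEEE-754 exponent field of a double.
// DataView uses big-endian by default, so sign and exponent are in the first word.
var data = new DataView(new ArrayBuffer(8));
data.setFloat64(0, 0.75);                          // 0.75 = 1.5 * 2^-1
var expBits = (data.getUint32(0) >>> 20) & 0x7FF;  // the 11 biased exponent bits
console.log(expBits);        // 1022 (biased: -1 + 1023)
console.log(expBits - 1022); // 0, frexp's exponent, since 0.75 = 0.75 * 2^0
```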

My ldexp() is pretty straightforward, except that it needs to be able to multiply by very large and very small numbers. The smallest positive float is 0.5 × 2^-1073, and to get its mantissa we need to multiply by 2^1073. That is larger than the largest float, 2^1023. By multiplying in steps we can deal with that. Three steps are needed for e.g. ldexp(5e-324, 1023+1074), which otherwise would result in Infinity.
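As a sanity check, the round trip can be verified on the edge cases (the two functions are repeated here so the snippet runs on its own):

```javascript
// Round trip: frexp() then ldexp() must reproduce every value exactly,
// including subnormals and the largest finite double.
function frexp(value) {
    if (value === 0) return [value, 0];
    var data = new DataView(new ArrayBuffer(8));
    data.setFloat64(0, value);
    var bits = (data.getUint32(0) >>> 20) & 0x7FF;
    if (bits === 0) { // denormal
        data.setFloat64(0, value * Math.pow(2, 64)); // exp + 64
        bits = ((data.getUint32(0) >>> 20) & 0x7FF) - 64;
    }
    var exponent = bits - 1022;
    var mantissa = ldexp(value, -exponent);
    return [mantissa, exponent];
}

function ldexp(mantissa, exponent) {
    var steps = Math.min(3, Math.ceil(Math.abs(exponent) / 1023));
    var result = mantissa;
    for (var i = 0; i < steps; i++)
        result *= Math.pow(2, Math.floor((exponent + i) / steps));
    return result;
}

[5e-324, -0.1, 1.5, Number.MAX_VALUE].forEach(function (value) {
    var parts = frexp(value);
    console.log(ldexp(parts[0], parts[1]) === value); // true for each value
});
```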

So there you have it. Hope it's useful to someone.

Correction: The code I originally posted here for ldexp() still had a bug, it did not test for too small exponents. I fixed it above, and updated the JSFiddle, too. Also, Nicolas Cellier noticed other rounding and underflow problems, his suggestions for ldexp() are now used above.

by Vanessa ( at September 08, 2021 06:55 PM

August 31, 2021

Vanessa Freudenberg

Frontend-only Multi-Player. Unlimited Bandwidth. Or: What is Croquet, really?

A multi-player web app needs a backend, right?

What if I told you, it doesn’t?

Read on for how Croquet gets rid of servers. No, really.

Instantaneous Shared Experiences is how we describe Croquet on our website. And while that excellently describes What Croquet does, as Croquet's Chief Architect, I wanted to share a bit about How we do that.
So I wrote a Twitter thread. Here it is in blog form, slightly extended.

Click the animation above if it does not play automatically

Croquet lets you build completely client-side multi-user web apps.

Read that again.


No I’m not kidding. I built it, I know it works. 😁 

Croquet apps run completely client-side:
  • can be hosted as a static web site
  • no server-side code needed
  • no networking code needed 
Croquet is literally virtualizing the server: Instead of running code on a server (or in a serverless function) we run it as a virtual machine (VM) on each client. 
Croquet carefully controls the inputs to these identical VMs so they execute in sync. 

The input events for the shared VMs are routed via Croquet’s global fleet of “reflector” servers. A reflector just bounces events from one user to all users. There’s no computation involved. Reflectors are small and fast. We have them around the globe, and will have many more with extremely low latency soon.

Crucially, the reflector guarantees event order and timing so every VM processes the same event at the same time. (That’s what the animation on top shows.)
Croquet reflectors also send out a continuous heartbeat to keep the VMs running, and in sync, without user input. 

Since the “server” is really a VM running on the client, you get unlimited bandwidth from that “server” to the client.


In fact, you can directly “reach into” the replicated VM to pull out any data for rendering. It’s just a pointer 🤷🏻‍♀️ 

Croquet also snapshots these VMs, encrypts the snapshots, and uploads them to a server until the next time they are needed.
The encryption key never leaves the client. Our servers cannot peek into the snapshots. They can’t peek into events either, since those are fully end-to-end encrypted, too. 

And yes this really works. In a web browser. In your browser, in fact. Try this:
That’s WideWideWorld, an editable voxel world with hundreds of AIs running around, and a water simulation. Scan the QR code in the lower left with your phone (or share the session URL). Both clients will be in sync. 

Of course, to build something like WideWideWorld on top of Croquet you’ll need to know how to build a game in the first place. This prototype is by our own Brian Upton (whom you might recognize from Rainbow Six & Ghost Recon). He joined us because no other tech would allow him to build this. 

Croquet is especially good for:
  • multi-user AR / VR / WebXR
  • multi-player games
  • collaborative design and construction
  • synchronized media
  • shared simulations
  • quick prototyping of shared experiences
These are hard to build, and some of the really hard parts are always the networking and server design, implementation, deployment, and scaling. 

Croquet eliminates this headache.

By the way, we are not literally running a Docker image in the browser. Although that would be a fun exercise – theoretically, Croquet can run anything that is deterministic and serializable.

No, it’s just JavaScript. The ECMAScript standard provides the semantic guarantees we need to achieve bit-identical execution across all platforms. Even WASM can be used with some care.

Plus our API is much simpler than traditional client/server code. Your shared server “VM” can be a single class, and you interact with it simply using publish/subscribe!
class Counter extends Croquet.Model {
    init() {
        this.n = 0;
        this.subscribe("counter", "set", this.set);
        this.tick();
    }
    set(value) {
        this.n = value;
        this.publish("counter", "changed");
    }
    tick() {
        this.n++;
        this.publish("counter", "changed");
        this.future(1000).tick();
    }
}
The value of n will be synchronized between all VMs
even as it is incrementing automatically.
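To trace what such a model does outside the Croquet runtime, one can substitute a minimal stand-in for Croquet.Model (purely illustrative; StubModel is not the real API, which also handles replication and scheduling):

```javascript
// Toy stand-in for Croquet.Model, just enough to trace the Counter locally.
class StubModel {
    constructor() { this.events = []; this.handlers = {}; }
    subscribe(scope, event, handler) {
        this.handlers[scope + ":" + event] = handler.bind(this);
    }
    publish(scope, event) { this.events.push(scope + ":" + event); }
    future(ms) { return { tick: function () {} }; } // ignore scheduling here
}

class Counter extends StubModel {
    init() {
        this.n = 0;
        this.subscribe("counter", "set", this.set);
        this.tick();
    }
    set(value) { this.n = value; this.publish("counter", "changed"); }
    tick() { this.n++; this.publish("counter", "changed"); this.future(1000).tick(); }
}

var counter = new Counter();
counter.init();                          // n becomes 1, "changed" published
counter.handlers["counter:set"](41);     // simulate a "set" event from a view
console.log(counter.n);                  // 41
```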

We have many simple tutorials to get you started: 
(also: hiring a #DevRel engineer to write more) 

We are working on higher-level frameworks, too:
VDOM is already available, and WorldCore soon! See

By the way, if you heard of Croquet about 15 years ago – yes, that was us too, in particular our founder David Smith working with Alan Kay.  

That system was written in Squeak Smalltalk and had a fancy 3D UI.
Our new JS system has no UI, so it's good for many kinds of apps.

JavaScript has become incredibly rich, but it has no simple way of communicating with the person on the phone next to you.

Croquet provides this missing infrastructure for the web.

Try it! No servers needed.

(if you 💜 servers come work with us though)
End-Of-Thread 🧵

Compiled from a Twitter Thread I posted the day after Croquet's public launch.

by Vanessa ( at August 31, 2021 03:41 PM

January 21, 2021

Historical - Vanessa Freudenberg

Emulating Smalltalk-76

If you got as excited as me about Dan Ingalls' live Smalltalk-76 demo on an actual 1970's Xerox Alto, you may have wanted to try it yourself. 

For one, you could try my Smalltalk-78 VM. Smalltalk-78 is a leaner version of Smalltalk-76, but very much identical in syntax and semantics. 

It is also possible to run the full Smalltalk-76 environment, and here is how:

First, you need an emulator for the Alto computer. Ken Shirriff posted a nice piece on how to run ContrAlto on Windows. It is written in C#, and I got it to work on my Mac using Mono. So here's a step-by-step:

  1. Install Mono from
  2. Download from
  3. Download this Smalltalk-76 disk image:
  4. Unzip both downloads in the same folder.
  5. In a terminal, change to the ContrAlto directory and run mono Contralto.exe. This opens the Alto screen in a new window, and the emulator control in the terminal. 
  6. (if you get an error about a missing SDL library at this point, install that via homebrew using brew install sdl2)
  7. At the emulator's ">" prompt, type load disk 0 xmsmall.dsk and then start:
    $ mono Contralto.exe
    ContrAlto v1.2.2.0 (c) 2015-2017 Living Computers: Museum+Labs.
    Bug reports to

    You are at the ContrAlto console.  Type 'show commands' to see
    a list of possible commands, and hit Tab to see possible command completions.
    >load disk 0 xmsmall.dsk
    Drive 0 loaded.
    Alto started.

  8. In the Alto screen window, type resume xmsmall.boot to start up Smalltalk-76:
  9. And now you should see Smalltalk-76 running!
  10. There is no explicit "save" option. The OOZE object swapping system continuously writes objects to the disk. Use "quit" from the main menu to flush all objects back to disk.
Note that you will need a 3-button mouse to properly interact with the system. On my MacBook, I installed MagicPrefs to emulate a third mouse button. After installing, in MagicPrefs' "MacBook Trackpad" section, for "Three Finger Click" choose "Middle Click".

You will also need some keyboard shortcuts to access the special characters in Smalltalk-76:
ctrl A             ≤ (less or equal)
ctrl B             bold
ctrl C             user interrupt
ctrl F             ≡ (identical)
ctrl G             ◦ (index operator)
ctrl I             italic
ctrl N             ≠ (not equal)
ctrl O             ↪ (quote)
ctrl Q             ⇑ (return)
ctrl R             ≥ (greater or equal)
ctrl S             's (eval operator)
ctrl T             ┗  (prompt)
ctrl U             ¬ (unary minus)
ctrl X             clear emphasis
ctrl /             ⇒ (if then)
ctrl =             ≡ (identical)
ctrl shift =       ≠ (not equal)
ctrl \             ≠ (not equal)
ctrl ]             ⌾ (create point)
ctrl [             ◦ (index operator)
ctrl :             ⦂ (open colon)
ctrl '             ↪ (literal quote)
ctrl <             ≤ (less or equal)
ctrl >             ≥ (greater or equal)
shift -            ¬ (unary minus)
cursor down        doit (in dialog view)
To learn Smalltalk-76, the User Manual is a good starting point:!1.pdf

Have fun!

by Vanessa ( at January 21, 2021 12:47 AM

September 20, 2020

David Smith

Alan Kay Croquet Project Demo 2003

I created an up-resed version of the 2003 demo that Alan Kay and I did at the O'Reilly ETech conference. It really holds up nicely.

by David A. Smith ( at September 20, 2020 10:41 PM

June 26, 2020

Historical - Vanessa Freudenberg

SqueakJS: A Lively Squeak VM

I'm proud to announce SqueakJS, a new Squeak VM that runs on Javascript:

It was inspired by Dan's JSqueak/Potato VM for Java, and similarly only runs the old Squeak 2.2 mini.image for now. But I developed it inside the Lively Kernel, which allowed me to make a nice UI to look inside the VM (in addition to all the Lively tools):

It represents regular Squeak objects as Javascript objects with direct object references. SmallIntegers are represented as Javascript numbers, there is no need for tagging. Instance variables and indexable fields are held in a single array named "pointers". Word and byte binary objects store their data in arrays named "bytes" or "words". CompiledMethod instances have both "pointers" and "bytes". Float instances are not stored as two words as in Squeak, but have a single "float" property that stores the actual number (and the words are generated on-the-fly when needed).
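For illustration, the representations described above can be pictured as plain JavaScript literals (a simplified sketch with made-up field values and class tags; the VM's actual objects carry more bookkeeping):

```javascript
// Sketch of the object formats described above (field names from the post,
// contents invented for illustration):
var point = {
    sqClass: "Point",          // hypothetical class tag for this sketch
    pointers: [3, 4],          // instance variables hold direct references
};
var label = {
    sqClass: "String",
    bytes: [72, 105],          // byte-indexable data: "Hi"
};
var pi = {
    sqClass: "Float",
    float: 3.14159,            // a single JS number instead of two words
};
// SmallIntegers are just JS numbers, no tagging needed:
var sum = point.pointers[0] + point.pointers[1];
console.log(sum); // 7
```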

For garbage collection, I came up with a hybrid scheme: the bulk of the work is delegated to the Javascript garbage collector. Only in relatively rare circumstances is a "manual" garbage collection needed. This hybrid GC is a semi-space GC with an old space and a new space. Old space is a linked list of objects, but newly allocated objects are not added to the list, yet. Therefore, unreferenced new objects will be automatically garbage-collected by Javascript. This is like Squeak's incremental GC, which only looks at objects in new space. The full GC is a regular mark-and-sweep: it's marking all reachable objects (old and new), then unmarked old objects get removed (a very cheap operation in a linked list), and new objects (identified by their missing link) are added to the old-space list. One nice feature of this scheme is that its implementation does not need weak references, which Javascript currently does not support.
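The old-space bookkeeping can be sketched as a toy model (an illustration of the scheme described above, not the actual SqueakJS code; objects here are plain JS records with a pointers array):

```javascript
// Toy model of the hybrid GC: old space is a linked list through .nextObject;
// new objects don't have that property yet, so the plain JS garbage collector
// reclaims them automatically while they are unreferenced.
function fullGC(firstOldObject, roots) {
    // Mark: everything reachable from the roots.
    var marked = new Set();
    var todo = roots.slice();
    while (todo.length > 0) {
        var next = todo.pop();
        if (marked.has(next)) continue;
        marked.add(next);
        (next.pointers || []).forEach(function (ref) { todo.push(ref); });
    }
    // Sweep: unlink unmarked old objects (cheap in a linked list).
    var prev = firstOldObject;
    var cur = prev.nextObject;
    while (cur) {
        if (marked.has(cur)) prev = cur;
        else prev.nextObject = cur.nextObject;
        cur = cur.nextObject;
    }
    // Tenure: surviving new objects (identified by the missing link) join old space.
    marked.forEach(function (obj) {
        if (!("nextObject" in obj)) {
            obj.nextObject = undefined;
            prev.nextObject = obj;
            prev = obj;
        }
    });
}

// first -> b -> a is old space; n is new and only reachable from a.
var n = { pointers: [] };                         // new object: no .nextObject
var a = { pointers: [n], nextObject: undefined };
var b = { pointers: [], nextObject: a };          // old and unreferenced: garbage
var first = { pointers: [a], nextObject: b };     // assume the first object is a root
fullGC(first, [first]);
console.log(first.nextObject === a, a.nextObject === n); // true true
```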

This scheme also trivially supports object enumeration (Squeak's nextObject/nextInstance primitives): If the object is old, the next object is just the next link in the list. Otherwise, if there are new objects (newSpaceCount > 0) a GC is performed, which creates the next object link. But if newSpaceCount is 0, then this was the last object, and we're done.

The UI for now copies the Squeak display bitmap pixel-by-pixel to a typed array and shows it on the HTML 2D canvas using putImageData(). Clipboard copying injects a synthetic CMD-C keyboard event into the VM, then runs the interpreter until it has executed the clipboard primitive in response, then answers that string. This is because the web browser only allows clipboard access inside the copy/paste event handlers. You can drag an image file from your disk into the browser window to load it.

Besides running it on your desktop, you can install it as offline web app on an iPad:

On the iPad there is neither right-click nor command keys, but the menu is available on the inside of the flop-out scrollbars. It needs a fairly recent browser, too - it works in iOS 7, but apparently not in older versions. On Android it works in Chrome 31, but not quite as well (for example, the onscreen keyboard does not come up on a Galaxy Note tablet).

Go to the project page to try it yourself. The sources are on GitHub, and contributions are very welcome.

Have a great Christmas!

by Vanessa ( at June 26, 2020 09:08 PM

June 25, 2019

Nikolay Suslov

Krestianstvo Luminary for Open Croquet architecture and Virtual World Framework in peer-to-peer Web

Everyone who is familiar with the Croquet architecture is eagerly anticipating the updates to the Open Croquet architecture coming in Croquet V from David A. Smith and Croquet Studios!

However, while working on a project that is heavily based on the Virtual World Framework (which contains elements of the Open Croquet architecture), I started revising the current Reflector server.

Let me introduce the ideas and an early prototype of Krestianstvo Luminary for the Open Croquet architecture and the Virtual World Framework. 
Krestianstvo Luminary could potentially replace the Reflector server in favour of the offline-first Gun DB, a pure distributed storage system. Instead of 'reflecting' messages stamped with the centralised Croquet 'time now', it 'shines' time on every connected node using Gun's Hypothetical Amnesia Machine (HAM), running in the peer-to-peer Web. It also secures all external message streams by using peer-to-peer identities and the SEA cryptographic library for Gun DB; moreover, Luminary can run on the AXE blockchain.
Krestianstvo Luminary simply transforms the only server-side part of Croquet, the Reflector (taken from the VWF version), into a pure peer-to-peer application running in clients' web browsers.

For those who are not familiar with the Open Croquet architecture, here are the key principles behind it in simple words. 

Croquet Architecture

Croquet introduced the notion of virtual time for decentralised computations. Objects are thought of as streams of messages, which leads to deterministic computations on every connected node in a decentralised network. All computations are done on every node by itself, while interpreting an internal queue of messages which is not replicated to the network. These queues are synchronised by external heartbeat messages coming from the Reflector, a tiny server. Any messages a node generates itself that should be distributed to other nodes are marked as external. They are explicitly routed to the Reflector, where they are stamped with the Reflector's 'time now' and returned to the node itself and all other nodes on the network. 
The Reflector is used not only for sending heartbeat messages and stamping external messages, but also for holding the list of connected clients and the list of running virtual world instances, and for bootstrapping new client connections.


So, in the Croquet architecture for decentralised networks, the Reflector, while very tiny or even a microservice, remains a server. 
It uses WebSockets for coordinating clients and world instances, providing 'time now' to the clients, and reflecting external messages.

Let's look at how it works in the Virtual World Framework (VWF). I will use the available open-source code from VWF, which I am using in my project.

This is the function returning 'time now' on the Reflector. The time is taken from the machine running the Reflector server:
(server code from lib/reflector.js)

function GetNow( ) {
    return new Date( ).getTime( ) / 1000.0;
}

It is then used to make a stamp for a virtual world instance:

return ( GetNow( ) - this.start_time ) * this.rate

The Reflector sends these time stamps over WebSockets. On the client side, VWF has a method for dispatching them: 
(client code from public/vwf.js)

socket.on( "message", function( message ) {
  var fields = message;
  fields.time = Number( fields.time );
  fields.origin = "reflector";
  queue.insert( fields, !fields.action );

In the send and respond methods, clients use the WebSocket to send external messages back to the Reflector:
   var message = JSON.stringify( fields );
   socket.send( message );


Now, let's look at how Krestianstvo Luminary can identically replace the Reflector server.

First of all, clients never use WebSockets directly from the application itself for sending or receiving messages. Instead, Gun DB provides that functionality internally. All operations which previously relied on a WebSocket connection are replaced by subscribing to updates and changes on Gun DB nodes and properties.
So instances and clients are just Gun DB nodes, available to all connected peers. In this scheme, the required Reflector application logic moves from the server to the clients, as every client at any moment in time can get up-to-date information about the instance it is connected to, the clients in that instance, and so on, just by requesting a node from Gun DB.

Now, about time.

Instead of using the machine's new Date().getTime(), Krestianstvo Luminary uses state from Gun's Hypothetical Amnesia Machine, which combines timestamps, vector clocks, and a conflict resolution algorithm. Every property written to a Gun node is stamped with a HAM state. This state is identical for all peers, which means we can read it on any client.
Taking into consideration that Gun DB guarantees that every change on every node or property will be delivered in the right order to all peers, we can create a heartbeat node and subscribe peers to its updates.

Here is the code for creating a heartbeat for VWF:

Gun.chain.heartbeat = function (time, rate) {
    // our gun instance
    var gun = this;
    gun.put({
        'start_time': 'start_time',
        'rate': 1
    }).once(function (res) {
        // function to start the timer
        setInterval(function () {
            let message = {
                parameters: [],
                time: 'tick'
            };
            // write the message to the 'tick' property every 50 ms
            gun.get('tick').put(JSON.stringify(message));
        }, 50);
    });
    // return gun so we can chain other methods off of it
    return gun;
}
The client which starts first, or creates a new virtual world instance, creates the heartbeat node for that instance and runs a metronome (that part could also run on a Gun DB instance somewhere on the hosting server, for anytime availability):

let instance = _LCSDB.get(vwf.namespace_);
instance.get('heartbeat').put({ tick: "{}" }).heartbeat(0.0, 1);

So every 50 ms this client writes the message content to the property 'tick', changing it, so Gun's HAM advances the state for this property, stamping it with a new unique value from which the Croquet time will be calculated later.
The start time will be the HAM state value of the 'start_time' property of the heartbeat node. Please notice that the actual Croquet timestamp is not calculated here, as it was in the Reflector server. The timestamp used for Croquet's internal queue of messages is calculated when the VWF client reads 'tick' in its main application.

Here is a simplified core version of dispatching 'tick' in the VWF client's main app, just to get the idea (full code in public/vwf.js, links below):

let instance = _LCSDB.get(vwf.namespace_);
instance.get('heartbeat').on(function (res) {
    if (res.tick) {
        let msg = self.stamp(res, start_time, rate);
        queue.insert(msg, !msg.action);
    }
});

this.stamp = function(source, start_time, rate) {
            let message = JSON.parse(source.tick);
            message.state =, 'tick');
            message.start_time = start_time; //, 'start_time');
            message.rate = rate; //source.rate;
            var time = ((message.state - message.start_time)*message.rate)/1000;
            if (message.action == 'setState'){
                time = ((_app.reflector.setStateTime - message.start_time)*message.rate)/1000;
            }
            message.time = Number( time );
            message.origin = "reflector";
            return message;
}
The main point here is the calculation of Croquet time using Gun's HAM state, node, property ):

for message:

message.state =, 'tick'); // time of updating tick
message.start_time =, 'start_time'); // start time of the instance heartbeat
message.rate = source.rate;
var time = ((message.state - message.start_time)*message.rate)/1000;

So all peers will calculate exactly the same Croquet time on getting an update from Gun DB, regardless of when they receive this update (network delays, etc.).
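In other words, the stamp is a pure function of the shared HAM values, so every peer evaluates the same arithmetic (a sketch with illustrative numbers):

```javascript
// Each peer derives the Croquet time from the shared HAM states alone,
// independent of its local clock (the numeric values here are made up).
function croquetTime(hamState, startTime, rate) {
    return ((hamState - startTime) * rate) / 1000;
}
var peerA = croquetTime(1561460000500, 1561460000000, 1);
var peerB = croquetTime(1561460000500, 1561460000000, 1); // same inputs, any machine
console.log(peerA === peerB, peerA); // true 0.5
```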

As you can imagine, sending an external message is then as simple as a peer writing the new message's content to the instance heartbeat. All connected peers, and the sending peer itself, will receive that message stamped with Croquet time, since they are subscribed to changes on the heartbeat node (see the instance.get('heartbeat').on() definition above).


Actually that’s it!


  • the Reflector server is no longer required for running virtual worlds (any existing Gun DB instance on the network fits, and it can know nothing about Croquet or the clients)
  • clients, world instances, and connection logic are held by a distributed DB
  • messages are stamped by the clients themselves using Gun's HAM
  • one dedicated peer produces empty metronome messages to move time forward (it can be anywhere)

All the advantages that Gun DB provides can be applied inside the Croquet architecture. One scenario is the use of Gun's Timegraph, which would allow storing and retrieving the history of messages for recording and later replay. Using the SEA (Security, Encryption, & Authorization) library would allow creating highly secure instance heartbeats using peer-to-peer identities, deployed anywhere and available anytime on the AXE blockchain.


To make a fully functional prototype, there are still issues in porting the Reflector application logic to Gun DB's functional-reactive architecture. These concern the procedure of connecting clients to a running instance, as it involves getting and setting the instance state and pending messages, and then replaying them on newly connected peers. But none of that is critical, as it does not affect the main idea behind Krestianstvo Luminary.
There are performance issues, as Gun DB uses the RAD storage adapter, but configuring several RAD options can help, particularly opt.chunk and opt.until (due to RAD or JSON parse time for each chunk).

Source code

The raw code is available in the GitHub repository under the branch 'luminary'.

The branch 'luminary-partial' contains a working prototype of a partial Luminary, where one master client is chosen for the reflector logic. It uses Gun.state() for stamping messages, as was done in the original reflector app, and then distributes them as updates to the other peers through Gun DB.

Thanks for reading, and I will be glad if you share your comments and visions on this.

Nikolai Suslov

by Suslov Nikolay ( at June 25, 2019 10:54 AM

June 11, 2019

David Smith

Croquet Multi-user Demonstration

by David A. Smith ( at June 11, 2019 03:39 AM

May 20, 2019

David Smith

Croquet Lives Again

We have been working hard on the latest, greatest version of Croquet. Next week we will be rolling it out for our friends to play with. 

David A. Smith

Twitter: @Croquet
Skype: inventthefuture

I am a part of all that I have met; 
Yet all experience is an arch wherethro' 
Gleams that untravell'd world whose margin fades 
For ever and forever when I move. 

by David A. Smith ( at May 20, 2019 11:16 PM

April 02, 2019

David Smith


Croquet V embedded in my blog.

by David A. Smith ( at April 02, 2019 04:40 PM

January 15, 2019

David Smith

Why AR Will Win - And Why it Matters How it Will Win

Why AR Will Win
And Why it Matters How it Will Win

David A Smith
December 28, 2018
Originally printed in Michael Swaine's PragPub.

“man is much more than a tool builder … he is an inventor of universes.”
Alan Kay - “A Personal Computer for Children of All Ages”

There is a race between the extraordinary and compelling Augmented Realities – and the necessary and powerful Augmented Human. It is essential that these be in balance or we will become a slave to the increasingly virtual world we live in rather than the master of it. We consider here the idea of a truly Augmented Reality – a digital extension to the world in which we will live all our future waking hours. A far more important concept is that of the Augmented Human – our next steps in evolution that will allow us to understand and control this Augmented Reality and, in turn, the universe. The Augmented Human is a better you.

What is AR
I consider VR (virtual reality) to be a full subset of AR (augmented reality); a mode, if you like. I deeply understand and appreciate the difference between the two, but I think it is as irrelevant as trying to draw a line between a smart phone and an MP3 player.

I suppose I should give you my definition of what Augmented Reality will be. It is concise.
AR will be everything your smart phone is today, but it will be visible every waking second, displaying the world as a living web browser.

This last is important, as it subsumes everything else. Think about how you use the web on a daily, or sometimes even minute by minute basis. Why is this important? Because AR will be like that but so much more. It will be amazingly addictive, and as we all know, addicting products are by far the best market – just ask the cigarette manufacturers.

Imagine an interesting opportunity for distraction every step you take walking down a city street. I am not just talking about restaurant menus here. You can query a quaint hotel you are standing beside and view a list of the famous people that lived … and died there. Interesting that it was once a brothel, even more interesting how that movie star ended up in a closet. Who was the architect, what other buildings did he design? When did he die and what from? What was the Spanish Flu, anyway?
Everything you look at or is near you becomes a trigger for a wonderful exploration of streams of information and ideas. It will be useful in other ways too - you can look down through the pavement and see the subway lines and watch the trains move down the tracks. SimCity comes to life and you and everyone around you is a Sim. You can query for the closest subway stop, which cars are most crowded, are there any sexual predators on the car you are planning to get on? No problem, you have set your system to automatically tag people like that – when you do see him, he is painted a bright red and has a neon sign floating over his head. You can even select his history.

You walk past a car dealership and see a new sedan in the window. You pause to step into a fully immersive (VR) virtual car seat to explore its interior and try out a few features (without leaving the sidewalk). Oops, you are alerted that your red painted guy has just stepped within 100 feet of you. 

Time to move on.

Walking through a crowd, you can see Facebook icons pop over the heads of everyone that is a friend of yours on that service. I am talking about what is technologically feasible, not necessarily what society will approve of, but who knows. You see a good-looking person walking your way – cool, you can see that they are only one degree of separation from you on LinkedIn. You invite them into your network.

One of the creatures from a game you are playing appears from behind a car. It sees you and tries to escape, but you corner and capture it.

You get an alert that your video conference is about to start. Walk into the nearest Starbucks – immediately pick up the coffee you already ordered and paid for and sit down at an empty table. The other participants soon join you. Your AR device captures the position of your head, your face and facial expressions and of course where you are looking. Your colleagues look almost as live as if they were sitting next to you. No one else in the coffee shop even notices or cares that you are in a conversation with ghosts. One of the conference participants drops a document onto the table. It is a 3D chart of the projected growth of the new IoT toaster. The new toaster has just integrated Alexa, so it can carry on a conversation with you along with the rest of your other kitchen appliances. You have never met one of your colleagues in person and are not aware that he has removed a mole from his virtual cheek and has a virtual nose job.

The hot new app allows you to transform any of your friends into Hollywood characters, including their voices. You have decided that one of your co-workers will be Humphrey Bogart, including hat and trench coat. Your boss is Hannibal Lecter from The Silence of the Lambs – it makes everything he says so much more interesting. The receptionist becomes Rhett Butler – Clark Gable with a sort of southern accent.

Most of these capabilities already exist in some form – probably on your phone. All of them will exist soon. They are just a bit inconvenient to access today. That is why you don’t use them every second. You can’t. Yet.

That sounds like a wonderful world. Why don’t we have it now? When can we have it?
Today, AR and VR are not great experiences. They are terrible. Current devices are complex, hard to set up, and even harder to use. They are ugly and heavy, and don't really do much. They do sort of show what is possible once you finally get them working, but at the same time they highlight just how bad the current state of the art is. VR games demonstrate the potential of the new medium, but even there, it is difficult to spend the same amount of time and attention that you would bring to a screen-based version.

Friction is the barrier that your product must overcome to satisfy the customer.

In marketing, the reverse of Friction is, oddly enough, Stickiness. Today, a smart phone is incredibly sticky. AR and VR have a great deal of friction to overcome.

Compare AR and VR to the smart phone in your pocket. AR fails in so many ways.

Phones are beautiful, sleek, elegant – AR makes you look like an alien Bono with an umbilical cord. VR is worse, as not only can’t you see the world, the world sees you with a literal box on your head.
Phones are invitingly seductive and like to be touched. The user strokes their finger across the screen, creating wonderful ripples in their personal probability pond. Interestingly, phones are quite useless for fine manipulation. It is very hard to edit a document on the phone – almost impossible to select between letters in text. That fine manipulation is very important, as we will see later. AR and VR have pointing rods with almost no haptic feedback at all, aside from perhaps vibrating. They have none of the compelling qualities of the phone interface and are not even particularly good at gross manipulation. I know some people do amazing things in VR. Frankly, I am amazed that they can do that. I can’t. Picasso could draw amazing things in the air too. But he was Picasso.

We need a new way of interacting with this new idea space in the same way that the mouse was invented to be able to interact with our virtual desktop.

Phones are relatively inexpensive and self-contained. A great phone costs almost nothing, a high end one is still amazingly cheap for what you get.  Consider the power it provides when you have it in your hand. You don’t need to plug anything into it or plug it into anything to use it. It is a complete and efficient distraction. AR and VR usually require a wire with a hefty, expensive computer on the other end. There are cheaper alternatives, but there is a significant gap in capabilities.

Phones have so many interesting things you can do with them: communicating with your friends, watching videos, playing games. VR and AR are still emerging markets based upon some very flawed hardware. Developers consider the size of the target market for their apps, and though the phone today boasts billions of users, the user base for VR and AR worlds is in the relatively inactive millions.

Phones are instantly available and invisible. Your phone is in your pocket or purse. It alerts you that it wants attention and you have it turned on and in front of your face in just a few seconds. Most people do not remember taking their phone out to use it. It just magically appears in their hands and takes over their focus. You never think about using the phone when you are using the phone – you see through it to the rich world on the other side of the screen. On the other hand, you have no choice but to think about AR and VR when you are attempting to use it, from the time you set it up, to when you are engaged and immersed within it, to when you try to do anything within it. It forces you to see the device as much as the world it is trying to present.

This last is very important. In a way, phones own you more than you own them. They constantly cry for your attention, and when they have it, they study you to determine how best to continue to keep you occupied. The phone is literally using the data it gets from you to figure out how to make itself even more addicting to you. Now that is a great drug.

If phones are such a powerful force that we can’t resist them or maybe even live without them, why does AR matter?

Because phones only OWN you. AR is everything a phone is but will BECOME you.

A Better You
AR has some serious challenges but not one of them is insurmountable. What would AR devices be like for it to offer a serious challenge to the phone?

Most of all, AR must be friction free. It must be and will be easier to use than a phone. You just put it on your head and leave it there – a lightweight pair of glasses that look just like those you might be wearing today. You no longer search all your pockets to find it and then swipe your fingerprint or enter a code or stare deep into the eyes of the phone – the camera – hoping you look somehow like you did when you got to know it.  AR knows when you want it. It hears you when you speak. It knows what you are looking at. So be good for goodness sake. It will seem like you just think something, and it immediately appears. That is an invisible interface. Your phone is a boat anchor compared to AR done right.

AR must do everything a phone does now. AR devices will be phones – or at least a seamless extension of them. As Will Wright pointed out, technology is an extension of the human body. If somebody hits your car, you don’t say that “my car was hit”. You say “someone hit me while I was driving”. The car becomes “me”, an extension of your body. Your phone is an extension and a reflection of you in the same way, but AR is far more intimate and compelling. It is you.

AR must look cool. Or it must be worn by someone that is cool so that you think it is OK to wear it too. We are so shallow. It also needs to be lightweight, as well as cool (as in temperature).

noun: synergism
1.    the interaction or cooperation of two or more organizations, substances, or other agents to produce a combined effect greater than the sum of their separate effects.

When AR is really great, and it will be, you are going to put on your AR device in the morning and wear it all day long. It will be like hearing aids or dentures – or glasses. Indeed, normal glasses (and in-the-eye glasses from cataract surgery) augment your vision so that you can see the real world. Without them, you are blind, if not totally, then to a debilitating degree. The same thing is true of a light bulb in a closed room. Nothing is visible until you turn the light on. As McLuhan says: “a light bulb creates an environment by its mere presence”. Does the room exist without the bulb? Yes, but you could not see it, and would have a great deal of trouble interacting with it. But you never think about technologies like a light switch – they are friction free and have disappeared in a sense, though indeed they redefined what you are at a fundamental level. The digital world already exists – and is as much a part of our existence as the physical, but we are quite blind to it. We see glimpses of it in our offices and labs every once in a while (at least I get to), and there is no question that the virtual light will soon be turned on. And just like with our glasses and light bulbs, the medium will seem to disappear, but in fact will redefine what we are.

This thing is far stickier than a phone. It is going to be the fusion of man and the Internet. The line between the two is about to be erased. And it is going to be glorious – and terrible.

The Bus or the Bulldozer
Though it is rare that you use a computer to think, you never use a phone that way. You are a consumer of information, not a creator. Part of this is due to the phone’s limitations. You simply can’t be truly creative as a human today without fine manipulation. The phone interface is designed around gross manipulation because touch screens don’t allow you to easily specify a touch point – it is a touch area. This means the interaction targets on the phone must be larger to accommodate the gross manipulation that you can do with it. Without fine manipulation, it is very difficult to create, but virtually impossible to edit anything interesting. The vast majority of effort in creating anything interesting is in the editing, re-working, re-thinking. Things that you just can’t do with a phone. This doesn’t mean it is impossible – we are often impressed by some beautiful work that was created completely on a phone. Impressed not just because the creative effort is better – but because it could be created and completed that way at all.

That limitation in creativity is partially due to the inherent limitations of the design of the smart phone. It has a small screen and you have big fingers. But the display itself is small, which means that you don’t really have the room to express yourself if you wanted to. Real artists need room.

What is quite interesting is that this limitation became something of a feature – as far as the phone manufacturers saw it. The phone is not a very good creative device. However, it is an amazing consumer device. Consuming has two parts: select what to consume, then consume it. We don’t need a particularly fine interface for that. More problematic was Apple Computer’s decision to enshrine this “feature” and to not allow any kind of programming environment on their phones at all. In fact, Scratch – a block programming language designed for children that works great on a phone – was blocked from the App Store for many years until recently. This was an attempt to enforce the consumer nature of the device, and it succeeded.

Phones are designed around the idea that you are a consumer of information and ideas. You can control where you go, but you can’t go anywhere that the phone hasn’t already anticipated. This is like getting on a bus. If you already know where you want to go, this is a convenient and fast way to travel there.

However, you can’t take the bus to someplace that is new and unexplored. For that, you need a more refined and powerful set of tools. You are better off with a bulldozer, something that is responsive to new ideas and lets you go anywhere you want. There are no barriers, except your imagination.

That is the true promise of AR – assuming it can escape the challenge of gross manipulation versus fine manipulation.

Why is it necessary to use AR and VR as creative platforms? How do we ensure that the human is empowered to extend, explore and share their ideas and their worlds? How do we amplify the properties that define humans as creative tool builders?

First, we can’t really understand how AR should work until we can create and modify the AR system from within it. It must be a creative amplifier. Our creations extend the capabilities of the system, which then allows us to create something even better. The problem of AR today is that it is being built from the outside – like a ship in a bottle, with the intent that it can then be launched and won’t sink. It might be beautiful indeed, but you are coming at it from the wrong direction. Instead, we need to launch a raft with just enough capabilities so that we can design a ship as we learn to navigate the ocean. We will design the right kind of ship because it is the one we will be living on. We need to invent the VR and AR interfaces from within them and as we use them. This is not a new idea – it was the foundation of what Doug Engelbart demonstrated 50 years ago: create tools that allow you to create even better tools. Xerox PARC created the modern UI for computers and phones from within Smalltalk. Smalltalk allows the developer to reinvent every aspect of the system, from the foundational operating environment underneath to the actual widgets that the user can manipulate. Almost everything you use on a modern computer or phone was invented at Xerox in this way. I believe it is the only way to create a truly new and powerful platform. Will Wright describes it this way:
“We now, with these little micro-worlds, have the ability to basically externalize what is in our imagination and share it with other people. You know, it used to be that you had to have a very rich skill set, like you had to be a fine artist to do that – you know, to paint something in your imagination and then share it with other people. But now with these tools, the creative leverage they give us, average, casual game players have the ability to externalize, create things out of their imagination, share it with other players, and actually have these shared imaginary worlds. And so I think that’s one of the examples of the computer giving creative leverage – a creative amplifier.”

Second, we need a vehicle to amplify our intelligence, both individually and collectively. Humanity is in a race with itself to determine its fate. We may be losing this race. We need to provide the right side with an unfair advantage that it very seriously lacks today. We may not be able to get there from here, at least not directly. Doug Engelbart demonstrated a new starting line in this race 50 years ago. He not only showed many of the foundations of human computer interface that we now take for granted, he also provided us with a new perspective on what the computer could really mean as an extension of a human. This was not an accident, but a pre-planned intentional act to redefine the nature of how the computer and human engage to become something greater than their parts. A synergism that defines the symbiont.

Even more spectacular was how he harnessed the newly empowered symbiont to include other humans, equally empowered with these technologies, so that they could collaborate to explore and create even greater things. His goal wasn’t to merely amplify the intelligence of a human – he intended to amplify the intelligence of humanity. For AR to truly shine, we must also look at it as a fundamentally disruptive event. AR isn’t a better phone, though the first popular versions of it will likely embrace that approach. They will be the first demonstration of what Augmented Reality has in store for us – for better and worse. We need something far more powerful – a tool that allows us to think, explore, and create new tools that amplify our intelligence even more. Engelbart’s tricycle is a great analogy to the problem. Extending the phone into AR, with its inherent limitations and biases, is like improving the capabilities of a tricycle. The new tricycle will be based on user feedback and will be a very compelling product, very stable and safe. It offers an awesome user experience, and is what everyone thinks they want. But that process will never result in a bicycle, a far more powerful product that dramatically multiplies its user’s speed. This is a disruptive product and requires what Clayton Christensen referred to as “discontinuous” innovation. Apple used to refer to the Macintosh as a Bicycle for the Mind – and indeed the Macintosh provided many of us with an incredible vehicle to construct new realities with. The Mac was NOT a better Apple ][. It was a very new thing. Our challenge is to ensure AR is done right.

Third, we need an intention amplifier. We need a new approach to user interaction in the 3D world that provides us with the free-form gross gestures that we have, but also the fine control that we get with the mouse, another Engelbart invention. The mouse was not an accidental design – Engelbart explored many alternatives for interfacing with the computer. The mouse was significantly better. The mouse amplifies the user’s actions while retaining control. Even on a large, multi-foot display like the one I am using, I can move my mouse from one side of the screen to the other and still be able to place the cursor between two letters to edit this document. The mouse amplified my reach but maintained the necessary fine control.

We have far better sensors than Engelbart had in his day. An equivalent to the mouse is certainly within reach. The body and hand are obvious targets for enhancement, but it is a mistake to simply project the body’s motion into the 3D world. Great user interfaces are invisible. The user thinks about what they want, and the necessary actions to accomplish it occur. You don’t think about using a mouse – you think about what you want to be true. The best interfaces for AR will be based upon eye-tracking and voice. Hands, finger motions and gestures will aid that interface in the way meta keys aid the keyboardist, but these will be exceptions.

Fourth, we need a control amplifier. Facebook and Google make money by aggregating large amounts of attention and selling it to the highest bidder. Further, they train, or allow others to train, the users to desire certain things. The main goal of machine learning on these platforms is to understand what people want and then provide it to them. Almost in real time, we see these machine learning systems exploiting a user’s weaknesses to train them in what they might desire. Manipulating human behavior is a game, and machine learning systems are extremely good at games. The alternative is for the machine learning systems to be harnessed to enable the individual user’s creativity and exploration. It should be a full partner in engaging, constructing and exploring new virtual universes. It needs to become another vehicle for fine manipulation by the user, an even sharper blade for our bulldozer, enabling us to drive anywhere we want.

We are in a race between the extraordinary, compelling and addicting augmented realities – and the necessary and powerful augmented human. It is essential that these be in balance, or humanity will become a slave to the increasingly virtual world we live in rather than the master of it. Augmented Reality is the place where we will live all our future waking hours. The “real” world as we know it will still be there; the digital world will co-exist and be mixed in with it, and be just as relevant as a light switch in a dark room is today. The digital world exists – you just can’t see it yet. The Augmented Human is our next step in evolution. We will understand, control, and extend this augmented reality and, in turn, the universe. We are about to turn on the light.

by David A. Smith ( at January 15, 2019 03:22 AM

September 01, 2018

David Smith

Virtus Walkthrough

I just posted the video I made of Virtus Walkthrough. I created this in 1990 with David Easter and Mark Uland. I am actually demonstrating Virtus 4 here. Scott Haynes was responsible for this version, and I think it is the best version of Walkthrough we ever did. It was also, unfortunately, the last. The overall design and interaction is almost identical to the original 1990 version, though. This is using a software renderer that I wrote, which was greatly enhanced by the team led by Greg Rivera. It uses portals extensively, and though you won't notice it, it does not actually have a z-buffer. The objects are sorted using a kind of BSP thing I did that is extremely fast. I am actually running this on Parallels on an older MacBook, and it feels faster to me than SketchUp 7.1, which I run native. But of course, considering that this had to run in real time on sub-20 MHz 68010 and 68020 machines, it had to be pretty damn fast to work at all.
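Rendering correctly without a z-buffer relies on visiting the scene geometry in strict back-to-front order, so nearer polygons simply paint over farther ones. A minimal sketch of how a BSP-style traversal yields that order (my own illustration, not Virtus source code):

```javascript
// Illustrative BSP back-to-front traversal (not Virtus code).
// Each node splits space by a plane; whatever lies on the far side of
// the plane from the eye is drawn first -- no z-buffer needed.

function side(plane, point) {
  // Signed distance of a point from a plane {normal: [x,y,z], d}
  return plane.normal[0] * point[0] +
         plane.normal[1] * point[1] +
         plane.normal[2] * point[2] - plane.d;
}

function drawBackToFront(node, eye, drawFn) {
  if (!node) return;
  if (side(node.plane, eye) >= 0) {            // eye in front of plane
    drawBackToFront(node.back, eye, drawFn);   // far side first
    drawFn(node.polygon);
    drawBackToFront(node.front, eye, drawFn);  // near side last
  } else {                                     // eye behind plane
    drawBackToFront(node.front, eye, drawFn);
    drawFn(node.polygon);
    drawBackToFront(node.back, eye, drawFn);
  }
}

// Tiny two-wall demo tree
const tree = {
  plane: { normal: [0, 0, 1], d: 0 }, polygon: 'wall A',
  front: { plane: { normal: [0, 0, 1], d: 5 }, polygon: 'wall B',
           front: null, back: null },
  back: null
};
const order = [];
drawBackToFront(tree, [0, 0, 10], p => order.push(p));
console.log(order); // → ['wall A', 'wall B'] (A is farther from the eye)
```

Moving the eye to the other side of both walls reverses the order, which is exactly the painter's-algorithm property the sort has to guarantee.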

This is the system in which I first prototyped the virtual collaboration space that I showed to Alan Kay. This led to the development of ICE (see previous post), OpenSpace, the Croquet Project, and Teleplace.

Virtus Walkthrough won many awards, including the very first Breakthrough Product of the Year from MacUser Magazine, and the PC Computing Best Drawing Program, where we beat Adobe Photoshop.

Here is the demo:

by David A. Smith ( at September 01, 2018 03:10 AM

March 23, 2017

Nikolay Suslov

Virtual World Framework & A-Frame

In this post I want to share the details about the latest project being done on Krestianstvo SDK.
Virtual World Framework provides a robust decentralised architecture for building virtual world apps based on a replicated computation model. Its JavaScript version is strongly based on the ThreeJS library for programming apps with 3D visualisation and deep interaction support. To build such apps, the developer has to be aware of ThreeJS internals, not to mention knowing the VWF component architecture. But actually VWF works with any programmable elements, however simple they are. The A-Frame framework solves the problem of ThreeJS complexity for developing web apps for Virtual Reality. It provides a component-based architecture for that. A-Frame encapsulates ThreeJS, hiding the internals and providing a high-level interface for describing a web app declaratively.
So, I have developed model and view drivers for VWF that provide basic support for using A-Frame components in Virtual World Framework apps. This makes it easy to build collaborative VWF apps with 3D visualisation and support for WebVR, HMDs, trackers and mobile devices.

Source code at GitHub

Here is a small video demonstration that shows the interaction within a collaborative Virtual World Framework app, which is composed of A-Frame components.

In the video, three Google Chrome web browsers are pointed at the same VWF app instance's URL. Every browser shows the replicated A-Frame scene with components in it. The users are represented by small cubes and are visible to each other. The cube on the right drives the simulation, which stays the same in all browsers.

Try the online demo here:

Simple scenario for collaboration:

  • Open the given URL in a web browser (
  • Copy the generated URL and open it in another browser window,
  • or point the web browser at,
  • where you can find all running VWF app instances to join.
  • Open the generated URL on your phone or tablet.
  • Move in space with the arrow keys or WASD and point at objects with the cursor in the centre of the screen (this will generate a Click event).
  • You can create any number of isolated VWF app instances, but to connect to them you will need to know the generated URL.

So, what does a simple VWF app with A-Frame look like?
Here is a simple code of index.vwf.yaml:
      value: "Virtual World Framework & A-Frame"
      textColor: "#b74217"
      position: [-2, 2.5, -2]
      position: [1, 1.25, -4]
      color: "#e0e014"
      radius: 1
      wireframe: true
          position: [2, -1.25, 0]
          color: "#2167a5"
          depth: 1
      color: "#ECECEC"
      position: [0, 0, 0]
          look-controls-enabled: true
          forAvatar: true
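The listing above kept only the property values; its indentation and key names were lost when the page was extracted. Purely as a hedged illustration, here is how such a VWF component file is typically shaped. The node names and the `extends` URIs below are my guesses for illustration, not the actual Krestianstvo sources:

```yaml
# Hypothetical reconstruction; node names and extends URIs are guesses.
extends: http://vwf.example.com/aframe/ascene.vwf
children:
  text:
    extends: http://vwf.example.com/aframe/atext.vwf
    properties:
      value: "Virtual World Framework & A-Frame"
      textColor: "#b74217"
      position: [-2, 2.5, -2]
  sphere:
    extends: http://vwf.example.com/aframe/asphere.vwf
    properties:
      position: [1, 1.25, -4]
      color: "#e0e014"
      radius: 1
      wireframe: true
    children:
      box:
        extends: http://vwf.example.com/aframe/abox.vwf
        properties:
          position: [2, -1.25, 0]
          color: "#2167a5"
          depth: 1
  plane:
    extends: http://vwf.example.com/aframe/aplane.vwf
    properties:
      color: "#ECECEC"
      position: [0, 0, 0]
  camera:
    extends: http://vwf.example.com/aframe/acamera.vwf
    properties:
      look-controls-enabled: true
      forAvatar: true
```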

by Suslov Nikolay ( at March 23, 2017 09:56 PM

July 04, 2014

Historical - Vanessa Freudenberg

SqueakJS runs Etoys now

TL;DR: Try Etoys in your web browser without a plugin (still buggy, but even works on iPad). Feedback from more platforms is very welcome, and fixes to improve the compatibility, too.

Half a year has passed since my initial release of SqueakJS. Now I can report on some significant progress since then.

For one, I adopted a UI layout similar to Dan’s Smalltalk-72 emulator, where the debugger interface is only visible when the system is stopped. Now that the basics are working, there is no need to show the debugger all the time. Try it yourself at the Lively page.

But more importantly, many more subsystems are working now. BitBlt is almost complete (all the important modes are implemented), WarpBlt works (for scaling and rotating morphs), the image can be saved, an emulated file system supports reading and writing of persistent files. This now is enough to not only run the very old and undemanding “mini.image”, but SqueakJS now can even run the very latest Etoys image, the same version as on Squeakland. Beware of the many incomplete features and outright bugs still left to be fixed, but try it for yourself here.

While Etoys feels a lot slower than the MVC “mini.image”, and some operations take many seconds, it is surprisingly responsive for normal interaction. On the browsers with the fastest JIT compilers (Safari on Mac, IE on Windows) it is almost good enough, even though no serious optimizations have been done yet. It is also interesting to see that some browsers (Chrome and Firefox) are currently significantly slower. And not just a little slower: Safari outperforms Chrome by 200% for this workload! This is likely due to Safari’s excellent LLVM-based FTL JIT.

The remarkable thing about the screenshot above is how unremarkable it looks. Apart from the missing white oval behind the “Home” label it looks just like it’s supposed to. In comparison, a week ago the screen still looked like this:

The difference is that Tobias Pape and I added support for Balloon2D rendering. This is Squeak’s default vector rendering engine, originally created by Andreas Raab to show Flash animations. But unlike the rest of the SqueakJS VM, we did not port the original code. Instead, our plugin intercepts the drawing commands and renders them using HTML5 canvas drawing routines. While still far from complete, it can already render one important kind of shape: TrueType font glyphs. They are defined by Bézier curves, which need to be rendered with anti-aliasing to look nice. And now that we can render text, the graphics are almost complete. Many more details still need to be implemented, especially color gradients.
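The interception idea can be pictured as a thin adapter: path commands for a glyph outline are replayed against the HTML5 canvas path API, which does the anti-aliased rasterization for free. A rough sketch of that pattern (my own illustration, not the actual SqueakJS plugin code), using a recording stand-in for the canvas context so it runs outside a browser:

```javascript
// Illustrative sketch of command interception (not the real plugin).
// Bezier segments are replayed onto a canvas-like 2D context; in a
// browser, `ctx` would be canvas.getContext('2d').

function renderGlyph(ctx, segments) {
  ctx.beginPath();
  ctx.moveTo(segments[0].from[0], segments[0].from[1]);
  for (const seg of segments) {
    // TrueType outlines are quadratic Bezier curves
    ctx.quadraticCurveTo(seg.ctrl[0], seg.ctrl[1], seg.to[0], seg.to[1]);
  }
  ctx.closePath();
  ctx.fill(); // the canvas rasterizes the path with anti-aliasing
}

// Recording context stands in for a real CanvasRenderingContext2D,
// so the command stream can be inspected without a browser.
function makeRecordingContext(log) {
  const record = name => (...args) => log.push([name, ...args]);
  return { beginPath: record('beginPath'), moveTo: record('moveTo'),
           quadraticCurveTo: record('quadraticCurveTo'),
           closePath: record('closePath'), fill: record('fill') };
}

const log = [];
renderGlyph(makeRecordingContext(log), [
  { from: [0, 0], ctrl: [5, 10], to: [10, 0] },
  { from: [10, 0], ctrl: [5, -10], to: [0, 0] }
]);
console.log(log.map(c => c[0]).join(','));
// → beginPath,moveTo,quadraticCurveTo,quadraticCurveTo,closePath,fill
```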

This highlights one strength of Squeak: the VM and its plugin modules present a well-defined, stable interface to the outside world. That is what makes a machine truly “virtual”. In contrast, other systems rely on an FFI (foreign function interface) or similar techniques for extension. While convenient during rapid development, this does not keep the interface small and stable: client code calls C functions directly, which may or may not exist on a given platform, so it typically must be special-cased per platform. That makes it much harder to move the system to another platform, and in particular one that is completely different, like the web browser. The Squeak Etoys image, on the other hand, did not have to be modified at all.

What I’d like to see fixed in Squeak is that there should be working fallback code for all non-essential primitive functions. This would make it much easier to get up and running on new platforms.
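The fallback pattern is easy to sketch: a primitive either succeeds or signals failure, and on failure the interpreter simply runs the method's ordinary Smalltalk fallback code. A minimal JavaScript illustration (the names here are hypothetical, and the real SqueakJS dispatch is more involved):

```javascript
// Hypothetical sketch of primitive dispatch with fallback; the names
// are illustrative, not the actual SqueakJS VM interface.
function invoke(method, args) {
  if (method.primitive) {
    const result = method.primitive(args);
    if (result.success) return result.value;  // fast path
    // Primitive failed or is unimplemented on this platform:
    // fall through to the ordinary fallback code below.
  }
  if (!method.fallback) {
    throw new Error('primitive failed and no fallback code');
  }
  return method.fallback(args);
}

// Example: a 'fast add' primitive that only handles integers,
// with fallback code covering the general case.
const addMethod = {
  primitive: ([a, b]) =>
    Number.isInteger(a) && Number.isInteger(b)
      ? { success: true, value: a + b }
      : { success: false },
  fallback: ([a, b]) => a + b   // slower but always works
};

console.log(invoke(addMethod, [3, 4]));     // → 7 (primitive path)
console.log(invoke(addMethod, [0.5, 2]));   // → 2.5 (fallback path)
```

A method with no fallback is exactly the situation the post complains about: the whole system breaks on a platform where that one primitive is missing.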

For SqueakJS, bugs need to get fixed, and many features are still missing to run Etoys fully. Adding support for Squeak releases other than Etoys would be great (closure/Cog/Spur images). Contributions are welcome: fork my GitHub project.

by Vanessa ( at July 04, 2014 08:42 PM

October 09, 2013

Nikolay Suslov

ADL Sandbox project and Virtual World Framework

I am very excited about the ADL Sandbox project and the Virtual World Framework in general, and that they are available as open source to experiment with.
By the way, I come from the Smalltalk world and am an adherent of the OpenCroquet architecture, which has its own history. OpenCroquet, in its latest form OpenQwaq, takes all its features and benefits from the platform it is realized on (the open source Smalltalk dialect Squeak).
But Virtual World Framework goes further now, especially in moving from a class-based language architecture to prototypes, which allows the shared code and behaviours used in distributed computation to be modified at runtime!
That is one of the main reasons I am now starting to move from OpenCroquet to Virtual World Framework, despite the sadness of parting with all the benefits of the Smalltalk language and its IDE, compared to JavaScript and Ruby. Although I have the idea of looking at Smalltalk as a DSL for Virtual World Framework in the future (much as Seaside is a web framework over HTML and JavaScript).
The ADL Sandbox project already uses the single JavaScript language for everything. And one interesting thing to realize in it would be an OMeta/JS workspace inside the Sandbox script editor, which would allow you to define your own language's grammar, replicate it through the application instances, and then run scripts on that shared grammar. Almost all of JavaScript's dynamic capabilities would then be used in distributed computation. For example, you could have every language from Logo (turtle graphics) up to Smalltalk available for scripting the virtual world right in the web browser.
Moreover, the integration between LivelyKernel and Virtual World Framework will give endless capabilities for in-browser project development and editing.
So, I have started using the ADL Sandbox project and Virtual World Framework to build a Virtual Learning Environment for modern mathematics and physics, exploring new ways of developing concrete tools for modelling, rendering and interacting with the content of the virtual world.
Will post about the progress here.

by Suslov Nikolay ( at October 09, 2013 02:02 PM

July 27, 2013

Nikolay Suslov

Curved Space Explorer for Squeak

I want to introduce the Curved Space Explorer for Squeak project, known as CCSE in Krestianstvo SDK.
It is a Smalltalk port of Curved Spaces, originally developed by Jeff Weeks ( in C.
This Squeak version is derived from the Krestianstvo SDK project's version, where Curved Space Explorer is collaborative in nature and intended mainly for distributed computation.
The aim of this project is to make Curved Space Explorer in Smalltalk available to the large Smalltalk audience and the mainstream Squeak distribution, so that anybody interested can work with it.
The project is Open Source and the code is available here:

To run CCSE, you need to download the latest Squeak distribution from the official site
I also recommend using the latest Smalltalk CogVM from the
and in the running image, execute in a workspace:

"1. Load FFI"

(Installer repository: '')
    install: 'FFI-Pools';
    install: 'FFI-Kernel';
    install: 'FFI-Tests'.

"2. Load 3DTransform "

(Installer repository: '')
    install: '3DTransform'.

"3. Load OpenGL and CCSE"

(Installer repository: '')
    install: 'OpenGL-Pools';
    install: 'OpenGL-Core';
    install: 'OpenGL-NameManager';
    install: 'CCSpaceExplorer'.

"4. Run sample application"

CCSEMorphRender runApp

Help

In the running application, some options are available using the keyboard and mouse:

"up" and "down" arrows on the keyboard - speed of the ship movement
"left" and "right" arrows on the keyboard - change aperture

mouse move with left button pressed - rotation of the ship
mouse move with left button pressed and shift pressed - translation of the ship

press "o" on keyboard - switch between "head" and "body" rotation
press "p" on keyboard - switching on stereo (anaglyph) mode
press "l" on keyboard - switching shaders support (only for Mac OS X for now)


Also you can use the preinstalled image from here:

Happy exploring!

by Suslov Nikolay ( at July 27, 2013 11:11 PM

OpenGL procedural textures generator (by David Faught) for Squeak 4.4

Repost from mail list to blog

I have succeeded in running TweakCore on the recent Squeak 4.4 trunk image (from Jenkins).
One of the famous existing applications developed in Tweak is the OpenGL procedural textures generator by David Faught.
I have made it loadable in the current Squeak as well.

You can download a ready-to-run image from here:
or execute in a workspace in your own image:

"1. Load FFI"
(Installer repository: '')
    install: 'FFI-Pools';
    install: 'FFI-Kernel';
    install: 'FFI-Tests'.

"2. Load CroquetGL " 
(Installer repository: '')
    install: '3DTransform';
    install: 'OpenGL-Pools';
    install: 'OpenGL-Core'.

"3. Load TweakCore and Procedural textures application for Tweak"
(Installer repository: '')
    install: 'tweakcore';
    install: 'Tweak-OpenGL-sn.3'.

"4. Set the default settings in Tweak"    
CDefaultWidgetLibrary setDefaultSettings.

"5. Run one of two examples"
CProjectMorph open: Wrinkle1.
CProjectMorph open: Wrinkle2.

The attached screenshot shows the running application. 


by Suslov Nikolay ( at July 27, 2013 10:07 PM

April 13, 2013

Takashi Yamamiya

Various examples in Haskell's FRP.Reactive

After playing with the Flapjax library in JavaScript, I moved to Reactive to learn more about FRP. Because research on Functional Reactive Programming is most active in Haskell, I thought it would be better to do it there. Reactive seems to be a nice library, but unfortunately I couldn't find many working code examples. So I show some of them as my exercise. To write this, I owe much to maoe's great article in Japanese.

(This page has been translated into Spanish by Maria Ramos from

As I didn't have much time, I couldn't write a good explanation now. But still I hope it helps some people who are learning Reactive like me. I used Haskell Platform 2010 (slightly old) and ran cabal install reactive --enable-documentation to install Reactive.

The first example shows "Hello, World!" after three seconds. atTime generates a timer event, and <$> converts this event to an IO action (\_ -> putStrLn "Hello, World!") which writes a string.

This is the same as above, but it makes an event each second.

This makes running Fibonacci numbers. You can use scanlE to process the previous value and the current value of the event in a function. In this case, (0, 1) is the initial value; when an event occurs, the function \(n0, n1) _ -> (n1, n0 + n1) calculates the next value, and the result ((1, 1) in the first case) is used as the next argument when a new event occurs.
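The original Haskell snippet isn't reproduced here, but the accumulation that scanlE performs can be sketched in plain JavaScript. This is only an analogy of the fold, not Reactive's API: the scanlE function and the array standing in for the event stream are mine.

```javascript
// scanl-style fold over a list of event occurrences:
// each occurrence applies the step function to the previous state.
function scanlE(step, init, occurrences) {
  var state = init;
  return occurrences.map(function (ev) {
    state = step(state, ev);
    return state;
  });
}

// Fibonacci as in the Reactive example: the state is a pair (n0, n1),
// and each timer event moves it to (n1, n0 + n1).
var ticks = [0, 1, 2, 3, 4]; // five timer events
var pairs = scanlE(function (s, _) { return [s[1], s[0] + s[1]]; }, [0, 1], ticks);
var fibs = pairs.map(function (p) { return p[0]; });
// fibs is [1, 1, 2, 3, 5]
```

Each event occurrence yields the next Fibonacci number, just as each tick of the Reactive timer does.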

It shows characters as you type. It looks difficult, but you don't have to worry about the run function. The important part is machine :: Event Char -> Event (IO ()), which converts a character input event to an IO action.

This example shows how to merge two events. onType is the same as machine in the previous example, and onClock is the same as in the helloMany.hs example. I used `mappend` to merge the two events.

This shows a simple state machine. The function next defines the state machine, and mealy_ converts the definition to an event. zipE is another way to merge two events. Unlike mappend, you can see the two values of the two events at the same time.
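The mealy_ combinator itself isn't shown here, but the idea of driving a state machine from events can be sketched in JavaScript. The function names and the toy traffic-light machine are mine, with an array standing in for the event stream:

```javascript
// A Mealy-style transform: a transition function maps (state, input)
// to the next state, and the resulting stream of states is the output.
function mealy(next, init, inputs) {
  var state = init;
  return inputs.map(function (input) {
    state = next(state, input);
    return state;
  });
}

// A toy traffic-light machine driven by four tick events.
function nextLight(state, _) {
  return { green: "yellow", yellow: "red", red: "green" }[state];
}
var states = mealy(nextLight, "red", [1, 2, 3, 4]);
// states is ["green", "yellow", "red", "green"]
```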

by Takashi ( at April 13, 2013 01:43 AM

February 25, 2013

Vanessa Freudenberg

Smalltalk Bindings for Minecraft Pi

The Raspberry Pi is a cute little computer. Quite cheap at $35, you plug in USB keyboard+mouse and a TV as monitor. And it is surprisingly capable, even for running 3D games.

One particularly interesting game is Minecraft: Pi Edition. As in other Minecraft versions, the main goal is to create a world. But unlike other versions, you can not only use the tools provided by the game, you can make your own tools! That's because it comes with a programming interface.

The Minecraft world is made of little cubes, and you normally place or remove these blocks by hand, one after another. This is fun, but for larger structures it's also quite cumbersome. For example, this rainbow might take a long time to construct manually:

But I did not make the rainbow by hand. I programmed it, using the Smalltalk programming language. It's just these dozen lines of code in the Squeak programming environment:

Squeak is already installed on the Raspberry Pi, because Scratch was made in Squeak. Of course you need a little more to make this dozen lines of code work. Mojang (the developers of Minecraft) have provided "bindings" for the Python and Java programming languages, but not for Smalltalk. So I had to make these bindings first.

Here are the Bindings

Now you can use the bindings too, because I am publishing my code:
Squeak can either run on the Raspberry Pi itself (a VM is already installed) or on another computer in your network.

There are two packages, 'Minecraft-Pi-Base' and 'Minecraft-Pi-Demo'; load them in this order. At the time of writing, the demo package has only the rainbow method in it. The code is not heavily commented, but from the examples it should be fairly obvious how to use it.

The bindings are still somewhat basic, but cover all the functions of the current Minecraft-Pi 0.1.1 release. There is certainly room for improvement. E.g. it would be nice to add symbolic block names, so you could write "wool" instead of "35". And the hit testing (when you right-click on a block with your sword) works, but could be made more convenient to use, perhaps by introducing an event class like in the other bindings.
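The Pi edition's programming interface underneath all the bindings is a plain line-based TCP protocol, with commands like world.setBlock(x,y,z,id) sent over a socket. As a sketch of the "symbolic block names" idea suggested above, independent of the Smalltalk bindings themselves (the function name and the small name table here are mine; the full block list is longer):

```javascript
// A few symbolic block names, so you can write "wool" instead of "35".
var BLOCKS = { air: 0, stone: 1, grass: 2, dirt: 3, wool: 35 };

// Minecraft Pi commands are single text lines, e.g. "world.setBlock(x,y,z,id)",
// written to a TCP socket on the Pi (port 4711 by default).
function setBlockCommand(x, y, z, block) {
  var id = typeof block === "string" ? BLOCKS[block] : block;
  return "world.setBlock(" + [x, y, z, id].join(",") + ")";
}

// setBlockCommand(0, 10, 0, "wool") → "world.setBlock(0,10,0,35)"
```

A binding in any language, Smalltalk included, is essentially a thin layer formatting such commands and parsing the replies.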

I made the repository open, so anyone can easily contribute. I'm curious what others will come up with.   Like, control Minecraft from Etoys or Scratch? How about a Croquet bridge? Build a little game? In any case, have fun! :)

by Vanessa ( at February 25, 2013 11:42 AM

January 17, 2013

David Smith

Andreas Raab

Andreas Raab was my best friend.

Andreas loved to challenge me to express and exceed my abilities. In so doing, he forced me to challenge him. We pushed each other up the tallest mountains where, together, we had the honor of viewing the world from a new point of view.

He loved to program. He was the best programmer I have ever known – by a lot. This is not an idle statement – I have known and worked with the best in the world. Andreas was better.

He loved exploring and understanding new systems. He was fearless. He would not only understand how to use the new tool to accomplish his task, he would also figure out how to make it even better for the next person.

Andreas loved to make great things for people to use. He was never content with “good enough”. Every line of code he wrote was an opportunity to teach someone a new idea. Every system he built greatly empowered the person willing to embrace it and it allowed them a new freedom to create and explore.

He loved beer and introduced me to some fantastic brews. Learning to drink from a German is a valuable skill.

He loved to violently explore ideas. We had many loud discussions –often at a bar, where we threw ideas back and forth like rag dolls. Most ideas did not survive. The ones that did were very strong.  He was as intent a listener as he was a proponent, and often succeeded in convincing himself he was wrong. He certainly convinced me he was right more often than not.

He loved food and was as essential a partner in exploring great restaurants as he was in exploring ideas.  We visited Canto do Brasil in San Francisco every weekend we could and always had two (three?) Caiparinhas each and the Feijoada. And of course, the steak at Angus Barn… (yes, he was jealous when we went without him).

He loved language – he was always searching for how to express an idea in English with just the right word or phrase. He loved coding in the same way. He loved exploring the subtleties and power of how ideas could be expressed, communicated and unleashed.

Andreas loved classical music. We carpooled to work every day and he always had the radio on the classical station when he picked me up. We challenged each other on what the piece was, who the composer was, and even who the performers were. It was probably the only thing I was better at than him.

Andreas loved his friends. There were no conditions or requirements. He worked to see the world from their perspective – honoring them by always caring enough to understand them. He also expected them to live up to his high standards of thought, action, caring and love.

Andreas loved Kathleen. He was always a positive person, but he glowed from the moment he met her. He loved her more than anything else in his life. She completed his world and he completed hers.

by David A. Smith ( at January 17, 2013 12:26 AM

December 17, 2012

Historical - Vanessa Freudenberg

Squeak Etoys running on OLPC XO-3 tablet

SJ brought a hand-assembled XO-3 prototype to the OLPC Community Summit in San Francisco (mass production only starts when someone puts in a large-scale order), and of course I tried to run Etoys on it. It's pre-installed (as on all XOs) and worked right out of the box.

I was able to paint and drag objects, but since there is no right-click support yet there was no halo to bring up a new viewer. Also, touch targets are rather small for my adult-sized hands, and since there are no hover events, some features don't work correctly (as we found out with the iPad version two years ago).

So more work is needed, as well as for the XO-4 which has a multitouch screen in addition to a keyboard and touchpad. Help welcome :)

by Vanessa ( at December 17, 2012 11:49 AM

April 09, 2012

Nikolay Suslov

Virtual World Framework (aka Croquet 2) goes live!

"The Virtual World Framework (VWF) is a fast, light-weight, web-based architecture for creating and distributing secure, scalable, component-based, and collaborative virtual spaces. It leverages existing web-based standards, infrastructure, and emerging technologies with the intent of establishing a powerful yet simple to use platform that is built on top of the next generation of web browsers. " from

Here is the information about VWF that is available on the Internet for now:
Official site:
The VWF source code, published during the conference
Video showing the Virtual World Framework, starting from 0:40 min.
Slides from WebGl camp 4 about VWF architecture.
I have tested the installation process of the VWF server on Mac OS X Lion 10.7.3; here are the steps.
Launch a terminal window:
1. Load the source code from the VWF Git repository:
$ git clone vwf
2. Install RVM:
$ curl -L | bash -s stable
3. Reload your shell environment:
$ source ~/.bash_profile
4. Check the requirements (follow the instructions):
$ rvm requirements
5. Install Ruby:
$ rvm install 1.9.3
6. cd to your VWF development directory:
$ cd vwf/
7. Install bundler:
$ gem install bundler
8. Install the RubyGems to the system:
$ bundle install --binstubs
9. Edit the file "", correcting the file paths:
require "init.rb"
change to
require "./init.rb"
10. Set Ruby 1.9.3 as the default for the current shell:
$ rvm use 1.9.3
11. Run the VWF server:
$ ./bin/thin start
12. Open http://localhost:3000 in a WebGL-enabled web browser (for the full experience you will need the latest Mozilla Firefox web browser).
If you want to start the VWF server as a background service, just add the -d key:
$ ./bin/thin start -d
I also tested the installation procedure on FreeBSD 8.1
and successfully ran a VWF server instance on

Happy Birthday to Virtual World Framework!

by Suslov Nikolay ( at April 09, 2012 08:03 PM

January 25, 2012

Nikolay Suslov

Krestianstvo SDK at C5-2012 conference

This year I was very happy to attend The Tenth International Conference on Creating, Connecting and Collaborating through Computing, 18-20 January 2012 (Institute for Creative Technologies, University of Southern California, CA, USA) and to demonstrate Krestianstvo SDK's projects and quite new features of it, like Microsoft Kinect and CAVE support for OpenQwaq. The preliminary proceedings are available for download (publication to appear).

Almost all of the Viewpoints Research Institute's team was there!
There was a great tour of USC Institute for Creative Technologies and demonstrations of their projects.

Coach Mike (programming robot with blocks)

ICT Graphics Lab: Light Stage X, Gunslinger: Virtual Human integration demonstration, ICT Mixed Reality.

in.. Los Angeles, California...

by Suslov Nikolay ( at January 25, 2012 08:22 PM

December 21, 2011

Nikolay Suslov

David A. Smith has revealed a new Croquet-like framework available in March 2012

David A. Smith, one of six principal architects of the Croquet Project has revealed a new Croquet-like framework that is built on WebGL and HTML, on which he is working now!
"We plan to have an open beta in March 2012" - David A. Smith said.
That's incredible!
Merry Christmas and Happy New Year!

by Suslov Nikolay ( at December 21, 2011 09:32 PM

December 06, 2011

Takashi Yamamiya

Flapjax vs Tangle

Functional Reactive Programming (FRP) is a framework for dealing with time-varying data in a clean way. It combines the beauty of functional programming with the dynamics of object-oriented programming. The basic principle is as easy as spreadsheets; however, its vague scope and arcane terminology keep you from grasping it. It's not easy to answer questions such as what makes FRP different from the Observer pattern, data flow, etc. I think a good way to explain FRP is to compare an FRP library against a non-FRP library, so I can show you where FRP is special, and the pros and cons of FRP.

I examined Flapjax as an example of FRP, and took Bret Victor's Tangle as the comparison target. Although Tangle has a goal similar to FRP's, as he wrote "Tangle is a library for creating reactive documents", its implementation is quite different from Flapjax.

In Flapjax, side effects are hidden inside the framework. Time-varying data is represented by a dependency tree, and you can compose those trees to implement complex behavior.
Tangle provides a simple framework and UI widgets, but the data flow is expressed by normal imperative programming and assignments.

Because of those properties, I think comparing the two libraries is helpful to understand what FRP is. I hope it makes clear idea about FRP in your mind.

Simple Calorie Calculator in Tangle

This is the first example from the Tangle's documentation. You can modify the number of cookies by dragging, and it keeps calculating the calories as you change the value.

When you eat cookies, you will consume calories.

To make this nice reactive document, the document consists of two parts: HTML for the view and JavaScript for the model.

<p id="tangle">
  When you eat <span data-var="cookies" class="TKAdjustableNumber" data-min="2" data-max="100"> cookies</span>,
  you will consume <span data-var="calories"></span> calories.
</p>

The HTML part is straightforward; it is just normal HTML except for the special attributes for Tangle. data-var is used to connect HTML elements to the Tangle object's properties. The class name TKAdjustableNumber makes a draggable input control; data-min and data-max are its parameters.

var element = document.getElementById("tangle");

new Tangle(element, {
  initialize: function () {
    this.cookies = 4;
  },
  update: function () {
    this.calories = this.cookies * 50;
  }
});

The actual model of the document is described in the second argument of the Tangle constructor (new Tangle). It consists of just two parts: initialize sets up the initial state, and update is invoked whenever you modify the input value. Tangle connects the model and the HTML element specified by getElementById("tangle").

This initialize-update structure is fairly common among end-user programming languages like Processing and Arduino.

Simple Calorie Calculator in Flapjax

Let's move on to Flapjax. Unfortunately, Flapjax doesn't have as nice an input widget as Tangle does. Instead, we use a traditional input field. But other than that, the behavior is identical.

When you eat cookies, you will consume calories.

As with Tangle, the Flapjax version has an HTML part and a JavaScript part. Note that Flapjax provides "Flapjax Syntax", which allows you to write a simpler notation, but we don't use it here because I want to compare the two as JavaScript libraries.

<p id="flapjax" class="example">
  When you eat <input id="cookies" value="4" /> cookies,
  you will consume <span id="calories"></span> calories.
</p>

Flapjax's HTML part is similar to Tangle's. The element identifiers (cookies and calories) are given by id attributes. Unlike in Tangle, the initial number of cookies is written in the input field.

var behavior = extractValueB("cookies");
var colories = behavior.liftB(function (n) { return n * 50; });
insertDomB(colories, "calories");

In Flapjax, time-varying data is called a behavior. The goal of the program is to make a behavior which always calculates the calories of the cookies. It's not as difficult as it seems. extractValueB creates a behavior from a form element; in this case, extractValueB("cookies") tracks every change happening in the input field named "cookies". This behavior is processed by the function given to liftB: whenever you modify the "cookies" field, colories represents a value which is always 50 times the number of cookies.

Eventually, insertDomB inserts the content of colories where the HTML element "calories" is, and the calories are shown on the screen. This element is automatically updated.

Unlike Tangle, there is no side effect in the program. One advantage of FRP is that you don't confuse old values with new values. In Tangle's example, this.cookies is the old value (input) and this.calories is the new value (output), but you are free to mix those up. In Flapjax, a new value is always the return value of a function, and there is no chance to make that mistake.

Implement Adjustable Number Widget in Flapjax

One of the advantages of FRP is its composability. You can make a complicated behavior by combining simple behaviors (imperative programming occasionally gives you a hard time in debugging when a bug involves program modules connected by side effects). To demonstrate this feature, I will show you how to make a Tangle-style draggable widget in Flapjax. This problem is particularly interesting because processing drag and drop involves a state machine, and a state machine does not quite fit a functional programming style. So you might see the pros and cons of FRP clearly in this example.

When you eat cookies, you will consume calories.

The HTML part is almost identical, except for the adjustable class on the input field, which points to a Tangle-like (but less fashionable) stylesheet.

<p id="flapjax-drag" class="example">
  When you eat <input id="cookies-drag" value="4" class="adjustable"/> cookies,
  you will consume <span id="calories-drag"></span> calories.
</p>

The main JavaScript part is also similar to the above. But this time, we implement makeAdjustableNumber to make a draggable widget from the element named "cookies-drag".

var element = document.getElementById("cookies-drag");
var behavior = makeAdjustableNumber(element);
var colories = behavior.liftB(function (n) { return n * 50; });
insertDomB(colories, "calories-drag");

A drag gesture consists of three events: mousedown, mousemove, and mouseup. After a mousedown is detected, the program has to track mousemove events to know how far you are dragging. You can make such a state machine by constructing a higher-order event stream. Here are two new concepts. An event stream is similar to a behavior, but it is a stream of discrete events instead of continuous values. You don't have to worry much about the difference; it's just another object with a slightly different API. A higher-order event stream is an event stream of event streams. It is used to make a stream whose behavior switches depending on the input.

mouseDownMove makes a higher-order event stream that tracks mousedown and mousemove. extractEventE(element, "mousedown") extracts mousedown events on the element. When the event fires, the function inside mapE is evaluated. mapE is similar to liftB, but only for an event stream. Inside the function, extractEventE(document, "mousemove") finds mousemove events and tracks the distance from the mousedown. Note that I used document to find the event because occasionally you drag the mouse outside the widget.

function mouseDownMove (element) {
  return extractEventE(element,"mousedown").mapE(function(md) {
    var initValue = parseInt(element.value);
    var offset = md.layerX;

    return extractEventE(document,"mousemove").mapE(function(mm) {
      var delta = mm.layerX - offset;
      return Math.max(1, Math.round(delta / 20 + initValue));
    });
  });
}

We need to handle the mouseup event as well. The mouseUp function returns a higher-order event stream that finds the mouseup event, and zeroE happily does nothing.

function mouseUp (element) {
  return extractEventE(document,"mouseup").mapE(function() {
    return zeroE();
  });
}

These two event streams made by mouseDownMove and mouseUp are merged by the mouseDownMoveUp function to complete the mousedown, mousemove, mouseup cycle. mergeE is used to merge two event streams. We need one more step, switchE, to convert the higher-order stream to a normal stream, in this case a stream of numbers (distances).

function mouseDownMoveUp(element) {
  var downMoveUp = mouseDownMove(element).mergeE(mouseUp(element));
  return downMoveUp.switchE();
}

Finally, we connect the event stream to an HTML element. Here I did something slightly dirty: whenever a drag gesture happens, the element.value attribute is set. Using insertDomB to make an output element would probably be cleaner, but I chose the dirty way to keep it simple. In the last line, the event stream is converted to a behavior object by startsWith. And that's how makeAdjustableNumber is implemented.

function makeAdjustableNumber (element) {
  var drag = mouseDownMoveUp(element);
  drag.mapE(function(n) { element.value = n; });
  return drag.startsWith(element.value);
}

Honestly, Flapjax doesn't seem too easy to use. But part of the reason might be that I chose to show plain JavaScript syntax to introduce the mechanism. Flapjax also provides its own compiler with a cleaner syntax; this Flapjax syntax should improve readability a lot. Anyway, I hope this short note helps you grasp a basic idea of Flapjax and FRP.


by Takashi ( at December 06, 2011 09:12 AM

November 14, 2011

Nikolay Suslov

Krestianstvo SDK2 goes CouchDB and OSC through the Web

After some silence the new version Krestianstvo SDK v.2.0.4 is available.
The updated version contains just the preloaded packages, which will make it possible to realize a lot of interesting things in the near future!

1. Seaside 3 and Pier 2 for ForumPages and web-services.
2. OSC support for TUIO, Kinect, FaceAPI and WebApp controllers.
3. OMeta for user-defined markup languages.
4. CouchDB for services serialization on distributed DB, instead of platform-dependent file system.
+ some fixes, mainly Windows dependent

So, feel free to Download, Register and enter the Krestianstvo.
Looking forward to meeting you online in space!

by Suslov Nikolay ( at November 14, 2011 07:27 AM

November 01, 2011

Historical - Vanessa Freudenberg

Squeak Etoys on ARM-based OLPC XO-1.75

First post this year, yikes! The last one was about ESUG 2010 in Barcelona, now I just returned from ESUG 2011 in Edinburgh. While I was there, a package with the shiny new XO-1.75 prototype arrived.

Incredibly, the pre-installed Etoys simply worked! Never mind the change in processor architecture, the Fedora folks have done a great job compiling the Squeak VM for ARM and so Etoys just works. Of course that's just as it should be, but it's still awesome. And e.g. Squeakland's own Etoys-To-Go would not have worked, as it only includes binaries for Intel-compatible processors.

Another great addition is a 3-axis accelerometer. The Linux kernel's driver exposes it as a file at /sys/devices/platform/lis3lv02d/position. Gotta love the unix design of exposing devices as files. All it took to make this usable from an Etoys project was just an object with ax, ay, and az variables that get set with one simple textual script:
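The Etoys script itself appeared as a screenshot in the original post. As a rough sketch of the same parsing in JavaScript (assuming the lis3lv02d driver writes a tuple like "(18,-1056,0)"; the function name is mine):

```javascript
// The lis3lv02d driver exposes the accelerometer as a text file whose
// content is a tuple like "(18,-1056,0)". Parse it into ax, ay, az,
// the same three variables the Etoys script sets.
function parsePosition(text) {
  var m = text.trim().match(/^\((-?\d+),(-?\d+),(-?\d+)\)$/);
  if (!m) throw new Error("unexpected position format: " + text);
  return { ax: parseInt(m[1], 10), ay: parseInt(m[2], 10), az: parseInt(m[3], 10) };
}

// On the XO itself you would read the file on each tick, e.g. with Node.js:
// var fs = require("fs");
// var p = parsePosition(fs.readFileSync(
//   "/sys/devices/platform/lis3lv02d/position", "utf8"));
```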

Another simple script can use this to control a ball (the "rebound" script just keeps it on-screen):
Fun all around! It's a bit hard to see the yellow ball in the video, but Jakob enjoys it anyway:
Also, uploading from Etoys directly to Squeakland using Wifi just worked. Yay!

Update: If you want to try my uploaded project on your XO-1.75, you need to save it once from Etoys, quit Etoys, and run it again. Otherwise it won't work - it was signed by my key so the Etoys security sandbox prevents it from opening the accelerometer device. The saved copy will be signed using your key so no sandboxing happens.

by Vanessa ( at November 01, 2011 01:31 PM

September 23, 2011

Takashi Yamamiya

Yet Another "Alligator Eggs!" Animation

Bret Victor came to our office yesterday, and we had a great chat. He is a great thinker and has a beautiful sense for visualizing abstract ideas. I really like his work. I want to learn more about his ideas, but as a starter, I tried to implement his famous early Alligator Eggs! game. This game was made to teach lambda calculus to eight-year-old kids. But it's even more fun for adult hackers!

Alligator and an egg : λx.x

This is a green alligator and her egg. This family shows the lambda expression λx.x (because I know you are not an eight-year-old, I use formulas without hesitation!). There is no animation, as there is nothing to eat.

An alligator eats an egg : (λx.x) y

But things get fun when there is something to eat in front of the alligator mother; in this case, a blue egg. If you click on the diagram, you see what happens (I only tested Chrome, Safari, and Firefox). The alligator eats the poor blue egg. But the price for the sacrifice is too high: the mother will die, and we will see a new baby.

And then things get curiouser. The new baby doesn't look like the mother at all; rather, it looks like the blue egg, the victim of the slaughter. What an amazing nature the lambda land has!

Take first : (λx.λy. x) a b

This is a slightly harder example. There are two alligators, "x" and "y", and two victim eggs, "a" and "b", on the right side. If there are two or more things next to an alligator family, it eats the left one first (this is called left associativity in jargon). Can you guess what happens after the meal? Alligator "x" eats egg "a", and alligator "y" eats egg "b". And only egg "a" survives (because it transmigrates through the green "x" egg).

You can think of this alligator family (λx.λy. x) as eating two things and leaving the first one. In the same way, can you think of an alligator family which eats two things and leaves the second one? Here is the answer.
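The diagrams are interactive on the original page; in JavaScript, with curried functions standing in for alligators, the two families can be sketched directly (the names first and second are mine):

```javascript
// λx.λy.x — eats two things and leaves the first one.
var first = function (x) { return function (y) { return x; }; };

// λx.λy.y — eats two things and leaves the second one:
// the "answer" family, whose mother guards an egg matching
// the second alligator's color.
var second = function (x) { return function (y) { return y; }; };

// first("a")("b")  → "a"
// second("a")("b") → "b"
```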

Old alligator : (λx.x) ((λy.y) (λz.z))

There are a few more things to know. Old alligators are not hungry, but they keep guarding their family as long as they guard more than one thing. They behave like parentheses in a lambda expression.

Color rule : (λx.λy.x) (λy.y)

This rule is the trickiest one. There are two blue alligators "y" at the left and the right, but these two are not in the same family. The only mother of the blue egg "y" is the right one. It gets trickier when the family is eaten by the green alligator, because the blue family is reborn where the green egg is, which is below another blue alligator. To keep them distinct, the right blue family changes its name and color to "y1" and orange.

Omega (Mockingbird hears the Mockingbird song) : (λx.x x) (λx.x x)

With these rules, you can make various kinds of alligator ecosystems. This is my favorite one. (λx.x x) is called a "Mockingbird", or rather we should call it a Mockingalligator: it doubles its prey. So what happens if a Mockingalligator eats a Mockingalligator? The result is called one of the omegas, an infinite loop. They keep eating forever. To stop the endless violence, please click the diagram again. But please do not click three times! Because of a bug of mine, something will go wrong.

Y combinator : λg.(λx.g (x x)) (λx.g (x x))

This is a dangerous but beautiful one. The omega ecosystem above kills each other without producing anything, but this Y combinator is very fertile. It produces many offspring, so you have to watch it carefully; otherwise it will eventually consume all the CPU power you have!!

3 + 4 : (λa.λb.λs.λz.(a s (b s z))) (λs.λz.(s (s (s z)))) (λs.λz.(s (s (s (s z)))))

Actually, alligators can also do serious jobs. If you design carefully, you can teach them how to calculate 3 + 4! In this example, the middle family represents three and the right family represents four (count the green eggs). And the result is a family with seven green eggs! These are called Church numerals (I don't have time to explain the theory, so please read the link).
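Since the alligator families are just lambda terms, the same 3 + 4 can be run as ordinary JavaScript functions. This is a sketch of Church numerals, not the alligator implementation; the helper names are mine:

```javascript
// Church numerals: a number n is "apply s, n times, to z".
var three = function (s) { return function (z) { return s(s(s(z))); }; };
var four  = function (s) { return function (z) { return s(s(s(s(z)))); }; };

// λa.λb.λs.λz. a s (b s z) — the "addition" family from the example.
var add = function (a) { return function (b) {
  return function (s) { return function (z) { return a(s)(b(s)(z)); }; };
}; };

// Convert back to an ordinary number by counting the applications of s.
function toNumber(n) {
  return n(function (k) { return k + 1; })(0);
}

// toNumber(add(three)(four)) → 7, the family with seven green eggs
```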

I introduced only a very few alligator families. If you want to play with it, visit and design by yourself. You can also download from The source code is messy because I haven't written JavaScript recently, but I'll clean it up soon.

by Takashi ( at September 23, 2011 02:51 AM

July 10, 2011

Takashi Yamamiya

A hidden story behind the EToys Castle

Demon Castle

If you have played with Etoys, you might have seen The Etoys Castle (or The Demon Castle) tutorial. But you would never know how the story ends, because the Etoys distribution only includes the first chapter, and the last slide shows "To Be Continued ...". However, there are actually hidden sequels, and the story has a happy ending.

When I first wrote the story in 2006, there were three chapters. The first chapter was about learning "handles", the second one was about the painter, and the third one was about scripting. But due to some technical issues, I gave up on publishing them. Today I happened to be cleaning up my hard drive and found the old files. It's a shame that I never published the rest of them. So I gathered the screenshots and made up a one-page HTML version.

by Takashi ( at July 10, 2011 02:24 AM

June 30, 2011

Takashi Yamamiya

My personal history of Web Authorizing Tools (2)

Tinlizzie Wiki

Tinlizzie Wiki is a wiki written in Tweak. It uses the OpenDocument Format (ODF) as its data format, and WebDAV as its server.

Although the data format in StackWiki was a Squeak-specific binary, in Tinlizzie Wiki an existing common format is used. Part of the reason why I chose ODF was that it was a research project to find a way to exchange Etoys content among different platforms. So it was necessary to find a platform-independent and transparent format. ODF, especially its presentation format, was quite close to my demands, which were: a) text based, b) able to embed graphics, c) able to use original elements, d) internal and external links supported.

An ODF file is just a zip archive which includes XML text and multimedia binary files, and it is easy to extract an image file from a project with another tool. Both embedded objects and external resources can be represented by common URL notation. And if necessary, new tags for Tweak-specific objects can be used. For example, a project which includes fully dynamic behavior written as Tweak objects can be viewed in an ordinary OpenOffice.org application, although its dynamic features would be disabled.

To export Tweak objects to ODF as naturally as possible, special care was needed when saving. Defining a new tag for each Tweak-specific object is not the best way, even though it is possible. It was preferable to map from Tweak to ODF properly. For example, if a Book object in Tweak is stored as a presentation within a frame in ODF, the project looks somewhat more normal even in other applications.

There is an issue of how much detail is needed to save an object. For example, if a text is saved during editing, should the position of the cursor be saved or not? There are two strategies in terms of implementation. One is to save everything except specified state (deep copy); the other is to save only specified state. Tinlizzie Wiki adopted the latter, although Squeak's and Tweak's native serialization mechanisms use the former.

Saving only specific state has two disadvantages: a) a user might expect to save everything, including minor details, because combining arbitrary objects in any peculiar way is possible in Tweak; b) each new widget needs its own exporter. But the "save everything by default" strategy has a compatibility problem, because even one change of a variable name makes trouble for old versions. This is especially problematic for sharing on the Internet, so I didn't choose that strategy.

WebDAV is used as the server. Neither StackWiki nor Tinlizzie Wiki needs server-side logic, but simple storage is required, and WebDAV is the best option for that. Even a version control system can be plugged into the server for free, with the Subversion module in Apache.

Javascript Workspace

Javascript Workspace is a simple web application. It uses bare JavaScript on the client, and a Ruby CGI on the server. It behaves like a Smalltalk workspace, and the contents are managed in the same manner as a wiki.

Let me explain the workspace again. A workspace is a text editor with two additional commands, "do it" and "print it". The do-it command evaluates the source code selected by the user, and the print-it command outputs the result at the cursor position. The function is similar to a REPL shell for a dynamic language, but the use case is slightly different. A typical way to use a workspace is as an explanation of a program: an author writes example source code inside the documentation, so that a user can try the actual function while reading the text. Namely, a REPL is a two-way dialog between a machine and a human, but a workspace is a three-way conversation among a machine, an author, and a user.
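A minimal do-it/print-it pair can be sketched in a few lines of JavaScript, using eval as a JavaScript workspace plausibly would (the function names here are mine, not the tool's):

```javascript
// "do it": evaluate the selected source code for its effect or value.
function doIt(selection) {
  return eval(selection); // a workspace runs whatever text you select
}

// "print it": evaluate and return a printable result, the way the
// workspace would insert it at the cursor position.
function printIt(selection) {
  return String(eval(selection));
}

// printIt("3 + 4") → "7"
```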

A workspace is an indispensable tool for Smalltalk, but that doesn't mean it is only for Smalltalk. It would be nice if there were a workspace for the JavaScript language; this was the initial motivation for Javascript Workspace. And then it was a natural consequence that a wiki was used to save the content, because JavaScript lives in the web browser intrinsically, and there is no way to save to the local disk.

During development, however, I realized that it can be more than just a workspace in terms of media. Javascript Workspace has only a simple user interface: a couple of buttons and one big text area. There are no hyperlinks, not even emphasized text. But a variety of things can emerge from such a minimal configuration through source code. A hyperlink can be made from the location property, rich text can be shown by modifying the DOM tree, and even a game can be made by setting up event handlers. Source code can do everything.

Just one textbox on a web page is a very radical idea, in the completely opposite direction from the current trend of rich internet applications. Web applications these days consist of numerous hidden functions, but Javascript Workspace cannot have any invisible information. Everything it does is shown to you, entirely, as source code on the screen. Javascript Workspace may look dangerous because it runs arbitrary JavaScript code, but in fact it is a quite safe system.

The user-interface idea of Javascript Workspace was adopted in OMeta/JS.


TileScript

TileScript uses Scriptaculous as its GUI library and WebDAV for server storage. JSON is used as its data format.

A TileScript document consists of one or more paragraphs, and a paragraph is either JavaScript code, a "tile script", or an HTML expression. A tile script is a set of tiles, where each tile represents a syntactic element of a programming language. A user can connect tiles by drag and drop to construct a program; this is an easy way to program while avoiding syntax errors. JavaScript is used to represent programs more complicated than tile scripts, and HTML is used for annotation. TileScript can be seen as a rich version of Javascript Workspace.
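A document along these lines could be serialized to JSON roughly as follows. This is a made-up illustration of the paragraph structure, not TileScript's actual on-disk format; the field names are assumptions.

```javascript
// Hypothetical JSON shape for a TileScript document: an ordered list of
// paragraphs, each either HTML, a tile script (a nested list), or JS code.
var doc = {
  paragraphs: [
    { type: "html",   content: "<h1>Counting</h1>" },
    { type: "tile",   content: ["repeat", 10, ["increment", "counter"]] },
    { type: "script", content: "counter = 0;" }
  ]
};

// JSON round-trips the whole document, e.g. for WebDAV storage.
var restored = JSON.parse(JSON.stringify(doc));
```

The point of a JSON format here is that the tile structure (a nested list) and the plain-text paragraphs serialize with the same mechanism.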

The initial motivation for TileScript was to remake eToys in the web environment. The research started by making tiles available in a web browser. I considered using the Lively Kernel (SVG), but it was unnecessary once the HTML table element was used for tiles. Scriptaculous was used to keep the source code simple.

After the tiles were ported, the next step was the eToys environment itself, including event handling, scheduling, bitmap animation, etc. But those issues seemed at odds with the nature of a web document.

Flow layout, in which the actual positions of document elements are determined dynamically by the reader's browser environment, is a significant feature of the web. An author doesn't specify concrete positions of elements, but rather cares about logical structure. Any part of the document that cannot be shown on the screen is accessible via a vertical scroll bar.

eToys, on the other hand, provides page layout, in which the size and position of elements are fixed and a particular screen size is presumed. Although this fits the metaphor of physical paper well and is best for a graphical environment like eToys, it requires clumsy operations like zooming and horizontal scrolling.

Because the ultimate goal of TileScript was not just to reinvent eToys but to investigate further possibilities, flow layout was adopted. Absolute coordinates can still be supported in the form of embedded objects, even within flow layout. TileScript provides variable watchers like eToys, but those widgets are also laid out along with the flow.

And then

Now I'm working on the next version of Javascript Workspace, targeted especially at Active Essays. Our group has found that JavaScript is quite a reasonable tool for demonstrating ideas from computer science. One reason is the language's simplicity; another is the ease of collaboration. We have a lot of new ideas about programming languages, and some parts should be simple enough to be understood even by a junior high school student. I believe my tool can be used to explain such ideas.

The problem is that none of the projects introduced here is intended for real use; they are demos or prototypes for further real development. So they are not as useful as they look, because they are too experimental, too fragile, or too slow. Now I'm thinking it would not be a bad idea to make somewhat stable versions of them. They might not have exotic features like tile scripts, but basic, simple functions are enough for everyone to play with. I really like my first idea for Javascript Workspace, which has only simple text. I admit it is extreme, so the next version might at least support emphasized text and inline images (basic HTML elements).

by Takashi ( at June 30, 2011 09:54 PM

June 18, 2011

Nikolay Suslov

Krestianstvo SDK2 users registration is online!

Glad to invite everybody to register to explore the OpenQwaq collaborative 3D forums using Krestianstvo SDK2, and the upcoming Krestianstvo 2 forums (CCSE, Learning math, Collaborative music, Art disks, etc.) in the near future.
The registration page is available online now for all: (RU: http://неучи.рф/регистрация ).
As soon as you get a login and password, you can enter the collaborative 3D forums from anywhere on the internet. But for that you'll need to download the new version, SDK v2.0.2 (here, or via an alternative link), or update the 2.0.1 one (Monticello repository here:
The main feature of the new 2.0.2 SDK is its new database storage logic, entirely based on XML and doing away with ODBC/MySQL. The OpenQwaq service provider (not the local one) uses the XML-based db just like the MySQL db, storing it in a Smalltalk class variable and serializing its tables to disk as plain XML files. That scenario is suitable for both local and internet servers, turning OpenQwaq into a really mobile platform. Another change targets Apache/PHP, meaning the coal/web/forum services running on :9991. As a first step you can see the working online registration on the Krestianstvo site; next, all admin pages will use just JavaScript and Ajax, hosted in the same Smalltalk image.

See you in the forums!

by Suslov Nikolay ( at June 18, 2011 10:39 PM

May 16, 2011

Nikolay Suslov

Krestianstvo SDK v2 based on OpenQwaq

OpenQwaq is the most awaited framework in the virtual-worlds development domain!
Four years have passed since the latest OpenCroquet release, and OpenCobalt has done a lot to bring virtual worlds closer to life, but OpenQwaq sets the final point!
Now anybody can set up their own virtual space or forum without worrying about the underlying network architecture: just create forums, create content for them, place it on servers through the web, and collaborate.
So everything looks just fine, but... to start your own server on a LAN or WAN you need to install Linux, Apache, PHP, MySQL and the OpenQwaq server itself, following the config rules and avoiding the pitfalls.
But hey, we are on the Squeak/EToys platform, Smalltalk at last... a self-contained environment, so why are all these third-party tools needed? (Yes, for security reasons, application services, streaming, account policies, etc.) But in a learning situation in a classroom, or in an art gallery during an installation, all of these features are not too critical.
I decided to explore how to have just a one-click OpenQwaq image, with server and client in it, that could run as an internet or local server or just as a client. An image that in a few clicks could be used by children setting up a classroom network, or by artists giving a performance or installation.
And Krestianstvo SDK 2.0 was born as an experimental platform!
Unlike OpenQwaq, Krestianstvo2 uses only one image for both the server and client versions (the server one based on Squeak 4.2).
The aim is to start the server without installing any other third-party applications (Apache, MySQL). Of course, some (a lot of) OpenQwaq features are not available in such a scenario, but the work is in progress...
In the current version anybody can easily set up a running server on a LAN or WAN (the server is already running the same image as provided for download).
The SDK is developed on top of OpenQwaq, so no part of the original OpenQwaq code in the image is modified from a functional point of view; the SDK image can run OpenQwaq Forums in pure mode. Nevertheless, for localization purposes the String method #translateMe was introduced and injected into OpenQwaq code for most string objects. The translation process is still in progress, too.
The SDK works with Russian language support by default!
Try out the running space!
The demo logins and passwords that work on

Login: member
Organization: krestianstvo

Login: guest
Organization: krestianstvo
You can add new members and new groups in the setup, visually.
On the main login screen, use the right mouse button to open the context menu and select "Add new user...".

Example connection scenario:

1. On host 'A', start Krestianstvo as the server (type instead of the in the 'Server:' field).
2. On host 'B', start Krestianstvo with the IP address of host 'A' as the server.
3. Then try to log in with the l: admin / p: admin (l: member / p: member) accounts, or add new members on host 'A' (on the main login screen, where the proxy configuration dialog is).

So, finally, it would be good to have: CoalServices working with a local in-memory storage database (instead of MySQL), with a Smalltalk web-services front end (instead of Apache/PHP).

The project page:
Download the image:

by Suslov Nikolay ( at May 16, 2011 10:22 PM

December 28, 2010

Nikolay Suslov

Multi-touch table based on Krestianstvo SDK

Here is a video from a recent event held in the "Museum of Science" (Russia, Vologda), where a multi-touch table based on Krestianstvo SDK was shown. The table is controlled directly by the Krestianstvo virtual space and its objects shared on the Croquet island. Several such tables could be organized into a p2p network and become a really interactive classroom, programmable just in Smalltalk. For recognition, reacTIVision fiducial markers and the TUIO protocol are used (based on Simon Holland's TUIO for Squeak work). For music synthesis, SuperCollider is connected through OSC, using an idea from SCIMP (a SuperCollider server client for Impromptu) realized in Smalltalk.

by Suslov Nikolay ( at December 28, 2010 08:53 PM

November 26, 2010

Nikolay Suslov

Microsoft Kinect sensor in Krestianstvo SDK

I want to share my early experiments with the novel controller from Microsoft, the Kinect, working fine in Krestianstvo SDK (based on the Smalltalk dialect Squeak/Croquet). Here you can find an interesting list of projects and ideas already evolving around the Kinect in different programming languages (C, C++, oF, Max/MSP, Processing, Java, etc.)
But why not in Smalltalk? And in a pure 3D virtual space (Croquet), controlling your avatar with your own body and operating on objects with your own hands...
So, using the OpenKinect driver, I prepared the plugin KinectSqueak.framework and the changeset for Krestianstvo (source code is available here), with Mac OS X support only for now (Windows will follow very soon). The latest downloadable SDK also includes Kinect support, so just download it and try the sensor if you have one.

Happy Kinecting!

by Suslov Nikolay ( at November 26, 2010 01:04 AM

October 25, 2010

Nikolay Suslov

World serialization support in Krestianstvo SDK | OWML

Happy to announce an October update of Krestianstvo SDK with its main feature:

World serialization support

Meaning that one can freely save current work done in the active space and then restore it at any time later. Spaces are saved in the OWML text file format (check them in the Resources/resources/MySpaces folder).
In the next post I will write an introduction to this new format. Briefly, it is mainly based on Sophie XUL and partially on C3X serialization logic.

Some other features: new objects (Text3D), Seaside and Squeak base image updates, a Cog VM update, and UTF-8 encoding is now the default in the Mac VM.

Happy serializing!

by Suslov Nikolay ( at October 25, 2010 09:52 PM

October 04, 2010

Takashi Yamamiya

Tamacola (5)

Tamacola is not just another Lisp language; it is designed as a meta-language for making new languages. I'll explain this feature today. Today's goal is to design a subset of a Lisp language. If you think a Lisp is too simple to hold your interest, sorry, be patient: simple things first.

Prepare your Tamacola environment

To set up the Tamacola environment, you need to download both the Tamacola distribution and the Tamarin VM. Those are available on You need to add the avmshell command to your PATH environment variable, and it is also useful to add bin/ in the tamacola tree to your PATH. To make sure Tamacola works, please type:
make run-example
It runs all of the examples in the Tamacola distribution and recompiles the compiler. If you don't see any errors, you are ready to go. Otherwise, please let me know about the problem.

Tamacola command

The tamacola command reads a Tamacola program and runs it immediately. If you want to make Flash content, the other command, tamacc (Tamacola Compiler), is more suitable. For now we are playing with the interactive shell of the tamacola command, so I'll give a brief explanation. The interactive shell starts with the minus (-) option. Let's try simple arithmetic. If you didn't set up the PATH environment variable, please specify the directory name, too.
$ tamacola -
> (+ 3 4)
You can also give the command Tamacola source files as well as compiled binary names. Typically, source code ends with .k and a binary ends with .abc. Tamacola is smart enough to pick the newer of the .k and .abc files.

Match against a string constant

Suppose you are in some working directory and you have already added the bin/ directory to your PATH. We are going to write a very simple language, greeting:

;; greeting.g - A simple PEG example

greeting = "morning" -> "Good Morning!"
         | "evening" -> "Good Evening!"

This stupid example answers "Good Morning!" if you say "morning", and "Good Evening!" if you say "evening". This PEG syntax is easy to understand. The left-hand side of = is a rule name. A rule name is translated into a function once it is built. -> marks an action rule: if the left-hand side matches, the right-hand side is returned. | is ordered choice. In this case, the parser tries the first case, "morning", and tries the second case, "evening", only if the first case fails.
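Ordered choice is easy to mimic in any language. Here is a JavaScript sketch of the greeting rule, just to make the semantics concrete; the function names are mine, not mkpeg's output, and the `{value, rest}` result shape is an assumption.

```javascript
// PEG-style literal match: on success, return the action's value plus
// the unconsumed remainder of the input; on failure, return null.
function matchLiteral(input, literal, action) {
  return input.startsWith(literal)
    ? { value: action, rest: input.slice(literal.length) }
    : null;
}

// Ordered choice: try alternatives left to right, first success wins.
function greeting(input) {
  return matchLiteral(input, "morning", "Good Morning!")
      || matchLiteral(input, "evening", "Good Evening!");
}
```

So `greeting("morning").value` is `"Good Morning!"`, and an input matching neither alternative yields `null`.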

Save this syntax as "greeting.g". To test this language, type these commands:

$ mkpeg greeting.g
$ tamacola greeting.k -
compiling:  greeting.k
> (parse-collection greeting "morning")
"Good Morning!"
> (parse-collection greeting "evening tokyo")
"Good Evening!"

The mkpeg command converts a grammar file (greeting.g) to Tamacola source (greeting.k), producing a rule "greeting" whose result can be read by the tamacola shell. Greeting.k is built on the fly and the command prompt appears.

Parse-collection's first argument is a parser name (in this case "greeting"), and the second is an input collection. As the name implies, it accepts any collection as the input stream.

The second case shows an interesting property of PEG syntax. Although the second rule matches the beginning of the input "evening tokyo", the remainder " tokyo" is left over. PEG doesn't care whether the input is completely consumed. If you really want to make sure that the entire input is matched, you need to tell the parser explicitly where the end of the input is.
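One conventional way to demand full consumption is an explicit end-of-input check after the rule. A JavaScript sketch of the idea, with my own naming rather than Tamacola's API:

```javascript
// A parse "succeeds completely" only when nothing remains unconsumed.
// `rule` is a function returning { value, rest } on success, else null.
function parseAll(rule, input) {
  var result = rule(input);
  return result && result.rest === "" ? result.value : null;
}

// A toy rule matching a literal prefix, for demonstration.
function literal(text, value) {
  return function (input) {
    return input.startsWith(text)
      ? { value: value, rest: input.slice(text.length) }
      : null;
  };
}
```

With this, `parseAll(literal("evening", "Good Evening!"), "evening tokyo")` fails even though the prefix matched, because " tokyo" remains.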

Number parser

The last example only matched predefined constants; here we make a parser for any integer.

;; number.g -- A number parser

digit   = [0123456789]
number  = digit+

We also convert the grammar specification into a Tamacola program, but in this case we give the -n option to set the namespace. A namespace is useful when you want to use a common name like "number" as a rule name. Because "number" is already used by the system, you cannot use it without a namespace.

The grammar itself is easy to understand if you have experience with regular expressions. Brackets ([]) match one of the characters inside, and a postfix plus (+) repeats the previous expression one or more times.

$ mkpeg -n number number.g 
$ tamacola number.k -
compiling:  number.k
> (parse-collection number/number "xyz")
> (parse-collection number/number "345")
(53 52 51)

Because we use the namespace "number", we need to specify the namespace before a slash (/) in the function name.

As you might notice, this parser correctly rejects a non-number like "xyz" and accepts "345". But the result is not so useful. The return value of plus is a special object called a "token-group", whereas we want the number represented by the string. So we add a conversion function to get the value.

number  = digit+:n      -> (string->number (->string n))
$ tamacola number.k -
compiling:  number.k
> (parse-collection number/number "345")

Now the parser conveniently returns a number. Perhaps you think this is somewhat cheating: since the string->number function is itself a kind of number parser, we should write a number parser without string->number! Yes, we could. But that leads to a more interesting topic about left and right recursion, so I leave it for later.
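In JavaScript terms, the same two steps look like this: the repetition yields raw character codes, and the action rule converts them to a number. A sketch only; the token-group is modeled here as a plain array of character codes.

```javascript
// digit+ over "345" yields the character codes [51, 52, 53].
function digits(input) {
  var codes = [];
  for (var i = 0; i < input.length && input[i] >= "0" && input[i] <= "9"; i++) {
    codes.push(input.charCodeAt(i));
  }
  return codes;
}

// The action's job, i.e. (string->number (->string n)):
// codes back to a string, then string to a number.
function codesToNumber(codes) {
  return Number(String.fromCharCode.apply(null, codes));
}
```

So `codesToNumber(digits("345"))` gives the number 345 rather than a bag of character codes.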

S-expression parser

Now we are going to write a parser for an almost-real S-expression. This parser can only handle numbers and lists, but it is useful enough to explain the essence of Tamacola.

;; sexp.g
;; Lexical Parser

spaces  = [ \t\r\n]*

digit   = [0123456789]
number  = digit+ :n spaces              -> (string->number (->string n))

char    = [+-*/abcdefghijklmnopqrstuvwxyz]
symbol  = char+ :s spaces               -> (intern (->string s))
sexp    = symbol
        | number
        | "(" sexp*:e ")"               -> (->list e)

In this grammar, the only new operator is the postfix star (*), which repeats zero or more times. The rest is straightforward. To test this grammar we use Tamacola's simple test framework. Writing test cases is better than using the interactive shell, because you don't have to type the same expressions many times.

;; sexp-test.k

(check (parse-collection sexp/spaces "    ")            => 'SPACES)
(check (parse-collection sexp/digit "0")                => 48)
(check (parse-collection sexp/number "345")             => 345)
(check (parse-collection sexp/char "a")                 => 97)
(check (parse-collection sexp/symbol "hello")           => 'hello)

(check (parse-collection sexp/sexp "345")               => 345)
(check (parse-collection sexp/sexp "hello")             => 'hello)
(check (parse-collection sexp/sexp "(hello world)")     => '(hello world))
(check (parse-collection sexp/sexp "(3 4)")             => '(3 4))
(check (parse-collection sexp/sexp "(print 4)")         => '(print 4))

The check function comes from SRFI-78. It complains only if the left-hand and right-hand values differ; otherwise it does nothing. I like this UNIX-style conciseness.

As a convention, a test program gets the suffix "-test" appended to the main program's name. I borrowed this custom from the Go language.

Make sure this program outputs nothing (all checks pass):

$ tamacola sexp.k sexp-test.k 

Lisp Compiler

The PEG parser can handle any list structure as well as strings, which allows you to write a compiler in PEG. In a string parser, the input is a string and the output is some object (a list in our case); in a compiler, the input is a Lisp program and the output is assembler code.

;; Compiler

arity   = .*:x                          -> (length (->list x))
insts   = inst* :xs                     -> (concatenate (->list xs)) 
inst    = is-number:x                   -> `((pushint ,x))
        | is-symbol:x                   -> `((getlex ((ns "") ,(symbol->string x))))
        | '( '+ inst:x inst:y )         -> `(,@x ,@y (add))
        | '( '- inst:x inst:y )         -> `(,@x ,@y (subtract))
        | '( '* inst:x inst:y )         -> `(,@x ,@y (multiply))
        | '( '/ inst:x inst:y )         -> `(,@x ,@y (divide))
        | '( inst:f &arity:n insts:a )  -> `(,@f (pushnull) ,@a (call ,n))

There are some new elements in this grammar. A quoted list '( ) matches a list structure, and a quoted symbol matches a symbol.

A prefixed ampersand (&) prevents the rule from consuming the stream even when it matches. For example, the &arity rule examines the rest of the list, but the contents are matched again later by the insts rule.

Is-number matches a number, and is-symbol matches a symbol. These rules cannot be described in the PEG grammar, so they are written as Lisp functions.

(define is-number
  (lambda (*stream* *parser*)
    (if (number? (peek *stream*))
        (begin (set-parser-result *parser* (next *stream*))
               #t)
        #f)))

(define is-symbol
  (lambda (*stream* *parser*)
    (if (symbol? (peek *stream*))
        (begin (set-parser-result *parser* (next *stream*))
               #t)
        #f)))

A rule is a function that receives the stream and the parser (an object that stores the result). The rule function returns #t if it matches and #f if it fails.
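That stream-and-parser protocol can be paraphrased directly in JavaScript. The stream and parser objects below are minimal stand-ins of my own, not Tamacola's actual classes.

```javascript
// Minimal stream: a list plus a cursor position.
function makeStream(items) { return { items: items, pos: 0 }; }
function peek(s) { return s.items[s.pos]; }
function next(s) { return s.items[s.pos++]; }

// A rule receives the stream and the parser (the result holder).
// It returns true and stores the matched item on success, false otherwise.
function isNumber(stream, parser) {
  if (typeof peek(stream) === "number") {
    parser.result = next(stream);
    return true;
  }
  return false;
}
```

On a stream over `[3, "+"]`, `isNumber` succeeds once, consuming the 3 and storing it as the result, then fails on the symbol.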

I think it is easier to read the test code than my explanation.

(check (parse-collection sexp/arity '(a b c))   => 3)

(check (parse-collection sexp/insts '(3 4))     => '((pushint 3)
                                                     (pushint 4)))

(check (parse-collection sexp/inst '(3))        => '((pushint 3)))

(check (parse-collection sexp/inst '((+ 3 4)))  => '((pushint 3)
                                                     (pushint 4)
                                                     (add)))

(check (parse-collection sexp/inst '((f 3 4)))  => '((getlex ((ns "") "f"))
                                                     (pushnull)
                                                     (pushint 3)
                                                     (pushint 4)
                                                     (call 2)))
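The compilation scheme above (push the operands, then emit the operator; for calls, look up the function, push a null receiver, push the arguments, then call with the arity) can be sketched as a recursive JavaScript function over nested arrays standing in for S-expressions. This is my own toy model of the scheme, not Tamacola's PEG-driven compiler.

```javascript
// Compile a toy expression tree to stack-machine instructions,
// mirroring the shape of the inst rules above.
function compileInst(e) {
  if (typeof e === "number") return [["pushint", e]];
  var ops = { "+": "add", "-": "subtract", "*": "multiply", "/": "divide" };
  var op = ops[e[0]];
  if (op) {
    // Binary operator: both operands first, then the operation.
    return compileInst(e[1]).concat(compileInst(e[2]), [[op]]);
  }
  // Function call: getlex f, pushnull receiver, arguments, call arity.
  var code = [["getlex", e[0]], ["pushnull"]];
  for (var i = 1; i < e.length; i++) code = code.concat(compileInst(e[i]));
  return code.concat([["call", e.length - 1]]);
}
```

For example, `compileInst(["+", 3, 4])` yields `[["pushint", 3], ["pushint", 4], ["add"]]`, matching the `(+ 3 4)` test case.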

Put it in an envelope

We still need a little more to construct real assembler code. The details are out of scope here, so I simply show the code.

program = inst:x  -> `(asm
                       (method (((signature
                                  ((return_type *) (param_type ()) (name "program")
                                   (flags 0) (options ()) (param_names ())))
                                 (code ((getlocal 0)
                      (script (((init (method 0)) (trait ())))))

And the test case.

(check (parse-collection sexp/program '((print 42)))
       => '(asm
             (((signature ((return_type *) (param_type ()) (name "program")
                           (flags 0) (options ()) (param_names ())))
               (code ((getlocal 0)
                      (getlex ((ns "") "print"))
                      (pushint 42)
                      (call 1)
            (script (((init (method 0)) (trait ()))))))

You can read the entire program in example/sexp.g in the Tamacola distribution. To try it, please enter:

make -C example test-sexp

Left recursion

We left open an interesting topic about left and right recursion. Let me show you our number parser again.

digit   = [0123456789]
number  = digit+:n               -> (string->number (->string n))

If we don't want to use the string->number function, we would write the parser as:

;; Use fold-left
digit1   = [0123456789]:d        -> (- d 48)
number1  = digit1:x digit1*:xs   -> (fold-left
                                      (lambda (n d) (+ (* n 10) d))
                                      x
                                      (->list xs))

The digit1 rule converts the ASCII value of the digit character, and the number1 rule constructs a decimal number. As you can see, you need the fold-left function to construct the number, because number notation is essentially left-recursive. For example, the number 34567 actually means:

(((3 * 10 + 4) * 10 + 5) * 10 + 6) * 10 + 7

However, a PEG parser does not handle left-recursive grammars in general, so I had to reconstruct the left-recursive structure with fold-left. This is not hard at all if you are familiar with functional programming: there, a list is considered a right-recursive data structure, and it is even natural to parse a list in a right-recursive way. However, I admit it looks awkward to some people.
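The fold-left trick is the same operation as JavaScript's reduce: consume the digit list left to right, multiplying the accumulator by 10 at each step. A small sketch under the same assumptions as digit1/number1 (digits already converted from ASCII):

```javascript
// Rebuild 34567 from its digits by a left fold, mirroring fold-left:
// (((3*10 + 4)*10 + 5)*10 + 6)*10 + 7
function digitsToNumber(digits) {
  return digits.reduce(function (n, d) { return n * 10 + d; }, 0);
}
```

So `digitsToNumber([3, 4, 5, 6, 7])` evaluates to 34567, exactly the nested-multiplication expansion shown above.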

Yoshiki Ohshima provides a very useful extension that supports direct left recursion. With his extension, the number parser is written as:

;; Use left-recursion

digit2   = [0123456789]:d        -> (- d 48)
number2  = number2:n digit2:d    -> (+ (* n 10) d)
         | digit2
number2s = number2

You need to load runtime/peg-memo-macro.k to use this extension.

$ tamacola ../runtime/peg-memo-macro.k number.k -
> (parse-collection number/number2s "345")

The real parser and compiler are bigger than the grammars presented here, but I have explained all of the essential ideas. I hope this helps you make your own language!

by Takashi ( at October 04, 2010 09:38 PM

September 24, 2010

Historical - Vanessa Freudenberg

ESUG 2010 in Barcelona

This year's conference logo was designed by my good friend Patty Gadegast.
I just returned from the European Smalltalk User Group conference in Barcelona, Spain. It was a really nice experience. There was too much going on to report everything here, so I will just pick some favorites.

Photo by Bert Freudenberg

The event was hosted by citilab Cornellà. It started off with a Camp Smalltalk over the weekend. I already met quite a few people there. I couldn't mingle as much as I hoped to because I had to get the first Etoys 4.1 release candidate out of the door:
Photo by Adriaan van Os
Close by was "Yokohama Wok", a Japanese/Spanish restaurant with the best all-you-can-eat buffet imaginable. You could have everything from freshly cut ham to sushi, grilled steak or seafood, bread, pasta, rice, fruits, cake, desserts.
Photo by Bert Freudenberg
I talked to Stef (president of ESUG) and gave him a Squeak Etoys button, which he ended up wearing the whole week:
Photo by Bert Freudenberg
We also played together in a 2-on-2 Magic game (which we promptly lost ...):

Photo by Bert Freudenberg
On Monday I gave my Squeak Community Update talk, outlining what has happened in the Squeak and Etoys communities lately. I got some nice comments afterwards, including a request to give an Etoys demo next time. I of course used Etoys to give the presentation, but did not really include an Etoys introduction for people who had not seen it before. But I got a slot in the "show us your projects" session on Tuesday, where I made up for that with a 10-minute demo.
Photo by Adriaan van Os
Gonzalo Zabala and his students from Argentina presented Physical Etoys:
Photo by Adriaan van Os

I also liked the Xtreams presentation by Martin Kobetic:
Photo by Adriaan van Os

I was session chair on Wednesday morning, so I could see Travis' update on Pango text rendering from the first row. Would love to have that in Squeak, but it only builds easily on Linux:
Photo by Adriaan van Os

But the most exciting thing on Wednesday was of course that Physical Etoys won the ESUG Innovation Technology Award:
Photo by Adriaan van Os
On Thursday, I participated in a panel discussion about open-source licenses, organized by Julian Fitzell and Jason Ayers of Cincom.
Photo by Adriaan van Os

In the projects session, Ricardo demoed some of his Etoys work done during Google Summer of Code:
Photo by Adriaan van Os

Besides showing his graphing tools, the comic-like bubbles were a favorite with the audience:
Photo by Adriaan van Os

Dale showed the beginnings of Bibliocello, a repository for Monticello packages that can actually analyze them. You get to search implementors and senders across all packages, take statistics etc.
Photo by Adriaan van Os

And at the end of the day, an exciting demo was given by HwaJong Oh, a Smalltalker and iPhone developer from Korea. He demonstrated Drag-and-Drop for Squeak Tools, e.g. dragging the object held in an instance variable directly to another inspector.
Photo by Adriaan van Os

He also used cool animated mind-maps for his introduction:
Photo by Adriaan van Os

The highlight on Friday was Lukas' Helvetia presentation. I particularly liked the integration of PetitParser with the Smalltalk tools.
Photo by Adriaan van Os

All in all it was a rather refreshing conference at a great location with interesting people. Looking forward to next year's ESUG :)

by Vanessa ( at September 24, 2010 01:47 PM

Takashi Yamamiya

Tamacola (1)

Table of Contents


I have published the source code of Tamacola, a Lisp compiler that runs on Adobe Flash / Tamarin VM (AVM2). I'm pretty sure the current version is useless if you are just looking for a Lisp implementation on Tamarin (las3r and scheme-abc are much better), but Tamacola includes abundant tips if you are interested in making a self-hosting compiler on the Tamarin VM. That's why I decided to publish it as-is.

I'm also working on presentation slides for the S3 conference to show it. I'm writing random thoughts about the compiler here so that I can later compile them into a coherent talk.

I've already written up the motivation in the paper (perhaps I will paste the URL in a month), so I won't repeat it. But in short, I wanted to make a tiny language that bootstraps and runs on Adobe Flash.

A tiny language and bootstrapping seem like contradictory ideas, as bootstrapping requires various language features, which tend to make a language large. On the other hand, this is practically a nice constraint, because it keeps the language from being too simple or too fat. Choosing a Scheme-like language as the target was natural for me, because I wanted to concentrate on basic implementation techniques instead of language design.

Well, as one reviewer of the paper said, this is not particularly surprising or dramatically different in comparison with previous systems in the area, but some of the stories from the compiler should interest you!

How I started the assembler

In the beginning I created the assembler. Honestly, I wanted to avoid the task, because writing an assembler seemed like not a very interesting job. But at the time I couldn't find a nice AVM2 assembler that suited my project, so I did it myself. In retrospect, this was not bad at all. I came to understand quite well what avm2overview.pdf (the AVM2 specification) says, and I gained self-confidence.

I wrote my assembler in PLT Scheme because Ian Piumarta's COLA (Tamacola was supposed to be written in COLA and in Tamacola itself; I'll tell you about this later) was not finished at that time, and Duncan Mak, a friend of mine, recommended it. This was actually a good choice. It was my first Scheme application, and PLT's good documentation helped me a lot.

An interesting aspect of PLT Scheme is that it encourages a functional programming style; PLT doesn't even support set-car! and set-cdr! in the default library. So it was natural that my assembler was written without side effects, except for I/O. This was the first key to the development of the assembler. Unfortunately, because Tamarin doesn't support tail-call optimization and Tamarin's stack size is small, I later gave up on eliminating all side effects. But the implementation was purely functional up to that point, and it was quite clean.

Indeed, it had to be clean, considering bootstrapping. I wanted to make the assembler run in my language itself even before enough debugging facilities were ready. If it were not clean, a tiny bug would cause a few days of debugging. I avoided that nightmare with a functional style and test-driven development.

Test-driven development was the second key. I wrote a test case for virtually every function, even when it looked silly. Scheme has a couple of options for testing frameworks; I chose SRFI-78. It reports an assertion failure only when something goes wrong; otherwise it keeps silent. I somewhat like this UNIX-style terseness.

The third key was to write an assembler and a disassembler at the same time. It sounds like unnecessary work, because eventually I only needed an assembler. But I had to analyze the output of asc (an assembler in Adobe Flex) and learn how an ActionScript program is converted to Tamarin byte-code. The disassembler was very helpful for reading byte-code as well as for debugging. If the assembler regenerates the original byte-code from the disassembler's output, there is a high chance that my implementation is correct, unless my understanding is wrong.

The assembler is named ABCSX, and it was later ported to Gauche, COLA, and Tamacola. I ported it to Gauche because I was curious about the portability of the Scheme language.

I realized there were many places where I could reduce code redundancy in the assembler. An assembler tends to involve repetitive processing, and some of it is not captured well by function abstraction. It would be effective to apply macros and a domain-specific language in those parts. I haven't tried to solve this yet, but I want to later.

(to be continued)

by Takashi ( at September 24, 2010 07:51 AM

September 16, 2010

Takashi Yamamiya

Tamacola (4)

Tamacola in Tamacola

After I made the Tamacola compiler written in COLA, the next thing to do was to implement it in Tamacola itself. A language is called self-hosting if it is written in itself. This brings various advantages.

First, once self-hosting is done, you don't need COLA anymore; you can improve or modify any aspect of the language on the Tamarin VM. If I design the environment carefully, it should be possible to do language design entirely in the web browser (it needs server-side help for security reasons, so this hasn't been done yet).

Second, self-hosting is a benchmark that tells you the language is good enough. Scheme is an especially simple language, so a lot of people implement toy Schemes. But because Tamacola is now self-hosting, I can proudly claim that it is not a toy! Well, this is rather self-satisfaction, though.

Third, it provides a rich library, including an "eval" function. A compiler uses various programming techniques, and those must be useful for other programs, too.

To make it self-hosting, there were two key problems: macros and eval.

Bootstrapping Macros

I used macros heavily in my compiler; for example, the parser written in PEG was converted into a bunch of macro expressions. The problem is that expanding macros requires an eval function, but I couldn't build eval before the parser was done. It's a deadlock! Here is a typical macro written in COLA:

(define-form begin e (cons 'let (cons '() e)))
This is how the macro works. When the compiler finds an expression like:
(begin
  (print "Hello")
  (print "World"))
the expressions inside begin are bound to e, the body (cons 'let (cons '() e)) is executed at compile time, and the expression is expanded to:
(let ()
  (print "Hello")
  (print "World"))

Such expansion is impossible without an eval function, because the compiler needs to evaluate a list, (cons 'let (cons '() e)), given by the user. What could I do when I didn't have eval yet? I realized that in many cases macros only use basic list functions like car, cdr, and cons, and that a more complicated macro could be hard-coded as a special form in the compiler. So I invented pattern-based macros.

(define-pattern ((begin . e) (let () . e)))

Basically this is a subset of Scheme's syntax-rules. If the compiler finds an expression starting with begin, the rest of the expression is bound to e and substituted into the right-hand side. Such expansion requires only a limited set of list functions, so the compiler doesn't have to provide a full eval function. This macro syntax made my compiler readable, and I was able to continue happily.
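A minimal model of this kind of pattern-based expansion, using nested arrays as s-expressions, might look like the following. This is my own sketch of the mechanism described above, not Tamacola's actual implementation:

```javascript
// Sketch of pattern-based macro expansion over s-expressions represented
// as nested arrays. Each rule keys on a literal head symbol, and the tail
// variable captures the rest of the expression, as in (begin . e).

const rules = {
  // (begin . e) -> (let () . e)
  begin: (rest) => ["let", []].concat(rest),
};

function expand(expr) {
  if (!Array.isArray(expr)) return expr; // atoms pass through unchanged
  const [head, ...rest] = expr;
  const rule = rules[head];
  const expanded = rule ? rule(rest) : expr;
  // Recursively expand sub-expressions as well.
  return expanded.map((sub) => expand(sub));
}

const input = ["begin", ["print", "Hello"], ["print", "World"]];
console.log(JSON.stringify(expand(input)));
// ["let",[],["print","Hello"],["print","World"]]
```

The point is that the expander only needs array surgery (the moral equivalent of car, cdr, and cons), never a full evaluator for user code.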

Even after I implemented more dynamic, traditional macros with the eval function, I mainly keep using these pattern-based macros.


To implement the eval function, you need to understand the dynamic code loading facility provided by the VM. Note that this is not part of the AVM2 specification, and Avmshell (a console Tamarin shell program) and Adobe Flash have different APIs.

Avmshell has a straightforward API: you give it compiled byte code, and the function returns the value. Because Tamacola is now written in Tamacola, you can invoke the compiler as a library function and get the byte code you want to execute.


You can get the domain object from the Domain.currentDomain() static method. These useful Avmshell functions are found in the shell/ directory of the Tamarin-central repository.

Flash Player has a somewhat tricky API for dynamic code loading. The signature looks normal:

flash.display.Loader.loadBytes(bytes:ByteArray, context:LoaderContext = null):void

There are two problems for our purpose. First, this method is not designed primarily for dynamic code loading: it only accepts SWF, JPG, PNG, or GIF files, and byte code happens to be accepted only inside a SWF file. So I had to construct a SWF file just to load code. In case you don't know, SWF is a kind of container format; you can embed vector graphics, MP3 sounds, and ActionScript byte code in it. Making a SWF file is not particularly difficult, but it requires nasty bit fiddling.

Second, and far more problematic, this method works asynchronously. In other words, it doesn't return the result value; instead, you give it a callback function to be invoked when the loaded code finishes. Moreover, since the method doesn't return a value at all, if you want the return value you need to set up some explicit return mechanism yourself.
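One way such an explicit return mechanism could look, sketched generically in JavaScript: the names loadAndEval and __tamacolaReturn are invented for illustration, and setTimeout stands in for the asynchronous Loader.loadBytes() round trip through a generated SWF.

```javascript
// Sketch of an explicit return mechanism for an asynchronous code loader.
// Each load request gets an id; the loaded code cannot "return" normally,
// so it reports its result through a well-known global entry point.

const pending = {};
let nextId = 0;

// The generated code calls this with its request id and result value.
function __tamacolaReturn(id, value) {
  const callback = pending[id];
  delete pending[id];
  callback(value);
}

// Kick off a load; the callback fires once the loaded code reports back.
function loadAndEval(code, callback) {
  const id = nextId++;
  pending[id] = callback;
  // Stand-in for the asynchronous Loader.loadBytes() round trip: the
  // wrapper SWF would end by calling __tamacolaReturn(id, result).
  setTimeout(() => __tamacolaReturn(id, eval(code)), 0);
}

loadAndEval("6 * 7", (result) => console.log(result)); // eventually prints 42
```

Because the result arrives in a callback rather than as a return value, a compiler that needs eval's answer mid-compilation (as traditional macros do) cannot simply block on it, which is exactly the deadlock described below.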

In practice, this causes a problem if you want to define a traditional macro and use it in the same source file, because a traditional macro needs to evaluate a Lisp expression at compile time, but the eval function doesn't return before the compilation thread is done. I could solve the problem by setting up a compilation queue or something, but it would cost a performance penalty I don't want. For now, I simply gave up.

I have explained pretty much all the interesting aspects of the self-hosting compiler. Later I'll talk about how to make a new language in the Tamacola environment.

by Takashi ( at September 16, 2010 05:47 AM

September 15, 2010

Takashi Yamamiya

Tamacola (3)

How a lisp program is compiled to Tamarin VM

Now I'm going to talk a bit about how a Lisp (almost Scheme) program is compiled into Tamarin's byte code. This topic is especially interesting if you are curious about making your own language or VM.

Tamarin VM is made for ActionScript, so its byte code is also designed specifically for ActionScript. In other words, it is slightly tricky to implement a language other than ActionScript on it. In case you don't know ActionScript, its execution model is almost identical to JavaScript's; the differences are about optimization, with explicit type annotations and static fields.

ActionScript and Scheme have these aspects in common:

But there are significant differences.

Those limitations make Tamarin VM sound inferior. But no; actually those limitations come from Tamarin VM's advantages and optimizations. If you ever have a chance to design your own VM, please learn from this lesson: there ain't no such thing as a free optimization. Any optimization kills some generality. I'll explain each case.

ActionScript doesn't have a plain function call, and neither does Tamarin VM. This is rather harmless, though. When you see a function-like expression such as trace("hello"), it actually means (the global object).trace("hello"), and eventually the receiver is passed to the function as the first argument. In other words, if you want to construct a function call with two arguments, you need to supply three arguments, where the first is the "this" object. A slightly tricky part is primitive operators like + or -, which don't have a "this" object; those primitives are special cases.
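In JavaScript terms (whose execution model ActionScript shares, as noted above), the hidden receiver argument can be made explicit with Function.prototype.call. This is a sketch of the idea, not the compiler's actual output:

```javascript
// Making the implicit receiver explicit, as a compiler targeting this
// calling convention must: a "two-argument" call really carries three
// values, with the "this" object in front.

const globalObject = { greeting: "hello" }; // stand-in for the global object

function trace(a, b) {
  // "this" is the receiver slot passed as the hidden first argument.
  return this.greeting + " " + a + " " + b;
}

// What looks like trace("x", "y") in source is effectively compiled as
// (globalObject).trace("x", "y"): receiver first, then the two arguments.
const result = trace.call(globalObject, "x", "y");
console.log(result); // "hello x y"
```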

ActionScript also has lexical scope, but only a function has a scope, so I have to be careful when compiling a Scheme let expression. The simplest way to implement a let expression is with a function; in theory a let expression can always be translated to a lambda, but this is a huge performance disadvantage. So I use ActionScript's "with" statement instead. "With" is an unpopular piece of syntax in ActionScript, but it lets you use any object as a scope object. I borrowed this idea from the Happy-ABC project.
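The lambda translation mentioned above looks like this in JavaScript; a sketch of the always-correct but costly strategy, not Tamacola's emitted code:

```javascript
// Sketch: translating the Scheme expression (let ((x 1) (y 2)) (+ x y))
// into an immediately-applied lambda. This is always semantically correct,
// but it allocates and invokes a closure for every let, which is the
// performance disadvantage described above.
const letAsLambda = ((x, y) => x + y)(1, 2);
console.log(letAsLambda); // 3

// The cheaper alternative is ActionScript's "with" statement, which
// pushes an ordinary object holding the bindings onto the scope chain
// instead of creating a closure per let. ("with" itself is not shown
// here, since it is disallowed in strict-mode JavaScript.)
```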

The lack of tail-call optimization in Tamarin VM was the most disappointing thing to me. It prevents a functional programming style, and I simply gave up on it. Tail-call optimization is not a difficult topic at all: if the target were native code like x86, it would be a matter of swapping the stack and jumping. But Tamarin VM doesn't allow direct access to the stack or a jump into another function. I understand that this might cause security issues, but it would be wonderful if the VM provided a special byte code for tail calls.
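For comparison, a classic software workaround on a VM without tail calls is a trampoline, where each tail call returns a thunk and a driver loop unwinds them. Tamacola does not do this (the post simply gives up on tail calls), but a sketch shows the idea:

```javascript
// Trampoline sketch: a tail call returns a zero-argument thunk instead of
// calling directly; the driver loop below unwinds the thunks, so stack
// depth stays constant no matter how many "calls" happen.

function trampoline(thunkOrValue) {
  while (typeof thunkOrValue === "function") {
    thunkOrValue = thunkOrValue();
  }
  return thunkOrValue;
}

// A tail-recursive sum written in trampolined style.
function sum(n, acc) {
  if (n === 0) return acc;
  return () => sum(n - 1, acc + n); // the "tail call", packaged as a thunk
}

// 100000 levels of direct recursion would overflow a typical stack;
// the trampoline handles it in constant stack space.
console.log(trampoline(sum(100000, 0))); // 5000050000
```

The cost is an extra closure allocation per tail call, which is partly why VM-level support would be so much nicer than this kind of encoding.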

Finally, you can't access the call stack directly, and therefore you can't implement call/cc. The reason I can't call Tamacola a Scheme is this lack of tail-call optimization and call/cc. It rules out many experimental language features like generators and processes. But considering the rich libraries provided by the Flash API, I would say Tamacola will eventually be a reasonably useful language.

I'll tell you about the convoluted self-hosting process and macros tomorrow.

by Takashi ( at September 15, 2010 06:04 AM

September 14, 2010

Takashi Yamamiya

Tamacola (2)


Once the assembler was done, I was able to test various features of the Tamarin VM; I even wrote a tiny GUI application for Adobe Flash in the assembler. The next step was the compiler.

Another goal of the project was to port Ian Piumarta's COLA framework to Tamarin (the project name comes from this). And perhaps this is the only real technical contribution of the project. COLA is a meta-language (a programming language for designing other languages) that resembles Scheme. COLA has a nice sub-language called Parsing Expression Grammar (PEG) that makes parsers very terse. My plan was to write a bootstrapping compiler in PEG and COLA, then implement the COLA library, and finally write the real compiler in PEG and Tamacola itself.

I won't give you the details of PEG, but briefly, it is as simple as a regular expression and as powerful as a context-free grammar.

When I started writing the compiler, COLA had no library at all except the PEG framework, so I needed to write the necessary libraries myself from scratch. Fortunately, COLA has a quite powerful external function call feature (a kind of FFI), a macro system, and a flexible object-oriented framework, so writing libraries was not so hard. But I tried to avoid COLA-specific features as much as possible, because they would become a problem when I later rewrote the compiler in Tamacola itself.

To implement the library, I borrowed function specifications from R6RS as much as possible, to avoid unnecessary confusion. There were exceptions: because COLA treats the slash "/" character as special for namespaces, I took PLT's function names in those cases.

Writing Lisp libraries was an interesting puzzle to me because there were some requirements and constraints for the domain. The requirements were:

These requirements were carefully chosen. Because COLA has only modest debugging facilities, a unit test framework had to come first. So my first goal was to implement all the functions needed for unit testing. I needed a pretty printer for debugging, too.

Other "must have" libraries were bit operators and file / in-memory streams, which the assembler needs. Interestingly enough, R6RS doesn't define enough functions to support these. For example, there is no portable way to specify whether a stream is binary or text. So I needed a bit of creativity.

Eventually, I wrote all the libraries and the compiler, and I got a pretty good sense of the minimum set of features a compiler needs: a testing framework, a pretty printer, bit operators, and streams. In other words, if your language has those features, it can be self-hosting.

The real puzzle was the order. Those requirements had to be precisely ordered by need. For example, the pretty printer must follow the stream and string functions, because the pretty printer uses them. Although in Lisp you can write functions in any order you like, a precise order makes testing and debugging easy. I kept this discipline. I even implemented the test library twice: the first version was a concise assert function, and the second produced friendlier failure messages using the pretty printer.

It took a few weeks to build a simple compiler, but there was still a long way to go before self-hosting could be achieved. One thing I learned at this stage: even without a debugger, debugging is not so hard if you have enough test cases and a good pretty printer.

by Takashi ( at September 14, 2010 04:43 PM

September 13, 2010

Nikolay Suslov

Krestianstvo update: New UI and Cog VM

Happy to announce that Krestianstvo SDK now runs on the new Cog VM (Windows, Mac OS X, and Ubuntu Linux 10.x are tested).

Also, a new graphical user interface was introduced, including Russian and English versions.
Island-to-island connections over a local network are now possible and work right from the main window. OSC features now work properly.
Below is the introduction video to the updated SDK:


by Suslov Nikolay ( at September 13, 2010 01:54 AM

August 20, 2010

David Smith

New Teleplace CEO

Tony Nemelka has just been named CEO of Teleplace, and I couldn't be more pleased. I have had a number of discussions with him, and with some of the exceptional people he is bringing with him, since he started working with the company, and I am really impressed with his vision for the direction of the business and his clear focus on customer value. I am very proud that the company has been able to attract someone of Tony's caliber as CEO. No question the company was ready for this next step. Greg Nuyens did an exceptional job positioning the company technically; Teleplace is the best collaboration platform in the world today, by quite a wide margin. Now it is time to leverage that technical advantage into a market advantage as well.

by David A. Smith ( at August 20, 2010 01:02 AM

August 09, 2010

Nikolay Suslov

Man'J - running Arduino, Krestianstvo and SuperCollider

Creating music through movement by interacting with people and architectural objects in real time.

The video shows the Man'J chair prototype with a built-in ultrasonic sensor, controlled by an Arduino board and the open-source software Krestianstvo and SuperCollider.

Source code:

by Suslov Nikolay ( at August 09, 2010 04:45 AM

July 31, 2010

Nikolay Suslov

Krestianstvo SDK 1.1 is available

Krestianstvo SDK 1.1 is available for download, and it is now based on the new Squeak 4.1 image!

SDK includes:

1. A working version of the Collaborative Curved Space Explorer
2. Examples of Seaside 3.0 and Open Croquet integration in a web browser, with Comet support
3. Examples of collaborative coding through a web browser across several images running Open Croquet
4. Open Sophie CSS/XUL logic for describing Tweak user interfaces
5. Math examples on matrices and vectors in Krestianstvo's learning 3D space
6. OSC remote control support
7. 3D vision (red/blue) support
8. A tool for building multi-pixel projection walls or a CAVE

and more..

Happy exploring!

by Suslov Nikolay ( at July 31, 2010 03:45 PM

July 28, 2010

Nikolay Suslov

Open Croquet running on Squeak 4.1

Yes, now it is possible...!
This is a very experimental, but working, announcement.
Open Croquet code from the 1.0.18 SDK now works in the new Squeak 4.1 image and is even ready to run on the Cog VM.

Download to try it out:
image with all needed content (full):
just image

Images are based on updated Squeak4.2-10160-alpha.

by Suslov Nikolay ( at July 28, 2010 05:34 PM

June 21, 2010

Historical - Vanessa Freudenberg

Squeak Etoys on iPad

In preparation for making Etoys work on the recently announced OLPC tablet, I ported it to the iPad. Here is a video; read on below for some details:

This might look exciting, and it certainly is, but it actually feels a lot clunkier than it looks. You may have noticed the little target circles I added whenever a finger touches the screen. That's to know exactly where the finger hit. It's really hard to hit anything; it took me a while of training to hit the colorful halo handle buttons on the first try. We really need to redesign the user interface before giving this into the hands of kids ...

But for now, some technical details: John McIntosh ported the Squeak Virtual Machine to Apple's touch-based OS last year (source available at I modified it slightly to enable multi-touch and keyboard input. Also, I rewrote his code to deal with touch events in Squeak, and added multi-touch handling to Morphic. Fortunately, Morphic was designed to handle multiple "hands" (pointing devices) from the beginning, so adding this was much easier than in a system that assumes there is only one mouse. That's why moving multiple objects at the same time, and painting with more than one finger, just works once the events are in the system.

So far this is just an early test. We should work on improving the Etoys touch UI for next year's release. The Sugar menu bar works fine, but everything else is way too small. At least we have the luxury of being able to test Etoys already; getting the rest of Sugar running on a touch device might take a while. Hopefully OLPC will have developer machines soonish. If this test has shown one thing, it is that there is lots of work to do (and it may even be necessary to start over).

by Vanessa ( at June 21, 2010 09:16 PM

May 13, 2010

Nikolay Suslov

Video capture support in Krestianstvo SDK (from Scratch / Etoys)

There is recent great news from the Etoys dev-list: "Derek's new CameraPlugin morph should work on Windows, Mac and Linux in Etoys image" :)
And now it is available and working in Krestianstvo SDK as a video window in 3D space (several camera instances are also supported). Live video is replicated to all participants of an island in realtime.

So to try it, just update your recent image or download the latest SDK with CameraPlugin support in it.
CameraPlugin binaries for Mac OS X/Linux/Windows are available here, or can be found in the Scratch distribution.
When running the Krestianstvo world, check the new item "Видео камера" (Video camera) in the "Инструменты" (Tools) menu.
Now everything is ready to:

  1. develop 3D stereo 'real world' vision support, using two video cameras for the left and right eyes, then
  2. combine it with 'virtual world' stereo vision (which already exists as a 3D stereo filter) using some video tracking technique, and
  3. do all this in real time

Happy VJing!


by Suslov Nikolay ( at May 13, 2010 11:30 AM

May 01, 2010

David Smith

Chief Innovation Officer

About three months ago, I joined Lockheed Martin STS as their Chief Innovation Officer. I have been wanting to pursue a project that was quite different from the direction that Teleplace has been going, and after about a year of discussion, it was clear that the direction I wanted to go and what Lockheed Martin was looking for fit extremely well.

It is too early to talk about what I will be doing, but if it works the way I think it should, it could have some very big results and may have an impact on everyone someday. I can tell you it is a very different world inside of a big company - there is still a requirement to be entrepreneurial and sell the ideas, but once a big company like this gets behind it, the resources available are simply amazing.

More to come for sure...

by David A. Smith ( at May 01, 2010 02:26 PM

April 29, 2010

Nikolay Suslov

Методическое пособие по Крестьянство SDK | Krestianstvo SDK Documentation


The long-awaited first version of the Krestianstvo SDK documentation book (in Russian), consisting of descriptions, recommendations, and documentation, has gone online.
It is available for download as a PDF here.

by Suslov Nikolay ( at April 29, 2010 08:43 PM

April 14, 2010

Nikolay Suslov

Stereo 3D filter in Open Cobalt

Experimental Stereo 3D anaglyph filter for Open Cobalt.
To try, just load Cobalt-Stereo package into the recent Cobalt image from the contributions repository.
To view filter menu item in the View bar, load or take the changes from the CobaltUI-sn.200.mcz

by Suslov Nikolay ( at April 14, 2010 10:44 AM

April 08, 2010

Nikolay Suslov

Крестьянство | Krestianstvo SDK support Stereo 3D

As of April 2010 Krestianstvo SDK has the following new features:

  1. Stereo 3D anaglyph filter portal (derived from the Croquet Jasmine stereo code)
  2. Work with virtual lights (creating, editing, composing)
  3. CAVE ready
  4. Examples of learning mathematics collaboratively in 3D:
Vector product in 3D | Векторное произведение
Matrix calculator | Матричный калькулятор
Pyramid example | Задача о пирамиде

Stereo 3D filter window


by Suslov Nikolay ( at April 08, 2010 09:38 PM

February 05, 2010

David Smith

Palm Pixi Rox

I just bought a new phone, and it wasn't an iPhone, but it is still very cool. I have been planning to buy an iPhone as my next communication device - all my friends have one. I am very glad that they no longer feel the urge to demonstrate how cool they are by showing me some stupid picture or new app on their iPhone - except for Frank, of course.

So why didn't I buy an iPhone? Well, it is a bit complicated. My family has five phones with Sprint. I actually had a Rumor, which is a pretty good texting phone, but not much use for anything else. I was reasonably happy with it, though the Bluetooth quality was pretty poor. It was not too useful in California, unless you are married to the governator. Sprint is pretty good for quality of reception, but I always thought their selection of phones sucked. A few years ago, I decided I needed a Blackberry and tried to upgrade at Sprint. Believe it or not, they would not sell me one; they only sold them to businesses. Morons. I did buy a Blackberry from someone else and carried two phones. I dropped the Blackberry after a while, because I realized I could almost always check my email with my MacBook, even when traveling.

So why the new phone? Somehow, I must have either dropped or stepped on my Rumor. I had made two or three phone calls earlier in the day from my office. I put it in my pocket and went downstairs to talk to my wife and while I was talking to her I received another call. I took the phone out and pressed the answer button and then saw that it had a big crack in the center of the screen. Worse, the voice quality on the other end was totally garbled - like some sort of electronica convolution filter. It was basically toast. I needed my phone to work this week, as there are many things going on in my life (that I will be writing about soon) so whether I liked it or not, I had to go visit the Sprint store for a replacement.

When I got there I had a few nice surprises. First, the selection of phones was not terrible; in fact, it was getting interesting. No iPhone of course (this wasn't AT&T), but they had an Android phone, they finally were offering Blackberries to real people, and they had the new Palm WebOS-based phones. I was interested in the Google phone, but a friend of mine had just bought one and was trying to take some pictures in bright sunlight; the screen was unusable. Also it was kind of big and bulky, sort of like what the old Soviet Union might build if they were trying to make an iPhone knock-off.

What really caught my eye was the Palm Pixi. It is the smaller and slightly cheaper brother to the Palm Pre (which had the very strange commercials last year). I like small phones, as I already carry too much hardware in my pockets. The Pixi was actually smaller than my old Rumor (which really wasn't that small) and it was a real smart phone. Better yet, Sprint was offering a hell of a data plan where my family's monthly costs, which did not include data, would actually drop with the new plan which did include it. The phone was $200 with a $100 rebate - including a new two year contract, but now, even that was pro-rated. Sprint is definitely getting aggressive.

So now the review of the Pixi. In a nutshell, it is terrific.

Here is a list of pros followed by cons:

- Setup was trivial. The guy at the Sprint store did most of the work and I had a working phone with all my contact info when I walked out.
- The phone perfectly integrates with Gmail, which is my main email connection these days. It pulled all of my contact info into the phone and I was immediately reading the most recent emails.
- It supports Google Calendar, which my wife and I share. And it issued calendar alerts. Very nice.
- It is quite small, but the built in thumb-keyboard is quite usable. It took a little getting used to, but it works fine and has a nicer feel than my Blackberry or my earlier RIM device. I do love the size - it is quite thin and feels very nice in my hands.
- Sprint's wireless network performance is great. I was at a Starbucks with a friend and wanted to show him a video I had posted to YouTube. He tried to access it with his iPhone, but I had it running on the Pixi well before he even had a connection. He never did get it to work, but he was using the AT&T network and not the local Wi-Fi.
- Web browsing works pretty well, given the size of the screen. Certain sites, like Boing Boing, are excellent because a mobile version loads automatically. Amazon was a bit crappy, surprisingly.

- The text is too small, especially for us old guys.
- The sound volume is not quite loud enough, even with it maxed out.
- When I called my mother, she said the sound quality on her end was echo-y, like I was far away from the phone.
- I tried to read a PDF document using the included Adobe reader app, but it didn't word wrap, so this was basically a lost cause. Seriously: if I zoom into a PDF document, you really need to provide an option to wrap the text so I don't have to scroll left and right.
- Camera quality is poor. Good enough for a random picture now and then, but it won't replace my Casio Exilim.
- Takes a long time to charge (maybe 4-5 hours?)
- Charge lasts about 3 days with use. Since it was a new phone, I probably did more with it than I normally will in the future, so this probably caused the battery to drain quickly.
- Some apps re-orient based on the orientation of the phone. Others don't.
- It really needs a search app that has instant access.

Overall, this is a great non-Android Google phone. It is really nice how cleanly and easily it interfaces with my Google life. And I love the size. It is really quite elegant. It has only been three days now, so let's see how it holds up over the next few months.

As an aside, I had a rental car last week, a Ford Flex, which included the Microsoft Sync software. I might comment more on this later, but the short answer: it was terrible. I actually liked the car (more of an SUV, actually), but this software was really opaque. After linking it to my now sadly deceased phone via Bluetooth, I could not figure out how to get to the phone interface to make a call. I nearly had an accident trying to figure this thing out. I never did figure out how to use the built-in GPS for directions. Luckily I had my Garmin Nuvi with me. Sync is Bad Bad Bad. If anyone from Ford is reading this: I did like the Flex, but I will never buy anything with Sync installed.

by David A. Smith ( at February 05, 2010 05:36 AM

February 03, 2010

Nikolay Suslov

Krestianstvo and CCSE in action [video]

For those who have not yet tried the new Collaborative Curved Space Explorer in Krestianstvo SDK, here is a video recorded at Vologda State Technical University (Russia) during the opening of the "Virtual museum of geometry":


by Suslov Nikolay ( at February 03, 2010 07:55 PM

January 29, 2010

Takashi Yamamiya

Recent Tamarin and ABC tools

Tamarin-central, the stable source tree of the open-source VM used by Adobe Flash, was updated last December (Dec 22, 2009) after a relatively long gap. The newer tree has a faster VM and includes an updated ABC assembler and disassembler. Those ABC utilities are especially useful to binary hackers of AVM2.

Download latest Flex SDK

I found that neither Flex SDK 3.5 nor the 4.0 stable build can compile abcdump. You need to download a later version from the Download Flex site. Flex 4 Beta 2 ( works well. I would set the Flex directory in the environment variable FLEX.
$ export FLEX=~/Downloads/flex_sdk_4.0.0.10485_mpl

Download and build Tamarin-central

The build procedure is well documented in the Tamarin_Build_Documentation. My only additional suggestion is to add --enable-debugger; it makes error messages easier to read. It helps you, really.
$ hg clone
$ cd tamarin-central
$ mkdir objdir-release
$ cd objdir-release
$ python ../ --enable-shell --enable-debugger
$ make
$ ./shell/avmshell -Dversion
shell 1.5 release-debugger build cyclone
features AVMSYSTEM_32BIT; ...

Build abcdump

There are various useful utilities in the utils/ directory. Some utilities are written in ActionScript, so you need to compile them before use. Abcdump, an ABC disassembler, is one such utility.
$ cd ..
$ java -jar $FLEX/lib/asc.jar -import core/ -import shell/ utils/
core/ and shell/ are basic libraries provided by Tamarin; you can use them to try abcdump and see how it works. Note that you need to separate abc file names with --; otherwise the arguments are processed by avmshell instead of abcdump.
$ ./objdir-release/shell/avmshell ./utils/ -- core/ 
// magic 2e0010
// Cpool numbers size 158 0 %
I recommend making a tiny shell script to simplify such a complicated command line.
~/tmp/tamarin-central/objdir-release/shell/avmshell ~/tmp/tamarin-central/utils/ -- $@

How to use abcasm

Abcasm is an ABC assembler. It is written in Java and shell script, so you don't need to compile anything to try it. The utils/abcasm/test/ directory includes various interesting sample programs. You can test them easily and quickly.
$ cd utils/abcasm/
$ ./ test/hello.abs
$ ../../objdir-release/shell/avmshell test/
Hello, world

by Takashi ( at January 29, 2010 09:58 PM

January 27, 2010

Historical - Vanessa Freudenberg

Interactive OLPC XO Display Simulation

Many people still have not seen the innovative display of the OLPC project's "XO" laptop. It has twice the resolution of a regular LCD (200 dpi), and works in bright daylight in gray-scale reflective mode. It's impossible for me to increase your screen's resolution in software, and I cannot make your display reflective, but here is an interactive simulation of the backlight mode with its interesting color pattern. This pattern is the source of a lot of confusion about the "color resolution" of the display. The LCD has 1200x900 square pixels, and the backlight puts a full color through each pixel. It is not made of red, green, and blue sub-pixels like a regular LCD; instead, the first pixel is full red, the second green, the third blue, and so on. The DCON chip (Display CONtroller) selects the color components from the full-color frame buffer.

My simulation of the DCON achieves the same effect by selecting either the red, green, or blue color component of each pixel. Just move the mouse pointer around to see how different colors are reproduced. You'll notice strong diagonal patterns, but remember that on the actual display the pixels are only half as large. Note that the actual DCON optionally applies a bit of anti-aliasing in hardware, which is not simulated here; it helps reproduce fine structures and renders colors more accurately. Additionally, the simulation shows a magnified image to better illustrate the principle, but it is not accurate because the reflective area of each pixel is not depicted. Maybe I can add this in a later version.

I made the simulation using Squeak / Etoys, which is one of the programming environments on the OLPC machine, but also works on Windows, Mac OS X, Linux, and many more systems. If you run the simulation on the actual laptop (download the project, place it in /home/olpc/.sugar/default/etoys/MyEtoys, run Etoys, choose Load Project), then you should close the small simulated screen and just leave the magnified view open.

For the interactive simulation, download Squeak (this version installs both a regular application and a browser plugin), then click here to run the simulation in your browser, or download the project file, launch Squeak, and drop the project into it.

Intel-Mac users
beware: the plugin is not directly supported yet. To see the project in Safari, you have to quit Safari, set it to open in Rosetta (select Safari in the Finder, press Cmd-I), and reopen it. Or use the download method; Squeak itself runs fine on Intel Macs, it's just the browser plugin that's causing problems.

by Vanessa at January 27, 2010 03:37 PM

Etoys kid-tested on XO

I brought my green machine home this weekend, and my twins had fun with it. Enormous fun, in fact, for the two 7-year-olds, pounding on TamTam furiously. I couldn't bear it anymore after half an hour or so.

Instead, I showed Jakob how to make a little figure bounce around on the screen in Etoys while his sister went to practice her cello. He painted a simple head, and then we used the "forward by" and "bounce" tiles in a tiny two-line script to make it move around. I made the mistake of pointing out that the "bounce" tile can produce some noise when bouncing. Endless fun trying the different noises ensued. Oh well.

Disturbed in her practice by these noises, Sophie came over and wanted to paint, too. So we saved Jakob's project and started a new one for her. I sat back to work on my email and let her brother teach. She spent about half an hour just painting the figure. The paint tool showed that it is not yet tuned to the XO's display resolution; it's far too small. But not giving up that easily, Sophie kept erasing and repainting it over and over until she was satisfied with her "cow girl". Then Jakob proudly told her how to make it move and bounce; he had remembered almost everything needed. Together they quickly made it work, and had just started exploring the noise-making possibilities again when we were saved by the call to dinner ...

by Vanessa at January 27, 2010 03:37 PM

OLPC talk at design school

I gave a talk about the $100-laptop at the Magdeburg school of Industrial Design. We did some very inspiring projects using Squeak, Etoys, and Croquet together before. The designers always come up with interesting ideas, even though not everything is directly implementable by us developers.

Carola Zwick, dean of the school, wrote a book Designing for Small Screens that certainly gives valuable insight for OLPC developers, and she provided (though indirectly) some very important infrastructure for the OLPC office: her group designed the chairs they are sitting on. I got the actual invitation by Christine Strothotte, who got her PhD doing computer graphics in Smalltalk just a few years before I got mine from the same school. She's teaching interaction design nowadays. I'm looking forward to doing an OLPC-related project with these great folks.

A student took some photographs during the talk. Also, from his blog post it seems I convinced him of the merits of the OLPC project (it was a lively discussion). Thanks for posting, Cheng!

by Vanessa at January 27, 2010 03:37 PM

Sophie, Tweak on the OLPC laptop

I just installed Sophie on my green machine. Sophie, a project of the Institute for the Future of the Book, is implemented in Squeak (just like my Etoys activity on the laptop) using Tweak as its UI framework (which is the original topic of this blog). Tweak is also the base for the next-gen Etoys.

Installation went pretty smoothly. I downloaded the cross-platform zip file using the Web activity from Sugar and unpacked it using the command line. The first start of Sophie failed, but after replacing the failing plugin with one from the pre-installed Squeak, it started and worked. Yay!

This is an excellent example of why it's a good idea to have a regular X11 installation on the kids' laptop: a lot of software will just work, even if it is not correctly integrated into the Sugar UI.

Michael Rüger of impara (the Squeak shop leading Sophie development here in Magdeburg, Germany) came over and made a little book, downloading two logos directly from the web (Sophie can do that!), adding a bit of text and color ... Tweak performance is not exactly blazing on the XO machine; I think we made the right decision not to use the Tweak-based Etoys but to stick with the proven Morphic-based one. Of course one could optimize it a lot, but who has time for that? Anyway, it was usable - click the image to get a larger view:

by Vanessa at January 27, 2010 03:37 PM

Squeak for every child

Lately I have been working on Squeak integration in the One Laptop Per Child (OLPC) project, perhaps better known as the "$100 laptop". The whole Etoys group came over to OLPC's office in Cambridge. Squeak looks surprisingly good on the display prototype, and Etoys runs reasonably fast. Ian Piumarta took some nice pictures, which might very well be the first photos of the actual display in the wild.

by Vanessa at January 27, 2010 03:37 PM

January 25, 2010

Takashi Yamamiya

Wooden Half Adder Video

*UPDATED* The new half adder video has been released!
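For reference, a half adder adds two single bits: the sum output is the XOR of the inputs, and the carry output is the AND. A minimal sketch in Python (the function name is mine, not from the video):

```python
# A half adder adds two one-bit inputs a and b:
# sum is a XOR b, carry is a AND b.

def half_adder(a, b):
    """Return (sum, carry) for bits a and b (each 0 or 1)."""
    return a ^ b, a & b

# Truth table: 0+0=0, 0+1=1, 1+0=1, 1+1=10 (binary)
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s} carry={c}")
```

The wooden machine in the video realizes the same two gates mechanically.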

by Takashi at January 25, 2010 01:19 AM