Sunday, May 3, 2020

Emissary Architecture Starting to Take Shape

So, I've been pretty deep in WASM architecture and the WASI APIs. I've gotten to the point where we can launch an emissary and interact with it. Now it's time to actually build out the 3DOM APIs for interacting with the emissary and the message-passing loops. Initially I'm going to build a Rust module for this. I do want to do this with other languages as it becomes practical. I'd love to be able to do C# and Python, but I don't think they're quite there yet. I've used mono-wasm to generate C# wasm and Pyodide for Python, but so far they're not particularly oriented to WASI and are more intended to run on a website (and interact with a DOM).

I think it will mostly go like this: one exposed setup method from the API layer, which will receive a protobuf buffer. This will use proto3 and will be decoded and processed, which gives us a backward- and forward-compatible startup. The startup will register callbacks on the Rust side for the message pump to process, and then the emissary's 'main' basically returns. A channel is created via gRPC to the browser and eventually to the entity/avatar server, and protobuf packets are passed in via these callback mechanisms. Each callback runs and then returns, possibly with one or more protobuf packets. For now, I don't intend the emissary to have a thread to run when it is not processing a callback; initially I'd rather an emissary not be able to do things like mine bitcoin or other such undesirable activity. The service managing the emissary will probably have a call timeout and will disable it if it doesn't respond in a timely fashion.
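As a rough sketch of what that register-then-dispatch shape could look like on the Rust side (everything here is a hypothetical placeholder -- the callback signature, the registry, the command ids -- and a real emissary would decode the startup buffer with a protobuf crate such as prost rather than ignore it):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Hypothetical callback type: takes a raw protobuf payload and may
// return one in response.
type Callback = fn(&[u8]) -> Option<Vec<u8>>;

// Registry the message pump consults. Filled once during setup();
// after that the emissary has no resident thread of its own.
static REGISTRY: Mutex<Option<HashMap<u32, Callback>>> = Mutex::new(None);

// Exposed setup entry point: in real code this would decode the proto3
// startup buffer; here we just register a trivial echo handler and return.
pub fn setup(_startup_buf: &[u8]) {
    let mut map: HashMap<u32, Callback> = HashMap::new();
    map.insert(1, |payload| Some(payload.to_vec())); // echo handler
    *REGISTRY.lock().unwrap() = Some(map);
}

// Message-pump entry: the managing service calls this once per packet,
// and can time the call out and disable the emissary if it stalls.
pub fn dispatch(command: u32, payload: &[u8]) -> Option<Vec<u8>> {
    let guard = REGISTRY.lock().unwrap();
    guard.as_ref()?.get(&command).and_then(|cb| cb(payload))
}

fn main() {
    setup(b"startup-protobuf-bytes");
    let reply = dispatch(1, b"hello").unwrap();
    assert_eq!(reply, b"hello".to_vec());
    println!("dispatched ok");
}
```

The point of the shape is that the host always drives: nothing runs in the emissary except inside a `dispatch` call it can time out.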

This facility makes it hard to do gRPC directly, but at least we can get gRPC-like behavior by passing protobufs around. I can cheat and have a gRPC wrapper call which contains a command type and the protobuf as a raw vector. The server calls are trickier, as the interior emissary-to-server gRPC calls are theoretically private (and ad hoc), so their formats can't be known by this middleware layer which is operating the emissary for the browser. There are some up-and-coming gRPC proxies out there, or I will have to do my own, which just calls methods on the emissary and returns an answer (no actual network involved). With my WASI layer, I could actually permit the emissary to open a socket, but I don't trust this yet.
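The cheat wrapper could be as simple as a little envelope: a command type plus the inner protobuf as an opaque byte vector, so the middleware never has to understand the private emissary-to-server formats. A minimal sketch (the wire layout here is made up for illustration; a real version would itself be a proto3 message):

```rust
// Hypothetical envelope: [command: u32 LE][len: u32 LE][payload bytes].
// The payload is an opaque protobuf the middleware never inspects.
fn encode_envelope(command: u32, payload: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(8 + payload.len());
    buf.extend_from_slice(&command.to_le_bytes());
    buf.extend_from_slice(&(payload.len() as u32).to_le_bytes());
    buf.extend_from_slice(payload);
    buf
}

// Returns the command and a view of the inner payload, or None if the
// buffer is too short or the declared length overruns it.
fn decode_envelope(buf: &[u8]) -> Option<(u32, &[u8])> {
    if buf.len() < 8 {
        return None;
    }
    let command = u32::from_le_bytes(buf[0..4].try_into().ok()?);
    let len = u32::from_le_bytes(buf[4..8].try_into().ok()?) as usize;
    buf.get(8..8 + len).map(|payload| (command, payload))
}

fn main() {
    let wire = encode_envelope(7, b"opaque-protobuf-bytes");
    let (cmd, payload) = decode_envelope(&wire).unwrap();
    assert_eq!(cmd, 7);
    assert_eq!(payload, &b"opaque-protobuf-bytes"[..]);
}
```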

So now I actually have to work out exactly what the browser-emissary interaction (which I've been calling the 3DOM API) actually is and how it works. I've got some prototypes of this that I worked out last summer, but now I have to start building my .proto files to implement it.
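For flavor, a first .proto sketch might look something like this -- every name and field here is a hypothetical placeholder, not the actual 3DOM API, which is still being worked out:

```proto
syntax = "proto3";

package threedom;

// Envelope the browser and emissary exchange over the message pump.
message ThreeDomMessage {
  uint32 command = 1;  // which registered callback to invoke
  bytes payload = 2;   // inner message, opaque to the middleware
}

// Example inner message: ask the browser to move an entity.
message SetTransform {
  string entity_id = 1;
  float x = 2;
  float y = 3;
  float z = 4;
}
```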

Initially there will be no world and no entity server to work this out. I'll have the concept of a "system entity", which can simply run inside the loading area (which will eventually become a mini-world served from inside the browser). Also, eventually we'll have an avatar server, but initially the avatar is mostly just a VR camera FPS rig with no smarts. I'll be doing some checkins soon of some of this structure.

The emissary will have some built-in parasitic overhead initially, since each one has to carry its own copy of the loading module and the protobuf module. Eventually I'm sure I'll be able to dynamically link such things so they can be shared between emissaries. There's an effort for a package manager for WebAssembly (like npm) called wapm; perhaps there can be a package manifest and emissaries can be dynamically loaded. I am ultimately intending for there to be hundreds, if not tens of thousands, of emissaries (in the distant future), so optimization, sharing, hot-deploy, and versioning will have to be seamless.


I have ordered a set of Gear VR lenses with adapters to put into my Vive HMD. They're supposed to greatly improve the appearance of the scenes and lower the screen-door effect. We'll see how they do.

I also found an inexpensive device called WalkOVR which does 3D tracking of the lower body and motion capture for realistic locomotion. I think you mostly just walk in place, and you can use it to walk up stairs and such. I'm keeping an eye on this product. It would be nice to double the sensors and track the upper body too; I may buy one one of these days. The goal is to have as realistic a rendering of an avatar as possible -- distributed holodeck, anyone? I'd also like to be able to just hang my legs off the end of my chair, swing them, and walk that way (I'm getting to the age where, when immersed in VR for any period of time, I'd rather not have to stand -- at least until I've gotten an adequate workout).

In my studies, I've thought it would be useful to learn to program directly in wasm assembler. It's been 40 years since I primarily programmed in assembler (on the DEC PDP-10), and I always loved it. I've had a silly idea that it would be fun to build a full-on modern macro-assembler which outputs to WAT (the WebAssembly text format).


