Coding with Balls

February 19, 2017

Makt


About a week back we released a new demo called “Makt” at Datastorm in Gothenburg and won the Amiga 500 demo competition there.
While we *strongly* recommend you watch it on proper hardware and a nice fat CRT you can also check out this youtube capture:

Preface

When I heard Datastorm would return after a 3-year break I knew we had to make something special for the Amiga competition there. Mainly because I’ve really enjoyed previous Datastorms (it’s *the* oldschool party for me), but also because it was likely there’d be some decent competition. (And it was! Although it’d be cool if more groups ventured away from the comfort of 1992-style..)

I also knew exactly which project to choose from my list of “stuff I’d really like to implement (but rarely get the time to start)”. For a while I’d been playing with the idea of making A500 versions of the main effects in You are Lucy and Dataskull and if I managed to pull it off I hoped it’d be a proper party banger.

I obviously didn’t plan to port those effects directly. Lucy and Dataskull were made for high-end 060 / AGA machines and even there their chunkybuffer effects were chugging along at 16 to 25 frames per second. Instead I wanted to do something that would give a similar look & feel, while running on much less capable (and much more fun!) hardware.

Note: While writing this I’ve been informed that there are unimaginative and uninformed people out there who believe the whole thing is just an animation (“it’s only 1 bitplane so you have room for a lot. durrr…”). While hilarious it’s of course also utter bollocks. 🙂

So, what *is* the trick then? Image-based rendering! Just like on serious computers! Or.. well… BOBs, really.

Bob the billboard imposter

All the effects in the demo are drawn using a (fairly low) number of bobs which are used to stencil parts of a texture into a single bitplane. In other words: you move small pixel masks around in the framebuffer while also scrolling the texture that you can see “through” the mask.

It’s all pretty much Amiga Blitter 101, although I did have to spend a bit of time rediscovering the blitter’s reverse mode for the first time in ages.
But while it’s basic stuff at this level (not unlike “plotting pixels in a buffer”), the really interesting bit comes when you decide *how* to move things around and what to put in the source textures. What we’ve effectively got here is a fast way of drawing a lot of moving pixels by just playing with the on-screen bob positions and the texture coordinates. As such, the “real work” was to use this rendering technique to create visuals that looked powerful (and quite atypical) for the old A500.
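
To make the idea concrete, here’s a rough C model of what one stencil bob boils down to. This is not the actual 68k/blitter code and all the names are made up; pixels are one byte each here for clarity, while the real thing packs bits and does the whole bob in a single blit.

```c
/* Plain-C model of one stencil bob: the mask decides which pixels of a
 * window into the (scrolling) texture get OR'd into the 1-bitplane
 * framebuffer. One byte per pixel for clarity. */
#include <stdint.h>

#define BOB_W 32
#define BOB_H 32

static void draw_stencil_bob(uint8_t *fb, int fb_w, int fb_h,
                             const uint8_t mask[BOB_H][BOB_W],
                             const uint8_t *tex, int tex_w, int tex_h,
                             int bob_x, int bob_y,   /* on-screen position    */
                             int tex_u, int tex_v)   /* texture scroll offset,
                                                        assumed non-negative  */
{
    for (int y = 0; y < BOB_H; y++) {
        for (int x = 0; x < BOB_W; x++) {
            int sx = bob_x + x, sy = bob_y + y;
            if (sx < 0 || sy < 0 || sx >= fb_w || sy >= fb_h)
                continue;
            /* wrap the texture read so scrolling never runs off the edge */
            int tx = (tex_u + x) % tex_w;
            int ty = (tex_v + y) % tex_h;
            fb[sy * fb_w + sx] |= mask[y][x] & tex[ty * tex_w + tx];
        }
    }
}
```

Moving the bob moves the mask on screen, and changing (tex_u, tex_v) scrolls what you see through it. That’s really all the effects in the demo are doing, just with different rules for how those numbers change.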

Details on the bob rendering:

  • All bobs are 32×32 pixels (48×32 blit size to enable scrolling)
  • We use suitably noisy / dithered masks to make things nice’n’fuzzy and avoid any unattractive sharp edges where the bobs overlap.
  • The bob plotter is fairly fast by itself but definitely not record material. I also did some basic experiments interleaving it with the coordinate transformations but saw surprisingly little improvement. Things seemed fast enough for what we needed so I didn’t pursue it further.
  • In the final demo (with copper post-processing & music) all effects run at 25 fps on a standard A500, except for the wobbly intestine which drops down to 16 fps for a few seconds.
  • Different effects use different texture sizes, from 96×96 in the cube parts to 960×256 for the intestine.
  • In the spirit of Just Hacking It As We Go Along each part has its own bob routine that’s 90% the same as the others.
  • Most of the parts draw just 70-80 bobs per frame. The intestine uses the most, maxing out at around 100 on-screen bobs.
  • Everything is drawn into just 1 bitplane. The 2 most recent frames are displayed using 2 bitplanes and then a bunch of flickering copper gradients are applied on top.

On to the effect code!

Face

This was the first thing I tried out after deciding to do bob imposters, mainly because it seemed like it’d give results with fairly little work.
The goal was to make “something that looks like You are Lucy on A500” and I even ended up using a texture (post-processed and dithered to 2 colors) and depth map from there.
The parallax effect in Lucy does per-pixel depth distortion but that’s of course a no-go on A500. Instead we just assign a depth value to each of the bobs and do per-bob depth distortion. If the bobs overlap a bit and the depth projection works out, then it just might look ok! 🙂
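
In C-ish terms the per-bob version is tiny. This is just a sketch of the principle; the fixed-point scale and the exact meaning of the “camera” offset are assumptions:

```c
/* Per-bob parallax sketch: each bob carries a single depth value sampled
 * from the depth map, and the whole bob is offset by the camera movement
 * scaled by that depth. */
#include <stdint.h>

typedef struct {
    int16_t base_x, base_y;   /* resting screen position           */
    uint8_t depth;            /* 0 = far, 255 = near, from the map */
} Bob;

static void parallax_bob(const Bob *b, int cam_x, int cam_y,
                         int *out_x, int *out_y)
{
    /* near bobs get pushed further than far ones -> fake depth */
    *out_x = b->base_x + (cam_x * b->depth) / 256;
    *out_y = b->base_y + (cam_y * b->depth) / 256;
}
```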

An important point for all the effects was that the source images needed to be detailed enough to give a feeling of depth and surface texture. In practice this was handled by playing with contrast and blur in Photoshop before noise-dithering down to 2 colors.
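
For reference, the “noise dither down to 2 colors” step is conceptually as simple as this little sketch. In the demo it was done in Photoshop, so this is only an illustration of the idea:

```c
/* Toy version of noise-dithering an 8-bit grayscale image down to 1 bit:
 * compare each pixel against a randomized threshold instead of a fixed 128,
 * which keeps some grain and detail in otherwise flat areas. */
#include <stdint.h>
#include <stdlib.h>

static void noise_dither_1bit(const uint8_t *gray, uint8_t *out,
                              int w, int h, unsigned seed)
{
    srand(seed);
    for (int i = 0; i < w * h; i++) {
        int threshold = 64 + rand() % 128;   /* noisy threshold around mid-gray */
        out[i] = (gray[i] > threshold) ? 1 : 0;
    }
}
```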

Texture from You are Lucy

Depth map (just a slightly blurred version of the texture)

1 bitplane texture used in Makt

Aaaand action!

Other tech stuff:

  • The dataset consists of one picture (256×256 x 1bpl) and around 80 vertices for the bobs.
  • The bobs are pre-sorted and never switch drawing order
  • The depth transforms are done in two stages (see the sketch after this list):
    • Regular projection (makes the face turn slightly as it moves around the screen).
    • A Face Dragging Vector for more brutal distortion of the features (e.g. when it gets stretched and torn apart).
  • As far as I remember there’s no animation of the texture coordinates at all. It could probably have been used for more interesting distortion and stretching.
  • Like in several of the other parts the blitter only clears every 2nd line of the current framebuffer. This adds a nice little messy trail of pixels after the bobs and also helps hide some of the glitches where they don’t overlap.
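
As promised above, a small sketch of the two-stage transform. The structure follows the description (project first, then drag), but weighting the drag by depth is an assumption made for this sketch:

```c
/* Two-stage face transform, sketched. Stage 1 is a simple projection so the
 * face turns slightly as it moves; stage 2 adds the "Face Dragging Vector"
 * for the stretching and tearing. */
typedef struct { int x, y, depth; } FaceBob;   /* depth: 0..255 */

static void transform_face_bob(const FaceBob *b,
                               int face_x, int face_y,   /* face movement   */
                               int drag_x, int drag_y,   /* dragging vector */
                               int *sx, int *sy)
{
    /* Stage 1: regular projection - nearer bobs shift a little more,
       which makes the face appear to turn as it moves around the screen. */
    int px = b->x + (face_x * (128 + b->depth)) / 256;
    int py = b->y + (face_y * (128 + b->depth)) / 256;

    /* Stage 2: the drag vector, scaled by depth (assumption), for the
       more brutal stretching and tearing. */
    *sx = px + (drag_x * b->depth) / 256;
    *sy = py + (drag_y * b->depth) / 256;
}
```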

Wobbly Intestine

Pretty much the same effect as the face, just with a larger texture and more bobs (higher density and larger object). The intention here was to take it a bit slower & more majestic to give the idea that it miiight be the last effect already.

The waving and pulsating is just a simple sine-distortion. I had originally planned a much more interesting twist here (no, not actual twisting) but I got a bit short on time (and also noticed that I was already quite close to dropping below 25 fps as it was).
Due to the higher number of bobs this was the only part where I had to do any culling prior to the actual plotting stage in order to (mostly) reach 25 fps. The quick zoom-out before the waving begins drops down to every 3rd frame though (i.e. 16.6 fps).
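
The sine distortion itself is as basic as it sounds – something along these lines, where the amplitude and phase steps are made up and on the A500 the sine table would of course be precalculated:

```c
/* The wobble, sketched: push each bob sideways by a sine of its position
 * along the tube plus the frame counter. */
#include <math.h>
#include <stdint.h>

#define SINE_LEN 1024                       /* power of two for cheap wrapping */
static int16_t sine_table[SINE_LEN];        /* roughly -256..256 */

static void init_sine(void)
{
    for (int i = 0; i < SINE_LEN; i++)
        sine_table[i] = (int16_t)(sinf(i * 2.0f * 3.14159265f / SINE_LEN) * 256.0f);
}

static int wobble_x(int base_x, int along, int frame)
{
    /* phase advances with both position along the tube and time */
    int phase = (along * 3 + frame * 8) & (SINE_LEN - 1);
    return base_x + (sine_table[phase] * 24) / 256;   /* roughly +/-24 px */
}
```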

Warped cubes

This is all done by changing the texture coordinates and there’s no movement of the actual bobs themselves except for a randomized 0-7 pixel jitter (which adds a *lot* to the final look though!)
The deformation is just the standard thing you’d do for a grid expander: scaling the texture coordinates based on the bob’s distance from a given point (mainly the center of the screen because I’m old). However, since we’re just offsetting the textures per-bob rather than stretching them it looks a bit rawer and less generic. It just becomes a mess if you distort too much of course, so we try to avoid that.
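
A sketch of the per-bob texture warp. The falloff and the fixed-point scale are assumptions; the point is that only the texture window moves, not the bob:

```c
/* Offset each bob's texture window in proportion to the bob's distance
 * from a centre point. The bob itself stays put, apart from the 0-7 pixel
 * jitter mentioned above. */
static void warp_texcoords(int bob_x, int bob_y,        /* on-screen position */
                           int centre_x, int centre_y,  /* warp origin        */
                           int strength,                /* signed, 0 = flat   */
                           int *tex_u, int *tex_v)
{
    int dx = bob_x - centre_x;
    int dy = bob_y - centre_y;

    *tex_u += (dx * strength) / 256;
    *tex_v += (dy * strength) / 256;
}
```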

The patterned cube is rendered into a 96×96 bitplane buffer which is then used as the source texture for the bobs. The cube drawing is really naive and inefficient but I never got around to optimizing it. That would’ve allowed a fair bit more bobs on screen but as the effect still looked ok it wasn’t a major priority.

Zoomer

This was a bit more fun and, together with the Skull, one of the few effects to use animated bob masks and multiple textures.
The idea is simple enough though: just adapt a normal “endless zoomer” effect to stencil-bob rendering:

  1. “Pre-generate several images of the same motif with different zoom levels.” In the demo there are 4 badly looping ones.
  2. “Zoom a bit on one image.” Here done simply by applying a uniform scale to all the texture coordinates, while the bobs themselves don’t move at all. Obviously we’re just sliding different bits of the underlying image around, but at scaling factors in the range of 0.7 – 1.3 it looks decent.
  3. “Blend between two images at different zoom levels.” Which we did by jittering in more and more pixels from the next image, while removing the bobs using the previous texture.

As it would’ve been too slow to draw all the bobs for both images at the same time we just jittered in 8 at a time. So for a given zoom factor you have:
– Draw X bobs with texture 0
– Draw 80-8-X bobs with texture 1
– Draw 8 bobs with texture 1, using masks with increasing amounts of pixels.
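
In rough C terms a frame of the blend looks something like this. draw_bob() and the constants are placeholders, not the real routine:

```c
/* One frame of the zoomer blend, sketched. 'switched' counts bobs already
 * fully on the new texture (a multiple of BATCH); 'density' is bumped a
 * little every frame, and when it reaches the densest mask the next batch
 * of 8 starts. */
#define NUM_BOBS  80
#define BATCH      8      /* bobs switched over per blend step */
#define FULL_MASK  7      /* densest of the 8 masks            */

void draw_bob(int index, int texture, int mask);   /* assumed to exist elsewhere */

static void draw_zoom_frame(int switched, int density)
{
    int x = NUM_BOBS - switched - BATCH;   /* bobs still fully on texture 0 */
    if (x < 0) x = 0;

    for (int i = 0; i < x; i++)
        draw_bob(i, 0, FULL_MASK);                      /* old texture, full mask    */
    for (int i = x; i < x + BATCH && i < NUM_BOBS; i++)
        draw_bob(i, 1, density);                        /* jittering in, sparse mask */
    for (int i = x + BATCH; i < NUM_BOBS; i++)
        draw_bob(i, 1, FULL_MASK);                      /* new texture, full mask    */
}
```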

Jitter-mixing in action (some artifacts here due to reading outside of the texture)

With proper textures and a bit of coloring

Moving around in the picture just came down to offsetting the center of the coordinate scaling. This of course won’t affect the pre-generated images so if you move too far, too slow and don’t flash the screen enough then it might look a bit crap.

Cuberush

Both this part and the warped cubes would go nicely in a 4k intro as there’s really no data to speak of. We’re rendering cubes to textures again, but this time we’re keeping 64 of them in memory and updating one per frame. Different textures are then selected for each cluster of bobs that ends up on the screen. Each cube consists of 4 bobs and the depth scaling is simply done by moving the bobs closer together or further apart.
Each line of bobs holds 8 cubes, so there are 3 lines x 8 cubes x 4 bobs = 96 bobs in total, although there are always some that are off-screen and get culled. (On a side-note: I was really lazy this time around and just immediately culled anything close to the screen edges. It’s sloppy but it’s less of a concern with the kind of busy & noisy visuals we have here.)
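
Here’s a sketch of how one cube’s 4 bobs can be placed. The 2×2 layout, the screen centre and the projection constants are assumptions; the point is that the “scaling” is nothing more than changing the spacing between the bobs:

```c
/* One cuberush cube, sketched: 4 bobs arranged around the cube's projected
 * centre, pulled together or pushed apart depending on depth. */
typedef struct { int x, y, z; } Vec3;   /* cube centre in 3D, z > 0 = in front */

static void place_cube_bobs(Vec3 c, int bob_x[4], int bob_y[4])
{
    /* crude perspective projection of the cube centre (320x256 screen) */
    int cx = 160 + (c.x * 256) / c.z;
    int cy = 128 + (c.y * 256) / c.z;

    /* spacing shrinks with distance, which reads as depth scaling */
    int spread = (16 * 256) / c.z;
    static const int ox[4] = { -1,  1, -1,  1 };
    static const int oy[4] = { -1, -1,  1,  1 };

    for (int i = 0; i < 4; i++) {
        bob_x[i] = cx + ox[i] * spread;
        bob_y[i] = cy + oy[i] * spread;
    }
}
```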

The bobs can move freely around in 3D but as I couldn’t use too many of them I opted for simple linear patterns and only rotating around the z-axis. I kinda wish I’d pushed that a little bit further.
The movement patterns were set up (there were just 3 lines in 3d space, so “keyframed” is kinda misleading) by hooking in mouse & keyboard controls to the effect code itself. Perhaps a bit overkill but I think the end result turned out better than what I’d have managed by just typing in coordinates by hand.

Adjusting movement patterns

This is also the only part where there’s just 1 bitplane enabled instead of mixing the two most recent frames. I wanted fast movement while still being able to make out the sharp patterns in the cubes and the pseudo-afterburn of the 2nd bitplane just made the visuals too smudgy.

Skull

This is the fun one. The face & the wobbly intestine were effectively just “2D images with a bit of depth added” (and the cuberush more traditional imposter billboarding) but here I wanted something that I could play around with properly in 3D. It’s somewhat related to Dataskull (they’re both rotating a bunch of points that make up a skull) but while Dataskull uses (comparatively) many particles, Makt relies on less than 100 of them and clever texturing instead.

As for the subject matter I just like a good skull effect. They’re interesting to look at, corny enough to remind us that all of this demostuff is just a good laugh, and kinda tough & evil in the right setting.

The basic principle of the effect is easy, as always:
1. Generate some images of a 3D skull from different angles. We used 4 different images to represent 180 degrees rotation around the y-axis.
2. Also generate depth maps for the same angles (e.g. just dump the z-buffer).
3. For each of the 4 images make a small batch of bobs. Assign the bobs z-values based on the depth map.
4. Find some way to draw this *efficiently* while using bobs from different batches based on how you’d like to rotate the on-screen skull.

Points 1. – 3. here are pretty much the same as what we did for the face and the intestine. The main difference being that the textures and depth maps were now generated by rendering a skull object in OpenGL rather than retouching photos of faces and tree branches in Photoshop.
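
Assigning the z-values is then just a matter of sampling the depth map at each bob’s position – something like this plain-C stand-in for the asm tooling described further down, with the depth format simplified to one byte per pixel:

```c
/* Assign each bob a z-value from the rendered depth map.
 * depth[] is assumed to be one byte per pixel, 0 = far, 255 = near. */
#include <stdint.h>

typedef struct { int x, y, z; } SkullBob;

static void assign_bob_depths(SkullBob *bobs, int count,
                              const uint8_t *depth, int w, int h)
{
    for (int i = 0; i < count; i++) {
        /* sample at the centre of the 32x32 bob, clamped to the image */
        int sx = bobs[i].x + 16;
        int sy = bobs[i].y + 16;
        if (sx < 0) sx = 0;
        if (sx >= w) sx = w - 1;
        if (sy < 0) sy = 0;
        if (sy >= h) sy = h - 1;
        bobs[i].z = depth[sy * w + sx];
    }
}
```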

One of the depth buffers, this time in 16 bit packed into 2 color channels

The 4 images used to create the rotating skull:

The last point – actually making stuff look good and doing it with reasonable performance – required quite a bit of experimentation and content-specific tweaking.
This is what we ended up with:

  • Generate the data for each of the 4 images / batches as described above.
  • Obviously you only need to deal with 2 of the 4 batches in any specific frame. For instance: if you want to render the object at 22 degrees rotation (around the y-axis) then you use the image (and accompanying bob batch) representing 0 degrees, the next one representing 60 degrees, and then just “mix them together in some way” (sketched after this list).
  • At the rendering level the “mixing together” was done in the same way as for the zoomer: use bob masks with different amounts of pixels.
  • Determining which bobs to remove and which to add while rotating was done mainly based on each bob’s x-coordinate in the original “straight-on” position (e.g. the position the texture was generated in). This required quite a bit of tweaking to make sure enough bobs were actually removed (so we didn’t always render 2 full batches) without too many gaps appearing. Of course: for performance reasons this culling of bobs was done *before* anything was actually rotated.
  • Rotation around the x-axis (when the skull nods or tips back-/forward) is just basic 3D rotation and required no custom work. We just had to make sure it didn’t tip too far as there were no textures showing the top or bottom of the skull (quite similar to Dataskull which also had a hole in the head).
  • Sorting the bobs in real-time (on top of all the other stuff going on) was a bit too slow. What we did instead was to pre-sort and merge the batches two by two (e.g. one buffer with batches A+B, one with B+C and one with C+D). Simply presorting based on the z-value of each bob in its original “straight-on” position worked surprisingly well.
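
Put together, the batch selection for a given y-rotation looks roughly like this. The 60-degree spacing follows from 4 images over 180 degrees, but the linear mix is an assumption and stands in for a lot of hand-tweaking:

```c
/* Pick two of the four pre-rendered batches for a given y-rotation and
 * compute how far the blend between them has progressed. */
static void select_batches(int angle_deg,                 /* 0..180 */
                           int *batch_a, int *batch_b,
                           int *mix)                      /* 0..255 */
{
    int a = angle_deg / 60;
    if (a > 2) a = 2;                      /* clamp so a+1 stays a valid batch */
    *batch_a = a;
    *batch_b = a + 1;
    *mix = ((angle_deg - a * 60) * 255) / 60;
    /* 'mix' then decides which bobs of batch_a get culled (based on their
       original x-coordinate) and what mask density the incoming bobs of
       batch_b are drawn with - the same trick as in the zoomer blend. */
}
```
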
Dissolving a single batch of bobs as they rotate

And using all 4 batches & textures. In the demo some of the glitches and noticeable transitions are covered up by the 2nd bitplane “afterglow” and color flashes. 🙂

And that’s about it really. Summing it up now it seems very straight-forward but there was a significant amount of trying, adjusting and tweaking involved. 🙂

Tooling

As always in recent years there were some ad-hoc tools involved. The main “tool” this time was a messy piece of 68k asm code for manually placing bobs in 2D space, sampling each bob’s z-value from a depth map and then checking the results on the fly. It’s a “tool” only in the most rudimentary sense, with mouse control and F-keys used to control different drawing and rendering modes. I had various versions of this for each of the effects depending on the specific characteristics of each and I hope I never have to edit any of those sources again.

A word of warning though: This is very much the development process *I* prefer. If you were to do similar effects on Amiga (that would be awesome by the way!) then you’d probably be better off doing a lot more of the data generation in a high-level language on a PC. For instance: generating a texture atlas with impostors from many different angles might give much better results than manually picking them from a small number of full images. That said, when experimenting with new stuff I like to stay in Asm-One (on a blisteringly fast WinUae-emulated Amiga) to minimize the amount of mental context switches, benefit from all the old bits of code I’ve got and be able to freely move prototype code to the actual effect (when it’s not too slow).

Post processing

Nothing new in the copper department for this demo, except for the orange bars briefly used in the early zoomer glitch-outs. The rest of the copper coloring was taken directly from Party Elkstravaganza and then dumbed down to only use one base color (whereas Elkstravaganza blended multiple). We’re also just sampling from the same color table that I described in a previous post.
I originally planned to do a lot more fancy stuff here but then I kinda fell in love with the rawer single-gradient look and stuck with it.

Data accounting

It’s actually a rather small demo. Uncrunched it comes in at just above 500k and after going through Cranker it’s 322k. No attempt was ever made to reduce the file size during development as I tend to postpone that until it’s really required. Here the only real concern was memory usage rather than disk space. We kept the voice sample separate from the tune itself so that it could be kept in slowmem until it was needed. Other than that there was none of the tedious janitorial RAM-shuffling you sometimes have to do.

The larger bits of data are:

  • 131k for the soundtrack
  • 72k for the voice sample
  • 40k texture data for the Skull (four 1-bitplane images at 320×256, where 30-50% is completely empty)
  • 40k texture data for the Zoomer (four images at 320×256)
  • 30k texture data for the wobbly intestine (960×256, again with a lot of empty space)
  • 8k texture for the face (256×256)
  • 37k of sine tables (!) just because I forgot they were there.
  • 34k for the intro text, “MAKT” logo, end text, bob masks & stencil patterns for the cubes
  • 32k color table (same as in Elkstravaganza but not delta-modulated this time as there was no need to crunch)
  • 13k (or thereabouts) of inefficiently stored bob coordinate data, including at least 2 batches that were never used.

Missed opportunities

Of which there were *so many* this time around!
Even when excluding the crazier and potentially-impossible ideas I’d say that the final product is only about 65% of what we aimed for. Some of the missing bits might show up in a later demo but I’ll definitely do something completely different first.

Things that weren’t:

  • Mixing different bob masks in the same frame. This looked promising in the trials and there are actually more masks in the demo data.
  • Morphing and growing stuff on objects. Would’ve made the skull way more evil.
  • Lots of ideas for abstract patterns in both the effects and the backgrounds (we don’t fear the black background of death but it’s not always what we aim for either).
  • Some fairly glitchy bitplane-distortions were also implemented but never used.
  • Feedback effects! The bob rendering would be well suited for noisy variations on Dweezil-style chaos zoomers.
  • The entire first part. Lug00ber finished the soundtrack for it and I have one and a half effects ready (completely different stuff from what’s in the main part).
  • The intro sequence for part 2 was not planned to be just text. 🙂

Onwards

It was great fun working on this one. The effects were fun to play around with and looked *almost* the way I imagined them in my head. I would of course have liked to spend more time at the party than just two late nights but the last two days of hotel coding were quite enjoyable and without any of the desperation & doubts that can appear when you’re over-tired and fed up.

In summary: we still like to make demos and we still enjoy winning compos so we’ll continue with both.
