Analogy Circus!

I’ve been hearing a lot about the “Dark-web” in the news recently.
I’m not one to get irate about a stupid term (Snrk! Yes I am, it happens *all* *the* *time* :P), but it seems a little silly to me to categorise a whole subset of documents as nefarious or illicit based solely on their availability.
Imagine for a moment the last post-it note you scrawled something on.
Unless you immediately took a photo of that note and placed it online for all to see, along with transcribed text so it’s easy to index and find, then you’re participating in “DARK-POST-IT” activity.
Doesn’t that sound ridiculous?
It does to me.
Sure “non-indexed web-site” or “non-referenced filestore” aren’t as catchy, but I tend to swing towards not vilifying a group if there are obviously sections of said group that shouldn’t be vilified.

Also, isn’t the content of every single web-server and personal computer that is specifically protected from access part of the *blech* “Dark-web” by definition?

Just a thought, brought to you by the Department Of Non-ambiguous Terminology.
Remember, if you’re planning on generalising over a huge group with a single word or phrase, just DON’T™.

Dave

Blending in a Winter Wonderland

Last weekend I was thinking “Should I have a go at something seasonal?” and after thinking about it for a little while I still didn’t have any decent ideas, so instead I tried simulating *really* gentle snowfall with particles.

I threw a few brown cylinders on top of a white plane, stuck another plane above it and started playing with particle physics until I had what I thought was a fairly decent looking effect.

That was when things started to get interesting. I remembered I’d been talking with someone recently on G+ about Depth Of Field effects and, as I’d played with the effect previously, I thought I’d see how these particles looked when they were out of focus.

Hours passed really quickly, and before I knew it I’d textured and displaced the ground plane, done the same to the “trees” which were looking a bit more like trees (sans quotation marks), put a sky environment in, started building props, set up some other materials and stuff, parented the camera to an empty so I could have it track nicely, and was slinging a focus empty about like a madman.
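As an aside, the camera rig part of that is dead easy to reproduce. Below is a minimal sketch of the idea in Python, assuming a recent Blender and a camera named “Camera” (the empty names are mine; in older 2.6x builds the DOF property was camera.data.dof_object instead):

```python
import bpy

scene = bpy.context.scene
cam = bpy.data.objects["Camera"]

rig = bpy.data.objects.new("CameraRig", None)      # empty to carry the camera
focus = bpy.data.objects.new("FocusTarget", None)  # empty to focus on
scene.collection.objects.link(rig)
scene.collection.objects.link(focus)

cam.parent = rig                    # animate the rig; the camera comes along
cam.data.dof.use_dof = True
cam.data.dof.focus_object = focus   # sling this empty about to pull focus
```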

The end result is shown below, and I think it looks pretty neat.

Now, there are bits of the story I’m missing out here. There was a point where I’d rendered over 30 GB worth of frames that were of little or no use whatsoever. There was another point where I left it rendering overnight, and ran out of disk space about half an hour in. There was the time I altered the camera path and effectively “re-shot” the whole thing really late at night and made an utter balls-up of it.

I won’t go into great detail about those times. Suffice it to say that it was still worth it IMHO.

I really like this little clip, and it makes me feel better to watch it and think about what I might make next.

😀

Dave

Epiphany

*PERSONAL OPINION ALERT*
OK, I’ve figured it out.
I know why Blender’s UI is good and other interfaces can go die in a ditch somewhere.
Stick with me here, I actually did do some research.
I was keeping one eye on G+ today and someone asked about rendering for 3D displays. I was going to comment on that post, but I buggered it right up and lost everything I wrote, so I got a bit miffed and wrote up what I was going to say on this site instead – it can be read here if you care about the subject at all.
Anyhow, the questioner was using Lightwave 11. Now, I’ve used Lightwave before, but at the time it was a demo version on my Amiga, so you can Imagine (snrk, I actually preferred Imagine at the time) how long ago that was.
I figured I’d have a look and see what Lightwave was up to these days.
NewTek offer a 30-day free trial of Lightwave, and after a few minutes I was up and running.
Within about five seconds of loading up the modeller application, it hit me like a bolt of lightning, and now I know, specifically and absolutely, why I think all non-Blender-style UIs make me so frustrated that I want to smash them to tiny bits.
They *RUIN* my mouse.
That sounds odd, I know, but if you’ve read this far, maybe you’ll hang on a little longer to see what I mean.
OK, so when you want to move a bunch of verts in Blender, you select what you want to move (or grab if you will), hit “g” on the keyboard then move your mouse and hit left mouse to confirm or right mouse to cancel.
What’s missing from that action? At least one click, and in most other UIs a click-and-hold.
In the Blender interface, once I’ve hit the key on the keyboard, the mouse becomes the active tool. It’s activated. It’s on. Get cracking.

You see, hitting the key on the keyboard WAS the click. Not only that, it was a STICKY CLICK! (not sure I like the sound of that, but still…) I can let go of it now, and until I’m done doing what I need to do, all I have to do is move the mouse.
There’s still a choice left to be made of whether I wanted to actually *do* the thing I’m doing or not, but if I didn’t, I needn’t ctrl-z it under most circumstances (some exceptions apply), I can just right-click to cancel.
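If you fancy seeing what that pattern looks like under the hood, below is a tiny sketch of a Blender modal operator (the operator name and the 0.01 scale factor are my own inventions): a single key press starts it, mouse movement alone drives it, left-click confirms, and right-click cancels and restores the original state with no ctrl-z in sight.

```python
import bpy

class SimpleGrab(bpy.types.Operator):
    """Sketch of Blender's press-a-key-then-move interaction pattern."""
    bl_idname = "object.simple_grab"
    bl_label = "Simple Grab"

    def invoke(self, context, event):
        # The key press that ran this operator WAS the click: remember the
        # starting state and hand control to modal().
        self.start = context.object.location.copy()
        self.start_x = event.mouse_x
        context.window_manager.modal_handler_add(self)
        return {'RUNNING_MODAL'}

    def modal(self, context, event):
        if event.type == 'MOUSEMOVE':
            # No button held down: moving the mouse is doing the work.
            delta = (event.mouse_x - self.start_x) * 0.01
            context.object.location.x = self.start.x + delta
        elif event.type == 'LEFTMOUSE':
            return {'FINISHED'}               # confirm
        elif event.type in {'RIGHTMOUSE', 'ESC'}:
            context.object.location = self.start
            return {'CANCELLED'}              # cancel; nothing to undo
        return {'RUNNING_MODAL'}

bpy.utils.register_class(SimpleGrab)
```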
OK, now contrast that with what I experienced in Lightwave. Same job, shift some verts.
I found out pretty quickly that “t” will get me into translate mode. Makes sense so far. Once in translate mode, the status bar tells me I can “Drag to translate in view plane. CTRL forces one axis motion”.
But now I have to actively hold the left button down for the duration of the repositioning, letting go of it to confirm the new position.
There *is* no “hold on, I didn’t want to do that” function, other than to let go of the left mouse button, confirming the action, then undoing it again.
This may seem like almost nothing from a UX standpoint if you’ve never experienced anything else, but the way I see it is that by jumping straight from pressing a key to having the mouse active in the current context, you’re effectively getting a *several hundred button mouse*, which sounds perfect for complex tasks with lots of different modes if you ask me.

I was going to carry this on to talking about on-screen buttons for navigation.
Well, actually I will, albeit briefly.
Don’t.
Just don’t.
I can’t think of a single reason why panning, rotating or zooming the view should be done by left-click dragging on a 16×16 pixel icon in the corner of the viewport.
It’s frankly ridiculous.
I know that’s not the only way to do it, and to each his own, but seriously, if you had to use that method to pan/tilt/zoom, everything would take about fifty times longer and you’d very quickly go insane.

This is the thing that broke me when I tried UnrealEd too. Actually click-dragging on manipulators is so slow and error prone as to be essentially a joke.
Sometimes it might be unavoidable, I’ll admit. By all means provide some manipulators for those corner cases where the user’s keyboard has caught fire or something.
But also provide a mature, admittedly different, but blazingly fast way to interact with your work and I’m guessing that those who *can* get it, *will*.

3D Rendering

Someone asked how to render for 3D TV on G+ and I spent quite a while writing out a long comment about how it can be done and what you should pay attention to and so on and so forth.
Then I got a bit cocky and thought I’d try and do an ASCII art stereo pair.
Then I hit ctrl+shift+left arrow and my browser went back and lost my comment.
I whimpered at that point.

So let’s have another go.

3D footage isn’t super mysterious really. The information needed to display a 3D image is simply a view from the left eye and a view from the right.
By presenting each of these images to their respective eyes, via shutter glasses for example, or the good old cross-eyed viewing method, the effect is that your brain perceives depth in the image.

So in order for you to render your fantastic new animation of a jelly having an anvil dropped on it, IN FULL 3D!!!!oneone!!11, you need only have two cameras.

Quick(ish) test. Load up your favourite 3D package (It’s Blender isn’t it? Isn’t it? I bet it is.) and render out a scene. For this experiment it makes sense to have a square or portrait orientation.
Save the image.
Go back to the scene and shove the camera over to the left or right (with respect to its orientation) by a few centimetres, keeping the focus of the camera the same if you can (for anything but close foreground work this probably won’t matter too much), and render it out again.
Save that image and open up your favourite graphics package (It’s GIMP isn’t it? Isn’t it? I bet it is.) and position each of those images side by side. Zoom out if necessary (it might be at first but with practice you can do pretty big stereo pairs) and go cross-eyed such that your left eye is looking at the right image and your right eye is looking at the left image.

Where the two ghostly images come together in the middle, you should get the depth effect. If the effect seems to be reversed (i.e. things that were closer to the camera look further away than they should), swap the order of the images round and try again.
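If you’d rather script that than nudge the camera about by hand, here’s a rough sketch (the 0.065 separation and the output paths are assumptions to tweak, and the @ operator wants a reasonably recent Blender):

```python
import bpy
from mathutils import Vector

scene = bpy.context.scene
cam = scene.camera
sep = 0.065  # eye separation in scene units; adjust to taste

for name, side in (("left", -1), ("right", 1)):
    # Shove the camera along its local X axis by half the separation.
    offset = cam.matrix_world.to_3x3() @ Vector((side * sep / 2, 0, 0))
    cam.location += offset
    scene.render.filepath = "//stereo_%s.png" % name
    bpy.ops.render.render(write_still=True)
    cam.location -= offset  # put it back before doing the other eye
```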

For extra credit, do it on a piece of paper. Get a pencil and draw two boxes, as close to the same dimensions as you can manage, side by side.
Draw the same object in each box, but position one of them slightly off to the left or right with reference to its box.
Go cross-eyed such that your left eye is looking at the right-hand box and your right eye is looking at the left-hand box, and prepare to have your mind blown (until you realise that it’s pretty simple optical mechanics really, and then you’ll calm down a bit, and then you might wonder how your brain does all this stuff on the fly and your mind will once again be blown, only this time it’ll be due to contemplating the power of your mind… which is blown… meta or what?).

It was at this point that I tried to do the ASCII art demonstration in a G+ comment and I buggered it all up.
I shan’t be trying that this time.
Maybe once it’s saved and published.
I might come back with some Blender made stereo pairs also.
Edit: Here we go! An example! And you can quite plainly see that it is, in actual fact, a sailboat.

[Image: SailboatStereoPairExample]

So in theory, all you need to do is render your scene/animation from two viewpoints to represent the two eyes in the head of the person doing the looking, and present them to the two eyes of the person viewing the scene/animation.

In practice, exactly how you achieve that last step is down to what you want to use to view the image/animation. I have no clue whatsoever how your particular 3D TV expects its data to be presented, but the answer will be out there somewhere.
I know that NVidia’s 3D vision system is pretty flexible about whether you put the images side by side, over/under, or interlace them in some way, but personally I prefer side by side, right eye first. That way I can test it by just going cross-eyed.
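Stitching the pair together is trivial with any image library. Here’s a sketch using Pillow (the filenames are assumptions), with the right eye’s view on the left so the result works with the cross-eyed method:

```python
from PIL import Image  # pip install Pillow

left = Image.open("stereo_left.png")    # left eye's view
right = Image.open("stereo_right.png")  # right eye's view

# Right eye's image goes on the left for cross-eyed viewing.
pair = Image.new("RGB",
                 (left.width + right.width, max(left.height, right.height)))
pair.paste(right, (0, 0))
pair.paste(left, (right.width, 0))
pair.save("stereo_pair.png")
```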

Some people are able to diverge their focus, so the left eye looks at the left image and the right eye looks at the right, but I find that very difficult to do on command. Much harder than the alternative. In fact, the only reliable way I’ve found to do that is to actually look at something much further away than the image then slide the image into view. This works a treat with those old magic eye books because you can look over the top of the book out the window then bring the book into view and relax into the right separation for the image.

Why they tended to be mostly the divergent type, I’ve no idea.

Incidentally, those magic eye pictures are random dot stereograms. They don’t use two images as such; they use vertical strips of a repeating but semi-random pattern that is then “shifted” left and right in the shape of the object they want to show in depth.
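They’re surprisingly little code, too. Here’s a toy ASCII version of the idea (all the numbers are arbitrary choices of mine): seed one strip with random characters, then copy each character from one strip to the left, pulling the copy slightly closer together wherever the hidden shape should stand out. Viewed with the divergent method, the rectangle should float in front of the page:

```python
import random

WIDTH, HEIGHT = 100, 30  # output size in characters
STRIP = 12               # width of the repeating pattern
MAX_SHIFT = 3            # shift for the nearest parts of the hidden shape

def depth(x, y):
    # Toy depth map: a rectangle floating in front of the background.
    return 1.0 if 35 < x < 65 and 8 < y < 22 else 0.0

for y in range(HEIGHT):
    row = []
    for x in range(WIDTH):
        if x < STRIP:
            row.append(random.choice(".:*#o"))  # seed the first strip
        else:
            # Nearer surfaces get a smaller left-right separation.
            shift = int(depth(x, y) * MAX_SHIFT)
            row.append(row[x - STRIP + shift])
    print("".join(row))
```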

So there.

That’s how.

Dave

(edited to remove split infinitive)

Stress Maps – Continued

If you read my previous post about stress maps, you’ll find I’ve put a link to a blend file in there.

Well, if you download that, then use the “Animated Render Baker” addon (http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts/Object/Animated_Render_Baker) to bake out the first hundred frames of the texture on the “wormy” object. Next, load the blend file linked below and plug the first of those baked textures into the image texture node on the “wormy” object’s material. Take a look at the render output, and you should see that the diffuse shader with the wrinkly normal map is only being mixed in where the bends occur, driven by a shape key.

Basically (heh), there’s a driver attached to the “offset” value of the image sequence texture node in the material, such that when the value of the shape key changes it does the maths: frameOffsetToUse = valueOfShapeKey * numberOfFramesIHave. Given that the shape key goes from 0 to 1, you get a frame range of 0 to numberOfFramesIHave. This means that when the shape key is fully “on” you get the fully bent texture from the baked sequence, when it’s fully “off” you get the fully relaxed texture, and you get all the gradation in between.
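For the curious, here’s roughly what setting that driver up from Python looks like. Treat it as a sketch rather than gospel: the object, node and shape key names are particular to my file, and the exact data path can vary between Blender versions.

```python
import bpy

obj = bpy.data.objects["wormy"]
mat = obj.active_material

# Drivers on node properties live on the node tree, addressed by data path.
fcurve = mat.node_tree.driver_add(
    'nodes["Image Texture"].image_user.frame_offset')
drv = fcurve.driver
drv.type = 'SCRIPTED'

var = drv.variables.new()
var.name = "key"
var.targets[0].id_type = 'KEY'
var.targets[0].id = obj.data.shape_keys
var.targets[0].data_path = 'key_blocks["Bend"].value'

NUM_FRAMES = 100  # how many frames were baked out
# frameOffsetToUse = valueOfShapeKey * numberOfFramesIHave
drv.expression = "key * " + str(NUM_FRAMES)
```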

Video Demo: http://youtu.be/69_aZf2fjgM

Link to second Blend File: https://docs.google.com/file/d/0BworKSfkE1_AaV9GbUtOUHU5M3c

Stress Maps – Interesting Stuff

I’ve been playing with Stress Maps in Blender.
Basically, what I’ve been trying to do is set up a texture such that faces which are squished are one colour, and faces that are stretched are another colour.
I just got this working quite neatly.
Now, I can’t explain why the blend stops have to be where they are on my gradient to make it go from black to (nearly) white on this level of squish and stretch, but I toyed with it for a while and it looks pretty good.

The next thing to do would be to unwrap the object to a nice UV layout, bake out each of the frames to a sequence, then load that sequence in as a texture in Cycles, using the colour value to blend between two different shaders/shader noodles.
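In node terms that blend is just a Mix Shader with the baked texture plugged into its factor. A quick sketch of the noodle in Python (names hypothetical, and the wrinkly side would also want its normal map hooked up):

```python
import bpy

mat = bpy.data.materials.new("StressBlend")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

stress_tex = nodes.new("ShaderNodeTexImage")  # the baked stress sequence
smooth = nodes.new("ShaderNodeBsdfDiffuse")   # shown where black (squished)
wrinkly = nodes.new("ShaderNodeBsdfDiffuse")  # shown where white (stretched)
mix = nodes.new("ShaderNodeMixShader")
output = nodes["Material Output"]

links.new(stress_tex.outputs["Color"], mix.inputs["Fac"])
links.new(smooth.outputs["BSDF"], mix.inputs[1])   # Fac = 0 picks this one
links.new(wrinkly.outputs["BSDF"], mix.inputs[2])  # Fac = 1 picks this one
links.new(mix.outputs["Shader"], output.inputs["Surface"])
```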

Might be useful for wrinkles and suchlike.

Video Demo: http://youtu.be/YfXxZjw6djc

Link to Blend File: https://docs.google.com/file/d/0BworKSfkE1_AOHgwVUdmOWJ3Mjg

More bake sound to f-curves!

Wheeeee!

http://www.youtube.com/watch?v=jwhh3xj7dBo

I love the “Bake Sound to F-Curve” feature in Blender. Whenever I’m playing with shape keys or armatures I always end up doing something with sound.

I suppose it’s a bit shameful, then, that I’ve not yet figured out which frequency bands make for a good visualisation. No matter, it’s funny enough with my ham-fisted frequency selection.

I have been wondering though if the bake sound to f-curve function is still being worked on. I have no idea how much work has gone into it up to now, but I don’t remember seeing much happening over the past few releases on that front.

It’d be really nice to see it gain some default values for attempting to pull out drums/melody/vocals, which I’d have thought would be pretty similar frequencies across tracks, although I could be totally and unforgivably wrong on that.
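If anyone fancies experimenting, the operator behind that menu entry takes the band edges directly. Here’s a sketch; the band values are pure guesswork on my part, and it has to be run from a Graph Editor context with the target f-curve channel selected (the operator has also been renamed in some newer Blender versions):

```python
import bpy

# Guessed frequency bands for pulling individual parts out of a track.
BANDS = {
    "kick":   (40, 120),
    "snare":  (120, 400),
    "vocals": (300, 3400),
    "hats":   (6000, 16000),
}

# Bakes the amplitude of //track.wav within [low, high] Hz onto the
# selected f-curve (e.g. a keyframed shape key value).
low, high = BANDS["kick"]
bpy.ops.graph.sound_bake(filepath="//track.wav", low=low, high=high)
```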

I might take a look at some visualisation code. There must be some online that I could look at and see what bands they use…

Anyhoo, enjoy.

Music: “Interlude(Total Tea Time)” by Anamanaguchi
Textures: 3D Reference Face Loops by Athey on DeviantArt (http://athey.deviantart.com/art/3d-reference-face-loops-141698442) and something else that I munged together from other sources.
Models: Me (but obviously the one on the left was heavily influenced by the loop layout in the original picture by Athey)


On Me, Blender’s Interface, Andrew’s Podcast, and Naming Things

On Me

I call myself a programmer, mainly because I often find myself writing software and enjoying it.
I also attended university to study software engineering, but I needn’t have done so in order to call myself a programmer.

With that out of the way, anyone reading this will know all they need to know about whether to trust my opinion on which terms are common knowledge and which are known only to programmers and computer scientists.
The answer of course is not to trust my opinion on that matter at all.

On the other hand, having been actively involved in programming, software development and other nerdy pursuits for many many years, I’ve seen lots and lots of User Interfaces.
I’ve seen UIs that start out looking crappy and hard to work with and never get any better.
I’ve seen UIs that immediately look useable but soon show themselves to be limited, finicky and annoying.
The block, I have been round it.

On Blender’s Interface

So it is with all of the above stated plainly that I declare that I do indeed, for the most part, love Blender’s UI (post 2.49).
Take from that what you will. A programmer overlooking flaws in a UI for very geeky reasons is certainly nothing new, but I also have other reasons that I don’t think are so geeky and are actually mostly based on usability. I shan’t go into them unless pressed, but if you’re interested, they align neatly with those expressed by Sebastian König in this video: https://plus.google.com/110725366823695093673/posts/QhypqEYnZh6

On Andrew Price’s Podcast

I appreciate the frankness with which Andrew Price (http://www.blenderguru.com/) is approaching his issues with the UI in his podcast, I really, really do. Believe me when I say that the frankness is thoroughly appreciated.
What I don’t appreciate, however, is the apparent lack of research into the problems that he finds, which leads him to conclusions that I would say are flawed, such as “I don’t know this word, therefore it must be the wrong word, or some archaic and obscure word that nobody’s heard before”.
So if I’m being honest, I’d take the criticism of the UI more seriously if the people doing the criticising would just hop over that first hurdle, look it up, understand it, decide *at that point* whether it’s the UI that’s wrong or the user in that case, then take a more humble approach if required. It’s not surprising, given how I started the very first paragraph of this post, that I absolutely don’t subscribe to the theory that the user is always right. It’s simply not true. Sometimes the user is wrong, and listening to them is fine, but acting upon their requests would also be wrong.

On Naming Things, Propaganda and Other Stuff

Now, indulge me for a moment while I enumerate the list of things that I *would* like to mention with respect to user interfaces, the idea of uniformity across packages and a few other things.

1. In my opinion, just because best selling package A does a UI element one way, doesn’t mean that’s the best way to do it and anyone who says that it does is peddling propaganda and should stop it right now.
“Best” is also a totally subjective term. It depends what you’re after. Which leads me on to…

2. Being the kind of guy that I am, I want the UI to be optimal with respect to speed, while retaining any elegance possible after that optimisation. Furthermore, and this is probably my nerdiness showing through, I hate using the mouse when it technically isn’t necessary. Compared to the keyboard I find it slow, inaccurate and basically overkill when I’ve got hundreds of key combinations at my fingertips that don’t require me to navigate to a point on the screen with an analog (roughly, you know what I mean) pointing device. Give me keyboard shortcuts. They’re fast, and work well for a lot of tasks.

3. If you have difficulty remembering what something does, the name should tell you, *but*, and this is a big *but*, it should be the proper name. It should be a word that, even if you don’t know exactly what it means at the time, should you look it up in a dictionary, would be so accurate and so close to the actual function that the button enables that there’s no ambiguity at that point as to what that button achieves.
If you take this standpoint and run with it, then yes, your users may have to look something up once in a while, but probably only once. Then they’ll know a new word, one they should probably already have known anyhow; they’ll know *precisely* what it means in that context; they’ll remember what it means, I suspect; and they’ll perhaps appreciate the opportunity to learn more about the field in which they’re working by using your software.

4. If your software is built for working in a technically dense field i.e. there’s a depth to the knowledge required to fully understand the field that you wouldn’t expect a novice to intuitively grasp, don’t try to dumb it down too much unless you are *absolutely sure* that you won’t cut off access to advanced options and features.
It’s not a good enough argument to say that you shouldn’t, for instance, label a slider “normal” and place it within the “velocity” pane of the “particle” settings because a user wouldn’t immediately guess that that slider controls the “velocity” of a “particle” along the “normal” axis. That’s not good enough in my opinion, because the user WILL WANT TO KNOW THIS if they plan on using the feature. In this example, if the slider in question was simply called “velocity”, or worse yet “speed”, how on earth would you define in which direction you expect the particle to travel? (There’s a little sketch of this after the list.)

5. As a user, have some respect for the options that have been given to you. The default settings may not be optimal for you. It’s highly unlikely that *any* set of default options would be optimal for everyone.
You are given choices however.
Choose.
Look at the options.
Change them, try them out, save them if you like them.
If you don’t learn how to customise a piece of software in a very basic way like this, then you will be forever at the mercy of the software that you choose to use, and the way it presents itself by default.

6. Keep an open mind. If it takes a long time to get used to something, that doesn’t mean that it isn’t worth getting used to. I’ve often thought that cars would be easier to control with a joystick and buttons, rather than a wheel and pedals, but if I ever do learn to drive, I won’t say the wheel and pedals are dumb because I don’t know how to use them right away. It takes time to learn things, but if they’re worth learning, you’ll be glad you spent the time.
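To put a bit of code behind the particle example from point 4: that well-named slider maps straight onto an equally well-named property (assuming the active object already has a particle system):

```python
import bpy

# The "Normal" slider in the particle Velocity panel is exactly what it
# says: emission velocity along the face normal.
psys = bpy.context.object.particle_systems[0]
psys.settings.normal_factor = 2.0
```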

In summary

I’m a fan of the Blender UI. I find that it allows me to work really, really quickly on most tasks and doesn’t often slow me down on anything that I can personally think of a better way of doing off the top of my head. I’ll accept that it’s tough to get into, but you know what? CG is a pretty technically complex topic.
I didn’t expect it to be super easy to do straight off the bat with no prior knowledge at all.

*Goes off to mumble grumpily about this sort of thing*

Dave

C-64

I built a model of a C-64 in Blender some time ago now and had been holding off putting pictures of it online.
I didn’t get around to modeling the ports but other than that I think it looks pretty decent.
The keys were a pain in the butt, and I ended up having to redo them a couple of times.
Also, I hadn’t realised (silly me) that there were a couple of styles of case for the 64. The first one that I built (kind of more wedge shaped, like an Amiga 500) was for the C-64c I think, and that had different keys that I didn’t have any texture source for, so I ended up redoing the case to fit the keys rather than the keys to fit the case.
So, here it is.

[Image: c64FinalBareRender]

Fedora

I had a quick go at modelling a fedora this evening. I’m hoping it’ll become a prop for a bigger project, but I need to think it through and make sure I know what I’m doing (which is rare).
[Image: Fedora]