Month: November 2013

Thanks Mum!

About a year or so ago now, I’d guess, I started getting (back) into Blender in a fairly big way.
I followed tutorials and built some fairly naff stuff, but I got better, I think.

One of the things that helped immeasurably was that my Mum had a Wacom Intuos 3 that she wasn’t using a great deal and, to my eternal gratitude, passed on to me.

I’ve been very thankful for that ever since, but may not have expressed it as much as I perhaps should.

What got me thinking about this was that I’ve been practicing sculpting recently and, for a change of pace, decided to try doing the base mesh by hand to as great a level of detail as I could manage, with no reference images or sculpt tools allowed.

So using only the mouse and keyboard I got this far:

[Image: vertModelledHead]

Then checked out what it looked like subdivided:

[Image: subDivVertModelledHead]

And it was at that point I said “right, enough is enough, let’s get those sculpt tools out”, grabbed the stylus and got cracking.

About two minutes later I had this:

[Image: sculptedSubDivVertModelledHead]

So, thanks Mum! 😀
I guess I’ll never know how far I’d have progressed in my work/play/practice/learning in this field without the tablet, but I can tell you with some certainty that I would very likely have given up.

Dave

Glowing Stuff!

I like glowing things. Little floaty lights make me smile.
With that in mind, I started having a play with some particle effects that I could “glow up” in the compositor.

In short, I created a plane to use as a dynamic paint canvas, set to vertex and wave mode so that the particles made ripples in the surface when they hit it. Then I made another plane above it, set it up as a dynamic paint brush in particle mode, and added a particle system.

I then created a small icosphere and used it as the particle duplication object, lessened the effect of gravity and cranked the random initial velocity.

I set up an emissive material for the particles and a glossy/glass mix for the canvas surface, gave the particles lots of frames to fall down, rendered it out and composited in some blurring on the emission pass and a few other bits to get the look I was after.
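
If you fancy poking at the same setup from the Python console, a rough sketch of the canvas/brush arrangement might look something like the one below. This assumes a 2.8x-era bpy API and made-up object names and values; property names have shifted between Blender versions, so treat it as a guide rather than gospel.

```python
import bpy

# Canvas: a subdivided plane in vertex + wave mode so particle hits ripple it.
bpy.ops.mesh.primitive_plane_add(size=10)
canvas = bpy.context.object
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.subdivide(number_cuts=64)       # wave canvases need plenty of verts
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.modifier_add(type='DYNAMIC_PAINT')
bpy.ops.dpaint.type_toggle(type='CANVAS')
surface = canvas.modifiers["Dynamic Paint"].canvas_settings.canvas_surfaces[0]
surface.surface_format = 'VERTEX'
surface.surface_type = 'WAVE'

# Emitter/brush: a plane overhead with a particle system, used as a
# dynamic paint brush in particle mode.
bpy.ops.mesh.primitive_plane_add(size=10, location=(0, 0, 6))
emitter = bpy.context.object
bpy.ops.object.particle_system_add()
psettings = emitter.particle_systems[0].settings
psettings.count = 500
psettings.factor_random = 2.0                # cranked random initial velocity
psettings.effector_weights.gravity = 0.1     # lessened gravity

bpy.ops.object.modifier_add(type='DYNAMIC_PAINT')
bpy.ops.dpaint.type_toggle(type='BRUSH')
brush = emitter.modifiers["Dynamic Paint"].brush_settings
brush.paint_source = 'PARTICLE_SYSTEM'
brush.particle_system = emitter.particle_systems[0]

# Small icosphere instanced on each particle.
bpy.ops.mesh.primitive_ico_sphere_add(radius=0.05, location=(0, 0, -50))
sphere = bpy.context.object
psettings.render_type = 'OBJECT'
psettings.instance_object = sphere
```

The emissive particle material and the glossy/glass canvas mix are just ordinary node materials layered on top of that.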

The result is below 🙂

So after that, I was reading some bits and pieces and was reminded about the Particle Instance modifier. I had a quick play, used that along with hair mode on the emitter, and ended up with some pretty nice tendrils which worked well with my materials and compositor setup in individual frames; an example is shown below.

[Image: LightTendrilTestFrame]
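
For the record, wiring that up from Python rather than the modifier panel boils down to something like this (the object names are hypothetical, and it again assumes a 2.8x-era bpy API):

```python
import bpy

emitter = bpy.data.objects["ParticleEmitter"]   # hypothetical names
tendril = bpy.data.objects["TendrilMesh"]

# Hair mode on the emitter so the strands persist as paths.
emitter.particle_systems[0].settings.type = 'HAIR'

# Instance the tendril mesh along each strand with a Particle Instance modifier.
mod = tendril.modifiers.new(name="ParticleInstance", type='PARTICLE_INSTANCE')
mod.object = emitter
mod.use_path = True    # deform the instanced mesh along the hair path
```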

What they didn’t work very well with was the dynamic paint, which I’m putting down to my settings, or the fact that most of the interpenetration of the tendrils and the canvas is edge-face rather than vertex-face. (Further info on that if requested.)

The result was underwhelming, but posted below for completeness.

How Important *IS* Hardware?

Ay up.

I was listening this evening to Mr. Price’s latest podcast entitled “How Important is Hardware?” and reached the unofficial end of the piece struggling a little to maintain my normally unshakably placid demeanour.

*snrk*
Only kidding, I was raging, as always, without fail.
Ranty opinion piece follows.

Throughout those first ten or so minutes, while not gratefully receiving the wisdom not to walk into any old Asian massage parlour, I actually spluttered to myself “but… but… what about…”, and then held my tongue and continued listening, only for the same thing to occur moments later.

Don’t get me wrong, the overall message is one of hope, and that’s lovely. To say to all the “budding artists” that it doesn’t matter if you can’t afford high-end hardware is, for the most part, a noble platitude (or cynical coaxing? …nah…) but for one thing: it’s just not true.
The power of your rig will influence how and what you learn and how and what you produce, more or less radically, depending on the circumstances.

The *most* important thing, literally, the *most* important thing is to be comfortable using your own machine. To know the limits of your rig is to prevent disaster (lost work from crashes), to ease frustration (understand why that operation takes so *damned long*), to optimise workflow (easy on the verts there tiger) and have a happier experience all round.
It’s simply not true to say that hardware isn’t that important. It absolutely *IS* that important, just not in the way the headline might lead you to think. You don’t need the fastest clocked CPU, the largest, zippiest RAM money can buy, or even a graphics card that keeps your local power plant in business. What you need, what you absolutely must have, is hardware you’re comfortable with, and an understanding that a beefier graphics card won’t magically make you better at art (which is probably what Andrew was getting at, but isn’t the whole story).

I’m not supremely artistic.
I can’t paint worth a damn, and the best thing I’ve ever sculpted in real life is a blu-tac teddy bear. I know my limitations, and for that reason (among others) I haven’t gone out to buy oil paints. I might, one day, if I have the time to spare, but I don’t expect great things so I’m not going to invest just yet.

What this past year using Blender has taught me is that I do have a little aptitude for some things and if I practice, I might get better at them.

What I feel I have a better than average aptitude for, though, is knowing what my machine and I are capable of. It doesn’t expect me to be able to spread paint on a canvas in a pleasing formation and I don’t expect it to be able to handle forty million vertices with any kind of fluidity (see footnote). It doesn’t expect me to be able to chisel a new Venus de Milo and I don’t expect it to be able to load twenty 8k textures at the same time without straining a little (numbers mostly pulled from somewhere sunless, but you get the gist).

The point is that I’ve come to find my preferred group of tools and techniques based primarily on what we two are able to do together; what I can work with, and what my machine can manage.

So the importance is this; your hardware could, and probably will, shape your usage of any software.
If you’re lucky enough to have CUDA cores out the wazoo, you’ll do certain things that someone with fewer resources will not do. This can be good or bad on either side of the equation. Only the other day I saw a tutorial showing how to model a length of rope in a way that ended up spawning tens of thousands of vertices, where a simple repeating normal map (or even a shader-level displacement rather than the mesh-level one they used) would have achieved much the same result with a tiny fraction of the memory/CPU footprint. The same person who is able to handle scenes using this technique, adding verts willy-nilly, will undoubtedly also be able to do some very fine sculpting work on their rig. But if they never learn to work within the constraints of lower-spec hardware, they may well miss out on some approaches that can be helpful in other situations.
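
By way of illustration (not the tutorial’s method, just a sketch of the alternative I mean, using a procedural wave texture as a stand-in for a proper tiling rope texture, and assuming a newer Cycles with its dedicated Displacement node):

```python
import bpy

# A repeating pattern driving shader-level displacement instead of real geometry.
mat = bpy.data.materials.new("RopeWithoutTheVerts")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

tex = nodes.new("ShaderNodeTexWave")       # stand-in for a tiling rope height map
tex.inputs["Scale"].default_value = 20.0

disp = nodes.new("ShaderNodeDisplacement")
disp.inputs["Scale"].default_value = 0.05

links.new(tex.outputs["Fac"], disp.inputs["Height"])
links.new(disp.outputs["Displacement"],
          nodes["Material Output"].inputs["Displacement"])
```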

That last section, as I read it back to myself, sounds a bit like “Worry not, poor penniless Blenderhead, you’ll learn more than those rich folks!” which wasn’t my intention. I’m not trying to make anyone *feel* better: not the poor about the experience they may gain, nor the rich about the freedoms they may enjoy. I simply want to say that while it’s not exactly the hardware itself that’s important, the hardware, and your relationship with it, will be a major part of what will ultimately shape how you learn, what you learn, how you produce and what you produce.

Saying otherwise is to ignore some pretty obvious facts and is more than a little misleading.

Dave

Embarrassing footnote: Yesterday evening I joined together three meshes, one of which had over 30k verts, and only realised with horror in the moments thereafter that one of the meshes I’d joined had five or so levels of multires subdivision on it, and the selection order had been such that it was now attempting to subdivide my 30k+ vert mesh five times. This doesn’t invalidate my point though, because once I realised what I’d done, I had no illusions about why my machine had just ground to a halt, so there 😛
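
(Back-of-the-envelope: each multires level roughly quadruples the geometry, so five levels on a ~30k vert mesh lands somewhere in the region of 30,000 × 4^5 ≈ 31 million verts. No wonder it ground to a halt.)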

Analogy Circus!

I’ve been hearing a lot about the “Dark-web” in the news recently.
I’m not one to get irate about a stupid term (Snrk! Yes I am, it happens *all* *the* *time* :P), but it seems a little silly to me to categorise a whole subset of documents as nefarious or illicit based solely on their availability.
Imagine for a moment the last post-it note you scrawled something on.
Unless you immediately took a photo of that note and placed it online for all to see, along with transcribed text so it’s easy to index and find, then you’re participating in “DARK-POST-IT” activity.
Doesn’t that sound ridiculous?
It does to me.
Sure, “non-indexed web-site” or “non-referenced filestore” aren’t as catchy, but I tend to swing towards not vilifying a group if there are obviously sections of said group that shouldn’t be vilified.

Also, isn’t the content of every single web-server and personal computer that is specifically protected from access part of the *blech* “Dark-web” by definition?

Just a thought, brought to you by the Department Of Non-ambiguous Terminology.
Remember, if you’re planning on generalising over a huge group with a single word or phrase, just DON’T™.

Dave

Blending in a Winter Wonderland

Last weekend I was thinking “Should I have a go at something seasonal?” and after thinking about it for a little while I still didn’t have any decent ideas, so instead I tried simulating *really* gentle snowfall with particles.

I threw a few brown cylinders on top of a white plane, stuck another plane above it and started playing with particle physics until I had what I thought was a fairly decent looking effect.

That was when things started to get interesting. I remembered I’d been talking with someone recently on G+ about Depth Of Field effects and, as I’d played with the effect previously, I thought I’d see how these particles looked when they were out of focus.

Hours passed really quickly, and before I knew it I’d textured and displaced the ground plane, done the same to the “trees” which were looking a bit more like trees (sans quotation marks), put a sky environment in, started building props, set up some other materials and stuff, parented the camera to an empty so I could have it track nicely, and was slinging a focus empty about like a madman.
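
For reference, the focus-empty trick boils down to a few lines of Python if you’d rather script it (2.8x-style API; the empty’s name and the f-stop are just values picked for the example):

```python
import bpy

cam = bpy.context.scene.camera

# An empty to sling about; wherever it sits is where the focus lands.
focus = bpy.data.objects.new("FocusTarget", None)
bpy.context.collection.objects.link(focus)

cam.data.dof.use_dof = True
cam.data.dof.focus_object = focus
cam.data.dof.aperture_fstop = 1.8    # lower f-stop = shallower depth of field
```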

The end result is shown below, and I think it looks pretty neat.

Now, there are bits of the story I’m missing out here. There was a point where I’d rendered over 30GB worth of frames that were of little or no use whatsoever. There was another point where I left it rendering overnight and ran out of disk space about half an hour in. There was the time I altered the camera path and effectively “re-shot” the whole thing really late at night and made an utter balls-up of it.

I won’t go into great detail about those times. Suffice it to say that it was still worth it IMHO.

I really like this little clip, and it makes me feel better to watch it and think about what I might make next.

😀

Dave

Epiphany

*PERSONAL OPINION ALERT*
OK, I’ve figured it out.
I know why Blender’s UI is good and other interfaces can go die in a ditch somewhere.
Stick with me here, I actually did do some research.
I was keeping one eye on G+ today and someone asked about rendering for 3D displays. I was going to comment on that post, but I buggered it right up and lost everything I wrote, so I got a bit miffed and wrote up what I was going to say on this site instead; it can be read here if you care about the subject at all.
Anyhow, the questioner was using Lightwave 11. Now, I’ve used Lightwave before, but at the time it was a demo version on my Amiga, so you can Imagine (snrk, I actually preferred Imagine at the time) how long ago that was.
I figured I’d have a look and see what Lightwave was up to these days.
NewTek offer a 30-day free trial of Lightwave and after a few minutes I was up and running.
Within about five seconds of loading up the modeller application, it hit me like a bolt of lightning, and now I know, specifically and absolutely, why I think all non-Blender-style UIs make me so frustrated that I want to smash them to tiny bits.
They *RUIN* my mouse.
That sounds odd, I know, but if you’ve read this far, maybe you’ll hang on a little longer to see what I mean.
OK, so when you want to move a bunch of verts in Blender, you select what you want to move (or grab if you will), hit “g” on the keyboard then move your mouse and hit left mouse to confirm or right mouse to cancel.
What’s missing from that action? At least one click, and usually a click-and-hold, compared with other UIs.
In the Blender interface, once I’ve hit the key on the keyboard, the mouse becomes the active tool. It’s activated. It’s on. Get cracking.

You see, hitting the key on the keyboard WAS the click. Not only that, it was a STICKY CLICK! (not sure I like the sound of that, but still…) I can let go of it now, and until I’m done doing what I need to do, all I have to do is move the mouse.
There’s still a choice left to be made of whether I wanted to actually *do* the thing I’m doing or not, but if I didn’t, I needn’t ctrl-z it under most circumstances (some exceptions apply); I can just right-click to cancel.
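
Incidentally, that press-a-key-and-the-mouse-goes-live behaviour is exactly what Blender’s modal operators do under the hood. Here’s a minimal sketch, loosely based on the standard modal operator template from the Blender docs (the operator name and the 0.01 scaling factor are mine):

```python
import bpy

class OBJECT_OT_simple_grab(bpy.types.Operator):
    """Press to start, move the mouse, LMB confirms, RMB/Esc cancels."""
    bl_idname = "object.simple_grab"
    bl_label = "Simple Grab (sticky-click demo)"

    def modal(self, context, event):
        if event.type == 'MOUSEMOVE':
            # The mouse is now "live": no button held, movement does the work.
            delta = event.mouse_x - self.start_mouse_x
            context.object.location.x = self.start_x + delta * 0.01
        elif event.type == 'LEFTMOUSE':
            return {'FINISHED'}             # left click confirms
        elif event.type in {'RIGHTMOUSE', 'ESC'}:
            context.object.location.x = self.start_x
            return {'CANCELLED'}            # right click cancels, no undo needed
        return {'RUNNING_MODAL'}

    def invoke(self, context, event):
        self.start_mouse_x = event.mouse_x
        self.start_x = context.object.location.x
        context.window_manager.modal_handler_add(self)
        return {'RUNNING_MODAL'}

bpy.utils.register_class(OBJECT_OT_simple_grab)
```

Pressing the key (or running the operator) is the click; from then on the bare mouse does the editing, left click confirms, and right click or Escape bails out with nothing to undo.
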
OK, now contrast that with what I experienced in Lightwave. Same job, shift some verts.
I found out pretty quickly that “t” will get me into translate mode. Makes sense so far. Once in translate mode, the status bar tells me I can “Drag to translate in view plane. CTRL forces one axis motion”.
But now I have to actively hold the left button down for the duration of the repositioning, letting go of it to confirm the new position.
There *is* no “hold on, I didn’t want to do that” function, other than to let go of the left mouse button, confirming the action, then undoing it again.
This may seem like almost nothing from a UX standpoint if you’ve never experienced anything else, but the way I see it is that by jumping straight from pressing a key to having the mouse active in the current context, you’re effectively getting a *several hundred button mouse*, which sounds perfect for complex tasks with lots of different modes if you ask me.

I was going to carry this on to talking about on-screen buttons for navigation.
Well, actually I will, albeit briefly.
Don’t.
Just don’t.
I can’t think of a single reason why panning, rotating or zooming the view should be done by left-click dragging on a 16×16 pixel icon in the corner of the viewport.
It’s frankly ridiculous.
I know that’s not the only way to do it, and to each his own, but seriously, if you had to use that method to pan/tilt/zoom, everything would take about fifty times longer and you’d very quickly go insane.

This is the thing that broke me when I tried UnrealEd too. Actually click-dragging on manipulators is so slow and error prone as to be essentially a joke.
Sometimes it might be unavoidable, I’ll admit. By all means provide some manipulators for those corner cases where the user’s keyboard has caught fire or something.
But also provide a mature, admittedly different, but blazingly fast way to interact with your work, and I’m guessing that those who *can* get it, *will*.

3D Rendering

Someone asked how to render for 3D TV on G+ and I spent quite a while writing out a long comment about how it can be done and what you should pay attention to and so on and so forth.
Then I got a bit cocky and thought I’d try and do an ASCII art stereo pair.
Then I hit ctrl+shift+left arrow and my browser went back and lost my comment.
I whimpered at that point.

So let’s have another go.

3D footage isn’t super mysterious really. The information needed to display a 3D image is simply a view from the left eye and a view from the right.
By presenting each of these images to their respective eyes, via shutter glasses for example, or the good old cross-eyed viewing method, the effect is that your brain perceives depth in the image.

So in order for you to render your fantastic new animation of a jelly having an anvil dropped on it, IN FULL 3D!!!!oneone!!11, you need only have two cameras.

Quick(ish) test. Load up your favourite 3D package (It’s Blender isn’t it? Isn’t it? I bet it is.) and render out a scene. For this experiment it makes sense to have a square or portrait orientation.
Save the image.
Go back to the scene and shove the camera over to the left or right (with respect to its orientation) by a few centimetres (keeping the focus of the camera the same if you can, but for anything but close foreground work, this probably won’t matter too much) and render it out again.
Save that image and open up your favourite graphics package (It’s GIMP isn’t it? Isn’t it? I bet it is.) and position each of those images side by side. Zoom out if necessary (it probably will be at first, but with practice you can do pretty big stereo pairs) and go cross-eyed such that your left eye is looking at the right image and your right eye is looking at the left image.

Where the two ghostly images come together in the middle, you should get the depth effect. If the effect seems to be reversed (i.e. things that were closer to the camera look further away than they should be), swap the order of the images round and try again.
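
If you’d rather have Blender do the camera-shuffling for you, a quick and dirty version of the above from the Python console might look like this (assuming a reasonably recent bpy API; the eye separation and output filenames are just placeholder values):

```python
import bpy
from mathutils import Vector

scene = bpy.context.scene
cam = scene.camera
eye_sep = 0.065  # ~65 mm between "eyes", in Blender units (metres)

# The camera's local X direction, expressed in world space.
local_x = cam.matrix_world.to_quaternion() @ Vector((1.0, 0.0, 0.0))

original_loc = cam.location.copy()
for label, shift in (("left", -eye_sep / 2), ("right", eye_sep / 2)):
    cam.location = original_loc + local_x * shift
    scene.render.filepath = f"//stereo_{label}.png"
    bpy.ops.render.render(write_still=True)

cam.location = original_loc  # put the camera back where we found it
```

Stick the two renders side by side (right eye first if you’re going cross-eyed) and you’re done.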

For extra credit, do it on a piece of paper. Get a pencil and draw two boxes of as close to the same dimensions as you can, side by side.
In the boxes draw the same object in each, but position one of them slightly off to the left or right with reference to its box.
Go cross eyed such that your left eye is looking at the right hand box and your right eye is looking at the left hand box and prepare to have your mind blown (until you realise that it’s pretty simple optical mechanics really, and then you’ll calm down a bit, and then you might wonder how your brain does all this stuff on the fly and your mind will once again be blown, only this time it’ll be due to contemplating the power of your mind… which is blown… meta or what?).

It was at this point that I tried to do the ASCII art demonstration in a G+ comment and I buggered it all up.
I shan’t be trying that this time.
Maybe once it’s saved and published.
I might come back with some Blender made stereo pairs also.
Edit: Here we go! An example! And you can quite plainly see that it is, in actual fact, a sailboat.

[Image: SailboatStereoPairExample]

So in theory, all you need to do is render your scene/animation from two viewpoints to represent the two eyes in the head of the person doing the looking, and present them to the two eyes of the person viewing the scene/animation.

In practice, exactly how you achieve that last step is down to what you want to use to view the image/animation. I have no clue whatsoever how your particular 3D TV expects its data to be presented, but the answer will be out there somewhere.
I know that NVidia’s 3D vision system is pretty flexible about whether you put the images side by side, over/under, or interlace them in some way, but personally I prefer side by side, right eye first. That way I can test it by just going cross-eyed.

Some people are able to diverge their focus, so the left eye looks at the left image and the right eye looks at the right, but I find that very difficult to do on command. Much harder than the alternative. In fact, the only reliable way I’ve found to do that is to actually look at something much further away than the image then slide the image into view. This works a treat with those old magic eye books because you can look over the top of the book out the window then bring the book into view and relax into the right separation for the image.

Why they tended to be mostly the divergent type, I’ve no idea.

Incidentally, those magic eye pictures are random dot stereograms. They don’t use two images as such, they use vertical strips of a repeating but semi-random pattern that is then “shifted” left and right in the shape of the object they want to show in depth.
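
Just for fun, here’s a rough sketch of that strip-shifting idea in plain Python with numpy and Pillow. It’s certainly not how the original books did it, just the simplest version of the trick I can think of; the pattern width and maximum shift are arbitrary values.

```python
import numpy as np
from PIL import Image

def random_dot_stereogram(depth, pattern_width=90, max_shift=25, seed=0):
    """depth: 2D float array in [0, 1], where 1.0 means 'closest to the viewer'."""
    rng = np.random.default_rng(seed)
    h, w = depth.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            if x < pattern_width:
                # Seed the first strip of each row with random dots.
                out[y, x] = rng.integers(0, 256)
            else:
                # Copy from roughly one strip back; nearer pixels copy from a
                # shorter distance, and that shorter repeat reads as "closer"
                # when viewed divergently (magic-eye style).
                shift = int(depth[y, x] * max_shift)
                out[y, x] = out[y, x - pattern_width + shift]
    return Image.fromarray(out, mode="L")

# Example: a square floating in front of a flat background.
depth = np.zeros((300, 400))
depth[100:200, 150:250] = 0.8
random_dot_stereogram(depth).save("stereogram.png")
```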

So there.

That’s how.

Dave

(edited to remove split infinitive)