I can’t control how you see my world.

Every now and then Second Life seems to get into this ‘THINGS ARE HAPPENING’ atmosphere, and that’s what’s happening right now.

Those in the know will know that we are approaching the 10th anniversary celebrations. Oculus Rift is just around the corner, as is the newly named ‘Experience Keys’ system, the Materials System and the Shining project fixes… Of course, corners in SL aren’t exactly right angles and can be quite curvy, so travelling past said corner can sometimes take years.

But even with all these exciting new things arriving, I’m still not happy yet, Mr Humble, so here are two things I’m still gonna be waiting for in order to make really good interactive, immersive experiences in SL.

 

1. Animated Objects

Pathfinding: great feature, awesome AI. But I can’t get my animated creatures to appear. I’m getting increasingly fed up with making multiple posed objects and having to script the transparency of faces to simulate animation. I currently import rigged objects that copy the animations of the avatar, so why can’t we animate objects without them having to be attached to an avatar? I would have backed Karl to create code for that rather than the stupid Fashionista Deformer. People are already building these assets; we just need the dots connected.
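To show what I mean by scripting transparency, here’s a minimal sketch of that flipbook workaround, assuming a linkset where a handful of posed links each hold one frame (the link numbers and frame count are assumptions):

    // Flipbook workaround: each posed link is shown in turn by
    // flipping face transparency. Link numbers are assumptions.
    integer gFrame = 2;  // first posed link (link 1 is the root)
    integer FRAMES = 4;  // assumed number of posed links

    default
    {
        state_entry()
        {
            llSetTimerEvent(0.25); // roughly four "frames" per second
        }

        timer()
        {
            // hide every posed link, then reveal the current frame
            integer link;
            for (link = 2; link < 2 + FRAMES; ++link)
                llSetLinkAlpha(link, 0.0, ALL_SIDES);
            llSetLinkAlpha(gFrame, 1.0, ALL_SIDES);

            ++gFrame;
            if (gFrame >= 2 + FRAMES) gFrame = 2;
        }
    }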

Simplify the animation process by reusing the assets we already use for rigging mesh clothing and avatars, and we’ll have Hobyahs that animate a thousand times better and more efficiently.

2. Materials and Windlight

In the right lighting environment, custom normal and specular maps can make for a really detailed, immersive atmosphere to rival AAA games. But that’s the thing: I can’t control how you have your atmosphere (windlight) settings. I could create a wonderful dark cave with specular mapping optimised perfectly for being seen by torchlight at midnight, or under a custom windlight setting. Then you come along with your settings set to midday and a stupid face lamp, ruining EVERY bit of atmosphere I crafted. GAAHHHH! Perhaps now is the time to give me perms to control your viewer-side windlight settings; I already have perms to animate you, control your camera view and teleport you. Give me perms to change your windlight settings and I can give you the best atmosphere I can at any given time, whether it be when entering a dark cave or transporting into space.
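To make the ask concrete, here’s a rough sketch of how such a perm could look, modelled on the existing run-time permission flow. Both PERMISSION_CHANGE_ENVIRONMENT and llSetAgentWindlight are invented names; nothing like them exists in LSL today:

    // Hypothetical sketch only: PERMISSION_CHANGE_ENVIRONMENT and
    // llSetAgentWindlight are invented names, following the pattern
    // of the existing animation and camera permissions.
    default
    {
        touch_start(integer n)
        {
            llRequestPermissions(llDetectedKey(0), PERMISSION_CHANGE_ENVIRONMENT);
        }

        run_time_permissions(integer perms)
        {
            if (perms & PERMISSION_CHANGE_ENVIRONMENT)
            {
                // apply a named preset to the visitor's viewer
                llSetAgentWindlight(llGetPermissionsKey(), "Midnight Cave");
            }
        }
    }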

 

I know what I want to be able to do in SL, and I know what sort of experiences I want to give my friends in SL; we’ll get there eventually 🙂

17 thoughts on “I can’t control how you see my world.”

  1. Given that the whole-region-only implementation of Windlight controls has already been done (after how many years?), I think the second is less likely than the first. We know the Lab doesn’t fix things that have been deployed. They do do new shiny.

  2. I totally agree with everything you said! Especially the windlight settings. I had a space-themed sim for a while, but 80-90% of visitors forced midday (IN SPACE), totally ruining the setting and leaving some people confused about what the theme even was… Somehow this would have to be controlled server-side, though; otherwise I’m sure the third-party viewers (IMO copy-paste viewers) will manage to block the packet that tells the viewer the windlight setting… I wish the Lab would also just put everyone on one viewer. It would give better control, support, function and compatibility between residents, servers and the Lab. I’m sick of trying to explain to 1.x users on Phoenix why interactive media doesn’t work and such… That’s my 2 cents. 🙂

  3. The vision-impaired thing is an interesting and valid point. I would imagine there will always be an override somewhere, or if I’m really smart I’ll offer a version of my experience for the visually impaired.

    • What would be cool is if there were a script that allowed you to share windlight settings. Place it in a sign, a visitor touches it, and the visitor gets a popup prompt asking if they would like to change windlight settings. If they choose yes, the windlight is shared, and a prompt asks if the visitor would like to keep that setting, in case they can’t deal with it.
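      As a rough sketch, the consent flow is all standard LSL (llDialog plus a listener); the one piece that doesn’t exist is the final call that actually pushes a windlight preset to the visitor’s viewer, marked as a placeholder below:

          // Sketch of the sign described above. The dialog flow is
          // real LSL; the final "apply windlight" step has no
          // supported call, so it is only a placeholder here.
          integer gChannel;

          default
          {
              state_entry()
              {
                  gChannel = -1 - (integer)llFrand(1000000.0); // private channel
                  llListen(gChannel, "", NULL_KEY, "");
              }

              touch_start(integer n)
              {
                  llDialog(llDetectedKey(0),
                           "Change your windlight to match this build?",
                           ["Yes", "No"], gChannel);
              }

              listen(integer channel, string name, key id, string msg)
              {
                  if (msg == "Yes")
                  {
                      // placeholder: no LSL call exists to push a windlight
                      // preset to a viewer - that's the missing piece
                      llRegionSayTo(id, 0, "Applying this build's windlight…");
                  }
              }
          }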

      • I agree with the OP – the visual environment is obviously a part of the presentation, so the creator of a place needs that control.

        But I also battle visual impairment so I am glad to see that issue raised as well.

  4. Materials are already in the works.

    However, even once SL has the materials feature in a main release, it still won’t rival AAA games as much as you’d like to think. Second Life is still missing more render elements, such as…

    Animated particles
    Physical particles
    Pivot Position for particles (seriously…)
    Ribbon particle/string (this is in the works though.)
    Vector/2D Draw UI
    Displacement Map (not to be confused with Normals Map)
    RimLight Map (not to be confused with Specular Map)
    Tessellation
    Fresnel Map (not to be confused with Diffuse Map)
    Illumination Map
    Ref-Light Map (aka Refraction/Reflection/Environment Map)
    Light Map (not to be confused with AO shadow)

    The truth is… Second Life needs to ditch its current render engine and buy one of the top-notch engines out there, like CryEngine 3. That way they wouldn’t have to update the render engine anymore; they could let a third-party company do that for them.

    Too bad they’ve deeply rooted themselves into their messy viewer code anyway.

    • Hi Nacon! I’m the guy who wrote the original spec for materials and also did a fair bit of the rendering work. It’s also probably important to mention that I’ve actually worked with “top-notch engines out there, like CryEngine3”, including Unreal Engine 3, Unigine, and of course Unity as well. It’s probably safe to call me a game developer at this point.

      The best part is, most games (even the major ones) don’t make use of even half of these additional texture maps you list.

      In the industry, “AAA” is a term that effectively means it was funded from the start, as in the developer found a publisher, sold their game concept to the publisher, and developed the game from publisher funds.

      Here are the maps considered common in many nice-looking games:
      Diffuse maps
      Normal maps
      Specular maps
      Subsurface Scattering color maps (this one is only just recently picking up steam)
      And depending on if the hardware the game targets supports tessellation, Displacement maps (though there’s still a lot of games that don’t use these)

      As nice as animated particles are, not many games actually take full advantage of them. To me, if anything, other game engines have better tools for these, not so much better technologies for them (though I will admit it’d be nice if we had better lighting on particles, but that’s a different story entirely).

      Physical particles we sort of already have, and there really aren’t many good ways to do this in an online environment that’s always changing. Pivot position for particles is also a tools problem, not a rendering problem or general technology problem.

      UI is always something that will be in a state of flux, and frankly a vector UI wouldn’t make much of a difference. In fact, most games on the market are still using generic textures for their UIs anyways. Having actually seen how the viewer handles UI rendering however, there is something to be said with regards to how the viewer handles UI elements that needs improvement, but that’s unrelated to content creation since content creators don’t have the ability to create custom UI elements.

      Rim lighting is almost always combined with the diffuse map, and fresnel is also typically combined with the specular or environment. In SL, we combine rim based upon the position of nearby light sources with specularity so that the edges of objects that are just barely facing a light source appear to have a slightly brighter specular intensity. So we already have fresnel as part of SL’s BRDF (which to some mild degree, is physically based now), just not for the environment cube map.
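      (For the curious: the standard cheap fresnel term is Schlick’s approximation, F(θ) ≈ F0 + (1 − F0)(1 − cos θ)^5, where F0 is the reflectance at normal incidence and θ is the angle between the view direction and the surface normal; variations on it are what most real-time BRDFs use.)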

      Light maps only really work for scenes that can be guaranteed to stay static, and in SL we can’t guarantee that an object will so much as stay the same color. The moment an object moves, or the geometry of a scene changes, it ruins the illusion of illumination that light mapping provides.

      Environment maps would be nice, but right now there’s things that need to happen before environment maps can be properly supported. We didn’t have time for these in materials unfortunately. Though we did do emissive maps in a way similar to how engines like Unity and Unigine handle them.

      All in all, we have three of what you could consider “standard” or “increasingly standard” texture maps in SL now, and even then we decided to add additional capabilities to these textures to allow content creators to further customize their content. Here’s some of the points that you may not be aware of:

      Diffuse textures have an alpha mode! This lets content creators choose between alpha blending, alpha masking (not to be confused with blending: masked objects have sharp edges where they go transparent, whereas blended objects have soft edges), and an emissive mask, which is similar to Unity and Unigine’s idea of emissive textures.

      Normal maps can modify the sharpness of specular highlights with their alpha channels. This is known as specular exponent mapping. What this allows you to do is have softer specular highlights on one part of a surface, then having sharper highlights on another part of the same surface without having to use multiple prims or different materials to simulate multiple materials on the same surface.

      Specular maps can store environment intensity in their alpha channels as well, allowing content creators to determine how much a surface can reflect the environment cube map. It may not be as good as having a dedicated environment mask map that would also allow you to modulate the color of the environment map, but it works well enough for SL. This allows you to have parts of a surface be fully reflective, and parts of the same surface not be reflective at all (such as windows in a wooden frame for example, where the windows may be highly reflective and the wood may not be reflective at all).
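      For the script-side view of all three of the above, here’s roughly what it looks like with the materials-era primitive parameters, assuming the constant names from the materials project (the texture keys below are placeholders):

          // Rough sketch using the materials-era primitive parameters
          // (PRIM_ALPHA_MODE, PRIM_NORMAL, PRIM_SPECULAR); the texture
          // keys are placeholders.
          key NORMAL_MAP   = "00000000-0000-0000-0000-000000000001"; // placeholder
          key SPECULAR_MAP = "00000000-0000-0000-0000-000000000002"; // placeholder

          default
          {
              state_entry()
              {
                  llSetPrimitiveParams([
                      // sharp-edged transparency: mask rather than blend
                      // (cutoff is 0-255)
                      PRIM_ALPHA_MODE, ALL_SIDES, PRIM_ALPHA_MODE_MASK, 128,
                      // normal map: face, texture, repeats, offsets, rotation
                      PRIM_NORMAL, ALL_SIDES, NORMAL_MAP,
                          <1.0, 1.0, 0.0>, ZERO_VECTOR, 0.0,
                      // specular map: as above, plus colour, glossiness
                      // (0-255) and environment intensity (0-255)
                      PRIM_SPECULAR, ALL_SIDES, SPECULAR_MAP,
                          <1.0, 1.0, 0.0>, ZERO_VECTOR, 0.0,
                          <1.0, 1.0, 1.0>, 200, 50
                  ]);
              }
          }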

      On the subject of ditching the current renderer for a “top-notch” one such as CryEngine 3, I’ll say this much. There’s more to a rendering technology than what kind of fancy effects it simulates. Performance is a major factor when it comes to something like Second Life, where anything that would be safe to assume about content in a major game engine quickly goes out the window, because what people do with user-generated content is incredibly hard to predict, causing many interesting performance problems that typical games just don’t have to worry about. And that’s where many third-party game engines just don’t make the cut when it comes to SL: they all assume that the content going into the engine is very well optimized by individuals who have the appropriate practice and training to target the platforms where the game will be available, for the best possible performance of their game. Second Life differs in that you can’t reasonably expect everyone producing content to be as highly skilled as a professional 3D artist from a major studio like BioWare or id Software; in fact, you have to assume that there are far more people on SL who don’t know how to optimize content than there are who do.

      Because of this, Second Life is unique in this regard, and will probably never go for third-party middleware until there’s one that was purpose-built for dynamic user-generated content, makes zero assumptions about how well optimized that content is, and can perform even remotely as well as or better than what SL currently does on the matter.
      It’s funny how the best thing about SL is also the thing that makes it slower than other game engines, isn’t it? This is the price you pay for a dynamic environment.

  5. I’ve got to admit that even though I knew all this, I had to dumb it down to a level for other people who may not understand what we’re talking about right now.

    The thing about the performance: say Second Life had CryEngine 3; performance would actually have increased when only using Second Life’s current level of graphical features… They wouldn’t be able to use all of the supported features it has to offer, or the rest of the DX11 features for that matter. Engines like CryEngine or Unreal have their code already optimized, professionally.
    The render engine in Second Life is by no means completely optimized, as it’s still something of a developing learning process for Linden Lab.

    And maybe worst of all, it’s built on a (still) evolving OpenGL. I don’t like that, at least not for the long term. I know Second Life is always evolving, but building on top of another evolving development is always bound to cause some problems and generate less optimized code. I can understand you may not agree with this statement, but that’s how I see it, realizing they had better options before the viewer went open source.

    The point, and the problem, here is realizing that Second Life can never really catch up with “AAA” games in terms of graphical support, even granting the idea that it has the funding it needs to produce something. No sir, but hey… funding isn’t really an issue for Linden Lab right now.

    I do realize that most content creators don’t have the education to work with CryEngine or Unreal; chances are they won’t have that skill for Second Life either. They learned by doing things as a hobby, and to some degree they can’t carry those same skills over to another platform, as the platforms are different and dumbed down. If Linden Lab had gone with CryEngine or Unreal, they would have created more jobs and brought more skilled people into industry work. I’m willing to bet that most skilled creators in Second Life already learned their skills in CryEngine or Unreal.

    Now the fluff stuff.

    The problem with particles is one of many things Linden Lab could have handled differently. Physical particles can always be client-side to some degree, and there’s no point worrying about the ever-changing environment… that’s kind of the point of having a physics engine in the first place. It would mean they need an open-source physics engine to ship with the viewer (Bullet? ODE?). That calls for another strange hack-job, which they should have done long ago. I’m sure quite a few things could benefit from client-side physics. How about a car with reactive wheels, for thing one? Or hair? Clothes? Too much to go on.

    And when you say pivot position for particles is a tools problem… that comes off as quite absurd, because particles are only accessed by script (LSL). How hard is it to add [PIVOT, ] to the list params? This is actually a big deal for visual content use. This is one of those minor, easy features the Lindens tend to miss out and leave behind for so many years.
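    To illustrate, here’s an otherwise ordinary llParticleSystem call with the suggested rule slotted in; PSYS_SRC_PIVOT is an invented name for the missing parameter, everything else is standard:

        // Sketch: a hypothetical PSYS_SRC_PIVOT rule (invented name)
        // slotted into an otherwise ordinary llParticleSystem call.
        default
        {
            state_entry()
            {
                llParticleSystem([
                    PSYS_PART_FLAGS, PSYS_PART_EMISSIVE_MASK,
                    PSYS_PART_START_SCALE, <0.2, 0.2, 0.0>,
                    PSYS_SRC_PATTERN, PSYS_SRC_PATTERN_EXPLODE,
                    // hypothetical: offset the emitter's pivot half a
                    // metre up from the prim centre
                    PSYS_SRC_PIVOT, <0.0, 0.0, 0.5>
                ]);
            }
        }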

    Must Second Life guarantee objects are static for light map use? I kind of hope you were kidding; it sounded like that excuse alone was the reason why we can’t have it. (OK, maybe I’m taking that out of context.) There are a few uses for it that don’t have to be static. It’s no worse than creators who build their stores with light and shadow baked into textures, without using the real benefit of texture repeats; light maps could have cut down a lot of texture memory and eased Second Life’s network traffic.

    Overall, my experience with Second Life’s graphics rendering has always been pitiful and depressing: always working around limitations and lowering any high expectations. Sure, it’s always progressing and evolving, but painfully, a few years behind. Watching people get too greedy with the details they’re craving ends up causing more problems for Second Life’s network grid and people’s viewers, bogged down in typical everyday lag.
    I often try to help out as much as I can to get some of the things we need back on stage, but things just don’t always go smoothly. I’m sure you know how that goes.

    PS: In case you’re wondering, I do a lot of graphic art as a profession and scripting as a hobby.

  6. EDIT: Doh… in the part where I was talking about pivots for particles, this comment system made the “vector” tag disappear. So it should have been [PIVOT, vector], but whatever, you get the idea. 🙂

    • I used to be something of an artist at one point. Debatably, I still am to some degree. But my job title nowadays tends to be “Graphics Programmer” more often than not.

      Anyways, here’s yet another long winded comment!

      So let’s start with the performance argument. Yes, CryEngine 3 and Unreal are pretty optimized in terms of rendering performance. Let’s focus on how they’re able to squeeze so much out of the GPU, shall we?

      So in 3D rendering, there’s this thing called batching. The benefit of batching is that you can render a lot of objects in a single draw call. This is especially important for Direct3D, which tends to have a bit more overhead per draw call than OpenGL does.

      You can effectively break batching down into two variants: static and dynamic. Static batches never change; that is, you never have to redetermine whether object A belongs to static batch E consisting of objects B, C, and D. These objects will never change: they will never move, and they all share the same surface properties (such as textures). Static batching is very fast and actually fairly easy to implement, and provided a content creator takes advantage of the concept, they can produce pretty efficient content by reusing textures and ensuring that the surface properties of each object that gets batched are the same. And this is where the idea of static batching outright breaks in Second Life. Most people don’t take advantage of something like a texture palette applied to multiple surfaces in the scene to ensure that their content performs optimally.

      Dynamic batching is a little more complex. How dynamic batching works, is you have objects A through F, which may be able to move, and may not have the same surface properties. What you have to do for situations like this is compare these objects on the fly as you receive updates for them in the context of Second Life. For a few objects, this isn’t a problem, and works reasonably well in most games. For Second Life, you don’t just have a few that you have to worry about. You have sometimes tens of thousands of surfaces you have to consider for dynamic batching.

      This isn’t necessarily a technology problem as much as it is a combination of content creator and resident problems. Content creators can optimize with something like batching in mind, but then you can’t expect your average resident to know what batching is when they start placing whatever objects they want to rez out in the world. The viewer actually does what it can to account for cases like this without inhibiting what content creators can do and what residents can rez in-world. You may not want to believe this is the case because Linden Lab produced it, but I can assure you it is. Nowadays, the biggest performance bottleneck by far in Second Life is the number of draw calls the CPU has to submit. This is an accumulation of people either not knowing how to optimize their content or outright just not caring enough to do so. It also doesn’t help that you can’t expect residents to know what to look for in terms of well-optimized content.

      Another factor in the performance argument is the amount of data that gets pushed to the GPU. Second Life tends to have some pretty big data requirements when it comes to rendering, especially knowing how many objects tend to be in-world. We’re talking geometry data, texture data, even some rendering-pipeline data like the deferred renderer’s geometry buffer. It’s hard to fix the texture data problem, but to some degree we can tackle the geometry data problem. Here’s where game engines tend to differ from Second Life, and where performance could potentially be worse with an off-the-shelf game engine than with Second Life’s custom setup.

      In the viewer, geometry can be expressed primarily as a set of vertex positions (three 32-bit numbers), normal directions (another three 32-bit numbers), and texture coordinates (two 32-bit numbers). This is your most basic definition of a vertex in Second Life. There are cases where there’s more data, and the viewer does what it can to pick and choose which data gets submitted to the GPU. You can have things like multiple texture matrices on the surface that get submitted (as of materials), to having bi-normals associated with surfaces, and even surface color can be encoded in a vertex as well. The viewer does a lot to make sure just the bare minimum required for a surface is submitted to the GPU to help save on VRAM.
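      (That basic layout works out to (3 + 3 + 2) × 4 bytes = 32 bytes per vertex before any of the optional streams are added.)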

      Here’s how an engine like Unity3D handles it (and most other engines aren’t really much different in this regard). You have a consistent set of data, typically involving:
      Vertex position (x, y, z)
      Vertex normal (x, y, z)
      Texture coordinate 1 (x, y)
      Texture coordinate 2 (x, y)
      Tangents (x, y, z, w)
      Vertex color (x, y, z, w)
      Light map texture coordinates (x, y; sometimes x, y, z, w depending on the engine)
      This data is typically applied to most, if not all, objects in the scene. Usually vertex color and light map coordinates are only sent on objects that actually specifically have that data, but here’s some food for thought for you: Most (if not all) objects have tangents, texture coordinates, normals, and positions on a per-vertex basis. Submitting this information for each and every piece of geometry in SL would actually result in a large amount of bloat for a lot of objects. So the viewer literally has to decide if an object needs it or not.
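      (For comparison, the full layout above comes to 20 floats, or 80 bytes per vertex, two and a half times the basic SL layout; that’s exactly the kind of bloat being described.)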

      Thus far, I believe I’ve established that something like Second Life has some unique requirements that most out of the box solutions like CE3 and Unreal would need some custom work anyways to be able to meet.

      In terms of “AAA” visuals, sure Second Life could catch up. The question is though, where is the viewer currently bound in performance? Global illumination was a part of the viewer at one point, but the technique that was used didn’t scale very well as more objects were added to the scene. A popular technique being employed by Epic for Unreal Engine 4 would be ideal, but would be limited to GeForce GTX 6 series hardware (and the AMD equivalent) and up. Both of these probably wouldn’t be feasible, since the viewer is draw call bound in performance.

      And regarding OpenGL “still evolving”: any well-maintained API will always be “evolving”. Development is an iterative process, and if OpenGL weren’t evolving I’d be very concerned about its future. Granted, I and many other graphics programmers are always concerned about OpenGL’s future and the decisions the Khronos Group makes about it, but that’s beside the point. It’s a stable, pretty portable API that offers the same features (and in some circumstances more) as Direct3D. Speaking as someone with some idea of where OpenGL currently stands, it shouldn’t even be a factor in whether or not Second Life could be improved in terms of performance. The same goes for visual quality, since again OpenGL has roughly the same features as the latest version of Direct3D (including compute shaders as of OpenGL 4.3).

      Regarding client-side physics: as it turns out, that’s a fair bit of work for a feature that, chances are, a lot of people would simply disable out of performance concerns. It’s only really good for simple effects that wouldn’t interact with other people, and even then you’re looking at more viewer overhead. Few games are doing physical particles nowadays, and those that do actually disable them by default, and not many people seem to notice them otherwise (which is probably why NVIDIA is making dynamic destructible environments the focus of their work on PhysX more and more these days). It’s something that may happen some day, but it’s by no means something that should be considered “something that should have happened a long time ago”. Materials falls more in line with something that should have happened a long time ago.

      Also, fun fact: particles are actually a primitive parameter and don’t actually need scripts to set them on objects. I believe someone on Firestorm was working on an in-viewer particle editor at one point. Just like many primitive parameters though, they can be controlled through a script as well, making the script part optional.

      And finally, light maps. Light mapping was created with the idea that the surrounding environment would not change substantially. This means light mapped objects would stay in the same position, and that any baked lights would never so much as change colors. That last part is especially important for Second Life, where we have a dynamic sun and moon cycle, and as Loki points out in his post, people can change their windlight settings at will. This makes the idea of light mapping very difficult to apply in Second Life; and removing the ability to change windlight settings would only upset people even if it were just a region-wide flag.

      • @ Necon & Spaz, thank you for these interesting comments 🙂 With regards to OpenGL, SL does not currently support the latest versions of OpenGL; the Mac viewer only supports OpenGL 2.1. Is this something that could hold SL back graphically?

        • For most things (tessellation aside), OpenGL 2.1 would be fine. The viewer needs a lot of work before it can support a higher version of OpenGL on OS X, but on Windows and Linux this isn’t a problem.

          • It’s also to some vague degree worth mentioning that OS X does actually support *most* of OpenGL 3.0’s extensions without the OpenGL core profile. This includes things like geometry shaders, transform feedback, and similar. It just doesn’t support GLSL 1.30.

            This is one of the good things about OpenGL. Drivers don’t have to have full support for a version of OpenGL in order to use specific features of that version; they can just support specific extensions that are part of that version’s specification. But Apple’s case is a frustrating one: while the rest of the driver vendors (AMD, Nvidia, and even Intel to some degree) allow applications that don’t use the OpenGL core profile to access newer features from newer OpenGL versions, Apple outright will not, despite the demands of graphics programmers.

    • Ah yes, Rosedale’s SL 2.0, codenamed High Fidelity. Isn’t he still on the Linden Lab board in some form? Well, anyway, until proven otherwise I will see High Fidelity as just another Blue Mars.

      • Rosedale appears to no longer be on the Linden Research (Linden Lab) Board of Directors (see: http://lindenlab.com/about).

        I suspect this is because he is heading-up High Fidelity and the Lab has invested in the company. Therefore, having him as both “CEO” for High Fidelity and Chairman of the Board for Linden Research as an investor in the company might be construed as a conflict of interest.

        For another take on High Fidelity, it’s worth reading Will Burns’ (personal) notes on the matter (see: http://cityofnidus.blogspot.co.uk/2013/04/perspective.html). They are a bit of an eye-opener.
