Camera and Polyvector



I ask this because, if I'm not mistaken, an image loaded into memory has the same size as a BMP.

My PNG image is W: 317, H: 493. The PNG on disk is 292 KB, but in memory the uncompressed size is 421 KB.

If I have to use a power-of-2 texture, I have to use W: 512 and H: 512, and the uncompressed size in memory is 768 KB (nearly double).

I have to load over 800 images into memory, and power-of-2 is only good for DXT (S3 texture compression).

Are my calculations wrong?
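A quick sanity check of those numbers, assuming 3 bytes per pixel (24-bit RGB, no alpha), since that makes the 512x512 figure come out exactly:

```python
def uncompressed_kb(width, height, bytes_per_pixel=3):
    """Size of a raw bitmap in memory, in KB (1 KB = 1024 bytes)."""
    return width * height * bytes_per_pixel / 1024

print(uncompressed_kb(512, 512))   # 768.0 -- matches the power-of-2 figure exactly
print(uncompressed_kb(317, 493))   # ~458 KB for the original size at 24 bpp
```

The reported 421 KB for the 317x493 image may reflect a different internal format or row padding, but the main point holds: padding up to 512x512 costs considerably more memory than the source dimensions need.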


On most handheld devices, and a fair few PC graphics cards, power-of-2 bitmaps are unfortunately required.


So why can't you use non-power-of-2 textures in Unity?


I don't know the answer.

But based on what Mr. Tatoad said: I have my game currently running with image files of multiple pixel sizes; they are not power-of-2, and some are not within the 1K, 2K or 4K square limit.
It has been working fine on everything I try; no trouble with it yet.

I gather from Space Fractal's experience that it is wise to keep within the 2K limit if you are going to target platforms that might have an issue with it.

Also, I haven't done 3D textures with those; might they fail if the hardware is not capable?
So far I'm using the 2D commands, and some of that, if I recall correctly from Gernot, is software based and has no trouble with hardware.

I guess I should try some 3D soon so I can have a say on the subject. ;)


Quote from: Kyo on 2014-May-25: So why can't you use non-power-of-2 textures in Unity?
It's a bit off-topic, but: as far as I have read, Unity strongly recommends that textures be power-of-2 and automatically scales them if needed on the device. So it does approximately the same thing I did in my last example, just a bit more sophisticated (I hope so).
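As an illustration of that automatic scaling, here is one common way to compute the next power-of-2 dimension (a hypothetical helper, not Unity's actual code):

```python
def next_pow2(n):
    """Smallest power of 2 that is >= n (for n >= 1)."""
    p = 1
    while p < n:
        p *= 2
    return p

# Kyo's 317x493 texture would be scaled/padded up to 512x512.
print(next_pow2(317), next_pow2(493))  # 512 512
```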

See here, under Texture Sizes:

BTW, because of this I tried non-square power-of-2 textures like 256x512 in GLBasic, and it works just fine, so my earlier posted code could be updated to be a bit more memory friendly.
Lenovo Thinkpad T430u: Intel i5-3317U, 8GB DDR3, NVidia GeForce 620M, Micron RealSSD C400 @Win7 x64


Yes, for the best memory compression (DXT), but Unity also works fine with non-power-of-2 textures.  ;)


So does GLB on my end.

I don't get the point of this discussion; I can't seem to grasp the objective or what is in the way. Well, I'll keep trying :zzz:


Maybe this thread has no reason to exist... :D


For a *real* game, you wouldn't do sprite animations like this anyway.
With this method, each animation frame would be one sprite, and therefore one OpenGL 'material', and therefore one draw call.
(I know you only show one frame at a time per animation, but each visible on-screen sprite adds an extra draw call, gradually slowing things down.)

The answer is texture atlases, aka sprite sheets.  Check out apps like TexturePacker, which create larger texture atlases composed of all of your sprites (or a portion of them, if you have many).  It outputs a text file telling you where every sprite is located by x and y coordinates, along with the sprite's name.  You just build an interpreter library for these files and add commands such as Sprite.Draw("name", frameNumber); your code will extract the x,y location (and size) from the text file and convert those values to UV coordinates to use in your X_OBJADDVERTEX commands (the examples floating around all go from 0 to 1, so only the entire texture is displayed, not a fraction of it).  Your code could detect multiple frames by file name, such as "walk01", "walk02", etc., to pick up animations automatically and set their start/end limits.  TexturePacker outputs in ^2 sizes if requested, too.
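The pixel-to-UV conversion the atlas interpreter has to do is simple arithmetic; a minimal sketch (the names are illustrative, not TexturePacker's actual format):

```python
def rect_to_uv(x, y, w, h, atlas_w, atlas_h):
    """Convert a sprite's pixel rectangle inside the atlas to
    (u0, v0, u1, v1) texture coordinates in the 0..1 range."""
    return (x / atlas_w, y / atlas_h, (x + w) / atlas_w, (y + h) / atlas_h)

# A 64x64 sprite at pixel position (128, 0) inside a 512x512 atlas:
print(rect_to_uv(128, 0, 64, 64, 512, 512))  # (0.25, 0.0, 0.375, 0.125)
```

These are the fractional values you would feed into the vertex texture coordinates instead of the usual 0 and 1.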

Search these forums for sample code for TexturePacker; I think there are some GLBasic libs that read its files.  There is another tool called "DarkFunction" or something similar, I think.
My current project (WIP) :: TwistedMaze <<  [Updated: 2015-11-25]


Yes, that is what I was thinking too, but here is the problem: in GLBasic you cannot change the texture coordinates of a 3D object. This means that in this project you would either need to create one 3D quad for each texture, or create the rendering quad at render time. Obviously both of these have big drawbacks in memory consumption / execution speed and are worse than using single textures without an atlas. That's why I said we could achieve a better result if we used native OpenGL, because there this would be no problem (see my libSPRITE).


Yes, you can't change the texture coordinates.  You would have to generate a new quad every time the texture changes, or pre-generate the various animation-frame quads and flip through them as needed.  I suppose this all depends on how many active sprites you have displayed at any one time, plus how many are animated.  I'm sure GLBasic can generate new quads for at least 20 sprites per frame; I'd be surprised if not 50, but I've never run this through any testing.  I'm just comparing to other SDKs such as Unity, where this wouldn't be a bottleneck.

Using native OpenGL (and not GLBasic commands), you wouldn't need a new model for every sprite.  A model could be composed of many unconnected quads, and therefore contain all your sprites.  But this could be much worse if you can't update a specific sprite without regenerating all sprite quads.  I think one of Unity's original sprite libraries did it this way, but Unity used arrays to access and work with the vertices, so you knew which array indexes to update for each sprite.
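The array-indexing trick can be sketched like this: each sprite owns a fixed slice of four vertices in one shared array, so updating one sprite never touches the others (a simplified illustration, not actual Unity or OpenGL code):

```python
class SpriteBatch:
    """All sprites share one vertex list; sprite i owns vertices [4*i, 4*i+4)."""
    def __init__(self, max_sprites):
        self.vertices = [(0.0, 0.0)] * (max_sprites * 4)

    def set_quad(self, i, x, y, w, h):
        # Overwrite only this sprite's four corners; other sprites are untouched.
        self.vertices[4*i:4*i+4] = [(x, y), (x+w, y), (x+w, y+h), (x, y+h)]

batch = SpriteBatch(max_sprites=3)
batch.set_quad(1, 10, 20, 32, 32)   # move only sprite 1
print(batch.vertices[4:8])          # [(10, 20), (42, 20), (42, 52), (10, 52)]
print(batch.vertices[0:4])          # sprite 0 still at its initial values
```

In a real OpenGL version, the whole array (or just the dirty slice) would be re-uploaded to a vertex buffer, but the batch still renders in a single draw call.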

I like the idea of creating a native OpenGL sprite library and bypassing GLBasic altogether.  Is that what your 'libSPRITE' is?


Although the X_OBJADDVERTEX texture coordinates can't be modified, X_SETTEXTUREOFFSET will allow for animation texture cells.  :-[


I was already thinking of that.  It would only work if the sprites were the exact same size: just find the new sprite's texture location and offset the texture by the difference.  You could do any sprites this way, not just animations: generate a bunch of generic quads at the start, set to a fixed texture size (128x128 for example), and just modify the offset to display the proper sprite.  You could reuse quads for different sprites displayed at different times (object pooling).

But every sprite would need to be offset each frame, calling the offset-texture command multiple times per frame, as opposed to having the texture quads precalculated and already set up properly.  Maybe a mixture of the two methods would work best? (X_SETTEXTUREOFFSET for animations, and a normal quad per sprite for non-animations.)  Again, this only works if the source sprites share the same dimensions.  Your mileage may vary!
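For the fixed-cell-size case, computing the offset for a given animation frame is just grid arithmetic (a sketch of the idea behind using X_SETTEXTUREOFFSET this way, not GLBasic code):

```python
def frame_offset(frame, cell_w, cell_h, atlas_w, atlas_h):
    """UV offset of animation frame N in a grid of equal-size cells,
    laid out row by row from the top-left of the atlas."""
    cols = atlas_w // cell_w          # cells per row
    col, row = frame % cols, frame // cols
    return (col * cell_w / atlas_w, row * cell_h / atlas_h)

# 128x128 cells in a 512x512 atlas -> 4 columns; frame 5 is row 1, col 1.
print(frame_offset(5, 128, 128, 512, 512))  # (0.25, 0.25)
```

Each animated sprite would call this once per frame and pass the result to the offset command, while static sprites keep their precomputed quads.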


Slydog, libSPRITE is my X_SPRITE replacement that you can find here:

It would be easy to expand it to draw non-square rectangles. Since it is already possible to set texture coordinates (and colour, though that's not documented), this would be perfect for the needs here.
But with a bit more work we could go even further: it would also be possible to make a 3D POLYVECTOR 'replacement' (I mean, draw non-rectangles), or even completely abandon the 'turn to camera' feature and allow fast construction of meshes at render time.