I guess there's a technical reason I don't know about, but in the code below, why have you forced the GLenum ('typ') parameter to be GL_DOUBLE in quite a few functions within this lib? Is it OK to remove this restriction so a type other than GL_DOUBLE can be used when not using INLINE?
INLINE
} extern "C" { void __stdcall glVertexPointer( GLint size , GLenum typ , GLsizei stride , const GLvoid *ptr ); }; namespace __GLBASIC__ {
ENDINLINE
FUNCTION glVertexPointer: size, typ, stride, ptr[]
typ=GL_DOUBLE;
INLINE
OGL glVertexPointer(size, typ, stride, &ptr(0));
ENDINLINE
ENDFUNCTION
That being said would it be OK to use this instead?
IMPORT "C" void __stdcall glVertexPointer( int size , unsigned int typ , int stride , void* ptr );
GLenum is int (simple enum) and GLsizei is unsigned int.
Did I define that as double? That's probably because the wrapper dates from back when GLB had no integers yet.
OK, thanks, I'll remove them and use IMPORT, since I don't think they're needed any more.
P.S. are you sure about GLsizei? It's typedef'd as just a plain 'int' at the top of the code. Also, GLenum is typedef'd as 'unsigned int' there too.
On windows I have:
typedef unsigned int GLenum;
typedef unsigned char GLboolean;
typedef unsigned int GLbitfield;
typedef signed char GLbyte;
typedef short GLshort;
typedef int GLint;
typedef int GLsizei;
Hmm, should I be worried about different GL typedefs on different platforms, meaning the ones that GLB currently supports?
usually that doesn't really matter as long as the type size is the same (int=32 bit, short=16 bit, double=64 bit, float=32 bit).
Phew!, OK thanks Gernot.
I am keeping these all in the one GL thread, I hope no one minds?
Anyway, I'm trying to optimise a particle system that uses glDrawArrays.
Why does this not work? It compiles and runs, but there is no output. I am trying to avoid wrapping code for speed, if possible. The wrapped version works fine, BTW.
INLINE
glVertexPointer(2, GL_FLOAT, 0, &v(0));
ENDINLINE
Edit:
NM, it needs to be GL_DOUBLE, not GL_FLOAT. I would imagine the mobile platforms need GL_FLOAT instead, since there the arrays are float?
P.S.
I tried this, to sneakily get around inline at all but still being able to use GLB float arrays...
IMPORT "C" void __stdcall glVertexPointer( int size , unsigned int typ , int stride , unsigned int ptr );
ALIAS vPtr AS v#[0]
glVertexPointer(2, GL_FLOAT, 0, vPtr)
...really stretching it here, but although it compiles fine, it does not seem to pass the array pointer on, so that's a no-go.
On Win/Lin/Mac, GLBasic's DGInt is a double; on the others it's a float (so GL_DOUBLE vs. GL_FLOAT).
OK, that's no big deal then. On mobile I'll just set the constant GL_DOUBLE = GL_FLOAT for GLES. :good: