I am playing some tricks on iOS to try to build a CPU-GPU-hybrid JPEG encoder. From my tests with the CPU, I believe using the GPU to do the DCT and quantization steps makes good sense and should boost the overall performance significantly (compressing a huge number of JPEGs is the bottleneck in my app). With transform feedback this should be doable, as I have used that to get great results in GPGPU computing.

As mentioned, I used OpenGL ES 3.0 to do GPGPU computing, so I only have experience with floating-point textures, which are set up by

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, data)

and delivered to the shaders by texelFetch(). But now my input data is stored as an array of unsigned bytes (uint8), and I need to sequentially fetch 64 of them each time. The tricky part is how to get the data (unsigned int8s of RGBA) in efficiently. I think I can either fetch them as a texture of unsigned bytes or, more efficiently, as a texture of unsigned integers and then separate them with bit shifts.

My question is, how do I actually do either of them? More specifically, how should I set the internalFormat, format, and type for glTexImage2D()? I tried a lot of combinations, but all of them deliver only 0 in the shaders (and I double-checked that the source data is non-zero).

In ES 3, seriously consider creating a pixel unpack buffer and mapping it in order to get a location in which to formulate your pixel data.
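A minimal sketch of that pixel-unpack-buffer route, assuming tightly packed RGBA bytes in a client array src; the function name and parameters are placeholders, not from the original post, and the integer texture formats it passes to glTexImage2D() are explained in the next sketch:

#include <string.h>
#include <OpenGLES/ES3/gl.h>   /* iOS OpenGL ES 3.0 header */

/* Upload tightly packed RGBA bytes to tex through a mapped pixel unpack buffer. */
void upload_bytes_via_pbo(GLuint tex, const GLubyte *src,
                          GLsizei width, GLsizei height)
{
    GLsizeiptr size = (GLsizeiptr)width * height * 4;

    GLuint pbo;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_DRAW);

    /* Map the buffer and formulate (here: copy) the pixel data in place,
     * instead of building it in a temporary array that GL copies again. */
    void *dst = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size,
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
    memcpy(dst, src, (size_t)size);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    /* While a pixel unpack buffer is bound, the last argument of
     * glTexImage2D is a byte offset into the buffer, not a pointer. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, width, height, 0,
                 GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, (const GLvoid *)0);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    glDeleteBuffers(1, &pbo);
}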
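As for the internalFormat/format/type question itself, a hedged sketch: for raw bytes that texelFetch() should return exactly, ES 3.0 defines integer texture formats. Two classic causes of all-zero reads are passing GL_RGBA instead of GL_RGBA_INTEGER as the format for an integer internal format, and leaving the default mipmapping min filter on a texture without mipmaps, which makes it incomplete:

#include <OpenGLES/ES3/gl.h>

/* Create an unsigned-integer RGBA texture from tightly packed bytes.
 * For the bit-shift variant, GL_RGBA32UI / GL_RGBA_INTEGER / GL_UNSIGNED_INT
 * packs 16 bytes into each texel instead (mind host byte order when shifting). */
void create_u8_texture(GLuint tex, const GLubyte *data,
                       GLsizei width, GLsizei height)
{
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Integer textures are not filterable; NEAREST is mandatory and also
     * avoids the incomplete-texture trap of the default min filter. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   /* rows are tightly packed */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, width, height, 0,
                 GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, data);
}

/* The matching GLSL ES 3.00 sampler is usampler2D; texelFetch() then returns
 * the raw 0..255 bytes as a uvec4 with no normalization: */
static const char *frag_src =
    "#version 300 es\n"
    "uniform highp usampler2D u_bytes;\n"
    "out highp vec4 o_color;\n"
    "void main() {\n"
    "    highp uvec4 b = texelFetch(u_bytes, ivec2(0, 0), 0);\n"
    "    o_color = vec4(b) / 255.0;\n"
    "}\n";

With GL_RGBA8UI, each texelFetch() delivers four of the bytes, so fetching 64 sequential bytes takes 16 fetches; the GL_RGBA32UI variant brings that down to 4 fetches at the cost of unpacking with shifts and masks.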