glGetShaderPrecisionFormat

glGetShaderPrecisionFormat: man4/glGetShaderPrecisionFormat.xml

glGetShaderPrecisionFormat retrieves the numeric range and precision of the implementation's representation of quantities in different numeric formats for the specified shader type.

shaderType specifies the type of shader whose numeric precision and range are to be queried and must be GL_VERTEX_SHADER or GL_FRAGMENT_SHADER. precisionType specifies the numeric format to query and must be one of GL_LOW_FLOAT, GL_MEDIUM_FLOAT, GL_HIGH_FLOAT, GL_LOW_INT, GL_MEDIUM_INT, or GL_HIGH_INT.

range points to an array of two integers into which the format's numeric range will be returned. If min and max are the smallest and largest values representable in the format, then the values returned are defined to be: range[0] = floor(log2(|min|)) and range[1] = floor(log2(|max|)).

precision specifies the address of an integer into which the number of bits of precision of the format will be written. If the smallest representable value greater than 1 is 1 + eps, then the integer addressed by precision will contain floor(-log2(eps)).

@OpenGL_Version(OGLIntroducedIn.V4P1)
@OpenGL_Extension("GL_ARB_ES2_compatibility")
fn_glGetShaderPrecisionFormat glGetShaderPrecisionFormat;
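
A minimal usage sketch in D, assuming an active OpenGL context and that the binding above has already been loaded; the import path shown is hypothetical and should be replaced with the module that actually provides these declarations:

import std.stdio : writefln;
// Hypothetical module name; substitute the module that exposes the GL bindings.
import opengl.gl;

void queryMediumpFloat()
{
    // Query the fragment-shader representation of mediump float.
    GLint[2] range;   // range[0] = floor(log2(|min|)), range[1] = floor(log2(|max|))
    GLint precision;  // floor(-log2(eps)), i.e. the number of bits of precision

    glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_MEDIUM_FLOAT,
                               range.ptr, &precision);

    writefln("mediump float: range = [%s, %s], precision = %s bits",
             range[0], range[1], precision);
}

Note that integer formats (GL_LOW_INT, GL_MEDIUM_INT, GL_HIGH_INT) report a precision of 0, since the range values alone describe an integer representation exactly.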

See Also

Meta