Age | Commit message | Author |
|
Just two ints like Gfx::IntPoint.
|
|
We have a new, improved string type coming up in AK (OOM-aware, no null
state), and while it's going to use UTF-8, the name UTF8String is a
mouthful - so let's free up the String name by renaming the existing
class.
Making the old one have an annoying name will hopefully also help with
quick adoption :^)
|
|
Otherwise, we end up propagating those dependencies into targets that
link against that library, which creates unnecessary link-time
dependencies.
Also included are changes to re-add now-missing dependencies to tools
that actually need them.
|
|
|
|
Also do this for Shell.
This greatly simplifies the CMakeLists in Lagom, replacing many glob
patterns with a big list of libraries. There are still a few special
libraries that need some help to conform to the pattern, like LibELF and
LibWebView.
It also lets us remove essentially all of the Serenity or Lagom binary
directory detection logic from code generators, as both projects'
directories now enter the generator logic from the same place.
|
|
Now that we have OS macros for essentially every supported OS, let's try
to use them everywhere.
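For example, platform-specific code now looks like this (a minimal
sketch, assuming the AK_OS_* macro names from AK/Platform.h):
// file: example.cpp
#include <AK/Platform.h>
void platform_specific_setup()
{
#if defined(AK_OS_SERENITY)
    // SerenityOS-specific code goes here.
#elif defined(AK_OS_LINUX) || defined(AK_OS_MACOS)
    // Code for the other supported Unix-like systems.
#endif
}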
|
|
|
|
This remained undetected for a long time as HeaderCheck is disabled by
default. This commit makes the following file compile again:
// file: compile_me.cpp
#include <LibDNS/Question.h>
// That's it, this was enough to cause a compilation error.
Likewise for most other files touched by this commit.
|
|
We can count on the dynamic loader for each platform, and the RPATH of
our build infrastructure, to load the lib up automagically.
|
|
|
|
Each texture unit now has its own texture transformation matrix stack.
Introduce a new texture unit configuration that is synced when changed.
Because we're no longer passing a silly `Vector` when drawing each
primitive, this results in a slight improvement in frames per second :^)
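For reference, this is the kind of standard client code that exercises
per-unit texture matrix stacks (an illustrative sketch):
// file: texture_matrix_example.cpp
#include <GL/gl.h>
void transform_texture_unit_1()
{
    glActiveTexture(GL_TEXTURE1);
    glMatrixMode(GL_TEXTURE);
    glPushMatrix();
    // This translation now only affects texture unit 1's coordinates.
    glTranslatef(0.5f, 0.f, 0.f);
}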
|
|
We can now generate texture mipmaps on the fly if the client requests
it. This fixes the missing textures in our PrBoom+ port.
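Clients typically request this through the legacy GL_GENERATE_MIPMAP
texture parameter; a minimal usage sketch, assuming that is the
mechanism meant here:
// file: mipmap_example.cpp
#include <GL/gl.h>
void upload_texture_with_mipmaps(GLuint texture, void const* pixels)
{
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    // Uploading level 0 now also generates all smaller mipmap levels.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}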
|
|
Looking at how Khronos defines layers:
https://www.khronos.org/opengl/wiki/Array_Texture
Both 3D textures and layers of 2D textures can be encoded in our
existing `Typed3DBuffer` as depth. Since we already support depth in
the GPU API, remove the concept of layers everywhere.
Also pass in `Texture2D::LOG2_MAX_TEXTURE_SIZE` as the maximum number
of mipmap levels, so we do not allocate 999 levels on each Image
instantiation.
|
|
These two methods copy from the frame buffer to (part of) a texture.
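Presumably these back the standard glCopyTexImage2D and
glCopyTexSubImage2D entry points; for example:
// file: copy_texture_example.cpp
#include <GL/gl.h>
void copy_framebuffer_to_texture()
{
    // Turn a 256x256 region of the color buffer into a whole texture:
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, 256, 256, 0);
    // Or update only a 64x64 part of an existing texture:
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 16, 16, 0, 0, 64, 64);
}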
|
|
This makes it consistent with our other `blit_from_color_buffer` and
paves the way for a third method that will be introduced in one of the
next commits.
|
|
`GL_COMBINE` is basically a fixed-function calculator to perform simple
arithmetic on configurable fragment sources. This patch implements a
number of texture env parameters with support for the RGBA internal
format.
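For example, modulating the texture color with the previous stage's
output looks like this in client code (a standard usage sketch):
// file: combine_example.cpp
#include <GL/gl.h>
void setup_combine_modulate()
{
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_PREVIOUS);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
}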
|
|
|
|
|
|
The Quake 3 port makes use of this extension to determine a more
efficient multitexturing strategy. Since LibSoftGPU supports it, let's
report the extension in LibGL. :^)
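Ports detect this at runtime by scanning the extension string; a sketch
(assuming GL_ARB_multitexture is the extension in question):
// file: extension_check_example.cpp
#include <GL/gl.h>
#include <cstring>
bool has_multitexture()
{
    auto const* extensions = reinterpret_cast<char const*>(glGetString(GL_EXTENSIONS));
    return extensions != nullptr && strstr(extensions, "GL_ARB_multitexture") != nullptr;
}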
|
|
In OpenGL this is called the (base) internal format, which expresses
the client's expectation of the minimum texel storage format the GPU
should support for textures.
Since we store everything as RGBA in a `FloatVector4`, the only thing
we do in this patch is remember the expected internal format, and when
we write new texels we fix the alpha channel value to 1 for the two
formats that require it.
`PixelConverter` has learned how to transform pixels during transfer to
support this.
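For example (a sketch, assuming GL_RGB is one of the two affected
formats), whatever alpha values the client uploads here are stored as 1:
// file: internal_format_example.cpp
#include <GL/gl.h>
void upload_rgb_texture(void const* rgba_pixels)
{
    // Texels are stored as RGBA internally, but the GL_RGB internal
    // format fixes the alpha channel to 1 when texels are written.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba_pixels);
}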
|
|
|
|
A GPU (driver) is now responsible for reading and writing pixels from
and to user data. The client (LibGL) is responsible for specifying how
the user data must be interpreted or written to.
This allows us to centralize all pixel format conversion in one class,
`LibSoftGPU::PixelConverter`. For both the input and output image, it
takes a specification containing the image dimensions, the pixel type
and the selection (basically a clipping rect), and converts the pixels
from the input image to the output image.
Effectively this means we now support almost all OpenGL 1.5 formats,
and all custom logic has disappeared from:
- `glDrawPixels`
- `glReadPixels`
- `glTexImage2D`
- `glTexSubImage2D`
The new logic is still unoptimized, but on my machine I experienced no
noticeable slowdown. :^)
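In spirit, the converter's interface looks something like this (a
hypothetical sketch; the real LibSoftGPU::PixelConverter names and
types differ):
// file: pixel_converter_sketch.cpp
#include <cstdint>
struct Rect { int x { 0 }; int y { 0 }; int width { 0 }; int height { 0 }; };
enum class PixelType { RGBA8888, BGR888 };
struct ImageSpecification {
    uint8_t* data { nullptr };
    int width { 0 };
    int height { 0 };
    PixelType type { PixelType::RGBA8888 };
    Rect selection; // the part of the image to read from or write to
};
// Convert every pixel in the input selection to the output selection;
// this illustrative version only handles RGBA8888 -> BGR888.
void convert(ImageSpecification const& input, ImageSpecification const& output)
{
    for (int row = 0; row < input.selection.height; ++row) {
        for (int column = 0; column < input.selection.width; ++column) {
            auto const* in = input.data + ((input.selection.y + row) * input.width + input.selection.x + column) * 4;
            auto* out = output.data + ((output.selection.y + row) * output.width + output.selection.x + column) * 3;
            out[0] = in[2];
            out[1] = in[1];
            out[2] = in[0];
        }
    }
}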
|
|
For unknown reasons, unveil() does not work on symlinks. This prevents
it from being used in an unveil environment such as WebContent.
|
|
This commit implements glClipPlane and its supporting calls, backed
by new support for user-defined clip planes in the software GPU clipper.
This fixes some visual bugs seen in the Quake III port, in which mirrors
would only reflect correctly from close distances.
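For example (standard usage; the plane equation is arbitrary):
// file: clip_plane_example.cpp
#include <GL/gl.h>
void clip_below_ground_plane()
{
    // Keep only fragments on or above the y = 0 plane:
    GLdouble const plane[] = { 0.0, 1.0, 0.0, 0.0 };
    glClipPlane(GL_CLIP_PLANE0, plane);
    glEnable(GL_CLIP_PLANE0);
}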
|
|
Implement (anti)aliased point drawing and anti-aliased line drawing.
Supported through LibGL's `GL_POINTS`, `GL_LINES`, `GL_LINE_LOOP` and
`GL_LINE_STRIP`.
In order to support this, `LibSoftGPU`'s rasterization logic was
reworked. Now, any primitive can be drawn by invoking `rasterize()`
which takes care of the quad loop and fragment testing logic. Three
callbacks need to be passed:
* `set_coverage_mask`: the primitive needs to provide initial coverage
mask information so fragments can be discarded early.
* `set_quad_depth`: fragments survived stencil testing, so depth values
need to be set so depth testing can take place.
* `set_quad_attributes`: fragments survived depth testing, so fragment
shading is going to take place. All attributes like color, tex coords
and fog depth need to be set so alpha testing and, eventually,
fragment rasterization can take place.
As of this commit, there are four instantiations of this function:
* Triangle rasterization
* Points - aliased
* Points - anti-aliased
* Lines - anti-aliased
In order to standardize vertex processing for all primitive types,
things like vertex transformation, lighting and tex coord generation
are now taking place before clipping.
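In spirit, the shared entry point looks something like this (a
hypothetical skeleton; the real LibSoftGPU types and signature differ):
// file: rasterize_sketch.cpp
#include <functional>
struct Quad {
    int coverage_mask { 0 }; // one bit per fragment in the 2x2 quad
    float depth[4] {};
    // ...color, texture coordinates, fog depth, etc.
};
void rasterize(int width, int height,
    std::function<void(Quad&)> set_coverage_mask,
    std::function<void(Quad&)> set_quad_depth,
    std::function<void(Quad&)> set_quad_attributes)
{
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            Quad quad;
            set_coverage_mask(quad);
            if (quad.coverage_mask == 0)
                continue; // no covered fragments; discard the quad early
            // (stencil testing runs here and may clear coverage bits)
            set_quad_depth(quad);
            // (depth testing runs here and may clear coverage bits)
            if (quad.coverage_mask == 0)
                continue;
            set_quad_attributes(quad);
            // (alpha testing and fragment output run here)
        }
    }
}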
|
|
This loads libsoftgpu.so during GLContext creation and instantiates the
device class which is then passed into the GLContext constructor.
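Roughly, this amounts to the following (a sketch; the factory symbol
name is illustrative):
// file: load_device_sketch.cpp
#include <dlfcn.h>
namespace GPU { class Device; }
using CreateDeviceFunction = GPU::Device* (*)();
GPU::Device* load_software_device()
{
    void* handle = dlopen("libsoftgpu.so", RTLD_NOW);
    if (handle == nullptr)
        return nullptr;
    // The exported factory function name here is an assumption.
    auto create_device = reinterpret_cast<CreateDeviceFunction>(dlsym(handle, "create_device"));
    return create_device == nullptr ? nullptr : create_device();
}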
|
|
This adds a virtual base class for GPU devices located in LibGPU.
The OpenGL context now only talks to this device agnostic interface.
Currently the device interface is simply a copy of the existing SoftGPU
interface to get things going :^)
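In spirit, the interface looks something like this (an illustrative
subset; the real GPU::Device declares many more pure virtual methods):
// file: device_sketch.cpp
#include <LibGfx/Vector4.h>
namespace GPU {
class Device {
public:
    virtual ~Device() = default;
    // Every operation the GL context needs is a pure virtual method,
    // implemented by concrete devices such as the software rasterizer.
    virtual void clear_color(FloatVector4 const&) = 0;
    virtual void clear_depth(float) = 0;
};
}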
|
|
This introduces a new device independent base class for Images in LibGPU
that also keeps track of the device from which it was created in order
to prevent assigning images across devices.
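A minimal sketch of the idea (names are illustrative, not the exact
LibGPU declarations):
// file: image_sketch.cpp
namespace GPU {
class Image {
public:
    explicit Image(void const* ownership_token)
        : m_ownership_token(ownership_token)
    {
    }
    virtual ~Image() = default;
    // Two images may only be assigned to each other if they
    // were created by the same device.
    bool has_same_ownership_token(Image const& other) const
    {
        return m_ownership_token == other.m_ownership_token;
    }
private:
    void const* m_ownership_token { nullptr };
};
}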
|
|
This introduces a new abstraction layer, LibGPU, that serves as the
usermode interface to GPU devices. To get started we just move the
DeviceConfig there and make sure everything still works :^)
|