Age | Commit message | Author |
|
This patch adds support for opening a ConfigFile using a file
descriptor rather than trying to open the file by name directly.
In contrast to the previous implementation, ConfigFile now always keeps
a reference to an open File and does not reopen it for writing.
This requires providing an additional argument to the open functions if
a file is opened by name and the user of the API intends to write to
the file in the future.
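As a rough sketch, the two ways of opening could then look something
like this (names such as AllowWriting and open_fd are illustrative
assumptions, not necessarily the exact Core::ConfigFile interface):

    // Open by path; the caller states up front whether it intends to
    // write, since the underlying File is opened once and kept open.
    auto config = Core::ConfigFile::open("/etc/Example.ini",
        Core::ConfigFile::AllowWriting::Yes);

    // Open from an already-open file descriptor, e.g. one received
    // over IPC, without ever touching the path ourselves.
    auto config_from_fd = Core::ConfigFile::open_fd(fd);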
|
|
|
|
|
|
Now that we only have 3 ccached builds on CI we can comfortably increase
the ccache size limit to get some free speed-up in CI.
|
|
|
|
This allows us to collect the tokens iteratively instead of having to
lex the whole program and then get a tokens vector.
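A minimal sketch of the difference, with illustrative method names:

    // Before: lex everything up front, then walk the vector.
    auto tokens = lexer.lex_all();
    for (auto& token : tokens)
        process(token);

    // After: pull tokens one at a time; callers can stop early and no
    // intermediate vector is needed.
    while (auto token = lexer.next_token())
        process(*token);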
|
|
Previously, the preprocessor first split the source into lines, and then
processed and lexed each line separately.
This patch makes the preprocessor first lex the source, and then do the
processing on the tokenized representation.
This generally simplifies the code, and also fixes an issue we
previously had with multiline comments (we did not recognize them
correctly when processing each line separately).
|
|
For example, '# include <stdio.h>' is now supported by the Lexer.
|
|
|
|
Classes reading and writing to the data heap used to communicate
directly with the Heap object, transferring ByteBuffers back and forth
with it. This made things like caching and locking hard. Therefore, all
data persistence activity is now funneled through a Serializer object,
which in turn submits it to the Heap.
Introducing this unfortunately resulted in a huge amount of churn, in
which a number of smaller refactorings got caught up as well.
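The general shape of the pattern, as a rough sketch (class and method
names here are assumptions for illustration, not the exact LibSQL API):

    // Reads and writes no longer talk to the Heap directly; they go
    // through a Serializer that owns the Heap interaction, giving one
    // place to add caching and locking later.
    class Serializer {
    public:
        explicit Serializer(Heap& heap)
            : m_heap(heap)
        {
        }

        void store(u32 pointer, ByteBuffer buffer) { m_heap.write_block(pointer, move(buffer)); }
        ByteBuffer load(u32 pointer) { return m_heap.read_block(pointer); }

    private:
        Heap& m_heap;
    };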
|
|
- Added a connection banner
- Added '.quit' synonym for '.exit'
- Do not display updated/created/deleted banner if there were no changes
|
|
This patch provides very basic, bare bones implementations of the
INSERT and SELECT statements. They are *very* limited:
- The only variant of the INSERT statement that currently works is
  INSERT INTO schema.table (column1, column2, ...) VALUES
(value11, value21, ...), (value12, value22, ...), ...
where the values are literals.
- The SELECT statement is even more limited, and is only provided to
allow verification of the INSERT statement. The only form implemented
is: SELECT * FROM schema.table
These statements required a bit of change in the Statement::execute
API. Originally execute only received a Database object as parameter.
This is not enough; we now pass an ExecutionContext object which
contains the Database, the current result set, and the last Tuple read
from the database. This object will undoubtedly evolve over time.
This API change dragged SQLServer::SQLStatement into the patch.
Another API addition is Expression::evaluate. This method is,
unsurprisingly, used to evaluate expressions, like the values in the
INSERT statement.
Finally, a new test file is added: TestSqlStatementExecution, which
tests the currently implemented statements. As the number and flavour of
implemented statements grows, this test file will probably have to be
restructured.
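A hedged sketch of the reshaped execute() API (field and return types
are illustrative, not the exact LibSQL declarations):

    // Execution state threaded through Statement::execute() instead of
    // passing a bare Database.
    struct ExecutionContext {
        RefPtr<Database> database;   // the database being operated on
        RefPtr<ResultSet> result;    // the current result set
        Tuple current_tuple;         // the last tuple read from the database
    };

    class Statement {
    public:
        virtual RefPtr<SQLResult> execute(ExecutionContext&) const = 0;
    };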
|
|
These are standard SQL concepts which columns should be aware of.
|
|
The implementation of the Value class was based on lambda member variables
implementing type-dependent behaviour. This was done to ensure that
Values can be used as stack-only objects; the simplest alternative,
virtual methods, forces them onto the heap. The problem with the
lambda approach is that it bloats the Values (which are supposed to be
lightweight objects) quite considerably, because every object contains
more than a dozen function pointers.
The solution to address both problems (we want Values to be able to live
on the stack and be as lightweight as possible) chosen here is to
encapsulate type-dependent behaviour and state in an implementation
class, and let the Value be an AK::Variant of those implementation
classes. All methods of Value are now basically straight delegates to
the implementation object using the Variant::visit method.
One issue complicating matters is the addition of two aggregate types,
Tuple and Array, which each contain a Vector of Values. At this point
Tuples and Arrays (and potential future aggregate types) can't contain
other aggregate types, i.e. aggregates cannot be nested. This is
limiting and needs to be addressed.
Another area that needs attention is the nomenclature of things; it's
a bit of a tangle of 'ValueBlahBlah' and 'ImplBlahBlah'. It makes sense
right now, I think, but I admit we can probably do better.
Other things included here:
- Added the Boolean and Null types (and Tuple and Array, see above).
- to_string now always succeeds and returns a String instead of an
Optional. This had some impact on other sources.
- Added a lot of tests.
- Started moving the serialization mechanism more towards where I want
it to be, i.e. a 'DataSerializer' object which just takes
serialization and deserialization requests and knows for example how
to store long strings out-of-line.
One last remark: There is obviously a naming clash between the Tuple
class and the Tuple Value type. This is intentional; I plan to make the
Tuple class a subclass of Value (and hence Key and Row as well).
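A simplified, standalone sketch of the idea using std::variant in place
of AK::Variant (the real class covers many more types and methods; this
only illustrates delegation via visit):

    #include <string>
    #include <variant>

    struct IntegerImpl {
        int value {};
        std::string to_string() const { return std::to_string(value); }
    };

    struct TextImpl {
        std::string value;
        std::string to_string() const { return value; }
    };

    class Value {
    public:
        explicit Value(int v) : m_impl(IntegerImpl { v }) {}
        explicit Value(std::string v) : m_impl(TextImpl { std::move(v) }) {}

        // Every Value method is a straight delegate to the
        // implementation object, so no virtual dispatch or heap
        // allocation is needed.
        std::string to_string() const
        {
            return std::visit([](auto const& impl) { return impl.to_string(); }, m_impl);
        }

    private:
        std::variant<IntegerImpl, TextImpl> m_impl;
    };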
|
|
Tuple descriptors are basically the same for, for example, all rows in
a table. It makes sense to share them instead of copying them for every
single row.
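A rough sketch of the sharing, assuming the descriptor is ref-counted
(exact types and names are illustrative):

    // One descriptor per table, referenced by every row instead of
    // being copied into each one.
    auto descriptor = adopt_ref(*new TupleDescriptor(column_definitions));
    for (auto& row : rows)
        row.set_descriptor(descriptor);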
|
|
|
|
This is an interesting quirk that occurs due to us using the x87 FPU
when Serenity is compiled for the i386 target. When we calculate our
depth value to be stored in the buffer, it is an 80-bit x87
floating point number; however, when stored into the DepthBuffer,
it is truncated to 32 bits. This 48-bit loss of precision means
that when the x87 `FCOMP` instruction is eventually used here, the
comparison fails.
This could be solved by using a `long double` for the depth buffer,
however this would take up significantly more space and is completely
overkill for a depth buffer. As such, comparing the first 32 bits of
this depth value is "good enough" that if we get a hit on it being
equal, we can pretty much guarantee that it's actually equal.
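Illustrative sketch of the workaround, truncating to 32-bit float
before comparing (not the exact rasterizer code):

    // The computed depth may carry x87 80-bit precision, but the buffer
    // only stores 32 bits. Truncate to float first so "equal" means
    // equal at the precision we actually store.
    float stored = depth_buffer_value;
    float incoming = static_cast<float>(computed_depth);
    if (incoming == stored) {
        // Treat as a depth-test hit.
    }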
|
|
We now use the compiler's builtin version of bit_cast if it's available
instead of just resorting to the builtin `memcpy`.
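Roughly the usual feature-test pattern (a sketch, not the verbatim AK
implementation):

    #ifdef __has_builtin
    #    if __has_builtin(__builtin_bit_cast)
    #        define BIT_CAST_USES_BUILTIN 1
    #    endif
    #endif

    template<typename To, typename From>
    constexpr To bit_cast(From const& from)
    {
    #ifdef BIT_CAST_USES_BUILTIN
        // Compiler-provided bit_cast: also usable in constant expressions.
        return __builtin_bit_cast(To, from);
    #else
        // Fallback: copy the object representation with memcpy.
        static_assert(sizeof(To) == sizeof(From), "sizes must match");
        To to;
        __builtin_memcpy(&to, &from, sizeof(To));
        return to;
    #endif
    }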
|
|
The changed check for `fs` was accidentally comparing the current `ds`
to the previous `fs`.
|
|
The current flap strength makes the game a lot more difficult than
other flappy games. Decrease the flap strength to make it a little
easier to get higher scores.
Before this change I could only get past 3 or 4 obstacles, now I can
get 15 or 20 in, which seems more on par with other flappy games.
|
|
This allows `--version` and `--help` to work properly even if we do not
supply the required positional arguments to a command.
|
|
This is an Annex B extension to RegExp.prototype.
|
|
For example, consider the following pattern:
new RegExp('\ud834\udf06', 'u')
With this pattern, the regex parser should insert the UTF-8 encoded
bytes 0xf0, 0x9d, 0x8c, and 0x86. However, because these characters are
currently treated as normal char types, they have a negative value since
they are all > 0x7f. Then, due to sign extension, when these characters
are cast to u64, the sign bit is preserved. The result is that these
bytes are inserted as 0xfffffffffffffff0, 0xffffffffffffff9d, etc.
Fortunately, there are only a few places where we insert bytecode with
the raw characters. In these places, be sure to treat the bytes as u8
before they are cast to u64.
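A minimal standalone illustration of the fix (not the actual bytecode
emitter):

    #include <cstdint>

    // 'ch' holds a UTF-8 continuation byte such as 0x9d.
    char ch = static_cast<char>(0x9d);

    // Wrong: if char is signed, the value sign-extends when widened,
    // producing 0xffffffffffffff9d.
    uint64_t bad = static_cast<uint64_t>(ch);

    // Right: go through an unsigned byte first so the value 0x9d is
    // preserved in the bytecode.
    uint64_t good = static_cast<uint64_t>(static_cast<uint8_t>(ch));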
|
|
RegExp.prototype.compile will require invoking RegExpInitialize on an
already-existing RegExpObject. Break up RegExpCreate into RegExpAlloc
and RegExpInitialize to support this.
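A hedged sketch of the split (signatures are illustrative, not the
exact LibJS declarations):

    // Allocate an uninitialized RegExpObject.
    RegExpObject* regexp_alloc(GlobalObject&);

    // (Re)initialize an existing RegExpObject from pattern and flags.
    RegExpObject* regexp_initialize(GlobalObject&, RegExpObject&, Value pattern, Value flags);

    // RegExpCreate becomes a thin wrapper over the two...
    RegExpObject* regexp_create(GlobalObject& global_object, Value pattern, Value flags)
    {
        auto* regexp = regexp_alloc(global_object);
        return regexp_initialize(global_object, *regexp, pattern, flags);
    }

    // ...while RegExp.prototype.compile can call regexp_initialize()
    // directly on an already-existing object.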
|
|
I messed this up when I changed it before, which was causing a crash.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Currently just sets the renderer option for what polygon mode we
want the rasterizer to draw in. GLQuake only uses `GL_FRONT_AND_BACK`
with `GL_FILL` (which implies both back and front facing triangles
are to be filled completely by the rasterizer), so keeping this as
a small stub is perfectly fine for now.
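A sketch of a stub along those lines (not the verbatim LibGL code; the
g_gl_context indirection is assumed here):

    void glPolygonMode(GLenum face, GLenum mode)
    {
        // GLQuake only ever passes GL_FRONT_AND_BACK with GL_FILL, so
        // for now just remember the requested mode as a renderer
        // option for the rasterizer.
        g_gl_context->gl_polygon_mode(face, mode);
    }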
|
|
With the new parser, we started interpreting the `opacity` property as a
string value, which made it turn into `auto`, and so anything with
opacity ended up not visible (e.g. the header on google.com).
This patch restores our old behavior for `opacity` by interpreting it
as a numeric value with optional decimals.
|
|
|
|
|
|
Load the wallpaper in a background action instead of on the main thread.
This reduces the time to first paint, and makes the UI feel more
responsive when clicking on wallpaper thumbnails.
The behavior of the method is changed slightly to return true if it
successfully "loads" the empty path. This makes the API a little more
consistent, where "true" means "I made changes" and "false" means "I did
not make changes". No call sites currently use the return value, so no
changes are needed to those.
|
|
Instead of loading every icon, only load the filetype image icon if it
hasn't been already. This icon is used by IconViews that need to lazily
load thumbnails, which don't need any of the other icon types.
Spending the time to load the unneeded images was causing delays to
first paint in BackgroundSettings.
|
|
This is a very similar fix to the previous commit, but here it's
due to my oversight when I was adding a 'Save as...' feature.
|
|
Prior to this change, the window title was updated only when a new file
was opened, which means that it wasn't updated when the user selected
an already opened file in the split view.
This change updates the title whenever the active editor changes.
In addition, the title update logic now has its own function,
as it'll also be used in the next commit. :)
|
|
The on-target pipelines have a timeout of 6 hours to allow time for a
clean toolchain + Serenity build. Tests should time out much sooner than
that though.
|
|
Caches on Azure are immutable - so if a cache changes, but its key does
not, then the cache is not updated. Include a timestamp in the ccache
key so that we always push an updated cache from the master branch. Then
use a subkey without the timestamp to pull the cache.
We use a similar trick on GitHub Actions.
|
|
Previously we'd synthesize an event and invoke context_menu_event()
directly. This prevented the event from bubbling even if ignored.
|
|
|
|
This has several benefits:
1) We no longer just blindly dereference a null pointer in various places
2) We will get nicer runtime error messages if the current process does
turn out to be null at the call location
3) GCC no longer complains about possible nullptr dereferences when
compiling without KUBSAN
|
|
For example, "property.br\u{64}wn" should resolve to "property.brown".
To support this behavior, this commit changes the Token class to hold
both the evaluated identifier name and a view into the original source
for the unevaluated name. There are some contexts in which identifiers
are not allowed to contain Unicode escape sequences; for example, export
statements of the form "export {} from foo.js" forbid escapes in the
identifier "from".
The test file is added to .prettierignore because prettier will replace
all escaped Unicode sequences with their unescaped value.
|
|
The parser now stores this as a FlyString everywhere, so consumers can
also use it as a FlyString.
|
|
Unfortunately, this requires a slight divergence in the way the capture
group names are stored. Previously, the generated byte code would simply
store a view into the regex pattern string, so no string copying was
required.
Now, the escape sequences are decoded into a new string, and all parsed
capture group names are stored in a vector in the parser result
structure. The byte code then stores a view into the
corresponding string in that vector.
|
|
|
|
This will allow regex::Lexer users to invoke GenericLexer consumption
methods, such as GenericLexer::consume_escaped_codepoint().
This also allows for de-duplicating common methods between the lexers.
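A rough sketch of the relationship (simplified):

    namespace regex {

    class Lexer : public GenericLexer {
    public:
        explicit Lexer(StringView source)
            : GenericLexer(source)
        {
        }

        Token next();

        // Users can now also reach GenericLexer helpers directly, e.g.
        // lexer.consume_escaped_codepoint(), and shared consumption
        // logic lives in one place.
    };

    }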
|
|
|