Age | Commit message | Author |
|
In the given String, invert_case() swaps lowercase characters with
uppercase ones and vice versa.
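A minimal sketch of the behavior, using std::string as a stand-in for
AK's String (the name invert_case comes from the commit; the body here
is an assumption):

    #include <cctype>
    #include <string>

    // Hedged stand-in for String::invert_case(): flips the case of
    // ASCII letters and leaves every other character untouched.
    std::string invert_case(std::string const& input)
    {
        std::string result;
        result.reserve(input.size());
        for (unsigned char ch : input) {
            if (std::isupper(ch))
                result += static_cast<char>(std::tolower(ch));
            else if (std::islower(ch))
                result += static_cast<char>(std::toupper(ch));
            else
                result += static_cast<char>(ch);
        }
        return result;
    }
    // invert_case("Hello, Friends!") == "hELLO, fRIENDS!"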
|
|
This is an issue on systems that don't have the empty base class
optimisation (such as Windows), and we normally don't need to care -
however, static_cast is technically the right thing to use, so let's
use that instead.
Co-Authored-By: Daniel Bertalan <dani@danielbertalan.dev>
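An illustrative example (not from the commit) of why the cast choice
matters whenever a base subobject lives at a non-zero offset:

    struct Base1 { int x; };
    struct Base2 { int y; };
    struct Derived : Base1, Base2 { };

    void example(Derived& d)
    {
        // static_cast applies the correct offset to reach the Base2
        // subobject; reinterpret_cast would just reuse d's address and
        // end up pointing at Base1's data.
        Base2* correct = static_cast<Base2*>(&d);
        Base2* wrong = reinterpret_cast<Base2*>(&d);
        (void)correct;
        (void)wrong;
    }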
|
|
The compiler would complain about `__builtin_memcpy` in ByteBuffer::copy
writing out of bounds, as it isn't able to deduce the invariant that the
inline buffer is only used when the requested size is smaller than the
inline capacity.
The other change is more bizarre. If the destructor's declaration
exists, gcc complains about a `delete` operation causing an
out-of-bounds array access.
error: array subscript 'DHCPv4Client::__as_base [0]' is partly outside
array bounds of 'unsigned char [8]' [-Werror=array-bounds]
14 | ~DHCPv4Client() = default;
| ^
This looks like a compiler bug, and I'll report it if I find a suitable
reduced reproducer.
|
|
We are allowed to directly compare `f32x4` with a `float`, so make use
of it.
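A small sketch, assuming the GCC/Clang vector extensions that f32x4 is
built on:

    typedef float f32x4 __attribute__((vector_size(16)));

    // The scalar is broadcast across all lanes; each lane of the
    // result is all-ones where the comparison holds and zero where it
    // doesn't.
    bool all_positive(f32x4 v)
    {
        auto mask = v > 0.0f; // no manual splat of the float needed
        return mask[0] && mask[1] && mask[2] && mask[3];
    }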
|
|
|
|
This fixes the lagom build on aarch64, as `__builtin_sincosf` doesn't
take double arguments.
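For reference, a sketch of the float-only builtin in question:

    // __builtin_sincosf computes both values in one call, but strictly
    // for float; the double variant is __builtin_sincos.
    void sincos_example(float angle)
    {
        float s, c;
        __builtin_sincosf(angle, &s, &c);
    }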
|
|
This fixes the build on M1 Macs.
|
|
|
|
This allows direct inlining and hides away some assembly and
bit-fiddling when manipulating the floating point environment.
This only implements the x87/SSE versions for now.
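As an illustration of the kind of bit-fiddling involved (not the
commit's actual code), reading and writing the SSE control/status
register looks like this:

    #include <cstdint>

    // MXCSR holds the SSE rounding mode, exception masks and sticky
    // exception flags.
    static inline uint32_t read_mxcsr()
    {
        uint32_t mxcsr;
        asm volatile("stmxcsr %0" : "=m"(mxcsr));
        return mxcsr;
    }

    static inline void write_mxcsr(uint32_t mxcsr)
    {
        asm volatile("ldmxcsr %0" : : "m"(mxcsr));
    }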
|
|
This uses the `fistp` and `cvts[sd]2si` instructions, respectively, to
round floating point values with potentially just one instruction.
This falls back to `llrint[fl]?` on aarch64 for now.
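A rough sketch of the idea under those constraints (the helper name is
assumed):

    #include <cmath>
    #include <cstdint>

    // Rounds using the current rounding mode; a single cvtsd2si on
    // x86-64, with the llrint fallback described above elsewhere.
    static inline int64_t round_to_i64(double value)
    {
    #if defined(__x86_64__)
        int64_t result;
        asm("cvtsd2si %1, %0" : "=r"(result) : "x"(value));
        return result;
    #else
        return llrint(value);
    #endif
    }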
|
|
This is very annoying if we're (intentionally) passing invalid UTF-8 into
Utf8View.
|
|
|
|
This new class with an admittedly long OOP-y name provides a circular
queue in shared memory. The queue is a lock-free synchronous queue
implemented with atomics, and its implementation is significantly
simplified by only accounting for one producer (and multiple consumers).
It is intended to be used as a producer-consumer communication
data structure across processes. The original motivation behind this
class is efficient short-period transfer of audio data in userspace.
This class includes formal proofs of several correctness properties of
the main queue operations `enqueue` and `dequeue`. These proofs are not
100% complete in their existing form as the invariants they depend on
are "handwaved". This seems fine to me right now, as any proof is better
than no proof :^). Anyway, the proofs should build confidence that the
implemented algorithms, which are only roughly based on existing work,
operate correctly in even the worst-case concurrency scenarios.
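A heavily condensed, single-process sketch of the enqueue/dequeue pair
described above; the real class lives in shared memory and supports
multiple consumers (which needs a CAS loop in dequeue), both of which
this omits:

    #include <atomic>
    #include <cstddef>
    #include <utility>

    template<typename T, size_t Capacity>
    class SingleProducerCircularQueue {
    public:
        // Called by the one producer only.
        bool enqueue(T value)
        {
            size_t tail = m_tail.load(std::memory_order_relaxed);
            if (tail - m_head.load(std::memory_order_acquire) == Capacity)
                return false; // full
            m_storage[tail % Capacity] = std::move(value);
            m_tail.store(tail + 1, std::memory_order_release);
            return true;
        }

        // Simplified to a single consumer; multiple consumers would
        // have to claim the head index with compare_exchange instead.
        bool dequeue(T& out)
        {
            size_t head = m_head.load(std::memory_order_relaxed);
            if (head == m_tail.load(std::memory_order_acquire))
                return false; // empty
            out = std::move(m_storage[head % Capacity]);
            m_head.store(head + 1, std::memory_order_release);
            return true;
        }

    private:
        T m_storage[Capacity];
        std::atomic<size_t> m_head { 0 };
        std::atomic<size_t> m_tail { 0 };
    };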
|
|
This is particularly important to avoid false sharing, which thrashes
performance when two process-shared atomics are on the same cache line.
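For example (a sketch, with 64 bytes assumed as the cache line size):

    #include <atomic>
    #include <cstddef>

    // Each index gets its own cache line, so the producer bumping the
    // tail doesn't invalidate the line the consumers poll for the head.
    struct QueueIndices {
        alignas(64) std::atomic<size_t> head { 0 };
        alignas(64) std::atomic<size_t> tail { 0 };
    };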
|
|
|
|
This allows calling this function with any argument type for which the
appropriate traits and operators have been implemented, so that it can
be compared to the Vector's item type.
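A sketch of the shape of such a signature (simplified Vector; the
member name contains_slow is an assumption here):

    #include <cstddef>

    template<typename T>
    class Vector {
    public:
        // U only needs to be equality-comparable with T; it doesn't
        // have to be T itself.
        template<typename U>
        bool contains_slow(U const& value) const
        {
            for (size_t i = 0; i < m_size; ++i) {
                if (m_data[i] == value)
                    return true;
            }
            return false;
        }

    private:
        T* m_data { nullptr };
        size_t m_size { 0 };
    };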
|
|
|
|
This patch adds a header containing the fuzzy match algorithm
previously used in Assistant. The algorithm was moved to AK since there
are many places where a search may benefit from fuzziness.
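For flavor, the classic baseline such a header might build on (this is
not necessarily AK's algorithm): a case-insensitive subsequence check.

    #include <cctype>
    #include <string_view>

    // True if every character of `needle` appears in `haystack` in
    // order, ignoring case.
    bool fuzzy_matches(std::string_view needle, std::string_view haystack)
    {
        size_t matched = 0;
        for (char ch : haystack) {
            if (matched == needle.size())
                break;
            if (std::tolower(static_cast<unsigned char>(ch))
                == std::tolower(static_cast<unsigned char>(needle[matched])))
                ++matched;
        }
        return matched == needle.size();
    }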
|
|
Instead of just to_uint<u64>().
|
|
Some functions want to ignore cv-qualifiers, and it's much easier to
constrain the type through a concept than a separate requires clause on
the function.
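A sketch of the pattern with standard library names (the AK equivalents
differ):

    #include <concepts>
    #include <type_traits>

    // Satisfied by U, U const, U volatile and U const volatile alike.
    template<typename T, typename U>
    concept SameAsIgnoringCV = std::same_as<std::remove_cv_t<T>, U>;

    // The constraint now lives in the template head instead of a
    // trailing requires clause on every such function.
    template<SameAsIgnoringCV<int> T>
    void takes_any_int(T&) { }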
|
|
Currently there is no AK::IPv6Address in the kernel. But when there
is, KStrings won't resolve properly, because they are in the Kernel
namespace.
|
|
|
|
Both calls essentially only differ in one boolean, which dictates
whether to print the value in uppercase or lowercase.
Move the long function call into a new function and pass in the
"uppercase" boolean seperately to avoid having to write everything
twice.
|
|
Those functions only differ by the input type of `number`. No other
wrapper does this, as they rely on adjusting the type of the argument on
the caller side instead.
Avoid specializing too much by just doing the same for signed numbers.
|
|
|
|
|
|
This allows us to use float and double as hash keys.
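One plausible way to do this, as an assumption rather than AK's exact
approach: hash the value's bit pattern so an existing integer hash can
be reused.

    #include <bit>
    #include <cstdint>
    #include <functional>

    // Reinterpret the double's bits as an integer and hash those; note
    // that this makes 0.0 and -0.0 hash differently.
    std::size_t double_hash(double value)
    {
        return std::hash<std::uint64_t>{}(std::bit_cast<std::uint64_t>(value));
    }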
|
|
Since we no longer use `String` inside of the kernel code, we can drop
this `#ifndef`.
|
|
We were decoding and then re-encoding the query string in URLs. This
round-trip caused us to lose information about plus ('+') ASCII
characters encoded as "%2B": for example, "a%2Bb" decodes to "a+b",
which then re-encodes to a literal '+'.
|
|
This matches what the URL and HTML specifications ask us to do.
|
|
A change was made previously to percent-encode plus signs in order to
fix an issue with the Google cookie consent page.
Unfortunately, this treated a symptom of the problem rather than the
root cause, and is incorrect behavior.
|
|
This is the name used for every other collection type, so let's be
consistent.
|
|
|
|
When we want to use the find_first_index that base Vector provides, we
need to provide an element of the real contained type. That's
impossible for OwnPtr, however, and even with RefPtr there might be
instances where we have a raw reference to the object we want to find,
but no smart pointer. Therefore, overloading this function (with an
identical body; the magic is done by the find_index templatization)
with `T const&` as a parameter allows those use cases.
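A sketch of the idea with STL stand-ins (AK's version lives on Vector
and uses Optional; the address comparison here is one possible matching
strategy):

    #include <cstddef>
    #include <memory>
    #include <optional>
    #include <vector>

    // Overload taking T const& so callers holding only a raw reference
    // can search a vector of smart pointers.
    template<typename T>
    std::optional<size_t> find_first_index(
        std::vector<std::unique_ptr<T>> const& vector, T const& value)
    {
        for (size_t i = 0; i < vector.size(); ++i) {
            if (vector[i].get() == &value)
                return i;
        }
        return std::nullopt;
    }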
|
|
On oss-fuzz, the LibJS REPL is provided a file encoded with Windows-1252
with the following contents:
/ô¡°½/
The REPL assumes the input file is UTF-8. So in Windows-1252, the above
is represented as [0x2f 0xf4 0xa1 0xb0 0xbd 0x2f]. The inner 4 bytes are
actually a valid UTF-8 encoding if we only look at the most significant
bits to parse leading/continuation bytes. However, it decodes to the
code point U+121C3D, which is not a valid code point.
This commit adds additional validation to ensure the decoded code point
itself is also valid.
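The added check boils down to something like this (the surrogate
handling is my assumption; the out-of-range case is the one from the
example above):

    #include <cstdint>

    constexpr bool is_valid_code_point(uint32_t code_point)
    {
        if (code_point > 0x10FFFF)
            return false; // e.g. U+121C3D from the oss-fuzz input
        if (code_point >= 0xD800 && code_point <= 0xDFFF)
            return false; // UTF-16 surrogates are not scalar values
        return true;
    }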
|
|
These functions are _very_ misleading, as `first()` and `last()` return
references, but `{first,last}_matching()` return copies of the values.
This commit makes it so that they now return Optional<T&>, eliminating
the copy and the confusion.
|
|
This avoids a useless copy of the value, as most of the users (except
one) actually just need a reference to the value.
|
|
While the previous implementation always copied the object, returning a
non-const reference to a const object is not valid.
|
|
This implements Optional<T&> as a T*, which has been missing since the
early days of Optional.
A lot of find_foo() APIs return an Optional<T>, which imposes a
pointless copy on the underlying value and can sometimes be very
misleading. With this change, those APIs can return Optional<T&>
instead.
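A bare-bones sketch of the specialization (the real one carries more of
Optional's API):

    template<typename T>
    class Optional;

    // Optional<T&> stores a plain pointer: empty is nullptr, and no
    // copy of the referenced object is ever made.
    template<typename T>
    class Optional<T&> {
    public:
        Optional() = default;
        Optional(T& value)
            : m_pointer(&value)
        {
        }

        bool has_value() const { return m_pointer != nullptr; }
        T& value() const { return *m_pointer; }

    private:
        T* m_pointer { nullptr };
    };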
|
|
This method exploits the fact that the values themselves hold the tree
pointers, and as a result this lets us skip the O(log n) traversal down
to the matching Node for a Key-Value pair.
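Sketched out, the layout that makes this possible (names assumed):

    // Because the value embeds its own tree linkage, removal can start
    // from the node itself; there is no O(log n) descent to find it.
    struct TreeNode {
        TreeNode* parent { nullptr };
        TreeNode* left { nullptr };
        TreeNode* right { nullptr };
    };

    struct Value {
        TreeNode node; // the value holds the tree pointers
        int key { 0 };
    };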
|
|
|
|
|
|
|
|
|
|
Adds a new optional parameter 'reserved_chars' to
AK::URL::percent_encode, allowing the caller to specify custom
characters to be percent encoded. This is then used to percent encode
plus signs by HttpRequest::to_raw_request.
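A loose sketch of the new knob (the signature and the default escape
set here are assumptions, not AK's actual policy):

    #include <cctype>
    #include <string>
    #include <string_view>

    // Characters in `reserved_chars` are escaped in addition to the
    // ones the default policy already escapes.
    std::string percent_encode(std::string_view input,
        std::string_view reserved_chars = {})
    {
        static constexpr char hex_digits[] = "0123456789ABCDEF";
        // Assumed default policy: keep unreserved characters and
        // sub-delims (including '+') as-is.
        constexpr std::string_view default_safe = "-_.~!$&'()*+,;=";
        std::string output;
        for (unsigned char ch : input) {
            bool safe = std::isalnum(ch)
                || default_safe.find(static_cast<char>(ch)) != std::string_view::npos;
            if (!safe || reserved_chars.find(static_cast<char>(ch)) != std::string_view::npos) {
                output += '%';
                output += hex_digits[ch >> 4];
                output += hex_digits[ch & 0xF];
            } else {
                output += static_cast<char>(ch);
            }
        }
        return output;
    }
    // percent_encode("a+b", "+") == "a%2Bb", while
    // percent_encode("a+b") == "a+b".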
|
|
|
|
|
|
|
|
This simplifies some of the bucket state handling code, as there's now
an easy way of checking the basic category of bucket state.
|
|
As seen on TV, HashTable can get "thrashed", i.e. it has a bunch of
deleted buckets that count towards the load factor. This means that hash
tables which are large enough for their contents need to be resized.
This was fixed in 9d8da16 with a workaround that shrinks the HashTable
back down in these cases, as after the resize and re-hash the load
factor is very low again. However, that's not a good solution. If you
insert and remove repeatedly around a size boundary, you might get
frequent resizes, which involve frequent re-allocations.
The new solution is an in-place rehashing algorithm that I came up
with. (Do complain to me, I'm at fault.) Basically, it iterates the
buckets and re-hashes the used buckets while marking the deleted slots
empty. The issue arises with collisions in the re-hash. For this
reason, there are two kinds of used buckets during the re-hashing: the
normal "used" buckets, which are old and are treated as free space, and
the "re-hashed" buckets, which are new and treated as used space, i.e.
they trigger probing. Therefore, the procedure for relocating a
bucket's contents is as follows (see the sketch below):
- Locate the "real" bucket of the contents with the hash. That bucket
  is the starting point for the target bucket, and the current (old)
  bucket is the bucket we want to move.
- While we still need to move the bucket:
  - If we're the target, something strange happened last iteration or
    we just re-hashed to the same location. We're done.
  - If the target is empty or deleted, just move the bucket. We're
    done.
  - If the target is a re-hashed full bucket, we probe by
    double-hashing our hash as usual, and use the result as our target
    for the next iteration.
  - If the target is an old full bucket, we swap the target and to-move
    buckets. Now the bucket to move is at its correct location, and the
    former target, which still needs to find a new place, is in the
    bucket to move. So we can just continue with the loop; the target
    is re-obtained from the bucket to move.
This happens for each and every bucket, though some buckets are
"coincidentally" moved before their point of iteration is reached.
Either way, this guarantees full in-place movement (even without stack
storage) and therefore space complexity of O(1). Time complexity is
amortized O(2n) assuming a good hashing function.
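A condensed sketch of that relocation loop (all bucket and state names
are assumptions for illustration, not AK's internals):

    #include <cstddef>
    #include <utility>

    enum class BucketState { Empty, Deleted, Used, Rehashed };

    struct Bucket {
        BucketState state { BucketState::Empty };
        int value { 0 }; // stand-in for the real storage
    };

    struct Table {
        Bucket* buckets { nullptr };
        size_t capacity { 0 };

        size_t ideal_index_of(Bucket const&) const; // re-hash of contents
        size_t next_probe_index(size_t) const;      // double-hash probing

        // Relocate one old "Used" bucket during the in-place rehash.
        void relocate(size_t from)
        {
            size_t target = ideal_index_of(buckets[from]);
            while (true) {
                if (target == from) { // re-hashed to its own slot; done
                    buckets[from].state = BucketState::Rehashed;
                    return;
                }
                Bucket& t = buckets[target];
                if (t.state == BucketState::Empty || t.state == BucketState::Deleted) {
                    t.value = buckets[from].value; // just move; done
                    t.state = BucketState::Rehashed;
                    buckets[from].state = BucketState::Empty;
                    return;
                }
                if (t.state == BucketState::Rehashed) {
                    // Already in its final place: probe further.
                    target = next_probe_index(target);
                    continue;
                }
                // Old full bucket: swap, mark the now-correct slot, and
                // find a home for the evicted contents next.
                std::swap(t.value, buckets[from].value);
                t.state = BucketState::Rehashed;
                target = ideal_index_of(buckets[from]);
            }
        }
    };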
This leads to a performance improvement of ~30% on the benchmark
introduced with the last commit.
Co-authored-by: Hendiadyoin1 <leon.a@serenityos.org>
|