This simplifies some of the bucket state handling code, as there's now
an easy way of checking the basic category of bucket state.
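A sketch of what such a category check could look like (the helper and state names here are assumptions, not the exact AK code):
```
// Hypothetical helper: is this bucket occupied, in any sense?
constexpr bool is_used_bucket(BucketState state)
{
    return state == BucketState::Used || state == BucketState::Rehashed;
}
```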
|
|
As seen on TV, HashTable can get "thrashed", i.e. it accumulates a bunch
of deleted buckets that still count towards the load factor. This means
that even hash tables which are large enough for their contents get
resized. This was fixed in 9d8da16 with a workaround that shrinks the
HashTable back down in these cases, as the load factor is very low again
after the resize and re-hash. However, that's not a good solution: if
you insert and remove repeatedly around a size boundary, you might get
frequent resizes, which involve frequent re-allocations.
The new solution is an in-place rehashing algorithm that I came up with.
(Do complain to me, I'm at fault.) Basically, it iterates the buckets
and re-hashes the used buckets while marking the deleted slots empty.
The tricky part is handling collisions during the re-hash. For this
reason, there
are two kinds of used buckets during the re-hashing: the normal "used"
buckets, which are old and are treated as free space, and the
"re-hashed" buckets, which are new and treated as used space, i.e. they
trigger probing. Therefore, the procedure for relocating a bucket's
contents is as follows:
- Locate the "real" bucket of the contents with the hash. That bucket is
the starting point for the target bucket, and the current (old) bucket
is the bucket we want to move.
- While we still need to move the bucket:
- If we're the target, something strange happened last iteration or we
just re-hashed to the same location. We're done.
- If the target is empty or deleted, just move the bucket. We're done.
- If the target is a re-hashed full bucket, we probe by double-hashing
our hash as usual, which yields the target for the next iteration.
- If the target is an old full bucket, we swap the target and to-move
buckets. Now the bucket to move is at the correct location, and the
former target, which still needs to find a new place, is in the
bucket to move. So we can just continue with the loop; the target is
re-obtained from the bucket to move. This happens for each and every
bucket, though some buckets are "coincidentally" moved before their
point of iteration is reached. Either way, this guarantees full in-place
movement (even without stack storage) and therefore space complexity of
O(1). Time complexity is amortized O(2n) assuming a good hashing
function.
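A condensed sketch of that relocation procedure; the helper and state names (`bucket_for_hash`, `double_hash`, `move_contents`, `swap_contents`, `BucketState::Rehashed`) are assumptions for illustration, not the literal implementation:
```
// Move one old "used" bucket to its re-hashed position, in place.
void rehash_bucket_in_place(Bucket& to_move)
{
    auto hash = hash_for(to_move);
    auto* target = &bucket_for_hash(hash); // the "real" bucket for these contents

    while (true) {
        if (target == &to_move) {
            // We re-hashed to our own location; nothing to move.
            to_move.state = BucketState::Rehashed;
            return;
        }
        if (target->state == BucketState::Empty || target->state == BucketState::Deleted) {
            // Free slot: move the contents there and we're done.
            move_contents(to_move, *target);
            target->state = BucketState::Rehashed;
            to_move.state = BucketState::Empty;
            return;
        }
        if (target->state == BucketState::Rehashed) {
            // New (already re-hashed) bucket: probe onwards via double hashing.
            hash = double_hash(hash);
            target = &bucket_for_hash(hash);
            continue;
        }
        // Old "used" bucket: swap so our contents land correctly, then keep
        // relocating the evicted contents; re-obtain the target from them.
        swap_contents(to_move, *target);
        target->state = BucketState::Rehashed;
        hash = hash_for(to_move);
        target = &bucket_for_hash(hash);
    }
}
```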
This leads to a performance improvement of ~30% on the benchmark
introduced with the last commit.
Co-authored-by: Hendiadyoin1 <leon.a@serenityos.org>
|
|
The hash table buckets had three different state booleans that are in
fact mutually exclusive. In preparation for further states, this commit
consolidates them into one enum. This has the added benefit of not
relying on the compiler's boolean packing anymore; we now definitely
need only one byte for the bucket state.
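The consolidated state plausibly looks something like this (the enumerator names are assumptions):
```
enum class BucketState : u8 {
    Empty,
    Used,
    Deleted,
};
```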
|
|
|
|
Currently this can parse XML and resolve external resources/references,
and read a DTD (but not apply or verify its rules).
That's good enough for _most_ XHTML documents as the HTML 5 spec
enforces its own rules about document well-formedness, and does not make
use of XML DTDs (aside from a list of predefined entities).
An accompanying `xml` utility is provided that can read and dump XML
documents, and can also run the XML conformance test suite.
|
|
Similar to 'SameAs', but for multiple types.
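Presumably a variadic concept along these lines (a sketch; the name and exact definition are assumptions):
```
// Hypothetical definition: T satisfies the concept if it is the same
// as any one of the listed types, reusing the existing SameAs concept.
template<typename T, typename... Ts>
concept OneOfTypes = (SameAs<T, Ts> || ...);
```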
|
|
It's much easier to spot the function name (which is usually what
you're looking for) this way.
|
|
It's often useful to have the negated version, so instead of making a
local lambda for it, let's just add the negated form too.
|
|
This is pretty useful for making trees.
|
|
|
|
|
|
This makes it possible to return one from a function.
|
|
This is an enum-like type that works with arbitrarily sized storage
beyond u64, which is the limit for a regular enum class and caps it at
64 members when bit field behavior is needed.
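A rough sketch of the idea (the wide storage type and names are assumptions, not the actual implementation):
```
// An enum-like wrapper over arbitrarily wide storage (e.g. a 128- or
// 256-bit unsigned integer type), so more than 64 flag bits are possible.
template<typename T>
struct ArbitrarySizedEnumLike {
    T value {};

    constexpr ArbitrarySizedEnumLike operator|(ArbitrarySizedEnumLike const& other) const
    {
        return { value | other.value };
    }

    constexpr bool has_flag(ArbitrarySizedEnumLike const& mask) const
    {
        return (value & mask.value) == mask.value;
    }
};
```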
Co-authored-by: Ali Mohammad Pur <mpfard@serenityos.org>
|
|
|
|
This makes it usable in the Kernel. :^)
|
|
Since this macro was created, we've gained a couple more parsers in the
system :^)
|
|
|
|
|
|
Previously, case-insensitively searching the haystack "Go Go Back" for
the needle "Go Back" would return false:
1. Match the first three characters. "Go ".
2. Notice that 'G' and 'B' don't match.
3. Skip ahead 3 characters, plus 1 for the outer for-loop.
4. Now, the haystack is effectively "o Back", so the match fails.
Reducing the skip by 1 fixes this issue. I'm not 100% convinced this
fixes all cases, but I haven't been able to find any cases where it
doesn't work now. :^)
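For comparison, a deliberately naive version with no skipping at all, which cannot exhibit this class of bug (a sketch, not AK's implementation):
```
#include <cctype>
#include <cstddef>

// Naive case-insensitive substring search: O(n*m), but it restarts the
// match from every haystack position instead of skipping ahead.
static bool contains_ignoring_case(char const* haystack, size_t haystack_length,
                                   char const* needle, size_t needle_length)
{
    if (needle_length == 0)
        return true;
    if (needle_length > haystack_length)
        return false;
    for (size_t hi = 0; hi + needle_length <= haystack_length; ++hi) {
        size_t si = 0;
        while (si < needle_length
               && tolower(static_cast<unsigned char>(haystack[hi + si]))
                   == tolower(static_cast<unsigned char>(needle[si])))
            ++si;
        if (si == needle_length)
            return true;
    }
    return false;
}
```
With this shape, "Go Go Back" vs "Go Back" fails at offset 0 but succeeds at offset 3.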
|
|
Day and month name constants are defined in numerous places. This
pulls them together into a single place and eliminates the
duplication. It also ensures they are `constexpr`.
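The shared constants presumably take a shape like this (illustrative; the exact names are assumptions):
```
static constexpr Array<StringView, 7> short_day_names = {
    "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"
};

static constexpr Array<StringView, 12> short_month_names = {
    "Jan", "Feb", "Mar", "Apr", "May", "Jun",
    "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"
};
```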
|
|
Even though the StringView(char*, size_t) constructor only runs its
overflow check when evaluated in a runtime context, the code generated
here could prevent the compiler from optimizing invocations from the
StringView user-defined literal (verified on Compiler Explorer).
This changes the user-defined literal declaration to be consteval to
ensure it is evaluated at compile time.
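The changed declaration is presumably along these lines (a sketch):
```
// consteval (rather than constexpr) forces evaluation at compile time,
// so the runtime overflow check can never be emitted for literals.
consteval AK::StringView operator""sv(char const* cstring, size_t length)
{
    return AK::StringView(cstring, length);
}
```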
|
|
|
|
C++20 provides the `requires` clause, which simplifies the ability to
limit overload resolution. Prefer it over `EnableIf`.
With all uses of `EnableIf` removed, also remove the implementation so
future devs are not tempted.
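To illustrate the difference, using standard type traits rather than AK's exact `EnableIf` signature:
```
#include <type_traits>

// Before: constrain the overload via SFINAE.
template<typename T, std::enable_if_t<std::is_integral_v<T>, int> = 0>
T twice_sfinae(T value) { return value * 2; }

// After: a C++20 requires clause states the constraint directly.
template<typename T>
requires std::is_integral_v<T>
T twice_requires(T value) { return value * 2; }
```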
|
|
|
|
|
|
|
|
The alphabet and lookup table were created and copied to the stack on
each call. Create them once and store them in static memory.
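The pattern looks roughly like this (names assumed; shown with standard types for self-containment):
```
#include <array>
#include <cstdint>

// Build the decode table once, at compile time, instead of on every call.
static constexpr char s_alphabet[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

static constexpr auto s_lookup_table = [] {
    std::array<int16_t, 256> table {};
    for (auto& entry : table)
        entry = -1; // -1 marks characters outside the alphabet
    for (int16_t i = 0; i < 64; ++i)
        table[static_cast<uint8_t>(s_alphabet[i])] = i;
    return table;
}();
```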
|
|
Since the allocated memory is going to be zeroed immediately anyway,
let's avoid redundantly scrubbing it with MALLOC_SCRUB_BYTE just before
that.
The latest versions of gcc and Clang can automatically do this malloc +
memset -> calloc optimization, but I've seen a couple of places where it
failed to be done.
This commit also adds a naive kcalloc function to the kernel that
doesn't (yet) eliminate the redundancy like the userland does.
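That naive kcalloc presumably looks something like this (a sketch; it still zeroes redundantly rather than handing out pre-zeroed pages):
```
// Overflow-checked multiply, then allocate and zero.
void* kcalloc(size_t count, size_t size)
{
    if (Checked<size_t>::multiplication_would_overflow(count, size))
        return nullptr;
    size_t total = count * size;
    void* ptr = kmalloc(total);
    if (ptr)
        memset(ptr, 0, total);
    return ptr;
}
```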
|
|
Calculating sin and cos at once is quite a bit cheaper than calculating
them individually.
x87 even has a dedicated instruction for it: `fsincos`.
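A sketch of how that instruction can be used (x86-specific; the function shape is an assumption):
```
// fsincos replaces st(0) with sin(angle) and pushes cos(angle),
// so afterwards st(0) = cos and st(1) = sin.
static void sincos(double angle, double& sine, double& cosine)
{
    asm("fsincos"
        : "=t"(cosine), "=u"(sine)
        : "0"(angle));
}
```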
|
|
This allows, for example, creating a Vector from a subset of another
Vector.
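Hypothetical usage (the exact constructor and helper shapes are assumptions):
```
Vector<int> numbers { 1, 2, 3, 4, 5 };
Vector<int> middle { numbers.span().slice(1, 3) }; // holds 2, 3, 4
```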
|
|
For security critical code we need to have some way of performing
constant time buffer comparisons.
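The classic shape of such a comparison (a sketch, not necessarily the exact code added here):
```
// OR the byte differences together so the loop never exits early,
// making the running time independent of where the buffers differ.
bool timing_safe_compare(void const* b1, void const* b2, size_t len)
{
    auto* c1 = static_cast<u8 const*>(b1);
    auto* c2 = static_cast<u8 const*>(b2);
    u8 difference = 0;
    for (size_t i = 0; i < len; ++i)
        difference |= c1[i] ^ c2[i];
    return difference == 0;
}
```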
|
|
This keeps us from stopping early and not rendering the argument at all.
|
|
This shrinks the JsonParser class from 2072 bytes to 24. :^)
|
|
Previously, if you forgot to set a key on a SourceGenerator, you would
get this less-than-helpful error message:
> Generate_CSS_MediaFeatureID_cpp:
/home/sam/serenity/Meta/Lagom/../../AK/Optional.h:174: T
AK::Optional<T>::release_value() [with T = AK::String]: Assertion
`m_has_value' failed.
Now, it instead looks like this:
> No key named `name:titlecase` set on SourceGenerator
Generate_CSS_MediaFeatureID_cpp:
/home/sam/serenity/Meta/Lagom/../../AK/SourceGenerator.h:44:
AK::String AK::SourceGenerator::get(AK::StringView) const: Assertion
`false' failed.
|
|
|
|
Now it is possible to use a range-based for loop to iterate a container
in reverse.
```
for (auto item : in_reverse(vector))
    dbgln("{}", item);
```
|
|
|
|
|
|
|
|
This is the IPv6 counterpart to the IPv4Address class and implements
parsing strings into an in6_addr and formatting one as a string. It
supports the address compression scheme as well as IPv4-mapped
addresses.
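Hypothetical usage (method names are assumptions):
```
auto address = IPv6Address::from_string("::ffff:192.168.1.1");
if (address.has_value())
    outln("{}", address->to_string()); // prints the compressed form
```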
|
|
|
|
|
|
If the utilization of a HashTable (size vs capacity) goes below 20%,
we'll now shrink the table down to capacity = (size * 2).
This fixes an issue where tables would grow infinitely when inserting
and removing keys repeatedly. Basically, we would accumulate deleted
buckets with nothing reclaiming them, and eventually decide that we
needed to grow the table (because we grow if used+deleted > limit!)
I found this because HashTable iteration was taking a suspicious amount
of time in Core::EventLoop::get_next_timer_expiration(). Turns out the
timer table kept growing in capacity over time. That made iteration
slower and slower since HashTable iterators visit every bucket.
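The heuristic described above amounts to roughly this check after a removal (names assumed):
```
// If fewer than 20% of buckets are used, shrink to capacity = size * 2.
void shrink_if_needed()
{
    if (m_size * 5 < m_capacity)
        rehash(m_size * 2);
}
```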
|
|
This was only used by remove_all_matching(), where it's no longer used.
|
|
Just walk the table from start to finish, deleting buckets as we go.
This removes the need for remove() to return an iterator, which is
preventing me from implementing hash table auto-shrinking.
|
|
This can be quite noisy and isn't generally useful information.
|
|
|
|
This helper allows Time to be constructed from a tick count and a ticks
per second value.
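The conversion is presumably along these lines (a sketch; the helper name and overflow handling are assumptions):
```
// Split ticks into whole seconds and a fractional remainder,
// then scale the remainder to nanoseconds. Note: the multiplication
// can overflow for very large tick rates; a real implementation
// would need to guard against that.
static Time from_ticks(i64 ticks, i64 ticks_per_second)
{
    i64 seconds = ticks / ticks_per_second;
    i64 remainder = ticks % ticks_per_second;
    i64 nanoseconds = remainder * 1'000'000'000 / ticks_per_second;
    return Time::from_timespec({ static_cast<time_t>(seconds), static_cast<long>(nanoseconds) });
}
```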
|
|
|
|
The log base 2 is implemented using the binary logarithm algorithm by
Clay Turner (see the link in the comment).
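For reference, a self-contained sketch of that algorithm (not the exact AK code); it assumes x > 0:
```
// Binary logarithm by repeated squaring: normalize x into [1, 2)
// while accumulating the integer part, then extract fractional bits.
double log2_approx(double x)
{
    double result = 0;
    while (x >= 2) { x /= 2; result += 1; }
    while (x < 1)  { x *= 2; result -= 1; }

    double bit = 0.5;
    for (int i = 0; i < 52; ++i) {
        x *= x;           // squaring doubles the logarithm
        if (x >= 2) {     // if it crossed 2, this fractional bit is 1
            x /= 2;
            result += bit;
        }
        bit /= 2;
    }
    return result;
}
```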
|