Age | Commit message | Author |
|
|
|
Some sites don't have a favicon.ico, so we may get a 404 response.
In such cases, ResourceLoader still calls success_callback.
For favicon loading, we don't check response headers or payload size,
so the load would ultimately fail in Gfx::ImageDecoder::try_create().
Avoid that unnecessary work by returning early if the data is empty.
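Roughly the shape of the early return (the surrounding handler is a sketch, not the actual FrameLoader code):

    #include <AK/Span.h>
    #include <LibGfx/ImageDecoder.h>

    // Sketch only; the real change lives in FrameLoader's favicon handling.
    void handle_favicon_data(ReadonlyBytes data)
    {
        if (data.is_empty())
            return; // A 404 (or empty body) has nothing worth decoding.
        auto decoder = Gfx::ImageDecoder::try_create(data);
        if (!decoder)
            return; // Not a recognizable image format.
        // ... hand the decoded bitmap to the page client ...
    }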
|
|
Our current way of signalling a missing port with m_port == 0 was
lacking, as 0 is a valid port number in URLs.
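One plausible shape for the fix, with Optional standing in for the magic value (ExampleUrl is a placeholder, not the AK class):

    #include <AK/Optional.h>
    #include <AK/Types.h>

    struct ExampleUrl {
        // An absent port is a disengaged Optional instead of the ambiguous value 0.
        Optional<u16> port;

        u16 port_or_default(u16 default_port) const { return port.value_or(default_port); }
    };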
|
|
This namespace will be used for all interfaces defined in the URL
specification, like URL and URLSearchParams.
This has the unfortunate side-effect of requiring us to use the fully
qualified AK::URL name whenever we want to refer to the AK class, so
this commit also fixes all such references.
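Illustration of the qualification issue (the exact namespace path is assumed from the description):

    #include <AK/URL.h>

    namespace Web::URL {

    class URLSearchParams { };

    // Inside this namespace, plain "URL" no longer refers to the AK class,
    // so the fully qualified name has to be spelled out:
    using UnderlyingUrl = AK::URL;

    }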
|
|
|
|
This better matches the spec nomenclature. Note that we don't yet
*retrieve* the active document according to spec.
|
|
We already have a base class for frame elements that we call
BrowsingContextContainer. This patch makes BrowsingContext::container()
actually return one of those.
This makes us match the spec names, and also solves a FIXME about having
a shared base for <frame> and <iframe>. (We already had the shared base,
but the pointer we had there wasn't tightly typed enough.)
|
|
This fixes a long-standing bug where the view wouldn't update when
navigating to a new page after looking at the ACID2 test. This happened
because ACID2 actually scrolls the viewport far down. We didn't reset
the scroll position upon navigation, and so the new page thought that
we were still scrolled very far down, and this broke the invalidation
rect calculations.
|
|
|
|
Previously in Browser, when navigating back from a page that has an
icon to a page that does not, the icon would not update and the old
icon would remain displayed, because FrameLoader did not set the
default favicon when the favicon could not be loaded. This patch
ensures that Browser receives a new icon bitmap every time a load
takes place.
|
|
|
|
To transparently support multi-frame images, all decoder plugins have
already been updated to return their only bitmap for frame(0).
This patch completes the remaining cleanup work by removing the
ImageDecoder::bitmap() API and having all clients call frame() instead.
|
|
Previously, ImageDecoder::create() would return a NonnullRefPtr and
could not "fail", although the returned decoder may be "invalid" which
you then had to check anyway.
The new interface looks like this:
static RefPtr<Gfx::ImageDecoder> try_create(ReadonlyBytes);
This simplifies ImageDecoder since it no longer has to worry about its
validity. Client code gets slightly clearer as well.
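Typical call-site shape with the new factory (the helper below is just an example):

    #include <AK/Span.h>
    #include <LibGfx/ImageDecoder.h>

    // A null check replaces the old "is this decoder valid?" dance.
    bool can_decode_image(ReadonlyBytes data)
    {
        auto decoder = Gfx::ImageDecoder::try_create(data);
        return !decoder.is_null();
    }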
|
|
The LexicalPath instance methods dirname(), basename(), title() and
extension() will be changed to return StringView const& in a later
commit. Because of this, users creating temporary LexicalPath objects
just to call one of those getters would receive a StringView const&
pointing to a possibly freed buffer.
To avoid this, static methods for those APIs have been added, which
return a String by value to avoid those problems. All cases where
temporary LexicalPath objects were used as described above have been
changed to use the static APIs.
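The hazard and the fix, roughly (the wrapper function is illustrative; the unsafe line assumes the upcoming StringView const& return type):

    #include <AK/LexicalPath.h>
    #include <AK/String.h>

    String extension_of(String const& path)
    {
        // Unsafe once the instance getter returns StringView const&:
        //     auto view = LexicalPath(path).extension(); // would point into a dead temporary
        // Safe: the static helper returns a String by value.
        return LexicalPath::extension(path);
    }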
|
|
Instead of doing a VERIFY(is<T>(x)) and *then* casting it to T, we can
just do the cast right away with verify_cast<T>. :^)
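Before/after shape (Element/Node are just example types):

    #include <LibWeb/DOM/Element.h>
    #include <LibWeb/DOM/Node.h>

    Web::DOM::Element& as_element(Web::DOM::Node& node)
    {
        // Before: VERIFY(is<Web::DOM::Element>(node)); followed by a static_cast.
        // After: one call that verifies and casts.
        return verify_cast<Web::DOM::Element>(node);
    }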
|
|
This makes it much clearer what this cast actually does: it will
VERIFY that the thing we're casting is a T (using is<T>()).
|
|
This makes redirects work when the HTTP server responds with just
headers and no data.
|
|
This replaces a call to URL::set_path() with URL::set_paths(), as
set_path() will be deprecated and removed.
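Assuming set_paths() takes the individual path segments, the call-site change looks something like this (the path shown is only an example):

    #include <AK/URL.h>
    #include <AK/Vector.h>

    void point_at_example_page(URL& url)
    {
        // Before: url.set_path("/res/html/error.html");
        url.set_paths({ "res", "html", "error.html" });
    }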
|
|
Our "frame" concept very closely matches what the web specs call a
"browsing context", so let's rename it to that. :^)
The "main frame" becomes the "top-level browsing context",
and "sub-frames" are now "nested browsing contexts".
|
|
Surprisingly, this is not used by the browser's page reload
functionality but only by JS's location.reload() - that's probably why
this hasn't been noticed yet. Make sure we notify the page client
about the load start in that case as well. :^)
|
|
Otherwise we would sometimes (depending on the load time, I believe)
end up setting the document and eventually calling title change
callbacks before communicating that the page started loading.
|
|
This patch implements the HTML specification's "encoding sniffing
algorithm", which is used when no encoding can be obtained from the
Content-Type header (either because it doesn't contain a charset=...
value, or because the file was not opened via HTTP, as with local
files).
It also modifies the creator of the HTMLDocumentParser to use the new
HTMLDocumentParser::create_with_uncertain_encoding static method, which
runs the encoding sniffing algorithm before instantiating the parser.
This now allows us to load local HTML pages (or remote pages without a
charset specified in the 'Content-Type' header) with a non-UTF-8
encoding such as 'windows-1252'. This would previously crash the
browser. :^)
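For orientation, the sniffing order the HTML spec prescribes looks roughly like this; the helper below illustrates only the first step (BOM sniffing) and is not the LibWeb implementation:

    #include <AK/Span.h>
    #include <AK/String.h>

    // Illustration of the spec's first sniffing step; not the actual LibWeb code.
    String sniff_encoding_from_bom_or_default(ReadonlyBytes input)
    {
        // 1. A byte order mark wins outright.
        if (input.size() >= 3 && input[0] == 0xEF && input[1] == 0xBB && input[2] == 0xBF)
            return "UTF-8";
        if (input.size() >= 2 && input[0] == 0xFE && input[1] == 0xFF)
            return "UTF-16BE";
        if (input.size() >= 2 && input[0] == 0xFF && input[1] == 0xFE)
            return "UTF-16LE";
        // 2. Otherwise the spec prescans the content for <meta charset=...> (omitted here).
        // 3. Finally, fall back to a default encoding.
        return "UTF-8";
    }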
|
|
This modifies the Document class to use Optional<String> for the
encoding. If the encoding is unknown, the Optional will not have a
value. It also implements the has_encoding() and encoding_or_default()
instance methods, the latter of which will return "UTF-8" as a fallback
if no encoding is present.
The usage of Optional<String> instead of the null string is part of an
effort to explicitly indicate when a string may not have a value.
This also modifies the former callers of encoding() to use
encoding_or_default(). Furthermore, the encoding will now only be set if
it is actually known, rather than just guessed by earlier code.
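A sketch of the two accessors as described; ExampleDocument and the member name are placeholders:

    #include <AK/Optional.h>
    #include <AK/String.h>

    class ExampleDocument {
    public:
        bool has_encoding() const { return m_encoding.has_value(); }
        // Falls back to "UTF-8" when the encoding is unknown.
        String encoding_or_default() const { return m_encoding.value_or("UTF-8"); }

    private:
        Optional<String> m_encoding; // Disengaged when the encoding is unknown.
    };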
|
|
This modifies the Resource class to use Optional<String> for the
encoding. If the encoding is unknown, the Optional will not have a
value (instead of using the null state of the String class). It also
implements a has_encoding() instance method and modifies the callers
of Resource::encoding() to use the new API.
|
|
This prevents the browser from crashing when trying to load an
infinite redirect loop. The chosen limit is based on the fetch
specification:
"If request's redirect count is twenty, return a network error."
|
|
A Frame now knows about its nesting level.
The FrameLoader checks whether the nesting level of the current frame
allows it to be displayed, and if not, doesn't even load the requested
resource.
The nesting check is done on a per-URL basis, so there can be many
nested Frames as long as they have different URLs.
If there are Frames with the same URL nested inside each other,
however, we only allow this to happen 3 times.
This mitigates infinitely recursing <iframe>s in an HTML document
crashing the browser with an OOM.
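A self-contained way to picture that per-URL check; the FrameNode type and its members are stand-ins for the real frame tree, not LibWeb code:

    #include <AK/URL.h>

    // Stand-in for the real frame tree; illustrative only.
    struct FrameNode {
        FrameNode* parent { nullptr };
        URL document_url;
    };

    // Allow at most 3 ancestors that already display the same URL.
    bool is_frame_nesting_allowed(FrameNode const& frame, URL const& url)
    {
        static constexpr int maximum_same_url_nesting = 3;
        int same_url_ancestors = 0;
        for (auto const* ancestor = frame.parent; ancestor; ancestor = ancestor->parent) {
            if (ancestor->document_url == url)
                ++same_url_ancestors;
        }
        return same_url_ancestors < maximum_same_url_nesting;
    }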
|
|
|
|
|
|
This makes it more symmetrical with adopt_own() (which is used to
create a NonnullOwnPtr from the result of a naked new.)
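For comparison (Foo is a placeholder RefCounted type):

    #include <AK/NonnullRefPtr.h>
    #include <AK/RefCounted.h>

    struct Foo : public RefCounted<Foo> { };

    NonnullRefPtr<Foo> make_foo()
    {
        // Previously spelled adopt(); now mirrors adopt_own(*new T) for NonnullOwnPtr.
        return adopt_ref(*new Foo);
    }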
|
|
SPDX License Identifiers are a more compact / standardized
way of representing file license information.
See: https://spdx.dev/resources/use/#identifiers
This was done with the `ambr` search and replace tool.
ambr --no-parent-ignore --key-from-file --rep-from-file key.txt rep.txt *
|
|
Small regression introduced by 3857148: we still have to escape HTML
entities.
|
|
Instead of storing a format string in a file, let's be reasonable
and use SourceGenerator's template functionality. :^)
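SourceGenerator substitutes @key@ placeholders into a template, roughly like this (the template text here is made up, and per the note above the message would still need HTML-escaping):

    #include <AK/SourceGenerator.h>
    #include <AK/String.h>
    #include <AK/StringBuilder.h>

    String generate_error_page(String const& message)
    {
        StringBuilder builder;
        SourceGenerator generator { builder };
        generator.set("message", message); // real code must escape HTML entities first
        generator.append(R"~~~(<!DOCTYPE html>
    <html><body><h1>@message@</h1></body></html>
    )~~~");
        return builder.to_string();
    }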
|
|
This error page template is slightly hilarious and should probably
be replaced with AK::SourceGenerator or some such, but for now let's
just get rid of the call to String::format().
|
|
|
|
To implement the HttpOnly attribute, the CookieJar needs to know where a
request originated from. Namely, it needs to distinguish between HTTP /
non-HTTP (i.e. JavaScript) requests. When the HttpOnly attribute is set,
requests from JavaScript are to be blocked.
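One way to model that distinction (the enum and helper below are an assumed shape, not the actual CookieJar API):

    // Assumed shape; names are illustrative.
    enum class CookieSource {
        NonHttp, // e.g. document.cookie from JavaScript
        Http,    // Set-Cookie response header
    };

    bool may_write_cookie(CookieSource source, bool cookie_is_http_only)
    {
        // HttpOnly cookies must not be created or overwritten from non-HTTP APIs.
        return !(cookie_is_http_only && source == CookieSource::NonHttp);
    }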
|
|
Note: HTTP response headers are currently stored in a hash map, so the
Set-Cookie entry will only appear once here.
|
|
This is needed for XMLHttpRequest, and will certainly be useful for
other things, too.
|
|
Extremely small change: all I did was check for the application/json
MIME type and parse it as text.
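The check being described, roughly (the helper is illustrative; the surrounding FrameLoader dispatch is omitted):

    #include <AK/StringView.h>

    bool is_displayable_as_text(StringView mime_type)
    {
        // JSON now goes down the same path as plain text documents.
        return mime_type == "text/plain" || mime_type == "application/json";
    }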
|
|
(...and ASSERT_NOT_REACHED => VERIFY_NOT_REACHED)
Since all of these checks are done in release builds as well,
let's rename them to VERIFY to prevent confusion, as everyone is
used to assertions being compiled out in release.
We can introduce a new ASSERT macro that is specifically for debug
checks, but I'm doing this wholesale conversion first since we've
accumulated thousands of these already, and it's not immediately
obvious which ones are suitable for ASSERT.
|
|
|
|
These changes are arbitrarily divided into multiple commits to make it
easier to find potentially introduced bugs with git bisect.
|
|
|