|
Much like the Sonar Cloud workflow, this workflow runs PVS-Studio
static analysis and uploads the SARIF results to GitHub. This is the
most "convenient" way to publish results, but unfortunately users need
write access to the repository to view the static analysis results
rendered in GitHub.
As a workaround, folks can just look at the logs where issues are
printed during analysis; this works reasonably well.
In the future it might make sense to also render the results as HTML
and publish them using GitHub Pages, much like we do with man pages.
I believe the PVS-Studio plog-converter tool supports that as well.
https://pvs-studio.com/en/docs/manual/0036/
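As a rough sketch of the publishing step this describes (the action
version and report file name are assumptions, not taken from the
actual workflow):

    # Upload the PVS-Studio SARIF report so results appear in the
    # GitHub "Code scanning" UI
    - name: Upload SARIF results
      uses: github/codeql-action/upload-sarif@v2
      with:
        sarif_file: pvs-report.sarif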
|
|
Having the version included in each analysis allows you to do more
filtering in the UI where results are viewed.
|
|
Sonar Cloud raises a warning if this is not explicitly enabled or
disabled, so let's mark it disabled to avoid that.
|
|
The version is 4.6.2.2472; I had a typo when I committed the previous
change to update the version.
|
|
I didn't realize there was a new release, as it wasn't posted in the
Sonar Cloud documentation, but it was tagged on the GitHub project page.
See: https://github.com/SonarSource/sonar-scanner-cli/releases
|
|
We need to exclude this file from analysis for now, as there is a bug
in the sonar-runner tool where it crashes when trying to understand the
use of AK::Variant in LibWasm/Parser/Parser.cpp.
See #10122 for details and a link to the bug report filed with Sonar
Cloud.
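For illustration, the exclusion can be expressed with the standard
sonar.exclusions property; the exact path and the way the project wires
it up are assumptions:

    - name: Run sonar-scanner
      run: |
        sonar-scanner \
          -Dsonar.exclusions=Userland/Libraries/LibWasm/Parser/Parser.cpp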
|
|
I was experimenting with using caching while doing the initial
prototype of the Sonar Cloud workflow. However, the cache size for the
static analysis data ended up being large enough that it would put us
over the GitHub Actions limit. Given that we currently only run this
pipeline once a day, it seems reasonable to just remove caching.
If in the future we decide to run the pipeline on every PR, caching
would become crucial, as the current uncached analysis time is around
1 hour and 50 minutes. If we did this we would need to move the
pipeline to Azure DevOps, where we have effectively infinite cache
available.
|
|
Without the `$`, GitHub Actions doesn't do the environment variable
replacement, and CMake thinks we want a source directory of `./}}`.
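For illustration, the difference is a single character; the variable
name below is only an example:

    # Broken: without the `$`, GitHub Actions performs no expansion and
    # the literal text reaches the shell
    - run: cmake -B Build {{ env.SERENITY_SOURCE_DIR }}
    # Fixed: `${{ ... }}` is expanded before the command runs
    - run: cmake -B Build ${{ env.SERENITY_SOURCE_DIR }}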
|
|
This requires exposing the `configure` step on the `serenity`
ExternalProject in the SuperBuild CMakeLists so that we can continue to
only build the generated sources and not the entire OS.
|
|
Replace the old logic where we would start with a host build, and swap
all the CMake compiler and target variables underneath it to trick
CMake into building for Serenity after we configured and built the Lagom
code generators.
The SuperBuild creates two ExternalProjects, one for Lagom and one for
Serenity. The Serenity project depends on the install stage for the
Lagom build. The SuperBuild also generates a CMakeToolchain file for the
Serenity build to use that replaces the old toolchain file that was only
used for Ports.
To ensure that code generators are rebuilt when core libraries such as
AK and LibCore are modified, developers will need to direct their manual
`ninja` invocations to the SuperBuild's binary directory instead of the
Serenity binary directory.
This commit includes warning coalescing and option style cleanup for the
affected CMakeLists in the Kernel, top level, and runtime support
libraries. A large part of the cleanup is replacing USE_CLANG_TOOLCHAIN
with the proper CMAKE_CXX_COMPILER_ID variable, which will no longer be
confused by a host clang compiler.
|
|
This statement ensures that the `Sonar Cloud Static Analysis` workflow
runs only for the official repository and not for forks.
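A minimal sketch of the condition, assuming the upstream repository is
SerenityOS/serenity (the job name and runner are illustrative):

    jobs:
      sonarcloud:
        if: github.repository == 'SerenityOS/serenity'
        runs-on: ubuntu-latest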
|
|
|
|
This commit snuck into the tree via a PR for some Sonar Cloud fixes.
Somehow I cross-contaminated my branches.
Unfortunately the Coverity workflow isn't ready for prime time yet,
so let's remove it until we have all the issues ironed out.
|
|
The matrix variables were left over from copy/pasting the contents
of the normal CI workflow. We also should always skip saving the
cache, as the normal CI pipelines will refresh the toolchain and
we should just be reading the cache.
|
|
|
|
Sonar Cloud detects PRs and fails the job at the very end, so there
isn't much use in including this testing feature.
|
|
The cache is saving, but by the time we run again, it looks like the
cache has been purged by other jobs consuming the cache.
This causes the cache restore to fail. Given that we run nightly and
there is no time bound, we can just run without the cache.
|
|
All of our python scripts use python3
|
|
Test files were getting analyzed twice, which the tool does not like
and causes it to exit with a fatal error.
Also make the workflow run in PRs any time the file is edited, so that
we can get immediate feedback without waiting until the next day.
|
|
I fat-fingered this at the last minute when converting from the
trigger I was using for development/testing to the cron schedule for
use in the main repo.
|
|
This action executes once a day: the Sonar Cloud runner analyzes the
code and then uploads the results.
The current code base takes almost 3 hours of computer time to analyze.
The runner supports multi-threaded execution and caching of results, so
we save that cache as part of the GitHub Actions workflow to allow the
analysis to skip unchanged files.
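A rough sketch of the schedule and cache wiring (the exact time, cache
path, and key are assumptions):

    on:
      schedule:
        - cron: '0 0 * * *'   # once a day

    # ... later, inside the job:
    - name: Cache Sonar analysis data
      uses: actions/cache@v2
      with:
        path: ~/.sonar/cache
        key: sonar-${{ runner.os }}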
|
|
Moving this helper CMake file to the centralized Meta/CMake folder helps
to get a better grasp on what extra files are required for the build,
and what files are generated.
While we're at it, don't use add_compile_definitions for
ENABLE_UNICODE_DATA, which only needs to be seen by LibUnicode sources.
|
|
The CLDR database comes in a .zip file.
|
|
|
|
Now that we only have 3 ccached builds on CI we can comfortably increase
the ccache size limit to get some free speed-up in CI.
|
|
We were over-hashing for the GNU build on GitHub Actions by including
the LLVM patch as well. The GNU Toolchain doesn't care about our LLVM
patches.
For Azure, fix the inversion of the condition for which jobs check which
Build*.sh script, and add the Toolchain patch files to the cache
hash calculation.
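For illustration, the patch files can be folded into the cache key with
hashFiles(); the paths and key prefix here are assumptions:

    key: toolchain-${{ hashFiles('Toolchain/Patches/*.patch', 'Toolchain/BuildIt.sh') }}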
|
|
Fuzzing was the only Lagom build left.
|
|
|
|
|
|
This prevents command injection through backticks in commit messages.
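The general pattern (the event path below is an assumption about where
the message comes from): interpolating the message directly into a run
script lets backticks execute, while passing it through an environment
variable keeps it as plain data:

    # Unsafe: the message is spliced into the script text, so `...` runs
    - run: echo "${{ github.event.head_commit.message }}"
    # Safer: the shell expands $COMMIT_MESSAGE as data, never as code
    - run: echo "$COMMIT_MESSAGE"
      env:
        COMMIT_MESSAGE: ${{ github.event.head_commit.message }}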
|
|
|
|
Clang builds will no longer be a part of the automated CI for every
Push/Pull Request, and will instead be run at 00:00 UTC every day, with
the results posted to the Discord #clang-toolchain channel.
|
|
|
|
We need test-js for the parser tests for test262, but we don't need to
rebuild all of Lagom twice. This was missed when we did the initial
change to shared libraries. Before #9017, the Lagom build for test-js
was what built libLagom.a for the libjs-test262-runner to link against.
Now that we are building libjs.so and its dependencies in the runner's
build directory, we should build test-js there as well.
Requires linusg/libjs-test262#32 in order to properly find the built
test-js.
|
|
We already cache these files to prevent re-downloading them in the other
CI workflows, so this just brings the test262 runner up to speed with
the rest of them.
|
|
Otherwise this generates an "unexpected inputs" warning.
|
|
Because the Serenity tooling autodetects the presence of gcc 10,
it is no longer necessary to change the system gcc version to 10.
|
|
After linusg/libjs-test262/pull/30 goes into libjs-test262, we'll need
to pass SERENITY_SOURCE_DIR manually to the job to prevent it from
trying to do its own shallow clone. Also, remove the now defunct static
library build from the test262 workflow.
|
|
This was missing 2 of the recently added checks. Also added a reminder
in the CI linter to update the Meta (commit hook) version.
|
|
This should prevent 5 unnecessary downloads for each CI run.
|
|
|
|
|
|
|
|
The WASM spec tests caused a stack overflow when generated with
wat2wasm version 1.0.23, which ships with Homebrew. To keep feature
parity, manually download the same version from GitHub packages for
Ubuntu. Document the dependencies of the WASM spec tests option as
well.
|
|
The commit linter will now run for draft pull requests as well, but
BuggieBot will not post a message on failing draft PRs.
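A sketch of how that split can look; the step name and script are
placeholders, not the actual workflow contents:

    # The lint job itself has no draft gate, so drafts are linted too.
    - name: Post BuggieBot comment on failure
      if: failure() && github.event.pull_request.draft == false
      run: ./post-review-comment.sh   # hypothetical comment step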
|
|
Unlike the GitHub-hosted runners, GitHub's software for self-hosted
runners does not do this automatically.
|
|
This should speed it up quite a bit and give us more consistent
performance. (So this workflow could eventually be used for perf
regression testing as well)
|
|
It turns out that ccache caches are highly compressible (total size is
reduced by about 65%), so we should be able to increase the cache limit
for some free speedups. :^)
|
|
|
|
These are created when a pull request is force-pushed to, which
results in action runners being wasted on the obsolete CI run.
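One common way to express this is a concurrency group that cancels
in-progress runs for the same ref; this is a sketch, not necessarily
the mechanism this change used:

    concurrency:
      group: ${{ github.workflow }}-${{ github.ref }}
      cancel-in-progress: true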
|