Scripts to publish board-status data to the wiki
================================================

These scripts parse the board-status repository (with the coreboot repository as a companion)
to build a meaningful representation of the test coverage stored in board-status.

The server runs them nightly (CET/CEST), so no user interaction with the wiki page is needed.

How to use
----------
When modifying the scripts, or when publishing the results elsewhere, you might want to run them
yourself. You'll need the board-status and the coreboot repositories checked out side by side,
named "board-status" and "coreboot" respectively (in particular, without a .git suffix).

To emit wiki text, run the following from the board-status repository's top-level directory:

    $ ../util/board_status/to-wiki/status-to-wiki.sh

The output ends up on stdout, so you'll have to store it yourself if you need it later.
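
For example, to keep the generated wiki text in a file for later use (the file name is arbitrary):

    $ ../util/board_status/to-wiki/status-to-wiki.sh > status.wiki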

`push-to-wiki.sh FILENAME TITLE` can be used to push a file into the wiki.
User credentials are looked up in ~/.wikiaccount, which should look like

    USERNAME=user
    USERPASS=password
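
With the credentials file in place, pushing the stored output could look like this (assuming
push-to-wiki.sh lives next to status-to-wiki.sh; the file name and page title below are only
placeholders):

    $ ../util/board_status/to-wiki/push-to-wiki.sh status.wiki "Board status"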

How it works
------------
status-to-wiki collects the reports and sorts them into buckets by report date; the buckets can
have weekly, monthly or quarterly granularity.
It then passes them into the towiki script, which reads the data in more detail and prints it in
the output format.
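
Conceptually, the bucketing step just groups report timestamps by period. A minimal sketch of the
idea (not the actual bucketize.sh; it assumes GNU date and one Unix timestamp per line on stdin):

    # illustrative only: count reports per monthly bucket
    while read -r ts; do
        date -u -d "@$ts" +%Y-%m
    done | sort | uniq -c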

Contributions
-------------
These scripts are rather bare, and you're welcome to extend them to extract more useful data
from both repositories, and to present the data in a nicer way.
A rewrite into another (reasonable) language is fine, too - shell quickly finds its limits
for this kind of text processing.