This allows using a local untitled repository.
It is also now possible to build offline by cloning the lbwww,
lbwww-img and untitled repositories locally and passing them to
configure with --with-*-path.
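For example, an offline build could then look like this (the exact
option names below are assumptions derived from the repository names;
./configure --help lists the real ones):

./configure \
    --with-lbwww-path=/path/to/lbwww \
    --with-lbwww-img-path=/path/to/lbwww-img \
    --with-untitled-path=/path/to/untitled
make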
Signed-off-by: Denis 'GNUtoo' Carikli <GNUtoo@cyberdimension.org>
Acked-by: Adrien 'neox' Bourmault <neox@a-lec.org>
The previous code was simple and worked, but it didn't scale: with
one --with-*-path argument we only have one elif clause, with 2
--with-*-path arguments we end up with 4 elif clauses, and with 3
--with-*-path arguments we end up with 13 elif clauses, which is way
too much.
Signed-off-by: Denis 'GNUtoo' Carikli <GNUtoo@cyberdimension.org>
Acked-by: Adrien 'neox' Bourmault <neox@a-lec.org>
This allows using a local lbwww-img repository.
Signed-off-by: Denis 'GNUtoo' Carikli <GNUtoo@cyberdimension.org>
Acked-by: Adrien 'neox' Bourmault <neox@a-lec.org>
Using autotools has several advantages over trying to add such a
feature to the Makefile:
- we don't always need to pass an extra option to make, so once
  configured there is less to type
- we also check for dependencies along the way
- the trade-off between ease of use and code simplicity looks better
  than with a plain Makefile: with a single option we can easily make
  the Makefile use --share and --with-lbwww-path conditionally, as
  sketched below. Doing that with a plain Makefile would probably be
  way more complex, or would require code duplication (to only use
  --share and --with-lbwww-path when an option is passed to the
  Makefile), or would require passing raw build.sh options (which
  would complicate usage).
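As a rough sketch of that last point (the @LBWWW_PATH@ substitution
variable and the exact build.sh invocation are assumptions, not the
actual code), the configure-generated Makefile could do something
like:

# Filled in by configure from --with-lbwww-path; empty when the
# option is not given (hypothetical variable name).
LBWWW_PATH = @LBWWW_PATH@

# Only pass --share and --with-lbwww-path to build.sh when a local
# lbwww checkout was configured.
ifneq ($(LBWWW_PATH),)
BUILD_FLAGS = --share --with-lbwww-path=$(LBWWW_PATH)
endif

all:
	./build.sh $(BUILD_FLAGS)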
Signed-off-by: Denis 'GNUtoo' Carikli <GNUtoo@cyberdimension.org>
Acked-by: Adrien 'neox' Bourmault <neox@a-lec.org>
The ideal situation would be to integrate all the files of this
repository inside lbwww, to be able to easily test local changes.
However that doesn't work, as untitled expects lbwww to be in
untitled/www/lbwww and there is no way to configure that.
Using symlinks doesn't work either, as untitled doesn't trust
symlinks: they open the door to TOCTOU attacks if the distribution
doesn't enable fs.protected_hardlinks and fs.protected_symlinks in
sysctl.
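For reference, those protections can be checked, and enabled as root,
with sysctl:

sysctl fs.protected_symlinks fs.protected_hardlinks
sysctl -w fs.protected_symlinks=1 fs.protected_hardlinks=1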
Patching untitled is also not the best option here as it could
potentially add extra maintenance in the long run.
So we add the configuration inside lbwww-build instead.
Signed-off-by: Denis 'GNUtoo' Carikli <GNUtoo@cyberdimension.org>
Acked-by: Adrien 'neox' Bourmault <neox@a-lec.org>
This change is urgent, as my talk starts in less than one hour, so it
was not sent for review.
Signed-off-by: Denis 'GNUtoo' Carikli <GNUtoo@cyberdimension.org>
For some reason, pandoc on Guix fails at guix commit
5312d798ac36a72d8a977325a7c6ff7647be670a ("gnu:
go-golang-zx2c4-com-wireguard: Update to 0.0.20211016.") and produces
the following error:
build of /gnu/store/<hash>-ghc-basement-0.0.15.drv failed
View build log at '/var/log/guix/drvs/h7/<hash>-ghc-basement-0.0.15.drv.gz'.
Since I use i686 and we need to publish the website now, we work
around the build failure by using a known working commit hash.
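One way to pin such a revision, assuming guix time-machine drives the
build (the commit below is a placeholder for the known working
revision, and the mechanism actually used here may differ):

guix time-machine --commit=<known-working-commit> -- \
    environment --ad-hoc pandoc -- make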
Signed-off-by: Denis 'GNUtoo' Carikli <GNUtoo@cyberdimension.org>
As adding organization support to SourceHut is still a
work in progress, we will use this workaround in the
meantime.
Sourceware also uses a similar workaround.
Signed-off-by: Denis 'GNUtoo' Carikli <GNUtoo@cyberdimension.org>
This makes sure that we host all the source code needed to reproduce
the website, and that we build it from that source.
Signed-off-by: Denis 'GNUtoo' Carikli <GNUtoo@cyberdimension.org>
The token will need to be added in id_oauth2_bearer.
It's not very safe to pass the token to a command, as any user on the
system where it runs can read it from the process arguments, though I
didn't find a command line argument in curl to pass it a file path
instead.
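In practice the upload presumably looks something like this sketch
(reading the token from the id_oauth2_bearer file; the command
substitution still exposes it in curl's argument list):

curl \
    --oauth2-bearer "$(cat id_oauth2_bearer)" \
    -Fcontent=@website.tar.gz \
    https://pages.sr.ht/publish/<user>.srht.site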
Signed-off-by: Denis 'GNUtoo' Carikli <GNUtoo@cyberdimension.org>
SourceHut has a way to set up a static website by uploading a tarball
of the content[1].
Even if we don't end up using SourceHut, generating a tarball of the
website makes deployments easier.
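The tarball itself can be created with something like the following
(the out/ directory name is an assumption about where the generated
website ends up):

tar -C out -czf website.tar.gz .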
Note that we haven't touched the website code yet, so it still uses
the old URLs, the old image locations, etc.
After creating a token (see the documentation[1]), the website can
then be uploaded with the following command:
curl \
    --oauth2-bearer "<token>" \
    -Fcontent=@website.tar.gz \
    https://pages.sr.ht/publish/gnutoo.srht.site
[1] https://srht.site/quickstart
Signed-off-by: Denis 'GNUtoo' Carikli <GNUtoo@cyberdimension.org>