Hi
Here is a software announcement with pointers to Sigsum proofs
https://lists.gnu.org/archive/html/help-libtasn1/2025-02/msg00000.html
The artifact can be reproduced by the GitLab pipeline, or offline by following the same recipe as in .gitlab-ci.yml (the 'R-guix' job) on the git tag.
Ideas for improvements?
Are you able to build a "libtasn1 release monitor" out of this information?
/Simon
All,
I just announced GNU InetUtils with Sigsum signatures:
https://lists.gnu.org/archive/html/bug-inetutils/2025-02/msg00002.html
Below are the Sigsum-related commands I ran to Sigsum-sign the release including verification.
Does anyone have suggestions on how to improve the announcement text and/or commands used to sign things?
I'd like to establish a "best practice" on how to Sigsum-protect software source code releases, for other maintainers to follow.
It uses an 8/15 GoogleTrustFabric witness quorum in the trust policy file; this is not mentioned in the release notes since the information doesn't seem widely published yet.
I don't like storing the rate-limiting domain token private key on disk. I noticed it is recommended to use a separate key for this, but what are the risks if people start using their software signing key instead? Aren't the signatures domain separated? Key management is a hassle, so reducing the number of private keys whose lifecycle people have to manage leads to an overall security improvement. I would also prefer to use my hardware-bound OpenPGP signing subkey (instead of my hardware-bound OpenPGP authentication subkey), but I haven't been able to figure out the SSH agent tooling for this.
/Simon
ssh-add -L > jas.pub
sigsum-submit -k jas.pub inetutils-2.6.tar.gz
sigsum-submit -k jas.pub inetutils-2.6.tar.xz
sigsum-submit -k jas.pub inetutils-v2.6-src.tar.gz
cp ../www-inetutils/sigsum-policy.txt .
sigsum-submit --timeout 30s --diagnostics=debug -p sigsum-policy.txt --token-signing-key ~/self/sigsum-token-secret-josefsson.org/mykey --token-domain josefsson.org inetutils-2.6.tar.gz.req
sigsum-submit --timeout 30s --diagnostics=debug -p sigsum-policy.txt --token-signing-key ~/self/sigsum-token-secret-josefsson.org/mykey --token-domain josefsson.org inetutils-2.6.tar.xz.req
sigsum-submit --timeout 30s --diagnostics=debug -p sigsum-policy.txt --token-signing-key ~/self/sigsum-token-secret-josefsson.org/mykey --token-domain josefsson.org inetutils-v2.6-src.tar.gz.req
sigsum-verify -k jas.pub -p sigsum-policy.txt inetutils-2.6.tar.gz.proof < inetutils-2.6.tar.gz
sigsum-verify -k jas.pub -p sigsum-policy.txt inetutils-2.6.tar.xz.proof < inetutils-2.6.tar.xz
sigsum-verify -k jas.pub -p sigsum-policy.txt inetutils-v2.6-src.tar.gz.proof < inetutils-v2.6-src.tar.gz
sha256sum inetutils-2.6.tar.gz | cut -d' ' -f1 | base16 -d | sha256sum
sha256sum inetutils-2.6.tar.xz | cut -d' ' -f1 | base16 -d | sha256sum
sha256sum inetutils-v2.6-src.tar.gz | cut -d' ' -f1 | base16 -d | sha256sum
sigsum-monitor --interval 5s -p sigsum-policy.txt jas.pub
build-aux/gnupload --to ftp.gnu.org:inetutils inetutils-2.6.tar.gz.proof inetutils-2.6.tar.xz.proof inetutils-v2.6-src.tar.gz.proof
Simon Josefsson via Sigsum-general sigsum-general@lists.sigsum.org writes:
Hi
Here is a software announcement with pointers to Sigsum proofs
https://lists.gnu.org/archive/html/help-libtasn1/2025-02/msg00000.html
The artifact can be reproduced by the GitLab pipeline, or offline by following the same recipe as in .gitlab-ci.yml (the 'R-guix' job) on the git tag.
Ideas for improvements?
Are you able to build a "libtasn1 release monitor" out of this information?
/Simon
Sigsum-general mailing list -- sigsum-general@lists.sigsum.org To unsubscribe send an email to sigsum-general-leave@lists.sigsum.org
On Fri, Feb 21, 2025 at 01:11:11PM +0100, Sigsum General wrote:
All,
I just announced GNU InetUtils with Sigsum signatures:
https://lists.gnu.org/archive/html/bug-inetutils/2025-02/msg00002.html
Below are the Sigsum-related commands I ran to Sigsum-sign the release including verification.
Does anyone have suggestions on how to improve the announcement text
I think the announcement text looks as good as it gets right now. As we discussed before, the main improvement points would be:
1. Better way to communicate/get the trust policy
2. Better primer page(s) rather than the broad getting started
For (1), I would like if we could remove the wget (or EOF blurbs) and just say something like this:
sigsum-verify -k inetutils-sigsum-key.pub -p sigsum-strict \ inetutils-2.6.tar.gz.proof < inetutils-2.6.tar.gz
where sigsum-strict is a named policy maintained by Sigsum that, e.g., gives a high level of assurance at the cost of higher risk of unavailability at the time of submission. I'd expect the name to be mapped to a trust policy by the tool and/or its packaging, and for there to be other named policies than the one(s) maintained by Sigsum folks.
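To make the named-policy idea concrete, here is a small Python sketch of how a tool might map a policy name to a trust policy file. The search directories, file extension, and precedence are entirely hypothetical; nothing here is actual sigsum tooling behavior.

```python
import os

# Hypothetical search path: user config first, then packaged policies.
POLICY_DIRS = [
    os.path.expanduser("~/.config/sigsum/policies"),
    "/usr/share/sigsum/policies",
]

def resolve_policy(name, dirs=None):
    """Map a named policy like 'sigsum-strict' to a trust policy file."""
    for d in dirs if dirs is not None else POLICY_DIRS:
        path = os.path.join(d, name + ".policy")
        if os.path.exists(path):
            return path
    raise FileNotFoundError("no trust policy named %r" % name)
```

With such a scheme, packaging could ship the Sigsum-maintained policies while users drop their own named policies into the config directory.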
It's on our current roadmap to come up with a proposal in the direction of named policies / how to get rid of the EOF blurbs (which you did now with a wget -- it works, but also seems a bit suboptimal in the long run).
https://git.glasklar.is/sigsum/project/documentation/-/blob/main/archive/202...
As soon as there's any more concrete thoughts to comment on I'll let you know. (I already took note of your feedback in the other email thread.) And before we implement anything, a proposal will (as usual) be circulated and eventually decided on a sigsum weekly meet.
and/or commands used to sign things?
Do you sign and submit on separate setups? If yes, fun to see that someone uses that feature! If no, you could sign and submit in a single step by passing both -k and -p when running sigsum-submit.
See below for a few more suggestions.
I'd like to establish a "best practice" on how to Sigsum-protect software source code releases, for other maintainers to follow.
Nice, keep up the good work! Your early usage is very helpful.
It uses an 8/15 GoogleTrustFabric witness quorum in the trust policy file; this is not mentioned in the release notes since the information doesn't seem widely published yet.
For more diversity you could consider adding Glasklar's witness:
https://git.glasklar.is/glasklar/services/witnessing/-/blob/main/witness.gla...
Mullvad also started running a stable witness (not yet documented like Google's witness; it was only mentioned on Matrix a couple of weeks back):
witness witness.mullvad.net 15d6d0141543247b74bab3c1076372d9c894f619c376d64b29aa312cc00f61ad
I'd probably set the threshold to 2/3 for google,glasklar,mullvad; and keep the google threshold at 8/15.
(Note that you probably want to create a new policy file rather than updating the existing one, and then link the new file in newer announcements, since otherwise the old instructions will break.)
I don't like storing the rate-limiting domain token private key on disk. I noticed it is recommended to use a separate key for this, but what are the risks if people start using their software signing key instead? Aren't the signatures domain separated? Key management is a hassle, so reducing the number of private keys whose lifecycle people have to manage leads to an overall security improvement. I would also prefer to use my hardware-bound OpenPGP signing subkey (instead of my hardware-bound OpenPGP authentication subkey), but I haven't been able to figure out the SSH agent tooling for this.
You're right that submit and rate-limit signatures are domain separated with a namespace. So, there's no security risk for your signed checksums if you re-use the same key for submission and rate limiting.
The only caveat is that the rate-limit key is essentially used to compute token=Sign(key, <log pubkey>). Then "$domain, $token" is passed as an HTTP header to the log server when doing a submission.
The risk: if $token is somehow revealed to an unauthorized party, then that party can start spending your rate-limit. Recovery requires rotating the rate-limit key and updating your DNS TXT record. In which case you're back to having separate keys.
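If memory serves, the resulting HTTP header looks roughly like this (the header name and values here are illustrative, not authoritative; check the log.md spec linked below for the exact format):

```
sigsum-token: josefsson.org <128 hex characters of Ed25519 signature>
```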
Further reading:
https://git.glasklar.is/sigsum/project/documentation/-/blob/main/log.md#4--r...
Does this context help w.r.t. whether it is safe to reuse the same key? (If it helps: I would also start by using the same submit and rate-limit key.)
/Simon
ssh-add -L > jas.pub
sigsum-submit -k jas.pub inetutils-2.6.tar.gz
sigsum-submit -k jas.pub inetutils-2.6.tar.xz
sigsum-submit -k jas.pub inetutils-v2.6-src.tar.gz
Could be a single command:
sigsum-submit -k jas.pub inetutils-2.6.tar.gz inetutils-2.6.tar.xz inetutils-v2.6-src.tar.gz
cp ../www-inetutils/sigsum-policy.txt .
sigsum-submit --timeout 30s --diagnostics=debug -p sigsum-policy.txt --token-signing-key ~/self/sigsum-token-secret-josefsson.org/mykey --token-domain josefsson.org inetutils-2.6.tar.gz.req
sigsum-submit --timeout 30s --diagnostics=debug -p sigsum-policy.txt --token-signing-key ~/self/sigsum-token-secret-josefsson.org/mykey --token-domain josefsson.org inetutils-2.6.tar.xz.req
sigsum-submit --timeout 30s --diagnostics=debug -p sigsum-policy.txt --token-signing-key ~/self/sigsum-token-secret-josefsson.org/mykey --token-domain josefsson.org inetutils-v2.6-src.tar.gz.req
Could also be a single command:
sigsum-submit --timeout 30s --diagnostics=debug -p sigsum-policy.txt --token-signing-key ~/self/sigsum-token-secret-josefsson.org/mykey --token-domain josefsson.org inetutils-2.6.tar.gz.req inetutils-2.6.tar.xz.req inetutils-v2.6-src.tar.gz.req
But you'd probably want to increase the timeout to get comparable behavior, since efficient batch submit is not merged yet.
https://git.glasklar.is/sigsum/core/sigsum-go/-/merge_requests/227
The above should be merged ~next week (nisse is not here this week).
sigsum-verify -k jas.pub -p sigsum-policy.txt inetutils-2.6.tar.gz.proof < inetutils-2.6.tar.gz
sigsum-verify -k jas.pub -p sigsum-policy.txt inetutils-2.6.tar.xz.proof < inetutils-2.6.tar.xz
sigsum-verify -k jas.pub -p sigsum-policy.txt inetutils-v2.6-src.tar.gz.proof < inetutils-v2.6-src.tar.gz
Running sigsum-verify is redundant: sigsum-submit does not give you a .proof file unless it verifies for the input and the provided policy.
sha256sum inetutils-2.6.tar.gz | cut -d' ' -f1 | base16 -d | sha256sum
sha256sum inetutils-2.6.tar.xz | cut -d' ' -f1 | base16 -d | sha256sum
sha256sum inetutils-v2.6-src.tar.gz | cut -d' ' -f1 | base16 -d | sha256sum
sigsum-monitor --interval 5s -p sigsum-policy.txt jas.pub
Given the tools we have right now it doesn't get better than this. I'll comment more below on the topic of writing a monitor for your use-case.
build-aux/gnupload --to ftp.gnu.org:inetutils inetutils-2.6.tar.gz.proof inetutils-2.6.tar.xz.proof inetutils-v2.6-src.tar.gz.proof
Simon Josefsson via Sigsum-general sigsum-general@lists.sigsum.org writes:
Hi
Here is a software announcement with pointers to Sigsum proofs
https://lists.gnu.org/archive/html/help-libtasn1/2025-02/msg00000.html
The artifact can be reproduced by the GitLab pipeline, or offline by following the same recipe as in .gitlab-ci.yml (the 'R-guix' job) on the git tag.
Ideas for improvements?
Specify somewhere what your claim is. To me it looks like the claim is: you will only sign checksums that can be reproduced from source at $LOCATION, see $R-GUIX-JOB. This is what I'm sketching a monitor on.
The claim could also include where you publish release announcements, in which case the monitor could try to falsify something like: "official releases are announced at $MAILING-LIST". So if you ever make a release without announcing it there, then the monitor could flag it as well.
(Claims are helpful so monitors know what to falsify / verify.)
Are you able to build a "libtasn1 release monitor" out of this information?
From a quick glance it seems possible, yes.
Sketch:
Monitor tails release tags from the git repository. For each release tag, run the rebuild recipe and arrive at the expected checksums. Now we basically have a list of (release tag, checksums) pairs. In your case it looks like there are three checksums per release tag.
Monitor tails sigsum logs in the trust policy, filtering entries on your submit key, and ensures it sees enough cosignatures for your trust policy. Now we basically have a list of signed checksums that users would have accepted as valid.
Compute the diff between checksums in the log and checksums that were reproduced. If there's ever a checksum in the log that hasn't been reproduced from source, flag it and escalate to the monitor's operator.
This might happen if:
- There is a reproducibility issue
- Submit key signed something that's out of tree (hidden release)
If there is a claim about where official release announcements are, the monitor could also verify that all releases have been published there.
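The core diffing step of this sketch fits in a few lines of Python (a toy illustration with made-up checksums; a real monitor would populate the sets from the logs and the rebuild recipe):

```python
def find_unexpected_checksums(logged, reproduced):
    """Return logged checksums that were never reproduced from source.

    logged: set of hex checksums seen in the sigsum logs (signed by the
            submit key, with enough cosignatures for the trust policy).
    reproduced: set of hex checksums rebuilt from the release tags.
    """
    return logged - reproduced

# Example: one logged checksum has no matching rebuild -- flag it.
logged = {"aa" * 32, "bb" * 32, "cc" * 32}
reproduced = {"aa" * 32, "bb" * 32}
suspicious = find_unexpected_checksums(logged, reproduced)
# 'suspicious' now contains only "cc" * 32: either a reproducibility
# issue or a hidden release signed by the submit key.
```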
(FYI: it's on my wish list to put together a monitor for your use-case by adapting age-release verify. But right now I'm a bit backlogged.)
-Rasmus
/Simon
Rasmus Dahlberg via Sigsum-general sigsum-general@lists.sigsum.org writes:
and/or commands used to sign things?
Do you sign and submit on separate setups? If yes, fun to see that someone uses that feature! If no, you could sign and submit in a single step by passing both -k and -p when running sigsum-submit.
Right, I noticed this, but I prefer to keep the privileges of commands that need private keys as minimal as possible, for easier auditing. If both -k and -p are given, the tool goes from an offline tool to an online tool that talks to remote servers. I was happy to see that you made an effort to separate these two steps, so I can tighten down permissions when running the private key operation.
It uses an 8/15 GoogleTrustFabric witness quorum in the trust policy file; this is not mentioned in the release notes since the information doesn't seem widely published yet.
For more diversity you could consider adding Glasklar's witness:
https://git.glasklar.is/glasklar/services/witnessing/-/blob/main/witness.gla...
Mullvad also started running a stable witness (not yet documented like Google's witness; it was only mentioned on Matrix a couple of weeks back):
witness witness.mullvad.net 15d6d0141543247b74bab3c1076372d9c894f619c376d64b29aa312cc00f61ad
I'd probably set the threshold to 2/3 for google,glasklar,mullvad; and keep the google threshold at 8/15.
Thanks -- I'll try to do this next time.
(Note that you probably want to create a new policy file rather than updating the existing one, and then link the new file in newer announcements, since otherwise the old instructions will break.)
I was thinking I could make changes to the trust policy file as long as it is a strict superset of the earlier information. What is important is that both new and old commands still work, not that the trust policy is necessarily identical -- or am I missing something?
I don't like storing the rate-limiting domain token private key on disk. I noticed it is recommended to use a separate key for this, but what are the risks if people start using their software signing key instead? Aren't the signatures domain separated? Key management is a hassle, so reducing the number of private keys whose lifecycle people have to manage leads to an overall security improvement. I would also prefer to use my hardware-bound OpenPGP signing subkey (instead of my hardware-bound OpenPGP authentication subkey), but I haven't been able to figure out the SSH agent tooling for this.
You're right that submit and rate-limit signatures are domain separated with a namespace. So, there's no security risk for your signed checksums if you re-use the same key for submission and rate limiting.
The only caveat is that the rate-limit key is essentially used to compute token=Sign(key, <log pubkey>). Then "$domain, $token" is passed as an HTTP header to the log server when doing a submission.
The risk: if $token is somehow revealed to an unauthorized party, then that party can start spending your rate-limit. Recovery requires rotating the rate-limit key and updating your DNS TXT record. In which case you're back to having separate keys.
Further reading:
https://git.glasklar.is/sigsum/project/documentation/-/blob/main/log.md#4--r...
Does this context help w.r.t. whether it is safe to reuse the same key? (If it helps: I would also start by using the same submit and rate-limit key.)
Thanks for the link! I find this design surprising: if I understand you correctly, it means the private key is only needed once, to generate the token, not on every sigsum-submit invocation. So effectively the token is a long-lived key-equivalent?! I was expecting some kind of challenge-response, or at least a monotonically increasing signed counter, to avoid replay attacks with earlier submission tokens.
Could also be a single command:
Thanks!
sigsum-verify -k jas.pub -p sigsum-policy.txt inetutils-2.6.tar.gz.proof < inetutils-2.6.tar.gz
sigsum-verify -k jas.pub -p sigsum-policy.txt inetutils-2.6.tar.xz.proof < inetutils-2.6.tar.xz
sigsum-verify -k jas.pub -p sigsum-policy.txt inetutils-v2.6-src.tar.gz.proof < inetutils-v2.6-src.tar.gz
Running sigsum-verify is redundant: sigsum-submit does not give you a .proof file unless it verifies for the input and the provided policy.
Okay. I find it good to actually test what end-users will run, to make sure I get it right.
sha256sum inetutils-2.6.tar.gz | cut -d' ' -f1 | base16 -d | sha256sum
sha256sum inetutils-2.6.tar.xz | cut -d' ' -f1 | base16 -d | sha256sum
sha256sum inetutils-v2.6-src.tar.gz | cut -d' ' -f1 | base16 -d | sha256sum
sigsum-monitor --interval 5s -p sigsum-policy.txt jas.pub
Given the tools we have right now it doesn't get better than this. I'll comment more below on the topic of writing a monitor for your use-case.
Fwiw, a sigsum-keygen sub-command to compute the SHA256(SHA256(file)) value would be useful. Initially I didn't have the 'base16' tool and it isn't that common.
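Until such a sub-command exists, the SHA256(SHA256(file)) value can also be computed with a few lines of Python instead of the base16 pipeline (a sketch, not official tooling):

```python
import hashlib

def sigsum_leaf_checksum(path):
    """Compute SHA256(SHA256(file)) as a hex string, i.e. the value
    that the sha256sum | base16 -d | sha256sum pipeline prints."""
    with open(path, "rb") as f:
        inner = hashlib.sha256(f.read()).digest()
    return hashlib.sha256(inner).hexdigest()
```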
Ideas for improvements?
Specify somewhere what your claim is. To me it looks like the claim is: you will only sign checksums that can be reproduced from source at $LOCATION, see $R-GUIX-JOB. This is what I'm sketching a monitor on.
The claim could also include where you publish release announcements, in which case the monitor could try to falsify something like: "official releases are announced at $MAILING-LIST". So if you ever make a release without announcing it there, then the monitor could flag it as well.
(Claims are helpful so monitors know what to falsify / verify.)
Thanks! The concept of "claims" really helps to make the concepts understandable to my mind.
Unfortunately it is not clear to me what is useful to actually claim here. The R-GUIX-JOB is complex and tied to the GitLab pipeline infrastructure. I don't think it is a very good claim going forward. You may find the B-Guix job simpler -- effectively ./bootstrap + ./configure + make dist, after adjusting for build dependencies.
I think a better claim would be something like: I claim to sign only source tarballs that were prepared reproducibly via 'make dist', run by me using 'guix time-machine ...' in a git checkout of the tagged commit. The 'guix time-machine' command should be a complete command that people can run to reproduce the tarball.
We don't know whether 'guix time-machine' will actually work in 10 years, but I don't know of any other approach that offers a similar promise. Specifying a particular Debian snapshot timestamp and the related commands gets complex quickly. I don't think most normal GNU/Linux operating systems give me the kind of behavior that is needed here.
Are you able to build a "libtasn1 release monitor" out of this information?
From a quick glance it seems possible, yes.
Sketch:
Monitor tails release tags from the git repository. For each release tag, run the rebuild recipe and arrive at the expected checksums. Now we basically have a list of (release tag, checksums) pairs. In your case it looks like there are three checksums per release tag.
Monitor tails sigsum logs in the trust policy, filtering entries on your submit key, and ensures it sees enough cosignatures for your trust policy. Now we basically have a list of signed checksums that users would have accepted as valid.
Compute the diff between checksums in the log and checksums that were reproduced. If there's ever a checksum in the log that hasn't been reproduced from source, flag it and escalate to the monitor's operator.
This might happen if:
- There is a reproducibility issue
- Submit key signed something that's out of tree (hidden release)
If there is a claim about where official release announcements are, the monitor could also verify that all releases have been published there.
Great! The main thing to improve here, I think, would be to establish some commonality for the "run the rebuild recipe" step. How about mandating that a script ".sigsum/reproduce-release-artifacts" should be present to support Sigsum artifact reproduction? Maybe that is useful beyond Sigsum even.
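As a sketch of how a monitor might consume such a well-known script: the Python below runs the (hypothetical) ".sigsum/reproduce-release-artifacts" script in a checked-out release tag and parses sha256sum-style output. Both the script path and its output format are assumptions for illustration.

```python
import subprocess

def run_reproduce_script(repo_dir):
    """Run the hypothetical well-known reproduction script in a
    checked-out release tag and parse its sha256sum-style output,
    returning {filename: hex_checksum}."""
    out = subprocess.run(
        ["./.sigsum/reproduce-release-artifacts"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout
    checksums = {}
    for line in out.splitlines():
        # Expect "checksum  filename" lines, like sha256sum prints.
        checksum, _, filename = line.partition("  ")
        if checksum and filename:
            checksums[filename] = checksum
    return checksums
```

The monitor would then compare this dict against the checksums it saw in the logs for the corresponding release tag.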
/Simon
On Tue, Feb 25, 2025 at 04:41:10PM +0100, Simon Josefsson wrote:
Rasmus Dahlberg via Sigsum-general sigsum-general@lists.sigsum.org writes:
and/or commands used to sign things?
Do you sign and submit on separate setups? If yes, fun to see that someone uses that feature! If no, you could sign and submit in a single step by passing both -k and -p when running sigsum-submit.
Right, I noticed this, but I prefer to keep the privileges of commands that need private keys as minimal as possible, for easier auditing. If both -k and -p are given, the tool goes from an offline tool to an online tool that talks to remote servers.
Ack, makes sense!
I was happy to see that you made an effort to separate these two steps, so I can tighten down permissions when running the private key operation.
Happy to see it used -- thanks nisse for designing it that way!
It uses an 8/15 GoogleTrustFabric witness quorum in the trust policy file; this is not mentioned in the release notes since the information doesn't seem widely published yet.
For more diversity you could consider adding Glasklar's witness:
https://git.glasklar.is/glasklar/services/witnessing/-/blob/main/witness.gla...
Mullvad also started running a stable witness (not yet documented like Google's witness; it was only mentioned on Matrix a couple of weeks back):
witness witness.mullvad.net 15d6d0141543247b74bab3c1076372d9c894f619c376d64b29aa312cc00f61ad
I'd probably set the threshold to 2/3 for google,glasklar,mullvad; and keep the google threshold at 8/15.
Thanks -- I'll try to do this next time.
(Note that you probably want to create a new policy file rather than updating the existing one, and then link the new file in newer announcements. Since otherwise the old instructions will break.)
I was thinking I could make changes to the trust policy file as long as it is a strict superset of the earlier information. What is important is that both new and old commands still work, not that the trust policy is necessarily identical -- or am I missing something?
If the old proofs that you already published also work for the new policy, it should be fine. That may or may not be the case; it depends.
E.g., right now your releases probably have cosignatures that would satisfy a google,glasklar,mullvad 2/3 policy (you can check with sigsum-verify).
But if Glasklar's and Mullvad's witnesses had been put into operation today, you wouldn't have had those cosignatures in your .proof files, and so the update of your out-of-band trust policy would have been breaking.
I don't like storing the rate-limiting domain token private key on disk. I noticed it is recommended to use a separate key for this, but what are the risks if people start using their software signing key instead? Aren't the signatures domain separated? Key management is a hassle, so reducing the number of private keys whose lifecycle people have to manage leads to an overall security improvement. I would also prefer to use my hardware-bound OpenPGP signing subkey (instead of my hardware-bound OpenPGP authentication subkey), but I haven't been able to figure out the SSH agent tooling for this.
You're right that submit and rate-limit signatures are domain separated with a namespace. So, there's no security risk for your signed checksums if you re-use the same key for submission and rate limiting.
The only caveat is that the rate-limit key is essentially used to compute token=Sign(key, <log pubkey>). Then "$domain, $token" is passed as an HTTP header to the log server when doing a submission.
The risk: if $token is somehow revealed to an unauthorized party, then that party can start spending your rate-limit. Recovery requires rotating the rate-limit key and updating your DNS TXT record. In which case you're back to having separate keys.
Further reading:
https://git.glasklar.is/sigsum/project/documentation/-/blob/main/log.md#4--r...
Does this context help w.r.t. whether it is safe to reuse the same key? (If it helps: I would also start by using the same submit and rate-limit key.)
Thanks for the link! I find this design surprising: if I understand you correctly, it means the private key is only needed once, to generate the token, not on every sigsum-submit invocation. So effectively the token is a long-lived key-equivalent?!
Correct, so what's really computed is a per-log-and-per-key HTTP token. And sigsum-submit derives that (identical) token every time it runs, rather than using the token(s) that can be produced with `sigsum-token`.
I was expecting some kind of challenge-response, or at least a monotonically increasing signed counter, to avoid replay attacks with earlier submission tokens.
Here's the proposal when we migrated to this fixed token, which discusses something in the direction of what you're sketching on:
https://git.glasklar.is/sigsum/project/documentation/-/blob/main/proposals/2...
If you think it was the wrong decision it could be re-opened. The best way would be to write a proposal and circulate it for discussion.
Could also be a single command:
Thanks!
sigsum-verify -k jas.pub -p sigsum-policy.txt inetutils-2.6.tar.gz.proof < inetutils-2.6.tar.gz
sigsum-verify -k jas.pub -p sigsum-policy.txt inetutils-2.6.tar.xz.proof < inetutils-2.6.tar.xz
sigsum-verify -k jas.pub -p sigsum-policy.txt inetutils-v2.6-src.tar.gz.proof < inetutils-v2.6-src.tar.gz
Running sigsum-verify is redundant: sigsum-submit does not give you a .proof file unless it verifies for the input and the provided policy.
Okay. I find it good to actually test what end-users will run, to make sure I get it right.
Makes sense!
sha256sum inetutils-2.6.tar.gz | cut -d' ' -f1 | base16 -d | sha256sum
sha256sum inetutils-2.6.tar.xz | cut -d' ' -f1 | base16 -d | sha256sum
sha256sum inetutils-v2.6-src.tar.gz | cut -d' ' -f1 | base16 -d | sha256sum
sigsum-monitor --interval 5s -p sigsum-policy.txt jas.pub
Given the tools we have right now it doesn't get better than this. I'll comment more below on the topic of writing a monitor for your use-case.
Fwiw, a sigsum-keygen sub-command to compute the SHA256(SHA256(file)) value would be useful. Initially I didn't have the 'base16' tool and it isn't that common.
Good idea, filed:
https://git.glasklar.is/sigsum/core/sigsum-go/-/issues/103
Ideas for improvements?
Specify somewhere what your claim is. To me it looks like the claim is: you will only sign checksums that can be reproduced from source at $LOCATION, see $R-GUIX-JOB. This is what I'm sketching a monitor on.
The claim could also include where you publish release announcements, in which case the monitor could try to falsify something like: "official releases are announced at $MAILING-LIST". So if you ever make a release without announcing it there, then the monitor could flag it as well.
(Claims are helpful so monitors know what to falsify / verify.)
Thanks! The concept of "claims" really helps to make the concepts understandable to my mind.
Amazing, yes it's a helpful concept!
Unfortunately it is not clear to me what is useful to actually claim here. The R-GUIX-JOB is complex and tied to the GitLab pipeline infrastructure. I don't think it is a very good claim going forward. You may find the B-Guix job simpler -- effectively ./bootstrap + ./configure + make dist, after adjusting for build dependencies.
I think a better claim would be something like: I claim to sign only source tarballs that were prepared reproducibly via 'make dist', run by me using 'guix time-machine ...' in a git checkout of the tagged commit.
Sounds good (and I also like the well-known script path further below.)
The 'guix time-machine' command should be a complete command that people can run to reproduce the tarball.
We don't know whether 'guix time-machine' will actually work in 10 years, but I don't know of any other approach that offers a similar promise. Specifying a particular Debian snapshot timestamp and the related commands gets complex quickly. I don't think most normal GNU/Linux operating systems give me the kind of behavior that is needed here.
This sounds like an orthogonal improvement point; that doing this has already worked for (quite some time?) is really good. I would suspect that the most interesting issues are detected close to the release anyway?
Are you able to build a "libtasn1 release monitor" out of this information?
From a quick glance it seems possible, yes.
Sketch:
Monitor tails release tags from the git repository. For each release tag, run the rebuild recipe and arrive at the expected checksums. Now we basically have a list of (release tag, checksums) pairs. In your case it looks like there are three checksums per release tag.
Monitor tails sigsum logs in the trust policy, filtering entries on your submit key, and ensures it sees enough cosignatures for your trust policy. Now we basically have a list of signed checksums that users would have accepted as valid.
Compute the diff between checksums in the log and checksums that were reproduced. If there's ever a checksum in the log that hasn't been reproduced from source, flag it and escalate to the monitor's operator.
This might happen if:
- There is a reproducibility issue
- Submit key signed something that's out of tree (hidden release)
If there is a claim about where official release announcements are, the monitor could also verify that all releases have been published there.
Great! The main thing to improve here, I think, would be to establish some commonality for the "run the rebuild recipe" step. How about mandating that a script ".sigsum/reproduce-release-artifacts" should be present to support Sigsum artifact reproduction? Maybe that is useful beyond Sigsum even.
That sounds exactly like what we've discussed for age, i.e., the repository should have a script that knows how to reproduce the tarballs (when checked out at a given version). It makes sense that this script has a well-known path. I agree it doesn't have to be .sigsum/. We just need to know what the well-known path is to configure sigsum's monitor.
-Rasmus
Rasmus Dahlberg via Sigsum-general sigsum-general@lists.sigsum.org writes:
The risk: if $token is somehow revealed to an unauthorized party, then that party can start spending your rate-limit. Recovery requires rotating the rate-limit key and updating your DNS TXT record. In which case you're back to having separate keys.
One could improve this by adding a salt to the DNS record, and have the signature cover salt + the log's pubkey. Then tokens could be rotated by changing only the salt. This was discussed back then; see https://git.glasklar.is/sigsum/project/documentation/-/blob/main/proposals/2...
I'm not entirely sure why we rejected this extension at the time; we probably did it to keep things simple, under the theory that it's fine to use a separate lower-security key for the rate limits.
But we could reconsider, if it makes things easier for Sigsum users.
Regards, /Niels
Niels Möller via Sigsum-general sigsum-general@lists.sigsum.org writes:
Rasmus Dahlberg via Sigsum-general sigsum-general@lists.sigsum.org writes:
The risk: if $token is somehow revealed to an unauthorized party, then that party can start spending your rate-limit. Recovery requires rotating the rate-limit key and updating your DNS TXT record. In which case you're back to having separate keys.
One could improve this by adding a salt to the DNS record, and have the signature cover salt + log's pubkey. Then tokens could be rotated by changing only the salt. This was discussed back then, see https://git.glasklar.is/sigsum/project/documentation/-/blob/main/proposals/2...,
Not entirely sure why we rejected this extension at the time; we probably did it to keep things simple, on the theory that it's fine to use a separate lower-security key for the rate limits.
But we could reconsider, if it makes things easier for Sigsum users.
I think using a salt opens the door to replays or DNS MITM attacks, which aren't that hard to mount given that DNS is not secure.
I do sympathise with low complexity here though. I suppose keeping per-client state on the server is a deal breaker?
Could the rate-limiting signature include the signature/message that is intended to be uploaded into the log?
Then the generated token becomes useless after the initial upload. An attacker can only cause the same upload to happen again, which I suppose then is ignored (or is the same signature added to the log twice?).
Or sign a timestamp, and have the server require it to be fresh and within 5 minutes of the current time?
Both these approaches require no additional state to be kept on the server for validating the rate-limiting token.
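The timestamp variant is easy to sketch statelessly. Again an illustrative Python sketch only: HMAC stands in for the Ed25519 signature a real token would carry, and all names are invented here.

```python
import hashlib
import hmac
import struct

FRESHNESS = 300  # seconds; the "within 5 minutes" window from the thread

def make_timestamp_token(key: bytes, log_pubkey: bytes, ts: int) -> bytes:
    # The token covers the target log and a timestamp; no server state.
    msg = log_pubkey + struct.pack(">Q", ts)
    return hmac.new(key, msg, hashlib.sha256).digest()

def accept(key: bytes, log_pubkey: bytes, ts: int, token: bytes, now: int) -> bool:
    if abs(now - ts) > FRESHNESS:
        return False  # stale (or future-dated) token
    return hmac.compare_digest(make_timestamp_token(key, log_pubkey, ts), token)
```

The server validates with nothing but its clock and the submitter's public key, which is exactly the "no additional state" property argued for above.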
The advantage is that it reduces the amount of private-key life-cycle management required of clients.
Of course, clients should be PERMITTED to use a separate rate-limiting private key if they so prefer. But they should also be PERMITTED to use the same private key used for signing content for the rate-limiting statements. This is already possible now, but the method to generate the rate-limiting token seems more fragile than it could be. So there is room for improvement.
/Simon
Simon Josefsson via Sigsum-general sigsum-general@lists.sigsum.org writes:
I think using a salt opens the door to replays or DNS MITM attacks, which aren't that hard to mount given that DNS is not secure.
I think the only benefit of adding such a salt is to make it possible to rotate tokens without also rotating the underlying signing key, and hence enable use of a longer-lived key. It won't bring replay protection or any other security benefits.
For replay, I think it's relevant that a leak lets an attacker both use up the legitimate user's quota and use more log resources than intended, so both user and log have some incentive not to leak the token.
I do sympathise with low complexity here though.
We were thinking that the threat model for the rate-limit mechanism allowed for a simple mechanism without replay-protection. There are ways to do it differently (some variants outlined in the original proposal). If you feel something different is needed, the way forward is a proposal with motivation and some details on the suggested improvement.
I suppose keeping per-client state on the server is a deal breaker?
One way of keeping state on the log server is to explicitly allow-list the user's submit key in the rate limit config; then no other tokens are required.
But I think we want to avoid any more dynamic per-client state.
Could the rate-limiting signature include the signature/message that is intended to be uploaded into the log?
Then the generated token becomes useless after the initial upload. An attacker can only cause the same upload to happen again, which I suppose then is ignored (or is the same signature added to the log twice?).
The log doesn't store duplicate entries, so I think this should work fine.
Regards, /Niels
On Mon, Mar 03, 2025 at 10:42:40AM +0100, Sigsum General wrote:
Niels Möller via Sigsum-general sigsum-general@lists.sigsum.org writes:
Rasmus Dahlberg via Sigsum-general sigsum-general@lists.sigsum.org writes:
The risk: if $token is somehow revealed to an unauthorized party, then that party can start spending your rate-limit. Recovery requires rotating the rate-limit key and updating your DNS TXT record. In which case you're back to having separate keys.
One could improve this by adding a salt to the DNS record, and have the signature cover salt + log's pubkey. Then tokens could be rotated by changing only the salt. This was discussed back then, see https://git.glasklar.is/sigsum/project/documentation/-/blob/main/proposals/2...,
Not entirely sure why we rejected this extension at the time; we probably did it to keep things simple, on the theory that it's fine to use a separate lower-security key for the rate limits.
But we could reconsider, if it makes things easier for Sigsum users.
I think using a salt opens the door to replays or DNS MITM attacks, which aren't that hard to mount given that DNS is not secure.
I do sympathise with low complexity here though. I suppose keeping per-client state on the server is a deal breaker?
Correct -- that's something we would like to avoid.
Could the rate-limiting signature include the signature/message that is intended to be uploaded into the log?
It could, yes. I think there are two variations:
1. Binds only to the message.
2. Binds to the message and the log.
(1) would allow the log operator to replay the message in a different log. Probably not a big deal in practice, but also seems not great.
(2) prevents such replays, but the client-side implementation grows slightly since a new signature is needed if, e.g., sigsum-submit tries to submit to log A and then fails over to log B.
Implementation-wise I think (2) would be OK, but it's nice if it's easy to tell the user how many signatures will be needed. E.g., so that they know they would be expected to click a hardware token, say, 3 times.
Then the generated token becomes useless after the initial upload. An attacker can only cause the same upload to happen again, which I suppose then is ignored (or is the same signature added to the log twice?).
Correct -- byte-identical entries will not be added multiple times.
Or sign a timestamp, and have the server require it to be fresh and within 5 minutes of the current time?
This would also work. The hard part is tuning the time T, but if we assume that the rate limit key pair is always available on the online machine (which seems reasonable) I think 5m is in the right ballpark.
Complexity wise I think it's in the same ballpark as what we have now.
Both these approaches require no additional state to be kept on the server for validating the rate-limiting token.
The advantage is that it reduces the amount of private-key life-cycle management required of clients.
Of course, clients should be PERMITTED to use a separate rate-limiting private key if they so prefer. But they should also be PERMITTED to use the same private key used for signing content for the rate-limiting statements. This is already possible now, but the method to generate the rate-limiting token seems more fragile than it could be. So there is room for improvement.
I agree there's room for improvement; off the top of my head it sounds like you have two ideas that are worth considering. If I had to pick the approach I prefer, it would be to use timestamps.
Note that it's relatively easy to extend
https://git.glasklar.is/sigsum/project/documentation/-/blob/main/log.md#4--r...
with an additional rate-limit approach. If it strictly supersedes what we had before in every way, then I would expect that the old way can eventually be deprecated. (But it wouldn't be a lot of complexity for the log-go implementation to just support two different HTTP headers.)
Tooling-wise (sigsum-submit), it would also be easy to just start using the new approach under the hood, because passing tokens directly to sigsum-submit is not supported (we only do that in our CI).
As nisse said, a proposal would be great if you'd like to help move anything toward a decision. Otherwise I think we should open an issue in
https://git.glasklar.is/sigsum/project/documentation
that describes the gist of the ideas with a link to this email thread, i.e., so that someone remembers to circle back to it at a later time.
-Rasmus
Rasmus Dahlberg via Sigsum-general sigsum-general@lists.sigsum.org writes:
Could the rate-limiting signature include the signature/message that is intended to be uploaded into the log?
It could, yes. I think there are two variations:
- Binds only to the message
- Binds to the message and the log.
(1) would allow the log operator to replay the message in a different log. Probably not a big deal in practice, but also seems not great.
(2) prevents such replays, but the client-side implementation grows slightly since a new signature is needed if, e.g., sigsum-submit tries to submit to log A and then fails over to log B.
Implementation-wise I think (2) would be OK, but it's nice if it's easy to tell the user how many signatures will be needed. E.g., so that they know they would be expected to click a hardware token, say, 3 times.
Agreed - binding to message and log seems good. Comparing this to other situations, maybe the signature should be a "full transcript" signature of the upload request which presumably includes both message signature and log identity?
Of course, clients should be PERMITTED to use a separate rate-limiting private key if they so prefer. But they should also be PERMITTED to use the same private key used for signing content for the rate-limiting statements. This is already possible now, but the method to generate the rate-limiting token seems more fragile than it could be. So there is room for improvement.
I agree there's room for improvement; off the top of my head it sounds like you have two ideas that are worth considering. If I had to pick the approach I prefer, it would be to use timestamps.
Note that it's relatively easy to extend
https://git.glasklar.is/sigsum/project/documentation/-/blob/main/log.md#4--r...
with an additional rate-limit approach. If it strictly supersedes what we had before in every way, then I would expect that the old way can eventually be deprecated. (But it wouldn't be a lot of complexity for the log-go implementation to just support two different HTTP headers.)
Tooling-wise (sigsum-submit), it would also be easy to just start using the new approach under the hood, because passing tokens directly to sigsum-submit is not supported (we only do that in our CI).
As nisse said, a proposal would be great if you'd like to help move anything toward a decision. Otherwise I think we should open an issue in
https://git.glasklar.is/sigsum/project/documentation
that describes the gist of the ideas with a link to this email thread, i.e., so that someone remembers to circle back to it at a later time.
I'll think about this some more -- my initial reaction is that requiring control of some DNS zone in order to publish the public key for the rate-limiting signature is a serious problem.
I would prefer if people without control of a DNS zone could insert things into the Sigsum log.
I suppose it is not possible to design the server to scale to arbitrary loads, limited only by network bandwidth? I'm thinking of Let's Encrypt levels of requests; they must have similar rate-limiting challenges.
/Simon
On Wed, Mar 05, 2025 at 01:27:17PM +0100, Simon Josefsson wrote:
Rasmus Dahlberg via Sigsum-general sigsum-general@lists.sigsum.org writes:
Could the rate-limiting signature include the signature/message that is intended to be uploaded into the log?
It could, yes. I think there are two variations:
- Binds only to the message
- Binds to the message and the log.
(1) would allow the log operator to replay the message in a different log. Probably not a big deal in practice, but also seems not great.
(2) prevents such replays, but the client-side implementation grows slightly since a new signature is needed if, e.g., sigsum-submit tries to submit to log A and then fails over to log B.
Implementation-wise I think (2) would be OK, but it's nice if it's easy to tell the user how many signatures will be needed. E.g., so that they know they would be expected to click a hardware token, say, 3 times.
Agreed - binding to message and log seems good. Comparing this to other situations, maybe the signature should be a "full transcript" signature of the upload request which presumably includes both message signature and log identity?
I think both would work. With a few minutes of thinking I can't come up with a good argument for why a full transcript would be better here; but if it is easier to implement and/or describe, that might be the decider.
(My thoughts go in the direction of malleability of signatures and whether there's a point in locking down the entire transcript.)
Of course, clients should be PERMITTED to use a separate rate-limiting private key if they so prefer. But they should also be PERMITTED to use the same private key used for signing content for the rate-limiting statements. This is already possible now, but the method to generate the rate-limiting token seems more fragile than it could be. So there is room for improvement.
I agree there's room for improvement; off the top of my head it sounds like you have two ideas that are worth considering. If I had to pick the approach I prefer, it would be to use timestamps.
Note that it's relatively easy to extend
https://git.glasklar.is/sigsum/project/documentation/-/blob/main/log.md#4--r...
with an additional rate-limit approach. If it strictly supersedes what we had before in every way, then I would expect that the old way can eventually be deprecated. (But it wouldn't be a lot of complexity for the log-go implementation to just support two different HTTP headers.)
Tooling-wise (sigsum-submit), it would also be easy to just start using the new approach under the hood, because passing tokens directly to sigsum-submit is not supported (we only do that in our CI).
As nisse said, a proposal would be great if you'd like to help move anything toward a decision. Otherwise I think we should open an issue in
https://git.glasklar.is/sigsum/project/documentation
that describes the gist of the ideas with a link to this email thread, i.e., so that someone remembers to circle back to it at a later time.
I'll think about this some more -- I had an initial reaction that the
Sounds good, thanks for thinking about it!
requirement to control some DNS zone in order to publish the public key for the rate-limiting signature is a serious problem.
I would prefer if people without control of a DNS zone could insert things into the Sigsum log.
I suppose it is not possible to design the server to scale to arbitrary loads, limited only by network bandwidth? I'm thinking of Let's Encrypt levels of requests; they must have similar rate-limiting challenges.
Let's Encrypt rate-limits on DNS names and IP addresses.
https://letsencrypt.org/docs/rate-limits/
Rate limits on CT logs are (upper) bounded by the number of certificates CAs are willing to issue. So, either it is bounded by DNS names and IP addresses, or it is bounded by how much money the attacker spends on certs.
CT logs typically use IP addresses for rate limiting as well.
It would be possible to configure, e.g., seasalp to accept up to a quota of submissions per time unit without any proof of controlling a DNS zone. IP addresses could be used to make it harder for a single party to consume the entire quota. But generally speaking I don't think that achieves much; it's too easy to send requests from many different IPs. And going further down this rabbit hole seems like the wrong direction.
But do let me know if you think it would be useful to allow some submissions to seasalp (global quota that anyone can use). My thinking is that's mainly useful for testing, which is why we have jellyfish.
What I'd be more interested in is rate-limit approaches other than DNS that don't depend on IP addresses. What I'm looking for is something that's "hard" to get many of, "easy" to get one of, and "easy" for a log server to verify that the submitter "has the thing".
When it comes to Let's Encrypt scale, Sigsum is in a better position here since a log only needs to store 128 bytes per submission. Compare this to an entire certificate chain: ~5-6 KiB, ~1-2 KiB assuming dedup.
That said (as you already noted), simply accepting everything from everyone and adding that to append-only storage doesn't work.
It's not at the top of my list to find alternative ways to do rate limiting, but it's in the back of my head to think about the options.
I'd be more than happy to hear about your ideas in this space!
-Rasmus
On 2025-03-05 15:46, Rasmus Dahlberg via Sigsum-general wrote:
What I'd be more interested in is rate-limit approaches other than DNS that don't depend on IP addresses. What I'm looking for is something that's "hard" to get many of, "easy" to get one of, and "easy" for a log server to verify that the submitter "has the thing".
Regarding other rate-limit approaches than DNS, I think what Rasmus is hinting at is that other rate-limit approaches could be added alongside the existing DNS approach.
There could be several rate-limit mechanisms with separate quotas for each of them. DNS would then remain as one possibility, but for those who cannot or do not want to use the DNS way there could be other options. Such other options could be added in the future to make the system more widely useful, and adding new options would not cause any problems for users employing the existing options (i.e. DNS); it would only make new ways possible in addition to what existed before.
Someone who wants to submit to a sigsum log would get to decide which rate-limit approach they want to use, so having control of some DNS zone would not be required as long as one of the other approaches is acceptable for the submitter to use.
Does this make sense?
/ Elias
Elias Rudberg via Sigsum-general sigsum-general@lists.sigsum.org writes:
Regarding other rate-limit approaches than DNS, I think what Rasmus is hinting at is that other rate-limit approaches could be added alongside the existing DNS approach.
There could be several rate-limit mechanisms with separate quotas for each of them. DNS would then remain as one possibility, but for those who cannot or do not want to use the DNS way there could be other options. Such other options could be added in the future to make the system more widely useful, and adding new options would not cause any problems for users employing the existing options (i.e. DNS); it would only make new ways possible in addition to what existed before.
Someone who wants to submit to a sigsum log would get to decide which rate-limit approach they want to use, so having control of some DNS zone would not be required as long as one of the other approaches is acceptable for the submitter to use.
Does this make sense?
+1
So how about a rate-limiting mechanism where the Sigsum log (when it decides it wants to perform rate limiting) returns a URL to the client, which the human operating the client has to visit in a browser to perform some kind of CAPTCHA, OpenID login, OAuth exchange against GitLab/GitHub/Mastodon/whatever, Bitcoin transfer, credit card payment, Sudoku puzzle, watching commercials for 1 minute, etc., such that acceptable user interaction ultimately leads to the Sigsum log accepting the request?
I really wish that I could suggest something better than this.
I think this idea is more reasonable to a new user without a DNS zone than any other alternative that I can come up with.
Implemented right, it doesn't seem that risky for the Sigsum log to support -- it would have to generate a random URL and wait for some kind of event from a separate server approving the request.
As a user, I would be frustrated by a mechanism like this, but I suppose that is an appropriate feeling for a rate-limiting mechanism.
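The server side of the challenge flow described above could keep very little state. A purely hypothetical Python sketch (the URL, class, and method names are all invented here):

```python
import secrets

class ApprovalGate:
    """Pending-challenge bookkeeping: the log hands out a random URL, a
    separate approval service later marks it approved, and the matching
    submission is accepted at most once."""

    def __init__(self, base_url: str = "https://log.example/approve/"):
        self._base = base_url
        self._pending = {}  # token -> approved yet?

    def challenge(self) -> str:
        # Random, unguessable URL for the human to visit in a browser.
        token = secrets.token_urlsafe(16)
        self._pending[token] = False
        return self._base + token

    def approve(self, token: str) -> None:
        # Event from the CAPTCHA/OAuth/etc. server once the human succeeds.
        if token in self._pending:
            self._pending[token] = True

    def redeem(self, token: str) -> bool:
        # Single use: accept the submission only if approved, then forget.
        return self._pending.pop(token, False)
```

In a real deployment the pending map would need expiry and the approval event would arrive over an authenticated channel; the sketch only shows the random-URL-plus-approval-event shape of the idea.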
/Simon
Simon Josefsson via Sigsum-general sigsum-general@lists.sigsum.org writes:
So how about a rate-limiting mechanism where the Sigsum log (when it decides it wants to perform rate limiting) returns a URL to the client which the human operating the client has to visit in a browser and perform some kind of CAPTCHA, OpenID login, OAuth exchange against GitLab/GitHub/Mastodon/whatever,
Some variant of OpenID login could perhaps make sense, with quota per id (maybe somehow using dns/publicsuffixlist-based rate limit for the openid provider, to prevent an abuser from creating millions of openid providers and millions of accounts at each provider). Do you know how that relates to how sigstore handles user identities? I've never looked into the details.
Bitcoin transfer, credit card payment, Sudoku puzzle, watching commercials for 1 minute, etc.,
I bet selling those commercials will be very profitable :-)
Regards, /Niels
Niels Möller via Sigsum-general sigsum-general@lists.sigsum.org writes:
Simon Josefsson via Sigsum-general sigsum-general@lists.sigsum.org writes:
So how about a rate-limiting mechanism where the Sigsum log (when it decides it wants to perform rate limiting) returns a URL to the client which the human operating the client has to visit in a browser and perform some kind of CAPTCHA, OpenID login, OAuth exchange against GitLab/GitHub/Mastodon/whatever,
Some variant of OpenID login could perhaps make sense, with quota per id (maybe somehow using dns/publicsuffixlist-based rate limit for the openid provider, to prevent an abuser from creating millions of openid providers and millions of accounts at each provider). Do you know how that relates to how sigstore handles user identities? I've never looked into the details.
To approve Sigstore operations you get redirected to oauth2.sigstore.dev, which currently offers login via GitHub, Google, and Microsoft. You can reproduce their workflow using my recipe posted here:
https://lists.debian.org/debian-go/2024/12/msg00020.html
If you follow the first URL, you get here:
https://oauth2.sigstore.dev/auth/auth?access_type=online&client_id=sigst...
GitHub doesn't seem to implement any kind of replay protection, so I'm able to complete a GitHub authentication using that stale link and get an HTTP redirect to localhost with (presumably) a fresh token.
Another nice thing with this approach is that you can automate it from within a GitHub Action runner, so that if Sigsum would trust GitHub's OAuth flow for rate-limit bypass, you could make this work automatically from within a GitHub Action. Presumably the same will be true for GitLab runners, whenever Sigstore gets around to supporting those via oauth2.sigstore.dev (maybe this already happened).
Of course, there are many less nice things about this approach (see my rant in the e-mail above). But supporting this for rate-limit bypass seems like a relatively low-risk trade-off. Knowing the GitHub.com username of anyone spamming the Sigsum log is probably sufficient to block them, or to open a conversation with github.com or directly with that individual.
/Simon
On 6 Mar 2025, at 13:04, Simon Josefsson via Sigsum-general sigsum-general@lists.sigsum.org wrote:
As a user, I would be frustrated by a mechanism like this, but I suppose that is an appropriate feeling for a rate-limiting mechanism.
I just remembered the first time I generated SSL/TLS keys for my servers. You had to move the mouse around the screen for a few minutes (yes, minutes) to generate entropy. And then wait for the keys to be created.
/O ;-)