Dear all,

I've started looking into building a more complete and stable Sigsum verifier to run in the browser extension I'm prototyping. The model I sent previously has changed a bit: we are removing Sigstore, so that website administrators can specify their own ed25519 signing keys and bring their own logs. The "bring your own log" model has been suggested in the WAICT proposal[1], and I think it improves decentralization.
I think the WAICT proposal refers to a type of log, or in general to log software, that does not exist yet, and I think Sigsum fits the job well. I would thus like website administrators to specify a Sigsum policy, but since that will be shipped in HTTP headers, I'd need something more serialization-friendly, such as JSON.
While looking into the policy format, I was wondering: why is the quorum global and not per log?
In a JSON-like format, I was imagining something like this, also to reduce key/text duplication to a minimum:
{
  "witnesses": {
    "X1": "base64-key-X1",
    "X2": "base64-key-X2",
    "X3": "base64-key-X3",
    "Y1": "base64-key-Y1",
    "Y2": "base64-key-Y2",
    "Y3": "base64-key-Y3",
    "Z1": "base64-key-Z1"
  },
  "groups": {
    "X-witnesses": { "2": ["X1", "X2", "X3"] },
    "Y-witnesses": { "any": ["Y1", "Y2", "Y3"] },
    "Z-witnesses": { "all": ["Z1"] },
    "XY-majority": { "all": ["X-witnesses", "Y-witnesses"] },
    "Trusted-Bloc": { "any": ["XY-majority", "Z-witnesses"] }
  },
  "logs": [
    { "base_url": "https://log-a.example.org", "public_key": "base64-logkey-A", "quorum": "X-witnesses" },
    { "base_url": "https://log-b.example.org", "public_key": "base64-logkey-B", "quorum": "Trusted-Bloc" }
  ]
}
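To make the intended semantics concrete, here is a purely illustrative Python sketch of how a verifier could evaluate a group from this draft format. The `satisfied` helper and the shortened keys are hypothetical, not part of any existing Sigsum tool:

```python
# Sketch: recursively check whether a named witness or group in the
# draft policy format is satisfied by the set of cosigning witnesses.
# Keys are shortened placeholders; the format is the draft above.
import json

POLICY = json.loads("""
{
  "witnesses": {"X1": "k1", "X2": "k2", "X3": "k3"},
  "groups": {
    "X-witnesses": {"2": ["X1", "X2", "X3"]},
    "Any-X": {"any": ["X1", "X2"]}
  }
}
""")

def satisfied(name, cosigners, policy):
    """True if `name` (a witness or group) is satisfied by the set of
    witness names that have cosigned."""
    if name in policy["witnesses"]:
        return name in cosigners
    # Each group is a single {threshold: [members]} pair.
    (threshold, members), = policy["groups"][name].items()
    hits = sum(satisfied(m, cosigners, policy) for m in members)
    if threshold == "any":
        return hits >= 1
    if threshold == "all":
        return hits == len(members)
    return hits >= int(threshold)

print(satisfied("X-witnesses", {"X1", "X3"}, POLICY))  # → True (2-of-3 met)
```

Groups can reference other groups (as with "XY-majority" above), which the recursion handles naturally.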
It's just exploratory, but I'm a bit confused by the multi-log model. For instance, would you expect signers to submit to two logs and then provide back two proof bundles, or would you expect a log, given a policy with multiple logs, to propagate entries to the second log?
In this format, I'd support a per-log quorum, and would thus probably expect multiple proofs.
Cheers,
Giulio
[1] https://github.com/rozbb/draft-waict-transparency/blob/main/draft-waict-tran...
Giulio via Sigsum-general sigsum-general@lists.sigsum.org writes:
> It's just exploratory, but I'm a bit confused by the multi-log model. For instance, would you expect signers to submit to two logs and then provide back two proof bundles, or would you expect a log, given a policy with multiple logs, to propagate entries to the second log?
The intention of having multiple logs in the policy is that they are all acceptable: we expect each logged item to be in one of the listed logs, but we don't care which one. (So it is crucial that all listed logs are subject to appropriate monitoring.)
Having multiple logs is not to increase security, but to increase reliability. You want to be able to make a new logged update and push it out to users, even if one log server is temporarily down.
At the other end, consider the sigsum-submit tool: when you give it a policy with multiple logs, it will just randomly select one of them for submission, and if you provide many items to submit, they will be distributed between the listed logs.
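The selection behaviour described above could be sketched like this (sigsum-submit itself is written in Go; this Python snippet, with made-up log URLs, only illustrates the idea):

```python
# Sketch: pick a log at random per submitted item, so many submissions
# end up distributed across the listed logs. URLs are placeholders.
import random

LOGS = ["https://log-a.example.org", "https://log-b.example.org"]

def pick_log(logs):
    """Choose one acceptable log for a single submission."""
    return random.choice(logs)

# Submitting many items spreads them over the listed logs:
counts = {url: 0 for url in LOGS}
for _ in range(1000):
    counts[pick_log(LOGS)] += 1
```

Since any listed log is acceptable, the verifier never needs to know which one a given item ended up in.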
About the JSON serialization: it looks reasonable to me at a first look (except the per-log quorum). If it is intended to be machine generated, maybe you can omit the "all"/"any" keywords and require numerical thresholds. If it is intended for verifiers only (not monitors or submitters), you could also omit the log URLs; they aren't needed.
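For example, normalizing the keywords to numbers would be a trivial transformation on the draft format (a hypothetical `normalize` helper, not existing tooling):

```python
# Sketch: rewrite "any"/"all" thresholds in the draft group format to
# plain numeric thresholds, so generated policies only contain numbers.
def normalize(groups):
    out = {}
    for name, spec in groups.items():
        # Each group is a single {threshold: [members]} pair.
        (threshold, members), = spec.items()
        if threshold == "any":
            threshold = "1"
        elif threshold == "all":
            threshold = str(len(members))
        out[name] = {threshold: members}
    return out

print(normalize({"Y-witnesses": {"any": ["Y1", "Y2", "Y3"]}}))
# → {'Y-witnesses': {'1': ['Y1', 'Y2', 'Y3']}}
```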
Regards, /Niels
On 15/05/2025 16:19, Niels Möller via Sigsum-general wrote:
> The intention of having multiple logs in the policy is that they are all acceptable: we expect each logged item to be in one of the listed logs, but we don't care which one. (So it is crucial that all listed logs are subject to appropriate monitoring.)
> Having multiple logs is not to increase security, but to increase reliability. You want to be able to make a new logged update and push it out to users, even if one log server is temporarily down.
> At the other end, consider the sigsum-submit tool: when you give it a policy with multiple logs, it will just randomly select one of them for submission, and if you provide many items to submit, they will be distributed between the listed logs.
> About the JSON serialization: it looks reasonable to me at a first look (except the per-log quorum). If it is intended to be machine generated, maybe you can omit the "all"/"any" keywords and require numerical thresholds. If it is intended for verifiers only (not monitors or submitters), you could also omit the log URLs; they aren't needed.
Thank you, all of this makes perfect sense. I'll keep the global quorum and make it so the JSON is fully compatible with the original format. More questions will come as I write the code :)
I'm curious about the possibility of removing the log URL. On one hand, it would be optimal because it saves some bytes in the HTTP headers that get sent along with every HTTP response. On the other, there's the usability downside that it reduces discoverability of the log itself, meaning that it has to be advertised somewhere else if somebody wants to run a monitor?
Cheers,
Giulio
Giulio via Sigsum-general sigsum-general@lists.sigsum.org writes:
> I'm curious about the possibility of removing the log URL. On one hand, it would be optimal because it saves some bytes in the HTTP headers that get sent along with every HTTP response. On the other, there's the usability downside that it reduces discoverability of the log itself, meaning that it has to be advertised somewhere else if somebody wants to run a monitor?
Depending on the details of your use case, a third-party monitor likely needs additional information, to enable it to verify the implied claims.
When sending the policy (and submitter pubkeys?) in the HTTP response, a client needs to know, at least, that some appropriate monitor is aware of the listed logs, submitter pubkeys, and the selection of witnesses. So there is some need for authentication, or a bootstrap of trust.
Regards, /Niels