Small Grant: Chronicle

Project Name:

Chronicle (A Sia Host Proof-of-Performance Benchmarking Tool)

Name of the organization or individual submitting the proposal:

Aniket Rawat (Individual Contributor)


Description

This project delivers an independent, standardized benchmarking tool that allows Sia host operators to measure and attest to their real-world performance under transparent, reproducible conditions.

The tool runs locally on the host operator’s machine and executes a fixed set of performance tests against their own node, covering upload throughput, download throughput, latency, and transfer stability. The results are packaged into a cryptographically signed “Proof of Performance” artifact, which can be shared publicly or verified by third-party community tools such as explorers or host evaluation services.
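
For illustration only, such an artifact could take a shape like the following Go structure. Every field name and the schema itself are hypothetical placeholders sketched for this proposal, not a committed format:

```go
// Sketch of a possible Proof of Performance artifact (hypothetical schema).
// Field names and layout are illustrative; the real format would be defined
// and versioned as part of the project.
package pop

import "time"

type Results struct {
	UploadMbps   float64 `json:"uploadMbps"`   // sustained upload throughput
	DownloadMbps float64 `json:"downloadMbps"` // sustained download throughput
	LatencyMs    float64 `json:"latencyMs"`    // median round-trip latency
	Stability    float64 `json:"stability"`    // e.g. variation across repeated runs
}

type Proof struct {
	Version    string    `json:"version"`    // versioned benchmark spec, e.g. "pop/v1"
	HostPubKey string    `json:"hostPubKey"` // host's public key (hex-encoded)
	Timestamp  time.Time `json:"timestamp"`  // when the benchmark was run
	Results    Results   `json:"results"`
	Signature  string    `json:"signature"`  // signature over the canonical JSON body
}
```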

The project avoids building global rankings, centralized services, or continuous monitoring systems. Instead, it delivers a reusable benchmarking primitive that complements existing observational tools by providing opt-in, verifiable performance evidence directly from hosts.


How does the projected outcome serve the Foundation’s mission of user-owned data?

Decentralized storage requires trust without intermediaries. This project strengthens Sia’s mission of user-owned data by improving transparency and accountability at the infrastructure layer.

By enabling hosts to voluntarily generate cryptographically verifiable performance attestations, renters gain clearer signals when choosing where to store their data without relying on centralized authorities or proprietary systems. Independent verification ensures that performance claims can be validated by any third party, reinforcing decentralization and user control.

This approach encourages higher-quality hosting, healthier competition, and stronger trust in Sia as a decentralized storage network.


Are you a resident of any jurisdiction on that list?
No

Will your payment bank account be located in any jurisdiction on that list?
No


Grant Specifics

Amount of money requested and justification with a reasonable breakdown of expenses

Total requested: $7,500 USD

Estimated breakdown:

  • Core benchmarking engine development (Golang, CLI-first): $3,500
  • Proof of Performance format, signing, and verification logic: $1,200
  • Lightweight React-based local interface: $1,400
  • Documentation, specifications, testing, feedback and final polish: $1,400

The project does not require any hosted infrastructure or third-party services. All work is focused on local tooling, documentation, and ecosystem-ready outputs.


What is the high-level architecture overview for the grant? What security best practices are you following?

Architecture Overview

  • A local host-side benchmark runner (CLI / local service) executes standardized performance tests.
  • Test results are serialized into a signed JSON Proof of Performance, cryptographically bound to the host’s public key (a signing sketch follows this list).
  • A lightweight local React UI communicates with the benchmark runner via localhost HTTP endpoints.
  • Third-party tools can independently verify proofs using published verification logic and documentation.
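
As a rough sketch of how the proof could be bound to the host’s key, an Ed25519 signature over the canonically serialized proof body would suffice. The package layout, fields, and canonicalization choice below are assumptions for illustration, not the final design:

```go
// Minimal signing sketch (assumed design): serialize the proof body to JSON,
// sign it with the host's Ed25519 key, and attach the signature. Key handling
// and error handling are simplified for illustration.
package main

import (
	"crypto/ed25519"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

type ProofBody struct {
	Version    string  `json:"version"`
	HostPubKey string  `json:"hostPubKey"`
	UploadMbps float64 `json:"uploadMbps"`
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(nil) // placeholder for the host's key pair

	body := ProofBody{
		Version:    "pop/v1",
		HostPubKey: hex.EncodeToString(pub),
		UploadMbps: 512.3,
	}

	// Go's encoding/json emits struct fields in declaration order, which this
	// sketch treats as the canonical encoding; a real spec would pin this down.
	msg, _ := json.Marshal(body)
	sig := ed25519.Sign(priv, msg)

	fmt.Printf("proof: %s\nsignature: %s\n", msg, hex.EncodeToString(sig))
}
```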

Security Best Practices

  • No centralized servers or telemetry
  • Cryptographic signatures to ensure integrity and authenticity
  • Fixed, versioned benchmark parameters to prevent manipulation
  • No private keys exposed or transmitted
  • Local-only interfaces with no external network exposure (a local-only server sketch follows this list)
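
To make the last bullet concrete, here is a minimal sketch of a loopback-only API server; the port and endpoint are hypothetical examples, not part of a defined interface:

```go
// Local-only API sketch: binding to 127.0.0.1 keeps the benchmark runner's
// HTTP interface unreachable from other machines. The /status endpoint and
// port are hypothetical placeholders.
package main

import (
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.Write([]byte(`{"running":false}`))
	})

	// Explicit loopback bind: the React UI on the same machine can reach this,
	// but nothing on the external network can.
	log.Fatal(http.ListenAndServe("127.0.0.1:9480", mux))
}
```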

What are the goals of this small grant? Please provide a general timeline for completion.

Goals

  • Define a transparent, standardized benchmarking methodology for Sia hosts
  • Implement a verifiable Proof of Performance format
  • Deliver an open-source benchmark runner with a minimal local UI
  • Publish documentation enabling independent verification and ecosystem reuse

Timeline (3 months)

  • Month 1: Benchmark methodology, proof schema, core CLI implementation
  • Month 2: Local API, React UI, verification logic
  • Month 3: Community testing, documentation, final release

Potential risks that will affect the outcome of the project

  • Limited early adoption by hosts (mitigated by keeping the tool lightweight, opt-in, and well-documented)
  • Performance variability due to real-world network conditions (addressed through fixed parameters and documented limitations)
  • Third-party integrations may not materialize (mitigated by design: integrations are optional and not required for the project’s success)

Development Information

Will all of your project’s code be open-source?

Yes.
All code developed as part of this project will be released under the MIT license.

Leave a link where code will be accessible for review.

GitHub repository:
https://github.com/AniketR10/chronicle (Open-source performance benchmarking for Sia hosts with verifiable proofs)

Do you agree to submit monthly progress reports?

Yes.


Contact Info

Email:
[email protected]

Any other preferred contact methods:
GitHub: AniketR10 (Aniket Rawat)
Discord: @puddingpants01

Hello,

Please explain how your pitch stands out from troubleshootd (https://github.com/SiaFoundation/troubleshootd, an open source host troubleshooting API) or hostscore.info, which already measure hosts.

Are you trying to do something differently from these projects?

Asking for clarity.

Kudos.


Hello @pcfreak30 ,

Thank you for the question. I see Chronicle as a complementary tool that fills a specific gap between the internal diagnostics of troubleshootd and the external observability of hostscore.info.

Here is how this project differs and adds value:

  • Hostscore.info provides an external perspective: how a remote observer sees the host. This is often influenced by the observer’s own network conditions and the “public” route to the host.

  • Chronicle provides an internal performance benchmark. It measures the host’s actual hardware and bandwidth potential from the source. It’s the difference between someone else saying “I think you’re fast” and you providing a certified “speedtest” result of your own hardware’s capabilities.

  • troubleshootd is a diagnostic tool for identifying why a host is failing (connectivity, port forwarding, wallet state). It is for fixing things.

  • Chronicle is for certifying things. Its goal is to create a standardized “Proof of Performance” (PoP). This is not about debugging; it’s about providing a verifiable performance label that the host operator can use to build trust.

Neither troubleshootd nor hostscore.info produces a portable, cryptographically signed artifact. My tool focuses on creating a signed JSON Proof of Performance.

  • This proof is bound to the host’s public key.

  • It allows host operators to prove their performance metrics to potential renters or community-run lists without those lists having to perform their own heavy testing.

  • Also, services like hostscore.info could even ingest these proofs to add an “Owner-Verified” badge to host profiles.

  • CLI-first and host-executed rather than observer-driven; opt-in and point-in-time rather than continuous or centralized; fixed parameters and versioned benchmarks (a verification sketch follows this list)
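
To illustrate how lightweight the third-party side could be, verification reduces to a single Ed25519 check over the proof body. The schema below is the same hypothetical one sketched in the proposal, not a finalized format:

```go
// Third-party verification sketch (assumed schema): re-serialize the signed
// body and check the Ed25519 signature against the embedded host key.
package popverify

import (
	"crypto/ed25519"
	"encoding/hex"
	"encoding/json"
)

type ProofBody struct {
	Version    string  `json:"version"`
	HostPubKey string  `json:"hostPubKey"`
	UploadMbps float64 `json:"uploadMbps"`
}

// VerifyProof returns true if sigHex is a valid signature by the key embedded
// in the proof body itself, binding the metrics to that host key.
func VerifyProof(body ProofBody, sigHex string) bool {
	pub, err := hex.DecodeString(body.HostPubKey)
	if err != nil || len(pub) != ed25519.PublicKeySize {
		return false
	}
	sig, err := hex.DecodeString(sigHex)
	if err != nil {
		return false
	}
	msg, err := json.Marshal(body) // must match the signer's canonical encoding
	if err != nil {
		return false
	}
	return ed25519.Verify(ed25519.PublicKey(pub), msg, sig)
}
```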

Thanks!
Happy to clarify more.

Benchyd seems like a better comparison than Troubleshootd.
The tool is currently not working, as the Foundation has higher priorities.

If I understand correctly, you want to create some sort of badge for hosts to say “I think I’m fast”. What value does this bring for (potential) renters?
If, for example, your hardware is fast but the peering between the host and the renter sucks, another host could still offer better service for the renter.

Renterd (and likely also Indexd) already perform renter-side benchmarking and selection based on observed performance, which seems far more suitable for determining which hosts are worth using than host-reported metrics.

I don’t currently see the added value for renters and may be missing the intended use case. The only clear value I see for hosts is the ability to test the effects of configuration or hardware changes, which updating Benchyd could likely also address.

As I said, I might be misunderstanding the intended use case.


Hi @CtrlAltDefeat,

Thank you for bringing up Benchyd and the peering issue; I do agree with your argument.

That said, I believe Chronicle addresses a gap that neither renterd nor Benchyd is designed to solve.

1. The Supply-Side Cold Start Problem

New hosts face a significant cold start issue. A host may have excellent hardware and bandwidth, but renter-side scoring will remain low for an extended period simply due to lack of history.

Chronicle does not replace renter observation. Instead, it allows a host to generate an immediate, verifiable attestation of its underlying hardware capabilities, a kind of “hardware birth certificate.” This can help bootstrap trust and encourage initial contract formation.

2. Why not Benchyd?

Benchyd is a useful internal benchmarking utility, but it stops at local measurement.

Chronicle is building a portable, cryptographically signed primitive. Instead of just seeing numbers on a screen, the host generates a signed artifact that proves: ‘These specific hardware metrics were recorded on this specific host node.’ This artifact can be ingested by third-party explorers, insurance protocols, etc., without them needing to run their own infrastructure to test the host.

The motivation partly comes from:

  • Benchyd currently being non-functional
  • Lack of a maintained, well-documented benchmarking spec
  • No standardized, verifiable output format that can be reasoned about over time

Rather than reviving Benchyd directly, this project aims to:

  • Be minimal, host-focused, and opt-in
  • Emphasize reproducibility and verification over raw scores
  • Remain independent of renter logic and host ranking systems

3. Complementing Renter-Side Metrics

Renter-side metrics correctly capture real-world peering and performance. However, they cannot observe internal host health directly.

Chronicle focuses on internal I/O performance and local stability. When a renter sees degradation, Chronicle can help distinguish between network issues and host-side regression. Combined, renterd/indexd provides the “observed truth” while Chronicle provides the “baseline truth.”

The primary purpose is host-side diagnostics, standardization, and reproducibility, not advertising performance.

  • Give hosts a deterministic, reproducible way to benchmark their own setup
  • Allow hosts to compare before/after changes (hardware upgrades, kernel tuning, network configuration, geographic relocation, etc.)
  • Provide a standard benchmark methodology and proof format so results can be shared, reviewed, or discussed meaningfully, rather than as unverifiable “my host is fast” claims (a sketch of fixed, versioned parameters follows this list)
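
As a sketch of what fixed, versioned parameters could look like in practice (all names and values below are illustrative placeholders, not a defined spec):

```go
// Hypothetical fixed benchmark parameters. Pinning these constants to a spec
// version means two runs of "pop/v1" are comparable by construction, and any
// change to the workload requires bumping the version.
package spec

import "time"

const (
	Version = "pop/v1"

	UploadFileSize   = 256 << 20        // 256 MiB test object on the write path
	DownloadFileSize = 256 << 20        // same size on the read path
	LatencyProbes    = 50               // number of round-trip probes
	Iterations       = 3                // repeated runs for stability measurement
	RunTimeout       = 10 * time.Minute // hard cap per benchmark pass
)
```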

Thanks for the feedback!
Happy to discuss more

Umm, what? Sorry to nitpick, but how do insurance protocols matter here?

And even if you did get this working, you would need the clients to support taking in the hosts, which is a bit of an unknown. The Foundation at one point considered allowing delegation to third-party allow lists for hosts, but punted on the idea. You may end up needing a tool that uses an API to keep the list you create in sync with chain-discovered hosts (and such a tool should wait until after indexd, if it even made sense).

It is also unclear to me how a certification of specific hardware would help. Renters benefit from the performance of a host, not from ensuring a host has a specific CPU. Thus the rep they get is based on them actually delivering.

This idea might make more sense with a network like Akash, where you DO care about the compute power being used, but here you just want to know how said hardware translates into latency, disk I/O performance, and the network pipe.

So, while in general I like the intent of this idea, the data you are certifying with the host’s key… I find it flawed.

If it makes sense, and doesn’t just reproduce Benchyd + a wallet signature, I would change what you are trying to certify, or I could have misunderstood your intent :man_shrugging:.

Kudos.


Hi @pcfreak30 ,

Chronicle is not intended to certify specific hardware or act as an authoritative signal for renter selection, and I agree that renters ultimately care about observed delivery rather than claimed specs. The goal is to provide a standardized, host-side performance baseline that measures what a machine demonstrably does under a fixed workload, such as sustained disk I/O, latency, throughput consistency, and short-term stability.

This is meant to complement renter-side observation, not replace it, by helping hosts diagnose regressions, validate configuration or hardware changes, and provide a reasonable early signal during the cold-start period before renter-side data exists. The output is a signed, portable artifact that can be inspected or ignored entirely, requires no protocol or client support, and avoids turning host-reported data into a trust requirement.

And yes, I went a little too far with insurance protocols; that mention came from thinking too far ahead about generic third-party consumers of signed performance data, not from a concrete plan or requirement of this project.
Thanks!

Thanks for your proposal to The Sia Foundation Grants Program.

After review, the Committee has decided to reject your proposal citing the following reasons:

  • The Committee found there to be insufficient technical detail in the proposal to properly outline the goals of this project (i.e. how is the information for the tool being generated?).
  • The Committee does not see how local benchmarking demonstrates real world performance.
  • There are projects that already exist that can showcase hosting standard information, namely HostScore and SiaGraph.

We’ll be moving this to the Rejected section of the Forum. Thanks again for your proposal, and you’re always welcome to submit new requests if you feel you can address the Committee’s concerns. We would recommend revisiting this idea by developing a tool connecting to an indexd node in order to give baseline performance on your network, which will be possible with delegated nodes once indexd is released.