Small Grant: HostProbe (MVP) — self-serve host usability scanner for Sia

Introduction

Project Name:
Small Grant: HostProbe (MVP) — self-serve host usability scanner for Sia

Name of the organization or individual submitting the proposal:
Dapps over Apps


Describe your project.
Host selection on Sia still has a practical gap: sometimes a builder or operator already has a shortlist of hosts and does not need a full ranking platform or a full autopilot workflow. They just need a quick, repeatable way to check which hosts are reachable and likely usable from their own machine right now.

HostProbe is a repo-local CLI + GitHub Action that scans a chosen shortlist of Sia hosts and produces a saved report showing which hosts are usable, risky, or unreachable, and why. A working prototype has already been built and is included with this submission for review. This Small Grant is for turning that existing prototype into a cleaner, more stable, and field-tested MVP that is easier to run, easier to review, and easier to reuse.

This MVP is intentionally narrow. It is not a portal, not a public API, not a hosted service, and not a HostScore replacement. It is a small self-serve utility for scanning a shortlist of hosts from one vantage point and saving the result for review or CI.

The MVP will:

  • take a list of host public keys or announced addresses
  • probe each host from a single vantage point
  • collect a small set of hard signals
  • save the result as machine-readable output

The signals in scope for the MVP are:

  • basic reachability
  • whether host settings or price table data can be fetched successfully
  • whether the host appears to be accepting contracts
  • response time / latency from the scanning machine
  • basic protocol metadata where available
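For illustration, a per-host entry in report.json might carry these signals in a shape like the following. The field names and values here are our assumptions sketching a possible schema, not a finalized one:

```json
{
  "host": "ed25519:0123abcd...",
  "address": "203.0.113.7:9982",
  "reachable": true,
  "settings_fetched": true,
  "accepting_contracts": false,
  "latency_ms": 184,
  "classification": "risky",
  "reasons": ["NOT_ACCEPTING_CONTRACTS"]
}
```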

The outputs will be:

  • report.json
  • report.csv
  • report.html

Each host will be classified as one of:

  • usable
  • risky
  • unreachable

Each result will include explicit reason codes, for example:

  • UNREACHABLE
  • SETTINGS_FETCH_FAILED
  • NOT_ACCEPTING_CONTRACTS
  • HIGH_LATENCY
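As a sketch of how the classifications and reason codes could fit together, the decision rules might look roughly like this. The signal field names and the latency threshold are illustrative assumptions, not the tool's final rules:

```python
# Illustrative classification sketch. The signal fields and the latency
# threshold are assumptions for this example, not HostProbe's final rules.

HIGH_LATENCY_MS = 2000  # assumed cutoff for the HIGH_LATENCY reason code


def classify(signals: dict) -> tuple[str, list[str]]:
    """Map probe signals to a (classification, reason codes) pair."""
    reasons = []
    if not signals.get("reachable"):
        # An unreachable host short-circuits: nothing else can be checked.
        return "unreachable", ["UNREACHABLE"]
    if not signals.get("settings_fetched"):
        reasons.append("SETTINGS_FETCH_FAILED")
    if not signals.get("accepting_contracts"):
        reasons.append("NOT_ACCEPTING_CONTRACTS")
    if signals.get("latency_ms", 0) > HIGH_LATENCY_MS:
        reasons.append("HIGH_LATENCY")
    return ("risky" if reasons else "usable"), reasons
```

For example, a reachable host that is not accepting contracts would come back as `("risky", ["NOT_ACCEPTING_CONTRACTS"])`, so the report states both the verdict and the cause.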

This MVP does not attempt full upload/download throughput tests, contract formation, multi-region probing, continuous monitoring, or host ranking. The goal of this Small Grant is to take the current prototype and develop it into a practical shortlist scanner that is easy to test, useful immediately, and realistic at the $10,000 Small Grant scope.

A working prototype repository and demo video are included with this submission so the Committee can inspect the current state of the tool directly.

How does the projected outcome serve the Foundation’s mission of user-owned data?
Sia works better when builders and operators can quickly identify which hosts are reachable and likely usable from a real vantage point. A shortlist scanner does not solve the full host-selection problem, but it does reduce wasted time on obviously unreachable or unsuitable hosts and gives teams a reusable report they can act on. That improves the quality of host selection and lowers avoidable friction for developers and operators building on top of Sia.

Are you a resident of any jurisdiction on that list? Yes/No
No

Will your payment bank account be located in any jurisdiction on that list? Yes/No
No

Grant Specifics

Amount of money requested and justification with a reasonable breakdown of expenses:
$10,000 USD total

Rate: $125/hour
Total: 80 hours

Breakdown:

  • CLI cleanup, input format, and stable report schema: 10h = $1,250
  • Probe engine refinement, timeouts, retries, and lightweight host checks: 22h = $2,750
  • Classification engine, reason codes, and decision rules: 14h = $1,750
  • JSON / CSV / HTML report outputs: 12h = $1,500
  • GitHub Action, fixtures, and regression tests: 14h = $1,750
  • Docs, sample report updates, demo packaging, and release hardening: 8h = $1,000

Total: 80h = $10,000

What is the high-level architecture overview for the grant? What security best practices are you following? Please review our Development Guide for further details.

High-level architecture

Inputs

  • hosts.txt or hosts.json
  • optional config file for timeouts and output paths
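As an illustration, the hosts.txt form might simply be one entry per line; the exact accepted formats (and the placeholder values below) are still a working assumption:

```
# hosts.txt — one public key or announced address per line
ed25519:0123abcd...
host1.example.com:9982
203.0.113.7:9982
```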

Pipeline

  • parse host list
  • probe each host from one vantage point
  • collect hard signals
  • classify each host
  • emit saved reports
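The pipeline stages above could be wired together roughly as follows. Every helper name here is a placeholder standing in for the prototype's actual components, not its real API:

```python
# Illustrative end-to-end pipeline sketch; every helper is a placeholder
# name, not the prototype's real API.


def probe_host(host: str) -> dict:
    """Placeholder probe: a real version would dial the host with timeouts."""
    return {"reachable": False}


def classify(signals: dict) -> tuple[str, list[str]]:
    """Placeholder classifier mirroring the proposal's categories."""
    if not signals.get("reachable"):
        return "unreachable", ["UNREACHABLE"]
    return "usable", []


def run_scan(host_lines: list[str]) -> list[dict]:
    """Parse a host list, probe each entry, and return classified results."""
    results = []
    for line in host_lines:
        host = line.strip()
        if not host or host.startswith("#"):
            continue  # skip blanks and comments in hosts.txt
        signals = probe_host(host)        # one-vantage-point probe
        cls, reasons = classify(signals)  # decision rules + reason codes
        results.append({"host": host, "classification": cls,
                        "reasons": reasons})
    return results
```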

Outputs

  • report.json
  • report.csv
  • report.html
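A minimal sketch of the report.csv emission step, assuming the per-host record shape used elsewhere in this proposal (column names are illustrative, not the final layout):

```python
import csv
import io

# Illustrative report.csv writer. The record fields are assumptions
# matching this proposal's examples, not a finalized schema.
COLUMNS = ["host", "classification", "latency_ms", "reasons"]


def render_csv(results: list[dict]) -> str:
    """Render per-host results as CSV text; reason codes joined with ';'."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    for record in results:
        row = {key: record.get(key, "") for key in COLUMNS}
        row["reasons"] = ";".join(record.get("reasons", []))
        writer.writerow(row)
    return buf.getvalue()
```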

Security / operational practices

  • no wallet keys
  • no signing
  • no contract formation in the MVP
  • no hosted persistence of user data
  • timeouts and rate limits on probe behavior
  • clear labeling that results are vantage-point-specific, not a network-wide ground truth
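The timeout-and-retry discipline above can be sketched generically. This wrapper is illustrative only: the retry count and backoff are assumed defaults, and `probe_fn` stands in for whatever per-host check actually runs (a TCP dial, a settings fetch, etc.):

```python
import time

# Illustrative retry wrapper; attempts/backoff defaults are assumptions,
# and probe_fn is a placeholder for a real per-host check.


def probe_with_retry(probe_fn, attempts=3, timeout_s=5.0, backoff_s=1.0):
    """Run probe_fn(timeout_s) up to `attempts` times with linear backoff.

    Returns (True, result) on success, or (False, last_error) if every
    attempt failed, so callers can attach a reason code to the failure.
    """
    last_err = None
    for attempt in range(attempts):
        try:
            return True, probe_fn(timeout_s)
        except Exception as err:  # a real probe would catch narrower errors
            last_err = err
            if attempt < attempts - 1:
                time.sleep(backoff_s * (attempt + 1))
    return False, last_err
```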

That last point matters. HostProbe is designed to answer, “what do these hosts look like from this machine right now?” It is not designed to claim a universal network score. That keeps the tool honest and keeps the MVP in scope.

What are the goals of this small grant? Please provide a general timeline for completion.

Goal:
Take the existing HostProbe prototype and develop it into a field-tested MVP that lets a user scan a shortlist of Sia hosts and generate a saved report showing which hosts are usable, risky, or unreachable from that vantage point.

Timeline (4 weeks total)

Week 1

  • review and clean up the current prototype
  • finalize host list input format
  • define a stable JSON output schema
  • add baseline fixtures

Week 2

  • refine probe behavior
  • tighten host classification logic
  • finalize reason code system

Week 3

  • add CSV + HTML rendering
  • add GitHub Action
  • produce updated sample public reports on a small host shortlist
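As a sketch of what the GitHub Action usage could look like once delivered. The invocation, flags, and artifact names below are placeholders, since the final interface will be settled during this work:

```yaml
# Hypothetical workflow sketch; the hostprobe invocation and flags are
# placeholders, not a published interface.
name: hostprobe-scan
on:
  schedule:
    - cron: "0 6 * * 1"   # weekly scan
  workflow_dispatch: {}
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run HostProbe against the shortlist
        run: ./hostprobe --hosts hosts.txt --out report.json  # placeholder CLI
      - uses: actions/upload-artifact@v4
        with:
          name: hostprobe-report
          path: report.json
```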

Week 4

  • field test the MVP
  • improve docs and demo packaging
  • publish final release and a short report on what the MVP found in practice

What are your plans for this project following the grant?

  • maintain the MVP for at least 4-6 weeks after the grant ends
  • keep a compatibility note in the README
  • accept issues and PRs for new reason codes and report improvements
  • if the tool shows clear real-world use, propose a follow-up only for the next logical step, such as broader report views or optional deeper checks

Potential risks that will affect the outcome of the project:

  • Hosts can behave differently in real network conditions.
    Mitigation: the MVP focuses on a small set of lightweight checks that should work across common host setups, and each result includes explicit reason codes so behavior can be reviewed and improved during field testing.

  • Network conditions can influence scan results.
    Mitigation: the tool treats results as vantage-point-specific, uses controlled timeouts and retries, and saves the raw report so scans can be repeated and compared instead of treated as a one-time absolute result.

  • Turning the current prototype into a stable MVP may reveal edge cases that need cleanup.
    Mitigation: we already have a working prototype and demo, so the grant is focused on hardening the implementation, improving test coverage, and making the outputs more stable and easier to review.

  • The tool needs to be easy to run and easy to understand for early users.
    Mitigation: the MVP includes a simple CLI flow, saved JSON/CSV/HTML outputs, sample reports, and a demo video so the workflow is straightforward for reviewers and early users.


Development Information

Will all of your project’s code be open-source?
Yes. All grant-funded code will be open-source. No closed-source components are planned.

Leave a link where code will be accessible for review.
Current MVP repository:
https://github.com/steven3002/HostaProbe


Do you agree to submit monthly progress reports?
Yes. Monthly progress reports will be posted in the forum with milestone status, release links, sample report updates, and any new fixtures or demo changes.


Contact info

Email:

[email protected]

Any other preferred contact methods:
X/Twitter

Hi @DappsoverApps - given your other recent proposal was rejected by the Committee due to lack of stated community need, have you heard from any community members about the problem this MVP is trying to solve?

Hello @mecsbecs, not through one-on-one outreach before posting, but there is public community feedback on this exact need.

In the HostScore thread, a community member asked for more granular comparison by location, said uptime, reliability, and longevity matter for host selection, and called this kind of tooling a needed project. That is the same core workflow this MVP targets: checking a shortlist of hosts from one vantage point to see which ones are actually usable and why. That is why the scope is narrow. It is not a HostScore replacement, a portal, or a leaderboard. It is a practical shortlist scanner built around a need the community has already pointed to.

Thank you.

The features you mentioned have already been implemented in HostScore.

Hi @mike76, you are absolutely right, and I apologize; mentioning geographic granularity in my last reply was the wrong way to frame our tool. HostScore does a fantastic job of benchmarking from distributed global data centers, and it is the absolute gold standard for network-wide host reputation. Where HostProbe diverges is that it is fundamentally a local, subjective diagnostic CLI, not an objective global leaderboard.

Here is how we see them solving different problems:

  1. Subjective Local Reachability vs. Objective Benchmarks
    In p2p networks, reachability is subjective. A host might have a perfect 100% score on HostScore’s backend, but be completely unreachable from a specific developer’s local machine due to their specific ISP routing, strict NAT, or local firewalls. HostProbe is a stateless, zero-cost CLI that developers can run locally (or bake into their CI/CD) to instantly verify whether their exact environment can establish a cryptographic handshake with a shortlisted host before they attempt to spend Siacoins or leave their application hanging.

  2. The Pre-Flight Check for RHP3 (SiaMux)
    While HostScore fully tests RHP3 hosts by funding Ephemeral Accounts and forming contracts, developers often just need to know if their local network can successfully negotiate a SiaMux connection on port 9984. HostProbe acts as a lightweight pre-flight check to validate local SiaMux connectivity without requiring a synced node or a funded wallet.

  3. Built for the RHP4 / WebTransport Future
    This client-side architecture is also critical for where the network is heading. With RHP4 and tools like indexd, web apps will stream directly from hosts via WebTransport. A centralized backend service cannot test if an end-user’s specific local browser/UDP setup can successfully negotiate a WebTransport stream with a host. HostProbe’s architecture is designed specifically to evolve into testing this localized, client-side connectivity.

Our submitted MVP proves this lightweight, local probing architecture using RHP2. Moving forward, we see the tools as highly complementary: builders use HostScore’s API to discover the top global hosts, and then use HostProbe locally to instantly triage which of those hosts are actively reachable from their specific client environment.

We would appreciate your follow-up response, as it helps clear the air on any overlapping concerns between HostProbe and HostScore.
Thank you.

If HostScore sees a host online and “100%” OK, that usually means that host is running fine and no further interference is required. It’s usually the other way around, when a host seems to be working fine from a local perspective but can’t be accessed from outside.

After the V2 hardfork, there are no more RHP3 hosts on the network. I’m surprised this is still a topic.

Hi @mike76, HostScore benchmarks hosts across the network. HostProbe is narrower: it lets a builder take a shortlist, run a check from their own environment, and save a point-in-time report showing which hosts connected, timed out, or failed the probe. So this is meant as local validation for a specific setup, not another public scoring or discovery tool.

That is the boundary we intend to keep. If you still think it overlaps, we would appreciate hearing where you think the overlap starts. Looking forward to your response. Thanks.

The gap we are trying to fill isn’t about the host’s configuration; it’s about the developer’s local environment. This is the exact developer experience we are trying to solve:

The “Inside-Out” Network Problem & The RHP4 Future HostScore tests connections from the outside in (from a data center to the host). HostProbe tests from the inside out (from the developer’s specific machine to the host).

As the network fully migrates to RHP4 and WebTransport, this client-side vantage point becomes critical. WebTransport relies on UDP, which is notoriously mangled, throttled, or outright blocked by restrictive corporate VPNs, strict university Wi-Fi, and bad local NATs.
Standard network tools like netcat or telnet are essentially useless for verifying complex WebTransport handshakes. HostScore will correctly show the host is 100% globally healthy, but the developer’s client app will still hang because their own network is dropping the UDP packets. Setting up a full native node just to watch the logs fail is slow and tedious.

HostProbe is a 10-second, zero-cost ‘pre-flight’ CLI that lets the developer instantly check: “Is this shortlisted host dead, or is my local environment just failing the protocol handshake?”
Ultimately, we don’t see HostProbe as a replacement for HostScore at all. We see a workflow where a developer queries HostScore’s API to discover the best 50 global hosts as a shortlist, and then runs HostProbe locally to verify which of those 50 their specific network environment can actually handshake with before writing any contract code.

As a team, our standard engineering philosophy is to build for deep backward compatibility to ensure no legacy setups are left behind. That mindset led us to anchor our MVP’s architecture to the RHP2 base layer. However, it was an oversight on our part not to realize that the network had completely deprecated RHP3 and is already aggressively pivoting to RHP4.
That being said, this rapid transition actually makes the need for a client-side probe even stronger. As the network moves toward RHP4 and WebTransport, web apps will be streaming directly from hosts. WebTransport relies on UDP, which is notoriously blocked or throttled by strict client-side firewalls and enterprise networks. A global backend service cannot verify if an end-user’s specific browser setup can successfully negotiate a UDP WebTransport stream.

Our current MVP proves this localized, point-in-time diagnostic architecture using RHP2. We believe evolving this exact architecture will be highly necessary for developers to troubleshoot local WebTransport connectivity in the upcoming RHP4 era.

Hi @DappsoverApps - thank you for your explanation. Your proposal comes at an interesting time, however, as we will be releasing new Grants Program funding guidelines next week. This also means the next Grants Committee meeting will be held on April 28th (with April 22 as the proposal submission deadline) to allow for adequate time for these new guidelines to be reviewed & incorporated into proposals.

Please review these guidelines when they’re released next week and then tag me when/if you’ve updated your proposal accordingly to be reviewed.