Introduction
Project Name:
Small Grant: HostProbe (MVP) — self-serve host usability scanner for Sia
Name of the organization or individual submitting the proposal:
Dapps over Apps
Describe your project.
Host selection on Sia still has a practical gap: sometimes a builder or operator already has a shortlist of hosts and does not need a full ranking platform or a full autopilot workflow. They just need a quick, repeatable way to check which hosts are reachable and likely usable from their own machine right now.
HostProbe is a repo-local CLI + GitHub Action that scans a chosen shortlist of Sia hosts and produces a saved report showing which hosts are usable, risky, or unreachable, and why. A working prototype has already been built and is included with this submission for review. This Small Grant is for turning that existing prototype into a cleaner, more stable, and field-tested MVP that is easier to run, easier to review, and easier to reuse.
This MVP is intentionally narrow. It is not a portal, not a public API, not a hosted service, and not a HostScore replacement. It is a small self-serve utility for scanning a shortlist of hosts from one vantage point and saving the result for review or CI.
The MVP will:
- take a list of host public keys or announced addresses
- probe each host from a single vantage point
- collect a small set of hard signals
- save the result as machine-readable output
The signals in scope for the MVP are:
- basic reachability
- whether host settings or price table data can be fetched successfully
- whether the host appears to be accepting contracts
- response time / latency from the scanning machine
- basic protocol metadata where available
The outputs will be:
- report.json
- report.csv
- report.html
Each host will be classified as one of:
- usable
- risky
- unreachable
Each result will include explicit reason codes, for example:
- UNREACHABLE
- SETTINGS_FETCH_FAILED
- NOT_ACCEPTING_CONTRACTS
- HIGH_LATENCY
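As a sketch of what a single host entry in report.json could look like, the snippet below builds one illustrative record. The field names and values here are assumptions for illustration, not the final schema the grant will deliver:

```python
import json

# Illustrative only: every field name and value below is an assumption,
# not the finalized report.json schema.
sample_entry = {
    "host_key": "ed25519:aaaa1111",         # host public key from the input shortlist
    "address": "example-host.net:9982",     # hypothetical announced address
    "classification": "risky",              # one of: usable / risky / unreachable
    "reason_codes": ["HIGH_LATENCY"],       # explicit, machine-readable reasons
    "latency_ms": 1840,                     # round-trip time from this vantage point
    "accepting_contracts": True,
    "scanned_at": "2024-01-01T00:00:00Z",
}

print(json.dumps(sample_entry, indent=2))
```

Keeping reason codes as a list (rather than a single code) lets one scan report multiple problems for the same host.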
This MVP does not attempt full upload/download throughput tests, contract formation, multi-region probing, continuous monitoring, or host ranking. The goal of this Small Grant is to take the current prototype and develop it into a practical shortlist scanner that is easy to test, useful immediately, and realistic at the $10,000 Small Grant scope.
A working prototype repository and demo video are included with this submission so the Committee can inspect the current state of the tool directly.
How does the projected outcome serve the Foundation’s mission of user-owned data?
Sia works better when builders and operators can quickly identify which hosts are reachable and likely usable from a real vantage point. A shortlist scanner does not solve the full host-selection problem, but it does reduce wasted time on obviously unreachable or unsuitable hosts and gives teams a reusable report they can act on. That improves the quality of host selection and lowers avoidable friction for developers and operators building on top of Sia.
Are you a resident of any jurisdiction on that list? Yes/No
No
Will your payment bank account be located in any jurisdiction on that list? Yes/No
No
Grant Specifics
Amount of money requested and justification with a reasonable breakdown of expenses:
$10,000 USD total
Rate: $125/hour
Total: 80 hours
Breakdown:
- CLI cleanup, input format, and stable report schema: 10h = $1,250
- Probe engine refinement, timeouts, retries, and lightweight host checks: 22h = $2,750
- Classification engine, reason codes, and decision rules: 14h = $1,750
- JSON / CSV / HTML report outputs: 12h = $1,500
- GitHub Action, fixtures, and regression tests: 14h = $1,750
- Docs, sample report updates, demo packaging, and release hardening: 8h = $1,000
Total: 80h = $10,000
What is the high-level architecture overview for the grant? What security best practices are you following? Please review our Development Guide for further details.
High-level architecture
Inputs
- hosts.txt or hosts.json
- optional config file for timeouts and output paths
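To make the input side concrete, here is a minimal parser sketch for a plausible line-oriented hosts.txt format (one host per line, "#" comments). This format is an assumption for illustration; the final input format is one of the Week 1 deliverables:

```python
# Hypothetical hosts.txt payload: one host key or announced address per line,
# with "#" comments. This format is an assumption, not the finalized spec.
SAMPLE = """\
# shortlist for CI scan
ed25519:aaaa1111
ed25519:bbbb2222  # a trailing comment
example-host.net:9982
"""

def parse_host_list(text):
    """Return the non-empty, non-comment entries from a hosts.txt payload."""
    hosts = []
    for line in text.splitlines():
        entry = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if entry:
            hosts.append(entry)
    return hosts

print(parse_host_list(SAMPLE))
# → ['ed25519:aaaa1111', 'ed25519:bbbb2222', 'example-host.net:9982']
```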
Pipeline
- parse host list
- probe each host from one vantage point
- collect hard signals
- classify each host
- emit saved reports
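The five pipeline stages above can be sketched as a single loop. Note that probe_host, the signal names, and the latency threshold below are stand-ins for illustration, not the prototype's actual code:

```python
def probe_host(host):
    # Placeholder: the real probe collects reachability, settings/price-table
    # status, latency, and protocol metadata from one vantage point.
    return {"host": host, "reachable": True, "latency_ms": 120,
            "settings_ok": True, "accepting_contracts": True}

def classify(signals, high_latency_ms=1000):
    """Map hard signals to a classification plus explicit reason codes."""
    reasons = []
    if not signals["reachable"]:
        return "unreachable", ["UNREACHABLE"]
    if not signals["settings_ok"]:
        reasons.append("SETTINGS_FETCH_FAILED")
    if not signals["accepting_contracts"]:
        reasons.append("NOT_ACCEPTING_CONTRACTS")
    if signals["latency_ms"] > high_latency_ms:  # threshold is an assumption
        reasons.append("HIGH_LATENCY")
    return ("risky" if reasons else "usable"), reasons

def run_pipeline(hosts):
    report = []
    for host in hosts:                      # probe each host, one vantage point
        signals = probe_host(host)          # collect hard signals
        status, reasons = classify(signals) # classify with reason codes
        report.append({**signals, "classification": status, "reason_codes": reasons})
    return report                           # then emit as JSON/CSV/HTML

print(run_pipeline(["ed25519:aaaa1111"]))
```

Because classification is a pure function of the collected signals, decision rules can be regression-tested against fixtures without any network access.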
Outputs
- report.json
- report.csv
- report.html
Security / operational practices
- no wallet keys
- no signing
- no contract formation in the MVP
- no hosted persistence of user data
- timeouts and rate limits on probe behavior
- clear labeling that results are vantage-point-specific, not a network-wide ground truth
That last point matters. HostProbe is designed to answer, “what do these hosts look like from this machine right now?” It is not designed to claim a universal network score. That keeps the tool honest and keeps the MVP in scope.
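As a sketch of the "timeouts and rate limits" practice, a basic reachability probe can bound every connection attempt and pause between retries. This is illustrative code, not the prototype's actual probe implementation:

```python
import socket
import time

def tcp_reachable(host, port, timeout=5.0, retries=2, backoff=1.0):
    """Basic reachability check: can a TCP connection open within `timeout`?

    A hard timeout plus a small retry budget with a pause between attempts
    keeps the scan polite toward hosts. Sketch only; parameter defaults are
    assumptions.
    """
    for attempt in range(retries + 1):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            if attempt < retries:
                time.sleep(backoff)  # crude rate limiting between retries
    return False
```

A failed check here would map to the UNREACHABLE reason code; deeper checks (settings fetch, contract acceptance) only run for hosts that pass it.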
What are the goals of this small grant? Please provide a general timeline for completion.
Goal:
Take the existing HostProbe prototype and develop it into a field-tested MVP that lets a user scan a shortlist of Sia hosts and generate a saved report showing which hosts are usable, risky, or unreachable from that vantage point.
Timeline (4 weeks total)
Week 1
- review and clean up the current prototype
- finalize host list input format
- define a stable JSON output schema
- add baseline fixtures
Week 2
- refine probe behavior
- tighten host classification logic
- finalize reason code system
Week 3
- add CSV + HTML rendering
- add GitHub Action
- produce updated sample public reports on a small host shortlist
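For the GitHub Action deliverable, a workflow in a consuming repo could look like the config fragment below. The action name, flags, and CLI invocation are illustrative assumptions, not the published interface:

```yaml
# Hypothetical workflow: the `hostprobe` command and its flags below are
# assumptions for illustration, not the final interface.
name: hostprobe-scan
on:
  schedule:
    - cron: "0 6 * * 1"   # weekly scan, Monday 06:00 UTC
  workflow_dispatch:

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan host shortlist
        run: ./hostprobe scan --hosts hosts.txt --out report.json
      - uses: actions/upload-artifact@v4
        with:
          name: hostprobe-report
          path: report.json
```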
Week 4
- field test the MVP
- improve docs and demo packaging
- publish final release and a short report on what the MVP found in practice
What are your plans for this project following the grant?
- maintain the MVP for at least 4-6 weeks after the grant ends
- keep a compatibility note in the README
- accept issues and PRs for new reason codes and report improvements
- if the tool shows clear real-world use, propose a follow-up only for the next logical step, such as broader report views or optional deeper checks
Potential risks that will affect the outcome of the project:
- Hosts can behave differently in real network conditions.
  Mitigation: the MVP focuses on a small set of lightweight checks that should work across common host setups, and each result includes explicit reason codes so behavior can be reviewed and improved during field testing.
- Network conditions can influence scan results.
  Mitigation: the tool treats results as vantage-point-specific, uses controlled timeouts and retries, and saves the raw report so scans can be repeated and compared instead of treated as a one-time absolute result.
- Turning the current prototype into a stable MVP may reveal edge cases that need cleanup.
  Mitigation: we already have a working prototype and demo, so the grant is focused on hardening the implementation, improving test coverage, and making the outputs more stable and easier to review.
- The tool needs to be easy to run and easy to understand for early users.
  Mitigation: the MVP includes a simple CLI flow, saved JSON/CSV/HTML outputs, sample reports, and a demo video so the workflow is straightforward for reviewers and early users.
Development Information
Will all of your project’s code be open-source?
Yes. All grant-funded code will be open-source. No closed-source components are planned.
Leave a link where code will be accessible for review.
- Current MVP repository: https://github.com/steven3002/HostaProbe
- Demo video: https://drive.google.com/file/d/1Nxd4glLzFBByXpWmJNfB88NRl1Ay4i_n/view?usp=sharing
Do you agree to submit monthly progress reports?
Yes. Monthly progress reports will be posted in the forum with milestone status, release links, sample report updates, and any new fixtures or demo changes.
Contact info
Email:
Any other preferred contact methods:
X/Twitter