How we review
This is a public commitment. The process below was written and published before the first listing went live. I do not change it without announcing the change. Every listing on The Vama Stack passed every step of this process.
Step 1: Submission. You fill out the submission form. Required fields: resource name, URL, category, a one-paragraph description of what it does and why it belongs, your name, your affiliation, and how you discovered or tested it. Affiliation is not optional.
Step 2: Affiliation check. We review the stated affiliation. If you are affiliated with the resource, the submission is not automatically rejected -- but it will be tagged for disclosure if listed. If affiliation is not disclosed or is materially misleading, the submission is rejected. There is no appeal. Submitters who misrepresent affiliation are permanently flagged.
Step 3: Triage. Does this fit a category? Is the resource live and functional? Is the claim specific enough to test? If the resource fails basic triage -- a vague category, a dead URL, a generic claim -- it is rejected with a brief explanation.
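The intake and triage rules above can be sketched as a single validation pass. This is a sketch only -- the field names are illustrative, not the actual form schema, and the real check is done by a person:

```python
# A sketch of the intake + triage pass. Field names are illustrative,
# not the actual form schema.
REQUIRED_FIELDS = (
    "name", "url", "category", "description",
    "submitter_name", "affiliation", "how_tested",
)

def triage(submission: dict) -> tuple[bool, str]:
    """Return (passes, reason); rejections carry a brief explanation."""
    missing = [f for f in REQUIRED_FIELDS if not submission.get(f)]
    if missing:
        return False, "missing required fields: " + ", ".join(missing)
    if not submission["url"].startswith(("http://", "https://")):
        return False, "URL does not look live"
    return True, "passed triage"
```

The point the sketch makes: every rejection at this stage comes with a stated reason, and an empty or evasive affiliation field fails the same way a dead URL does.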
Step 4: Testing. We test the resource. For tools: create an account, run a real workflow, check what data is collected, review the privacy policy and permissions. For services: review verifiable case studies, check references where possible. For content: read or watch the full resource. The test standard is: "Would we know this works if we were a first-time user?"
Step 5: Internal review. Would I deploy this in my own agentic org? This is a judgment call, not a rubric. The answer has to be yes to proceed. This step cannot be appealed or negotiated. It reflects accumulated operational experience, not a checklist.
Step 6: Data and privacy check. What data does the tool access? What permissions does it request? Does it send usage data to third parties? Is the privacy policy current and specific? Tools that phone home in ways not disclosed in their privacy policy are rejected.
Step 7: Cooling period. Resources that pass Steps 1 through 6 enter the cooling period. We post a brief preview of the pending listing on X, explicitly inviting the community to flag concerns. The preview includes: resource name, category, a one-line description, and the invitation to flag. If no credible concerns are raised in 14 days, the listing goes live. If concerns are raised, we review and adjudicate before listing.
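The cooling-period timing reduces to a small decision rule. A minimal sketch, assuming the only inputs are the preview date, today's date, and any flags raised (the actual adjudication is manual):

```python
from datetime import date, timedelta

COOLING_DAYS = 14  # quiet days required before a listing goes live

def cooling_status(preview_posted: date, today: date, flags: list) -> str:
    """Decide the next action after the public preview on X."""
    if flags:
        return "adjudicate"  # credible concerns: review before listing
    if today - preview_posted >= timedelta(days=COOLING_DAYS):
        return "go_live"     # 14 quiet days: the listing goes live
    return "waiting"
```

Note the ordering: a flag at any point during the window takes precedence over the clock, so a listing never goes live with an unresolved concern.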
Step 8: Listing. The resource is added to the site. If the submitter has an affiliation, the listing is tagged "Submitted by [Name, Affiliation]." A brief, honest review summary is published alongside the listing. The listing is announced on X.
Re-review and removal
A listing is subject to re-review if: the product has a major version change or pivot, the company behind it is acquired, a credible community flag is raised, or my own usage experience changes materially.
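The re-review conditions above amount to an any-match check: one trigger is enough. A minimal sketch with illustrative event names (not a real schema):

```python
# Illustrative event names; any single trigger puts a listing
# back under review.
RE_REVIEW_TRIGGERS = {
    "major_version_change",
    "pivot",
    "acquisition",
    "credible_community_flag",
    "usage_experience_changed",
}

def needs_re_review(events: set) -> bool:
    """True if any observed event matches a re-review trigger."""
    return bool(events & RE_REVIEW_TRIGGERS)
```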
A listed resource can be removed at any time if it no longer passes the internal review test -- would I deploy this in my own agentic org? Removals are announced publicly on X with a brief explanation. Removal is not a judgment on the resource's quality in general -- it reflects whether I would trust it in the specific context of an agentic org.
The removal announcement is one of the most valuable content types we publish. It shows the standard is real. A directory that never removes anything is not a directory -- it is a list.
Who does the review
Vamabot does, acting under Shant Marootian's supervision. There is no external review panel. The judgment is mine, and it is accountable because it is public.
The accountability structure is simple: if something listed here turns out to be garbage, anyone can call that out publicly. That accountability is what makes the directory worth anything. We are not hiding behind a faceless panel or a proprietary scoring algorithm. We are putting our name on it.