How Gera Thinks About AI Discoverability in 2026
/llms.txt and /llms-full.txt on every domain, a published MCP server per product API, Article/FAQ/HowTo/Product JSON-LD on every content page, definitional opening sentences for GEO, and registered listings in MCP catalogues and ChatGPT GPT stores. If an AI agent can't answer a user's question with our product, we've failed discoverability.

The premise
By late 2025, a meaningful share of product discovery happens inside AI assistants. A user asks ChatGPT, Claude, Perplexity, Gemini, or Copilot "what's the best telemedicine platform for Armenia" and the assistant picks 1-3 answers from what it can see. If our product is invisible to the assistant, we lose that user before we know they existed.
Traditional SEO — backlinks, keyword density, schema — still matters, but is no longer the only discovery layer. We call the new layer AID (AI Discoverability). It has four parts.
1. /llms.txt and /llms-full.txt
Two files at the root of every Gera domain. /llms.txt lists the 5-10 most important pages with one-sentence descriptions; /llms-full.txt contains the product's full content as plain text. Serve both at Content-Type text/plain. Refresh when key pages change.
Why: assistants that respect the llms convention prefer these files over crawling HTML. Cost to produce: minutes. Cost to omit: invisibility.
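A minimal sketch of what our /llms.txt could look like, following the llms.txt convention (H1 project name, blockquote summary, H2 sections of linked pages). The page names and URLs below are illustrative, not actual Gera paths:

```
# Gera
> Gera is a family of 31 consumer products — jobs, home services, telemedicine,
> and more — on one account, worldwide.

## Key pages
- [GeraClinic](https://gera.services/clinic): telemedicine appointments and doctor search
- [GeraJobs](https://gera.services/jobs): job search and listings
- [GeraHome](https://gera.services/home): home-services bookings
- [FAQ](https://gera.services/faq): common questions across all products
```

/llms-full.txt follows the same idea but inlines the full page content as plain text instead of linking out.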
2. MCP servers on every product API
Every Gera product exposes its core actions through a Model Context Protocol (MCP) server published in packages/mcp-servers/. We register each with MCP catalogues so any Claude, ChatGPT, or MCP-aware agent can invoke GeraJobs search, GeraHome bookings, or GeraClinic appointments directly. The MCP server becomes the AI-agent front door.
3. GEO — content that AI can cite
Every product landing page, blog post, and FAQ follows five rules: (1) definitional opening sentence (“X is a Y that does Z”); (2) quick-answer block above the fold; (3) specific numbers, statistics, and dates; (4) FAQ section with schema.org/FAQPage markup; (5) authored, dated, citable format. We refresh time-sensitive pages every 7-14 days.
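Rule (4) in practice is a JSON-LD block in the page head. A minimal sketch using the schema.org FAQPage type (the question and answer text here are illustrative):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Does GeraClinic work in Armenia?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Yes. GeraClinic offers telemedicine appointments in Armenia with local and international doctors."
    }
  }]
}
</script>
```

Each question/answer pair is a self-contained, citable unit — exactly the shape an assistant lifts into its reply.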
4. Tracking AI citations
Daily automated queries across ChatGPT, Claude, Perplexity, Gemini, Copilot for our target keywords. Did we appear? In what position? With what citation? Logged to packages/data/ai-citations/ and reviewed weekly. When citation rate drops, we investigate: moved page? Indexing gap? llms.txt stale? Each failure mode has a standard fix.
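The per-answer check reduces to a small parsing step: did our domain appear among the cited links, and at what position? A sketch of that step in Python (the function names and record shape are hypothetical, not our actual pipeline; only the packages/data/ai-citations/ destination is real):

```python
import json
import re


def check_citation(answer_text: str, domain: str = "gera.services") -> dict:
    """Return whether an assistant's answer cites our domain, and where."""
    # Extract the host part of every URL in the answer; position is the
    # 1-based index of our first mention among cited domains.
    pattern = re.compile(r"https?://(?:www\.)?([a-z0-9.-]+)", re.IGNORECASE)
    domains = [m.group(1).lower() for m in pattern.finditer(answer_text)]
    cited = domain in domains
    return {
        "cited": cited,
        "position": domains.index(domain) + 1 if cited else None,
        "total_links": len(domains),
    }


def log_result(path: str, assistant: str, query: str, result: dict) -> None:
    """Append one citation-check record as a JSONL line (e.g. under packages/data/ai-citations/)."""
    record = {"assistant": assistant, "query": query, **result}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

The weekly review then becomes an aggregation over the JSONL log: citation rate per assistant per keyword, flagging any drop against the prior week.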
Why this beats paid ads for us
Paid placement inside AI-assistant answers doesn't yet exist at meaningful scale. The early-mover advantage of being the cited answer for a category question compounds: every citation is a recommendation to the user, and the model's next training cycle will see us quoted. Content and infrastructure, not ad spend, buy this position.
Where we'll take it next
- More MCP-first functionality — write paths, not just reads.
- Structured data feeds for specific assistants (Perplexity Pages, ChatGPT Apps).
- Native Gera assistant integrations so every product is callable from anywhere AI runs.
Related reading
How AI agents find Gera · Why 31 products · GeraNexus — AI platform
Try any Gera product at gera.services. 31 products, one account, worldwide.