@sacOO7 sacOO7 commented Jan 26, 2026

Summary by CodeRabbit

  • Refactor
    • Simplified page serialization process by removing multi-language handling and variant generation.
    • Updated URL generation to include .md file extensions in output.
    • Reduced GraphQL query requirements by eliminating unnecessary fields.


@sacOO7 sacOO7 requested a review from GregHolmes January 26, 2026 09:32

coderabbitai bot commented Jan 26, 2026

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.


Walkthrough

Removed language-related validation and handling from llmstxt generation. Simplified the GraphQL query by eliminating language fields and extraction logic. Updated base URL generation to explicitly include .md file extensions in the serialized output.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Language Logic Removal & URL Extension Update<br>`data/onPostBuild/llmstxt.ts` | Deleted language validation imports, the `VALID_LANGUAGES` constant, and language extraction code paths. Removed language fields from the GraphQL query (`allFileHtml`, `internal.contentFilePath`). Simplified MDX processing to skip async language extraction. Updated URL generation to explicitly append the `.md` extension to page slugs. Removed per-language variant entry generation. |
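The ".md extension" change described above can be sketched in TypeScript. This is an illustrative sketch only, assuming a simple page shape; `SITE_URL`, `PageNode`, and `toLlmsTxtEntry` are assumed names, not the actual identifiers in `data/onPostBuild/llmstxt.ts`.

```typescript
// Illustrative sketch of the simplified serialization (assumed names,
// not the actual code in data/onPostBuild/llmstxt.ts).
const SITE_URL = 'https://ably.com';

interface PageNode {
  slug: string; // e.g. 'channels' or 'guides/ai-transport/export-chat'
  title: string;
  description: string;
}

// Build one llms.txt entry, explicitly appending .md so consumers no
// longer have to add the extension themselves.
function toLlmsTxtEntry(page: PageNode): string {
  const slug = page.slug.replace(/^\/+|\/+$/g, ''); // trim stray slashes
  return `- [${page.title}](${SITE_URL}/docs/${slug}.md): ${page.description}`;
}
```

With per-language variants removed, each page serializes to exactly one line of this form.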

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 Languages fade, but links stand tall,
With .md extensions visible to all,
One simpler path through the code we weave,
Clarity wins—no need to believe! ✨📄

🚥 Pre-merge checks | ✅ 5 passed

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Linked Issues check | ✅ Passed | The changes fully implement FTF-479 requirements by updating the llmstxt generation script to produce explicit markdown links to .md files instead of requiring consumers to append the .md extension. |
| Out of Scope Changes check | ✅ Passed | All changes are directly scoped to the stated objective: simplifying language handling and generating explicit markdown links, with no unrelated modifications detected. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage; skipping docstring coverage check. |
| Title check | ✅ Passed | The title accurately reflects the main change: updating the llmstxt script to support explicit markdown links instead of relying on consumers to append the '.md' extension. |




@sacOO7 sacOO7 requested a review from kennethkalmer January 26, 2026 09:32
@GregHolmes GregHolmes added the review-app Create a Heroku review app label Jan 26, 2026
@ably-ci ably-ci temporarily deployed to ably-docs-update-llmstx-m9sw28 January 26, 2026 09:55 Inactive

@GregHolmes GregHolmes left a comment

LGTM

- The markdown link returns content for all code snippets
- Hence, language-specific links are unnecessary and only increase the llms.txt size
@ably-ci ably-ci temporarily deployed to ably-docs-update-llmstx-m9sw28 January 26, 2026 10:35 Inactive
@sacOO7 sacOO7 requested a review from Copilot January 26, 2026 10:39
@sacOO7 sacOO7 changed the title Updates script to support explicit markdown links in llms.txt Update script to support explicit markdown links in llms.txt Jan 26, 2026

Copilot AI left a comment


Pull request overview

Updates the llms.txt generation script to emit explicit Markdown (.md) links for documentation pages and removes language-specific link variants to reduce file size.

Changes:

  • Removes language detection and language-specific URL generation for llms.txt entries.
  • Simplifies the GraphQL query and page model to only rely on MDX frontmatter + file path info.
  • Updates serialization to link to /docs/<slug>.md for each docs page.
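The "simplified GraphQL query" might take roughly the following shape; the field names here are assumptions drawn from the change summary (frontmatter plus slug, no language fields), not the exact query in `llmstxt.ts`.

```typescript
// Hedged sketch of the trimmed-down page query: only frontmatter and
// the slug remain, with no language fields. Field names are assumed
// from the change summary, not copied from llmstxt.ts.
const pagesQuery = `
  {
    allMdx {
      nodes {
        frontmatter {
          title
          meta_description
        }
        fields {
          slug
        }
      }
    }
  }
`;
```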



sacOO7 commented Jan 26, 2026

@coderabbitai review


coderabbitai bot commented Jan 26, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@GregHolmes

I know I approved beforehand, but there are some things that might need changing while we're updating llms.txt. The generation script predates AI Transport being a product in the docs, and it currently messes up how that's shown. Instead of AI Transport appearing as a product, you first see its guides under Platform:

## Platform

- [About Ably](https://ably-dev.com/docs/platform.md): An introduction to Ably and its highly-available, scalable platform.
- [Guide: Stream Anthropic responses using the message-per-response pattern](https://ably-dev.com/docs/guides/ai-transport/anthropic-message-per-response.md): Stream tokens from the Anthropic Messages API over Ably in realtime using message appends.
- [Guide: Stream OpenAI responses using the message-per-response pattern](https://ably-dev.com/docs/guides/ai-transport/openai-message-per-response.md): Stream tokens from the OpenAI Responses API over Ably in realtime using message appends.
- [Guide: Stream Anthropic responses using the message-per-token pattern](https://ably-dev.com/docs/guides/ai-transport/anthropic-message-per-token.md): Stream tokens from the Anthropic Messages API over Ably in realtime.
- [Guide: Stream OpenAI responses using the message-per-token pattern](https://ably-dev.com/docs/guides/ai-transport/openai-message-per-token.md): Stream tokens from the OpenAI Responses API over Ably in realtime.
- [Guide: Stream Vercel AI SDK responses using the message-per-token pattern](https://ably-dev.com/docs/guides/ai-transport/vercel-message-per-token.md): Stream tokens from the Vercel AI SDK over Ably in realtime.
- [Guide: Export chat data to your own systems](https://ably-dev.com/docs/guides/chat/export-chat.md): Learn how to export chat data from Ably Chat to your own systems.
- [Guide: Building livestream chat at scale with Ably](https://ably-dev.com/docs/guides/chat/build-livestream.md): Architecting livestream chat with Ably: performance, reliability, and cost at scale. Key decisions, technical depth, and why Ably is the right choice.
- [llms.txt](https://ably-dev.com/docs/platform/ai-llms/llms-txt.md): Discover all Ably documentation pages using llms.txt, a machine-readable index optimized for LLMs and AI assistants.
- [Building with LLMs](https://ably-dev.com/docs/platform/ai-llms.md): Learn how to use LLMs to build with Ably documentation. Access markdown versions of docs and use our LLM-optimized resources.
- [Support tickets](https://ably-dev.com/docs/platform/support.md): Learn more about Ably's AI Transport and the features that enable you to quickly build functionality into new and existing applications.
- [Ably CLI](https://ably-dev.com/docs/platform/tools/cli.md): The Ably CLI is a command-line interface for managing Ably resources and interacting with Ably's products APIs directly from your terminal.

And at the very end, all of these are AI Transport pages except the FAQs:

## General

### Documentation

- [About AI Transport](https://ably-dev.com/docs/ai-transport.md): Learn more about Ably's AI Transport and the features that enable you to quickly build functionality into new and existing applications.
- [Pub/Sub FAQs](https://ably-dev.com/docs/faq.md): Complete collection of Ably FAQ answers covering SDK issues, connection troubleshooting, configuration problems, and technical solutions.
- [User input](https://ably-dev.com/docs/ai-transport/messaging/accepting-user-input.md): Enable users to send prompts to AI agents over Ably with verified identity and message correlation.
- [Chain of thought](https://ably-dev.com/docs/ai-transport/messaging/chain-of-thought.md): Stream chain-of-thought reasoning from thinking models in AI applications
- [Citations](https://ably-dev.com/docs/ai-transport/messaging/citations.md): Attach source citations to AI responses using message annotations
- [Tool calls](https://ably-dev.com/docs/ai-transport/messaging/tool-calls.md): Stream tool call execution visibility to users, enabling transparent AI interactions and generative UI experiences.
- [Sessions & identity overview](https://ably-dev.com/docs/ai-transport/sessions-identity.md): Manage session lifecycle and identity in decoupled AI architectures
- [Human in the loop](https://ably-dev.com/docs/ai-transport/messaging/human-in-the-loop.md): Implement human-in-the-loop workflows for AI agents using Ably capabilities and claims to ensure authorized users approve sensitive tool calls.
- [Identifying users and agents](https://ably-dev.com/docs/ai-transport/sessions-identity/identifying-users-and-agents.md): Establish trusted identity and roles in decoupled AI sessions
- [Push notifications FAQs](https://ably-dev.com/docs/faq/push-faqs.md): Frequently asked questions about Ably's push notification service, including debugging, configuration, and troubleshooting guides.
- [Message per response](https://ably-dev.com/docs/ai-transport/token-streaming/message-per-response.md): Stream individual tokens from AI models into a single message over Ably.
- [Online status](https://ably-dev.com/docs/ai-transport/sessions-identity/online-status.md): Use Ably Presence to show which users and agents are currently connected to an AI session
- [Token streaming limits](https://ably-dev.com/docs/ai-transport/token-streaming/token-rate-limits.md): Learn how token streaming interacts with Ably message limits and how to ensure your application delivers consistent performance.
- [Message per token](https://ably-dev.com/docs/ai-transport/token-streaming/message-per-token.md): Stream individual tokens from AI models as separate messages over Ably.
- [Token streaming](https://ably-dev.com/docs/ai-transport/token-streaming.md): Learn about token streaming with Ably AI Transport, including common patterns and the features provided by the Ably solution.


sacOO7 commented Jan 26, 2026

Thanks for the feedback, will take a look 👍

@sacOO7 sacOO7 changed the title Update script to support explicit markdown links in llms.txt Support explicit markdown links in llms.txt Jan 26, 2026

sacOO7 commented Jan 26, 2026

Okay, I've added separate categories for ai-transport and FAQs
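One possible shape for that bucketing, as a sketch only: `sectionFor` and the section names are my assumptions about how slugs could be grouped, not the actual implementation in `llmstxt.ts`.

```typescript
// Hypothetical slug-based section bucketing, assuming slugs like
// 'ai-transport/...' and 'faq/...'. sectionFor and the section names
// are illustrative, not the actual code.
function sectionFor(slug: string): string {
  if (
    slug === 'ai-transport' ||
    slug.startsWith('ai-transport/') ||
    slug.startsWith('guides/ai-transport/')
  ) {
    return 'AI Transport';
  }
  if (slug === 'faq' || slug.startsWith('faq/')) {
    return 'FAQs';
  }
  return 'Platform'; // everything else keeps its previous section
}
```

Grouping on the slug prefix keeps AI Transport guides out of the Platform section and separates the FAQ pages, matching the feedback above.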
