sendMessageDraft
Use this method to stream a partial message to a user while the message is being generated. Returns True on success.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| chat_id | Integer | Required | Unique identifier for the target private chat |
| message_thread_id | Integer | Optional | Unique identifier for the target message thread |
| draft_id | Integer | Required | Unique identifier of the message draft; must be non-zero. Changes of drafts with the same identifier are animated |
| text | String | Required | Text of the message to be sent, 1-4096 characters after entities parsing |
| parse_mode | String | Optional | Mode for parsing entities in the message text. See formatting options for more details |
| entities | Array of MessageEntity | Optional | A JSON-serialized list of special entities that appear in message text, which can be specified instead of parse_mode |
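Because `entities` can be specified instead of `parse_mode`, a request should set at most one of the two. A minimal guard for catching this mistake before the API rejects it might look like the following (`hasConflictingFormatting` is a hypothetical helper for illustration, not part of GramIO's API):

```typescript
// Hypothetical guard (not part of GramIO): parse_mode and entities are
// alternatives for the same job, so flag requests that set both.
type DraftTextOptions = {
  parse_mode?: string;
  entities?: { type: string; offset: number; length: number }[];
};

function hasConflictingFormatting(options: DraftTextOptions): boolean {
  return options.parse_mode !== undefined && options.entities !== undefined;
}
```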
Returns
On success, True is returned.
GramIO Usage
```ts
// Stream a draft that updates in place with animation
bot.on("message", async (ctx) => {
// Show the initial partial content
await ctx.sendMessageDraft({ draft_id: 1, text: "Thinking..." });
// Simulate generating a response
await new Promise((r) => setTimeout(r, 1000));
// Update the same draft — animates the change
await ctx.sendMessageDraft({
draft_id: 1,
text: "Thinking... done! Here is your answer.",
});
});
```

```ts
// High-level: stream an AsyncIterable (e.g., LLM output) as live typing previews
bot.on("message", async (ctx) => {
async function* generateResponse() {
yield "Hello";
yield ", world";
yield "!";
}
// streamMessage handles draft_id management, batching, and finalization
const messages = await ctx.streamMessage(generateResponse());
console.log(`Finalized into ${messages.length} message(s)`);
});
```

```ts
// Pass messageParams to streamMessage to attach a keyboard after streaming
import { InlineKeyboard } from "gramio";
bot.on("message", async (ctx) => {
async function* llmStream() {
yield "Processing your request";
yield "... here is the result!";
}
await ctx.streamMessage(llmStream(), {
messageParams: {
reply_markup: new InlineKeyboard().text("Done", "ack"),
},
});
});
```

```ts
// Direct API call — update an existing draft
await bot.api.sendMessageDraft({
chat_id: 123456789,
draft_id: 42,
text: "Loading data...",
});
```

Errors
| Code | Error | Cause |
|---|---|---|
| 400 | Bad Request: chat not found | chat_id is invalid or the bot has no private chat history with that user |
| 400 | Bad Request: not enough rights | Bot does not have forum topic mode enabled — configure it via @BotFather |
| 400 | Bad Request: can't parse entities | Malformed entities array or wrong parse_mode markup |
| 403 | Forbidden: bot was blocked by the user | User blocked the bot — catch and mark as inactive |
| 429 | Too Many Requests: retry after N | Rate limit hit — check retry_after, use auto-retry plugin |
TIP
Use GramIO's auto-retry plugin to handle 429 errors automatically.
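As a rough illustration of what such handling involves, here is a minimal sketch of extracting the retry delay from a 429 response. The `ApiError` shape is an assumption modeled on Telegram's standard error payload (`parameters.retry_after`), not a GramIO type:

```typescript
// Hypothetical error shape, modeled on Telegram's standard error payload.
type ApiError = {
  error_code: number;
  parameters?: { retry_after?: number };
};

// Return how long to wait before retrying, in milliseconds, or null if
// the error is not a rate limit and should be surfaced to the caller.
function retryDelayMs(error: ApiError): number | null {
  if (error.error_code !== 429) return null;
  // Fall back to 1 second if the server omits retry_after.
  return (error.parameters?.retry_after ?? 1) * 1000;
}
```

In practice, prefer the auto-retry plugin over hand-rolled loops built on a helper like this.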
Tips & Gotchas
- Private chats only. `chat_id` must be the numeric ID of a private chat. Groups, supergroups, and channels are not supported; username strings (`@username`) are also not accepted.
- Bot must have forum topic mode enabled. This method only works for bots with forum topic mode enabled in @BotFather. Enable it under Bot Settings → Group Privacy → Forum Topic Mode.
- Same `draft_id` = animated update. Sending a draft with the same non-zero `draft_id` updates the existing draft in place with a smooth animation, which is ideal for streaming token-by-token output.
- `draft_id` must be non-zero. A `draft_id` of `0` is explicitly forbidden.
- Use `ctx.streamMessage()` for LLM output. The high-level `ctx.streamMessage(asyncIterable, options?)` method handles `draft_id` management, batching up to 4096 characters, and calling `sendMessage` to finalize. Avoid managing `sendMessageDraft` calls manually for streaming use cases.
- `parse_mode` and `entities` are mutually exclusive. GramIO's `format` helper produces `entities`; never pass `parse_mode` alongside it.
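The 4096-character batching described above can be sketched as a pure helper. `batchChunks` is a hypothetical illustration of the idea, not GramIO's actual implementation:

```typescript
// Hypothetical illustration of draft batching: accumulate streamed tokens
// and start a new chunk whenever adding a token would exceed the limit.
// Note: a single token longer than the limit becomes one oversized chunk;
// a real implementation would also split inside tokens.
function batchChunks(tokens: string[], limit = 4096): string[] {
  const chunks: string[] = [];
  let current = "";
  for (const token of tokens) {
    if (current.length > 0 && current.length + token.length > limit) {
      chunks.push(current);
      current = "";
    }
    current += token;
  }
  if (current.length > 0) chunks.push(current);
  return chunks;
}
```

Each chunk would correspond to one finalized message, matching the array that `ctx.streamMessage()` resolves to.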
See Also
- sendMessage — Finalize and send a complete text message
- Formatting guide — `format` helper, HTML, MarkdownV2
- auto-retry plugin — Handle rate limits automatically