sendMessageDraft

Use this method to stream a partial message to a user while the message is being generated. Returns True on success.

Parameters

chat_id (Integer, required)
    Unique identifier for the target private chat

message_thread_id (Integer, optional)
    Unique identifier for the target message thread

draft_id (Integer, required)
    Unique identifier of the message draft; must be non-zero. Changes to drafts with the same identifier are animated.

text (String, required, formattable, 1-4096 characters)
    Text of the message to be sent, 1-4096 characters after entities parsing

parse_mode (String, optional)
    Mode for parsing entities in the message text. See formatting options for more details.

entities (MessageEntity[], optional)
    A JSON-serialized list of special entities that appear in message text, which can be specified instead of parse_mode
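As a sketch of the `entities` alternative to `parse_mode` (the chat ID and text here are illustrative values, not from this page):

```typescript
// Draft parameters using explicit entities instead of parse_mode.
// Offsets and lengths are counted in UTF-16 code units.
const text = "Thinking hard...";
const draftParams = {
  chat_id: 123456789, // hypothetical private chat ID
  draft_id: 1,
  text,
  // Make the word "Thinking" (offset 0, length 8) bold
  entities: [{ type: "bold", offset: 0, length: 8 }],
};
// await bot.api.sendMessageDraft(draftParams);
```

Because `entities` carries the formatting explicitly, `parse_mode` is omitted entirely here.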

Returns

On success, True is returned.

GramIO Usage

```ts
// Stream a draft that updates in place with animation
bot.on("message", async (ctx) => {
  // Show the initial partial content
  await ctx.sendMessageDraft({ draft_id: 1, text: "Thinking..." });

  // Simulate generating a response
  await new Promise((r) => setTimeout(r, 1000));

  // Update the same draft (the change is animated)
  await ctx.sendMessageDraft({
    draft_id: 1,
    text: "Thinking... done! Here is your answer.",
  });
});
```
```ts
// High-level: stream an AsyncIterable (e.g., LLM output) as live typing previews
bot.on("message", async (ctx) => {
  async function* generateResponse() {
    yield "Hello";
    yield ", world";
    yield "!";
  }

  // streamMessage handles draft_id management, batching, and finalization
  const messages = await ctx.streamMessage(generateResponse());

  console.log(`Finalized into ${messages.length} message(s)`);
});
```
```ts
// Pass messageParams to streamMessage to attach a keyboard after streaming
bot.on("message", async (ctx) => {
  async function* llmStream() {
    yield "Processing your request";
    yield "... here is the result!";
  }

  await ctx.streamMessage(llmStream(), {
    messageParams: {
      reply_markup: new InlineKeyboard().text("Done", "ack"),
    },
  });
});
```
```ts
// Direct API call: update an existing draft
await bot.api.sendMessageDraft({
  chat_id: 123456789,
  draft_id: 42,
  text: "Loading data...",
});
```

Errors

| Code | Error | Cause |
| --- | --- | --- |
| 400 | Bad Request: chat not found | chat_id is invalid or the bot has no private chat history with that user |
| 400 | Bad Request: not enough rights | The bot does not have forum topic mode enabled; configure it via @BotFather |
| 400 | Bad Request: can't parse entities | Malformed entities array or wrong parse_mode markup |
| 403 | Forbidden: bot was blocked by the user | The user blocked the bot; catch this and mark the user as inactive |
| 429 | Too Many Requests: retry after N | Rate limit hit; check retry_after, use the auto-retry plugin |

TIP

Use GramIO's auto-retry plugin to handle 429 errors automatically.
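If you cannot use the plugin, the retry loop can be sketched manually; the error shape assumed below (a `retry_after` field on the thrown error's `payload`) is an assumption for illustration, not a documented GramIO type:

```typescript
// Retry an API call when a 429 with retry_after is thrown.
// Simplified sketch; prefer GramIO's auto-retry plugin in real bots.
async function withRetry<T>(
  call: () => Promise<T>,
  maxAttempts = 3,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await call();
    } catch (err: any) {
      const retryAfter = err?.payload?.retry_after; // hypothetical error shape
      if (retryAfter === undefined || attempt >= maxAttempts) throw err;
      // Wait the server-requested number of seconds before retrying
      await new Promise((r) => setTimeout(r, retryAfter * 1000));
    }
  }
}
```

Usage would look like `await withRetry(() => bot.api.sendMessageDraft({ chat_id, draft_id: 1, text }))`.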

Tips & Gotchas

  • Private chats only. chat_id must be the numeric ID of a private chat. Groups, supergroups, and channels are not supported. Username strings (@username) are also not accepted.
  • Bot must have forum topic mode enabled. This method only works for bots with forum topic mode enabled in @BotFather. Enable it under Bot Settings → Group Privacy → Forum Topic Mode.
  • Same draft_id = animated update. Sending a draft with the same non-zero draft_id updates the existing draft in place with a smooth animation — ideal for streaming token-by-token output.
  • draft_id must be non-zero. A draft_id of 0 is explicitly forbidden.
  • Use ctx.streamMessage() for LLM output. The high-level ctx.streamMessage(asyncIterable, options?) method handles draft_id management, batching up to 4096 chars, and calling sendMessage to finalize. Avoid managing sendMessageDraft calls manually for streaming use cases.
  • parse_mode and entities are mutually exclusive. GramIO's format helper produces entities — never pass parse_mode alongside it.
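The batching behavior described above can be modeled with a small pure helper. This is a simplified sketch of the idea under stated assumptions, not GramIO's actual implementation:

```typescript
// Collect streamed chunks, then split the combined text into
// Telegram-sized messages of at most 4096 characters each.
const MAX_MESSAGE_LENGTH = 4096;

async function collectIntoMessages(
  stream: AsyncIterable<string>,
): Promise<string[]> {
  let full = "";
  for await (const chunk of stream) full += chunk;

  const messages: string[] = [];
  for (let i = 0; i < full.length; i += MAX_MESSAGE_LENGTH) {
    messages.push(full.slice(i, i + MAX_MESSAGE_LENGTH));
  }
  return messages;
}
```

The real method additionally throttles `sendMessageDraft` updates while chunks arrive and finalizes with `sendMessage`; this sketch only shows the length-based splitting.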

See Also