
@felipefontoura
Created July 24, 2025 21:13
LangChain n8n code — a Code node snippet that wraps the connected LLM with a callback that logs token usage through a connected tool.
// Get the LLM and the tool connected to this node
const llm = await this.getInputConnectionData('ai_languageModel', 0);
const tool = await this.getInputConnectionData('ai_tool', 0);

// Grab the workflow and execution IDs from n8n's global variables
const workflowId = $workflow.id;
const executionId = $execution.id;

// Attach a callback that records token usage when the LLM call finishes
llm.callbacks = [{
  handleLLMEnd: async ({ generations }) => {
    try {
      const gen = generations[0][0];
      // Usage metadata can live in several places depending on the provider;
      // fall back to Gemini-style generationInfo counters if none is present
      const usage = gen.message?.usage_metadata
        || gen.message?.additional_kwargs?.usage_metadata
        || gen.generationInfo?.usage_metadata
        || {
          input_tokens: gen.generationInfo?.promptTokenCount || gen.generationInfo?.input_tokens || 0,
          output_tokens: gen.generationInfo?.candidatesTokenCount || gen.generationInfo?.output_tokens || 0,
          total_tokens: gen.generationInfo?.totalTokenCount || gen.generationInfo?.total_tokens || 0
        };

      const input_tokens = usage.input_tokens || 0;
      const output_tokens = usage.output_tokens || 0;
      const total_tokens = usage.total_tokens || (input_tokens + output_tokens);

      // Hand the usage record to the connected tool
      await tool.func({
        date: new Date().toISOString(),
        model: llm.modelName || llm.model || 'gemini-model',
        input_tokens,
        output_tokens,
        total_tokens,
        workflowId,
        executionId
      });
    } catch (err) {
      console.error('Error logging LLM usage:', err);
    }
  }
}];

return llm;
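The trickiest part of the snippet is the chain of fallbacks for locating the usage metadata. It can be pulled out into a plain function and tested outside n8n; this is a minimal sketch assuming a LangChain-style generation object, with the same field names as above (`extractUsage` is a hypothetical helper, not part of the original node).

```javascript
// Standalone sketch of the token-usage fallback, for testing outside n8n.
// Assumes a LangChain-style generation object (message.usage_metadata,
// generationInfo with Gemini-style promptTokenCount/candidatesTokenCount).
function extractUsage(gen) {
  const usage = gen.message?.usage_metadata
    || gen.message?.additional_kwargs?.usage_metadata
    || gen.generationInfo?.usage_metadata
    || {
      input_tokens: gen.generationInfo?.promptTokenCount || gen.generationInfo?.input_tokens || 0,
      output_tokens: gen.generationInfo?.candidatesTokenCount || gen.generationInfo?.output_tokens || 0,
      total_tokens: gen.generationInfo?.totalTokenCount || gen.generationInfo?.total_tokens || 0
    };

  const input_tokens = usage.input_tokens || 0;
  const output_tokens = usage.output_tokens || 0;
  // If the provider did not report a total, derive it from the parts
  const total_tokens = usage.total_tokens || (input_tokens + output_tokens);

  return { input_tokens, output_tokens, total_tokens };
}
```

This makes it easy to check that both the standard `usage_metadata` path and the Gemini-style `generationInfo` fallback produce consistent records before wiring the callback into a workflow.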