locustfile = locustfile.py
headless = true
expect-workers = 5
processes = 5
host = https://litellm.host/
users = 200
spawn-rate = 10
run-time = 5m
csv = out
from locust import HttpUser, task, between
from dotenv import load_dotenv
import os
import urllib3
import uuid
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
load_dotenv()

class MyUser(HttpUser):
    wait_time = between(1, 2)

    @task  # illustrative task (the original snippet is truncated here); assumes LITELLM_API_KEY is set in .env
    def chat_completion(self):
        payload = {"model": "fake-openai-endpoint", "messages": [{"role": "user", "content": str(uuid.uuid4())}]}
        self.client.post("/chat/completions", json=payload,
                         headers={"Authorization": f"Bearer {os.getenv('LITELLM_API_KEY')}"})

model_list:
  - model_name: fake-openai-endpoint
    litellm_params:
      model: openai/fake
      api_key: fake-key
      api_base: https://exampleopenaiendpoint-production.up.railway.app/

general_settings:
  custom_sso: my_pkg.custom_sso_handler
  proxy_batch_write_at: 60

Prompt Structure Template

1. Using XML-style Tags for Clear Boundaries

<instructions>
Your task is to analyze customer data and generate insights.
Follow these rules:
- Maintain data types as specified
- Handle missing fields gracefully
v3.4.8
• bugfix: Fixed an issue with helm templating for rollingUpdate deployment strategy (by @rohantmp in #2864)
• enhancement: more validation for required fields in instance types (by @neogopher in #2833)
v3.4.7
• bugfix: Fixed an issue where platform app installation would fail when using a private OCI registry as the source for the helm charts. (by @neogopher in #2800)

Schema Docs

Type: object
Required: No
Additional properties: Not allowed

Keywords: Apache Camel | AWS S3 | file handling | chunking | Java

Messages

user: I want to read a local file, create equal chunks and store in AWS S3. Can you show me how to do that?

assistant: To read a local file, create equal chunks, and store them in AWS S3 using Apache Camel, you can use a combination of file handling, chunking, and the AWS S3 component in your Camel route.

Here's an outline of the steps to achieve this:

  1. Read the local file using the file component.
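
A minimal sketch of such a route, assuming the Camel 2.x camel-aws-s3 component (aws2-s3 plays the same role in Camel 3+), an AmazonS3 client bound in the registry as "s3Client", and placeholder names such as "my-bucket" and /tmp/input, could look like this:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.camel.builder.RouteBuilder;

public class FileChunksToS3Route extends RouteBuilder {

    // Illustrative chunk size; the last chunk may be smaller than this.
    private static final int CHUNK_SIZE = 5 * 1024 * 1024;

    @Override
    public void configure() {
        // 1. Read the local file with the file component.
        from("file:/tmp/input?noop=true")
            .convertBodyTo(byte[].class)
            // 2. Split the file content into equally sized byte slices.
            .split(method(FileChunksToS3Route.class, "chunk"))
                // 3. Store each slice under its own S3 key, numbered by split index.
                .setHeader("CamelAwsS3Key",
                        simple("${header.CamelFileName}.part-${exchangeProperty.CamelSplitIndex}"))
                .to("aws-s3://my-bucket?amazonS3Client=#s3Client")
            .end();
    }

    // Cuts the file body into fixed-size chunks; called by the splitter above.
    public static List<byte[]> chunk(byte[] body) {
        List<byte[]> chunks = new ArrayList<>();
        for (int i = 0; i < body.length; i += CHUNK_SIZE) {
            chunks.add(Arrays.copyOfRange(body, i, Math.min(body.length, i + CHUNK_SIZE)));
        }
        return chunks;
    }
}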

Keywords: seda | concurrent consumers | timeout property | message age | queue | custom logic

Messages

user: I am using Apache Camel version 2.18.0 with a seda queue and the concurrent consumers option. What I am looking for is a way to control how long a message can remain in the queue itself. For example: say a timeout configuration is set for 2 min; once the message reaches the queue, if it stays there for more than 2 min, an exception should be thrown.

In the seda documentation this timeout property is available, for example:

from("direct:start").to("seda:queue?timeout=120000"); // 2 minutes (in milliseconds)

But this means the message should reach the queue within 2 min. So, if there is no such property to cause a timeout based on message age in a queue, is it true that messages can stay in the queue until they are dequeued for processing? If I want to control how long messages can stay in the queue, I may need to implement custom logic in the application to periodically check and remove messages from the queue if they exceed a
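
One way to implement such custom logic is sketched below; it is only an illustration (the header name "enqueuedAt" and the 2-minute limit are arbitrary choices here): the producer stamps each message with an enqueue timestamp, and the consumer throws if the message has waited longer than the limit.

import org.apache.camel.builder.RouteBuilder;

public class SedaAgeCheckRoute extends RouteBuilder {

    private static final long MAX_AGE_MS = 2 * 60 * 1000;  // 2 minutes

    @Override
    public void configure() {
        // Producer side: record when the message is handed to the queue.
        from("direct:start")
            .process(e -> e.getIn().setHeader("enqueuedAt", System.currentTimeMillis()))
            .to("seda:queue");

        // Consumer side: reject messages that spent more than MAX_AGE_MS waiting in the queue.
        from("seda:queue?concurrentConsumers=5")
            .process(e -> {
                long enqueuedAt = e.getIn().getHeader("enqueuedAt", Long.class);
                if (System.currentTimeMillis() - enqueuedAt > MAX_AGE_MS) {
                    throw new IllegalStateException("Message spent more than 2 minutes in the queue");
                }
            })
            .to("log:processed");
    }
}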

Keywords: sftp | camel | Apache Camel | file processing | concurrency | idempotent

Messages

user: This is my route: from("sftp://userName:password@ip:22/?move=.done").routeId("my-route-1"). How do I avoid processing the same files in case of multiple instances?

assistant: To avoid processing the same files in case of multiple instances, you can use the "idempotent" read lock feature in Apache Camel's SFTP component. This feature uses an "idempotent repository" to keep track of the files that have already been processed and prevents them from being processed again.

Here's an example of how to configure the "idempotent" read lock using Java DSL and the maven artifact reference for the necessary Camel SFTP dependency:
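
A minimal sketch of what such a configuration could look like, assuming Camel 2.x with the org.apache.camel:camel-ftp artifact (which provides the SFTP component) and using the endpoint's idempotent/idempotentRepository options with a repository bound in the registry as "fileRepo". To be effective across multiple instances, the repository would need to be shared (for example database-backed) rather than the in-memory one used here for illustration:

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;
import org.apache.camel.processor.idempotent.MemoryIdempotentRepository;

public class SftpIdempotentExample {
    public static void main(String[] args) throws Exception {
        // In-memory repository for illustration only; use a shared repository
        // (e.g. JDBC-backed) so that all instances see the same processed-file set.
        SimpleRegistry registry = new SimpleRegistry();
        registry.put("fileRepo", MemoryIdempotentRepository.memoryIdempotentRepository(1000));

        CamelContext context = new DefaultCamelContext(registry);
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("sftp://userName:password@ip:22/"
                        + "?move=.done"
                        + "&idempotent=true"                // skip files already recorded in the repository
                        + "&idempotentRepository=#fileRepo"
                        + "&idempotentKey=${file:name}")    // what identifies a file as already processed
                    .routeId("my-route-1")
                    .log("Processing ${file:name}");
            }
        });

        context.start();
        Thread.sleep(60_000);  // let the consumer poll for a minute, then shut down
        context.stop();
    }
}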