
@GrahamcOfBorg
Created December 31, 2024 18:02
@Wojciech1985

from transformers import BloomForCausalLM, BloomTokenizerFast

# Load the model and tokenizer.
# Note: "bigscience/bloom" is the full 176B-parameter checkpoint; smaller
# variants such as "bigscience/bloom-560m" use the same API.
model = BloomForCausalLM.from_pretrained("bigscience/bloom")
tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom")

# Example of text generation
input_text = "Hello, how are you?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=50)

# Decode the result
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)



@jadejoen289

This is a handy guide for loading and using BLOOM models: clear examples of tokenization and text generation really help streamline experimentation, and step-by-step walkthroughs like this make complex setups feel approachable.
