from transformers import BloomForCausalLM, BloomTokenizerFast

# Load the model and tokenizer
model = BloomForCausalLM.from_pretrained("bigscience/bloom")
tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom")

# Example of text generation
input_text = "Hello, how are you?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=50)

# Decode the result
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
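
Note that "bigscience/bloom" is the full 176B-parameter checkpoint, which is far too large to load on most single machines. A minimal sketch of the same pattern using one of the smaller published BLOOM variants follows; the choice of the bloom-560m checkpoint, the half-precision cast, and the sampling settings are illustrative assumptions, not part of the original snippet.

import torch
from transformers import BloomForCausalLM, BloomTokenizerFast

# Assumed for illustration: a smaller BLOOM checkpoint that fits on one GPU or CPU.
model_name = "bigscience/bloom-560m"

device = "cuda" if torch.cuda.is_available() else "cpu"
# Half precision saves memory on GPU; fall back to float32 on CPU.
dtype = torch.float16 if device == "cuda" else torch.float32

tokenizer = BloomTokenizerFast.from_pretrained(model_name)
model = BloomForCausalLM.from_pretrained(model_name, torch_dtype=dtype).to(device)

input_text = "Hello, how are you?"
inputs = tokenizer(input_text, return_tensors="pt").to(device)

# Sampling instead of greedy decoding gives more varied continuations.
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Passing **inputs (rather than only input_ids) also hands the attention mask to generate, which avoids a common warning when the tokenizer adds padding.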
This is a handy guide for loading and using BLOOM models; clear examples for tokenization and text generation really help streamline experimentation, and step-by-step walkthroughs like this make complex setups feel approachable.