Mila AI -v1.3.7b- -aDDont-

For developers and researchers, this serves as a reminder to always include model cards, licenses, and example code when sharing novel AI artifacts. For enthusiasts, it's an invitation to search for custom Hugging Face Spaces or to contact Mila-affiliated researchers directly.

| Benchmark | Expected Score (1.3B) | Mila AI -v1.3.7b- -aDDont- (speculative) |
|-----------|-----------------------|-------------------------------------------|
| HellaSwag (0-shot) | ~45% | ~48% (if well-tuned) |
| MMLU (5-shot) | ~25% | ~27% |
| HumanEval (pass@1) | ~4% | ~5.5% |
| French GLUE (FLeX) | N/A | Could excel (bilingual) |
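The table quotes HumanEval as pass@1. As a reminder of what that metric actually measures, here is a minimal sketch of the standard unbiased pass@k estimator commonly used for code benchmarks (the function name and example counts are illustrative, not from any Mila release):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: total samples generated per problem
    c: number of those samples that passed the unit tests
    k: the k in pass@k

    Computes 1 - C(n - c, k) / C(n, k): the probability that at
    least one of k randomly drawn samples is correct.
    """
    if n - c < k:
        # fewer failures than k draws: a correct sample is guaranteed
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples per problem, 5 correct -> pass@1 = 0.5
print(pass_at_k(10, 5, 1))
```

A reported "~4% pass@1" thus means roughly 4 of every 100 problems are solved by the model's single greedy (or first sampled) completion.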

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer first (hypothetical path)
model_name = "Mila-AI/-v1.3.7b--aDDont-"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Then run generation
prompt = "Explain the significance of the -aDDont- flag in attention mechanisms."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0]))
```