Conversation

@mrsndmn commented Feb 7, 2026

Hi!

Running Transformers models without a chat template leads to the following error. It can be fixed by casting batch[0].stop_sequences to a list; a sketch of the fix follows the traceback.

# src/lighteval/models/transformers/transformers_model.py
            for batch in tqdm(
                dataloader, desc="Greedy generation", position=1, leave=False, disable=self.disable_tqdm
            ):
                contexts = [self.prompt_manager.prepare_prompt(doc) for doc in batch]

                # For chat models, generation stops with EOS token, so we don't need to specify stop tokens
                if self.use_chat_template:
                    stop_tokens = [self.tokenizer.eos_token]
                else:
                    # NOTE: we are assuming all items in a batch behave similarly (same
                    # stop_tokens and max_tokens generated) which is not necessarily
                    # the case! Because of that we only use batch size of 1
>                   stop_tokens = [self.tokenizer.eos_token] + batch[0].stop_sequences
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E                   TypeError: can only concatenate list (not "tuple") to list
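A minimal sketch of the suggested fix (an assumption, not a verified patch): cast stop_sequences to a list before concatenating, so the branch works whether a task provides the stop sequences as a tuple or a list.

# src/lighteval/models/transformers/transformers_model.py
                if self.use_chat_template:
                    stop_tokens = [self.tokenizer.eos_token]
                else:
                    # stop_sequences may arrive as a tuple; list + tuple raises TypeError,
                    # so cast it to a list before concatenating
                    stop_tokens = [self.tokenizer.eos_token] + list(batch[0].stop_sequences)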
How to reproduce the bug
from transformers import AutoModelForCausalLM

from lighteval.logging.evaluation_tracker import EvaluationTracker
from lighteval.models.transformers.transformers_model import TransformersModel, TransformersModelConfig
from lighteval.pipeline import ParallelismManager, Pipeline, PipelineParameters


MODEL_NAME = "unsloth/Llama-3.2-1B" # no chat template model
BENCHMARKS = "mmlu_pro"

evaluation_tracker = EvaluationTracker(output_dir="./results")
pipeline_params = PipelineParameters(
    launcher_type=ParallelismManager.NONE,
    max_samples=2,
)

model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")
config = TransformersModelConfig(model_name=MODEL_NAME, batch_size=1)
model = TransformersModel.from_model(model, config)

pipeline = Pipeline(
    model=model,
    pipeline_parameters=pipeline_params,
    evaluation_tracker=evaluation_tracker,
    tasks=BENCHMARKS,
)

results = pipeline.evaluate()
pipeline.show_results()
results = pipeline.get_results()
