# Language Model API

Core language model functionality for loading models, running inference, and managing activations.

## Main Classes
### mi_crow.language_model.language_model.LanguageModel

`LanguageModel(model, tokenizer, store, model_id=None, device=None)`

Fence-style language model wrapper.

Provides a unified interface for working with language models, including:

- Model initialization and configuration
- Inference operations through the `inference` property
- Hook management (detectors and controllers)
- Model persistence
- Activation tracking

Initialize LanguageModel.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Module` | PyTorch model module | *required* |
| `tokenizer` | `PreTrainedTokenizerBase` | HuggingFace tokenizer | *required* |
| `store` | `Store` | Store instance for persistence | *required* |
| `model_id` | `str \| None` | Optional model identifier (auto-extracted if not provided) | `None` |
| `device` | `str \| device \| None` | Optional device string or `torch.device` (defaults to `'cpu'` if None) | `None` |

Source code in `src/mi_crow/language_model/language_model.py`
#### clear_detectors

`clear_detectors()`

Clear all accumulated metadata for registered detectors.

This is useful when running multiple independent inference runs (e.g. separate `infer_texts` / `infer_dataset` calls) and you want to ensure that detector state does not leak between runs.

Source code in `src/mi_crow/language_model/language_model.py`
#### from_huggingface (classmethod)

`from_huggingface(model_name, store, tokenizer_params=None, model_params=None, device=None)`

Load a language model from HuggingFace Hub.

Automatically loads the model to GPU if `device` is `"cuda"` and CUDA is available. This prevents OOM errors by keeping the model on GPU instead of in CPU RAM.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_name` | `str` | HuggingFace model identifier | *required* |
| `store` | `Store` | Store instance for persistence | *required* |
| `tokenizer_params` | `dict` | Optional tokenizer parameters | `None` |
| `model_params` | `dict` | Optional model parameters | `None` |
| `device` | `str \| device \| None` | Target device (`"cuda"`, `"cpu"`, `"mps"`). If `"cuda"` and CUDA is available, the model is loaded directly to GPU using `device_map="auto"` (via the HuggingFace factory helpers). | `None` |

Returns:

| Type | Description |
|---|---|
| `LanguageModel` | LanguageModel instance |

Source code in `src/mi_crow/language_model/language_model.py`
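A minimal loading sketch. The model name `"gpt2"`, the `Store` import path, and the `Store` constructor are illustrative assumptions — this page does not document how a `Store` is built, so check the Store API for the actual setup:

```python
from mi_crow.language_model.language_model import LanguageModel
from mi_crow.store import Store  # assumed import path; see the Store docs

store = Store("./runs")  # hypothetical constructor taking a base path

# Downloads (or reuses) the model and tokenizer from HuggingFace Hub.
# With device="cuda" and CUDA available, weights go straight to GPU
# via device_map="auto"; otherwise loading falls back to CPU.
lm = LanguageModel.from_huggingface(
    "gpt2",
    store=store,
    device="cuda",
)
```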
#### from_local (classmethod)

`from_local(saved_path, store, model_id=None, device=None)`

Load a language model from a saved file (created by `save_model`).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `saved_path` | `Path \| str` | Path to the saved model file (`.pt` file) | *required* |
| `store` | `Store` | Store instance for persistence | *required* |
| `model_id` | `str \| None` | Optional model identifier. If not provided, will use the model_id from saved metadata. If provided, will be used to load the model architecture from HuggingFace. | `None` |
| `device` | `str \| device \| None` | Optional device string or `torch.device` (defaults to `'cpu'` if None) | `None` |

Returns:

| Type | Description |
|---|---|
| `LanguageModel` | LanguageModel instance |

Raises:

| Type | Description |
|---|---|
| `FileNotFoundError` | If the saved file doesn't exist |
| `ValueError` | If the saved file format is invalid or model_id is required but not provided |

Source code in `src/mi_crow/language_model/language_model.py`
#### from_local_torch (classmethod)

`from_local_torch(model_path, tokenizer_path, store, device=None)`

Load a language model from local HuggingFace paths.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_path` | `str` | Path to the model directory or file | *required* |
| `tokenizer_path` | `str` | Path to the tokenizer directory or file | *required* |
| `store` | `Store` | Store instance for persistence | *required* |
| `device` | `str \| device \| None` | Optional device string or `torch.device` (defaults to `'cpu'` if None) | `None` |

Returns:

| Type | Description |
|---|---|
| `LanguageModel` | LanguageModel instance |

Source code in `src/mi_crow/language_model/language_model.py`
#### get_all_detector_metadata

`get_all_detector_metadata()`

Get metadata from all registered detectors.

Returns:

| Type | Description |
|---|---|
| `tuple[dict[str, dict[str, Any]], dict[str, dict[str, Tensor]]]` | Tuple of (detectors_metadata, detectors_tensor_metadata) |

Source code in `src/mi_crow/language_model/language_model.py`
#### get_input_tracker

`get_input_tracker()`

Get the input tracker instance if it exists.

Returns:

| Type | Description |
|---|---|
| `InputTracker \| None` | InputTracker instance or None |

Source code in `src/mi_crow/language_model/language_model.py`
#### save_detector_metadata

`save_detector_metadata(run_name, batch_idx, unified=False, clear_after_save=True)`

Save detector metadata to the store.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `run_name` | `str` | Name of the run | *required* |
| `batch_idx` | `int \| None` | Batch index. Ignored when `unified` is True. | *required* |
| `unified` | `bool` | If True, save metadata in a single detectors directory for the whole run instead of per-batch directories. | `False` |
| `clear_after_save` | `bool` | If True, clear detector metadata after saving to free memory. Defaults to True to prevent OOM errors when processing large batches. | `True` |

Returns:

| Type | Description |
|---|---|
| `str` | Path where metadata was saved |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If store is not set |

Source code in `src/mi_crow/language_model/language_model.py`
#### save_model

`save_model(path=None)`

Save the model and its metadata to the store.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `Path \| str \| None` | Optional path to save the model. If None, defaults to `{model_id}/model.pt` relative to the store base path. | `None` |

Returns:

| Type | Description |
|---|---|
| `Path` | Path where the model was saved |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If store is not set |

Source code in `src/mi_crow/language_model/language_model.py`
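The two persistence methods pair naturally: `save_model` writes `{model_id}/model.pt` under the store, and `from_local` restores it. A sketch, assuming `lm` is an already-constructed `LanguageModel` and `store` is its `Store` instance (both hypothetical here):

```python
from mi_crow.language_model.language_model import LanguageModel

# Save: with path=None this defaults to {model_id}/model.pt
# relative to the store base path.
saved_path = lm.save_model()

# Restore later; model_id is read from the saved metadata unless
# explicitly overridden via the model_id argument.
lm_restored = LanguageModel.from_local(saved_path, store=store, device="cpu")
```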
#### tokenize

`tokenize(texts, **kwargs)`

Tokenize texts using the language model tokenizer.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `texts` | `Sequence[str]` | Sequence of text strings to tokenize | *required* |
| `**kwargs` | `Any` | Additional tokenizer arguments | `{}` |

Returns:

| Type | Description |
|---|---|
| `Any` | Tokenized encodings |

Source code in `src/mi_crow/language_model/language_model.py`
### mi_crow.language_model.context.LanguageModelContext (dataclass)

`LanguageModelContext(language_model, model_id=None, tokenizer_params=None, model_params=None, device='cpu', dtype=None, model=None, tokenizer=None, store=None, special_token_ids=None, _hook_registry=dict(), _hook_id_map=dict())`

Shared context for LanguageModel and its components.
### mi_crow.language_model.layers.LanguageModelLayers

`LanguageModelLayers(context)`

Manages layer access and hook registration for LanguageModel.

Initialize LanguageModelLayers.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `context` | `LanguageModelContext` | LanguageModelContext instance | *required* |

Source code in `src/mi_crow/language_model/layers.py`
#### disable_all_hooks

`disable_all_hooks()`

Disable all registered hooks.

Source code in `src/mi_crow/language_model/layers.py`
#### disable_hook

`disable_hook(hook_id)`

Disable a specific hook by ID.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `hook_id` | `str` | Hook ID to disable | *required* |

Returns:

| Type | Description |
|---|---|
| `bool` | True if hook was found and disabled, False otherwise |

Source code in `src/mi_crow/language_model/layers.py`
#### enable_all_hooks

`enable_all_hooks()`

Enable all registered hooks.

Source code in `src/mi_crow/language_model/layers.py`
#### enable_hook

`enable_hook(hook_id)`

Enable a specific hook by ID.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `hook_id` | `str` | Hook ID to enable | *required* |

Returns:

| Type | Description |
|---|---|
| `bool` | True if hook was found and enabled, False otherwise |

Source code in `src/mi_crow/language_model/layers.py`
#### get_controllers

`get_controllers()`

Get all registered Controller hooks.

Returns:

| Type | Description |
|---|---|
| `List[Controller]` | List of Controller instances |

Source code in `src/mi_crow/language_model/layers.py`
#### get_detectors

`get_detectors()`

Get all registered Detector hooks.

Returns:

| Type | Description |
|---|---|
| `List[Detector]` | List of Detector instances |

Source code in `src/mi_crow/language_model/layers.py`
#### get_hooks

`get_hooks(layer_signature=None, hook_type=None)`

Get registered hooks, optionally filtered by layer and/or type.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `layer_signature` | `str \| int \| None` | Optional layer to filter by | `None` |
| `hook_type` | `HookType \| str \| None` | Optional hook type to filter by (`HookType.FORWARD` or `HookType.PRE_FORWARD`) | `None` |

Returns:

| Type | Description |
|---|---|
| `List[Hook]` | List of Hook instances |

Source code in `src/mi_crow/language_model/layers.py`
#### get_layer_names

`get_layer_names()`

Get all layer names.

Returns:

| Type | Description |
|---|---|
| `List[str]` | List of layer names |

Source code in `src/mi_crow/language_model/layers.py`
#### print_layer_names

`print_layer_names()`

Print layer names with basic info. Useful for debugging and exploring model structure.

Source code in `src/mi_crow/language_model/layers.py`
#### register_forward_hook_for_layer

`register_forward_hook_for_layer(layer_signature, hook, hook_args=None)`

Register a forward hook directly on a layer.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `layer_signature` | `str \| int` | Layer name or index | *required* |
| `hook` | `Callable` | Hook callable | *required* |
| `hook_args` | `dict` | Optional arguments for `register_forward_hook` | `None` |

Returns:

| Type | Description |
|---|---|
| `Any` | Hook handle |

Source code in `src/mi_crow/language_model/layers.py`
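Because `hook` is a plain callable here, the standard PyTorch forward-hook signature `(module, inputs, output)` applies. A capture sketch, assuming `lm` is a loaded `LanguageModel`; note that the `lm.layers` access path is an assumption — this page documents `LanguageModelLayers` but not how it is exposed on `LanguageModel`:

```python
captured = {}

def capture_output(module, inputs, output):
    # Forward hooks receive (module, inputs, output); layer outputs may be
    # a tensor or a tuple depending on the architecture.
    hidden = output[0] if isinstance(output, tuple) else output
    captured["hidden"] = hidden.detach()

# Layer signature can be a layer name or an index.
handle = lm.layers.register_forward_hook_for_layer(0, capture_output)
lm.inference.infer_texts(["hello world"])
handle.remove()  # the returned handle behaves like a standard torch hook handle
```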
#### register_hook

`register_hook(layer_signature, hook, hook_type=None)`

Register a hook on a layer.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `layer_signature` | `str \| int` | Layer name or index | *required* |
| `hook` | `Hook` | Hook instance to register | *required* |
| `hook_type` | `HookType \| str \| None` | Type of hook (`HookType.FORWARD` or `HookType.PRE_FORWARD`). If None, uses `hook.hook_type` | `None` |

Returns:

| Type | Description |
|---|---|
| `str` | The hook's ID |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If hook ID is not unique or if mixing hook types on the same layer |

Source code in `src/mi_crow/language_model/layers.py`
#### register_pre_forward_hook_for_layer

`register_pre_forward_hook_for_layer(layer_signature, hook, hook_args=None)`

Register a pre-forward hook directly on a layer.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `layer_signature` | `str \| int` | Layer name or index | *required* |
| `hook` | `Callable` | Hook callable | *required* |
| `hook_args` | `dict` | Optional arguments for `register_forward_pre_hook` | `None` |

Returns:

| Type | Description |
|---|---|
| `Any` | Hook handle |

Source code in `src/mi_crow/language_model/layers.py`
#### unregister_hook

`unregister_hook(hook_or_id)`

Unregister a hook by Hook instance or ID.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `hook_or_id` | `Hook \| str` | Hook instance or hook ID string | *required* |

Returns:

| Type | Description |
|---|---|
| `bool` | True if hook was found and removed, False otherwise |

Source code in `src/mi_crow/language_model/layers.py`
### mi_crow.language_model.tokenizer.LanguageModelTokenizer

`LanguageModelTokenizer(context)`

Handles tokenization for LanguageModel.

Initialize LanguageModelTokenizer.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `context` | `LanguageModelContext` | LanguageModelContext instance | *required* |

Source code in `src/mi_crow/language_model/tokenizer.py`
#### split_to_tokens

`split_to_tokens(text, add_special_tokens=False)`

Split text into token strings.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `text` | `Union[str, Sequence[str]]` | Single string or sequence of strings to tokenize | *required* |
| `add_special_tokens` | `bool` | Whether to add special tokens (e.g., BOS, EOS) | `False` |

Returns:

| Type | Description |
|---|---|
| `Union[List[str], List[List[str]]]` | For a single string: list of token strings. For a sequence of strings: list of lists of token strings. |

Source code in `src/mi_crow/language_model/tokenizer.py`
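A quick sketch of the token-level view. The `lm.tokenizer` access path is an assumption — the method is documented on `LanguageModelTokenizer`, but this page does not state how that component is exposed on `LanguageModel`:

```python
# Single string -> list of token strings (subword pieces, not ids).
tokens = lm.tokenizer.split_to_tokens("Hello world")

# Sequence of strings -> list of lists of token strings; with
# add_special_tokens=True, markers like BOS/EOS appear in the output.
batch = lm.tokenizer.split_to_tokens(["Hello", "Goodbye"], add_special_tokens=True)
```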
#### tokenize

`tokenize(texts, padding=False, pad_token='[PAD]', **kwargs)`

Robust batch tokenization that works across tokenizer variants.

Tries methods in order:

- callable tokenizer (most HF tokenizers)
- `batch_encode_plus`
- `encode_plus` per item + `tokenizer.pad` to collate

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `texts` | `Sequence[str]` | Sequence of text strings to tokenize | *required* |
| `padding` | `bool` | Whether to pad sequences | `False` |
| `pad_token` | `str` | Pad token string | `'[PAD]'` |
| `**kwargs` | `Any` | Additional tokenizer arguments | `{}` |

Returns:

| Type | Description |
|---|---|
| `Any` | Tokenized encodings |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If tokenizer is not initialized |
| `TypeError` | If tokenizer is not usable for batch tokenization |

Source code in `src/mi_crow/language_model/tokenizer.py`
### mi_crow.language_model.activations.LanguageModelActivations

`LanguageModelActivations(context)`

Handles activation saving and processing for LanguageModel.

Initialize LanguageModelActivations.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `context` | `LanguageModelContext` | LanguageModelContext instance | *required* |

Source code in `src/mi_crow/language_model/activations.py`
#### save_activations

`save_activations(texts, layer_signature, run_name=None, batch_size=None, *, dtype=None, max_length=None, autocast=True, autocast_dtype=None, free_cuda_cache_every=0, verbose=False, save_in_batches=True, save_attention_mask=False, stop_after_last_layer=True)`

Save activations from a list of texts.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `texts` | `Sequence[str]` | Sequence of text strings to process | *required* |
| `layer_signature` | `str \| int \| list[str \| int]` | Layer signature (or list of signatures) to capture activations from | *required* |
| `run_name` | `str \| None` | Optional run name (generated if None) | `None` |
| `batch_size` | `int \| None` | Optional batch size for processing (if None, processes all at once) | `None` |
| `dtype` | `dtype \| None` | Optional dtype to convert activations to | `None` |
| `max_length` | `int \| None` | Optional max length for tokenization | `None` |
| `autocast` | `bool` | Whether to use autocast | `True` |
| `autocast_dtype` | `dtype \| None` | Optional dtype for autocast | `None` |
| `free_cuda_cache_every` | `int \| None` | Clear CUDA cache every N batches (0 or None to disable) | `0` |
| `verbose` | `bool` | Whether to log progress | `False` |
| `save_attention_mask` | `bool` | Whether to also save attention masks (automatically attaches ModelInputDetector) | `False` |
| `stop_after_last_layer` | `bool` | Whether to stop the model forward pass after the last requested layer to save memory and time. Defaults to True. | `True` |

Returns:

| Type | Description |
|---|---|
| `str` | Run name used for saving |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If model or store is not initialized |

Source code in `src/mi_crow/language_model/activations.py`
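A capture sketch, assuming `lm` is a loaded `LanguageModel`. The `lm.activations` access path is an assumption (the method is documented on `LanguageModelActivations`), and the layer indices are illustrative:

```python
import torch

# Capture activations from two layers, batched, stored as float16.
run_name = lm.activations.save_activations(
    ["The quick brown fox", "jumps over the lazy dog"],
    layer_signature=[4, 8],     # list of layer signatures (names or indices)
    batch_size=2,
    dtype=torch.float16,        # downcast activations before saving
    save_attention_mask=True,   # also attaches a ModelInputDetector
    verbose=True,
)
# run_name identifies the saved activations in the store for later retrieval
```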
#### save_activations_dataset

`save_activations_dataset(dataset, layer_signature, run_name=None, batch_size=32, *, dtype=None, max_length=None, autocast=True, autocast_dtype=None, free_cuda_cache_every=None, verbose=False, save_in_batches=True, save_attention_mask=False, stop_after_last_layer=True)`

Save activations from a dataset.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dataset` | `BaseDataset` | Dataset to process | *required* |
| `layer_signature` | `str \| int \| list[str \| int]` | Layer signature (or list of signatures) to capture activations from | *required* |
| `run_name` | `str \| None` | Optional run name (generated if None) | `None` |
| `batch_size` | `int` | Batch size for processing | `32` |
| `dtype` | `dtype \| None` | Optional dtype to convert activations to | `None` |
| `max_length` | `int \| None` | Optional max length for tokenization | `None` |
| `autocast` | `bool` | Whether to use autocast | `True` |
| `autocast_dtype` | `dtype \| None` | Optional dtype for autocast | `None` |
| `free_cuda_cache_every` | `int \| None` | Clear CUDA cache every N batches (None to auto-detect, 0 to disable) | `None` |
| `verbose` | `bool` | Whether to log progress | `False` |
| `save_attention_mask` | `bool` | Whether to also save attention masks (automatically attaches ModelInputDetector) | `False` |
| `stop_after_last_layer` | `bool` | Whether to stop the model forward pass after the last requested layer to save memory and time. Defaults to True. | `True` |

Returns:

| Type | Description |
|---|---|
| `str` | Run name used for saving |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If model or store is not initialized |

Source code in `src/mi_crow/language_model/activations.py`
### mi_crow.language_model.inference.InferenceEngine

`InferenceEngine(language_model)`

Handles inference operations for LanguageModel.

Initialize inference engine.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `language_model` | `'LanguageModel'` | LanguageModel instance | *required* |

Source code in `src/mi_crow/language_model/inference.py`
#### execute_inference

`execute_inference(texts, tok_kwargs=None, autocast=True, autocast_dtype=None, with_controllers=True, stop_after_layer=None)`

Execute inference on texts.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `texts` | `Sequence[str]` | Sequence of input texts | *required* |
| `tok_kwargs` | `Dict \| None` | Optional tokenizer keyword arguments | `None` |
| `autocast` | `bool` | Whether to use automatic mixed precision | `True` |
| `autocast_dtype` | `dtype \| None` | Optional dtype for autocast | `None` |
| `with_controllers` | `bool` | Whether to use controllers during inference | `True` |
| `stop_after_layer` | `str \| int \| None` | Optional layer signature (name or index) after which the forward pass should be stopped early | `None` |

Returns:

| Type | Description |
|---|---|
| `tuple[Any, Dict[str, Tensor]]` | Tuple of (model_output, encodings) |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If texts is empty or tokenizer is not initialized |

Source code in `src/mi_crow/language_model/inference.py`
#### extract_logits

`extract_logits(output)`

Extract logits tensor from model output.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `output` | `Any` | Model output | *required* |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Logits tensor |

Source code in `src/mi_crow/language_model/inference.py`
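`extract_logits` pairs with `execute_inference` to normalize the various HuggingFace output types into a plain tensor. A sketch, assuming `lm` is a loaded `LanguageModel` (the greedy-decoding step is illustrative):

```python
# Raw forward pass without controller steering.
output, encodings = lm.inference.execute_inference(
    ["The capital of France is"],
    with_controllers=False,
)

# Normalize the model output into a logits tensor, typically shaped
# (batch, seq_len, vocab_size) for causal LMs.
logits = lm.inference.extract_logits(output)

# Greedy next-token choice from the last position.
next_token_id = logits[0, -1].argmax().item()
```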
#### infer_dataset

`infer_dataset(dataset, run_name=None, batch_size=32, tok_kwargs=None, autocast=True, autocast_dtype=None, with_controllers=True, free_cuda_cache_every=0, clear_detectors_before=False, verbose=False, stop_after_layer=None, save_in_batches=True)`

Run inference on a whole dataset with metadata saving.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dataset` | `'BaseDataset'` | Dataset to process | *required* |
| `run_name` | `str \| None` | Optional run name (generated if None) | `None` |
| `batch_size` | `int` | Batch size for processing | `32` |
| `tok_kwargs` | `Dict \| None` | Optional tokenizer keyword arguments | `None` |
| `autocast` | `bool` | Whether to use automatic mixed precision | `True` |
| `autocast_dtype` | `dtype \| None` | Optional dtype for autocast | `None` |
| `with_controllers` | `bool` | Whether to use controllers during inference | `True` |
| `free_cuda_cache_every` | `int \| None` | Clear CUDA cache every N batches (0 or None to disable) | `0` |
| `clear_detectors_before` | `bool` | If True, clears all detector state before running | `False` |
| `verbose` | `bool` | Whether to log progress | `False` |
| `stop_after_layer` | `str \| int \| None` | Optional layer signature (name or index) after which the forward pass should be stopped early | `None` |

Returns:

| Type | Description |
|---|---|
| `str` | Run name used for saving |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If model or store is not initialized |

Source code in `src/mi_crow/language_model/inference.py`
#### infer_texts

`infer_texts(texts, run_name=None, batch_size=None, tok_kwargs=None, autocast=True, autocast_dtype=None, with_controllers=True, clear_detectors_before=False, verbose=False, stop_after_layer=None, save_in_batches=True)`

Run inference on a list of strings with optional metadata saving.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `texts` | `Sequence[str]` | Sequence of input texts | *required* |
| `run_name` | `str \| None` | Optional run name for saving metadata (if None, no metadata saved) | `None` |
| `batch_size` | `int \| None` | Optional batch size for processing (if None, processes all at once) | `None` |
| `tok_kwargs` | `Dict \| None` | Optional tokenizer keyword arguments | `None` |
| `autocast` | `bool` | Whether to use automatic mixed precision | `True` |
| `autocast_dtype` | `dtype \| None` | Optional dtype for autocast | `None` |
| `with_controllers` | `bool` | Whether to use controllers during inference | `True` |
| `clear_detectors_before` | `bool` | If True, clears all detector state before running | `False` |
| `verbose` | `bool` | Whether to log progress | `False` |
| `stop_after_layer` | `str \| int \| None` | Optional layer signature (name or index) after which the forward pass should be stopped early | `None` |
| `save_in_batches` | `bool` | If True, save detector metadata in per-batch directories. If False, aggregate all detector metadata for the run under a single detectors directory. | `True` |

Returns:

| Type | Description |
|---|---|
| `tuple[Any, Dict[str, Tensor]] \| tuple[List[Any], List[Dict[str, Tensor]]]` | If `batch_size` is None or >= len(texts): tuple of (model_output, encodings). If `batch_size` < len(texts): tuple of (list of outputs, list of encodings). |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If texts is empty or tokenizer is not initialized |

Source code in `src/mi_crow/language_model/inference.py`
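A batched-run sketch, assuming `lm` is a loaded `LanguageModel`; the run name is illustrative. Note that the return shape depends on `batch_size`:

```python
# With batch_size < len(texts), results come back as parallel lists,
# one entry per batch; passing run_name enables detector-metadata saving.
outputs, encodings_list = lm.inference.infer_texts(
    ["first text", "second text", "third text"],
    run_name="demo-run",
    batch_size=2,
    clear_detectors_before=True,  # start from a clean detector state
    verbose=True,
)
```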
## Utilities

### mi_crow.language_model.initialization

Model initialization and factory methods.
#### create_from_huggingface

`create_from_huggingface(cls, model_name, store, tokenizer_params=None, model_params=None, device=None)`

Load a language model from HuggingFace Hub.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `cls` | `type['LanguageModel']` | LanguageModel class | *required* |
| `model_name` | `str` | HuggingFace model identifier | *required* |
| `store` | `Store` | Store instance for persistence | *required* |
| `tokenizer_params` | `dict \| None` | Optional tokenizer parameters | `None` |
| `model_params` | `dict \| None` | Optional model parameters | `None` |
| `device` | `str \| device \| None` | Target device (`"cuda"`, `"cpu"`, `"mps"`). Model will be moved to this device after loading. | `None` |

Returns:

| Type | Description |
|---|---|
| `'LanguageModel'` | LanguageModel instance |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If model_name is invalid |
| `RuntimeError` | If model loading fails |

Source code in `src/mi_crow/language_model/initialization.py`
#### create_from_local_torch

`create_from_local_torch(cls, model_path, tokenizer_path, store, device=None)`

Load a language model from local HuggingFace paths.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `cls` | `type['LanguageModel']` | LanguageModel class | *required* |
| `model_path` | `str` | Path to the model directory or file | *required* |
| `tokenizer_path` | `str` | Path to the tokenizer directory or file | *required* |
| `store` | `Store` | Store instance for persistence | *required* |
| `device` | `str \| device \| None` | Optional device string or `torch.device` (defaults to `'cpu'` if None) | `None` |

Returns:

| Type | Description |
|---|---|
| `'LanguageModel'` | LanguageModel instance |

Raises:

| Type | Description |
|---|---|
| `FileNotFoundError` | If model or tokenizer paths don't exist |
| `RuntimeError` | If model loading fails |

Source code in `src/mi_crow/language_model/initialization.py`
#### initialize_model_id

`initialize_model_id(model, provided_model_id=None)`

Initialize model ID for LanguageModel.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Module` | PyTorch model module | *required* |
| `provided_model_id` | `str \| None` | Optional model ID provided by user | `None` |

Returns:

| Type | Description |
|---|---|
| `str` | Model ID string |

Source code in `src/mi_crow/language_model/initialization.py`
### mi_crow.language_model.persistence

Model persistence (save/load) operations.
#### load_model_from_saved_file

`load_model_from_saved_file(cls, saved_path, store, model_id=None, device=None)`

Load a language model from a saved file (created by `save_model`).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `cls` | `type['LanguageModel']` | LanguageModel class | *required* |
| `saved_path` | `Path \| str` | Path to the saved model file (`.pt` file) | *required* |
| `store` | `'Store'` | Store instance for persistence | *required* |
| `model_id` | `str \| None` | Optional model identifier. If not provided, will use the model_id from saved metadata. If provided, will be used to load the model architecture from HuggingFace. | `None` |
| `device` | `str \| device \| None` | Optional device string or `torch.device` (defaults to `'cpu'` if None) | `None` |

Returns:

| Type | Description |
|---|---|
| `'LanguageModel'` | LanguageModel instance |

Raises:

| Type | Description |
|---|---|
| `FileNotFoundError` | If the saved file doesn't exist |
| `ValueError` | If the saved file format is invalid or model_id is required but not provided |
| `RuntimeError` | If model loading fails |

Source code in `src/mi_crow/language_model/persistence.py`
#### save_model

`save_model(language_model, path=None)`

Save the model and its metadata to the store.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `language_model` | `'LanguageModel'` | LanguageModel instance to save | *required* |
| `path` | `Path \| str \| None` | Optional path to save the model. If None, defaults to `{model_id}/model.pt` relative to the store base path. | `None` |

Returns:

| Type | Description |
|---|---|
| `Path` | Path where the model was saved |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If store is not set |
| `OSError` | If file operations fail |

Source code in `src/mi_crow/language_model/persistence.py`