| Function | Description |
|---|---|
| causal_config | Returns the configuration of a causal model |
| causal_next_tokens_pred_tbl | Generate next tokens after a context and their predictability using a causal transformer model |
| causal_pred_mats | Generate a list of predictability matrices using a causal transformer model |
| causal_preload | Preloads a causal language model |
| causal_targets_pred | Compute the predictability of target words (or phrases) given their contexts, using a causal transformer model |
| causal_tokens_pred_lst | Compute the predictability of every token in a sentence (or vector of sentences), returned as a list, using a causal transformer model |
| causal_words_pred | Compute the predictability of every word in a sentence (or vector of words), using a causal transformer model |
| df_jaeger14 | Self-Paced Reading Dataset on Chinese Relative Clauses |
| df_sent | Example dataset: Two word-by-word sentences |
| installed_py_pangoling | Check if the required Python dependencies for 'pangoling' are installed |
| install_py_pangoling | Install the Python packages needed for 'pangoling' |
| masked_config | Returns the configuration of a masked model |
| masked_preload | Preloads a masked language model |
| masked_targets_pred | Get the predictability of a target word (or phrase) given a left and right context |
| masked_tokens_pred_tbl | Get the possible tokens and their log probabilities for each mask in a sentence |
| ntokens | The number of tokens in a string or vector of strings |
| perplexity_calc | Calculates the perplexity of a vector of (log-)probabilities |
| set_cache_folder | Set cache folder for HuggingFace transformers |
| tokenize_lst | Tokenize an input string (or vector of strings), returning a list of tokens |
| transformer_vocab | Returns the vocabulary of a model |
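
As a quick orientation to the causal (left-to-right) functions listed above, the minimal sketch below scores the bundled `df_sent` example word by word. It assumes that `df_sent` has `word` and `sent_n` columns and that `causal_words_pred()` accepts `x`, `by`, and `model` arguments; check the individual help pages for the exact signatures.

```r
library(pangoling)

# Download/cache the model weights ahead of time (GPT-2 here, for illustration).
causal_preload(model = "gpt2")

# Word-by-word log-predictability, scored sentence by sentence.
# (The columns `word` and `sent_n`, and the arguments `x`, `by`, and
#  `model`, are assumptions; see ?causal_words_pred.)
df_sent$lp <- causal_words_pred(
  x     = df_sent$word,
  by    = df_sent$sent_n,
  model = "gpt2"
)

# Perplexity of the first sentence from its word-level log-probabilities
# (dropping any NA for words with no preceding context).
perplexity_calc(na.omit(df_sent$lp[df_sent$sent_n == 1]))
```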
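
The masked (bidirectional) functions mirror this workflow. The sketch below is a hypothetical usage example: the model name "bert-base-uncased" is chosen only for illustration, and the argument names `prev_contexts`, `targets`, and `after_contexts` are assumptions; consult `?masked_targets_pred` and `?masked_tokens_pred_tbl` for the actual signatures.

```r
library(pangoling)

# Preload a BERT-style masked model (model name for illustration only).
masked_preload(model = "bert-base-uncased")

# Candidate fillers and their log-probabilities for each [MASK] position.
masked_tokens_pred_tbl(
  "The apple doesn't fall far from the [MASK].",
  model = "bert-base-uncased"
)

# Predictability of a specific target given left and right context.
# (Argument names are assumptions; see ?masked_targets_pred.)
masked_targets_pred(
  prev_contexts  = "The apple doesn't fall far from the",
  targets        = "tree",
  after_contexts = ".",
  model          = "bert-base-uncased"
)
```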