| as_array | Converts a torch tensor to an R array |
| AutogradContext | Class representing the context of a custom autograd function. |
| autograd_backward | Computes the sum of gradients of given tensors w.r.t. graph leaves. |
| autograd_function | Records operation history and defines formulas for differentiating ops. |
| autograd_grad | Computes and returns the sum of gradients of outputs w.r.t. the inputs. |
| autograd_set_grad_mode | Set grad mode |
| backends_cudnn_is_available | CuDNN is available |
| backends_cudnn_version | CuDNN version |
| backends_mkldnn_is_available | MKLDNN is available |
| backends_mkl_is_available | MKL is available |
| backends_mps_is_available | MPS is available |
| backends_openmp_is_available | OpenMP is available |
| broadcast_all | Given a list of values (possibly containing numbers), returns a list where each value has been broadcast to a common size following the standard broadcasting rules. |
| buffer_from_torch_tensor | Creates a buffer of memory from a torch tensor |
| clone_module | Clone a torch module. |
| Constraint | Abstract base class for constraints. |
| contrib_sort_vertices | Contrib sort vertices |
| cuda_amp_grad_scaler | Creates a gradient scaler |
| cuda_current_device | Returns the index of the currently selected device. |
| cuda_device_count | Returns the number of GPUs available. |
| cuda_empty_cache | Releases all unoccupied cached memory held by the caching allocator |
| cuda_get_device_capability | Returns the major and minor CUDA capability of 'device' |
| cuda_get_rng_state | Returns the RNG state of a CUDA device |
| cuda_is_available | Returns a bool indicating if CUDA is currently available. |
| cuda_memory_stats | Returns a dictionary of CUDA memory allocator statistics for a given device. |
| cuda_memory_summary | Returns a human-readable summary of the CUDA memory allocator statistics for a given device. |
| cuda_runtime_version | Returns the CUDA runtime version |
| cuda_set_rng_state | Sets the RNG state of a CUDA device |
| cuda_synchronize | Waits for all kernels in all streams on a CUDA device to complete. |
| dataloader | Data loader. Combines a dataset and a sampler, and provides single- or multi-process iterators over the dataset. |
| dataloader_make_iter | Creates an iterator from a DataLoader |
| dataloader_next | Get the next element of a dataloader iterator |
| dataset | Helper function to create a function that generates R6 instances of class 'dataset' |
| dataset_subset | Dataset Subset |
| Distribution | Generic R6 class representing distributions |
| distr_bernoulli | Creates a Bernoulli distribution parameterized by 'probs' or 'logits' (but not both). Samples are binary (0 or 1). They take the value '1' with probability 'p' and '0' with probability '1 - p'. |
| distr_categorical | Creates a categorical distribution parameterized by either 'probs' or 'logits' (but not both). |
| distr_chi2 | Creates a Chi2 distribution parameterized by shape parameter 'df'. This is exactly equivalent to 'distr_gamma(alpha=0.5*df, beta=0.5)' |
| distr_gamma | Creates a Gamma distribution parameterized by shape 'concentration' and 'rate'. |
| distr_mixture_same_family | Mixture of components in the same family |
| distr_multivariate_normal | Creates a multivariate normal (Gaussian) distribution parameterized by a mean vector and a covariance matrix |
| distr_normal | Creates a normal (also called Gaussian) distribution parameterized by 'loc' and 'scale'. |
| distr_poisson | Creates a Poisson distribution parameterized by 'rate', the rate parameter. |
| enumerate | Enumerate an iterator |
| enumerate.dataloader | Enumerate an iterator |
| get_install_libs_url | Returns the URLs to download the libraries required to install torch from files |
| install_torch | Install Torch |
| install_torch_from_file | Install Torch from files |
| is_dataloader | Checks if the object is a dataloader |
| is_nn_buffer | Checks if the object is a nn_buffer |
| is_nn_module | Checks if the object is an nn_module |
| is_nn_parameter | Checks if an object is a nn_parameter |
| is_optimizer | Checks if the object is a torch optimizer |
| is_torch_device | Checks if an object is a torch device |
| is_torch_dtype | Checks if an object is a torch data type |
| is_torch_layout | Checks if an object is a torch layout |
| is_torch_memory_format | Checks if an object is a torch memory format |
| is_torch_qscheme | Checks if an object is a torch QScheme |
| is_undefined_tensor | Checks if a tensor is undefined |
| iterable_dataset | Creates an iterable dataset |
| jit_compile | Compile TorchScript code into a graph |
| jit_load | Loads a 'script_function' or 'script_module' previously saved with 'jit_save' |
| jit_ops | Enable idiomatic access to JIT operators from R. |
| jit_save | Saves a 'script_function' to a path |
| jit_save_for_mobile | Saves a 'script_function' or 'script_module' in bytecode form, to be loaded on a mobile device |
| jit_scalar | Adds the 'jit_scalar' class to the input |
| jit_serialize | Serialize a Script Module |
| jit_trace | Trace a function and return an executable 'script_function'. |
| jit_trace_module | Trace a module |
| jit_tuple | Adds the 'jit_tuple' class to the input |
| jit_unserialize | Unserialize a Script Module |
| linalg_cholesky | Computes the Cholesky decomposition of a complex Hermitian or real symmetric positive-definite matrix. |
| linalg_cholesky_ex | Computes the Cholesky decomposition of a complex Hermitian or real symmetric positive-definite matrix. |
| linalg_cond | Computes the condition number of a matrix with respect to a matrix norm. |
| linalg_det | Computes the determinant of a square matrix. |
| linalg_eig | Computes the eigenvalue decomposition of a square matrix if it exists. |
| linalg_eigh | Computes the eigenvalue decomposition of a complex Hermitian or real symmetric matrix. |
| linalg_eigvals | Computes the eigenvalues of a square matrix. |
| linalg_eigvalsh | Computes the eigenvalues of a complex Hermitian or real symmetric matrix. |
| linalg_householder_product | Computes the first 'n' columns of a product of Householder matrices. |
| linalg_inv | Computes the inverse of a square matrix if it exists. |
| linalg_inv_ex | Computes the inverse of a square matrix if it is invertible. |
| linalg_lstsq | Computes a solution to the least squares problem of a system of linear equations. |
| linalg_matrix_norm | Computes a matrix norm. |
| linalg_matrix_power | Computes the 'n'-th power of a square matrix for an integer 'n'. |
| linalg_matrix_rank | Computes the numerical rank of a matrix. |
| linalg_multi_dot | Efficiently multiplies two or more matrices |
| linalg_norm | Computes a vector or matrix norm. |
| linalg_pinv | Computes the pseudoinverse (Moore-Penrose inverse) of a matrix. |
| linalg_qr | Computes the QR decomposition of a matrix. |
| linalg_slogdet | Computes the sign and natural logarithm of the absolute value of the determinant of a square matrix. |
| linalg_solve | Computes the solution of a square system of linear equations with a unique solution. |
| linalg_solve_triangular | Computes the solution of a triangular system of linear equations with a unique solution |
| linalg_svd | Computes the singular value decomposition (SVD) of a matrix. |
| linalg_svdvals | Computes the singular values of a matrix. |
| linalg_tensorinv | Computes the multiplicative inverse of 'torch_tensordot()' |
| linalg_tensorsolve | Computes the solution 'X' to the system 'torch_tensordot(A, X) = B'. |
| linalg_vector_norm | Computes a vector norm. |
| load_state_dict | Load a state dict file |
| local_autocast | Autocast context manager |
| local_device | Device contexts |
| local_enable_grad | Enable grad |
| local_no_grad | Temporarily modify gradient recording. |
| local_torch_manual_seed | Sets the seed for generating random numbers. |
| lr_cosine_annealing | Sets the learning rate of each parameter group using a cosine annealing schedule |
| lr_lambda | Sets the learning rate of each parameter group to the initial lr times a given function. When last_epoch=-1, sets initial lr as lr. |
| lr_multiplicative | Multiplies the learning rate of each parameter group by the factor given in the specified function. When last_epoch=-1, sets initial lr as lr. |
| lr_one_cycle | One-cycle learning rate |
| lr_reduce_on_plateau | Reduce learning rate on plateau |
| lr_scheduler | Creates learning rate schedulers |
| lr_step | Step learning rate decay |
| nnf_adaptive_avg_pool1d | Adaptive_avg_pool1d |
| nnf_adaptive_avg_pool2d | Adaptive_avg_pool2d |
| nnf_adaptive_avg_pool3d | Adaptive_avg_pool3d |
| nnf_adaptive_max_pool1d | Adaptive_max_pool1d |
| nnf_adaptive_max_pool2d | Adaptive_max_pool2d |
| nnf_adaptive_max_pool3d | Adaptive_max_pool3d |
| nnf_affine_grid | Affine_grid |
| nnf_alpha_dropout | Alpha_dropout |
| nnf_avg_pool1d | Avg_pool1d |
| nnf_avg_pool2d | Avg_pool2d |
| nnf_avg_pool3d | Avg_pool3d |
| nnf_batch_norm | Batch_norm |
| nnf_bilinear | Bilinear |
| nnf_binary_cross_entropy | Binary_cross_entropy |
| nnf_binary_cross_entropy_with_logits | Binary_cross_entropy_with_logits |
| nnf_celu | Celu |
| nnf_celu_ | Celu |
| nnf_contrib_sparsemax | Sparsemax |
| nnf_conv1d | Conv1d |
| nnf_conv2d | Conv2d |
| nnf_conv3d | Conv3d |
| nnf_conv_tbc | Conv_tbc |
| nnf_conv_transpose1d | Conv_transpose1d |
| nnf_conv_transpose2d | Conv_transpose2d |
| nnf_conv_transpose3d | Conv_transpose3d |
| nnf_cosine_embedding_loss | Cosine_embedding_loss |
| nnf_cosine_similarity | Cosine_similarity |
| nnf_cross_entropy | Cross_entropy |
| nnf_ctc_loss | Ctc_loss |
| nnf_dropout | Dropout |
| nnf_dropout2d | Dropout2d |
| nnf_dropout3d | Dropout3d |
| nnf_elu | Elu |
| nnf_elu_ | Elu |
| nnf_embedding | Embedding |
| nnf_embedding_bag | Embedding_bag |
| nnf_fold | Fold |
| nnf_fractional_max_pool2d | Fractional_max_pool2d |
| nnf_fractional_max_pool3d | Fractional_max_pool3d |
| nnf_gelu | Gelu |
| nnf_glu | Glu |
| nnf_grid_sample | Grid_sample |
| nnf_group_norm | Group_norm |
| nnf_gumbel_softmax | Gumbel_softmax |
| nnf_hardshrink | Hardshrink |
| nnf_hardsigmoid | Hardsigmoid |
| nnf_hardswish | Hardswish |
| nnf_hardtanh | Hardtanh |
| nnf_hardtanh_ | Hardtanh |
| nnf_hinge_embedding_loss | Hinge_embedding_loss |
| nnf_instance_norm | Instance_norm |
| nnf_interpolate | Interpolate |
| nnf_kl_div | Kl_div |
| nnf_l1_loss | L1_loss |
| nnf_layer_norm | Layer_norm |
| nnf_leaky_relu | Leaky_relu |
| nnf_linear | Linear |
| nnf_local_response_norm | Local_response_norm |
| nnf_logsigmoid | Logsigmoid |
| nnf_log_softmax | Log_softmax |
| nnf_lp_pool1d | Lp_pool1d |
| nnf_lp_pool2d | Lp_pool2d |
| nnf_margin_ranking_loss | Margin_ranking_loss |
| nnf_max_pool1d | Max_pool1d |
| nnf_max_pool2d | Max_pool2d |
| nnf_max_pool3d | Max_pool3d |
| nnf_max_unpool1d | Max_unpool1d |
| nnf_max_unpool2d | Max_unpool2d |
| nnf_max_unpool3d | Max_unpool3d |
| nnf_mse_loss | Mse_loss |
| nnf_multilabel_margin_loss | Multilabel_margin_loss |
| nnf_multilabel_soft_margin_loss | Multilabel_soft_margin_loss |
| nnf_multi_head_attention_forward | Multi head attention forward |
| nnf_multi_margin_loss | Multi_margin_loss |
| nnf_nll_loss | Nll_loss |
| nnf_normalize | Normalize |
| nnf_one_hot | One_hot |
| nnf_pad | Pad |
| nnf_pairwise_distance | Pairwise_distance |
| nnf_pdist | Pdist |
| nnf_pixel_shuffle | Pixel_shuffle |
| nnf_poisson_nll_loss | Poisson_nll_loss |
| nnf_prelu | Prelu |
| nnf_relu | Relu |
| nnf_relu6 | Relu6 |
| nnf_relu_ | Relu |
| nnf_rrelu | Rrelu |
| nnf_rrelu_ | Rrelu |
| nnf_selu | Selu |
| nnf_selu_ | Selu |
| nnf_sigmoid | Sigmoid |
| nnf_silu | Applies the Sigmoid Linear Unit (SiLU) function, element-wise. See 'nn_silu()' for more information. |
| nnf_smooth_l1_loss | Smooth_l1_loss |
| nnf_softmax | Softmax |
| nnf_softmin | Softmin |
| nnf_softplus | Softplus |
| nnf_softshrink | Softshrink |
| nnf_softsign | Softsign |
| nnf_soft_margin_loss | Soft_margin_loss |
| nnf_tanhshrink | Tanhshrink |
| nnf_threshold | Threshold |
| nnf_threshold_ | Threshold |
| nnf_triplet_margin_loss | Triplet_margin_loss |
| nnf_triplet_margin_with_distance_loss | Triplet margin with distance loss |
| nnf_unfold | Unfold |
| nn_adaptive_avg_pool1d | Applies a 1D adaptive average pooling over an input signal composed of several input planes. |
| nn_adaptive_avg_pool2d | Applies a 2D adaptive average pooling over an input signal composed of several input planes. |
| nn_adaptive_avg_pool3d | Applies a 3D adaptive average pooling over an input signal composed of several input planes. |
| nn_adaptive_log_softmax_with_loss | AdaptiveLogSoftmaxWithLoss module |
| nn_adaptive_max_pool1d | Applies a 1D adaptive max pooling over an input signal composed of several input planes. |
| nn_adaptive_max_pool2d | Applies a 2D adaptive max pooling over an input signal composed of several input planes. |
| nn_adaptive_max_pool3d | Applies a 3D adaptive max pooling over an input signal composed of several input planes. |
| nn_avg_pool1d | Applies a 1D average pooling over an input signal composed of several input planes. |
| nn_avg_pool2d | Applies a 2D average pooling over an input signal composed of several input planes. |
| nn_avg_pool3d | Applies a 3D average pooling over an input signal composed of several input planes. |
| nn_batch_norm1d | BatchNorm1D module |
| nn_batch_norm2d | BatchNorm2D module |
| nn_batch_norm3d | BatchNorm3D module |
| nn_bce_loss | Binary cross entropy loss |
| nn_bce_with_logits_loss | BCE with logits loss |
| nn_bilinear | Bilinear module |
| nn_buffer | Creates a nn_buffer |
| nn_celu | CELU module |
| nn_contrib_sparsemax | Sparsemax activation |
| nn_conv1d | Conv1D module |
| nn_conv2d | Conv2D module |
| nn_conv3d | Conv3D module |
| nn_conv_transpose1d | ConvTranspose1D module |
| nn_conv_transpose2d | ConvTranspose2D module |
| nn_conv_transpose3d | ConvTranspose3D module |
| nn_cosine_embedding_loss | Cosine embedding loss |
| nn_cross_entropy_loss | CrossEntropyLoss module |
| nn_ctc_loss | The Connectionist Temporal Classification loss. |
| nn_dropout | Dropout module |
| nn_dropout2d | Dropout2D module |
| nn_dropout3d | Dropout3D module |
| nn_elu | ELU module |
| nn_embedding | Embedding module |
| nn_embedding_bag | Embedding bag module |
| nn_flatten | Flattens a contiguous range of dims into a tensor. |
| nn_fractional_max_pool2d | Applies a 2D fractional max pooling over an input signal composed of several input planes. |
| nn_fractional_max_pool3d | Applies a 3D fractional max pooling over an input signal composed of several input planes. |
| nn_gelu | GELU module |
| nn_glu | GLU module |
| nn_group_norm | Group normalization |
| nn_gru | Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence. |
| nn_hardshrink | Hardshrink module |
| nn_hardsigmoid | Hardsigmoid module |
| nn_hardswish | Hardswish module |
| nn_hardtanh | Hardtanh module |
| nn_hinge_embedding_loss | Hinge embedding loss |
| nn_identity | Identity module |
| nn_init_calculate_gain | Returns the recommended gain value for the given nonlinearity function |
| nn_init_constant_ | Constant initialization |
| nn_init_dirac_ | Dirac initialization |
| nn_init_eye_ | Eye initialization |
| nn_init_kaiming_normal_ | Kaiming normal initialization |
| nn_init_kaiming_uniform_ | Kaiming uniform initialization |
| nn_init_normal_ | Normal initialization |
| nn_init_ones_ | Ones initialization |
| nn_init_orthogonal_ | Orthogonal initialization |
| nn_init_sparse_ | Sparse initialization |
| nn_init_trunc_normal_ | Truncated normal initialization |
| nn_init_uniform_ | Uniform initialization |
| nn_init_xavier_normal_ | Xavier normal initialization |
| nn_init_xavier_uniform_ | Xavier uniform initialization |
| nn_init_zeros_ | Zeros initialization |
| nn_kl_div_loss | Kullback-Leibler divergence loss |
| nn_l1_loss | L1 loss |
| nn_layer_norm | Layer normalization |
| nn_leaky_relu | LeakyReLU module |
| nn_linear | Linear module |
| nn_log_sigmoid | LogSigmoid module |
| nn_log_softmax | LogSoftmax module |
| nn_lp_pool1d | Applies a 1D power-average pooling over an input signal composed of several input planes. |
| nn_lp_pool2d | Applies a 2D power-average pooling over an input signal composed of several input planes. |
| nn_lstm | Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence. |
| nn_margin_ranking_loss | Margin ranking loss |
| nn_max_pool1d | MaxPool1D module |
| nn_max_pool2d | MaxPool2D module |
| nn_max_pool3d | Applies a 3D max pooling over an input signal composed of several input planes. |
| nn_max_unpool1d | Computes a partial inverse of 'MaxPool1d'. |
| nn_max_unpool2d | Computes a partial inverse of 'MaxPool2d'. |
| nn_max_unpool3d | Computes a partial inverse of 'MaxPool3d'. |
| nn_module | Base class for all neural network modules. |
| nn_module_dict | Container that allows named values |
| nn_module_list | Holds submodules in a list. |
| nn_mse_loss | MSE loss |
| nn_multihead_attention | MultiHead attention |
| nn_multilabel_margin_loss | Multilabel margin loss |
| nn_multilabel_soft_margin_loss | Multi label soft margin loss |
| nn_multi_margin_loss | Multi margin loss |
| nn_nll_loss | NLL loss |
| nn_pairwise_distance | Pairwise distance |
| nn_parameter | Creates an 'nn_parameter' |
| nn_poisson_nll_loss | Poisson NLL loss |
| nn_prelu | PReLU module |
| nn_prune_head | Prune top layer(s) of a network |
| nn_relu | ReLU module |
| nn_relu6 | ReLU6 module |
| nn_rnn | RNN module |
| nn_rrelu | RReLU module |
| nn_selu | SELU module |
| nn_sequential | A sequential container |
| nn_sigmoid | Sigmoid module |
| nn_silu | Applies the Sigmoid Linear Unit (SiLU) function, element-wise. The SiLU function is also known as the swish function. |
| nn_smooth_l1_loss | Smooth L1 loss |
| nn_softmax | Softmax module |
| nn_softmax2d | Softmax2d module |
| nn_softmin | Softmin module |
| nn_softplus | Softplus module |
| nn_softshrink | Softshrink module |
| nn_softsign | Softsign module |
| nn_soft_margin_loss | Soft margin loss |
| nn_tanh | Tanh module |
| nn_tanhshrink | Tanhshrink module |
| nn_threshold | Threshold module |
| nn_triplet_margin_loss | Triplet margin loss |
| nn_triplet_margin_with_distance_loss | Triplet margin with distance loss |
| nn_unflatten | Unflattens a tensor dim, expanding it to a desired shape. For use with 'nn_sequential'. |
| nn_upsample | Upsample module |
| nn_utils_clip_grad_norm_ | Clips the gradient norm of an iterable of parameters. |
| nn_utils_clip_grad_value_ | Clips the gradients of an iterable of parameters at a specified value. |
| nn_utils_rnn_pack_padded_sequence | Packs a Tensor containing padded sequences of variable length. |
| nn_utils_rnn_pack_sequence | Packs a list of variable length Tensors |
| nn_utils_rnn_pad_packed_sequence | Pads a packed batch of variable length sequences. |
| nn_utils_rnn_pad_sequence | Pad a list of variable length Tensors with 'padding_value' |
| nn_utils_weight_norm | Applies weight normalization to a parameter in the given module |
| optimizer | Creates a custom optimizer |
| OptimizerIgnite | Abstract Base Class for LibTorch Optimizers |
| optimizer_ignite | Abstract Base Class for LibTorch Optimizers |
| optim_adadelta | Adadelta optimizer |
| optim_adagrad | Adagrad optimizer |
| optim_adam | Implements the Adam algorithm. |
| optim_adamw | Implements the AdamW algorithm. |
| optim_asgd | Averaged Stochastic Gradient Descent optimizer |
| optim_ignite_adagrad | LibTorch implementation of Adagrad |
| optim_ignite_adam | LibTorch implementation of Adam |
| optim_ignite_adamw | LibTorch implementation of AdamW |
| optim_ignite_rmsprop | LibTorch implementation of RMSprop |
| optim_ignite_sgd | LibTorch implementation of SGD |
| optim_lbfgs | LBFGS optimizer |
| optim_required | Dummy value indicating a required value. |
| optim_rmsprop | RMSprop optimizer |
| optim_rprop | Implements the resilient backpropagation algorithm. |
| optim_sgd | SGD optimizer |
| sampler | Creates a new Sampler |
| set_autocast | Autocast context manager |
| unset_autocast | Autocast context manager |
| with_autocast | Autocast context manager |
| with_detect_anomaly | Context manager that enables anomaly detection for the autograd engine. |
| with_device | Device contexts |
| with_enable_grad | Enable grad |
| with_no_grad | Temporarily modify gradient recording. |
| with_torch_manual_seed | Sets the seed for generating random numbers. |
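The short sketches below illustrate a handful of the entries above. They are minimal, illustrative uses of the documented functions, with made-up data and dimensions, not canonical recipes.

Gradients via 'autograd_grad()' and the tensor method '$backward()':

```r
library(torch)

x <- torch_tensor(c(2, 3), requires_grad = TRUE)
y <- torch_sum(x^2)

# autograd_grad() returns the gradients of outputs w.r.t. inputs
g <- autograd_grad(y, list(x))
g[[1]]  # dy/dx = 2 * x, i.e. tensor(4, 6)

# the same gradient can instead be accumulated into x$grad
y2 <- torch_sum(x^2)
y2$backward()
x$grad
```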
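A minimal in-memory pipeline with 'dataset()', 'dataloader()', 'dataloader_make_iter()' and 'dataloader_next()'; the dataset name and shapes here are arbitrary:

```r
library(torch)

# dataset() returns a constructor; instances implement .getitem and .length
toy_ds <- dataset(
  name = "toy_ds",
  initialize = function(n = 100) {
    self$x <- torch_randn(n, 3)
    self$y <- torch_randn(n, 1)
  },
  .getitem = function(i) list(x = self$x[i, ], y = self$y[i, ]),
  .length = function() self$x$size(1)
)

dl <- dataloader(toy_ds(), batch_size = 16, shuffle = TRUE)
it <- dataloader_make_iter(dl)
batch <- dataloader_next(it)  # list with $x (16 x 3) and $y (16 x 1)
```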
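Sampling and scoring with the 'distr_*' constructors, here 'distr_normal()':

```r
library(torch)

d <- distr_normal(loc = 0, scale = 1)
d$sample()                    # one draw from N(0, 1)
d$log_prob(torch_tensor(0))   # log density at 0
```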
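Tracing, saving and reloading a function with 'jit_trace()', 'jit_save()' and 'jit_load()':

```r
library(torch)

f <- function(x) torch_relu(x) * 2
tf <- jit_trace(f, torch_tensor(c(-1, 0, 1)))  # trace with an example input
tf(torch_tensor(c(-2, 2)))                     # behaves like f

path <- tempfile(fileext = ".pt")
jit_save(tf, path)
g <- jit_load(path)
```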
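A few 'linalg_*' routines, sketched on a small symmetric positive-definite system:

```r
library(torch)

A <- torch_randn(4, 4)
spd <- torch_matmul(A, A$t()) + torch_eye(4) * 4  # symmetric positive-definite

L <- linalg_cholesky(spd)              # spd = L %*% t(L)
b <- torch_randn(4)
x <- linalg_solve(spd, b)              # solves spd %*% x = b
linalg_norm(torch_matmul(spd, x) - b)  # residual, close to zero
```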
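The 'nnf_*' functions are the stateless, functional counterparts of the 'nn_*' modules; note that dimension arguments in the R package are 1-based:

```r
library(torch)

x <- torch_randn(2, 3)
p <- nnf_softmax(x, dim = 2)  # softmax across columns; each row sums to 1
torch_sum(p, dim = 2)
```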
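Defining a network with 'nn_module()', or equivalently with the 'nn_sequential()' container:

```r
library(torch)

net <- nn_module(
  "two_layer_net",
  initialize = function(d_in, d_hidden, d_out) {
    self$fc1 <- nn_linear(d_in, d_hidden)
    self$fc2 <- nn_linear(d_hidden, d_out)
  },
  forward = function(x) {
    self$fc2(nnf_relu(self$fc1(x)))
  }
)
model <- net(3, 8, 1)

# the same architecture as a sequential container
model2 <- nn_sequential(nn_linear(3, 8), nn_relu(), nn_linear(8, 1))
model(torch_randn(5, 3))
```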
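The 'nn_init_*_' initializers update tensors in place (hence the trailing underscore):

```r
library(torch)

w <- torch_empty(3, 5)
nn_init_xavier_uniform_(w)  # modifies w in place and returns it
nn_init_kaiming_normal_(torch_empty(3, 5), nonlinearity = "relu")
nn_init_constant_(torch_empty(3), 0.1)
```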
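A sketch of a training loop combining an 'optim_*' optimizer with an 'lr_*' scheduler:

```r
library(torch)

model <- nn_linear(3, 1)
opt <- optim_adam(model$parameters, lr = 0.01)
sched <- lr_step(opt, step_size = 10, gamma = 0.5)  # halve the lr every 10 epochs

x <- torch_randn(32, 3)
y <- torch_randn(32, 1)

for (epoch in 1:20) {
  opt$zero_grad()
  loss <- nnf_mse_loss(model(x), y)
  loss$backward()
  opt$step()
  sched$step()
}
```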
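Padding variable-length sequences with 'nn_utils_rnn_pad_sequence()':

```r
library(torch)

# three sequences of different lengths, two features per time step
seqs <- list(torch_randn(5, 2), torch_randn(3, 2), torch_randn(2, 2))
padded <- nn_utils_rnn_pad_sequence(seqs, batch_first = TRUE)
padded$shape  # 3 x 5 x 2; shorter sequences are zero-padded
```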
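Gradient recording can be disabled for a block with 'with_no_grad()', or until scope exit with 'local_no_grad()':

```r
library(torch)

x <- torch_tensor(1, requires_grad = TRUE)

y <- with_no_grad(x * 2)  # evaluated without recording the graph
y$requires_grad           # FALSE

f <- function(x) {
  local_no_grad()  # recording stays off until this function returns
  x * 2
}
```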