4 changes: 2 additions & 2 deletions docs/api/paddle/device/Overview_cn.rst
@@ -46,9 +46,9 @@ The paddle.device directory contains a cuda directory and an xpu directory; the cuda directory holds
" :ref:`is_available <cn_api_paddle_device_is_available>` ", "检查设备是否可用"
" :ref:`get_rng_state <cn_api_paddle_device_get_rng_state>` ", "获取随机数生成器状态"
" :ref:`set_rng_state <cn_api_paddle_device_set_rng_state>` ", "设置随机数生成器状态"
" :ref:`device <_cn_api_paddle_device_device>` ", "临时使用设备"
" :ref:`device <cn_api_paddle_device_device>` ", "临时使用设备"
" :ref:`get_device_name <cn_api_paddle_device_get_device_name>` ", "返回指定设备的名称"
" :ref:`manual_seed <_cn_api_paddle_device_manual_seed>` ", "设置当前设备的随机数种子"
" :ref:`manual_seed <cn_api_paddle_device_manual_seed>` ", "设置当前设备的随机数种子"
.. _cn_device_compile:

Compile-environment detection
2 changes: 1 addition & 1 deletion docs/api/paddle/gather_cn.rst
@@ -59,7 +59,7 @@ COPY-FROM: paddle.gather

.. py:function:: paddle.gather(input, dim, index, out=None)
- A PyTorch-compatible ``gather`` operation: gathers the entries of ``input`` along dimension ``dim`` at the positions given by ``index`` and concatenates them. The behavior matches ``cn_api_paddle_take_along_axis`` with ``broadcast=False``.
+ A PyTorch-compatible ``gather`` operation: gathers the entries of ``input`` along dimension ``dim`` at the positions given by ``index`` and concatenates them. The behavior matches :ref:`cn_api_paddle_take_along_axis` with ``broadcast=False``.

For a comparison of the two interfaces, see `【torch 参数更多】torch.gather`_.
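The gather-along-a-dimension semantics described in this hunk can be sketched in plain Python. This is a hypothetical illustration of the ``dim=1`` case only, not paddle's implementation:

```python
# Hypothetical sketch of gather along dim=1 (not paddle code):
# out[i][j] = x[i][index[i][j]], i.e. take_along_axis with broadcast=False.
def gather_dim1(x, index):
    return [[row[j] for j in idx_row] for row, idx_row in zip(x, index)]

x = [[1, 2], [3, 4]]
index = [[0, 0], [1, 0]]
print(gather_dim1(x, index))  # [[1, 1], [4, 3]]
```

Each output row has the shape of the corresponding ``index`` row, which is why the result is a concatenation of per-row selections.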

2 changes: 1 addition & 1 deletion docs/api/paddle/linalg/Overview_cn.rst
@@ -64,7 +64,7 @@ The paddle.linalg directory contains the linear-algebra APIs supported by the PaddlePaddle framework.
" :ref:`paddle.linalg.cholesky <cn_api_paddle_linalg_cholesky>` ", "计算一个实数对称正定矩阵的 Cholesky 分解"
" :ref:`paddle.linalg.cholesky_inverse <cn_api_paddle_linalg_cholesky_inverse>` ", "使用 Cholesky 因子 `U` 计算对称正定矩阵的逆矩阵"
" :ref:`paddle.linalg.svd <cn_api_paddle_linalg_svd>` ", "计算矩阵的奇异值分解"
" :ref:`paddle.linalg.svdvals <_cn_api_paddle_linalg_svdvals>` ", "计算矩阵的奇异值"
" :ref:`paddle.linalg.svdvals <cn_api_paddle_linalg_svdvals>` ", "计算矩阵的奇异值"
" :ref:`paddle.linalg.svd_lowrank <cn_api_paddle_linalg_svd_lowrank>` ", "对低秩矩阵进行奇异值分解"
" :ref:`paddle.linalg.pca_lowrank <cn_api_paddle_linalg_pca_lowrank>` ", "对矩阵进行线性主成分分析"
" :ref:`paddle.linalg.qr <cn_api_paddle_linalg_qr>` ", "计算矩阵的正交三角分解(也称 QR 分解)"
2 changes: 1 addition & 1 deletion docs/api/paddle/linalg/eigvals_cn.rst
@@ -8,7 +8,7 @@ eigvals


.. note::
- The backward pass of this API is not yet implemented; if your code needs to backpropagate through it, use ref:`cn_api_paddle_linalg_eig` instead.
+ The backward pass of this API is not yet implemented; if your code needs to backpropagate through it, use :ref:`cn_api_paddle_linalg_eig` instead.


Parameters
2 changes: 1 addition & 1 deletion docs/api/paddle/linalg/matrix_norm_cn.rst
@@ -8,7 +8,7 @@ matrix_norm
- Computes the matrix norm of the given Tensor. See :ref:`norm <_cn_api_paddle_linalg_norm>` for detailed usage.
+ Computes the matrix norm of the given Tensor. See :ref:`norm <cn_api_paddle_linalg_norm>` for detailed usage.


Parameters
2 changes: 1 addition & 1 deletion docs/api/paddle/linalg/vector_norm_cn.rst
@@ -8,7 +8,7 @@ vector_norm



- Computes the vector norm of the given Tensor. See :ref:`norm <_cn_api_paddle_linalg_norm>` for detailed usage.
+ Computes the vector norm of the given Tensor. See :ref:`norm <cn_api_paddle_linalg_norm>` for detailed usage.


Parameters
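As a side note, the p-order vector norm that ``vector_norm`` computes over a flattened input can be sketched in plain Python. This is a hypothetical illustration of the formula, not paddle's implementation:

```python
# Hypothetical sketch of a p-order vector norm over a flat list of values
# (not paddle code): ||x||_p = (sum_i |x_i|^p) ** (1/p).
def p_norm(values, p=2.0):
    return sum(abs(v) ** p for v in values) ** (1.0 / p)

print(p_norm([3.0, 4.0]))  # 5.0 (the Euclidean norm of the 3-4-5 triangle)
```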
4 changes: 2 additions & 2 deletions docs/api_guides/low_level/layers/sparse_update.rst
@@ -4,7 +4,7 @@
Sparse update
#############

- The `paddle.nn.functional.embedding <cn_api_paddle_nn_functional_embedding>` layer supports "sparse update" in both single-machine and distributed training: gradients are stored as a sparse tensor that keeps only the rows whose gradient is non-zero.
+ The :ref:`paddle.nn.functional.embedding <cn_api_paddle_nn_functional_embedding>` layer supports "sparse update" in both single-machine and distributed training: gradients are stored as a sparse tensor that keeps only the rows whose gradient is non-zero.
In distributed training, enabling sparse update for large embedding layers reduces the amount of communication data and speeds up training.

Inside paddle, embedding is implemented with lookup_table. The figure below illustrates embedding in the forward and backward passes:
@@ -14,4 +14,4 @@
.. image:: ../../../design/dist_train/src/lookup_table_training.png
:scale: 50 %

- For detailed API usage, see `paddle.nn.functional.embedding <cn_api_paddle_nn_functional_embedding>`
+ For detailed API usage, see :ref:`paddle.nn.functional.embedding <cn_api_paddle_nn_functional_embedding>`
4 changes: 2 additions & 2 deletions docs/api_guides/low_level/layers/sparse_update_en.rst
@@ -4,7 +4,7 @@
Sparse update
###############

- The `paddle.nn.functional.embedding <cn_api_paddle_nn_functional_embedding>` layer supports "sparse updates" in both single-node and distributed training, which means gradients are stored in a sparse tensor structure where only rows with non-zero gradients are saved.
+ The :ref:`paddle.nn.functional.embedding <cn_api_paddle_nn_functional_embedding>` layer supports "sparse updates" in both single-node and distributed training, which means gradients are stored in a sparse tensor structure where only rows with non-zero gradients are saved.
In distributed training, for larger embedding layers, sparse updates reduce the amount of communication data and speed up training.

In paddle, we use lookup_table to implement embedding. The figure below illustrates the process of embedding in the forward and backward calculations:
@@ -17,4 +17,4 @@ As shown in the figure: two rows in a Tensor are not 0. In the process of forwar
Example
--------------------------

- API reference `paddle.nn.functional.embedding <cn_api_paddle_nn_functional_embedding>` .
+ API reference :ref:`paddle.nn.functional.embedding <cn_api_paddle_nn_functional_embedding>` .