2 changes: 1 addition & 1 deletion docs/declarative-pipelines-programming-guide.md
@@ -40,7 +40,7 @@ The key advantage of SDP is its declarative approach - you define what tables sh
A quick way to install SDP is with pip:

```
-pip install pyspark[pipelines]
+pip install "pyspark[pipelines]"
```

See the [downloads page](//spark.apache.org/downloads.html) for more installation options.
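The quoting change in this diff matters because the argument passes through the shell before pip sees it: zsh aborts with "no matches found" on an unquoted `pyspark[pipelines]`, and even bash expands the brackets as a character-class glob when a matching filename exists. A minimal sketch of that failure mode (the `pysparks` filename is invented purely for illustration):

```shell
cd "$(mktemp -d)"          # empty scratch directory
touch pysparks             # filename that matches the glob pyspark[sql]

# Unquoted, bash expands pyspark[sql] against files in the current
# directory, so pip would receive "pysparks" instead of the extra.
echo pyspark[sql]          # prints: pysparks

# Quoting suppresses glob expansion; pip sees the literal requirement.
echo "pyspark[sql]"        # prints: pyspark[sql]
```

The same reasoning applies to every `pip install "pyspark[...]"` change in this pull request.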
6 changes: 3 additions & 3 deletions python/docs/source/getting_started/install.rst
@@ -47,11 +47,11 @@ If you want to install extra dependencies for a specific component, you can inst
.. code-block:: bash

 # Spark SQL
-pip install pyspark[sql]
+pip install "pyspark[sql]"
 # pandas API on Spark
-pip install pyspark[pandas_on_spark] plotly  # to plot your data, you can install plotly together.
+pip install "pyspark[pandas_on_spark]" plotly  # to plot your data, you can install plotly together.
 # Spark Connect
-pip install pyspark[connect]
+pip install "pyspark[connect]"


See :ref:`optional-dependencies` for more detail about extra dependencies.
2 changes: 1 addition & 1 deletion python/docs/source/tutorial/sql/arrow_pandas.rst
@@ -34,7 +34,7 @@ Ensure PyArrow Installed
To use Apache Arrow in PySpark, `the recommended version of PyArrow <arrow_pandas.rst#recommended-pandas-and-pyarrow-versions>`_
should be installed.
If you install PySpark using pip, then PyArrow can be brought in as an extra dependency of the
-SQL module with the command ``pip install pyspark[sql]``. Otherwise, you must ensure that PyArrow
+SQL module with the command ``pip install "pyspark[sql]"``. Otherwise, you must ensure that PyArrow
is installed and available on all cluster nodes.
You can install it using pip or conda from the conda-forge channel. See PyArrow
`installation <https://arrow.apache.org/docs/python/install.html>`_ for details.