2 changes: 1 addition & 1 deletion docs/explanations/dapi.md
@@ -31,4 +31,4 @@ retrieval.

## Endpoint Overview

DAPI currently provides 2 types of endpoints: [JSON-RPC](https://www.jsonrpc.org/) and [gRPC](https://grpc.io/docs/guides/). The JSON-RPC endpoints expose some layer 1 information while the gRPC endpoints support layer 2 as well as streaming of events related to blocks and transactions/transitions. For a list of all endpoints and usage details, please see the [DAPI endpoint reference section](../reference/dapi-endpoints.md).
DAPI currently provides 2 types of endpoints: [JSON-RPC](https://www.jsonrpc.org/) and [gRPC](https://grpc.io/docs/guides/). The JSON-RPC endpoints expose some layer 1 information while the gRPC endpoints support layer 2 as well as streaming of events related to blocks, transactions/transitions, and masternode-list updates. For a list of all endpoints and usage details, please see the [DAPI endpoint reference section](../reference/dapi-endpoints.md).
2 changes: 1 addition & 1 deletion docs/explanations/drive-platform-chain.md
@@ -22,7 +22,7 @@ In order to support Dash Platform's performance requirements, the platform chain
- Hosted exclusively on masternodes
- Uses a [practical Byzantine Fault Tolerance (pBFT)](../reference/glossary.md#practical-byzantine-fault-tolerance-pbft) consensus algorithm
- Has a deterministic fee structure
- Provides fast (< 10 seconds) and absolute block finality (no reorgs)
- Provides fast (~5 second target block spacing) and absolute block finality (no reorgs)

### Blocks and Transitions

12 changes: 11 additions & 1 deletion docs/explanations/drive-platform-state.md
@@ -6,7 +6,7 @@

Platform state represents the current state of all the data stored on the platform. You can think of this as one large database, where each application has its own database (Application State) defined by the Data Contract associated with the application. Therefore, the platform state can be thought of as a global view of all Dash Platform data, whereas the application state is a local view of one application's data.

The Platform Chain is processed by a state machine to reach consensus on how to build the state and what it should look like. The last block of the Platform Chain contains the root of the tree structure built from all documents in the platform state. By checking the root of the state tree stored in the block, the node can confirm that it has the correct state.
The Platform Chain is processed by a state machine to reach consensus on how to build the state and what it should look like. Each committed block's header contains an AppHash that commits to the root of the state tree after the block's execution. By checking the AppHash stored in the block, a node can confirm that it has the correct state.

```{eval-rst}
.. figure:: ../../img/platform-state.svg
@@ -17,3 +17,13 @@ The Platform Chain is processed by a state machine to reach consensus on how to

Platform State Propagation
```

## GroveDB and AppHash

Platform state is stored in [GroveDB](https://github.com/dashpay/grovedb), a hierarchical authenticated data structure maintained by Drive. Because GroveDB is authenticated, each operation that mutates state also updates an aggregate root hash that commits to the entire state tree.

Under the [Tenderdash](../explanations/platform-consensus.md) same-block execution model, state transitions in a proposed block are validated and applied during block processing. The resulting state-tree root is then embedded in the committed block's header as the `AppHash`, meaning the committed block itself commits to the post-execution state.
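A toy Python sketch of this property (not GroveDB's actual algorithm): every mutation of an authenticated store changes the aggregate root hash, so the root commits to the entire contents.

```python
import hashlib

# Toy authenticated key-value store. Real GroveDB is a hierarchical
# Merkle structure; this sketch only illustrates the commitment property.
class AuthenticatedStore:
    def __init__(self):
        self.data = {}

    def put(self, key: bytes, value: bytes) -> None:
        self.data[key] = value

    def root_hash(self) -> bytes:
        # Hash entries in sorted key order so the root is deterministic.
        h = hashlib.blake2b(digest_size=32)
        for key in sorted(self.data):
            h.update(hashlib.blake2b(key + self.data[key], digest_size=32).digest())
        return h.digest()

store = AuthenticatedStore()
store.put(b"identity/1", b"balance=1000")
root_before = store.root_hash()
store.put(b"identity/2", b"balance=500")
root_after = store.root_hash()
assert root_before != root_after  # every state change moves the root
```

Because the root moves with every mutation, embedding it in a block header (the `AppHash`) is enough to commit to the whole post-execution state.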

## Proofs

Because state is stored in GroveDB, [DAPI](../explanations/dapi.md) queries can return GroveDB proofs alongside the requested data. Clients verify these proofs against the block header's `AppHash` (which itself is signed by the validator quorum), allowing light clients to trustlessly confirm the returned data without re-executing the chain.
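A minimal sketch of the verification idea, using a toy two-leaf Merkle tree rather than GroveDB's real proof format; `verify_inclusion` and the proof layout are illustrative assumptions.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Toy inclusion proof: the client recomputes the root from the returned
# value plus sibling hashes and compares it to the AppHash taken from a
# validator-signed block header.
def verify_inclusion(value: bytes, siblings: list, app_hash: bytes) -> bool:
    node = h(value)
    for sibling, value_on_left in siblings:
        node = h(node + sibling) if value_on_left else h(sibling + node)
    return node == app_hash

# "Server" side: build a two-leaf tree and hand out a proof for leaf A.
leaf_a, leaf_b = h(b"document-a"), h(b"document-b")
app_hash = h(leaf_a + leaf_b)          # root committed in the block header
proof = [(leaf_b, True)]               # sibling hash + position info

assert verify_inclusion(b"document-a", proof, app_hash)
assert not verify_inclusion(b"tampered", proof, app_hash)
```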
3 changes: 2 additions & 1 deletion docs/explanations/drive.md
@@ -18,13 +18,14 @@ There are a number of components working together to facilitate Drive's overall
- Platform state machine (validates data against the [Dash platform protocol](../explanations/platform-protocol.md); applies data to state and storage)
- [Platform state](../explanations/drive-platform-state.md) (represents current data)
- Storage (record of state transitions)
- GroveDB (authenticated hierarchical storage backend enabling cryptographic proofs returned via [DAPI](../explanations/dapi.md))

### Data Update Process

The process of adding or updating data in Drive consists of several steps to ensure data is validated, propagated, and stored properly. This description provides a simplified overview of the process:

1. [State transitions](../explanations/platform-protocol-state-transition.md) are submitted to the platform via [DAPI](../explanations/dapi.md)
2. DAPI sends the state transitions to the platform chain where they are validated, ordered, and committed to a block
2. DAPI broadcasts state transitions to Tenderdash, which validates them and includes them in block proposals
3. Valid state transitions are applied to the platform state
4. The platform chain propagates a block containing the state transitions
5. Receiving nodes update Drive data based on the valid state transitions in the block
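The five steps above can be sketched as a toy pipeline; `Tenderdash`, `Drive`, and their methods here are hypothetical stand-ins, not actual Dash Platform APIs.

```python
# Illustrative model of the data update process (names are hypothetical).
class Tenderdash:
    def __init__(self):
        self.mempool = []

    def broadcast(self, st):               # steps 1-2: submit and broadcast
        if self.validate(st):
            self.mempool.append(st)

    @staticmethod
    def validate(st) -> bool:
        return bool(st.get("signature"))   # stand-in for real validation

    def propose_block(self):               # order and commit into a block
        block, self.mempool = list(self.mempool), []
        return block

class Drive:
    def __init__(self):
        self.state = {}

    def apply_block(self, block):          # steps 3-5: apply to platform state
        for st in block:
            self.state[st["id"]] = st["data"]

tenderdash, drive = Tenderdash(), Drive()
tenderdash.broadcast({"id": "st1", "data": "doc", "signature": "aa"})
block = tenderdash.propose_block()
drive.apply_block(block)
assert drive.state["st1"] == "doc"
```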
2 changes: 2 additions & 0 deletions docs/explanations/fees.md
@@ -34,6 +34,8 @@ The current cost schedule is outlined in the table below:
| Load from memory | 10 / byte |
| Blake3 hash function | 100 base + 300 / 64-byte block |

Processing fees vary by operation. The values shown above are representative base costs; the total processing fee for a state transition is the sum of the individual per-operation costs incurred while validating and applying it. See the [protocol constants reference](../protocol-ref/protocol-constants.md) for the full cost schedule.
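Using the two per-unit entries from the table above, a sketch of how per-operation costs compose into a total processing fee (rounding partial 64-byte blocks up is an assumption in this example):

```python
import math

# Two entries from the cost schedule above, in credits:
LOAD_PER_BYTE = 10                         # "Load from memory: 10 / byte"
BLAKE3_BASE, BLAKE3_PER_BLOCK = 100, 300   # "100 base + 300 / 64-byte block"

def load_cost(n_bytes: int) -> int:
    return LOAD_PER_BYTE * n_bytes

def blake3_cost(n_bytes: int) -> int:
    # Assumption: partial blocks are rounded up to a full 64-byte block.
    blocks = max(1, math.ceil(n_bytes / 64))
    return BLAKE3_BASE + BLAKE3_PER_BLOCK * blocks

# Total processing fee = sum of the per-operation costs incurred.
ops = [load_cost(256), blake3_cost(256)]
total_fee = sum(ops)  # 2560 + 1300 = 3860 credits
```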

:::{note}
Refer to the [Identity explanation](../explanations/identity.md) section for information regarding how credits are created.
:::
10 changes: 5 additions & 5 deletions docs/explanations/identity.md
@@ -16,7 +16,7 @@ The [Identities Dash Improvement Proposal (DIP)](https://github.com/dashpay/dips

## Identity Management

In order to [create an identity](#identity-create-process), a user pays the network to store their public key(s) on the platform chain. Since new users may not have existing Dash funds, an invitation process will allow users to create an identity despite lacking their own funds. The invitation process will effectively separate the funding and registration steps that are required for any new identity to be created.
In order to [create an identity](#identity-create-process), a user pays the network to store their public key(s) on the platform chain. This is done by locking Dash on the Core chain in an asset lock transaction and then submitting an identity create state transition that references a proof of that lock.

Once an identity is created, its credit balance is used to pay for activity (e.g. use of applications). The [topup process](#identity-balance-topup-process) provides a way to add additional funds to the balance when necessary.

@@ -26,11 +26,11 @@ Once an identity is created, its credit balance is used to pay for activity (e.g
On Testnet, a [test Dash faucet](https://faucet.testnet.networks.dash.org/) is available. It dispenses small amounts to enable all users to directly acquire the funds necessary to create an identity and username.
:::

First, a sponsor (which could be a business, another person, or even the same user who is creating the identity) spends Dash in a transaction to create an invitation. The transaction contains one or more outputs which lock some Dash funds to establish credits within Dash platform.
First, the user creates an asset lock transaction on the Core chain with one or more outputs that lock Dash funds for use on Platform. An asset lock proof is then obtained for that transaction - either an InstantSend lock proof (for fast confirmation) or a ChainLock-based proof once the transaction is included in a ChainLocked block.

After the transaction is broadcast and confirmed, the sponsor sends information about the invitation to the new user. This may be done as a hyperlink that the core wallet understands, or as a QR code that a mobile wallet can scan. Once the user has the transaction data from the sponsor, they can use it to fund an [identity create state transition](https://github.com/dashpay/dips/blob/master/dip-0011.md#identity-create-transition) within Dash platform.
The user then submits an [identity create state transition](https://github.com/dashpay/dips/blob/master/dip-0011.md#identity-create-transition) referencing the asset lock proof and the public keys to register for the new identity. The locked value (minus fees) becomes the new identity's initial credit balance.
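The flow can be sketched with hypothetical data shapes (illustrative only, not the actual wire format): lock Dash on Core, obtain a proof of the lock, then reference that proof from the identity create state transition.

```python
from dataclasses import dataclass, field

# Hypothetical shapes for the identity create flow described above.
@dataclass
class AssetLockProof:
    kind: str          # "instantSend" or "chainLock" (per the text above)
    tx_id: str         # the Core-chain asset lock transaction
    output_index: int  # which locked output funds the identity

@dataclass
class IdentityCreateTransition:
    asset_lock_proof: AssetLockProof
    public_keys: list = field(default_factory=list)

proof = AssetLockProof(kind="instantSend", tx_id="ab" * 32, output_index=0)
st = IdentityCreateTransition(asset_lock_proof=proof, public_keys=["key1"])
assert st.asset_lock_proof.kind in ("instantSend", "chainLock")
```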

Users who already have Dash funds can act as their own sponsor if they wish, using the same steps listed here.
Application-layer flows where a third party funds an identity on behalf of another user are possible by having that third party create the asset lock transaction and share the resulting proof, but this is a client-side convention rather than a protocol-level invitation mechanism.

### Identity Balance Topup Process

@@ -64,6 +64,6 @@ Note: the payout key is associated with the masternode owner identity, so both t

## Credits

DPP v0.13 introduced the initial implementation of credits. As mentioned above, credits provide the mechanism for paying fees that cover the cost of platform usage. Once a user locks Dash on the core blockchain and proves ownership of the locked value in an identity create or topup transaction, their credit balance increases by that amount. As they perform platform actions, these credits are deducted to pay the associated fees.
Credits provide the mechanism for paying fees that cover the cost of platform usage. Once a user locks Dash on the core blockchain and proves ownership of the locked value in an identity create or topup state transition, their credit balance increases by that amount. As they perform platform actions, these credits are deducted to pay the associated fees.

Credits can be converted back to Dash using the identity credit withdrawal state transition, subject to a daily network-wide limit.
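A sketch of the conversion described here; the rate of 1,000 credits per duff is an assumption in this example, so consult the protocol constants reference for the authoritative value.

```python
# Assumed denominations for illustration only.
DUFFS_PER_DASH = 100_000_000   # standard Dash base unit
CREDITS_PER_DUFF = 1_000       # assumption in this sketch

def dash_to_credits(dash: float) -> int:
    return int(dash * DUFFS_PER_DASH) * CREDITS_PER_DUFF

def credits_to_dash(credits: int) -> float:
    return credits / (CREDITS_PER_DUFF * DUFFS_PER_DASH)

balance = dash_to_credits(0.5)  # credits from locking 0.5 Dash
balance -= 12_000               # credits deducted for platform fees
assert credits_to_dash(balance) < 0.5
```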
4 changes: 2 additions & 2 deletions docs/explanations/nft.md
@@ -39,7 +39,7 @@ NFTs can be directly transferred or traded without the need for a marketplace:

To preserve the authenticity of NFTs, Dash Platform includes creation restriction options. This ensures that only authorized entities can create certain types of NFTs. For example, in the case of land ownership NFTs, a designated authority may be the only one that can issue tokens. Restriction options are:

* **Owner Only**: Only the contract owner can create NFTs (**_Note: this is the only option implemented for the initial release_**)
* **Owner Only**: Only the contract owner can create NFTs
* **No Creation Allowed**: NFT creation is disabled for this contract
* **No Restrictions**: Anyone can create NFTs for the contract

@@ -77,7 +77,7 @@ Once the data contract design is completed, the contract can be registered on th

### Minting NFTs

Tokens are minted by creating new documents under the data contract. Each token is an instance of one of the document types defined in the contract.
NFTs are minted by creating new documents under the data contract. Each NFT is an instance of one of the document types defined in the contract.

```{eval-rst}
.. _explanations-nft-trade:
6 changes: 3 additions & 3 deletions docs/explanations/platform-consensus.md
@@ -31,7 +31,7 @@ Tendermint has been mainly designed to enable efficient verification and authent

:::{note}

- Block execution only occurs after a block is committed. So, cryptographic proofs for the latest state are only available in the subsequent block.
- In classic Tendermint, block execution occurs after commit, so state proofs appear in the subsequent block. Tenderdash (used by Dash Platform) performs same-block execution, so the committed block's AppHash already commits to the post-execution state.
- Information like the transaction results and the validator set is never directly included in the block - only their Merkle roots are.
- Verification of a block requires a separate data structure to store this information. We call this the “State.”
- Block verification also requires access to the previous block.
@@ -41,7 +41,7 @@ Additional information about Tendermint is available in the <a href="https://doc

### Tendermint Limitations

While Tendermint provided a great starting point, implementing the classic version of the algorithm would have required us to start from scratch. For example, Tendermint validators use [EdDSA](https://en.wikipedia.org/wiki/EdDSA) cryptographic keys to sign votes during the consensus process.
While Tendermint provided a great starting point, implementing the classic version of the algorithm would have required us to start from scratch. For example, Tendermint validators use [Ed25519](https://en.wikipedia.org/wiki/EdDSA#Ed25519) cryptographic keys to sign votes during the consensus process.

However, Dash already has a well-established network of Masternodes that use BLS keys and a [BLS threshold signing mechanism](https://blog.dash.org/secret-sharing-and-threshold-signatures-with-bls-954d1587b5f) to produce a single signature that mobile wallets and other light clients can easily verify. In addition, subsets of masternodes, called [Long-living Masternode Quorums (LLMQ)](https://github.com/dashpay/dips/blob/master/dip-0006.md), can perform BLS threshold signing on arbitrary messages.

@@ -65,7 +65,7 @@ This allows Dash Platform to leverage the best of both worlds – the speed and

Rather than having a static validator set, Tenderdash periodically changes to a new set of validator nodes. These validator sets are a subset of masternodes that belong to the LLMQs.

The validator set is assigned to a new masternode quorum every 15 blocks (~2 mins). To determine the next quorum, the BLS threshold signature of the previous block is used as a [verifiable random function](https://en.wikipedia.org/wiki/Verifiable_random_function) to choose one of the available quorums.
The validator set is assigned to a currently active masternode quorum. Rotation to a new quorum happens when the current quorum completes a proposer cycle (proposer duty reaches the last member) or when the current quorum is no longer in the active set.
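The rotation rule can be modeled with a small illustrative function (the real quorum selection is protocol-defined; the names and data shapes here are hypothetical):

```python
# Toy model: stay on the current quorum until the proposer index reaches
# the last member (cycle complete) or the quorum leaves the active set.
def next_validator_quorum(current, proposer_index, active_quorums):
    cycle_done = proposer_index >= len(current["members"]) - 1
    inactive = current["hash"] not in {q["hash"] for q in active_quorums}
    if cycle_done or inactive:
        # Pick a different active quorum (real selection is protocol-defined).
        candidates = [q for q in active_quorums if q["hash"] != current["hash"]]
        return candidates[0] if candidates else current
    return current

q1 = {"hash": "q1", "members": ["a", "b", "c"]}
q2 = {"hash": "q2", "members": ["d", "e", "f"]}
assert next_validator_quorum(q1, 1, [q1, q2]) is q1  # mid-cycle: keep quorum
assert next_validator_quorum(q1, 2, [q1, q2]) is q2  # cycle done: rotate
assert next_validator_quorum(q1, 0, [q2]) is q2      # quorum inactive: rotate
```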

There are many advantages to adopting this dynamic rotation approach:

19 changes: 12 additions & 7 deletions docs/explanations/platform-protocol-data-contract.md
@@ -31,6 +31,9 @@ Each data contract must define several fields. When using the [reference impleme
* The platform protocol schema it uses
* A contract ID (generated from a hash of the data contract's owner identity plus some entropy)
* One or more [documents](../explanations/platform-protocol-document.md)
* Optional [tokens](../explanations/tokens.md) with their own configuration, distribution, and authorization rules
* Optional groups (sets of identities with assigned power) used to jointly authorize privileged contract actions such as token minting, burning, or configuration changes
* Optional keywords used to surface the contract through discovery features, plus an optional contract description

For a practical example, see the [DashPay contract](#example-contract).
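A hypothetical minimal contract covering the fields listed above, shown as a Python dict; the field names here are simplified assumptions rather than the exact schema.

```python
# Illustrative sketch of a data contract's shape (field names simplified).
contract = {
    "ownerId": "<owner identity id>",
    "id": "<hash(ownerId + entropy)>",
    "documentSchemas": {
        "note": {
            "type": "object",
            "properties": {"message": {"type": "string"}},
            "additionalProperties": False,
        }
    },
    # Optional extras described above:
    "tokens": {},                                 # token configuration
    "groups": {},                                 # identities with assigned power
    "keywords": ["example"],                      # discovery keywords
    "description": "A minimal example contract",
}
assert "note" in contract["documentSchemas"]
```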

@@ -52,17 +55,19 @@ The drawing below illustrates the steps an application developer follows to comp

### Updates

#### Contract revision history
Existing data contracts can be updated by their owner in backwards-compatible ways. Updates are applied by submitting a data contract update state transition and are validated to preserve compatibility with previously stored documents.

Dash Platform v0.25 added optional contract revision history storage. Contracts using this feature maintain a record of contract revisions which can be retrieved and verified as needed.
Permitted changes include:

#### Identity key binding
* Adding new document types
* Adding new optional properties to existing document types
* Adding non-unique indices for newly added properties
* Updating token configuration where the contract's rules authorize changes (for example via the configured main control group)
* Updating contract keywords and description

Dash Platform v0.25 added key access rules that enable adding an encryption or decryption identity key that can only be used for the specific contract (or document) designated when the key is added. This provides a more granular and secure approach to key management.
Restricted changes include modifications that would break existing stored documents - for example, removing or renaming existing properties, changing their types, or altering existing unique indices.
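A sketch of a backwards-compatibility check following these rules; the property representation is simplified and illustrative.

```python
# Toy check: an update is compatible if every existing document type and
# every existing property survives unchanged; additions are allowed.
def is_compatible_update(old_types: dict, new_types: dict) -> bool:
    for name, old_schema in old_types.items():
        new_schema = new_types.get(name)
        if new_schema is None:
            return False  # removing a document type breaks stored documents
        for prop, spec in old_schema.items():
            if new_schema.get(prop) != spec:
                return False  # removing/renaming/retyping a property breaks docs
    return True  # new document types and new optional properties are fine

old = {"profile": {"name": "string"}}
assert is_compatible_update(old, {"profile": {"name": "string", "bio": "string"}})
assert not is_compatible_update(old, {"profile": {"name": "integer"}})
assert not is_compatible_update(old, {})
```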

#### Contract updates

Dash Platform v0.22 added the ability to update existing data contracts in certain backwards-compatible ways. This includes adding new documents, adding new optional properties to existing documents, and adding non-unique indices for properties added in the update.
Optional contract revision history storage allows contracts to retain a record of their revisions that can be retrieved and verified. Identity key access rules also allow an encryption or decryption key to be bound to a specific contract or document type for more granular key management.

:::{note}
For more detailed information, see the [Platform Protocol Reference - Data Contract](../protocol-ref/data-contract.md) page.
13 changes: 13 additions & 0 deletions docs/explanations/platform-protocol-data-trigger.md
@@ -30,4 +30,17 @@ As an example, DPP contains several [data triggers for DPNS](https://github.com/
| DPNS | `domain` | [`PURCHASE`](https://github.com/dashpay/platform/blob/master/packages/rs-drive-abci/src/execution/validation/state_transition/state_transitions/batch/data_triggers/triggers/reject/v0/mod.rs) | Prevents purchase of any DPNS document type |
| DPNS | `domain` | [`UPDATE_PRICE`](https://github.com/dashpay/platform/blob/master/packages/rs-drive-abci/src/execution/validation/state_transition/state_transitions/batch/data_triggers/triggers/reject/v0/mod.rs) | Prevents price updates on any DPNS document type |

:::{note}
The `REPLACE`, `DELETE`, `TRANSFER`, `PURCHASE`, and `UPDATE_PRICE` rows for DPNS all link to the same shared `reject` trigger, which DPNS reuses to disallow those actions on `domain` documents.
:::

In addition to DPNS, DPP ships data triggers for a small set of other system contracts:

| Data Contract | Document | Action(s) | Trigger Description |
| - | - | - | - |
| DashPay | `contactRequest` | [`CREATE`](https://github.com/dashpay/platform/tree/master/packages/rs-drive-abci/src/execution/validation/state_transition/state_transitions/batch/data_triggers/triggers/dashpay) | Enforces DashPay-specific rules on outgoing contact requests |
| Withdrawals | `withdrawal` | [`CREATE`/`REPLACE`/`DELETE`](https://github.com/dashpay/platform/tree/master/packages/rs-drive-abci/src/execution/validation/state_transition/state_transitions/batch/data_triggers/triggers/withdrawals) | Enforces withdrawal status transitions and prevents direct external mutation of withdrawal documents |
| Feature flags | (various) | [Protocol-version updates](https://github.com/dashpay/platform/tree/master/packages/rs-drive-abci/src/execution/validation/state_transition/state_transitions/batch/data_triggers/triggers/feature_flags) | Restricts feature flag changes to the authorized feature-flag identity |

When document state transitions are received, DPP checks if there is a trigger associated with the document type and action. If a trigger is found, DPP executes the trigger logic. Successful execution of the trigger logic is necessary for the document to be accepted and applied to the [platform state](../explanations/drive-platform-state.md).
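Trigger lookup and execution can be modeled as a dispatch table keyed by contract, document type, and action (the names and shapes here are hypothetical, not DPP's actual implementation):

```python
# Illustrative trigger dispatch. A trigger that raises rejects the document.
def reject_trigger(document, context):
    raise ValueError("action not allowed on this document type")

TRIGGERS = {
    ("dpns", "domain", "DELETE"): reject_trigger,
    ("dpns", "domain", "REPLACE"): reject_trigger,
}

def execute_triggers(contract, doc_type, action, document, context=None):
    trigger = TRIGGERS.get((contract, doc_type, action))
    if trigger is not None:
        trigger(document, context)  # must succeed for the document to apply
    return True

assert execute_triggers("dpns", "domain", "CREATE", {})  # no trigger: accepted
try:
    execute_triggers("dpns", "domain", "DELETE", {})
    rejected = False
except ValueError:
    rejected = True
assert rejected  # reject trigger blocks the action
```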