Welcome to the File System Interface (Layer 1) documentation for Scalable Web3 Storage!
The File System Interface is a high-level abstraction over Layer 0's raw blob storage, allowing users to work with familiar concepts like drives, directories, and files without worrying about the underlying decentralized infrastructure.
Think of it as:
- Dropbox/Google Drive but decentralized
- IPFS but with guaranteed storage and accountability
- Traditional file system but on blockchain
| Document | Audience | Description |
|---|---|---|
| FILE_SYSTEM_INTERFACE.md | Everyone | Architecture overview, capabilities, and use cases |
| ARCHITECTURE.md | Developers | Deep dive: encoding, security, encryption, blockchain details |
| USER_GUIDE.md | End Users | Complete guide for using the file system |
| ADMIN_GUIDE.md | Administrators | System management and monitoring |
| API_REFERENCE.md | Developers | Complete API documentation |
| EXAMPLE_WALKTHROUGH.md | Developers | Step-by-step walkthrough of basic_usage.rs example |
Start here: User Guide
Quick Example:
```rust
// 1. Create a drive (10 GB, 500 blocks)
let drive_id = fs_client.create_drive(
    Some("My Documents"),
    10_000_000_000,
    500,
    1_000_000_000_000,
    None, // Auto-select providers
    None, // Default commit strategy
).await?;

// 2. Upload files
fs_client.upload_file(drive_id, "/report.pdf", data, bucket_id).await?;

// 3. Download files
let data = fs_client.download_file(drive_id, "/report.pdf").await?;
```

Start here: Admin Guide
Key Responsibilities:
- Monitor provider health and capacity
- Set system policies and defaults
- Handle provider failures
- Track system metrics
Start here: API Reference
Available APIs:
- On-Chain Extrinsics: Drive creation, updates, deletion
- Client SDK: File/directory operations
- Primitives: Shared types and utilities
A drive is your storage space. Each drive:
- Has a unique ID
- Is backed by a Layer 0 bucket
- Contains a hierarchical directory structure
- Supports versioning through immutable snapshots
Files are organized hierarchically using content-addressed nodes:
```text
Root (CID: 0xabc...)
├── documents/
│   ├── report.pdf
│   └── notes.txt
└── images/
    └── photo.jpg
```
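Content addressing means a node's identifier is derived from its contents: changing any file changes its ID, and therefore the ID of every ancestor directory up to the root. A toy sketch of that propagation (using std's non-cryptographic hasher purely for illustration; real CIDs use cryptographic hashes):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy content addressing: a node's ID depends on its name and its
// children's IDs, so any edit ripples up to a new root ID.
// (Illustrative only -- real CIDs use cryptographic hashes.)
fn node_id(name: &str, children: &[u64]) -> u64 {
    let mut h = DefaultHasher::new();
    name.hash(&mut h);
    children.hash(&mut h);
    h.finish()
}
```

Because the root ID commits to the entire tree, saving a single root CID is enough to pin a full snapshot, which is the basis of the drive versioning model.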
Control when changes are saved:
- Immediate: Every change commits (real-time, expensive)
- Batched: Commits every N blocks (balanced, default: 100)
- Manual: User controls commits (efficient, batch operations)
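The three strategies above boil down to a small decision rule. A hypothetical model (not the SDK's actual `CommitStrategy` type) of when a checkpoint gets written:

```rust
// Hypothetical model of the three commit strategies described above.
#[derive(Clone, Copy)]
enum CommitStrategy {
    Immediate,
    Batched { every_n_blocks: u64 },
    Manual,
}

impl CommitStrategy {
    /// Should a checkpoint be written, given how long since the last one?
    fn should_commit(&self, blocks_since_last_commit: u64) -> bool {
        match self {
            CommitStrategy::Immediate => true, // every change commits
            CommitStrategy::Batched { every_n_blocks } => {
                blocks_since_last_commit >= *every_n_blocks
            }
            CommitStrategy::Manual => false, // user calls commit explicitly
        }
    }
}
```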
Automatic redundancy based on storage duration:
- Short-term (≤1000 blocks): 1 provider
- Long-term (>1000 blocks): 3 providers (1 primary + 2 replicas)
- Custom: User-specified count
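The tiers above can be sketched as a simple defaulting rule — a hypothetical helper, not the pallet's actual logic:

```rust
// Sketch of the default replication rule described above:
// a user-specified count wins; otherwise short-term storage gets
// 1 provider and long-term gets 3 (1 primary + 2 replicas).
fn default_provider_count(duration_blocks: u64, custom: Option<u32>) -> u32 {
    match custom {
        Some(n) => n,
        None if duration_blocks <= 1_000 => 1,
        None => 3,
    }
}
```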
Without the File System Interface, users must perform 10+ manual steps:
- Create a bucket
- Find storage providers
- Request primary agreement
- Request replica agreements
- Wait for acceptances
- Upload chunks manually
- Manage Merkle-DAG
- Track all CIDs
- Handle failures manually
- Distribute payments
With the File System Interface, users perform just 2 steps:
- Create drive → System handles infrastructure
- Upload file → System handles everything else
Result: roughly 80% fewer manual steps (10+ reduced to 2)
✅ Drive Management
- Create drives with automatic setup
- List owned drives
- Rename/delete drives
✅ File Operations
- Upload files (auto-chunking)
- Download files (auto-reconstruction)
- Delete files
✅ Directory Operations
- Create directories
- Navigate directory tree
- List contents
✅ Versioning
- Access historical snapshots
- Roll back to previous versions
- Complete audit trail
✅ Configuration
- Customize storage capacity
- Set storage duration
- Choose replication level
- Configure checkpoint frequency
✅ System Monitoring
- View all drives
- Track storage usage
- Monitor provider health
- Audit operations
✅ Policy Management
- Set default provider counts
- Configure checkpoint strategies
- Define storage requirements
- Set pricing policies
✅ Provider Management
- Register providers
- Update provider settings
- Monitor performance
- Handle failures
✅ Dispute Resolution
- Monitor challenges
- Verify commitments
- Process slashing
- Replace failed providers
```rust
// 10 GB drive with auto-defaults
let drive_id = fs_client.create_drive(
    Some("My Files"),
    10_000_000_000,
    500,
    1_000_000_000_000,
    None, None,
).await?;
```

```rust
// 100 GB with 5 providers for maximum redundancy
let drive_id = fs_client.create_drive(
    Some("Archive"),
    100_000_000_000,
    10_000,
    10_000_000_000_000,
    Some(5), // High redundancy
    None,
).await?;
```

```rust
// Immediate commits for real-time updates
let drive_id = fs_client.create_drive(
    Some("Team Project"),
    5_000_000_000,
    1_000,
    2_000_000_000_000,
    Some(3),
    Some(CommitStrategy::Immediate),
).await?;
```

```text
┌─────────────────────────────────────────┐
│ Layer 2: User Interfaces (Future)       │
│ - FUSE drivers, Web UI, CLI             │
└─────────────────────────────────────────┘
                    ▲
                    │
┌─────────────────────────────────────────┐
│ Layer 1: File System Interface          │
│ - Drive Registry (on-chain)             │
│ - File System Primitives                │
│ - Client SDK                            │
└─────────────────────────────────────────┘
                    ▲
                    │
┌─────────────────────────────────────────┐
│ Layer 0: Scalable Web3 Storage          │
│ - Buckets, Agreements, Providers        │
└─────────────────────────────────────────┘
```
Location: storage-interfaces/file-system/pallet-registry/
Substrate pallet managing:
- Drive registry (maps drive IDs to metadata)
- User registry (maps accounts to drives)
- Bucket-to-drive mapping
- Drive lifecycle (create, update, delete)
Location: storage-interfaces/file-system/client/
Rust library providing:
- High-level file operations
- Directory management
- DAG builder for Merkle trees
- CID caching and optimization
- Blockchain integration via `subxt` for trustless storage
- Real on-chain transaction submission and event extraction
The client SDK uses subxt for blockchain interaction:
- Connection: Connects to parachain WebSocket endpoint
- Signing: Uses SR25519 keypairs (dev accounts or production keys)
- Extrinsics: Submits `DriveRegistry` transactions dynamically
- Events: Extracts drive IDs and transaction results
- Storage: Queries on-chain drive state
Example:
```rust
// Connect to blockchain
let mut fs_client = FileSystemClient::new(
    "ws://127.0.0.1:9944", // Parachain
    "http://localhost:3000" // Provider
)
.await?
.with_dev_signer("alice") // Testing signer
.await?;

// Create drive (submits on-chain extrinsic)
let drive_id = fs_client.create_drive(...).await?;
```

Location: storage-interfaces/file-system/primitives/
Common types used across components:
- `DriveInfo`: Drive metadata
- `DirectoryNode`: Protobuf directory structure
- `FileManifest`: File metadata and chunks
- `CommitStrategy`: Checkpoint configuration
- Helper functions for CID computation
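To make the roles of these types concrete, here is a minimal sketch of plausible shapes for two of them. The field names and types are assumptions for illustration, not the crate's actual definitions:

```rust
// Illustrative shapes only -- the real file-system-primitives crate
// may define these types differently.
struct DriveInfo {
    drive_id: u64,
    name: Option<String>,
    capacity_bytes: u64,
    bucket_id: u64, // backing Layer 0 bucket
}

struct FileManifest {
    path: String,
    size_bytes: u64,
    chunk_cids: Vec<Vec<u8>>, // CID of each chunk, in upload order
}
```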
- Using the system? → Start with User Guide
- Managing the system? → Start with Admin Guide
- Developing with APIs? → Start with API Reference
- Understanding the design? → Start with FILE_SYSTEM_INTERFACE.md
- Technical deep dive? → Start with ARCHITECTURE.md (encoding, security, blockchain)
```toml
# Add dependencies to Cargo.toml
[dependencies]
file-system-client = { path = "storage-interfaces/file-system/client" }
file-system-primitives = { path = "storage-interfaces/file-system/primitives" }
```

```bash
# Prerequisites: Start blockchain and provider node
just start-chain                            # Terminal 1
cargo run --release -p storage-provider-node # Terminal 2
bash scripts/verify-setup.sh                # Verify setup

# Run examples
cd storage-interfaces/file-system/client
cargo run --example basic_usage
```

```bash
# Run pallet tests
cargo test -p pallet-drive-registry

# Run client SDK tests
cargo test -p file-system-client

# Run integration tests
just start-services          # Terminal 1
bash scripts/quick-test.sh   # Terminal 2
```

Complete examples are available in:
- `storage-interfaces/file-system/client/examples/basic_usage.rs` - Complete file system workflow with blockchain integration
The basic_usage.rs example demonstrates the complete Layer 1 file system workflow with real blockchain integration:
```bash
# 1. Start infrastructure
just start-chain                             # Terminal 1
cargo run --release -p storage-provider-node # Terminal 2

# 2. Verify setup
bash scripts/verify-setup.sh

# 3. Run example
cd storage-interfaces/file-system/client
cargo run --example basic_usage
```

What the example demonstrates:
- Connecting to blockchain using `subxt`
- Creating a drive with automatic infrastructure setup
- Building nested directory structures
- Uploading files to different paths
- Listing directory contents recursively
- Downloading and verifying files
- Real on-chain drive registry integration
```bash
# Test primitives
cargo test -p file-system-primitives

# Test pallet
cargo test -p pallet-drive-registry

# Test client SDK
cargo test -p file-system-client

# Run all Layer 1 tests
cargo test -p file-system-primitives -p pallet-drive-registry -p file-system-client
```

- Batch operations (multiple files → single commit)
- Indexer service (off-chain metadata indexing)
- Search API (full-text search on file names)
- Path resolution helpers
- Symbolic links support
- FUSE driver for local mounting
- Web dashboard (Google Drive-like UI)
- CLI tools (`fs-cli ls`, `fs-cli cp`, etc.)
- WebDAV server
- Access control (W3ACL/UCAN integration)
- File sharing and permissions
- Three-Layered Architecture - Overall system design
- Layer 0 Implementation - Technical details
- Quick Start Guide - Get running in 5 minutes
- Manual Testing Guide - Testing procedures
- Extrinsics Reference - Layer 0 blockchain API
- Payment Calculator - Calculate storage costs
If you find issues or have suggestions for the documentation:
- Check existing documentation first
- Search for related issues
- Open an issue with:
- Which document
- What's unclear/missing
- Suggested improvement
For technical issues:
- Check logs with `RUST_LOG=debug`
- Run verification: `bash scripts/verify-setup.sh`
- Review error codes in API Reference
- Open an issue with:
- Error message
- Steps to reproduce
- System information
- Discord: [Link to Discord]
- Forum: [Link to Forum]
- GitHub: [Repository link]
When contributing to File System Interface:
- Keep Layer 0 dependencies minimal
- Follow DAG/content-addressed patterns
- Add comprehensive tests
- Update documentation
- Follow Rust/FRAME best practices
See CLAUDE.md for code standards.
Q: Do I need to understand Layer 0 to use Layer 1? A: No! That's the whole point. Layer 1 completely abstracts Layer 0.
Q: How much does storage cost? A: Depends on provider pricing. Use the Payment Calculator.
Q: Can I access old versions of my files? A: Yes! Each root CID is a snapshot. Save root CIDs to access historical versions.
Q: What happens if a provider fails? A: If you have replicas (3+ providers), other providers take over automatically.
Q: How do I choose between commit strategies? A:
- Immediate: Real-time collaboration
- Batched: Normal usage (default)
- Manual: Batch operations, controlled checkpoints
Q: Can I change commit strategy after creating a drive? A: Not currently. You'd need to create a new drive and migrate data.
Q: What's the maximum file size? A: Limited only by drive capacity. Large files are automatically chunked.
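The automatic chunking mentioned in that answer can be pictured as a fixed-size split. This is a hypothetical sketch; the client's real chunker (chunk size, indexing, reassembly) may differ:

```rust
// Hypothetical fixed-size chunker: split a file into equal-sized chunks
// (the last chunk may be shorter). Chunks can be uploaded independently
// and reassembled in order on download.
fn chunk_file(data: &[u8], chunk_size: usize) -> Vec<Vec<u8>> {
    assert!(chunk_size > 0);
    data.chunks(chunk_size).map(|c| c.to_vec()).collect()
}

fn reassemble(chunks: &[Vec<u8>]) -> Vec<u8> {
    chunks.concat()
}
```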
Q: Are files encrypted? A: Not by default. Add client-side encryption if needed.
Q: Can I share files with other users? A: Not yet. File sharing is planned for Layer 2.
Apache 2.0 - See LICENSE for details.
Need help? Start with the guide for your role:
- 👤 Users: User Guide
- 🔧 Admins: Admin Guide
- 💻 Developers: API Reference