# External File Storage -- Azure File Service Connector

Azure File Share connector for the External File Storage module. Sister
implementation to the Blob Storage connector -- same interface, same
framework, but the underlying Azure service has fundamentally different
semantics that make this connector simpler in some areas and more complex
in others.

## Why this connector exists

Azure File Shares provide an SMB-compatible file system with real
directories, atomic renames, and a two-step file creation API. The Blob
connector fakes directories with marker files and implements move as
copy-then-delete. This connector gets those things natively from the
Azure File Share REST API, which makes directory operations trivially
correct and file moves atomic.

## Architecture in one paragraph

The enum extension in `ExtFileShareConnector.EnumExt.al` registers the
`"File Share"` value on the framework's connector enum and binds it to the
implementation codeunit `ExtFileShareConnectorImpl` (4570). That codeunit
implements every method of the `"External File Storage Connector"`
interface by delegating to `AFS File Client` from the Azure Storage SDK.
A single table (`Ext. File Share Account`, 4570) stores connection config;
secrets live in IsolatedStorage, never in the database. The wizard page
collects all config in a single step -- no share lookup page.

## Key differences from the Blob connector

- **Directories are real.** CreateDirectory, DeleteDirectory, and
DirectoryExists are single API calls. No marker file management.
- **File creation is two steps.** CreateFile calls both
`AFSFileClient.CreateFile` (allocate the resource) and
`AFSFileClient.PutFileStream` (upload content). The Azure File Share
REST API requires both.
- **Move is atomic.** MoveFile calls `AFSFileClient.RenameFile` -- a
native server-side rename. No copy-then-delete race condition.
- **Copy needs a full URI.** CopyFile constructs
`https://{storageAccount}.file.core.windows.net/{fileShare}/{escapedPath}`
as the source parameter. The Blob connector does not need this.
- **Path length is enforced.** CheckPath rejects paths over 2048
characters (Azure File Share API limit).
- **Simpler wizard.** One page with manual text entry for the file share
name. No container/share lookup interaction.
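The path-length rule above is simple enough to sketch. This is an illustrative Python model (the real connector is AL; `check_path` and the constant name are hypothetical stand-ins for the connector's `CheckPath`):

```python
# Illustrative model of the 2048-character path limit described above.
# The real check lives in the AL procedure CheckPath; names here are
# hypothetical.
MAX_PATH_LENGTH = 2048

def check_path(path: str) -> None:
    """Reject paths that exceed the Azure File Share API limit."""
    if len(path) > MAX_PATH_LENGTH:
        raise ValueError(f"Path exceeds {MAX_PATH_LENGTH} characters")

check_path("folder/file.txt")  # short paths pass silently
```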

## What to watch out for

The `DirectoryExists` implementation does not use a metadata call like
`FileExists` does. Instead it calls `ListDirectory` with `MaxResults(1)`
and treats a 404 as "not found." This is because the Azure File Share API
does not expose a directory metadata endpoint the same way it does for
files.

The `CreateFile` two-step pattern means a failure on `PutFileStream`
leaves an allocated but empty file on the share. There is no rollback.

The `Secret` field on the wizard page is a plain `Text` variable marked
`[NonDebuggable]`, not a `SecretText`. It becomes `SecretText` only when
passed into `CreateAccount`. This is the same pattern as the Blob
connector.

## Build and test

CountryCode is `W1`. The test app is
`External File Storage - Azure File Service Connector Tests`
(ID `80ef626f-e8de-4050-b144-0e3d4993a718`), declared in
`internalsVisibleTo` in `app.json`.
# Business logic

All business logic lives in `ExtFileShareConnectorImpl.Codeunit.al`. The
codeunit is `Access = Internal` -- only the framework and the test app
(via `internalsVisibleTo`) can call it directly.

## InitFileClient -- the gate for every operation

Every file/directory operation goes through `InitFileClient` before
touching Azure. This procedure loads the account record, checks the
`Disabled` flag, retrieves the secret from IsolatedStorage, selects the
auth strategy (SAS or SharedKey), and initializes the `AFS File Client`.
If anything fails here -- missing account, disabled flag, missing secret
-- the operation errors before making any network call.

```mermaid
flowchart TD
A[Any file operation] --> B[InitFileClient]
B --> C{Account exists?}
C -- No --> ERR1[Error: not registered]
C -- Yes --> D{Disabled?}
D -- Yes --> ERR2[Error: account disabled]
D -- No --> E{Auth type?}
E -- SasToken --> F[UseReadySAS]
E -- SharedKey --> G[CreateSharedKey]
F --> H[Initialize AFS File Client]
G --> H
```
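The gating sequence in the flowchart can be modeled compactly. This is a hedged Python sketch, not the AL implementation: the account store and IsolatedStorage are simplified to dictionaries, and all names are hypothetical.

```python
# Hedged model of the InitFileClient gate: every operation must pass
# these checks before any network call is made. Dicts stand in for the
# account table and IsolatedStorage.
accounts = {}          # account_id -> {"disabled", "auth", "secret_key"}
isolated_storage = {}  # secret-key GUID -> credential

def init_file_client(account_id: str):
    account = accounts.get(account_id)
    if account is None:
        raise LookupError("Account is not registered")
    if account["disabled"]:
        raise PermissionError("Account is disabled")
    secret = isolated_storage.get(account["secret_key"])
    if secret is None:
        raise LookupError("Secret missing from isolated storage")
    # Select the auth strategy, then initialize the client (stubbed here)
    if account["auth"] == "SAS":
        return ("sas", secret)
    return ("shared_key", secret)
```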

## File creation -- the two-step protocol

This is the most important behavioral difference from the Blob connector.
Azure File Share's REST API requires you to first allocate a file resource
(with its size) and then upload the content in a separate call. The Blob
connector does both in a single `PutBlobBlockBlobStream` call.

```mermaid
flowchart TD
A[CreateFile called] --> B[InitFileClient]
B --> C["AFSFileClient.CreateFile(Path, Stream)"]
C --> D{Success?}
D -- No --> ERR[Error]
D -- Yes --> E["AFSFileClient.PutFileStream(Path, Stream)"]
E --> F{Success?}
F -- No --> ERR
F -- Yes --> DONE[Done]
```

The risk here is that if the first call succeeds but the second fails,
you are left with an allocated empty file on the share. There is no
cleanup logic.
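The failure window is easiest to see in a toy model. Below, a fake in-memory share stands in for Azure (all names hypothetical; the real calls are `AFSFileClient.CreateFile` and `AFSFileClient.PutFileStream` in AL):

```python
# Hedged model of the two-step create protocol and its failure window.
class FakeShare:
    """Stand-in for an Azure File Share; None marks an empty allocation."""
    def __init__(self, fail_upload=False):
        self.files = {}            # path -> content (None = allocated only)
        self.fail_upload = fail_upload

    def create_file(self, path, size):
        self.files[path] = None    # step 1: allocate the resource

    def put_file_stream(self, path, content):
        if self.fail_upload:
            raise IOError("upload failed")
        self.files[path] = content # step 2: upload the content

def create_file(share, path, content: bytes):
    share.create_file(path, len(content))
    # If this raises, the empty allocation from step 1 remains on the
    # share -- the connector performs no rollback.
    share.put_file_stream(path, content)
```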

## Move vs copy -- atomic vs constructed

MoveFile delegates to `AFSFileClient.RenameFile`, which is a native
server-side rename. This is atomic -- either the rename happens or it
does not. The Blob connector cannot do this because Azure Blob Storage
has no rename API; it must copy then delete, which can leave orphaned
copies if the delete fails.

CopyFile is more complex than you might expect. The Azure File Share copy
API requires the source to be specified as a full URI, not a relative
path. So `CopyFile` calls `CreateUri` to construct
`https://{storageAccount}.file.core.windows.net/{fileShare}/{escapedPath}`,
URL-encoding the source path with `Uri.EscapeDataString()`. The target is
just a path. This asymmetry is an Azure API requirement, not a design
choice.
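The source-URI construction can be sketched in Python, using `urllib.parse.quote` with `safe=""` as a rough analogue of .NET's `Uri.EscapeDataString()` (which also escapes `/`). The function name is hypothetical; the URI shape is the one described above.

```python
from urllib.parse import quote

def create_source_uri(storage_account: str, file_share: str,
                      path: str) -> str:
    # quote(..., safe="") approximates Uri.EscapeDataString: it
    # percent-encodes spaces, slashes, and other reserved characters.
    escaped = quote(path, safe="")
    return (f"https://{storage_account}.file.core.windows.net/"
            f"{file_share}/{escaped}")
```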

## Existence checks -- two different strategies

FileExists and DirectoryExists use different approaches because the Azure
File Share API exposes different capabilities for files and directories.

`FileExists` calls `GetFileMetadata` -- a direct metadata lookup on the
file. If it succeeds, the file exists. If the error message contains
`'404'`, the file does not exist. Any other error propagates.

`DirectoryExists` cannot use a metadata call (the API does not support
directory metadata the same way). Instead it calls `ListDirectory` with
`MaxResults(1)` on the target path. A successful response means the
directory exists. A 404 means it does not. This is slightly more
expensive than a metadata call but is the only reliable option.

Both procedures use string matching on `'404'` in the error message to
distinguish "not found" from other failures. This is fragile but matches
the pattern used across all connectors in the framework.
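The two strategies, and the shared 404-string convention, look roughly like this in a hedged Python model (`AzureError` is a hypothetical stand-in for errors surfaced by the AFS SDK):

```python
# Sketch of the two existence-check strategies described above.
class AzureError(RuntimeError):
    """Hypothetical stand-in for an AFS SDK error."""

def file_exists(client, path) -> bool:
    try:
        client.get_file_metadata(path)   # direct metadata lookup
        return True
    except AzureError as err:
        if "404" in str(err):            # string match, not a status code
            return False
        raise                            # any other failure propagates

def directory_exists(client, path) -> bool:
    try:
        # No directory metadata endpoint, so list with max_results=1
        client.list_directory(path, max_results=1)
        return True
    except AzureError as err:
        if "404" in str(err):
            return False
        raise
```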

## Directory operations -- no marker files

This is the headline simplification over the Blob connector. Azure File
Shares have real directories, so:

- `CreateDirectory` just calls `AFSFileClient.CreateDirectory(Path)`,
with a pre-check via `DirectoryExists` to give a clean error message
instead of a confusing Azure API error.
- `DeleteDirectory` just calls `AFSFileClient.DeleteDirectory(Path)`. No
need to find and delete marker files.
- `ListDirectories` and `ListFiles` use the same `GetDirectoryContent`
helper, then filter by `Resource Type` (Directory vs File).

## Listing and pagination

`GetDirectoryContent` is the shared listing engine. It initializes the
file client, enforces path constraints via `CheckPath` (trailing slash,
2048 char max), sets `MaxResults(500)` and a continuation marker, then
calls `AFSFileClient.ListDirectory`. The response is validated by
`ValidateListingResponse`, which updates the pagination marker and sets
an end-of-listing flag when the marker is empty.

This is the same marker-based pagination pattern as the Blob connector.
The page size of 500 is hardcoded.
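The marker loop can be sketched as follows. This is an illustrative Python model of the pagination pattern, not the AL code; the client interface is hypothetical.

```python
# Hedged model of marker-based pagination in GetDirectoryContent.
# The connector hardcodes a page size of 500.
def list_all(client, path, page_size=500):
    entries, marker = [], None
    while True:
        page, marker = client.list_directory(
            path, max_results=page_size, marker=marker)
        entries.extend(page)
        if not marker:        # empty marker signals end of listing
            return entries
```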

## Account registration wizard

The wizard (`ExtFileShareAccountWizard.Page.al`) is a single-page
NavigatePage. The user fills in account name, storage account name, auth
type, secret, and file share name. There is no share lookup -- the user
must know the file share name. Clicking "Next" calls
`CreateAccount`, which generates a GUID, writes the secret to
IsolatedStorage, and inserts the record.

The "Back" button just closes the page without saving. Validation is
minimal -- `IsAccountValid` checks that three text fields are non-empty.
# Data model

## Overview

This connector has the simplest possible data model: one table and one
secret in IsolatedStorage. There are no table extensions, no
relationships to other tables, and no intermediate records.

```mermaid
erDiagram
"Ext. File Share Account" ||--o| IsolatedStorage : "Secret Key GUID"
"Ext. File Share Account" ||--|| "File Account" : "maps to (in-memory)"
"Ext. File Share Account" }|--|| "Ext. File Share Auth. Type" : "Authorization Type"
```

## How secrets work

The table stores a GUID in the `Secret Key` field, not the actual secret.
The real credential (SAS token or shared key) lives in IsolatedStorage at
company scope, keyed by that GUID. This is the standard BC pattern for
credential storage -- the GUID is an opaque handle.

When an account is deleted, the OnDelete trigger purges the
IsolatedStorage entry. When a secret is first set (via `SetSecret`), the
procedure generates a new GUID if one does not already exist, then writes
the secret to IsolatedStorage. There is no migration path for rotating
secrets -- calling `SetSecret` again overwrites the value in place using
the same GUID.
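The GUID-handle indirection and its lifecycle can be modeled in a few lines. This is a hedged Python sketch: a dict plays the role of IsolatedStorage, and the method names mirror but do not reproduce the AL triggers.

```python
# Hedged model of GUID-keyed secret storage: the table holds only the
# GUID handle; the credential lives in a separate store.
import uuid

isolated_storage = {}  # GUID -> credential, stand-in for IsolatedStorage

class Account:
    def __init__(self):
        self.secret_key = None  # GUID handle persisted in the table

    def set_secret(self, secret: str):
        if self.secret_key is None:
            self.secret_key = str(uuid.uuid4())  # generated once
        isolated_storage[self.secret_key] = secret  # overwrite in place

    def on_delete(self):
        # Mirrors the OnDelete trigger: purge the stored credential
        isolated_storage.pop(self.secret_key, None)
```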

## The "File Account" mapping

`GetAccounts` in the implementation codeunit reads every `Ext. File Share
Account` record and maps it into a temporary `File Account` record that
the framework understands. This mapping happens in memory on every call --
there is no persisted `File Account` table owned by this connector.

## Disabled flag

The `Disabled` boolean is set to true by the environment cleanup
subscriber when a sandbox is created. `InitFileClient` checks this flag
on every operation and errors if the account is disabled. The flag is
user-visible and editable on the account card page, so an admin can
manually re-enable an account in a sandbox if they provide valid
credentials.
# Extensibility

## What you can extend

Almost nothing. This connector is deliberately closed. It is a leaf
implementation of the External File Storage framework -- it consumes an
interface, it does not define one.

The table `Ext. File Share Account` (4570) is extensible by default (no
`Extensible = false`), so you can add fields to it with a table
extension. This is the only real extension point. You might use this to
store additional per-account metadata, but be aware that the wizard and
card pages are both `Extensible = false`, so you cannot add those fields
to the standard UI without building your own page.

## What you cannot extend

- Both pages (`Ext. File Share Account` and `Ext. File Share Account
Wizard`) are marked `Extensible = false`. You cannot add fields,
actions, or layout changes.
- The implementation codeunit is `Access = Internal`. You cannot call its
procedures directly from outside the app (unless you are the test app
declared in `internalsVisibleTo`).
- The auth type enum is `Access = Internal`. You cannot add new
authentication methods via enum extension.
- The connector publishes no events. There are no subscriber hooks for
intercepting or augmenting file operations.

## How to build a different connector

This app is best understood as a reference implementation. If you want to
connect to a different storage backend, you do not extend this app -- you
build a new one that follows the same pattern:

1. Create an enum extension on `"Ext. File Storage Connector"` that adds
your connector value and binds it to your implementation codeunit via
the `Implementation` property.
2. Implement the `"External File Storage Connector"` interface in your
codeunit.
3. Create your own account table, account page, and wizard.
4. Register your connector's permission sets by extending `"File Storage
- Admin"` and `"File Storage - Edit"`.

The framework discovers connectors through the enum, not through any
registration API. Adding a value to the enum is all it takes to appear
in the connector list.
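The "enum is the registry" idea can be illustrated with a rough Python analogue, where a plain dict plays the role of the enum and its `Implementation` bindings (all names hypothetical; the real mechanism is an AL enum extension):

```python
# Sketch of enum-as-registry discovery: each connector value maps
# directly to its implementation, and "discovery" is just enumeration.
from typing import Protocol

class FileStorageConnector(Protocol):
    def list_files(self, path: str) -> list: ...

CONNECTORS = {}  # value name -> implementation, standing in for the enum

def register(name: str, impl) -> None:
    CONNECTORS[name] = impl  # adding an entry is all it takes

class FileShareConnector:
    def list_files(self, path: str) -> list:
        return []  # stubbed

register("File Share", FileShareConnector())
```

The framework would then iterate `CONNECTORS` to populate its connector list, with no separate registration API.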

## Permission set structure

The permission sets follow a layered pattern: Objects (execute on the
table and pages) is included in Read (select on tabledata), which is in
turn included in Edit (insert/modify/delete on tabledata). The
permission set extensions wire Edit into the framework's `"File Storage
- Admin"` set and Read into the `"File Storage - Edit"` set, so
framework-level permission assignments automatically grant the right
access to this connector's objects.

The implicit entitlement (`ExtFileShareConnector.Entitlement.al`) grants
Edit-level access, meaning all SaaS users with the entitlement get full
CRUD on account records without explicit permission set assignment.
# Patterns

## Interface-based connector registration

The connector registers itself with the framework entirely through an
enum extension. `ExtFileShareConnector.EnumExt.al` adds the `"File
Share"` value to the `"Ext. File Storage Connector"` enum and uses the
`Implementation` property to bind it to `"Ext. File Share Connector
Impl"`. No factory, no registration API, no event subscription -- the
enum is the registry. This is the standard BC pattern for pluggable
implementations and is shared with the Blob, SFTP, and SharePoint
connectors.

## GUID-keyed secret indirection

Credentials never touch a database table. The account table stores only a
GUID in the `Secret Key` field. The actual secret lives in
IsolatedStorage at company scope, keyed by that GUID. This indirection
means the secret is inaccessible to SQL queries, backup restores, or
configuration package exports. The OnDelete trigger cleans up the
IsolatedStorage entry. `SetSecret` is idempotent -- it generates the GUID
once, then overwrites the value in place on subsequent calls.

This is identical to the Blob connector's approach and is the recommended
BC pattern for any credential that must survive data export scenarios.

## Native directory operations (File Share vs Blob)

The most architecturally significant pattern in this connector is what it
does *not* do. The Blob connector has to simulate directories using
marker files because Azure Blob Storage is a flat key-value store. This
connector operates against Azure File Shares, which have a real
hierarchical file system. So `CreateDirectory`, `DeleteDirectory`, and
`DirectoryExists` are trivial single-call operations.

If you are reading this connector alongside the Blob connector, the
absence of marker file management is the main thing to notice. It
simplifies the code dramatically and eliminates an entire class of
consistency bugs (orphaned markers, race conditions on directory
deletion).

## Two-step file creation

The Azure File Share REST API requires files to be created in two calls:
first `CreateFile` to allocate the resource on the server (this tells
Azure the expected file size), then `PutFileStream` to upload the actual
content. This is different from Blob Storage's single-call upload and is
a consequence of how Azure File Shares implement SMB-compatible file
semantics.

The pattern introduces a failure window between the two calls. If
allocation succeeds but upload fails, an empty file remains on the share.
The connector does not attempt cleanup in this case.

## Atomic rename for move

`MoveFile` calls `AFSFileClient.RenameFile` -- a native server-side
rename that is atomic. The Blob connector must do copy-then-delete for
the same operation because Azure Blob Storage does not support rename.
The copy-then-delete approach has a failure window where both source and
target exist, or where the copy succeeds but the delete fails, leaving a
duplicate. The File Share connector avoids this entirely.

## 404-string matching for existence checks

Both `FileExists` and `DirectoryExists` detect "not found" by checking
whether the error message string contains `'404'`. This is not
status-code inspection -- it is string matching on the error text
returned by the AFS SDK. The pattern is fragile in theory (a change to
the SDK's error message format would break it) but is used consistently
across all connectors in the framework, so it is effectively a
convention.

## Lazy client initialization

Every public operation calls `InitFileClient` to construct an
`AFS File Client` from scratch. There is no cached client, no connection
pool. Each operation loads the account record, retrieves the secret,
builds the auth object, and initializes the client. This is
stateless-by-design -- it keeps the codeunit free of instance state and
avoids stale-credential bugs, at the cost of repeated IsolatedStorage
reads.

## Environment cleanup hook

The codeunit subscribes to `OnClearCompanyConfig` from the `Environment
Cleanup` codeunit. When a sandbox is created, the subscriber sets
`Disabled = true` on all accounts via `ModifyAll`. This prevents sandbox
environments from accidentally connecting to production storage accounts.
The admin can manually re-enable accounts on the card page after
verifying the credentials are appropriate for the sandbox context.