dyink

dyink is a Go CLI for bootstrapping a local Dayu base system inside Docker-hosted containers. It follows the same overall model as kind and keink: the host only needs Docker, while dyink create dayu creates a KubeEdge-based cluster, installs the Dayu dependency stack, and brings up the Dayu base services.

Dyink v1 intentionally stops at the Dayu base system:

  • backend
  • frontend
  • datasource
  • redis

It does not install example DAGs by default.

Architecture

dyink create dayu performs these steps:

  1. Reads a kind-style cluster config and guarantees at least one edge-node.
  2. Creates a Docker-hosted KubeEdge cluster by reusing keink cluster creation.
  3. Stages a managed dayu-files directory and mounts it into every node container.
  4. Installs the minimal Dayu-Sedna resources needed for JointMultiEdgeService.
  5. Installs the minimal Dayu-EdgeMesh resources and configures relay information from the real control-plane node.
  6. Creates the Dayu namespace, service account, and the four base JointMultiEdgeService resources.
  7. Resolves the generated backend-cloud NodePort and then injects the host-mapped backend URL into the frontend runtime config so the browser can use the stable localhost entrypoint.
  8. Waits for the base system to become ready and prints stable host URLs for the frontend and backend.
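Step 7 can be sketched in Go. This is an illustrative sketch, not dyink's actual code: the function names and the runtime-config variable are assumptions, but the idea matches the description above, i.e. the browser-facing URL is built from the host-mapped port, not the in-cluster NodePort address.

```go
package main

import "fmt"

// backendHostURL builds the stable browser-facing backend URL from the
// host port that is mapped onto the backend-cloud NodePort.
// (Hypothetical helper, for illustration only.)
func backendHostURL(hostPort int) string {
	return fmt.Sprintf("http://127.0.0.1:%d", hostPort)
}

// frontendRuntimeConfig renders a minimal runtime-config snippet that a
// frontend could load at startup; the variable name is invented here.
func frontendRuntimeConfig(backendURL string) string {
	return fmt.Sprintf("window.__DAYU_BACKEND_URL__ = %q;", backendURL)
}

func main() {
	url := backendHostURL(30081)
	fmt.Println(frontendRuntimeConfig(url))
}
```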

Prerequisites

  • Docker
  • Go 1.23+
  • Network access for pulling container images the first time you run the cluster

Quick Start

Run dyink directly from the repository root:

go run . create dayu

By default dyink creates:

  • a cluster named dyink
  • a kubeconfig under the dyink-managed cache directory
  • frontend access on http://127.0.0.1:30080
  • backend access on http://127.0.0.1:30081

Delete the cluster:

go run . delete cluster --name dyink

Configuring The Topology

Dyink accepts a kind-style config file and also supports the custom edge-node role:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: demo
nodes:
  - role: control-plane
  - role: edge-node
  - role: worker

Run with a config file:

go run . create dayu --config ./cluster.yaml

Cluster naming follows this order:

  • explicit --name
  • name: from the config file
  • fallback dyink
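The precedence above is simple to express. A minimal sketch (assumed function name, not dyink's real implementation):

```go
package main

import "fmt"

// resolveClusterName applies the documented precedence:
// explicit --name, then name: from the config file, then "dyink".
func resolveClusterName(flagName, configName string) string {
	if flagName != "" {
		return flagName
	}
	if configName != "" {
		return configName
	}
	return "dyink"
}

func main() {
	fmt.Println(resolveClusterName("", "demo")) // config name wins over the fallback
	fmt.Println(resolveClusterName("", ""))     // falls back to "dyink"
}
```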

Dyink preprocesses the config before cluster creation:

  • ensures a control-plane node exists
  • ensures at least one edge node exists
  • mounts the managed Dayu asset tree into every node at /data/dayu-files
  • mounts the datasource directory into every node at /data/datasource
  • adds frontend and backend host port mappings to the first control-plane node

Dyink also refuses to create a cluster if another cluster with the same resolved name already exists.

Useful Flags

dyink create dayu supports:

  • --name
  • --config
  • --kubeconfig
  • --kubeedge-image
  • --wait
  • --retain
  • --registry
  • --repo
  • --dayu-tag
  • --sedna-tag
  • --edgemesh-tag
  • --frontend-port
  • --backend-port
  • --assets-dir
  • --datasource-dir

Example:

go run . create dayu \
  --name demo \
  --config ./cluster.yaml \
  --frontend-port 31080 \
  --backend-port 31081 \
  --assets-dir /path/to/dayu-files \
  --datasource-dir /path/to/videos

If bootstrap fails, dyink deletes the partially created cluster and removes its managed asset directory by default. Pass --retain if you want to inspect the failed environment.

Managed Assets

Dyink stages a cluster-specific asset root under the user cache directory and mounts it into all node containers at:

/data/dayu-files

The staged asset tree includes:

  • the embedded upstream template/ files
  • placeholder directories for referenced schedulers, processors, generators, and acc-gt* assets
  • a temp/ directory for Dayu runtime output

When the cluster is ready, dyink rewrites:

  • template/base.yaml with the selected image registry, repository, tag, and actual datasource node
  • template/scheduler/fixed-policy.yaml
  • template/scheduler/hedger.yaml

If you pass --assets-dir, dyink overlays your directory into the managed asset root so you can provide full real models and strategy assets.

Datasource Directory

If you pass --datasource-dir, that host directory is mounted into every node and exposed to Dayu at:

/data/datasource

If you omit it, dyink creates an empty managed datasource directory so the datasource base container can still start.

Output

On success dyink prints:

  • cluster name
  • kubeconfig path
  • Dayu namespace readiness
  • frontend URL
  • backend URL
  • backend service NodePort
  • backend service address on the cluster node network
  • managed assets path
  • datasource path

Testing

Unit tests cover:

  • cluster config preprocessing
  • asset staging and template rewriting
  • resource rendering
  • readiness helpers

Run them with:

GOCACHE=/tmp/dyink-go-build GOMODCACHE=/tmp/dyink-go-mod go test ./...

There is also a Docker smoke test skeleton that is skipped by default. Enable it with:

DYINK_SMOKE_TEST=1 GOCACHE=/tmp/dyink-go-build GOMODCACHE=/tmp/dyink-go-mod go test ./pkg/dyink -run TestSmokeCreateDeleteCluster -v

Known Limitations

  • v1 supports Docker only.
  • v1 only guarantees the Dayu base system is ready.
  • real strategy/model execution usually requires a complete user-provided dayu-files asset directory.
  • the first run depends on pulling published images.

Contributing

If you're interested in contributing, see CONTRIBUTING.

License

Dyink is released under the Apache 2.0 license. See LICENSE for details.
