YAML anchors and aliases let you define a value once and reuse it throughout a document, following the DRY (Don't Repeat Yourself) principle. Combined with merge keys (<<), they let you build composable, maintainable configuration files for Docker Compose, GitHub Actions, GitLab CI, Kubernetes, and more. This guide covers everything from basic syntax to advanced patterns and common pitfalls.
1. What Are Anchors and Aliases?
In YAML, an anchor is a label attached to a node (a scalar, mapping, or sequence) using the & character followed by a name. An alias references that anchor using the * character with the same name. When a YAML parser encounters an alias, it substitutes the anchored value at that position.
This mechanism is part of both the YAML 1.1 and 1.2 specifications, which means it works with any compliant parser, including PyYAML, js-yaml, SnakeYAML, ruamel.yaml, and go-yaml.
Anchors and aliases are YAML's native way to avoid duplication. Instead of copying the same configuration block multiple times, define it once and reference it everywhere else.
# Anchor: &name attaches a label to a value
# Alias: *name references that labeled value
defaults: &default_settings
timeout: 30
retries: 3
verbose: false
# Alias: reuse the entire defaults block
production:
<<: *default_settings
verbose: true
# After parsing, production = { timeout: 30, retries: 3, verbose: true }
- &name creates an anchor on a node
- *name creates an alias (a reference) to that anchor
- An anchor must be defined before it is referenced
- Anchor names may contain alphanumeric characters, hyphens, and underscores
- An alias yields an identical copy of the anchored node (most parsers actually return the same in-memory object)
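These rules are easy to check with a real parser. A minimal sketch, assuming PyYAML is installed:

```python
import yaml

doc = """
defaults: &default_settings
  timeout: 30
  retries: 3
  verbose: false
production:
  <<: *default_settings
  verbose: true
"""

data = yaml.safe_load(doc)
production = data["production"]
# The merge key pulls in timeout and retries; the local key wins for verbose.
assert production == {"timeout": 30, "retries": 3, "verbose": True}
```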
2. Basic Anchor and Alias Syntax
The simplest use of anchors and aliases is with scalar values (strings, numbers, booleans). You attach an anchor to a value, then reference it elsewhere in the same document.
Scalar anchors:
# Define a scalar value with an anchor
db_host: &db_host "postgres.example.com"
db_port: &db_port 5432
db_name: &db_name "myapp_production"
# WRONG: aliases are not expanded inside quoted strings
connection_string: "postgresql://user:pass@*db_host:*db_port/*db_name"
# Note: aliases work as standalone values, not inside strings!
# Correct usage:
primary:
host: *db_host
port: *db_port
name: *db_name
backup:
host: *db_host # Same host as primary
port: *db_port # Same port as primary
name: *db_name # Same database name
After parsing, primary.host and backup.host both resolve to "postgres.example.com". The connection_string, however, keeps the literal text *db_host: aliases are never expanded inside strings.
Multiple anchors in the same document:
# Multiple anchors for different values
app_version: &version "2.5.0"
node_image: &node_img "node:20-alpine"
python_image: &python_img "python:3.12-slim"
default_replicas: &replicas 3
services:
api:
image: *node_img
replicas: *replicas
labels:
version: *version
worker:
image: *python_img
replicas: *replicas
labels:
version: *version
frontend:
image: *node_img
replicas: 1 # Override: only 1 replica for frontend
labels:
version: *version
3. Anchoring Objects (Mappings)
The real power of anchors comes from anchoring entire mappings (objects). You define a complete configuration block once and reuse it throughout the document.
When you anchor a mapping, an alias reproduces every key-value pair in that mapping.
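One detail worth knowing: "copy" is a simplification. In PyYAML, as in most parsers, every alias resolves to the very object the anchor produced, not a fresh copy. A sketch (PyYAML assumed installed):

```python
import yaml

doc = """
logging: &default_logging
  driver: json-file
  options:
    max-size: "10m"
services:
  api_server:
    logging: *default_logging
  worker_server:
    logging: *default_logging
"""

data = yaml.safe_load(doc)
api_log = data["services"]["api_server"]["logging"]
worker_log = data["services"]["worker_server"]["logging"]

assert api_log == data["logging"]  # same content as the anchor
assert api_log is worker_log       # and in PyYAML, the same object
```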
# Anchor an entire mapping (object)
logging: &default_logging
driver: json-file
options:
max-size: "10m"
max-file: "3"
tag: "{{.Name}}"
services:
api_server:
image: myapp-api:latest
logging: *default_logging # Entire logging config reused
worker_server:
image: myapp-worker:latest
logging: *default_logging # Same logging config
scheduler:
image: myapp-scheduler:latest
logging: *default_logging # Same logging config
Anchoring nested objects:
# Anchor nested configuration blocks
database_config: &db_config
host: db.internal.example.com
port: 5432
pool_size: 20
ssl: true
timeout: 30
cache_config: &cache_config
host: redis.internal.example.com
port: 6379
ttl: 3600
environments:
production:
database: *db_config
cache: *cache_config
staging:
database: *db_config # Exact same DB config
cache: *cache_config # Exact same cache config
# For different config, you'd need merge keys (section 4)
# or define a new block
4. Merge Keys (<<): Merging Mappings
The merge key (<<) is a YAML extension that merges an anchored mapping into another mapping while letting you override specific fields. For configuration files this is far more useful than a plain alias.
With <<: *alias, every key from the anchored mapping is inserted into the current mapping. If the current mapping already defines a key that also exists in the anchor, the local value wins (local overrides merged).
# Define defaults with an anchor
defaults: &service_defaults
image: myapp:latest
restart: always
environment:
NODE_ENV: production
LOG_LEVEL: info
volumes:
- /var/log/app:/app/logs
ports:
- "8080:3000"
services:
production:
<<: *service_defaults # Merge all defaults
# production uses everything as-is
staging:
<<: *service_defaults # Merge all defaults
ports:
- "8081:3000" # Override: different port
environment:
NODE_ENV: staging # Override: different NODE_ENV
LOG_LEVEL: debug # Override: more verbose logging
development:
<<: *service_defaults # Merge all defaults
image: myapp:dev # Override: dev image
ports:
- "3000:3000" # Override: direct port mapping
environment:
NODE_ENV: development
LOG_LEVEL: debug
DEBUG: "true"
Override precedence:
Local keys always take precedence over merged keys. This is the fundamental rule of merge keys.
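The rule mirrors dict merging in Python, where later keys overwrite earlier ones. A stdlib-only analogy (not a YAML parser, just the same precedence logic):

```python
# <<: *base followed by local keys behaves like merging the base
# dict first, then letting the local keys overwrite it.
base = {"name": "default", "timeout": 30, "retries": 3, "debug": False}
local = {"name": "my-service", "debug": True, "extra_key": "new"}

service = {**base, **local}  # later (local) keys win

assert service == {
    "name": "my-service",  # overridden
    "timeout": 30,         # inherited
    "retries": 3,          # inherited
    "debug": True,         # overridden
    "extra_key": "new",    # added
}
```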
# Override precedence demonstration
base: &base
name: "default"
timeout: 30
retries: 3
debug: false
service:
<<: *base
name: "my-service" # Overrides "default" -> "my-service"
debug: true # Overrides false -> true
# timeout: 30 <- inherited from base (not overridden)
# retries: 3 <- inherited from base (not overridden)
extra_key: "new" # Added: not in base at all
# Parsed result:
# service:
# name: "my-service"
# timeout: 30
# retries: 3
# debug: true
# extra_key: "new"
5. Multiple Merges and Precedence
You can merge from several anchors at once by passing a list to the << key. When multiple anchors are merged, the first anchor in the list has the highest precedence among the merged values, and local keys still override everything.
Precedence order (highest to lowest):
- 1. Local keys defined in the current mapping
- 2. The first anchor in the merge list
- 3. The second anchor in the merge list
- 4. The third anchor in the merge list, and so on
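The four levels can be sketched in plain Python (an analogy for the semantics, not a parser): merge sources are applied last-to-first so that earlier anchors overwrite later ones, then local keys overwrite the lot.

```python
def apply_merge(merge_list, local):
    """Simulate <<: [a, b, ...]: earlier list entries beat later
    ones, and local keys beat every merged source."""
    result = {}
    for source in reversed(merge_list):
        result.update(source)  # applied last-to-first
    result.update(local)       # local keys win
    return result

first = {"color": "red", "size": "large", "weight": "heavy"}
second = {"color": "blue", "size": "medium", "shape": "round"}

merged = apply_merge([first, second], {"color": "green"})
assert merged == {
    "color": "green",    # local key wins
    "size": "large",     # from first (first in list)
    "weight": "heavy",   # from first (only source)
    "shape": "round",    # from second (only source)
}
```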
# Multiple merge sources
app_defaults: &app_defaults
image: myapp:latest
restart: always
replicas: 3
logging_defaults: &logging_defaults
logging:
driver: json-file
options:
max-size: "10m"
monitoring_defaults: &monitoring_defaults
labels:
monitoring: "true"
team: "platform"
healthcheck:
interval: 30s
timeout: 10s
retries: 3
services:
api:
# Merge from multiple anchors (list syntax)
<<: [*app_defaults, *logging_defaults, *monitoring_defaults]
ports:
- "8080:3000"
worker:
<<: [*app_defaults, *logging_defaults, *monitoring_defaults]
replicas: 5 # Override: more replicas for worker
command: ["npm", "run", "worker"]
# If app_defaults and monitoring_defaults both define "labels",
# app_defaults wins (first in the list)
# Precedence example with conflicting keys
first: &first
color: red
size: large
weight: heavy
second: &second
color: blue
size: medium
shape: round
result:
<<: [*first, *second]
color: green # Local override
# Parsed result:
# result:
# color: green <- local key wins
# size: large <- from *first (first in list)
# weight: heavy <- from *first (only source)
# shape: round <- from *second (only source)
6. Anchoring Arrays (Sequences)
You can anchor entire arrays (sequences) just as you can scalars and mappings, with one important limitation: arrays cannot be merged the way << merges mappings.
Array anchors work through plain aliases:
# Anchor an entire array
shared_volumes: &volumes
- ./config:/app/config:ro
- ./logs:/app/logs
- /var/run/docker.sock:/var/run/docker.sock
shared_ports: &ports
- "8080:3000"
- "8443:3443"
services:
web:
volumes: *volumes # Reuse entire volume list
ports: *ports # Reuse entire port list
api:
volumes: *volumes # Same volumes
ports:
- "9090:3000" # Different ports (cannot merge with *ports)
Workarounds for extending arrays:
# YAML does NOT support this (will cause an error):
# combined:
# <<: [*list_a, *list_b] # ERROR: << only works with mappings
# Workaround 1: Repeat values manually
all_hosts:
- host1.example.com
- host2.example.com
- host3.example.com # Additional host
- host4.example.com # Additional host
# Workaround 2: Use a mapping with anchor + merge instead
host_group_a: &hosts_a
host1: host1.example.com
host2: host2.example.com
host_group_b: &hosts_b
host3: host3.example.com
host4: host4.example.com
all_hosts:
<<: [*hosts_a, *hosts_b]
# Result: { host1: ..., host2: ..., host3: ..., host4: ... }
7. Docker Compose: x- Extension Fields
Docker Compose is the most popular real-world use case for YAML anchors. Since Compose file format 3.4, top-level keys prefixed with x- serve as extension fields that can hold anchor definitions; Docker Compose ignores any top-level key starting with x-.
This pattern is the recommended way to share configuration across services:
# docker-compose.yml
# x- extension fields are ignored by Docker Compose
x-default-service: &default-service
restart: unless-stopped
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
networks:
- app-network
deploy:
resources:
limits:
memory: 512M
reservations:
memory: 256M
services:
api:
<<: *default-service
image: myapp-api:latest
ports:
- "8080:3000"
environment:
- NODE_ENV=production
- DATABASE_URL=postgresql://db:5432/myapp
depends_on:
- postgres
- redis
worker:
<<: *default-service
image: myapp-worker:latest
environment:
- NODE_ENV=production
- QUEUE_URL=redis://redis:6379
depends_on:
- redis
scheduler:
<<: *default-service
image: myapp-scheduler:latest
environment:
- NODE_ENV=production
deploy:
resources:
limits:
memory: 256M # Less memory for scheduler
reservations:
memory: 128M
postgres:
<<: *default-service
image: postgres:16-alpine
volumes:
- pgdata:/var/lib/postgresql/data
environment:
- POSTGRES_DB=myapp
- POSTGRES_USER=app
- POSTGRES_PASSWORD_FILE=/run/secrets/db_password
redis:
<<: *default-service
image: redis:7-alpine
volumes:
- redisdata:/data
networks:
app-network:
driver: bridge
volumes:
pgdata:
redisdata:
Advanced Docker Compose with multiple anchors:
# Advanced: Multiple x- extension fields for composability
x-logging: &logging
logging:
driver: json-file
options:
max-size: "10m"
max-file: "5"
x-healthcheck-http: &healthcheck-http
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
x-healthcheck-tcp: &healthcheck-tcp
healthcheck:
test: ["CMD-SHELL", "pg_isready -U app || exit 1"]
interval: 10s
timeout: 5s
retries: 5
x-deploy-standard: &deploy-standard
deploy:
replicas: 2
resources:
limits:
cpus: "1.0"
memory: 512M
services:
api:
<<: [*logging, *healthcheck-http, *deploy-standard]
image: myapp-api:latest
ports:
- "8080:3000"
worker:
<<: [*logging, *deploy-standard]
image: myapp-worker:latest
deploy:
replicas: 4 # More workers needed
postgres:
<<: [*logging, *healthcheck-tcp]
image: postgres:16-alpine
8. GitHub Actions: Reusing Steps with Anchors
A caveat first: for years the GitHub Actions workflow parser rejected YAML anchors and aliases outright, and native support has long been one of the most requested features. Verify against the current GitHub documentation before relying on the pattern below; where anchors are accepted, they work within a single workflow file but never across files.
Common patterns include reusing environment variables, step configuration, and matrix definitions:
# .github/workflows/ci.yml
name: CI Pipeline
# Define reusable environment variables
env: &shared-env
NODE_VERSION: "20"
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
jobs:
lint-and-test:
runs-on: ubuntu-latest
env: *shared-env
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: ${{ env.NODE_VERSION }}
cache: npm
- &install-deps
name: Install dependencies
run: npm ci
- name: Lint
run: npm run lint
- name: Test
run: npm test -- --coverage
build:
runs-on: ubuntu-latest
needs: lint-and-test
env: *shared-env
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: ${{ env.NODE_VERSION }}
cache: npm
- *install-deps # Reuse the install step
- name: Build
run: npm run build
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: build-output
path: dist/
9. GitLab CI: Template Jobs and Anchors
GitLab CI has first-class support for hidden jobs (jobs whose names start with a dot) that act as templates. You can get similar results with YAML anchors or with GitLab's extends keyword.
Anchors vs extends:
With YAML anchors:
# .gitlab-ci.yml — Using YAML anchors
stages:
- test
- build
- deploy
# Hidden job as anchor template (dot prefix = hidden)
.default_job: &default_job
image: node:20-alpine
before_script:
- npm ci
cache:
key: $CI_COMMIT_REF_SLUG
paths:
- node_modules/
tags:
- docker
.deploy_template: &deploy_template
image: alpine:latest
before_script:
- apk add --no-cache curl
when: manual
tags:
- docker
# Jobs using anchors
test:
<<: *default_job
stage: test
script:
- npm run lint
- npm test -- --coverage
coverage: '/Lines\s*:\s*(\d+\.?\d*)%/'
build:
<<: *default_job
stage: build
script:
- npm run build
artifacts:
paths:
- dist/
expire_in: 1 week
deploy_staging:
<<: *deploy_template
stage: deploy
script:
- curl -X POST "https://api.example.com/deploy?env=staging"
environment:
name: staging
url: https://staging.example.com
deploy_production:
<<: *deploy_template
stage: deploy
script:
- curl -X POST "https://api.example.com/deploy?env=production"
environment:
name: production
url: https://example.com
only:
- main
With GitLab extends (recommended):
# .gitlab-ci.yml — Using GitLab extends (preferred)
stages:
- test
- build
- deploy
# Hidden template jobs (no anchors needed)
.default_job:
image: node:20-alpine
before_script:
- npm ci
cache:
key: $CI_COMMIT_REF_SLUG
paths:
- node_modules/
tags:
- docker
.deploy_template:
image: alpine:latest
before_script:
- apk add --no-cache curl
when: manual
tags:
- docker
# Jobs using extends (deep merge!)
test:
extends: .default_job
stage: test
script:
- npm run lint
- npm test
build:
extends: .default_job
stage: build
script:
- npm run build
deploy_staging:
extends: .deploy_template
stage: deploy
script:
- curl -X POST "https://api.example.com/deploy?env=staging"
environment:
name: staging
Comparison:
| Feature | YAML anchors | GitLab extends |
|---|---|---|
| Merge type | Shallow merge | Deep merge |
| Readability | Moderate | High |
| Cross-file | No | Yes (via include) |
| Standard YAML | Yes | No (GitLab-specific) |
10. Kubernetes: Common Labels and Resource Limits
Kubernetes manifests repeat metadata, labels, resource limits, and environment variables endlessly. kubectl does not treat anchors specially (it applies the parsed YAML), and, crucially, anchors never cross --- document boundaries. The multi-document pattern below therefore only works if a preprocessing step resolves the anchors before kubectl sees the file, or if all resources live in a single document; treat it as an illustration of the idea, not a file you can kubectl apply as-is.
A common pattern for Kubernetes:
# kubernetes-manifests.yaml
# Common definitions (using YAML multi-document with ---)
# Shared labels and metadata
_anchors:
labels: &common-labels
app.kubernetes.io/part-of: myapp
app.kubernetes.io/managed-by: kubectl
team: backend
environment: production
resources: &default-resources
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
env: &common-env
- name: LOG_LEVEL
value: "info"
- name: TZ
value: "UTC"
- name: OTEL_EXPORTER_ENDPOINT
value: "http://otel-collector:4317"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-server
labels:
<<: *common-labels
app.kubernetes.io/name: api-server
app.kubernetes.io/component: api
spec:
replicas: 3
selector:
matchLabels:
app.kubernetes.io/name: api-server
template:
metadata:
labels:
<<: *common-labels
app.kubernetes.io/name: api-server
spec:
containers:
- name: api
image: myapp-api:v2.5.0
resources: *default-resources
env:
*common-env
ports:
- containerPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: worker
labels:
<<: *common-labels
app.kubernetes.io/name: worker
app.kubernetes.io/component: worker
spec:
replicas: 5
selector:
matchLabels:
app.kubernetes.io/name: worker
template:
metadata:
labels:
<<: *common-labels
app.kubernetes.io/name: worker
spec:
containers:
- name: worker
image: myapp-worker:v2.5.0
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
cpu: "1"
memory: 1Gi # Worker needs more memory
env:
*common-env
11. Limitations and Caveats
Anchors and aliases are powerful, but they come with several important limitations:
No cross-file anchors
An anchor is scoped to a single YAML document (within one file, between --- document separators). You cannot reference an anchor defined in a different file, or even in a different document of the same file.
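A parser makes the scoping rule concrete. In this sketch (PyYAML assumed installed), an alias in the second document cannot see an anchor from the first, and parsing fails:

```python
import yaml

stream = """
database: &db_config
  host: localhost
  port: 5432
---
service:
  db: *db_config
"""

try:
    documents = list(yaml.safe_load_all(stream))
    outcome = "parsed"
except yaml.YAMLError:
    outcome = "undefined alias"  # ComposerError: found undefined alias

assert outcome == "undefined alias"
```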
# file-a.yaml
database: &db_config
host: localhost
port: 5432
# file-b.yaml
service:
db: *db_config # ERROR: *db_config is not defined in this file!
# Solution: Keep all anchors and aliases in the same file
# Or use tool-specific features (GitLab include, Helm, etc.)
No JSON compatibility
JSON has no anchors or aliases. If your YAML is converted to JSON (for an API, say), anchors are resolved to their values during parsing. The $ref mechanism of JSON Schema is an entirely different feature.
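Round-tripping through the stdlib json module shows the flattening. A sketch assuming PyYAML is installed:

```python
import json
import yaml

doc = """
defaults: &defaults
  timeout: 30
  retries: 3
service:
  <<: *defaults
  name: api
"""

data = yaml.safe_load(doc)
as_json = json.dumps(data, sort_keys=True)

# "defaults" survives only as an ordinary key; no reference syntax remains.
assert data["service"] == {"timeout": 30, "retries": 3, "name": "api"}
assert "&" not in as_json and "*" not in as_json
```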
# YAML with anchors:
defaults: &defaults
timeout: 30
retries: 3
service:
<<: *defaults
name: api
# Converts to JSON as (anchors resolved):
# {
# "defaults": { "timeout": 30, "retries": 3 },
# "service": { "timeout": 30, "retries": 3, "name": "api" }
# }
# No anchor/alias information is preserved in JSON
YAML bombs (the billion laughs attack)
Recursive or deeply nested anchors can create exponentially large data structures, much like the XML billion laughs attack. This is a well-known security issue:
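No YAML needs to be parsed to see the arithmetic; each level in the example below multiplies the node count by nine:

```python
# Each additional level multiplies the expanded size by the fan-out.
FANOUT = 9  # nine aliases per level in the classic example

def leaf_count(levels):
    """Strings materialized after expanding `levels` nested levels."""
    return FANOUT ** levels

assert leaf_count(5) == 59_049           # levels a through e
assert leaf_count(10) == 3_486_784_401   # ten levels: billions of leaves
```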
# YAML bomb / Billion laughs attack
# WARNING: Do NOT parse this with unlimited settings!
a: &a ["lol","lol","lol","lol","lol","lol","lol","lol","lol"]
b: &b [*a,*a,*a,*a,*a,*a,*a,*a,*a]
c: &c [*b,*b,*b,*b,*b,*b,*b,*b,*b]
d: &d [*c,*c,*c,*c,*c,*c,*c,*c,*c]
e: &e [*d,*d,*d,*d,*d,*d,*d,*d,*d]
# Each level multiplies by 9: 9^5 = 59,049 "lol" strings
# More levels = exponential growth = memory exhaustion
# Safe parsing examples:
# Python (PyYAML) — use SafeLoader
import yaml
with open('config.yaml') as f:
data = yaml.safe_load(f) # SafeLoader blocks code execution (it does NOT cap alias expansion, so also limit input size)
# JavaScript — js-yaml's load() is safe by default (no code execution);
# the 'yaml' npm package additionally caps alias expansion (maxAliasCount, default 100)
const yaml = require('js-yaml');
const data = yaml.load(fs.readFileSync('config.yaml', 'utf8'));
# Go (go-yaml) — recent versions cap alias expansion internally
decoder := yaml.NewDecoder(reader)
decoder.KnownFields(true) // Reject unknown fields (schema hygiene, not bomb protection)
No partial modification of aliases
An alias (*name) yields an exact copy. You cannot change a single field of an aliased mapping without the merge key (<<), and even with <<, only top-level keys can be overridden, never nested ones.
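If you control the code that consumes the parsed YAML, one workaround is to deep-merge after parsing. A hypothetical stdlib-only helper (not part of any YAML spec or library):

```python
def deep_merge(base, override):
    """Recursively merge `override` into `base`, returning a new dict.
    Nested dicts merge key by key; any other value is replaced."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(result.get(key), dict) and isinstance(value, dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

defaults = {"database": {"host": "localhost", "port": 5432, "pool_size": 10}}
staging = deep_merge(defaults, {"database": {"pool_size": 5}})

# Unlike <<, host and port survive the override:
assert staging["database"] == {"host": "localhost", "port": 5432, "pool_size": 5}
```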
# Cannot partially modify an alias
defaults: &defaults
database:
host: localhost
port: 5432
pool_size: 10
# This REPLACES the entire database mapping, not just pool_size:
staging:
<<: *defaults
database:
pool_size: 5
# host and port are LOST! << only merges top-level keys.
# Result (NOT what you might expect):
# staging:
# database:
# pool_size: 5 # host and port are gone!
# Solution: Anchor at a finer granularity
db_host: &db_host "localhost"
db_port: &db_port 5432
staging:
database:
host: *db_host
port: *db_port
pool_size: 5 # Only pool_size is different
Inconsistent parser support
The merge key (<<) comes from the YAML 1.1 era and was dropped from the YAML 1.2 specification. Most parsers still honor it for backward compatibility, but check your parser's documentation.
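It is easy to probe whether your parser honors <<. PyYAML (assumed installed) still implements the YAML 1.1 merge key, even through safe_load:

```python
import yaml

probe = """
base: &base
  a: 1
merged:
  <<: *base
  b: 2
"""

data = yaml.safe_load(probe)

# A parser with merge-key support flattens the mapping; a strict
# YAML 1.2 parser without the extension would keep a literal "<<" key.
merge_supported = data["merged"].get("a") == 1 and "<<" not in data["merged"]
assert merge_supported
```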
Circular references
YAML technically permits circular references, but most parsers reject them or enforce recursion limits. Avoid anchors that reference themselves.
12. Alternatives to Anchors and Aliases
When YAML anchors can't do what you need, consider these alternatives:
YAML includes (non-standard)
Some tools support a custom !include tag for pulling in external files. It is not part of the YAML specification, but it is implemented by Home Assistant and many custom YAML loaders (Ansible achieves similar reuse through its own include directives).
# Non-standard !include (supported by some tools)
# config.yaml
database: !include database.yaml
logging: !include logging.yaml
# database.yaml
host: localhost
port: 5432
name: myapp
JSON $ref
JSON Schema and OpenAPI use $ref for cross-document references. It is a different mechanism from YAML anchors, and it works across files.
# OpenAPI / JSON Schema $ref
paths:
/users:
get:
responses:
200:
content:
application/json:
schema:
$ref: '#/components/schemas/UserList'
400:
$ref: '#/components/responses/BadRequest'
Helm templates
For Kubernetes, Helm provides Go templating with values.yaml, named templates, helper functions, and conditional logic, which is far more powerful than anchors for complex deployments.
# Helm template example (templates/deployment.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "myapp.fullname" . }}
labels:
{{- include "myapp.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
template:
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
resources:
{{- toYaml .Values.resources | nindent 12 }}
Jsonnet
A data templating language that compiles to JSON, with variables, functions, conditionals, and imports. Used by Grafana, Tanka, and other tools.
// Jsonnet example
local defaults = {
replicas: 3,
image: 'myapp:latest',
resources: {
limits: { cpu: '500m', memory: '512Mi' },
},
};
{
api: defaults + {
name: 'api-server',
ports: [{ containerPort: 3000 }],
},
worker: defaults + {
name: 'worker',
replicas: 5, // Override
},
}
Kustomize
A Kubernetes-native configuration management tool with overlays, patches, and cross-file transformations, no templating required.
Dhall
A programmable configuration language with a type system, imports, and functions. It compiles to YAML, JSON, and other formats.
When to use what:
- Same-file repetition -> YAML anchors & aliases
- Cross-file sharing in GitLab CI -> extends + include
- Kubernetes config management -> Kustomize or Helm
- Complex logic / conditionals -> Jsonnet or Dhall
- API specifications -> JSON $ref (OpenAPI)
13. Frequently Asked Questions
What is the difference between a YAML anchor and an alias?
An anchor (&name) labels a node so it can be referenced later. An alias (*name) is a reference to that anchored node. Think of the anchor as the "definition" and the alias as the "use". The anchor must be defined before any alias that uses it.
Can YAML anchors work across multiple files?
No. YAML anchors are scoped to a single document within a single file and cannot reference nodes in other files. For cross-file reuse, rely on tool-specific features: GitLab CI's extends with include, Kustomize for Kubernetes, Helm templates, or a custom YAML loader with an !include tag.
What does <<: *alias mean in YAML?
<< is the merge key: it inserts every key-value pair from the aliased mapping into the current mapping, much like object spread in JavaScript. Local keys take precedence over merged keys, so you can override specific fields while inheriting the rest.
Are YAML anchors a security risk?
They can be. A YAML bomb (also known as the billion laughs attack) uses nested anchors to build exponentially large data structures that exhaust memory. Always impose parsing limits on untrusted YAML. Protection varies by parser: the yaml npm package caps alias expansion (maxAliasCount), and recent go-yaml releases limit expansion internally, but PyYAML's SafeLoader only prevents code execution; it does not cap alias expansion, so limit input size as well.
Should I use YAML anchors or GitLab CI's extends?
GitLab recommends extends over anchors: it performs deep merges (anchors merge shallowly), works across files with include, and reads better. Use anchors only to reuse individual scalar values or when extends can't express your use case.