10 Commits

Author SHA1 Message Date
760e4a9b03 add composer 2025-11-17 10:38:06 +08:00
8cff19ecca add composer 2025-11-17 10:16:27 +08:00
6c6d5273ac complete docker cache 2025-11-17 09:18:46 +08:00
1db9badec8 fix: docker cache 2025-11-17 09:15:35 +08:00
764f642f2b fix 2025-11-15 23:03:26 +08:00
467cabe238 fix pypi 2025-11-15 23:03:02 +08:00
efd7737765 fix go cache 2025-11-15 22:05:39 +08:00
319d0021b9 fix: npm pkg 2025-11-15 21:35:41 +08:00
bb00250dda stage 1 2025-11-15 21:15:12 +08:00
0d52bae1e8 feat: 004/phase 1 2025-11-14 23:54:50 +08:00
61 changed files with 3703 additions and 323 deletions

View File

@@ -3,6 +3,8 @@
.github
.codex
.specify
Dockerfile*
.dockerignore
configs
node_modules/
coverage/

View File

@@ -7,6 +7,7 @@ Auto-generated from all feature plans. Last updated: 2025-11-13
- Local filesystem cache directory `StoragePath/<Hub>/<path>`, combined with file `mtime` + upstream HEAD revalidation (002-fiber-single-proxy)
- Go 1.25+ (statically linked single binary) + Fiber v3 (HTTP service), Viper (config loading/validation), Logrus + Lumberjack (structured logging & rotation), stdlib `net/http`/`io` (proxy origin fetch) (003-hub-auth-fields)
- Still caches bodies in the local `StoragePath/<Hub>/<path>` directory and relies on HEAD revalidation for dynamic tags (003-hub-auth-fields)
- Local filesystem cache directory `StoragePath/<Hub>/<path>`; modules must reuse the original path layout directly (004-modular-proxy-cache)
- Go 1.25+ (statically linked, single-binary delivery) + Fiber v3 (HTTP service), Viper (config), Logrus + Lumberjack (structured logging & rotation), stdlib `net/http`/`io` (001-config-bootstrap)
@@ -26,9 +27,9 @@ tests/
Go 1.25+ (statically linked, single-binary delivery): Follow standard conventions
## Recent Changes
- 004-modular-proxy-cache: Added Go 1.25+ (statically linked, single-binary delivery) + Fiber v3 (HTTP service), Viper (config), Logrus + Lumberjack (structured logging & rotation), stdlib `net/http`/`io`
- 003-hub-auth-fields: Added Go 1.25+ (statically linked single binary) + Fiber v3 (HTTP service), Viper (config loading/validation), Logrus + Lumberjack (structured logging & rotation), stdlib `net/http`/`io` (proxy origin fetch)
- 002-fiber-single-proxy: Added Go 1.25+ (statically linked, single-binary delivery) + Fiber v3 (HTTP service), Viper (config), Logrus + Lumberjack (structured logging & rotation), stdlib `net/http`/`io`
<!-- MANUAL ADDITIONS START -->

View File

@@ -1,7 +1,7 @@
GO ?= /home/rogee/.local/go/bin/go
GOCACHE ?= /tmp/go-build
.PHONY: build fmt test test-all run
.PHONY: build fmt test test-all run modules-test
build:
$(GO) build .
@@ -17,3 +17,6 @@ test-all:
run:
$(GO) run . --config ./config.toml
modules-test:
$(GO) test ./internal/hubmodule/...

View File

@@ -38,19 +38,20 @@ Password = "s3cr3t"
1. Copy `configs/config.example.toml` to `config.toml` in the working directory and adjust the `[[Hub]]` configuration:
- Add/modify `ListenPort` in the global section and remove `Port` from each Hub
- Set `Type` for each Hub and add `Username`/`Password` as needed
- Set `Domain`, `Upstream`, `StoragePath`, and other fields per the quickstart examples.
- Set `Type` for each Hub and add `Module` as needed (defaults to `legacy`; custom modules must be registered under `internal/hubmodule/<module-key>/`)
- Set `Domain`, `Upstream`, `StoragePath`, and other fields per the quickstart examples, and add `Username`/`Password` as needed
2. See [`specs/003-hub-auth-fields/quickstart.md`](specs/003-hub-auth-fields/quickstart.md) to complete config validation, credential verification, and log checks.
3. Common commands:
- `any-hub --check-config --config ./config.toml`
- `any-hub --config ./config.toml`
- `any-hub --version`
## Example Proxies
## Modular Proxies and Examples
- `configs/docker.sample.toml` and `configs/npm.sample.toml` show minimal Docker/NPM configs; copy them and adjust Domain, Type, StoragePath, and credentials as needed
- Run `./scripts/demo-proxy.sh docker` (or `npm`) to load the sample config and start the proxy, for quickly verifying Host routing and cache hits
- See [`specs/003-hub-auth-fields/quickstart.md`](specs/003-hub-auth-fields/quickstart.md) for the example playbook and FAQ
- `configs/docker.sample.toml` and `configs/npm.sample.toml` show minimal Docker/NPM configs, including the new `Module` field; copy them and adjust as needed.
- Run `./scripts/demo-proxy.sh docker` (or `npm`) to load the sample config and start the proxy; log lines carry a `module_key` field so you can confirm whether `legacy` or a custom module handled the request
- To build a custom module, copy `internal/hubmodule/template/`, call `hubmodule.MustRegister` with the metadata in `init()`, inject the module-specific `ProxyHandler` via `proxy.RegisterModuleHandler`, then run `make modules-test` to self-check
- See [`specs/003-hub-auth-fields/quickstart.md`](specs/003-hub-auth-fields/quickstart.md) and this feature's [`quickstart.md`](specs/004-modular-proxy-cache/quickstart.md) for the playbook and FAQ.
## CLI 标志

View File

@@ -69,3 +69,13 @@ Proxy = ""
Username = ""
Password = ""
Type = "pypi"
# Composer repository
[[Hub]]
Domain = "composer.hub.local"
Name = "composer"
Upstream = "https://repo.packagist.org"
Proxy = ""
Username = ""
Password = ""
Type = "composer"

View File

@@ -18,6 +18,7 @@ Domain = "docker.hub.local"
Upstream = "https://registry-1.docker.io"
Proxy = ""
Type = "docker" # required: docker|npm|go
Module = "legacy" # proxy+cache module used by this Hub; defaults to legacy
Username = "" # optional: must be set together with Password
Password = ""
CacheTTL = 43200 # optional: overrides the global cache TTL

View File

@@ -15,6 +15,7 @@ Domain = "docker.hub.local"
Upstream = "https://registry-1.docker.io"
Proxy = ""
Type = "docker" # docker|npm|go
Module = "legacy"
Username = ""
Password = ""
CacheTTL = 43200

View File

@@ -15,6 +15,7 @@ Domain = "npm.hub.local"
Upstream = "https://registry.npmjs.org"
Proxy = ""
Type = "npm" # docker|npm|go
Module = "legacy"
Username = ""
Password = ""
CacheTTL = 43200

View File

@@ -0,0 +1,58 @@
# Modular Hub Migration Playbook
This playbook describes how to cut a hub over from the shared legacy adapter to a dedicated module using the new rollout flags, diagnostics endpoint, and structured logs delivered in feature `004-modular-proxy-cache`.
## Prerequisites
- Target module must be registered via `hubmodule.MustRegister` and expose a proxy handler through `proxy.RegisterModuleHandler`.
- `config.toml` must already map the hub to its target module through `[[Hub]].Module`.
- Operators must have access to the port the running binary listens on (default `:5000`) to query `/-/modules`.
## Rollout Workflow
1. **Snapshot current state**
Run `curl -s http://localhost:5000/-/modules | jq '.hubs[] | select(.hub_name=="<hub>")'` to capture the current `module_key` and `rollout_flag`. Legacy hubs report `module_key=legacy` and `rollout_flag=legacy-only`.
2. **Prepare config for dual traffic**
Edit the hub block to target the new module while keeping rollback safety:
```toml
[[Hub]]
Name = "npm-prod"
Domain = "npm.example.com"
Upstream = "https://registry.npmjs.org"
Module = "npm"
Rollout = "dual"
```
   Dual mode routes traffic through the new module while observability stays tagged as a partial rollout.
3. **Deploy and monitor**
Restart the service and tail logs filtered by `module_key`:
```sh
jq 'select(.module_key=="npm" and .rollout_flag=="dual")' /var/log/any-hub.json
```
Every request now carries `module_key`/`rollout_flag`, allowing dashboards or `grep`-based analyses without extra parsing.
4. **Verify diagnostics**
Query `/-/modules/npm` to inspect the registered metadata and confirm cache strategy, or `/-/modules` to ensure the hub binding reflects `rollout_flag=dual`.
5. **Promote to modular**
Once metrics are healthy, change `Rollout = "modular"` in config and redeploy. Continue monitoring logs to make sure both `module_key` and `rollout_flag` show the fully promoted state.
6. **Rollback procedure**
   To roll back, set `Rollout = "legacy-only"` (without touching `Module`). The runtime forces traffic through the legacy module while keeping the desired module declaration for later reattempts. Confirm via diagnostics (`module_key` reverts to `legacy`) before announcing the rollback complete.
## Observability Checklist
- **Logs**: Every proxy log line now contains `hub`, `module_key`, `rollout_flag`, upstream status, and `request_id`. Capture at least five minutes of traffic per flag change.
- **Diagnostics**: Store JSON snapshots from `/-/modules` before and after each rollout stage for incident timelines.
- **Config History**: Keep the `config.toml` diff (especially `Rollout` changes) attached to change records for auditability.
## Troubleshooting
- **Error: `module_not_found` during diagnostics** → module key not registered; ensure the module package's `init()` calls `hubmodule.MustRegister`.
- **Requests still tagged with `legacy-only` after promotion** → double-check the running process uses the updated config path (`ANY_HUB_CONFIG` vs `--config`) and restart the service.
- **Diagnostics 404** → confirm you are hitting the correct port and that the CLI user/network path allows HTTP access; the endpoint ignores Host headers, so `curl http://127.0.0.1:<port>/-/modules` should succeed locally.

internal/cache/doc.go
View File

@@ -1,7 +1,10 @@
// Package cache defines the disk-backed store responsible for translating hub
// requests into StoragePath/<hub>/<path>.body files. The store exposes read/write
// primitives with safe semantics (temp file + rename) and surfaces file info
// (size, modtime) for higher layers to implement conditional revalidation.
// Proxy handlers depend on this package to stream cached responses or trigger
// upstream fetches without duplicating filesystem logic.
// requests into StoragePath/<hub>/<path> directories that mirror upstream
// paths. When a given path also needs to act as the parent of other entries
// (for example an npm metadata + tarball directory), the body is stored in a
// `__content` file under that directory so both forms can coexist. The store
// exposes read/write primitives
// with safe semantics (temp file + rename) and surfaces file info (size, modtime)
// for higher layers to implement conditional revalidation. Proxy handlers depend
// on this package to stream cached responses or trigger upstream fetches without
// duplicating filesystem logic.
package cache

View File

@@ -15,8 +15,6 @@ import (
"time"
)
const cacheFileSuffix = ".body"
// NewStore builds a disk cache rooted at basePath; a single instance is shared by the whole service.
func NewStore(basePath string) (Store, error) {
if basePath == "" {
@@ -58,13 +56,27 @@ func (s *fileStore) Get(ctx context.Context, locator Locator) (*ReadResult, erro
default:
}
primary, legacy, err := s.entryPaths(locator)
filePath, err := s.entryPath(locator)
if err != nil {
return nil, err
}
filePath, info, f, err := s.openEntryFile(primary, legacy)
info, err := os.Stat(filePath)
if err != nil {
if errors.Is(err, fs.ErrNotExist) || isNotDirError(err) {
return nil, ErrNotFound
}
return nil, err
}
if info.IsDir() {
return nil, ErrNotFound
}
file, err := os.Open(filePath)
if err != nil {
if errors.Is(err, fs.ErrNotExist) || isNotDirError(err) {
return nil, ErrNotFound
}
return nil, err
}
@@ -74,11 +86,7 @@ func (s *fileStore) Get(ctx context.Context, locator Locator) (*ReadResult, erro
SizeBytes: info.Size(),
ModTime: info.ModTime(),
}
return &ReadResult{
Entry: entry,
Reader: f,
}, nil
return &ReadResult{Entry: entry, Reader: file}, nil
}
func (s *fileStore) Put(ctx context.Context, locator Locator, body io.Reader, opts PutOptions) (*Entry, error) {
@@ -88,12 +96,12 @@ func (s *fileStore) Put(ctx context.Context, locator Locator, body io.Reader, op
}
defer unlock()
filePath, legacyPath, err := s.entryPaths(locator)
filePath, err := s.entryPath(locator)
if err != nil {
return nil, err
}
if err := s.ensureDirWithUpgrade(filepath.Dir(filePath)); err != nil {
if err := os.MkdirAll(filepath.Dir(filePath), 0o755); err != nil {
return nil, err
}
@@ -109,12 +117,12 @@ func (s *fileStore) Put(ctx context.Context, locator Locator, body io.Reader, op
err = closeErr
}
if err != nil {
os.Remove(tempName)
_ = os.Remove(tempName)
return nil, err
}
if err := os.Rename(tempName, filePath); err != nil {
os.Remove(tempName)
_ = os.Remove(tempName)
return nil, err
}
@@ -125,7 +133,6 @@ func (s *fileStore) Put(ctx context.Context, locator Locator, body io.Reader, op
if err := os.Chtimes(filePath, modTime, modTime); err != nil {
return nil, err
}
_ = os.Remove(legacyPath)
entry := Entry{
Locator: locator,
@@ -143,16 +150,13 @@ func (s *fileStore) Remove(ctx context.Context, locator Locator) error {
}
defer unlock()
filePath, legacyPath, err := s.entryPaths(locator)
filePath, err := s.entryPath(locator)
if err != nil {
return err
}
if err := os.Remove(filePath); err != nil && !errors.Is(err, fs.ErrNotExist) {
return err
}
if err := os.Remove(legacyPath); err != nil && !errors.Is(err, fs.ErrNotExist) {
return err
}
return nil
}
@@ -179,7 +183,7 @@ func (s *fileStore) lockEntry(locator Locator) (func(), error) {
}, nil
}
func (s *fileStore) path(locator Locator) (string, error) {
func (s *fileStore) entryPath(locator Locator) (string, error) {
if locator.HubName == "" {
return "", errors.New("hub name required")
}
@@ -203,121 +207,6 @@ func (s *fileStore) path(locator Locator) (string, error) {
return filePath, nil
}
func (s *fileStore) entryPaths(locator Locator) (string, string, error) {
legacyPath, err := s.path(locator)
if err != nil {
return "", "", err
}
return legacyPath + cacheFileSuffix, legacyPath, nil
}
func (s *fileStore) openEntryFile(primaryPath, legacyPath string) (string, fs.FileInfo, *os.File, error) {
info, err := os.Stat(primaryPath)
if err == nil {
if info.IsDir() {
return "", nil, nil, ErrNotFound
}
f, err := os.Open(primaryPath)
if err != nil {
if errors.Is(err, fs.ErrNotExist) || isNotDirError(err) {
return "", nil, nil, ErrNotFound
}
return "", nil, nil, err
}
return primaryPath, info, f, nil
}
if !errors.Is(err, fs.ErrNotExist) && !isNotDirError(err) {
return "", nil, nil, err
}
info, err = os.Stat(legacyPath)
if err != nil {
if errors.Is(err, fs.ErrNotExist) || isNotDirError(err) {
return "", nil, nil, ErrNotFound
}
return "", nil, nil, err
}
if info.IsDir() {
return "", nil, nil, ErrNotFound
}
if migrateErr := s.migrateLegacyFile(primaryPath, legacyPath); migrateErr == nil {
return s.openEntryFile(primaryPath, legacyPath)
}
f, err := os.Open(legacyPath)
if err != nil {
if errors.Is(err, fs.ErrNotExist) || isNotDirError(err) {
return "", nil, nil, ErrNotFound
}
return "", nil, nil, err
}
return legacyPath, info, f, nil
}
func (s *fileStore) migrateLegacyFile(primaryPath, legacyPath string) error {
if legacyPath == "" || primaryPath == legacyPath {
return nil
}
if _, err := os.Stat(legacyPath); err != nil {
return err
}
if _, err := os.Stat(primaryPath); err == nil {
if removeErr := os.Remove(legacyPath); removeErr != nil && !errors.Is(removeErr, fs.ErrNotExist) {
return removeErr
}
return nil
}
return os.Rename(legacyPath, primaryPath)
}
func (s *fileStore) ensureDirWithUpgrade(dir string) error {
for i := 0; i < 8; i++ {
if err := os.MkdirAll(dir, 0o755); err != nil {
if isNotDirError(err) {
var pathErr *os.PathError
if errors.As(err, &pathErr) {
if upgradeErr := s.upgradeLegacyNode(pathErr.Path); upgradeErr != nil {
return upgradeErr
}
continue
}
}
return err
}
return nil
}
return fmt.Errorf("ensure cache directory failed for %s", dir)
}
func (s *fileStore) upgradeLegacyNode(conflictPath string) error {
if conflictPath == "" {
return errors.New("empty conflict path")
}
rel, err := filepath.Rel(s.basePath, conflictPath)
if err != nil {
return err
}
if strings.HasPrefix(rel, "..") {
return fmt.Errorf("conflict path outside storage: %s", conflictPath)
}
info, err := os.Stat(conflictPath)
if err != nil {
return err
}
if info.IsDir() {
return nil
}
if strings.HasSuffix(conflictPath, cacheFileSuffix) {
return nil
}
newPath := conflictPath + cacheFileSuffix
if _, err := os.Stat(newPath); err == nil {
return os.Remove(conflictPath)
}
return os.Rename(conflictPath, newPath)
}
func isNotDirError(err error) bool {
if err == nil {
return false

View File

@@ -9,7 +9,7 @@ import (
// Store manages reads and writes of the disk cache. The disk layout follows:
//
// <StoragePath>/<HubName>/<path>.body # actual body
// <StoragePath>/<HubName>/<path> # actual body (identical to the request path)
//
// Each entry consists of the body file alone; its ModTime/Size come from the filesystem.
type Store interface {

View File

@@ -3,12 +3,8 @@ package cache
import (
"bytes"
"context"
"errors"
"io"
"io/fs"
"os"
"path/filepath"
"strings"
"testing"
"time"
)
@@ -42,9 +38,6 @@ func TestStorePutAndGet(t *testing.T) {
if !result.Entry.ModTime.Equal(modTime) {
t.Fatalf("modtime mismatch: expected %v got %v", modTime, result.Entry.ModTime)
}
if !strings.HasSuffix(result.Entry.FilePath, cacheFileSuffix) {
t.Fatalf("expected cache file suffix %s, got %s", cacheFileSuffix, result.Entry.FilePath)
}
}
func TestStoreGetMissing(t *testing.T) {
@@ -78,11 +71,11 @@ func TestStoreIgnoresDirectories(t *testing.T) {
t.Fatalf("unexpected store type %T", store)
}
filePath, err := fs.path(locator)
filePath, err := fs.entryPath(locator)
if err != nil {
t.Fatalf("path error: %v", err)
}
if err := os.MkdirAll(filePath+cacheFileSuffix, 0o755); err != nil {
if err := os.MkdirAll(filePath, 0o755); err != nil {
t.Fatalf("mkdir error: %v", err)
}
@@ -91,82 +84,6 @@ func TestStoreIgnoresDirectories(t *testing.T) {
}
}
func TestStoreMigratesLegacyEntryOnGet(t *testing.T) {
store := newTestStore(t)
fs, ok := store.(*fileStore)
if !ok {
t.Fatalf("unexpected store type %T", store)
}
locator := Locator{HubName: "npm", Path: "/pkg"}
legacyPath, err := fs.path(locator)
if err != nil {
t.Fatalf("path error: %v", err)
}
if err := os.MkdirAll(filepath.Dir(legacyPath), 0o755); err != nil {
t.Fatalf("mkdir error: %v", err)
}
if err := os.WriteFile(legacyPath, []byte("legacy"), 0o644); err != nil {
t.Fatalf("write legacy error: %v", err)
}
result, err := store.Get(context.Background(), locator)
if err != nil {
t.Fatalf("get legacy error: %v", err)
}
body, err := io.ReadAll(result.Reader)
if err != nil {
t.Fatalf("read legacy error: %v", err)
}
result.Reader.Close()
if string(body) != "legacy" {
t.Fatalf("unexpected legacy body: %s", string(body))
}
if !strings.HasSuffix(result.Entry.FilePath, cacheFileSuffix) {
t.Fatalf("expected migrated file suffix, got %s", result.Entry.FilePath)
}
if _, statErr := os.Stat(legacyPath); !errors.Is(statErr, fs.ErrNotExist) {
t.Fatalf("expected legacy path removed, got %v", statErr)
}
}
func TestStoreHandlesAncestorFileConflict(t *testing.T) {
store := newTestStore(t)
fs, ok := store.(*fileStore)
if !ok {
t.Fatalf("unexpected store type %T", store)
}
metaLocator := Locator{HubName: "npm", Path: "/pkg"}
legacyPath, err := fs.path(metaLocator)
if err != nil {
t.Fatalf("path error: %v", err)
}
if err := os.MkdirAll(filepath.Dir(legacyPath), 0o755); err != nil {
t.Fatalf("mkdir error: %v", err)
}
if err := os.WriteFile(legacyPath, []byte("legacy"), 0o644); err != nil {
t.Fatalf("write legacy error: %v", err)
}
tarLocator := Locator{HubName: "npm", Path: "/pkg/-/pkg-1.0.0.tgz"}
if _, err := store.Put(context.Background(), tarLocator, bytes.NewReader([]byte("tar")), PutOptions{}); err != nil {
t.Fatalf("put tar error: %v", err)
}
if _, err := os.Stat(legacyPath); !errors.Is(err, fs.ErrNotExist) {
t.Fatalf("expected legacy metadata renamed, got %v", err)
}
if _, err := os.Stat(legacyPath + cacheFileSuffix); err != nil {
t.Fatalf("expected migrated legacy cache, got %v", err)
}
primary, _, err := fs.entryPaths(tarLocator)
if err != nil {
t.Fatalf("entry path error: %v", err)
}
if _, err := os.Stat(primary); err != nil {
t.Fatalf("expected tar cache file, got %v", err)
}
}
// newTestStore returns a Store backed by a temporary directory.
func newTestStore(t *testing.T) Store {
t.Helper()

internal/cache/writer.go
View File

@@ -0,0 +1,57 @@
package cache
import (
"context"
"errors"
"io"
"time"
"github.com/any-hub/any-hub/internal/hubmodule"
)
// ErrStoreUnavailable indicates that no cache store has been injected for the current module.
var ErrStoreUnavailable = errors.New("cache store unavailable")
// StrategyWriter carries a module's cache strategy, providing TTL decisions and a write wrapper.
type StrategyWriter struct {
store Store
strategy hubmodule.CacheStrategyProfile
now func() time.Time
}
// NewStrategyWriter builds a strategy-aware writer; time.Now is used as the default clock.
func NewStrategyWriter(store Store, strategy hubmodule.CacheStrategyProfile) StrategyWriter {
return StrategyWriter{
store: store,
strategy: strategy,
now: time.Now,
}
}
// Enabled reports whether cache writes are currently possible.
func (w StrategyWriter) Enabled() bool {
return w.store != nil
}
// Put writes the cache body, preserving the same semantics as Store.
func (w StrategyWriter) Put(ctx context.Context, locator Locator, body io.Reader, opts PutOptions) (*Entry, error) {
if w.store == nil {
return nil, ErrStoreUnavailable
}
return w.store.Put(ctx, locator, body, opts)
}
// ShouldBypassValidation uses the strategy TTL to decide whether the cached entry can be reused directly, avoiding a redundant HEAD.
func (w StrategyWriter) ShouldBypassValidation(entry Entry) bool {
ttl := w.strategy.TTLHint
if ttl <= 0 {
return false
}
expireAt := entry.ModTime.Add(ttl)
return w.now().Before(expireAt)
}
// SupportsValidation reports whether the current strategy allows revalidation via HEAD/ETag and the like.
func (w StrategyWriter) SupportsValidation() bool {
return w.strategy.ValidationMode != hubmodule.ValidationModeNever
}

View File

@@ -5,10 +5,13 @@ import (
"path/filepath"
"reflect"
"strconv"
"strings"
"time"
"github.com/mitchellh/mapstructure"
"github.com/spf13/viper"
"github.com/any-hub/any-hub/internal/hubmodule"
)
// Load reads and parses the TOML config file, injecting defaults and validation logic.
@@ -86,6 +89,22 @@ func applyHubDefaults(h *HubConfig) {
if h.CacheTTL.DurationValue() < 0 {
h.CacheTTL = Duration(0)
}
if trimmed := strings.TrimSpace(h.Module); trimmed == "" {
typeKey := strings.ToLower(strings.TrimSpace(h.Type))
if meta, ok := hubmodule.Resolve(typeKey); ok {
h.Module = meta.Key
} else {
h.Module = hubmodule.DefaultModuleKey()
}
} else {
h.Module = strings.ToLower(trimmed)
}
if rollout := strings.TrimSpace(h.Rollout); rollout != "" {
h.Rollout = strings.ToLower(rollout)
}
if h.ValidationMode == "" {
h.ValidationMode = string(hubmodule.ValidationModeETag)
}
}
func durationDecodeHook() mapstructure.DecodeHookFunc {
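The module-defaulting rule in `applyHubDefaults` can be summarized standalone: an empty `Module` falls back to a module named after the hub `Type` if one is registered, otherwise to `legacy`; a non-empty `Module` is only lowercased here (full validation happens later). The sketch below uses a hypothetical in-memory set in place of the real `hubmodule` registry.

```go
package main

import (
	"fmt"
	"strings"
)

// registered stands in for the hubmodule registry (hypothetical set for
// illustration; the real registry is populated by module init() functions).
var registered = map[string]bool{"legacy": true, "npm": true, "docker": true}

// defaultModule mirrors the applyHubDefaults logic for the Module field.
func defaultModule(module, hubType string) string {
	if trimmed := strings.TrimSpace(module); trimmed != "" {
		return strings.ToLower(trimmed) // explicit choice wins; normalize case only
	}
	typeKey := strings.ToLower(strings.TrimSpace(hubType))
	if registered[typeKey] {
		return typeKey // a module named after the hub type exists
	}
	return "legacy" // final fallback
}

func main() {
	fmt.Println(defaultModule("", "npm"))   // npm
	fmt.Println(defaultModule("", "pypi"))  // legacy (no pypi module in this sketch)
	fmt.Println(defaultModule("NPM", "go")) // npm
}
```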

View File

@@ -0,0 +1,9 @@
package config
import (
_ "github.com/any-hub/any-hub/internal/hubmodule/composer"
_ "github.com/any-hub/any-hub/internal/hubmodule/docker"
_ "github.com/any-hub/any-hub/internal/hubmodule/legacy"
_ "github.com/any-hub/any-hub/internal/hubmodule/npm"
_ "github.com/any-hub/any-hub/internal/hubmodule/pypi"
)

View File

@@ -0,0 +1,27 @@
package config
import (
"time"
"github.com/any-hub/any-hub/internal/hubmodule"
"github.com/any-hub/any-hub/internal/hubmodule/legacy"
)
// HubRuntime merges the Hub config with module metadata so the runtime can access strategies quickly.
type HubRuntime struct {
Config HubConfig
Module hubmodule.ModuleMetadata
CacheStrategy hubmodule.CacheStrategyProfile
Rollout legacy.RolloutFlag
}
// BuildHubRuntime creates the runtime descriptor from the Hub config and module metadata, applying the final TTL override.
func BuildHubRuntime(cfg HubConfig, meta hubmodule.ModuleMetadata, ttl time.Duration, flag legacy.RolloutFlag) HubRuntime {
strategy := hubmodule.ResolveStrategy(meta, cfg.StrategyOverrides(ttl))
return HubRuntime{
Config: cfg,
Module: meta,
CacheStrategy: strategy,
Rollout: flag,
}
}

View File

@@ -0,0 +1,62 @@
package config
import (
"fmt"
"strings"
"github.com/any-hub/any-hub/internal/hubmodule"
"github.com/any-hub/any-hub/internal/hubmodule/legacy"
)
// parseRolloutFlag normalizes the rollout field from config and, combined with the module key, returns the final state.
func parseRolloutFlag(raw string, moduleKey string) (legacy.RolloutFlag, error) {
normalized := strings.ToLower(strings.TrimSpace(raw))
if normalized == "" {
return defaultRolloutFlag(moduleKey), nil
}
switch normalized {
case string(legacy.RolloutLegacyOnly):
return legacy.RolloutLegacyOnly, nil
case string(legacy.RolloutDual):
if moduleKey == hubmodule.DefaultModuleKey() {
return legacy.RolloutLegacyOnly, nil
}
return legacy.RolloutDual, nil
case string(legacy.RolloutModular):
if moduleKey == hubmodule.DefaultModuleKey() {
return legacy.RolloutLegacyOnly, nil
}
return legacy.RolloutModular, nil
default:
return "", fmt.Errorf("unsupported rollout value: %s", raw)
}
}
func defaultRolloutFlag(moduleKey string) legacy.RolloutFlag {
if strings.TrimSpace(moduleKey) == "" || moduleKey == hubmodule.DefaultModuleKey() {
return legacy.RolloutLegacyOnly
}
return legacy.RolloutModular
}
// EffectiveModuleKey computes the module that actually runs, based on the rollout state.
func EffectiveModuleKey(moduleKey string, flag legacy.RolloutFlag) string {
if flag == legacy.RolloutLegacyOnly {
return hubmodule.DefaultModuleKey()
}
normalized := strings.ToLower(strings.TrimSpace(moduleKey))
if normalized == "" {
return hubmodule.DefaultModuleKey()
}
return normalized
}
// RolloutFlagValue returns the current Hub's rollout flag (assuming Validate has already passed).
func (h HubConfig) RolloutFlagValue() legacy.RolloutFlag {
flag := legacy.RolloutFlag(strings.ToLower(strings.TrimSpace(h.Rollout)))
if flag == "" {
return defaultRolloutFlag(h.Module)
}
return flag
}
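The interaction between the declared module and the rollout flag can be sketched standalone. The snippet below mirrors the `EffectiveModuleKey` logic with local constants (hypothetical copies of the `legacy` package's flag values, not the real types): `legacy-only` always wins, and a blank module key degrades to the default.

```go
package main

import (
	"fmt"
	"strings"
)

// Local copies of the rollout flags and default key for illustration.
const (
	rolloutLegacyOnly = "legacy-only"
	rolloutDual       = "dual"
	rolloutModular    = "modular"
	defaultModuleKey  = "legacy"
)

// effectiveModuleKey mirrors EffectiveModuleKey: legacy-only forces the
// default module; otherwise the declared key (trimmed, lowercased) is used,
// with an empty key falling back to the default.
func effectiveModuleKey(moduleKey, flag string) string {
	if flag == rolloutLegacyOnly {
		return defaultModuleKey
	}
	normalized := strings.ToLower(strings.TrimSpace(moduleKey))
	if normalized == "" {
		return defaultModuleKey
	}
	return normalized
}

func main() {
	fmt.Println(effectiveModuleKey("npm", rolloutDual))       // npm
	fmt.Println(effectiveModuleKey("npm", rolloutLegacyOnly)) // legacy
	fmt.Println(effectiveModuleKey("  ", rolloutModular))     // legacy
}
```

This is what makes the playbook's rollback step safe: flipping `Rollout` to `legacy-only` reroutes traffic without having to edit `Module`.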

View File

@@ -5,6 +5,8 @@ import (
"strconv"
"strings"
"time"
"github.com/any-hub/any-hub/internal/hubmodule"
)
// Duration provides more flexible deserialization, accepting both plain-second integers and Go duration strings.
@@ -67,9 +69,12 @@ type HubConfig struct {
Upstream string `mapstructure:"Upstream"`
Proxy string `mapstructure:"Proxy"`
Type string `mapstructure:"Type"`
Module string `mapstructure:"Module"`
Rollout string `mapstructure:"Rollout"`
Username string `mapstructure:"Username"`
Password string `mapstructure:"Password"`
CacheTTL Duration `mapstructure:"CacheTTL"`
ValidationMode string `mapstructure:"ValidationMode"`
EnableHeadCheck bool `mapstructure:"EnableHeadCheck"`
}
@@ -103,3 +108,14 @@ func CredentialModes(hubs []HubConfig) []string {
}
return result
}
// StrategyOverrides maps the hub-level TTL/Validation settings onto module strategy overrides.
func (h HubConfig) StrategyOverrides(ttl time.Duration) hubmodule.StrategyOptions {
opts := hubmodule.StrategyOptions{
TTLOverride: ttl,
}
if mode := strings.TrimSpace(h.ValidationMode); mode != "" {
opts.ValidationOverride = hubmodule.ValidationMode(mode)
}
return opts
}

View File

@@ -6,16 +6,19 @@ import (
"net/url"
"strings"
"time"
"github.com/any-hub/any-hub/internal/hubmodule"
)
var supportedHubTypes = map[string]struct{}{
"docker": {},
"npm": {},
"go": {},
"pypi": {},
"docker": {},
"npm": {},
"go": {},
"pypi": {},
"composer": {},
}
const supportedHubTypeList = "docker|npm|go|pypi"
const supportedHubTypeList = "docker|npm|go|pypi|composer"
// Validate performs semantic-level checks to keep the service from starting with an invalid config.
func (c *Config) Validate() error {
@@ -74,6 +77,29 @@ func (c *Config) Validate() error {
}
hub.Type = normalizedType
moduleKey := strings.ToLower(strings.TrimSpace(hub.Module))
if moduleKey == "" {
moduleKey = hubmodule.DefaultModuleKey()
}
if _, ok := hubmodule.Resolve(moduleKey); !ok {
return newFieldError(hubField(hub.Name, "Module"), fmt.Sprintf("unregistered module: %s", moduleKey))
}
hub.Module = moduleKey
flag, err := parseRolloutFlag(hub.Rollout, hub.Module)
if err != nil {
return newFieldError(hubField(hub.Name, "Rollout"), err.Error())
}
hub.Rollout = string(flag)
if hub.ValidationMode != "" {
mode := strings.ToLower(strings.TrimSpace(hub.ValidationMode))
switch mode {
case string(hubmodule.ValidationModeETag), string(hubmodule.ValidationModeLastModified), string(hubmodule.ValidationModeNever):
hub.ValidationMode = mode
default:
return newFieldError(hubField(hub.Name, "ValidationMode"), "only etag/last-modified/never are supported")
}
}
if (hub.Username == "") != (hub.Password == "") {
return newFieldError(hubField(hub.Name, "Username/Password"), "must be provided together or both left empty")
}

View File

@@ -0,0 +1,32 @@
# hubmodule
Centralizes the definition and implementation of Any-Hub's "proxy + cache" module system.
## Directory Layout
```
internal/hubmodule/
├── doc.go # package-level overview and constraints
├── README.md # this file
├── registry.go # module registration/discovery entry point (later task)
└── <module-key>/ # one module per hub type, e.g. legacy, npm, docker
```
## Module Constraints
- **Single interface**: each module implements both the proxy and cache interfaces, avoiding cross-package coupling.
- **Registration flow**: call `hubmodule.Register(ModuleMetadata{...})` in the module's `init()`; a failed registration must panic to block startup.
- **Cache layout**: always `StoragePath/<Hub>/<path>`, i.e. a disk path identical to the upstream request path; when a path must both hold a body and act as the parent of child entries, the body is written to a `__content` file inside that directory.
- **Config injection**: modules receive `HubConfigEntry` and global parameters only via dependency injection; reading files or environment variables directly is forbidden.
- **Observability**: every module must log `module_key`, hit/origin-fetch status, and similar fields, and attach the Hub name to returned errors.
## Development Workflow
1. Copy `internal/hubmodule/template/` (provided by T010) as a starting point.
2. Fill in the module-specific logic and cache strategy, and make sure Chinese comments explain the design.
3. Add `module_test.go` in the module directory, using `httptest.Server` and `t.TempDir()` to reproduce real traffic.
4. Run `make modules-test` to verify the module's unit tests.
5. Update the corresponding `[[Hub]].Module` field in `config.toml` and verify the integration tests before committing.
## Glossary
- **Module Key**: a module's unique identifier (e.g. `legacy`, `npm-tarball`).
- **Cache Strategy Profile**: strategy metadata defining TTL, validation policy, disk layout, etc.
- **Legacy Adapter**: wraps the current shared implementation so unmigrated Hubs keep running during migration.
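The panic-on-failure registration contract can be sketched standalone. The snippet below is a minimal registry with hypothetical trimmed-down types, not the actual `hubmodule` API: duplicate or empty keys panic in `init()`, so a misconfigured module blocks startup instead of failing silently at request time.

```go
package main

import "fmt"

// moduleMetadata is a trimmed-down stand-in for hubmodule.ModuleMetadata.
type moduleMetadata struct {
	Key         string
	Description string
}

var registry = map[string]moduleMetadata{}

// mustRegister panics on empty or duplicate keys so a bad module blocks startup.
func mustRegister(meta moduleMetadata) {
	if meta.Key == "" {
		panic("hubmodule: empty module key")
	}
	if _, exists := registry[meta.Key]; exists {
		panic(fmt.Sprintf("hubmodule: duplicate module key %q", meta.Key))
	}
	registry[meta.Key] = meta
}

// resolve looks a module up by key, as the config validator would.
func resolve(key string) (moduleMetadata, bool) {
	meta, ok := registry[key]
	return meta, ok
}

// Modules register themselves in init(), mirroring the convention above.
func init() {
	mustRegister(moduleMetadata{Key: "legacy", Description: "shared legacy adapter"})
	mustRegister(moduleMetadata{Key: "npm", Description: "npm module"})
}

func main() {
	if meta, ok := resolve("npm"); ok {
		fmt.Println(meta.Key, "->", meta.Description)
	}
	if _, ok := resolve("composer"); !ok {
		fmt.Println("composer not registered")
	}
}
```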

View File

@@ -0,0 +1,28 @@
// Package composer declares metadata for Composer (PHP) package proxying.
package composer
import (
"time"
"github.com/any-hub/any-hub/internal/hubmodule"
)
const composerDefaultTTL = 6 * time.Hour
func init() {
hubmodule.MustRegister(hubmodule.ModuleMetadata{
Key: "composer",
Description: "Composer packages proxy with metadata+dist caching",
MigrationState: hubmodule.MigrationStateBeta,
SupportedProtocols: []string{
"composer",
},
CacheStrategy: hubmodule.CacheStrategyProfile{
TTLHint: composerDefaultTTL,
ValidationMode: hubmodule.ValidationModeETag,
DiskLayout: "raw_path",
RequiresMetadataFile: false,
SupportsStreamingWrite: true,
},
})
}

View File

@@ -0,0 +1,9 @@
// Package hubmodule aggregates proxy + cache modules for arbitrary hub types and provides the unified registration entry point.
//
// Module authors must:
// 1. implement the proxy and cache interfaces under internal/hubmodule/<module-key>/;
// 2. register the module metadata in init() via the Register function exposed by this package;
// 3. ensure cache writes still follow the StoragePath/<Hub>/<path> raw path layout, with Chinese comments explaining implementation details.
//
// The package is also responsible for module discovery, observability data, and external queries of migration state.
package hubmodule

View File

@@ -0,0 +1,29 @@
// Package docker defines the Docker Hub proxy module's metadata and cache strategy description for registry lookups.
package docker
import (
"time"
"github.com/any-hub/any-hub/internal/hubmodule"
)
const dockerDefaultTTL = 12 * time.Hour
// The docker module inherits the legacy behavior but declares explicit cache-strategy defaults for per-hub overrides.
func init() {
hubmodule.MustRegister(hubmodule.ModuleMetadata{
Key: "docker",
Description: "Docker registry module with manifest/blob cache policies",
MigrationState: hubmodule.MigrationStateBeta,
SupportedProtocols: []string{
"docker",
},
CacheStrategy: hubmodule.CacheStrategyProfile{
TTLHint: dockerDefaultTTL,
ValidationMode: hubmodule.ValidationModeETag,
DiskLayout: "raw_path",
RequiresMetadataFile: false,
SupportsStreamingWrite: true,
},
})
}

View File

@@ -0,0 +1,55 @@
package hubmodule
import "time"
// MigrationState describes a module's rollout stage so observers can distinguish legacy/beta/ga.
type MigrationState string
const (
MigrationStateLegacy MigrationState = "legacy"
MigrationStateBeta MigrationState = "beta"
MigrationStateGA MigrationState = "ga"
)
// ValidationMode describes the default cache revalidation policy.
type ValidationMode string
const (
ValidationModeETag ValidationMode = "etag"
ValidationModeLastModified ValidationMode = "last-modified"
ValidationModeNever ValidationMode = "never"
)
// CacheStrategyProfile describes a module's cache read/write strategy and its default values.
type CacheStrategyProfile struct {
TTLHint time.Duration
ValidationMode ValidationMode
DiskLayout string
RequiresMetadataFile bool
SupportsStreamingWrite bool
}
// ModuleMetadata records a module's static information for config validation and the diagnostics endpoint.
type ModuleMetadata struct {
Key string
Description string
MigrationState MigrationState
SupportedProtocols []string
CacheStrategy CacheStrategyProfile
LocatorRewrite LocatorRewrite
}
// DefaultModuleKey returns the key of the built-in legacy module.
func DefaultModuleKey() string {
return defaultModuleKey
}
// Locator is a lightweight struct that modules can use to rewrite cache paths without depending on the cache package.
type Locator struct {
HubName string
Path string
HubType string
}
// LocatorRewrite lets a module adjust cache paths to fit its own protocol, e.g. writing npm metadata to a separate file.
type LocatorRewrite func(Locator) Locator
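A `LocatorRewrite` is just a pure function over the locator. The sketch below shows one hypothetical rewrite (the function name and the extension heuristic are illustrative, not the project's real npm rewrite): metadata paths with no file extension get their body routed to a `<path>/__content` file, matching the `__content` layout documented for the cache store, while tarball paths keep their raw layout.

```go
package main

import (
	"fmt"
	"strings"
)

// locator mirrors hubmodule.Locator (local copy for illustration).
type locator struct {
	HubName string
	Path    string
	HubType string
}

// locatorRewrite matches the LocatorRewrite signature.
type locatorRewrite func(locator) locator

// contentFileRewrite is a hypothetical rewrite: paths whose last segment has
// no extension (metadata documents that may become parents of tarball
// entries) store their body as <path>/__content; everything else is untouched.
func contentFileRewrite(l locator) locator {
	base := l.Path[strings.LastIndex(l.Path, "/")+1:]
	if strings.Contains(base, ".") {
		return l // tarballs and other files keep their raw path
	}
	l.Path = strings.TrimSuffix(l.Path, "/") + "/__content"
	return l
}

func main() {
	var rw locatorRewrite = contentFileRewrite
	meta := rw(locator{HubName: "npm", Path: "/lodash", HubType: "npm"})
	tar := rw(locator{HubName: "npm", Path: "/lodash/-/lodash-4.17.21.tgz", HubType: "npm"})
	fmt.Println(meta.Path) // /lodash/__content
	fmt.Println(tar.Path)  // /lodash/-/lodash-4.17.21.tgz
}
```

Keeping the rewrite a side-effect-free function means the cache store itself never needs protocol knowledge; modules own the layout decision.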

View File

@@ -0,0 +1,21 @@
// Package legacy provides an adapter around the old shared proxy+cache implementation so unmigrated Hubs keep running.
package legacy
import "github.com/any-hub/any-hub/internal/hubmodule"
// Module description: wraps the current shared proxy + cache implementation for Hubs that have not yet migrated.
func init() {
hubmodule.MustRegister(hubmodule.ModuleMetadata{
Key: hubmodule.DefaultModuleKey(),
Description: "Legacy proxy + cache implementation bundled with any-hub",
MigrationState: hubmodule.MigrationStateLegacy,
SupportedProtocols: []string{
"docker", "npm", "go", "pypi",
},
CacheStrategy: hubmodule.CacheStrategyProfile{
DiskLayout: "raw_path",
ValidationMode: hubmodule.ValidationModeETag,
SupportsStreamingWrite: true,
},
})
}

View File

@@ -0,0 +1,65 @@
package legacy
import (
"sort"
"strings"
"sync"
)
// RolloutFlag describes the legacy module's migration stage.
type RolloutFlag string
const (
RolloutLegacyOnly RolloutFlag = "legacy-only"
RolloutDual RolloutFlag = "dual"
RolloutModular RolloutFlag = "modular"
)
// AdapterState records a given Hub's runtime state within the legacy adapter.
type AdapterState struct {
HubName string
ModuleKey string
Rollout RolloutFlag
}
var (
stateMu sync.RWMutex
state = make(map[string]AdapterState)
)
// RecordAdapterState updates the rollout state for the given Hub, for use by the diagnostics endpoint and logging.
func RecordAdapterState(hubName, moduleKey string, flag RolloutFlag) {
if hubName == "" {
return
}
key := strings.ToLower(hubName)
stateMu.Lock()
state[key] = AdapterState{
HubName: hubName,
ModuleKey: moduleKey,
Rollout: flag,
}
stateMu.Unlock()
}
// SnapshotAdapterStates returns the rollout state of every Hub, sorted by name.
func SnapshotAdapterStates() []AdapterState {
stateMu.RLock()
defer stateMu.RUnlock()
if len(state) == 0 {
return nil
}
keys := make([]string, 0, len(state))
for k := range state {
keys = append(keys, k)
}
sort.Strings(keys)
result := make([]AdapterState, 0, len(keys))
for _, key := range keys {
result = append(result, state[key])
}
return result
}

View File

@@ -0,0 +1,30 @@
// Package npm describes the npm Registry module's default strategy and registration logic so new Hubs can enable it directly.
package npm
import (
"time"
"github.com/any-hub/any-hub/internal/hubmodule"
)
const npmDefaultTTL = 30 * time.Minute
// The npm module describes the default cache strategy for the NPM registry and allows TTL/validation to be overridden per [[Hub]].
func init() {
hubmodule.MustRegister(hubmodule.ModuleMetadata{
Key: "npm",
Description: "NPM proxy module with cache strategy overrides for metadata/tarballs",
MigrationState: hubmodule.MigrationStateBeta,
SupportedProtocols: []string{
"npm",
},
CacheStrategy: hubmodule.CacheStrategyProfile{
TTLHint: npmDefaultTTL,
ValidationMode: hubmodule.ValidationModeLastModified,
DiskLayout: "raw_path",
RequiresMetadataFile: false,
SupportsStreamingWrite: true,
},
LocatorRewrite: hubmodule.DefaultLocatorRewrite("npm"),
})
}

View File

@@ -0,0 +1,52 @@
package npm
import (
"testing"
"time"
"github.com/any-hub/any-hub/internal/hubmodule"
)
func TestNPMMetadataRegistration(t *testing.T) {
meta, ok := hubmodule.Resolve("npm")
if !ok {
t.Fatalf("npm module not registered")
}
if meta.Key != "npm" {
t.Fatalf("unexpected module key: %s", meta.Key)
}
if meta.MigrationState == "" {
t.Fatalf("migration state must be set")
}
if len(meta.SupportedProtocols) == 0 {
t.Fatalf("supported protocols must not be empty")
}
if meta.CacheStrategy.TTLHint != npmDefaultTTL {
t.Fatalf("expected default ttl %s, got %s", npmDefaultTTL, meta.CacheStrategy.TTLHint)
}
if meta.CacheStrategy.ValidationMode != hubmodule.ValidationModeLastModified {
t.Fatalf("expected validation mode last-modified, got %s", meta.CacheStrategy.ValidationMode)
}
if !meta.CacheStrategy.SupportsStreamingWrite {
t.Fatalf("npm strategy should support streaming writes")
}
}
func TestNPMStrategyOverrides(t *testing.T) {
meta, ok := hubmodule.Resolve("npm")
if !ok {
t.Fatalf("npm module not registered")
}
overrideTTL := 10 * time.Minute
strategy := hubmodule.ResolveStrategy(meta, hubmodule.StrategyOptions{
TTLOverride: overrideTTL,
ValidationOverride: hubmodule.ValidationModeETag,
})
if strategy.TTLHint != overrideTTL {
t.Fatalf("expected ttl override %s, got %s", overrideTTL, strategy.TTLHint)
}
if strategy.ValidationMode != hubmodule.ValidationModeETag {
t.Fatalf("expected validation mode override to etag, got %s", strategy.ValidationMode)
}
}

View File

@@ -0,0 +1,29 @@
// Package pypi covers the PyPI simple-index module and serves as a registration sample for TTL/validation strategies.
package pypi
import (
"time"
"github.com/any-hub/any-hub/internal/hubmodule"
)
const pypiDefaultTTL = 15 * time.Minute
// The pypi module declares the strategy for the simple index plus distribution files, defaulting to Last-Modified validation.
func init() {
hubmodule.MustRegister(hubmodule.ModuleMetadata{
Key: "pypi",
Description: "PyPI simple index module with per-hub cache overrides",
MigrationState: hubmodule.MigrationStateBeta,
SupportedProtocols: []string{
"pypi",
},
CacheStrategy: hubmodule.CacheStrategyProfile{
TTLHint: pypiDefaultTTL,
ValidationMode: hubmodule.ValidationModeLastModified,
DiskLayout: "raw_path",
RequiresMetadataFile: false,
SupportsStreamingWrite: true,
},
})
}

View File

@@ -0,0 +1,117 @@
package hubmodule
import (
"fmt"
"sort"
"strings"
"sync"
)
const defaultModuleKey = "legacy"
var globalRegistry = newRegistry()
type registry struct {
mu sync.RWMutex
modules map[string]ModuleMetadata
}
func newRegistry() *registry {
return &registry{modules: make(map[string]ModuleMetadata)}
}
// Register adds module metadata to the global registry; duplicate keys return an error.
func Register(meta ModuleMetadata) error {
return globalRegistry.register(meta)
}
// MustRegister panics when registration fails, which makes it suitable for calls from a module's init().
func MustRegister(meta ModuleMetadata) {
if err := Register(meta); err != nil {
panic(err)
}
}
// Resolve returns the module metadata registered under the given key.
func Resolve(key string) (ModuleMetadata, bool) {
return globalRegistry.resolve(key)
}
// List returns all module metadata sorted by key.
func List() []ModuleMetadata {
return globalRegistry.list()
}
// Keys returns the keys of all registered modules, for debugging or diagnostics.
func Keys() []string {
items := List()
result := make([]string, len(items))
for i, meta := range items {
result[i] = meta.Key
}
return result
}
func (r *registry) normalizeKey(key string) string {
return strings.ToLower(strings.TrimSpace(key))
}
func (r *registry) register(meta ModuleMetadata) error {
key := r.normalizeKey(meta.Key)
if key == "" {
return fmt.Errorf("module key is required")
}
meta.Key = key
r.mu.Lock()
defer r.mu.Unlock()
if _, exists := r.modules[key]; exists {
return fmt.Errorf("module %s already registered", key)
}
r.modules[key] = meta
return nil
}
func (r *registry) mustRegister(meta ModuleMetadata) {
if err := r.register(meta); err != nil {
panic(err)
}
}
func (r *registry) resolve(key string) (ModuleMetadata, bool) {
if key == "" {
return ModuleMetadata{}, false
}
normalized := r.normalizeKey(key)
r.mu.RLock()
defer r.mu.RUnlock()
meta, ok := r.modules[normalized]
return meta, ok
}
func (r *registry) list() []ModuleMetadata {
r.mu.RLock()
defer r.mu.RUnlock()
if len(r.modules) == 0 {
return nil
}
keys := make([]string, 0, len(r.modules))
for key := range r.modules {
keys = append(keys, key)
}
sort.Strings(keys)
result := make([]ModuleMetadata, 0, len(keys))
for _, key := range keys {
result = append(result, r.modules[key])
}
return result
}

View File

@@ -0,0 +1,49 @@
package hubmodule
import "testing"
func replaceRegistry(t *testing.T) func() {
t.Helper()
prev := globalRegistry
globalRegistry = newRegistry()
return func() { globalRegistry = prev }
}
func TestRegisterResolveAndList(t *testing.T) {
cleanup := replaceRegistry(t)
defer cleanup()
if err := Register(ModuleMetadata{Key: "beta", MigrationState: MigrationStateBeta}); err != nil {
t.Fatalf("register beta failed: %v", err)
}
if err := Register(ModuleMetadata{Key: "gamma", MigrationState: MigrationStateGA}); err != nil {
t.Fatalf("register gamma failed: %v", err)
}
if _, ok := Resolve("beta"); !ok {
t.Fatalf("expected beta to resolve")
}
if _, ok := Resolve("BETA"); !ok {
t.Fatalf("resolve should be case-insensitive")
}
list := List()
if len(list) != 2 {
t.Fatalf("list length mismatch: %d", len(list))
}
if list[0].Key != "beta" || list[1].Key != "gamma" {
t.Fatalf("unexpected order: %+v", list)
}
}
func TestRegisterDuplicateFails(t *testing.T) {
cleanup := replaceRegistry(t)
defer cleanup()
if err := Register(ModuleMetadata{Key: "legacy"}); err != nil {
t.Fatalf("first registration should succeed: %v", err)
}
if err := Register(ModuleMetadata{Key: "legacy"}); err == nil {
t.Fatalf("duplicate registration should fail")
}
}

View File

@@ -0,0 +1,70 @@
package hubmodule
import (
"path"
"strings"
)
// DefaultLocatorRewrite returns the generic path-rewrite logic for the given hub type.
func DefaultLocatorRewrite(hubType string) LocatorRewrite {
switch hubType {
case "npm":
return rewriteNPMLocator
case "go":
return rewriteGoLocator
default:
return nil
}
}
func rewriteNPMLocator(loc Locator) Locator {
pathVal := loc.Path
if pathVal == "" {
return loc
}
var qsSuffix string
core := pathVal
if idx := strings.Index(core, "/__qs/"); idx >= 0 {
qsSuffix = core[idx:]
core = core[:idx]
}
if strings.Contains(core, "/-/") {
loc.Path = core + qsSuffix
return loc
}
clean := strings.TrimSuffix(core, "/")
if clean == "" {
clean = "/"
}
if clean == "/" {
loc.Path = "/package.json" + qsSuffix
return loc
}
loc.Path = clean + "/package.json" + qsSuffix
return loc
}
func rewriteGoLocator(loc Locator) Locator {
if loc.Path == "" {
loc.Path = "/"
return loc
}
// path.Clean never leaves a trailing slash (except for "/" itself), so the
// cleaned value can be used directly for both /sumdb/ and module paths.
loc.Path = path.Clean("/" + loc.Path)
return loc
}
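The npm path rewrite above can be exercised on its own. This is a minimal standalone sketch that copies the same logic (the function name `rewriteNPMPath` and the sample paths are illustrative, not part of the codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// rewriteNPMPath mirrors rewriteNPMLocator: metadata requests such as
// "/lodash" are cached as ".../package.json", tarball paths containing
// "/-/" pass through, and any "/__qs/<hash>" query marker is preserved.
func rewriteNPMPath(p string) string {
	if p == "" {
		return p
	}
	var qs string
	core := p
	if idx := strings.Index(core, "/__qs/"); idx >= 0 {
		qs = core[idx:]
		core = core[:idx]
	}
	if strings.Contains(core, "/-/") {
		return core + qs
	}
	clean := strings.TrimSuffix(core, "/")
	if clean == "" || clean == "/" {
		return "/package.json" + qs
	}
	return clean + "/package.json" + qs
}

func main() {
	fmt.Println(rewriteNPMPath("/lodash"))                      // /lodash/package.json
	fmt.Println(rewriteNPMPath("/lodash/-/lodash-4.17.21.tgz")) // unchanged
	fmt.Println(rewriteNPMPath("/lodash/__qs/abc"))             // /lodash/package.json/__qs/abc
}
```

The `/__qs/` suffix is split off first so the query-hash marker produced by buildLocator survives the metadata rewrite.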

View File

@@ -0,0 +1,34 @@
package hubmodule
import "time"
// StrategyOptions describes the overrides coming from the hub config.
type StrategyOptions struct {
TTLOverride time.Duration
ValidationOverride ValidationMode
}
// ResolveStrategy merges a module's default strategy with hub-level overrides.
func ResolveStrategy(meta ModuleMetadata, opts StrategyOptions) CacheStrategyProfile {
strategy := meta.CacheStrategy
if opts.TTLOverride > 0 {
strategy.TTLHint = opts.TTLOverride
}
if opts.ValidationOverride != "" {
strategy.ValidationMode = opts.ValidationOverride
}
return normalizeStrategy(strategy)
}
func normalizeStrategy(profile CacheStrategyProfile) CacheStrategyProfile {
if profile.TTLHint < 0 {
profile.TTLHint = 0
}
if profile.ValidationMode == "" {
profile.ValidationMode = ValidationModeETag
}
if profile.DiskLayout == "" {
profile.DiskLayout = "raw_path"
}
return profile
}

View File

@@ -0,0 +1,12 @@
// Package template provides a skeleton example to copy when writing a new module.
package template
import "github.com/any-hub/any-hub/internal/hubmodule"
//
// Usage: copy this whole directory to internal/hubmodule/<module-key>/ and replace the fields.
// - Rename TemplateModule to the actual module type.
// - Call hubmodule.MustRegister in init() to register the new ModuleMetadata.
// - Implement the custom proxy/cache logic inside the module directory, then call proxy.RegisterModuleHandler from main.
//
// Note: this file only illustrates how metadata registration is written; it is not wired into the runtime.
var _ = hubmodule.ModuleMetadata{}

View File

@@ -11,12 +11,14 @@ func BaseFields(action, configPath string) logrus.Fields {
}
// RequestFields provides hub/domain/cache-hit fields for reuse in proxy request logs.
func RequestFields(hub, domain, hubType, authMode string, cacheHit bool) logrus.Fields {
func RequestFields(hub, domain, hubType, authMode, moduleKey, rolloutFlag string, cacheHit bool) logrus.Fields {
return logrus.Fields{
"hub": hub,
"domain": domain,
"hub_type": hubType,
"auth_mode": authMode,
"cache_hit": cacheHit,
"hub": hub,
"domain": domain,
"hub_type": hubType,
"auth_mode": authMode,
"cache_hit": cacheHit,
"module_key": moduleKey,
"rollout_flag": rolloutFlag,
}
}

View File

@@ -0,0 +1,291 @@
package proxy
import (
"bytes"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"strconv"
"strings"
"github.com/any-hub/any-hub/internal/server"
)
func (h *Handler) rewriteComposerResponse(route *server.HubRoute, resp *http.Response, path string) (*http.Response, error) {
if resp == nil || route == nil || route.Config.Type != "composer" {
return resp, nil
}
if path == "/packages.json" {
return rewriteComposerRoot(resp, route.Config.Domain)
}
if !isComposerMetadataPath(path) {
return resp, nil
}
body, err := io.ReadAll(resp.Body)
if err != nil {
return resp, err
}
resp.Body.Close()
rewritten, changed, err := rewriteComposerMetadata(body, route.Config.Domain)
if err != nil {
resp.Body = io.NopCloser(bytes.NewReader(body))
return resp, err
}
if !changed {
resp.Body = io.NopCloser(bytes.NewReader(body))
return resp, nil
}
resp.Body = io.NopCloser(bytes.NewReader(rewritten))
resp.ContentLength = int64(len(rewritten))
resp.Header.Set("Content-Length", strconv.Itoa(len(rewritten)))
resp.Header.Set("Content-Type", "application/json")
resp.Header.Del("Content-Encoding")
resp.Header.Del("Etag")
return resp, nil
}
func rewriteComposerRoot(resp *http.Response, domain string) (*http.Response, error) {
body, err := io.ReadAll(resp.Body)
if err != nil {
return resp, err
}
resp.Body.Close()
data, changed, err := rewriteComposerRootBody(body, domain)
if err != nil {
resp.Body = io.NopCloser(bytes.NewReader(body))
return resp, err
}
if !changed {
resp.Body = io.NopCloser(bytes.NewReader(body))
return resp, nil
}
resp.Body = io.NopCloser(bytes.NewReader(data))
resp.ContentLength = int64(len(data))
resp.Header.Set("Content-Length", strconv.Itoa(len(data)))
resp.Header.Set("Content-Type", "application/json")
resp.Header.Del("Content-Encoding")
resp.Header.Del("Etag")
return resp, nil
}
func rewriteComposerMetadata(body []byte, domain string) ([]byte, bool, error) {
type packagesRoot struct {
Packages map[string]json.RawMessage `json:"packages"`
}
var root packagesRoot
if err := json.Unmarshal(body, &root); err != nil {
return nil, false, err
}
if len(root.Packages) == 0 {
return body, false, nil
}
changed := false
for name, raw := range root.Packages {
updated, rewritten, err := rewriteComposerPackagesPayload(raw, domain, name)
if err != nil {
return nil, false, err
}
if rewritten {
root.Packages[name] = updated
changed = true
}
}
if !changed {
return body, false, nil
}
data, err := json.Marshal(root)
if err != nil {
return nil, false, err
}
return data, true, nil
}
func rewriteComposerPackagesPayload(raw json.RawMessage, domain string, packageName string) (json.RawMessage, bool, error) {
var asArray []map[string]any
if err := json.Unmarshal(raw, &asArray); err == nil {
rewrote := rewriteComposerVersionSlice(asArray, domain, packageName)
if !rewrote {
return raw, false, nil
}
data, err := json.Marshal(asArray)
return data, true, err
}
var asMap map[string]map[string]any
if err := json.Unmarshal(raw, &asMap); err == nil {
rewrote := rewriteComposerVersionMap(asMap, domain, packageName)
if !rewrote {
return raw, false, nil
}
data, err := json.Marshal(asMap)
return data, true, err
}
return raw, false, nil
}
func rewriteComposerVersionSlice(items []map[string]any, domain string, packageName string) bool {
changed := false
for _, entry := range items {
if rewriteComposerVersion(entry, domain, packageName) {
changed = true
}
}
return changed
}
func rewriteComposerVersionMap(items map[string]map[string]any, domain string, packageName string) bool {
changed := false
for _, entry := range items {
if rewriteComposerVersion(entry, domain, packageName) {
changed = true
}
}
return changed
}
func rewriteComposerVersion(entry map[string]any, domain string, packageName string) bool {
if entry == nil {
return false
}
changed := false
if packageName != "" {
if name, _ := entry["name"].(string); strings.TrimSpace(name) == "" {
entry["name"] = packageName
changed = true
}
}
distVal, ok := entry["dist"].(map[string]any)
if !ok {
return changed
}
urlValue, ok := distVal["url"].(string)
if !ok || urlValue == "" {
return changed
}
rewritten := rewriteComposerDistURL(domain, urlValue)
if rewritten == urlValue {
return changed
}
distVal["url"] = rewritten
return true
}
func rewriteComposerDistURL(domain, original string) string {
parsed, err := url.Parse(original)
if err != nil || parsed.Scheme == "" || parsed.Host == "" {
return original
}
prefix := fmt.Sprintf("/dist/%s/%s", parsed.Scheme, parsed.Host)
newURL := url.URL{
Scheme: "https",
Host: domain,
Path: prefix + parsed.Path,
RawQuery: parsed.RawQuery,
Fragment: parsed.Fragment,
}
if raw := parsed.RawPath; raw != "" {
newURL.RawPath = prefix + raw
}
return newURL.String()
}
func isComposerMetadataPath(path string) bool {
switch {
case path == "/packages.json":
return true
case strings.HasPrefix(path, "/p2/"):
return true
case strings.HasPrefix(path, "/p/"):
return true
case strings.HasPrefix(path, "/provider-"):
return true
case strings.HasPrefix(path, "/providers/"):
return true
default:
return false
}
}
func isComposerDistPath(path string) bool {
return strings.HasPrefix(path, "/dist/")
}
func rewriteComposerAbsolute(domain, raw string) string {
if raw == "" {
return raw
}
if strings.HasPrefix(raw, "//") {
return "https://" + domain + strings.TrimPrefix(raw, "//")
}
if strings.HasPrefix(raw, "http://") || strings.HasPrefix(raw, "https://") {
parsed, err := url.Parse(raw)
if err != nil {
return raw
}
parsed.Host = domain
parsed.Scheme = "https"
return parsed.String()
}
pathVal := raw
if !strings.HasPrefix(pathVal, "/") {
pathVal = "/" + pathVal
}
return fmt.Sprintf("https://%s%s", domain, pathVal)
}
func rewriteComposerRootBody(body []byte, domain string) ([]byte, bool, error) {
var root map[string]any
if err := json.Unmarshal(body, &root); err != nil {
return nil, false, err
}
changed := false
for _, key := range []string{"metadata-url", "providers-api", "providers-url", "notify-batch"} {
if raw, ok := root[key].(string); ok && raw != "" {
newVal := rewriteComposerAbsolute(domain, raw)
if newVal != raw {
root[key] = newVal
changed = true
}
}
}
if includes, ok := root["provider-includes"].(map[string]any); ok {
for file, hashVal := range includes {
rawPath, isMap := hashVal.(map[string]any)
pathVal := file
if isMap {
if urlValue, ok := rawPath["url"].(string); ok {
pathVal = urlValue
}
}
newPath := rewriteComposerAbsolute(domain, pathVal)
if newPath == pathVal {
// unchanged entry: leave the original hash value untouched
continue
}
changed = true
if isMap {
rawPath["url"] = newPath
includes[file] = rawPath
} else {
includes[file] = newPath
}
}
}
if !changed {
return body, false, nil
}
data, err := json.Marshal(root)
if err != nil {
return nil, false, err
}
return data, true, nil
}
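rewriteComposerDistURL folds an absolute upstream dist URL into the proxy's own domain under `/dist/<scheme>/<host>/…`, which resolveUpstreamURL later unpacks when the client fetches the archive. A standalone sketch of the forward mapping (domain and URLs below are examples, and `rewriteDist` is an illustrative name):

```go
package main

import (
	"fmt"
	"net/url"
)

// rewriteDist mirrors rewriteComposerDistURL: an absolute dist URL is
// re-rooted under https://<domain>/dist/<scheme>/<host><path> so archive
// downloads flow back through the proxy cache.
func rewriteDist(domain, original string) string {
	parsed, err := url.Parse(original)
	if err != nil || parsed.Scheme == "" || parsed.Host == "" {
		return original // relative or unparseable URLs are left alone
	}
	out := url.URL{
		Scheme:   "https",
		Host:     domain,
		Path:     fmt.Sprintf("/dist/%s/%s%s", parsed.Scheme, parsed.Host, parsed.Path),
		RawQuery: parsed.RawQuery,
	}
	return out.String()
}

func main() {
	fmt.Println(rewriteDist("mirror.example.com",
		"https://codeload.github.com/vendor/pkg/legacy.zip/abc123"))
	// https://mirror.example.com/dist/https/codeload.github.com/vendor/pkg/legacy.zip/abc123
}
```

Because buildLocator drops the query string for `/dist/` paths, per-request tokens embedded in these URLs do not fragment the cache key.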

View File

@@ -0,0 +1,68 @@
package proxy
import (
"strings"
"sync"
"github.com/gofiber/fiber/v3"
"github.com/any-hub/any-hub/internal/server"
)
// Forwarder picks the ProxyHandler matching the HubRoute's module_key, falling back to the handler injected at construction time.
type Forwarder struct {
defaultHandler server.ProxyHandler
}
// NewForwarder creates a Forwarder; defaultHandler must not be nil.
func NewForwarder(defaultHandler server.ProxyHandler) *Forwarder {
return &Forwarder{defaultHandler: defaultHandler}
}
var (
moduleHandlers sync.Map
)
// RegisterModuleHandler maps a specific module_key to a ProxyHandler; re-registering a key overwrites the previous value.
func RegisterModuleHandler(key string, handler server.ProxyHandler) {
normalized := normalizeModuleKey(key)
if normalized == "" || handler == nil {
return
}
moduleHandlers.Store(normalized, handler)
}
// Handle implements server.ProxyHandler, selecting a handler based on route.ModuleKey.
func (f *Forwarder) Handle(c fiber.Ctx, route *server.HubRoute) error {
handler := f.lookup(route)
if handler == nil {
return fiber.NewError(fiber.StatusInternalServerError, "proxy handler unavailable")
}
return handler.Handle(c, route)
}
func (f *Forwarder) lookup(route *server.HubRoute) server.ProxyHandler {
if route != nil {
if handler := lookupModuleHandler(route.ModuleKey); handler != nil {
return handler
}
}
return f.defaultHandler
}
func lookupModuleHandler(key string) server.ProxyHandler {
normalized := normalizeModuleKey(key)
if normalized == "" {
return nil
}
if value, ok := moduleHandlers.Load(normalized); ok {
if handler, ok := value.(server.ProxyHandler); ok {
return handler
}
}
return nil
}
func normalizeModuleKey(key string) string {
return strings.ToLower(strings.TrimSpace(key))
}
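The dispatch pattern above — a sync.Map keyed by normalized module key, with a default fallback — can be sketched independently of Fiber. The handler type and names here are illustrative stand-ins, not the project's API:

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// handlerFunc stands in for server.ProxyHandler in this sketch.
type handlerFunc func() string

var handlers sync.Map // normalized module key -> handlerFunc

// register mirrors RegisterModuleHandler: keys are trimmed and
// lower-cased, and nil handlers or empty keys are ignored.
func register(key string, h handlerFunc) {
	key = strings.ToLower(strings.TrimSpace(key))
	if key == "" || h == nil {
		return
	}
	handlers.Store(key, h)
}

// dispatch mirrors Forwarder.lookup: a registered handler wins,
// otherwise the default injected at construction time is used.
func dispatch(key string, fallback handlerFunc) handlerFunc {
	if v, ok := handlers.Load(strings.ToLower(strings.TrimSpace(key))); ok {
		if h, ok := v.(handlerFunc); ok {
			return h
		}
	}
	return fallback
}

func main() {
	register(" NPM ", func() string { return "npm-module" })
	legacy := func() string { return "legacy" }
	fmt.Println(dispatch("npm", legacy)())    // npm-module
	fmt.Println(dispatch("docker", legacy)()) // legacy
}
```

The fallback path is what keeps unmigrated hubs on the legacy handler even when no module-specific handler has been registered for their key.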

View File

@@ -14,12 +14,14 @@ import (
"net/url"
"path"
"strings"
"sync"
"time"
"github.com/gofiber/fiber/v3"
"github.com/sirupsen/logrus"
"github.com/any-hub/any-hub/internal/cache"
"github.com/any-hub/any-hub/internal/hubmodule"
"github.com/any-hub/any-hub/internal/logging"
"github.com/any-hub/any-hub/internal/server"
)
@@ -30,6 +32,7 @@ type Handler struct {
client *http.Client
logger *logrus.Logger
store cache.Store
etags sync.Map // key: hub+path, value: etag/digest string
}
// NewHandler constructs a proxy handler with shared HTTP client/logger/store.
@@ -47,9 +50,13 @@ func (h *Handler) Handle(c fiber.Ctx, route *server.HubRoute) error {
requestID := server.RequestID(c)
locator := buildLocator(route, c)
policy := determineCachePolicy(route, locator, c.Method())
strategyWriter := cache.NewStrategyWriter(h.store, route.CacheStrategy)
if err := ensureProxyHubType(route); err != nil {
h.logger.WithField("hub", route.Config.Name).WithError(err).Error("hub_type_unsupported")
h.logger.WithFields(logrus.Fields{
"hub": route.Config.Name,
"module_key": route.ModuleKey,
}).WithError(err).Error("hub_type_unsupported")
return h.writeError(c, fiber.StatusNotImplemented, "hub_type_unsupported")
}
@@ -59,7 +66,7 @@ func (h *Handler) Handle(c fiber.Ctx, route *server.HubRoute) error {
}
var cached *cache.ReadResult
if h.store != nil && policy.allowCache {
if strategyWriter.Enabled() && policy.allowCache {
result, err := h.store.Get(ctx, locator)
switch {
case err == nil:
@@ -67,18 +74,28 @@ func (h *Handler) Handle(c fiber.Ctx, route *server.HubRoute) error {
case errors.Is(err, cache.ErrNotFound):
// miss, continue
default:
h.logger.WithError(err).WithField("hub", route.Config.Name).Warn("cache_get_failed")
h.logger.WithError(err).
WithFields(logrus.Fields{"hub": route.Config.Name, "module_key": route.ModuleKey}).
Warn("cache_get_failed")
}
}
if cached != nil {
serve := true
if policy.requireRevalidate {
fresh, err := h.isCacheFresh(c, route, locator, cached.Entry)
if err != nil {
h.logger.WithError(err).WithField("hub", route.Config.Name).Warn("cache_revalidate_failed")
serve = false
} else if !fresh {
if strategyWriter.ShouldBypassValidation(cached.Entry) {
serve = true
} else if strategyWriter.SupportsValidation() {
fresh, err := h.isCacheFresh(c, route, locator, cached.Entry)
if err != nil {
h.logger.WithError(err).
WithFields(logrus.Fields{"hub": route.Config.Name, "module_key": route.ModuleKey}).
Warn("cache_revalidate_failed")
serve = false
} else if !fresh {
serve = false
}
} else {
serve = false
}
}
@@ -89,17 +106,65 @@ func (h *Handler) Handle(c fiber.Ctx, route *server.HubRoute) error {
cached.Reader.Close()
}
return h.fetchAndStream(c, route, locator, policy, requestID, started, ctx)
return h.fetchAndStream(c, route, locator, policy, strategyWriter, requestID, started, ctx)
}
func (h *Handler) serveCache(c fiber.Ctx, route *server.HubRoute, result *cache.ReadResult, requestID string, started time.Time) error {
if seeker, ok := result.Reader.(io.Seeker); ok {
_, _ = seeker.Seek(0, io.SeekStart)
func (h *Handler) serveCache(
c fiber.Ctx,
route *server.HubRoute,
result *cache.ReadResult,
requestID string,
started time.Time,
) error {
var readSeeker io.ReadSeeker
switch reader := result.Reader.(type) {
case io.ReadSeeker:
readSeeker = reader
_, _ = readSeeker.Seek(0, io.SeekStart)
case io.Seeker:
_, _ = reader.Seek(0, io.SeekStart)
}
method := c.Method()
contentType := inferCachedContentType(route, result.Entry.Locator)
if contentType == "" && shouldSniffDockerManifest(route, result.Entry.Locator) {
if sniffed := sniffDockerManifestContentType(readSeeker); sniffed != "" {
contentType = sniffed
}
}
if route != nil && route.Config.Type == "composer" && isComposerMetadataPath(stripQueryMarker(result.Entry.Locator.Path)) {
body, err := io.ReadAll(result.Reader)
result.Reader.Close()
if err != nil {
return fiber.NewError(fiber.StatusBadGateway, fmt.Sprintf("read cache failed: %v", err))
}
rewritten := body
if stripQueryMarker(result.Entry.Locator.Path) == "/packages.json" {
if data, changed, err := rewriteComposerRootBody(body, route.Config.Domain); err == nil && changed {
rewritten = data
}
} else {
if data, changed, err := rewriteComposerMetadata(body, route.Config.Domain); err == nil && changed {
rewritten = data
}
}
c.Set("Content-Type", "application/json")
c.Set("X-Any-Hub-Upstream", route.UpstreamURL.String())
c.Set("X-Any-Hub-Cache-Hit", "true")
if requestID != "" {
c.Set("X-Request-ID", requestID)
}
c.Status(fiber.StatusOK)
c.Response().Header.SetContentLength(len(rewritten))
_, err = c.Response().BodyWriter().Write(rewritten)
h.logResult(route, route.UpstreamURL.String(), requestID, fiber.StatusOK, true, started, err)
if err != nil {
return fiber.NewError(fiber.StatusBadGateway, fmt.Sprintf("read cache failed: %v", err))
}
return nil
}
if contentType != "" {
c.Set("Content-Type", contentType)
} else {
@@ -137,7 +202,16 @@ func (h *Handler) serveCache(c fiber.Ctx, route *server.HubRoute, result *cache.
return nil
}
func (h *Handler) fetchAndStream(c fiber.Ctx, route *server.HubRoute, locator cache.Locator, policy cachePolicy, requestID string, started time.Time, ctx context.Context) error {
func (h *Handler) fetchAndStream(
c fiber.Ctx,
route *server.HubRoute,
locator cache.Locator,
policy cachePolicy,
writer cache.StrategyWriter,
requestID string,
started time.Time,
ctx context.Context,
) error {
resp, upstreamURL, err := h.executeRequest(c, route)
if err != nil {
h.logResult(route, upstreamURL.String(), requestID, 0, false, started, err)
@@ -149,19 +223,49 @@ func (h *Handler) fetchAndStream(c fiber.Ctx, route *server.HubRoute, locator ca
h.logResult(route, upstreamURL.String(), requestID, 0, false, started, err)
return h.writeError(c, fiber.StatusBadGateway, "upstream_failed")
}
if route.Config.Type == "pypi" {
if rewritten, rewriteErr := h.rewritePyPIResponse(route, resp, requestPath(c)); rewriteErr == nil {
resp = rewritten
} else {
h.logger.WithError(rewriteErr).WithFields(logrus.Fields{
"action": "pypi_rewrite",
"hub": route.Config.Name,
}).Warn("pypi_rewrite_failed")
}
} else if route.Config.Type == "composer" {
if rewritten, rewriteErr := h.rewriteComposerResponse(route, resp, requestPath(c)); rewriteErr == nil {
resp = rewritten
} else {
h.logger.WithError(rewriteErr).WithFields(logrus.Fields{
"action": "composer_rewrite",
"hub": route.Config.Name,
}).Warn("composer_rewrite_failed")
}
}
defer resp.Body.Close()
shouldStore := policy.allowStore && h.store != nil && isCacheableStatus(resp.StatusCode) && c.Method() == http.MethodGet
return h.consumeUpstream(c, route, locator, resp, shouldStore, requestID, started, ctx)
shouldStore := policy.allowStore && writer.Enabled() && isCacheableStatus(resp.StatusCode) &&
c.Method() == http.MethodGet
return h.consumeUpstream(c, route, locator, resp, shouldStore, writer, requestID, started, ctx)
}
func (h *Handler) consumeUpstream(c fiber.Ctx, route *server.HubRoute, locator cache.Locator, resp *http.Response, shouldStore bool, requestID string, started time.Time, ctx context.Context) error {
func (h *Handler) consumeUpstream(
c fiber.Ctx,
route *server.HubRoute,
locator cache.Locator,
resp *http.Response,
shouldStore bool,
writer cache.StrategyWriter,
requestID string,
started time.Time,
ctx context.Context,
) error {
upstreamURL := resp.Request.URL.String()
method := c.Method()
authFailure := isAuthFailure(resp.StatusCode) && route.Config.HasCredentials()
if shouldStore {
return h.cacheAndStream(c, route, locator, resp, requestID, started, ctx, upstreamURL)
return h.cacheAndStream(c, route, locator, resp, writer, requestID, started, ctx, upstreamURL)
}
copyResponseHeaders(c, resp.Header)
@@ -189,7 +293,17 @@ func (h *Handler) consumeUpstream(c fiber.Ctx, route *server.HubRoute, locator c
return nil
}
func (h *Handler) cacheAndStream(c fiber.Ctx, route *server.HubRoute, locator cache.Locator, resp *http.Response, requestID string, started time.Time, ctx context.Context, upstreamURL string) error {
func (h *Handler) cacheAndStream(
c fiber.Ctx,
route *server.HubRoute,
locator cache.Locator,
resp *http.Response,
writer cache.StrategyWriter,
requestID string,
started time.Time,
ctx context.Context,
upstreamURL string,
) error {
copyResponseHeaders(c, resp.Header)
c.Set("X-Any-Hub-Upstream", upstreamURL)
c.Set("X-Any-Hub-Cache-Hit", "false")
@@ -201,16 +315,24 @@ func (h *Handler) cacheAndStream(c fiber.Ctx, route *server.HubRoute, locator ca
reader := io.TeeReader(resp.Body, c.Response().BodyWriter())
opts := cache.PutOptions{ModTime: extractModTime(resp.Header)}
entry, err := h.store.Put(ctx, locator, reader, opts)
entry, err := writer.Put(ctx, locator, reader, opts)
h.logResult(route, upstreamURL, requestID, resp.StatusCode, false, started, err)
if err != nil {
return fiber.NewError(fiber.StatusBadGateway, fmt.Sprintf("cache_write_failed: %v", err))
}
h.rememberETag(route, locator, resp)
_ = entry
return nil
}
func (h *Handler) retryOnAuthFailure(c fiber.Ctx, route *server.HubRoute, requestID string, started time.Time, resp *http.Response, upstreamURL *url.URL) (*http.Response, *url.URL, error) {
func (h *Handler) retryOnAuthFailure(
c fiber.Ctx,
route *server.HubRoute,
requestID string,
started time.Time,
resp *http.Response,
upstreamURL *url.URL,
) (*http.Response, *url.URL, error) {
if !shouldRetryAuth(route, resp.StatusCode) {
return resp, upstreamURL, nil
}
@@ -247,7 +369,11 @@ func (h *Handler) executeRequest(c fiber.Ctx, route *server.HubRoute) (*http.Res
return h.executeRequestWithAuth(c, route, "")
}
func (h *Handler) executeRequestWithAuth(c fiber.Ctx, route *server.HubRoute, authHeader string) (*http.Response, *url.URL, error) {
func (h *Handler) executeRequestWithAuth(
c fiber.Ctx,
route *server.HubRoute,
authHeader string,
) (*http.Response, *url.URL, error) {
upstreamURL := resolveUpstreamURL(route, route.UpstreamURL, c)
body := bytesReader(c.Body())
req, err := h.buildUpstreamRequest(c, upstreamURL, route, c.Method(), body, authHeader)
@@ -259,7 +385,14 @@ func (h *Handler) executeRequestWithAuth(c fiber.Ctx, route *server.HubRoute, au
return resp, upstreamURL, err
}
func (h *Handler) buildUpstreamRequest(c fiber.Ctx, upstream *url.URL, route *server.HubRoute, method string, body io.Reader, overrideAuth string) (*http.Request, error) {
func (h *Handler) buildUpstreamRequest(
c fiber.Ctx,
upstream *url.URL,
route *server.HubRoute,
method string,
body io.Reader,
overrideAuth string,
) (*http.Request, error) {
ctx := c.Context()
if ctx == nil {
ctx = context.Background()
@@ -275,6 +408,7 @@ func (h *Handler) buildUpstreamRequest(c fiber.Ctx, upstream *url.URL, route *se
requestHeaders := fiberHeadersAsHTTP(c)
server.CopyHeaders(req.Header, requestHeaders)
req.Header.Del("Accept-Encoding")
req.Host = upstream.Host
req.Header.Set("Host", upstream.Host)
req.Header.Set("X-Forwarded-Host", c.Hostname())
@@ -315,8 +449,24 @@ func (h *Handler) writeError(c fiber.Ctx, status int, code string) error {
return c.Status(status).JSON(fiber.Map{"error": code})
}
func (h *Handler) logResult(route *server.HubRoute, upstream string, requestID string, status int, cacheHit bool, started time.Time, err error) {
fields := logging.RequestFields(route.Config.Name, route.Config.Domain, route.Config.Type, route.Config.AuthMode(), cacheHit)
func (h *Handler) logResult(
route *server.HubRoute,
upstream string,
requestID string,
status int,
cacheHit bool,
started time.Time,
err error,
) {
fields := logging.RequestFields(
route.Config.Name,
route.Config.Domain,
route.Config.Type,
route.Config.AuthMode(),
route.ModuleKey,
string(route.RolloutFlag),
cacheHit,
)
fields["action"] = "proxy"
fields["upstream"] = upstream
fields["upstream_status"] = status
@@ -337,6 +487,8 @@ func inferCachedContentType(route *server.HubRoute, locator cache.Locator) strin
switch {
case strings.HasSuffix(clean, ".zip"):
return "application/zip"
case strings.HasSuffix(clean, ".json"):
return "application/json"
case strings.HasSuffix(clean, ".mod"):
return "text/plain"
case strings.HasSuffix(clean, ".info"):
@@ -355,7 +507,7 @@ func inferCachedContentType(route *server.HubRoute, locator cache.Locator) strin
switch route.Config.Type {
case "docker":
if strings.Contains(clean, "/manifests/") {
return "application/vnd.docker.distribution.manifest.v2+json"
return ""
}
if strings.Contains(clean, "/tags/list") {
return "application/json"
@@ -388,14 +540,34 @@ func buildLocator(route *server.HubRoute, c fiber.Ctx) cache.Locator {
clean = newPath
}
query := uri.QueryString()
if route != nil && route.Config.Type == "composer" && isComposerDistPath(clean) {
// composer dist URLs often embed per-request tokens; ignore query for cache key
query = nil
}
if len(query) > 0 {
sum := sha1.Sum(query)
clean = fmt.Sprintf("%s/__qs/%s", clean, hex.EncodeToString(sum[:]))
}
return cache.Locator{
loc := cache.Locator{
HubName: route.Config.Name,
Path: clean,
}
rewrite := route.Module.LocatorRewrite
if rewrite == nil {
rewrite = hubmodule.DefaultLocatorRewrite(route.Config.Type)
}
if rewrite != nil {
rewritten := rewrite(hubmodule.Locator{
HubName: loc.HubName,
Path: loc.Path,
HubType: route.Config.Type,
})
loc = cache.Locator{
HubName: rewritten.HubName,
Path: rewritten.Path,
}
}
return loc
}
func stripQueryMarker(p string) string {
@@ -405,6 +577,38 @@ func stripQueryMarker(p string) string {
return p
}
func shouldSniffDockerManifest(route *server.HubRoute, locator cache.Locator) bool {
if route == nil || route.Config.Type != "docker" {
return false
}
clean := stripQueryMarker(locator.Path)
return strings.Contains(clean, "/manifests/")
}
func sniffDockerManifestContentType(reader io.ReadSeeker) string {
if reader == nil {
return ""
}
const maxInspectBytes = 512 * 1024
if _, err := reader.Seek(0, io.SeekStart); err != nil {
return ""
}
data, err := io.ReadAll(io.LimitReader(reader, maxInspectBytes))
if _, seekErr := reader.Seek(0, io.SeekStart); seekErr != nil {
return ""
}
if err != nil && !errors.Is(err, io.EOF) {
return ""
}
var manifest struct {
MediaType string `json:"mediaType"`
}
if err := json.Unmarshal(data, &manifest); err != nil {
return ""
}
return strings.TrimSpace(manifest.MediaType)
}
func requestPath(c fiber.Ctx) string {
if c == nil {
return "/"
@@ -449,6 +653,29 @@ func resolveUpstreamURL(route *server.HubRoute, base *url.URL, c fiber.Ctx) *url
if newPath, ok := applyDockerHubNamespaceFallback(route, clean); ok {
clean = newPath
}
if route != nil && route.Config.Type == "pypi" && strings.HasPrefix(clean, "/files/") {
trimmed := strings.TrimPrefix(clean, "/files/")
parts := strings.SplitN(trimmed, "/", 3)
if len(parts) >= 3 {
scheme := parts[0]
host := parts[1]
rest := parts[2]
filesBase := &url.URL{Scheme: scheme, Host: host}
if !strings.HasPrefix(rest, "/") {
rest = "/" + rest
}
relative := &url.URL{Path: rest, RawPath: rest}
if query := string(uri.QueryString()); query != "" {
relative.RawQuery = query
}
return filesBase.ResolveReference(relative)
}
}
if route != nil && route.Config.Type == "composer" && strings.HasPrefix(clean, "/dist/") {
if distTarget, ok := parseComposerDistURL(clean, string(uri.QueryString())); ok {
return distTarget
}
}
relative := &url.URL{Path: clean, RawPath: clean}
if query := string(uri.QueryString()); query != "" {
relative.RawQuery = query
@@ -483,8 +710,8 @@ func routePort(route *server.HubRoute) string {
}
type cachePolicy struct {
allowCache bool
allowStore bool
requireRevalidate bool
}
@@ -495,10 +722,10 @@ func determineCachePolicy(route *server.HubRoute, locator cache.Locator, method
policy := cachePolicy{allowCache: true, allowStore: true}
path := stripQueryMarker(locator.Path)
switch route.Config.Type {
case "docker":
if path == "/v2" || path == "v2" || path == "/v2/" {
return cachePolicy{}
}
if strings.Contains(path, "/_catalog") {
return cachePolicy{}
}
@@ -507,27 +734,37 @@ case "docker":
}
policy.requireRevalidate = true
return policy
case "go":
if strings.Contains(path, "/@v/") &&
(strings.HasSuffix(path, ".zip") || strings.HasSuffix(path, ".mod") || strings.HasSuffix(path, ".info")) {
return policy
}
policy.requireRevalidate = true
return policy
case "npm":
if strings.Contains(path, "/-/") && strings.HasSuffix(path, ".tgz") {
return policy
}
policy.requireRevalidate = true
return policy
case "pypi":
if isPyPIDistribution(path) {
return policy
}
policy.requireRevalidate = true
return policy
case "composer":
if isComposerDistPath(path) {
return policy
}
if isComposerMetadataPath(path) {
policy.requireRevalidate = true
return policy
}
return cachePolicy{}
default:
return policy
}
}
func isDockerImmutablePath(path string) bool {
@@ -563,26 +800,49 @@ func isCacheableStatus(status int) bool {
return status == http.StatusOK
}
func (h *Handler) isCacheFresh(
c fiber.Ctx,
route *server.HubRoute,
locator cache.Locator,
entry cache.Entry,
) (bool, error) {
ctx := c.Context()
if ctx == nil {
ctx = context.Background()
}
upstreamURL := resolveUpstreamURL(route, route.UpstreamURL, c)
resp, err := h.revalidateRequest(c, route, upstreamURL, locator, "")
if err != nil {
return false, err
}
if shouldRetryAuth(route, resp.StatusCode) {
challenge, ok := parseBearerChallenge(resp.Header.Values("Www-Authenticate"))
resp.Body.Close()
authHeader := ""
if ok {
token, err := h.fetchBearerToken(ctx, challenge, route)
if err != nil {
return false, err
}
authHeader = "Bearer " + token
}
resp, err = h.revalidateRequest(c, route, upstreamURL, locator, authHeader)
if err != nil {
return false, err
}
}
defer resp.Body.Close()
switch resp.StatusCode {
case http.StatusNotModified:
return true, nil
case http.StatusOK:
h.rememberETag(route, locator, resp)
remote := extractModTime(resp.Header)
if !remote.After(entry.ModTime.Add(time.Second)) {
return true, nil
@@ -592,12 +852,30 @@ func (h *Handler) isCacheFresh(c fiber.Ctx, route *server.HubRoute, locator cach
if h.store != nil {
_ = h.store.Remove(ctx, locator)
}
h.forgetETag(route, locator)
return false, nil
default:
return false, nil
}
}
func (h *Handler) revalidateRequest(
c fiber.Ctx,
route *server.HubRoute,
upstreamURL *url.URL,
locator cache.Locator,
overrideAuth string,
) (*http.Response, error) {
req, err := h.buildUpstreamRequest(c, upstreamURL, route, http.MethodHead, http.NoBody, overrideAuth)
if err != nil {
return nil, err
}
if etag := h.cachedETag(route, locator); etag != "" {
req.Header.Set("If-None-Match", etag)
}
return h.doRequest(req, route)
}
func extractModTime(header http.Header) time.Time {
if last := header.Get("Last-Modified"); last != "" {
if parsed, err := http.ParseTime(last); err == nil {
@@ -668,10 +946,11 @@ func applyPyPISimpleFallback(route *server.HubRoute, path string) (string, bool)
if route == nil || route.Config.Type != "pypi" {
return path, false
}
if strings.HasPrefix(path, "/simple/") || strings.HasPrefix(path, "/files/") {
return path, false
}
if strings.HasSuffix(path, ".whl") || strings.HasSuffix(path, ".tar.gz") || strings.HasSuffix(path, ".tar.bz2") ||
strings.HasSuffix(path, ".zip") {
return path, false
}
trimmed := strings.Trim(path, "/")
@@ -681,6 +960,38 @@ func applyPyPISimpleFallback(route *server.HubRoute, path string) (string, bool)
return "/simple/" + trimmed + "/", true
}
func parseComposerDistURL(path string, rawQuery string) (*url.URL, bool) {
if !strings.HasPrefix(path, "/dist/") {
return nil, false
}
trimmed := strings.TrimPrefix(path, "/dist/")
parts := strings.SplitN(trimmed, "/", 3)
if len(parts) < 3 {
return nil, false
}
scheme := parts[0]
host := parts[1]
rest := parts[2]
if scheme == "" || host == "" {
return nil, false
}
if rest == "" {
rest = "/"
} else {
rest = "/" + rest
}
target := &url.URL{
Scheme: scheme,
Host: host,
Path: rest,
RawPath: rest,
}
if rawQuery != "" {
target.RawQuery = rawQuery
}
return target, true
}
type bearerChallenge struct {
Realm string
Service string
@@ -729,7 +1040,11 @@ func parseAuthParams(input string) map[string]string {
return params
}
func (h *Handler) fetchBearerToken(
ctx context.Context,
challenge bearerChallenge,
route *server.HubRoute,
) (string, error) {
if challenge.Realm == "" {
return "", errors.New("bearer realm missing")
}
@@ -762,7 +1077,11 @@ func (h *Handler) fetchBearerToken(ctx context.Context, challenge bearerChalleng
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
return "", fmt.Errorf(
"token request failed: status=%d body=%s",
resp.StatusCode,
strings.TrimSpace(string(body)),
)
}
var tokenResp struct {
@@ -800,7 +1119,15 @@ func isAuthFailure(status int) bool {
}
func (h *Handler) logAuthRetry(route *server.HubRoute, upstream string, requestID string, status int) {
fields := logging.RequestFields(
route.Config.Name,
route.Config.Domain,
route.Config.Type,
route.Config.AuthMode(),
route.ModuleKey,
string(route.RolloutFlag),
false,
)
fields["action"] = "proxy_retry"
fields["upstream"] = upstream
fields["upstream_status"] = status
@@ -812,7 +1139,15 @@ func (h *Handler) logAuthRetry(route *server.HubRoute, upstream string, requestI
}
func (h *Handler) logAuthFailure(route *server.HubRoute, upstream string, requestID string, status int) {
fields := logging.RequestFields(
route.Config.Name,
route.Config.Domain,
route.Config.Type,
route.Config.AuthMode(),
route.ModuleKey,
string(route.RolloutFlag),
false,
)
fields["action"] = "proxy"
fields["upstream"] = upstream
fields["upstream_status"] = status
@@ -823,6 +1158,50 @@ func (h *Handler) logAuthFailure(route *server.HubRoute, upstream string, reques
h.logger.WithFields(fields).Error("proxy_auth_failed")
}
func (h *Handler) rememberETag(route *server.HubRoute, locator cache.Locator, resp *http.Response) {
if resp == nil {
return
}
etag := resp.Header.Get("Docker-Content-Digest")
if etag == "" {
etag = resp.Header.Get("Etag")
}
etag = normalizeETag(etag)
if etag == "" {
return
}
h.etags.Store(h.locatorKey(route, locator), etag)
}
func (h *Handler) cachedETag(route *server.HubRoute, locator cache.Locator) string {
if value, ok := h.etags.Load(h.locatorKey(route, locator)); ok {
if etag, ok := value.(string); ok {
return etag
}
}
return ""
}
func (h *Handler) forgetETag(route *server.HubRoute, locator cache.Locator) {
h.etags.Delete(h.locatorKey(route, locator))
}
func (h *Handler) locatorKey(route *server.HubRoute, locator cache.Locator) string {
hub := locator.HubName
if route != nil && route.Config.Name != "" {
hub = route.Config.Name
}
return hub + "::" + locator.Path
}
func normalizeETag(value string) string {
value = strings.TrimSpace(value)
if value == "" {
return ""
}
return strings.Trim(value, "\"")
}
func ensureProxyHubType(route *server.HubRoute) error {
switch route.Config.Type {
case "docker":
@@ -833,6 +1212,8 @@ func ensureProxyHubType(route *server.HubRoute) error {
return nil
case "pypi":
return nil
case "composer":
return nil
default:
return fmt.Errorf("unsupported hub type: %s", route.Config.Type)
}


@@ -0,0 +1,126 @@
package proxy
import (
"bytes"
"encoding/json"
"io"
"net/http"
"net/url"
"strconv"
"strings"
"golang.org/x/net/html"
"github.com/any-hub/any-hub/internal/server"
)
func (h *Handler) rewritePyPIResponse(route *server.HubRoute, resp *http.Response, path string) (*http.Response, error) {
if resp == nil {
return resp, nil
}
if !strings.HasPrefix(path, "/simple") && path != "/" {
return resp, nil
}
bodyBytes, err := io.ReadAll(resp.Body)
if err != nil {
return resp, err
}
resp.Body.Close()
rewritten, contentType, err := rewritePyPIBody(bodyBytes, resp.Header.Get("Content-Type"), route.Config.Domain)
if err != nil {
resp.Body = io.NopCloser(bytes.NewReader(bodyBytes))
return resp, err
}
resp.Body = io.NopCloser(bytes.NewReader(rewritten))
resp.ContentLength = int64(len(rewritten))
resp.Header.Set("Content-Length", strconv.Itoa(len(rewritten)))
if contentType != "" {
resp.Header.Set("Content-Type", contentType)
}
resp.Header.Del("Content-Encoding")
return resp, nil
}
func rewritePyPIBody(body []byte, contentType string, domain string) ([]byte, string, error) {
lowerCT := strings.ToLower(contentType)
if strings.Contains(lowerCT, "application/vnd.pypi.simple.v1+json") || strings.HasPrefix(strings.TrimSpace(string(body)), "{") {
data := map[string]interface{}{}
if err := json.Unmarshal(body, &data); err != nil {
return body, contentType, err
}
if files, ok := data["files"].([]interface{}); ok {
for _, entry := range files {
if fileMap, ok := entry.(map[string]interface{}); ok {
if urlValue, ok := fileMap["url"].(string); ok {
fileMap["url"] = rewritePyPIFileURL(domain, urlValue)
}
}
}
}
rewriteBytes, err := json.Marshal(data)
if err != nil {
return body, contentType, err
}
return rewriteBytes, "application/vnd.pypi.simple.v1+json", nil
}
rewrittenHTML, err := rewritePyPIHTML(body, domain)
if err != nil {
return body, contentType, err
}
return rewrittenHTML, "text/html; charset=utf-8", nil
}
func rewritePyPIHTML(body []byte, domain string) ([]byte, error) {
node, err := html.Parse(bytes.NewReader(body))
if err != nil {
return nil, err
}
rewriteHTMLNode(node, domain)
var buf bytes.Buffer
if err := html.Render(&buf, node); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
func rewriteHTMLNode(n *html.Node, domain string) {
if n.Type == html.ElementNode {
rewriteHTMLAttributes(n, domain)
}
for child := n.FirstChild; child != nil; child = child.NextSibling {
rewriteHTMLNode(child, domain)
}
}
func rewriteHTMLAttributes(n *html.Node, domain string) {
for i, attr := range n.Attr {
switch attr.Key {
case "href", "data-dist-info-metadata", "data-core-metadata":
if strings.HasPrefix(attr.Val, "http://") || strings.HasPrefix(attr.Val, "https://") {
n.Attr[i].Val = rewritePyPIFileURL(domain, attr.Val)
}
}
}
}
func rewritePyPIFileURL(domain, original string) string {
parsed, err := url.Parse(original)
if err != nil || parsed.Scheme == "" || parsed.Host == "" {
return original
}
prefix := "/files/" + parsed.Scheme + "/" + parsed.Host
newURL := url.URL{
Scheme: "https",
Host: domain,
Path: prefix + parsed.Path,
RawQuery: parsed.RawQuery,
Fragment: parsed.Fragment,
}
if raw := parsed.RawPath; raw != "" {
newURL.RawPath = prefix + raw
}
return newURL.String()
}


@@ -0,0 +1,14 @@
package server
import (
"fmt"
"github.com/any-hub/any-hub/internal/hubmodule"
)
func moduleMetadataForKey(key string) (hubmodule.ModuleMetadata, error) {
if meta, ok := hubmodule.Resolve(key); ok {
return meta, nil
}
return hubmodule.ModuleMetadata{}, fmt.Errorf("module %s is not registered", key)
}


@@ -10,6 +10,8 @@ import (
"time"
"github.com/any-hub/any-hub/internal/config"
"github.com/any-hub/any-hub/internal/hubmodule"
"github.com/any-hub/any-hub/internal/hubmodule/legacy"
)
// HubRoute bundles a hub's configuration with derived attributes (such as the cache TTL and the parsed Upstream/Proxy URL
@@ -24,6 +26,13 @@ type HubRoute struct {
// UpstreamURL/ProxyURL are resolved in advance when the Registry is built, so later requests can reuse them quickly.
UpstreamURL *url.URL
ProxyURL *url.URL
// ModuleKey/Module record the module this hub selected and its metadata, for logging and observability.
ModuleKey string
Module hubmodule.ModuleMetadata
// CacheStrategy is the final result of merging the module's default strategy with hub-level overrides.
CacheStrategy hubmodule.CacheStrategyProfile
// RolloutFlag reflects this hub's legacy → modular migration state, for logs/diagnostics.
RolloutFlag legacy.RolloutFlag
}
// HubRegistry resolves Host / Host:port to a HubRoute; all hubs share a single listen port.
@@ -96,6 +105,13 @@ func (r *HubRegistry) List() []HubRoute {
}
func buildHubRoute(cfg *config.Config, hub config.HubConfig) (*HubRoute, error) {
flag := hub.RolloutFlagValue()
effectiveKey := config.EffectiveModuleKey(hub.Module, flag)
meta, err := moduleMetadataForKey(effectiveKey)
if err != nil {
return nil, fmt.Errorf("hub %s: %w", hub.Name, err)
}
upstreamURL, err := url.Parse(hub.Upstream)
if err != nil {
return nil, fmt.Errorf("invalid upstream for hub %s: %w", hub.Name, err)
@@ -109,12 +125,20 @@ func buildHubRoute(cfg *config.Config, hub config.HubConfig) (*HubRoute, error)
}
}
effectiveTTL := cfg.EffectiveCacheTTL(hub)
runtime := config.BuildHubRuntime(hub, meta, effectiveTTL, flag)
legacy.RecordAdapterState(hub.Name, runtime.Module.Key, flag)
return &HubRoute{
Config: hub,
ListenPort: cfg.Global.ListenPort,
CacheTTL: effectiveTTL,
UpstreamURL: upstreamURL,
ProxyURL: proxyURL,
ModuleKey: runtime.Module.Key,
Module: runtime.Module,
CacheStrategy: runtime.CacheStrategy,
RolloutFlag: runtime.Rollout,
}, nil
}


@@ -5,6 +5,7 @@ import (
"time"
"github.com/any-hub/any-hub/internal/config"
"github.com/any-hub/any-hub/internal/hubmodule/legacy"
)
func TestHubRegistryLookupByHost(t *testing.T) {
@@ -48,6 +49,15 @@ func TestHubRegistryLookupByHost(t *testing.T) {
if route.CacheTTL != cfg.EffectiveCacheTTL(route.Config) {
t.Errorf("cache ttl mismatch: got %s", route.CacheTTL)
}
if route.CacheStrategy.TTLHint != route.CacheTTL {
t.Errorf("cache strategy ttl mismatch: %s vs %s", route.CacheStrategy.TTLHint, route.CacheTTL)
}
if route.CacheStrategy.ValidationMode == "" {
t.Fatalf("cache strategy validation mode should not be empty")
}
if route.RolloutFlag != legacy.RolloutLegacyOnly {
t.Fatalf("default rollout flag should be legacy-only")
}
if route.UpstreamURL.String() != "https://registry-1.docker.io" {
t.Errorf("unexpected upstream URL: %s", route.UpstreamURL)


@@ -79,6 +79,10 @@ func requestContextMiddleware(opts AppOptions) fiber.Handler {
c.Locals(contextKeyRequestID, reqID)
c.Set("X-Request-ID", reqID)
if isDiagnosticsPath(string(c.Request().URI().Path())) {
return c.Next()
}
rawHost := strings.TrimSpace(getHostHeader(c))
route, ok := opts.Registry.Lookup(rawHost)
if !ok {
@@ -146,6 +150,8 @@ func ensureRouterHubType(route *HubRoute) error {
return nil
case "pypi":
return nil
case "composer":
return nil
default:
return fmt.Errorf("unsupported hub type: %s", route.Config.Type)
}
@@ -153,13 +159,19 @@ func ensureRouterHubType(route *HubRoute) error {
func renderTypeUnsupported(c fiber.Ctx, logger *logrus.Logger, route *HubRoute, err error) error {
fields := logrus.Fields{
"action": "hub_type_check",
"hub": route.Config.Name,
"hub_type": route.Config.Type,
"module_key": route.ModuleKey,
"rollout_flag": string(route.RolloutFlag),
"error": "hub_type_unsupported",
}
logger.WithFields(fields).Error(err.Error())
return c.Status(fiber.StatusNotImplemented).JSON(fiber.Map{
"error": "hub_type_unsupported",
})
}
func isDiagnosticsPath(path string) bool {
return strings.HasPrefix(path, "/-/")
}


@@ -0,0 +1,114 @@
package routes
import (
"sort"
"strings"
"time"
"github.com/gofiber/fiber/v3"
"github.com/any-hub/any-hub/internal/hubmodule"
"github.com/any-hub/any-hub/internal/server"
)
// RegisterModuleRoutes exposes the /-/modules diagnostics endpoint so SREs can query module-to-hub bindings.
func RegisterModuleRoutes(app *fiber.App, registry *server.HubRegistry) {
if app == nil || registry == nil {
return
}
app.Get("/-/modules", func(c fiber.Ctx) error {
payload := fiber.Map{
"modules": encodeModules(hubmodule.List()),
"hubs": encodeHubBindings(registry.List()),
}
return c.JSON(payload)
})
app.Get("/-/modules/:key", func(c fiber.Ctx) error {
key := strings.ToLower(strings.TrimSpace(c.Params("key")))
if key == "" {
return c.Status(fiber.StatusBadRequest).JSON(fiber.Map{"error": "module_key_required"})
}
meta, ok := hubmodule.Resolve(key)
if !ok {
return c.Status(fiber.StatusNotFound).JSON(fiber.Map{"error": "module_not_found"})
}
return c.JSON(encodeModule(meta))
})
}
type modulePayload struct {
Key string `json:"key"`
Description string `json:"description"`
MigrationState hubmodule.MigrationState `json:"migration_state"`
SupportedProtocols []string `json:"supported_protocols"`
CacheStrategy cacheStrategyPayload `json:"cache_strategy"`
}
type cacheStrategyPayload struct {
TTLSeconds int64 `json:"ttl_seconds"`
ValidationMode string `json:"validation_mode"`
DiskLayout string `json:"disk_layout"`
RequiresMetadataFile bool `json:"requires_metadata_file"`
SupportsStreamingWrite bool `json:"supports_streaming_write"`
}
type hubBindingPayload struct {
HubName string `json:"hub_name"`
ModuleKey string `json:"module_key"`
Domain string `json:"domain"`
Port int `json:"port"`
Rollout string `json:"rollout_flag"`
}
func encodeModules(mods []hubmodule.ModuleMetadata) []modulePayload {
if len(mods) == 0 {
return nil
}
sort.Slice(mods, func(i, j int) bool {
return mods[i].Key < mods[j].Key
})
result := make([]modulePayload, 0, len(mods))
for _, meta := range mods {
result = append(result, encodeModule(meta))
}
return result
}
func encodeModule(meta hubmodule.ModuleMetadata) modulePayload {
strategy := meta.CacheStrategy
return modulePayload{
Key: meta.Key,
Description: meta.Description,
MigrationState: meta.MigrationState,
SupportedProtocols: append([]string(nil), meta.SupportedProtocols...),
CacheStrategy: cacheStrategyPayload{
TTLSeconds: int64(strategy.TTLHint / time.Second),
ValidationMode: string(strategy.ValidationMode),
DiskLayout: strategy.DiskLayout,
RequiresMetadataFile: strategy.RequiresMetadataFile,
SupportsStreamingWrite: strategy.SupportsStreamingWrite,
},
}
}
func encodeHubBindings(routes []server.HubRoute) []hubBindingPayload {
if len(routes) == 0 {
return nil
}
sort.Slice(routes, func(i, j int) bool {
return routes[i].Config.Name < routes[j].Config.Name
})
result := make([]hubBindingPayload, 0, len(routes))
for _, route := range routes {
result = append(result, hubBindingPayload{
HubName: route.Config.Name,
ModuleKey: route.ModuleKey,
Domain: route.Config.Domain,
Port: route.ListenPort,
Rollout: string(route.RolloutFlag),
})
}
return result
}

main.go

@@ -10,9 +10,11 @@ import (
"github.com/any-hub/any-hub/internal/cache"
"github.com/any-hub/any-hub/internal/config"
"github.com/any-hub/any-hub/internal/hubmodule"
"github.com/any-hub/any-hub/internal/logging"
"github.com/any-hub/any-hub/internal/proxy"
"github.com/any-hub/any-hub/internal/server"
"github.com/any-hub/any-hub/internal/server/routes"
"github.com/any-hub/any-hub/internal/version"
)
@@ -81,6 +83,8 @@ func run(opts cliOptions) int {
httpClient := server.NewUpstreamClient(cfg)
proxyHandler := proxy.NewHandler(httpClient, logger, store)
forwarder := proxy.NewForwarder(proxyHandler)
proxy.RegisterModuleHandler(hubmodule.DefaultModuleKey(), proxyHandler)
fields := logging.BaseFields("startup", opts.configPath)
fields["hubs"] = len(cfg.Hubs)
@@ -89,7 +93,7 @@ func run(opts cliOptions) int {
fields["version"] = version.Full()
logger.WithFields(fields).Info("配置加载完成")
if err := startHTTPServer(cfg, registry, forwarder, logger); err != nil {
fmt.Fprintf(stdErr, "HTTP 服务启动失败: %v\n", err)
return 1
}
@@ -130,7 +134,12 @@ func parseCLIFlags(args []string) (cliOptions, error) {
}, nil
}
func startHTTPServer(
cfg *config.Config,
registry *server.HubRegistry,
proxyHandler server.ProxyHandler,
logger *logrus.Logger,
) error {
port := cfg.Global.ListenPort
app, err := server.NewApp(server.AppOptions{
Logger: logger,
@@ -141,6 +150,7 @@ func startHTTPServer(cfg *config.Config, registry *server.HubRegistry, proxyHand
if err != nil {
return err
}
routes.RegisterModuleRoutes(app, registry)
logger.WithFields(logrus.Fields{
"action": "listen",


@@ -0,0 +1,34 @@
# Specification Quality Checklist: Modular Proxy & Cache Segmentation
**Purpose**: Validate specification completeness and quality before proceeding to planning
**Created**: 2025-11-14
**Feature**: /home/rogee/Projects/any-hub/specs/004-modular-proxy-cache/spec.md
## Content Quality
- [x] No implementation details (languages, frameworks, APIs)
- [x] Focused on user value and business needs
- [x] Written for non-technical stakeholders
- [x] All mandatory sections completed
## Requirement Completeness
- [x] No [NEEDS CLARIFICATION] markers remain
- [x] Requirements are testable and unambiguous
- [x] Success criteria are measurable
- [x] Success criteria are technology-agnostic (no implementation details)
- [x] All acceptance scenarios are defined
- [x] Edge cases are identified
- [x] Scope is clearly bounded
- [x] Dependencies and assumptions identified
## Feature Readiness
- [x] All functional requirements have clear acceptance criteria
- [x] User scenarios cover primary flows
- [x] Feature meets measurable outcomes defined in Success Criteria
- [x] No implementation details leak into specification
## Notes
- Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`


@@ -0,0 +1,99 @@
openapi: 3.0.3
info:
title: Any-Hub Module Registry API
version: 0.1.0
description: |
Internal diagnostics endpoint exposing registered proxy+cache modules and per-hub bindings.
servers:
- url: http://localhost:3000
paths:
/-/modules:
get:
summary: List registered modules and hub bindings
tags: [modules]
responses:
'200':
description: Module summary
content:
application/json:
schema:
type: object
properties:
modules:
type: array
items:
$ref: '#/components/schemas/Module'
hubs:
type: array
items:
$ref: '#/components/schemas/HubBinding'
/-/modules/{key}:
get:
summary: Inspect a single module metadata record
tags: [modules]
parameters:
- in: path
name: key
schema:
type: string
required: true
description: Module key, e.g., npm-tarball
responses:
'200':
description: Module metadata
content:
application/json:
schema:
$ref: '#/components/schemas/Module'
'404':
description: Module not found
components:
schemas:
Module:
type: object
required: [key, description, migration_state, cache_strategy]
properties:
key:
type: string
description:
type: string
migration_state:
type: string
enum: [legacy, beta, ga]
supported_protocols:
type: array
items:
type: string
cache_strategy:
$ref: '#/components/schemas/CacheStrategy'
CacheStrategy:
type: object
properties:
ttl_seconds:
type: integer
minimum: 1
validation_mode:
type: string
enum: [etag, last-modified, never]
disk_layout:
type: string
requires_metadata_file:
type: boolean
supports_streaming_write:
type: boolean
HubBinding:
type: object
required: [hub_name, module_key, domain, port]
properties:
hub_name:
type: string
module_key:
type: string
domain:
type: string
port:
type: integer
rollout_flag:
type: string
enum: [legacy-only, dual, modular]


@@ -0,0 +1,94 @@
# Data Model: Modular Proxy & Cache Segmentation
## Overview
The modular architecture introduces explicit metadata describing which proxy+cache module each hub uses, how modules register themselves, and what cache policies they expose. The underlying storage layout now matches the upstream request path (`StoragePath/<Hub>/<path>`), simplifying disk management while metadata ensures the runtime can resolve modules, enforce compatibility, and migrate legacy hubs incrementally.
## Entities
### 1. HubConfigEntry
- **Source**: `[[Hub]]` blocks in `config.toml` (decoded via `internal/config`).
- **Fields**:
- `Name` *(string, required)* unique per config; used as hub identifier and storage namespace.
- `Domain` *(string, required)* hostname clients access; must be unique per process.
- `Port` *(int, required)* listen port; validated to 1-65535.
- `Upstream` *(string, required)* base URL for upstream registry; must be HTTPS or explicitly whitelisted HTTP.
- `Module` *(string, optional, default `"legacy"`)* key resolved through module registry. Validation ensures module exists at load time.
- `CacheTTL`, `Proxy`, and other overrides *(optional)* reuse existing schema; modules may read these via dependency injection.
- **Relationships**:
- `HubConfigEntry.Module` → `ModuleMetadata.Key` (many-to-one).
- **Validation Rules**:
- Missing `Module` implicitly maps to `legacy` to preserve backward compatibility.
- Changing `Module` requires a migration plan; config loader logs module name for observability.
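The binding rules above amount to one extra key per hub. A minimal sketch of such an entry follows; the hub name, domain, and upstream are placeholder values, and field casing follows this data model rather than the actual loader:

```toml
# Hypothetical [[Hub]] entry opting into a modular implementation.
[[Hub]]
Name     = "npm-mirror"                  # unique identifier; also the storage namespace
Domain   = "npm.mirror.internal"         # hostname clients use, unique per process
Port     = 3000                          # validated to 1-65535
Upstream = "https://registry.npmjs.org"  # HTTPS (or explicitly whitelisted HTTP)
Module   = "npm-tarball"                 # optional; omitting it falls back to "legacy"
```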
### 2. ModuleMetadata
- **Fields**:
- `Key` *(string, required)* canonical identifier (e.g., `npm-tarball`).
- `Description` *(string)* human-readable summary.
- `SupportedProtocols` *([]string)* e.g., `HTTP`, `HTTPS`, `OCI`.
- `CacheStrategy` *(CacheStrategyProfile)* embedded policy descriptor.
- `MigrationState` *(enum: `legacy`, `beta`, `ga`)* used for rollout dashboards.
- `Factory` *(function)* constructs proxy+cache handlers; not serialized but referenced in registry code.
- **Relationships**:
- One `ModuleMetadata` may serve many hubs via config binding.
### 3. ModuleRegistry
- **Representation**: in-memory map maintained by `internal/hubmodule/registry.go` at process boot.
- **Fields**:
- `Modules` *(map[string]ModuleMetadata)* keyed by `ModuleMetadata.Key`.
- `DefaultKey` *(string)* defaults to `legacy`.
- **Behavior**:
- `Register(meta ModuleMetadata)` called during init of each module package.
- `Resolve(key string) (ModuleMetadata, error)` used by router bootstrap; errors bubble to config validation.
- **Constraints**:
- Duplicate registrations fail fast.
- Registry must export a list function for diagnostics (`List()`), enabling observability endpoints if needed.
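The register/resolve behavior described above can be sketched as follows. This is a minimal stand-in, not the actual `internal/hubmodule` API: the real metadata type carries more fields (protocols, cache strategy, factory), and the real registry errors rather than returning a boolean.

```go
package main

import (
	"fmt"
	"strings"
)

// ModuleMetadata is a trimmed-down stand-in for the registry's value type.
type ModuleMetadata struct {
	Key         string
	Description string
}

// modules is keyed case-insensitively, matching the uniqueness rules above.
var modules = map[string]ModuleMetadata{}

// Register fails fast on duplicate keys, as the data model requires.
func Register(meta ModuleMetadata) error {
	key := strings.ToLower(meta.Key)
	if _, exists := modules[key]; exists {
		return fmt.Errorf("module %s already registered", meta.Key)
	}
	modules[key] = meta
	return nil
}

// Resolve returns the metadata for a key, reporting whether it was found.
func Resolve(key string) (ModuleMetadata, bool) {
	meta, ok := modules[strings.ToLower(key)]
	return meta, ok
}

func main() {
	_ = Register(ModuleMetadata{Key: "legacy", Description: "shared legacy implementation"})
	fmt.Println(Register(ModuleMetadata{Key: "Legacy"}) != nil) // duplicate, case-insensitive
	_, ok := Resolve("LEGACY")
	fmt.Println(ok)
}
```

Module packages would call `Register` from their `init()`, so a duplicate key aborts startup rather than silently shadowing an existing module.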
### 4. CacheStrategyProfile
- **Fields**:
- `TTL` *(duration)* default TTL per module; hubs may override via config.
- `ValidationMode` *(enum: `etag`, `last-modified`, `never`)* defines revalidation behavior.
- `DiskLayout` *(string)* description of path mapping rules (default `raw_path`, i.e., exact upstream path without suffix).
- `RequiresMetadataFile` *(bool)* whether `.meta` entries are required.
- `SupportsStreamingWrite` *(bool)* indicates module can write cache while proxying upstream.
- **Relationships**:
- Owned by `ModuleMetadata`; not independently referenced.
- **Validation**:
- TTL must be positive.
- Modules flagged as `SupportsStreamingWrite=false` must document fallback behavior before registration.
### 5. LegacyAdapterState
- **Purpose**: Tracks which hubs still run through the old shared implementation to support progressive migration.
- **Fields**:
- `HubName` *(string)* references `HubConfigEntry.Name`.
- `RolloutFlag` *(enum: `legacy-only`, `dual`, `modular`)* indicates traffic split for that hub.
- `FallbackDeadline` *(timestamp, optional)* when legacy path will be removed.
- **Storage**: In-memory map derived from config + environment flags; optionally surfaced via diagnostics endpoint.
## State Transitions
1. **Module Adoption**
- Start: `HubConfigEntry.Module = "legacy"`.
- Transition: operator edits config to new module key, runs validation.
- Result: registry resolves new module, `LegacyAdapterState` updated to `dual` until rollout flag toggled fully.
2. **Cache Strategy Update**
- Start: Module uses default TTL.
- Transition: hub-level override applied in config.
- Result: Module receives override via dependency injection and persists it in module-local settings without affecting other hubs.
3. **Module Registration Lifecycle**
- Start: module package calls `Register` in its `init()`.
- Transition: duplicate key registration rejected; module must rename key or remove old registration.
- Result: `ModuleRegistry.Modules[key]` available during server bootstrap.
## Data Volume & Scale Assumptions
- Module metadata count is small (<20) and loaded entirely in memory.
- Hub count typically <50 per binary, so per-hub module resolution happens at startup and is cached.
- Disk usage remains the dominant storage cost; metadata adds negligible overhead.
## Identity & Uniqueness Rules
- `HubConfigEntry.Name` and `ModuleMetadata.Key` must each be unique (case-insensitive) within a config/process.
- Module registry rejects duplicate keys to avoid ambiguous bindings.


@@ -0,0 +1,117 @@
# Implementation Plan: Modular Proxy & Cache Segmentation
**Branch**: `004-modular-proxy-cache` | **Date**: 2025-11-14 | **Spec**: /home/rogee/Projects/any-hub/specs/004-modular-proxy-cache/spec.md
**Input**: Feature specification from `/specs/004-modular-proxy-cache/spec.md`
**Note**: This template is filled in by the `/speckit.plan` command. See `.specify/templates/commands/plan.md` for the execution workflow.
## Summary
Modularize the proxy and cache layers so every hub type (npm, Docker, PyPI, future ecosystems) implements a self-contained module that conforms to shared interfaces, is registered via config, and exposes hub-specific cache strategies while preserving legacy behavior during phased migration. The work introduces a module registry/factory, per-hub configuration for selecting modules, migration tooling, and observability tags so operators can attribute incidents to specific modules.
## Technical Context
**Language/Version**: Go 1.25+ (statically linked, single-binary delivery)
**Primary Dependencies**: Fiber v3 (HTTP server), Viper (configuration), Logrus + Lumberjack (structured logging & rotation), stdlib `net/http`/`io`
**Storage**: local filesystem cache directory `StoragePath/<Hub>/<path>`; the request path maps directly to the on-disk location
**Testing**: `go test ./...`, using `httptest`, temporary directories, and self-built fake upstreams to verify the config/cache/proxy paths
**Target Platform**: Linux/Unix CLI process managed by systemd/supervisor; anonymous downstream clients
**Project Type**: single Go project (`cmd/` entry + `internal/*` packages)
**Performance Goals**: cache hits are served directly; the origin-fetch path must stream, keeping per-request resident memory <256MB; hit/origin logs must be traceable
**Constraints**: no Web UI or account system; all behavior is driven by a single TOML config; each hub needs its own Domain/Port binding; anonymous access only
**Scale/Scope**: proxy multiple registries (Docker/NPM/Go/PyPI, etc.) for weak-network and offline cache-reuse scenarios
**Module Registry Location**: `internal/hubmodule/registry.go` exposes the register/resolve API; module subpackages live under `internal/hubmodule/<name>/`
**Config Binding for Modules**: the `[[Hub]].Module` field selects the module by name, defaulting to `legacy`; config loading validates that it matches a registered module
## Constitution Check
*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
- The feature remains a "lightweight multi-registry CLI proxy"; it introduces no Web UI, account system, or proxy-unrelated capabilities.
- Only Go plus constitution-approved dependencies are used; any new third-party library must be justified and its review recorded in this plan.
- Behavior is fully driven by `config.toml`; the new `[[Hub]].Module` field has a planned default, validation, and migration strategy.
- The design keeps the cache-first + streaming origin-fetch path and specifies logging/observability for hit/origin-fetch/failure events.
- The plan lists the mandatory test coverage (config parsing, cache read/write, Host-header routing) and Chinese-comment deliverables.
**Gate Status**: ✅ All pre-research gates satisfied; no violations logged in Complexity Tracking.
## Project Structure
### Documentation (this feature)
```text
specs/[###-feature]/
├── plan.md # This file (/speckit.plan command output)
├── research.md # Phase 0 output (/speckit.plan command)
├── data-model.md # Phase 1 output (/speckit.plan command)
├── quickstart.md # Phase 1 output (/speckit.plan command)
├── contracts/ # Phase 1 output (/speckit.plan command)
└── tasks.md # Phase 2 output (/speckit.tasks command - NOT created by /speckit.plan)
```
### Source Code (repository root)
```text
cmd/any-hub/main.go # CLI entrypoint, flag parsing
internal/config/ # TOML loading, defaults, validation
internal/server/ # Fiber server, routing, middleware
internal/cache/ # disk/memory cache and .meta management
internal/proxy/ # upstream access, cache policy, streaming copy
configs/ # sample config.toml (if needed)
tests/ # unit/integration tests under `go test`, using temp dirs
```
**Structure Decision**: Use the single Go project structure; feature code goes into the existing directories above. Any new package or directory must explain its relationship to `internal/*` and come with a maintenance plan.
## Complexity Tracking
> **Fill ONLY if Constitution Check has violations that must be justified**
| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |
## Phase 0 Research
### Unknowns & Tasks
- **Module registry location** → researched Go package placement that keeps modules isolated yet internal.
- **Config binding for modules** → determined safest schema extension and defaults.
- **Dependency best practices** → confirmed singletons for Fiber/Viper/Logrus and storage layout compatibility.
- **Testing harness expectations** → documented shared approach for new modules.
### Output Artifact
- `/home/rogee/Projects/any-hub/specs/004-modular-proxy-cache/research.md` summarizes each decision with rationale and alternatives.
### Impact on Plan
- Technical Context now references concrete package paths and configuration fields.
- Implementation will add `internal/hubmodule/` with registry helpers plus validation wiring in `internal/config`.
## Phase 1 Design & Contracts
### Data Model
- `/home/rogee/Projects/any-hub/specs/004-modular-proxy-cache/data-model.md` defines HubConfigEntry, ModuleMetadata, ModuleRegistry, CacheStrategyProfile, and LegacyAdapterState including validation and state transitions.
### API Contracts
- `/home/rogee/Projects/any-hub/specs/004-modular-proxy-cache/contracts/module-registry.openapi.yaml` introduces a diagnostics API (`GET /-/modules`, `GET /-/modules/{key}`) for observability around module registrations and hub bindings.
### Quickstart Guidance
- `/home/rogee/Projects/any-hub/specs/004-modular-proxy-cache/quickstart.md` walks engineers through adding a module, wiring config, running tests, and verifying logs/storage.
### Agent Context Update
- `.specify/scripts/bash/update-agent-context.sh codex` executed to sync AGENTS.md with Go/Fiber/Viper/logging/storage context relevant to this feature.
### Post-Design Constitution Check
- New diagnostics endpoint remains internal and optional; no UI/login introduced. ✅ Principle I
- Code still single Go binary with existing dependency set. ✅ Principle II
- `Module` field documented with defaults, validation, and migration path; no extra config sources. ✅ Principle III
- Cache strategy enforces the "request path == disk path" layout and streaming origin fetch; the related observability requirements are captured in the contracts. ✅ Principle IV
- Logs/quickstart/test guidance ensure observability and Chinese documentation continue. ✅ Principle V
## Phase 2 Implementation Outlook (pre-tasks)
1. **Module Registry & Interfaces**: Create `internal/hubmodule` package, define shared interfaces, implement registry with tests, and expose diagnostics data source reused by HTTP endpoints.
2. **Config Loader & Validation**: Extend `internal/config/types.go` and `validation.go` to include `Module` with default `legacy`, plus wiring to registry resolution during startup.
3. **Legacy Adapter & Migration Switches**: Provide adapter module that wraps current shared proxy/cache, plus feature flags or config toggles to control rollout states per hub.
4. **Module Implementations**: Carve existing npm/docker/pypi logic into dedicated modules within `internal/hubmodule/`, ensuring the cache writer reuses the original request path and attaches the required telemetry tags.
5. **Observability/Diagnostics**: Implement the `/-/modules` endpoint (Fiber route) and log tags showing `module_key` on cache/proxy events.
6. **Testing**: Add shared test harness for modules, update integration tests to cover mixed legacy + modular hubs, and document commands in README/quickstart.
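The shared interfaces named in step 1 might take a shape like the following. This is a sketch only: the real method set lives in `internal/hubmodule/interfaces.go` and may differ, and `ServeProxy` is a hypothetical method name.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// HubModule is an illustrative shape for the shared module interface:
// one value encapsulates both proxy and cache behavior for a hub type.
type HubModule interface {
	// Key returns the module identifier used in [[Hub]].Module.
	Key() string
	// ServeProxy handles one downstream request, consulting the cache
	// before streaming from upstream.
	ServeProxy(w http.ResponseWriter, r *http.Request) error
}

// legacyModule wraps the existing shared behavior, as the legacy
// adapter in step 3 would.
type legacyModule struct{}

func (legacyModule) Key() string { return "legacy" }

func (legacyModule) ServeProxy(w http.ResponseWriter, r *http.Request) error {
	_, err := io.WriteString(w, "served by legacy adapter")
	return err
}

func main() {
	var m HubModule = legacyModule{}
	fmt.Println(m.Key())
}
```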


@@ -0,0 +1,36 @@
# Quickstart: Modular Proxy & Cache Segmentation
## 1. Prepare Workspace
1. Ensure Go 1.25+ toolchain is installed (`go version`).
2. From repo root, run `go mod tidy` (or `make deps` if defined) to sync modules.
3. Export `ANY_HUB_CONFIG` pointing to your working config (optional).
## 2. Create/Update Hub Module
1. Copy `internal/hubmodule/template/` to `internal/hubmodule/<module-key>/` and rename the package/types.
2. In the new package's `init()`, call `hubmodule.MustRegister(hubmodule.ModuleMetadata{Key: "<module-key>", ...})` to describe the supported protocols, cache strategy, and migration phase.
3. Register runtime behavior (proxy handler) from your module by calling `proxy.RegisterModuleHandler("<module-key>", handler)` during initialization.
4. Add tests under the module directory and run `make modules-test` (delegates to `go test ./internal/hubmodule/...`).
## 3. Bind Module via Config
1. Edit `config.toml` and set `Module = "<module-key>"` inside the target `[[Hub]]` block (omit to use `legacy`).
2. While validating a new module, set `Rollout = "dual"` so you can flip back to legacy without editing other fields.
3. (Optional) Override cache behavior per hub using existing fields (`CacheTTL`, etc.).
4. Run `ANY_HUB_CONFIG=./config.toml go test ./...` (or `make modules-test`) to ensure loader validation passes and the module registry sees your key.
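A hub bound this way might look like the following fragment. `Name`/`Domain` values are placeholders; `Module`, `Rollout`, and `CacheTTL` are the fields referenced in the steps above.

```toml
[[Hub]]
Name = "npm-mirror"       # placeholder hub name
Domain = "npm.hub.local"  # placeholder domain
Type = "npm"
Module = "npm"            # omit to fall back to "legacy"
Rollout = "dual"          # keeps the legacy path one flip away
CacheTTL = "1h"           # optional per-hub override
```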
## 4. Run and Verify
1. Start the binary: `go run ./cmd/any-hub --config ./config.toml`.
2. Use `curl -H "Host: <hub-domain>" http://127.0.0.1:<port>/<path>` to produce traffic, then hit `curl http://127.0.0.1:<port>/-/modules` and confirm the hub binding points to your module with the expected `rollout_flag`.
3. Inspect `./storage/<hub>/` to confirm the cached files mirror the upstream path (no suffix). When a path also has child entries (e.g., `/pkg` metadata plus `/pkg/-/...` tarballs), the metadata payload is stored in a `__content` file under that directory so both artifacts can coexist. PyPI Simple responses rewrite distribution links to `/files/<scheme>/<host>/<path>` so that wheels/tarballs are fetched through the proxy and cached alongside the HTML/JSON index. Verify TTL overrides are propagated.
4. Monitor `logs/any-hub.log` (or the sample `logs/module_migration_sample.log`) to verify each entry exposes `module_key` + `rollout_flag`. Example:
```json
{"action":"proxy","hub":"testhub","module_key":"testhub","rollout_flag":"dual","cache_hit":false,"upstream_status":200}
```
5. Exercise rollback by switching `Rollout = "legacy-only"` (or `Module = "legacy"` if needed) and re-running the traffic to ensure diagnostics/logs show the transition.
## 5. Ship
1. Commit module code + config docs.
2. Update release notes mentioning the module key, migration guidance, and related diagnostics.
3. Monitor cache hit/miss metrics post-deploy; adjust TTL overrides if necessary.
## 6. Attach Validation Artifacts
- Save the JSON snapshot from `/-/modules` and a short log excerpt (see `logs/module_migration_sample.log`) with both legacy + modular hubs present; attach them to the change request so reviewers can confirm you followed the playbook.


@@ -0,0 +1,30 @@
# Research Log: Modular Proxy & Cache Segmentation
## Decision 1: Module Registry Location
- **Decision**: Introduce `internal/hubmodule/` as the root for module implementations plus a `registry.go` that exposes `Register(name ModuleFactory)` and `Resolve(hubType string)` helpers.
- **Rationale**: Keeps new hub-specific code outside `internal/proxy`/`internal/cache` core while still within internal tree; mirrors existing package layout expectations and eases discovery.
- **Alternatives considered**:
- Embed modules under `internal/proxy/<hub>`: rejected because cache + proxy concerns would blend with shared proxy infra, blurring ownership lines.
- Place modules under `pkg/`: rejected since repo avoids exported libraries and wants all runtime code under `internal`.
## Decision 2: Config Binding Field
- **Decision**: Add optional `Module` string field to each `[[Hub]]` block in `config.toml`, defaulting to `"legacy"` to preserve current behavior. Validation ensures the value matches a registered module key.
- **Rationale**: Minimal change to config schema, symmetric across hubs, and allows gradual opt-in by flipping a single field.
- **Alternatives considered**:
- Auto-detect module from `hub.Name`: rejected because naming conventions differ across users and would impede third-party forks.
- Separate `ProxyModule`/`CacheModule` fields: rejected per clarification outcome that modules encapsulate both behaviors.
## Decision 3: Fiber/Viper/Logrus Best Practices for Modular Architecture
- **Decision**: Continue to initialize Fiber/Viper/Logrus exactly once at process start; modules receive interfaces (logger, config handles) instead of initializing their own instances.
- **Rationale**: Prevents duplicate global state and adheres to constitution (single binary, centralized config/logging).
- **Alternatives considered**: Allow modules to spin up custom Fiber groups or loggers—rejected because it complicates shutdown hooks and breaks structured logging consistency.
## Decision 4: Storage Layout Compatibility
- **Decision**: Reuse the original request path directly (`StoragePath/<Hub>/<path>`) so operators can browse cached artifacts without suffix translation; modules share the same layout and rely on directory creation safeguards to avoid traversal issues.
- **Rationale**: Aligns with operational workflows that expect “what you request is what you store,” simplifying manual cache invalidation and disk audits now that we no longer need `.body` indirection.
- **Alternatives considered**: Keep the `.body` suffix or add per-module subdirectories—rejected because suffix-based migrations complicate tooling and dedicated subdirectories fragment cache quotas.
## Decision 5: Testing Strategy
- **Decision**: For each module, enforce a shared test harness that spins a fake upstream using `httptest.Server`, writes to `t.TempDir()` storage, and asserts registry wiring end-to-end via integration tests.
- **Rationale**: Aligns with Technical Context testing guidance while avoiding bespoke harnesses per hub type.
- **Alternatives considered**: Rely solely on unit tests per module—rejected since regressions often arise from wiring/registry mistakes.


@@ -0,0 +1,106 @@
# Feature Specification: Modular Proxy & Cache Segmentation
**Feature Branch**: `004-modular-proxy-cache`
**Created**: 2025-11-14
**Status**: Draft
**Input**: User description: "The project currently routes all proxy logic through one shared proxy/cache layer, so adding or changing an integration must stay compatible with every existing type, which weakens long-term maintainability. Split each proxy/cache type into its own module directory and define a unified abstract interface as the functional contract; modules of different types will duplicate some code, but maintainability improves greatly."
> Constitution alignment (v1.0.0):
> - Preserve the "lightweight, anonymous, CLI multi-registry proxy" positioning: do not introduce a Web UI, account system, or scope unrelated to proxying.
> - The solution must remain a Go 1.25+ single binary; dependencies are limited to Fiber, Viper, Logrus/Lumberjack, and the necessary standard library.
> - All behavior is driven by a single `config.toml`; any new config item must document its field, default, and migration strategy in the spec.
> - The design must preserve the cache-first + streaming path and describe logging/observability needs for hit/origin-fetch/failure cases.
> - Acceptance must include config-parsing, cache read/write, and Host-header binding tests plus the Chinese-comment delivery constraint.
## Clarifications
### Session 2025-11-14
- Q: Should each hub select proxy and cache modules separately or through a single combined module? → A: Single combined module per hub encapsulating proxy + cache behaviors.
## User Scenarios & Testing *(mandatory)*
### User Story 1 - Add A New Hub Type Without Regressions (Priority: P1)
As a platform maintainer, I can scaffold a dedicated proxy + cache module for a new hub type without touching existing hub implementations so I avoid regressions and lengthy reviews.
**Why this priority**: Unlocks safe onboarding of new ecosystems (npm, Docker, PyPI, etc.) which is the primary growth lever.
**Independent Test**: Provision a sample "testhub" type, wire it through config, and run integration tests showing legacy hubs still route correctly.
**Acceptance Scenarios**:
1. **Given** an empty module directory following the prescribed skeleton, **When** the maintainer registers the module via the unified interface, **Then** the hub becomes routable via config with no code changes in other hub modules.
2. **Given** existing hubs running in production, **When** the new hub type is added, **Then** regression tests confirm traffic for other hubs is unchanged and logs correctly identify hub-specific modules.
---
### User Story 2 - Tailor Cache Behavior Per Hub (Priority: P2)
As an SRE, I can choose a cache strategy module that matches a hub's upstream semantics (e.g., npm tarballs vs. metadata) and tune TTL/validation knobs without rewriting shared logic.
**Why this priority**: Cache efficiency and disk safety differ by artifact type; misconfiguration previously caused incidents like "not a directory" errors.
**Independent Test**: Swap cache strategies for one hub in staging and verify cache hit/miss, revalidation, and eviction behavior follow the new module's contract while others remain untouched.
**Acceptance Scenarios**:
1. **Given** a hub referencing cache strategy `npm-tarball`, **When** TTL overrides are defined in config, **Then** only that hub's cache files adopt the overrides and telemetry reports the chosen strategy.
2. **Given** a hub using a streaming proxy that forbids disk writes, **When** the hub switches to a cache-enabled module, **Then** the interface enforces required callbacks (write, validate, purge) before deployment passes.
---
### User Story 3 - Operate Mixed Generations During Migration (Priority: P3)
As a release manager, I can keep legacy shared modules alive while migrating hubs incrementally, with clear observability that highlights which hubs still depend on the old stack.
**Why this priority**: Avoids risky flag days and allows gradual cutovers aligned with hub traffic peaks.
**Independent Test**: Run a deployment where half the hubs use the modular stack and half remain on the legacy stack, verifying routing table, logging, and alerts distinguish both paths.
**Acceptance Scenarios**:
1. **Given** hubs split between legacy and new modules, **When** traffic flows through both, **Then** logs, metrics, and config dumps tag each request path with its module name for debugging.
2. **Given** a hub scheduled for migration, **When** the rollout flag switches it to the modular implementation, **Then** rollback toggles exist to return to legacy routing within one command.
---
### Edge Cases
- What happens when config references a hub type whose proxy/cache module has not been registered? System must fail fast during config validation with actionable errors.
- How does the system handle partial migrations where legacy cache files conflict with new module layouts? Must auto-migrate or isolate on first access to prevent `ENOTDIR`.
- How is observability handled when a module panics or returns invalid data? The interface must standardize error propagation so circuit breakers/logging stay consistent.
## Requirements *(mandatory)*
### Functional Requirements
- **FR-001**: Provide explicit proxy and cache interfaces describing the operations (request admission, upstream fetch, cache read/write/invalidation, observability hooks) that every hub-specific module must implement.
- **FR-002**: Restructure the codebase so each hub type registers a single module directory that owns both proxy and cache behaviors (optional internal subpackages allowed) while sharing only the common interfaces; no hub-specific logic may leak into the shared adapters.
- **FR-003**: Implement a registry or factory that maps the `config.toml` hub definition to the corresponding proxy/cache module and fails validation if no module is found.
- **FR-004**: Allow hub-level overrides for cache behaviors (TTL, validation strategy, disk layout) that modules can opt in to, with documented defaults and validation of allowed ranges.
- **FR-005**: Maintain backward compatibility by providing a legacy adapter that wraps the existing shared proxy/cache until all hubs migrate, including feature flags to switch per hub.
- **FR-006**: Ensure runtime telemetry (logs, metrics, tracing spans) include the module identifier so operators can attribute failures or latency to a specific hub module.
- **FR-007**: Deliver migration guidance and developer documentation outlining how to add a new module, required tests, and expected directory structure.
- **FR-008**: Update automated tests (unit + integration) so each module can be exercised independently and regression suites cover mixed legacy/new deployments.
### Key Entities *(include if feature involves data)*
- **Hub Module**: Represents a cohesive proxy+cache implementation for a specific ecosystem; attributes include supported protocols, cache strategy hooks, telemetry tags, and configuration constraints.
- **Module Registry**: Describes the mapping between hub names/types in config and their module implementations; stores module metadata (version, status, migration flag) for validation and observability.
- **Cache Strategy Profile**: Captures the policy knobs a module exposes (TTL, validation method, disk layout, eviction rules) and the allowed override values defined per hub.
### Assumptions
- Existing hubs (npm, Docker, PyPI) will be migrated sequentially; legacy adapters remain available until the last hub switches.
- Engineers adding a new hub type can modify configuration schemas and documentation but not core runtime dependencies.
- Telemetry stack (logs/metrics) already exists and only requires additional tags; no new observability backend is needed.
## Success Criteria *(mandatory)*
### Measurable Outcomes
- **SC-001**: A new hub type can be added by touching only its module directory plus configuration (≤2 additional files) and passes the module's test suite within one working day.
- **SC-002**: Regression test suites show zero failing cases for unchanged hubs after enabling the modular architecture (baseline established before rollout).
- **SC-003**: Configuration validation rejects 100% of hubs that reference unregistered modules, preventing runtime panics in staging or production.
- **SC-004**: Operational logs for proxy and cache events include the module identifier in 100% of entries, enabling SREs to scope incidents in under 5 minutes.


@@ -0,0 +1,108 @@
# Tasks: Modular Proxy & Cache Segmentation
**Input**: Design documents from `/specs/004-modular-proxy-cache/`
**Prerequisites**: plan.md, spec.md, research.md, data-model.md, contracts/, quickstart.md
**Tests**: Must cover config parsing (`internal/config`), cache read/write (`internal/cache` + modules), proxy hit/origin-fetch (`internal/proxy`), and Host-header binding & logging (`internal/server`).
## Phase 1: Setup (Shared Infrastructure)
- [X] T001 Scaffold `internal/hubmodule/` package with `doc.go` + `README.md` describing module contracts
- [X] T002 [P] Add `modules-test` target to `Makefile` running `go test ./internal/hubmodule/...` for future CI hooks
---
## Phase 2: Foundational (Blocking Prerequisites)
- [X] T003 Create shared module interfaces + registry in `internal/hubmodule/interfaces.go` and `internal/hubmodule/registry.go`
- [X] T004 Extend config schema with `[[Hub]].Module` defaults/validation plus sample configs in `internal/config/{types.go,validation.go,loader.go}` and `configs/*.toml`
- [X] T005 [P] Wire server bootstrap to resolve modules once and inject into proxy/cache layers (`internal/server/bootstrap.go`, `internal/proxy/handler.go`)
**Checkpoint**: Registry + config plumbing complete; user story work may begin.
---
## Phase 3: User Story 1 - Add A New Hub Type Without Regressions (Priority: P1) 🎯 MVP
**Goal**: Allow engineers to add a dedicated proxy+cache module without modifying existing hubs.
**Independent Test**: Register a `testhub` module, enable it via config, and run integration tests proving other hubs remain unaffected.
### Tests
- [X] T006 [P] [US1] Add registry unit tests covering register/resolve/list/dedup in `internal/hubmodule/registry_test.go`
- [X] T007 [P] [US1] Add integration test proving new module routing isolation in `tests/integration/module_routing_test.go`
### Implementation
- [X] T008 [US1] Implement `legacy` adapter module that wraps current shared proxy/cache in `internal/hubmodule/legacy/legacy_module.go`
- [X] T009 [US1] Refactor server/proxy wiring to resolve modules per hub (`internal/server/router.go`, `internal/proxy/forwarder.go`)
- [X] T010 [P] [US1] Create reusable module template with Chinese comments under `internal/hubmodule/template/module.go`
- [X] T011 [US1] Update quickstart + README to document module creation and config binding (`specs/004-modular-proxy-cache/quickstart.md`, `README.md`)
---
## Phase 4: User Story 2 - Tailor Cache Behavior Per Hub (Priority: P2)
**Goal**: Enable per-hub cache strategies/TTL overrides while keeping modules isolated.
**Independent Test**: Swap a hub to a cache strategy module, adjust TTL overrides, and confirm telemetry/logs reflect the new policy without affecting other hubs.
### Tests
- [X] T012 [P] [US2] Add cache strategy override integration test validating TTL + revalidation paths in `tests/integration/cache_strategy_override_test.go`
- [X] T013 [P] [US2] Add module-level cache strategy unit tests in `internal/hubmodule/npm/module_test.go`
### Implementation
- [X] T014 [US2] Implement `CacheStrategyProfile` helpers and injection plumbing (`internal/hubmodule/strategy.go`, `internal/cache/writer.go`)
- [X] T015 [US2] Bind hub-level overrides to strategy metadata via config/runtime structures (`internal/config/types.go`, `internal/config/runtime.go`)
- [X] T016 [US2] Update existing modules (npm/docker/pypi) to declare strategies + honor overrides (`internal/hubmodule/{npm,docker,pypi}/module.go`)
---
## Phase 5: User Story 3 - Operate Mixed Generations During Migration (Priority: P3)
**Goal**: Support dual-path deployments with diagnostics/logging to track legacy vs. modular hubs.
**Independent Test**: Run mixed legacy/modular hubs, flip rollout flags, and confirm logs + diagnostics show module ownership and allow rollback.
### Tests
- [X] T017 [P] [US3] Add dual-mode integration test covering rollout toggle + rollback in `tests/integration/legacy_adapter_toggle_test.go`
- [X] T018 [P] [US3] Add diagnostics endpoint contract test for `/-/modules` in `tests/integration/module_diagnostics_test.go`
### Implementation
- [X] T019 [US3] Implement `LegacyAdapterState` tracker + rollout flag parsing (`internal/hubmodule/legacy/state.go`, `internal/config/runtime_flags.go`)
- [X] T020 [US3] Implement Fiber handler + routing for `/-/modules` diagnostics (`internal/server/routes/modules.go`, `internal/server/router.go`)
- [X] T021 [US3] Add structured log fields (`module_key`, `rollout_flag`) across logging middleware (`internal/server/middleware/logging.go`, `internal/proxy/logging.go`)
- [X] T022 [US3] Document operational playbook for phased migration (`docs/operations/migration.md`)
---
## Phase 6: Polish & Cross-Cutting Concerns
- [X] T023 [P] Add Chinese comments + GoDoc for new interfaces/modules (`internal/hubmodule/**/*.go`)
- [X] T024 Validate quickstart by running module creation flow end-to-end and capture sample logs (`specs/004-modular-proxy-cache/quickstart.md`, `logs/`)
---
## Dependencies & Execution Order
1. **Phase 1 → Phase 2**: Setup must finish before registry/config work begins.
2. **Phase 2 → User Stories**: Module registry + config binding are prerequisites for all stories.
3. **User Stories Priority**: US1 (P1) delivers MVP and unblocks US2/US3; US2 & US3 can run in parallel after US1 if separate modules/files.
4. **Tests before Code**: For each story, write failing tests (T006/T007, T012/T013, T017/T018) before implementation tasks in that story.
5. **Polish**: Execute after all targeted user stories complete.
## Parallel Execution Examples
- **Setup**: T001 (docs) and T002 (Makefile) can run concurrently.
- **US1**: T006 registry tests and T007 routing tests can run in parallel while separate engineers tackle T008/T010.
- **US2**: T012 integration test and T013 unit test proceed concurrently; T014/T015 can run in parallel once T012/T013 drafted.
- **US3**: T017 rollout test and T018 diagnostics test work independently before T019-T021 wiring.
## Implementation Strategy
1. Deliver MVP by completing Phases 1-3 (US1) and verifying new module onboarding works end-to-end.
2. Iterate with US2 for cache flexibility, ensuring overrides are testable independently.
3. Layer US3 for migration observability and rollback safety.
4. Finish with Polish tasks to document and validate the workflow.


@@ -0,0 +1,197 @@
package integration

import (
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
	"time"

	"github.com/gofiber/fiber/v3"
	"github.com/sirupsen/logrus"

	"github.com/any-hub/any-hub/internal/cache"
	"github.com/any-hub/any-hub/internal/config"
	"github.com/any-hub/any-hub/internal/hubmodule"
	"github.com/any-hub/any-hub/internal/proxy"
	"github.com/any-hub/any-hub/internal/server"
)

func TestCacheStrategyOverrides(t *testing.T) {
	t.Run("ttl defers revalidation until expired", func(t *testing.T) {
		stub := newUpstreamStub(t, upstreamNPM)
		defer stub.Close()
		storageDir := t.TempDir()
		ttl := 50 * time.Millisecond
		cfg := &config.Config{
			Global: config.GlobalConfig{
				ListenPort:  6100,
				CacheTTL:    config.Duration(time.Second),
				StoragePath: storageDir,
			},
			Hubs: []config.HubConfig{
				{
					Name:     "npm-ttl",
					Domain:   "ttl.npm.local",
					Type:     "npm",
					Module:   "npm",
					Upstream: stub.URL,
					CacheTTL: config.Duration(ttl),
				},
			},
		}
		app := newStrategyTestApp(t, cfg)
		doRequest := func() *http.Response {
			req := httptest.NewRequest(http.MethodGet, "http://ttl.npm.local/lodash", nil)
			req.Host = "ttl.npm.local"
			resp, err := app.Test(req)
			if err != nil {
				t.Fatalf("app.Test error: %v", err)
			}
			return resp
		}
		resp := doRequest()
		if resp.StatusCode != fiber.StatusOK {
			t.Fatalf("expected 200, got %d", resp.StatusCode)
		}
		if hit := resp.Header.Get("X-Any-Hub-Cache-Hit"); hit != "false" {
			t.Fatalf("first request should be miss, got %s", hit)
		}
		resp.Body.Close()
		resp2 := doRequest()
		if hit := resp2.Header.Get("X-Any-Hub-Cache-Hit"); hit != "true" {
			t.Fatalf("second request should hit cache before TTL, got %s", hit)
		}
		resp2.Body.Close()
		if headCount := countRequests(stub.Requests(), http.MethodHead, "/lodash"); headCount != 0 {
			t.Fatalf("expected no HEAD before TTL expiry, got %d", headCount)
		}
		if getCount := countRequests(stub.Requests(), http.MethodGet, "/lodash"); getCount != 1 {
			t.Fatalf("upstream should be hit once before TTL expiry, got %d", getCount)
		}
		time.Sleep(ttl * 2)
		resp3 := doRequest()
		if hit := resp3.Header.Get("X-Any-Hub-Cache-Hit"); hit != "true" {
			body, _ := io.ReadAll(resp3.Body)
			resp3.Body.Close()
			t.Fatalf("expected cached response after HEAD revalidation, got %s body=%s", hit, string(body))
		}
		resp3.Body.Close()
		if headCount := countRequests(stub.Requests(), http.MethodHead, "/lodash"); headCount != 1 {
			t.Fatalf("expected single HEAD after TTL expiry, got %d", headCount)
		}
		if getCount := countRequests(stub.Requests(), http.MethodGet, "/lodash"); getCount != 1 {
			t.Fatalf("upstream GET count should remain 1, got %d", getCount)
		}
	})
	t.Run("validation disabled falls back to refetch", func(t *testing.T) {
		stub := newUpstreamStub(t, upstreamNPM)
		defer stub.Close()
		storageDir := t.TempDir()
		ttl := 25 * time.Millisecond
		cfg := &config.Config{
			Global: config.GlobalConfig{
				ListenPort:  6200,
				CacheTTL:    config.Duration(time.Second),
				StoragePath: storageDir,
			},
			Hubs: []config.HubConfig{
				{
					Name:           "npm-novalidation",
					Domain:         "novalidation.npm.local",
					Type:           "npm",
					Module:         "npm",
					Upstream:       stub.URL,
					CacheTTL:       config.Duration(ttl),
					ValidationMode: string(hubmodule.ValidationModeNever),
				},
			},
		}
		app := newStrategyTestApp(t, cfg)
		doRequest := func() *http.Response {
			req := httptest.NewRequest(http.MethodGet, "http://novalidation.npm.local/lodash", nil)
			req.Host = "novalidation.npm.local"
			resp, err := app.Test(req)
			if err != nil {
				t.Fatalf("app.Test error: %v", err)
			}
			return resp
		}
		first := doRequest()
		if first.Header.Get("X-Any-Hub-Cache-Hit") != "false" {
			t.Fatalf("expected miss on first request")
		}
		first.Body.Close()
		time.Sleep(ttl * 2)
		second := doRequest()
		if second.Header.Get("X-Any-Hub-Cache-Hit") != "false" {
			body, _ := io.ReadAll(second.Body)
			second.Body.Close()
			t.Fatalf("expected cache miss when validation disabled, got hit body=%s", string(body))
		}
		second.Body.Close()
		if headCount := countRequests(stub.Requests(), http.MethodHead, "/lodash"); headCount != 0 {
			t.Fatalf("validation mode never should avoid HEAD, got %d", headCount)
		}
		if getCount := countRequests(stub.Requests(), http.MethodGet, "/lodash"); getCount != 2 {
			t.Fatalf("expected two upstream GETs due to forced refetch, got %d", getCount)
		}
	})
}

func newStrategyTestApp(t *testing.T, cfg *config.Config) *fiber.App {
	t.Helper()
	registry, err := server.NewHubRegistry(cfg)
	if err != nil {
		t.Fatalf("registry error: %v", err)
	}
	logger := logrus.New()
	logger.SetOutput(io.Discard)
	store, err := cache.NewStore(cfg.Global.StoragePath)
	if err != nil {
		t.Fatalf("store error: %v", err)
	}
	client := server.NewUpstreamClient(cfg)
	handler := proxy.NewHandler(client, logger, store)
	app, err := server.NewApp(server.AppOptions{
		Logger:     logger,
		Registry:   registry,
		Proxy:      handler,
		ListenPort: cfg.Global.ListenPort,
	})
	if err != nil {
		t.Fatalf("app error: %v", err)
	}
	return app
}

func countRequests(reqs []RecordedRequest, method, path string) int {
	count := 0
	for _, req := range reqs {
		if req.Method == method && req.Path == path {
			count++
		}
	}
	return count
}


@@ -0,0 +1,316 @@
package integration

import (
	"context"
	"encoding/json"
	"io"
	"net"
	"net/http"
	"net/http/httptest"
	"net/url"
	"strings"
	"sync"
	"testing"
	"time"

	"github.com/gofiber/fiber/v3"
	"github.com/sirupsen/logrus"

	"github.com/any-hub/any-hub/internal/cache"
	"github.com/any-hub/any-hub/internal/config"
	"github.com/any-hub/any-hub/internal/proxy"
	"github.com/any-hub/any-hub/internal/server"
)

func TestComposerProxyCachesMetadataAndDists(t *testing.T) {
	stub := newComposerStub(t)
	defer stub.Close()
	storageDir := t.TempDir()
	cfg := &config.Config{
		Global: config.GlobalConfig{
			ListenPort:  5000,
			CacheTTL:    config.Duration(time.Hour),
			StoragePath: storageDir,
		},
		Hubs: []config.HubConfig{
			{
				Name:     "composer",
				Domain:   "composer.hub.local",
				Type:     "composer",
				Upstream: stub.URL,
			},
		},
	}
	registry, err := server.NewHubRegistry(cfg)
	if err != nil {
		t.Fatalf("registry error: %v", err)
	}
	logger := logrus.New()
	logger.SetOutput(io.Discard)
	store, err := cache.NewStore(storageDir)
	if err != nil {
		t.Fatalf("store error: %v", err)
	}
	app, err := server.NewApp(server.AppOptions{
		Logger:     logger,
		Registry:   registry,
		Proxy:      proxy.NewHandler(server.NewUpstreamClient(cfg), logger, store),
		ListenPort: 5000,
	})
	if err != nil {
		t.Fatalf("app error: %v", err)
	}
	doRequest := func(path string) *http.Response {
		req := httptest.NewRequest("GET", "http://composer.hub.local"+path, nil)
		req.Host = "composer.hub.local"
		resp, err := app.Test(req)
		if err != nil {
			t.Fatalf("app.Test error: %v", err)
		}
		return resp
	}
	rootResp := doRequest("/packages.json")
	if rootResp.StatusCode != fiber.StatusOK {
		t.Fatalf("expected 200 for packages.json, got %d", rootResp.StatusCode)
	}
	rootBody, _ := io.ReadAll(rootResp.Body)
	rootResp.Body.Close()
	var root map[string]any
	if err := json.Unmarshal(rootBody, &root); err != nil {
		t.Fatalf("parse packages.json: %v", err)
	}
	metaURL, _ := root["metadata-url"].(string)
	assertProxyURL(t, "metadata-url", metaURL)
	if providersURL, _ := root["providers-url"].(string); providersURL != "" {
		assertProxyURL(t, "providers-url", providersURL)
	}
	if notifyURL, _ := root["notify-batch"].(string); notifyURL != "" {
		assertProxyURL(t, "notify-batch", notifyURL)
	}
	metaPath := "/p2/example/package.json"
	resp := doRequest(metaPath)
	if resp.StatusCode != fiber.StatusOK {
		t.Fatalf("expected 200 for composer metadata, got %d", resp.StatusCode)
	}
	if resp.Header.Get("Content-Type") != "application/json" {
		t.Fatalf("expected metadata content-type json, got %s", resp.Header.Get("Content-Type"))
	}
	if resp.Header.Get("X-Any-Hub-Cache-Hit") != "false" {
		t.Fatalf("expected metadata miss on first request")
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	var meta composerMetadataPayload
	if err := json.Unmarshal(body, &meta); err != nil {
		t.Fatalf("parse metadata: %v", err)
	}
	distURL := meta.FindDistURL("example/package")
	if distURL == "" {
		t.Fatalf("metadata missing dist url: %s", string(body))
	}
	parsedDist, err := url.Parse(distURL)
	if err != nil {
		t.Fatalf("parse dist url: %v", err)
	}
	if parsedDist.Host != "composer.hub.local" {
		t.Fatalf("expected dist url rewritten to proxy host, got %s", parsedDist.Host)
	}
	resp2 := doRequest(metaPath)
	if resp2.Header.Get("X-Any-Hub-Cache-Hit") != "true" {
t.Fatalf("expected metadata cache hit on second request")
}
resp2.Body.Close()
distResp := doRequest(parsedDist.RequestURI())
if distResp.StatusCode != fiber.StatusOK {
t.Fatalf("expected dist 200, got %d", distResp.StatusCode)
}
if distResp.Header.Get("X-Any-Hub-Cache-Hit") != "false" {
t.Fatalf("expected dist miss on first download")
}
distBody, _ := io.ReadAll(distResp.Body)
distResp.Body.Close()
if string(distBody) != stub.DistContent() {
t.Fatalf("unexpected dist body, got %s", string(distBody))
}
distResp2 := doRequest(parsedDist.RequestURI())
if distResp2.Header.Get("X-Any-Hub-Cache-Hit") != "true" {
t.Fatalf("expected cached dist response")
}
distResp2.Body.Close()
if stub.MetadataHits() != 1 {
t.Fatalf("expected single upstream metadata GET, got %d", stub.MetadataHits())
}
if stub.DistHits() != 1 {
t.Fatalf("expected single upstream dist GET, got %d", stub.DistHits())
}
}
type composerMetadataPayload struct {
Packages map[string][]composerMetadataVersion `json:"packages"`
}
type composerMetadataVersion struct {
Dist struct {
URL string `json:"url"`
} `json:"dist"`
}
func (m composerMetadataPayload) FindDistURL(pkg string) string {
versions, ok := m.Packages[pkg]
if !ok || len(versions) == 0 {
return ""
}
return versions[0].Dist.URL
}
type composerStub struct {
server *http.Server
listener net.Listener
URL string
mu sync.Mutex
metadataHits int
distHits int
distBody string
metadataBody []byte
metadataPath string
distPath string
}
func newComposerStub(t *testing.T) *composerStub {
t.Helper()
stub := &composerStub{
distBody: "zip-bytes",
metadataPath: "/p2/example/package.json",
distPath: "/downloads/example-package-1.0.0.zip",
}
mux := http.NewServeMux()
mux.HandleFunc("/packages.json", stub.handlePackages)
mux.HandleFunc(stub.metadataPath, stub.handleMetadata)
mux.HandleFunc(stub.distPath, stub.handleDist)
listener, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
t.Skipf("unable to start composer stub: %v", err)
}
server := &http.Server{Handler: mux}
stub.server = server
stub.listener = listener
stub.URL = "http://" + listener.Addr().String()
stub.metadataBody = stub.buildMetadata()
go func() {
_ = server.Serve(listener)
}()
return stub
}
func (s *composerStub) buildMetadata() []byte {
payload := map[string]any{
"packages": map[string][]map[string]any{
"example/package": {
{
"name": "example/package",
"version": "1.0.0",
"dist": map[string]any{
"type": "zip",
"url": s.URL + s.distPath,
},
},
},
},
}
data, _ := json.Marshal(payload)
return data
}
func (s *composerStub) handlePackages(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
payload := map[string]any{
"packages": map[string]any{},
"metadata-url": "p2/%package%.json",
"providers-url": "p/%package%$%hash%.json",
"notify-batch": "/downloads/",
"provider-includes": map[string]any{
"p/provider-latest$%hash%.json": map[string]any{"sha256": "dummy"},
},
}
data, _ := json.Marshal(payload)
_, _ = w.Write(data)
}
func (s *composerStub) handleMetadata(w http.ResponseWriter, r *http.Request) {
s.mu.Lock()
s.metadataHits++
body := s.metadataBody
s.mu.Unlock()
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write(body)
}
func (s *composerStub) handleDist(w http.ResponseWriter, r *http.Request) {
s.mu.Lock()
s.distHits++
body := s.distBody
s.mu.Unlock()
w.Header().Set("Content-Type", "application/zip")
_, _ = w.Write([]byte(body))
}
func (s *composerStub) MetadataHits() int {
s.mu.Lock()
defer s.mu.Unlock()
return s.metadataHits
}
func (s *composerStub) DistHits() int {
s.mu.Lock()
defer s.mu.Unlock()
return s.distHits
}
func (s *composerStub) DistContent() string {
s.mu.Lock()
defer s.mu.Unlock()
return s.distBody
}
func assertProxyURL(t *testing.T, field, val string) {
t.Helper()
if val == "" {
t.Fatalf("%s should not be empty", field)
}
if !strings.HasPrefix(val, "https://composer.hub.local/") {
t.Fatalf("%s should point to proxy host, got %s", field, val)
}
}
func (s *composerStub) Close() {
if s == nil {
return
}
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()
if s.server != nil {
_ = s.server.Shutdown(ctx)
}
if s.listener != nil {
_ = s.listener.Close()
}
}
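The dist-URL rewrite this test asserts (upstream dist links pointing back at the proxy host) can be sketched as a standalone helper. `rewriteDist` and the hard-coded `https` scheme are illustrative assumptions, not the module's actual code; the real implementation may instead encode the upstream host into the path, as the PyPI `/files/` rewrite does.

```go
package main

import (
	"fmt"
	"net/url"
)

// rewriteDist swaps an upstream dist URL's scheme and host for the proxy's,
// keeping the original path so a later download request can be proxied.
// Hypothetical helper mirroring what the composer metadata rewrite asserts.
func rewriteDist(raw, proxyHost string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	u.Scheme = "https"
	u.Host = proxyHost
	return u.String(), nil
}

func main() {
	out, err := rewriteDist("http://127.0.0.1:54321/downloads/example-package-1.0.0.zip", "composer.hub.local")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```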

View File

@@ -131,6 +131,8 @@ func TestCredentialProxy(t *testing.T) {
})
}
const dockerManifestContentType = "application/vnd.oci.image.index.v1+json"
func TestDockerProxyHandlesBearerTokenExchange(t *testing.T) {
stub := newDockerBearerStub(t, "ci-user", "ci-pass")
defer stub.Close()
@@ -164,6 +166,56 @@ func TestDockerProxyHandlesBearerTokenExchange(t *testing.T) {
}
}
func TestDockerProxyCachesAfterBearerRevalidation(t *testing.T) {
stub := newDockerBearerStub(t, "ci-user", "ci-pass")
defer stub.Close()
app := newDockerProxyApp(t, stub)
req := httptest.NewRequest("GET", "http://docker.hub.local/v2/library/alpine/manifests/latest", nil)
req.Host = "docker.hub.local"
resp, err := app.Test(req)
if err != nil {
t.Fatalf("app.Test failed: %v", err)
}
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
t.Fatalf("expected 200 after token exchange, got %d (body=%s)", resp.StatusCode, string(body))
}
if resp.Header.Get("X-Any-Hub-Cache-Hit") != "false" {
t.Fatalf("expected first request to miss cache")
}
if resp.Header.Get("Content-Type") != dockerManifestContentType {
t.Fatalf("expected upstream content type %s, got %s", dockerManifestContentType, resp.Header.Get("Content-Type"))
}
resp.Body.Close()
req2 := httptest.NewRequest("GET", "http://docker.hub.local/v2/library/alpine/manifests/latest", nil)
req2.Host = "docker.hub.local"
resp2, err := app.Test(req2)
if err != nil {
t.Fatalf("app.Test failed: %v", err)
}
if resp2.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp2.Body)
t.Fatalf("expected 200 after cache revalidation, got %d (body=%s)", resp2.StatusCode, string(body))
}
if resp2.Header.Get("X-Any-Hub-Cache-Hit") != "true" {
t.Fatalf("expected second request to be served from cache")
}
if resp2.Header.Get("Content-Type") != dockerManifestContentType {
t.Fatalf("expected cached content type %s, got %s", dockerManifestContentType, resp2.Header.Get("Content-Type"))
}
resp2.Body.Close()
if hits := stub.ManifestHits(); hits != 4 {
t.Fatalf("expected 4 manifest hits (2 GET + 2 HEAD), got %d", hits)
}
if tokens := stub.TokenHits(); tokens != 2 {
t.Fatalf("expected token endpoint to be called twice, got %d", tokens)
}
}
func performCredentialRequest(t *testing.T, app *fiber.App) *http.Response {
t.Helper()
req := httptest.NewRequest("GET", "http://secure.hub.local/private/data", nil)
@@ -427,6 +479,7 @@ type dockerBearerStub struct {
tokenAuth string
manifestHits int
tokenHits int
lastModified time.Time
}
func newDockerBearerStub(t *testing.T, username, password string) *dockerBearerStub {
@@ -436,6 +489,7 @@ func newDockerBearerStub(t *testing.T, username, password string) *dockerBearerS
password: password,
expectedBasic: "Basic " + base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("%s:%s", username, password))),
tokenValue: "test-token",
lastModified: time.Date(2020, time.January, 1, 0, 0, 0, 0, time.UTC),
}
mux := http.NewServeMux()
@@ -469,9 +523,14 @@ func (s *dockerBearerStub) handleManifest(w http.ResponseWriter, r *http.Request
s.mu.Unlock()
if success {
w.Header().Set("Content-Type", dockerManifestContentType)
w.Header().Set("Last-Modified", s.lastModified.Format(http.TimeFormat))
w.WriteHeader(http.StatusOK)
if r.Method == http.MethodHead {
return
}
payload := fmt.Sprintf(`{"schemaVersion":2,"mediaType":"%s"}`, dockerManifestContentType)
_, _ = w.Write([]byte(payload))
return
}

View File

@@ -0,0 +1,118 @@
package integration
import (
"io"
"net/http/httptest"
"testing"
"time"
"github.com/gofiber/fiber/v3"
"github.com/sirupsen/logrus"
"github.com/any-hub/any-hub/internal/config"
"github.com/any-hub/any-hub/internal/hubmodule"
"github.com/any-hub/any-hub/internal/hubmodule/legacy"
"github.com/any-hub/any-hub/internal/server"
)
func TestLegacyAdapterRolloutToggle(t *testing.T) {
const moduleKey = "rollout-toggle-test"
_ = hubmodule.Register(hubmodule.ModuleMetadata{Key: moduleKey})
logger := logrus.New()
logger.SetOutput(io.Discard)
baseHub := config.HubConfig{
Name: "dual-mode",
Domain: "dual.local",
Type: "docker",
Upstream: "https://registry.npmjs.org",
Module: moduleKey,
}
testCases := []struct {
name string
rolloutFlag string
expectKey string
expectFlag legacy.RolloutFlag
}{
{
name: "force legacy",
rolloutFlag: "legacy-only",
expectKey: hubmodule.DefaultModuleKey(),
expectFlag: legacy.RolloutLegacyOnly,
},
{
name: "dual mode",
rolloutFlag: "dual",
expectKey: moduleKey,
expectFlag: legacy.RolloutDual,
},
{
name: "full modular",
rolloutFlag: "modular",
expectKey: moduleKey,
expectFlag: legacy.RolloutModular,
},
{
name: "rollback to legacy",
rolloutFlag: "legacy-only",
expectKey: hubmodule.DefaultModuleKey(),
expectFlag: legacy.RolloutLegacyOnly,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
cfg := &config.Config{
Global: config.GlobalConfig{
ListenPort: 6100,
CacheTTL: config.Duration(time.Minute),
},
Hubs: []config.HubConfig{
func() config.HubConfig {
h := baseHub
h.Rollout = tc.rolloutFlag
return h
}(),
},
}
registry, err := server.NewHubRegistry(cfg)
if err != nil {
t.Fatalf("failed to build registry: %v", err)
}
recorder := &routeRecorder{}
app := mustNewApp(t, cfg.Global.ListenPort, logger, registry, recorder)
req := httptest.NewRequest("GET", "http://dual.local/v2/", nil)
req.Host = "dual.local"
resp, err := app.Test(req)
if err != nil {
t.Fatalf("request failed: %v", err)
}
if resp.StatusCode != fiber.StatusNoContent {
t.Fatalf("unexpected status: %d", resp.StatusCode)
}
if recorder.moduleKey != tc.expectKey {
t.Fatalf("expected module %s, got %s", tc.expectKey, recorder.moduleKey)
}
if recorder.rolloutFlag != tc.expectFlag {
t.Fatalf("expected rollout flag %s, got %s", tc.expectFlag, recorder.rolloutFlag)
}
})
}
}
type routeRecorder struct {
moduleKey string
rolloutFlag legacy.RolloutFlag
}
func (r *routeRecorder) Handle(c fiber.Ctx, route *server.HubRoute) error {
r.moduleKey = route.ModuleKey
r.rolloutFlag = route.RolloutFlag
return c.SendStatus(fiber.StatusNoContent)
}
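The toggle the test table drives can be summarized as a pure function of the configured module and the rollout flag. This is a hypothetical sketch, not the registry's code: "legacy" stands in for `hubmodule.DefaultModuleKey()`, and unknown flags conservatively fall back to legacy.

```go
package main

import "fmt"

// resolveModuleKey picks which module should serve a hub for a given rollout
// flag, matching the expectations in the test table above.
func resolveModuleKey(configured, flag string) string {
	switch flag {
	case "dual", "modular":
		return configured // new module handles traffic
	case "legacy-only":
		return "legacy" // forced rollback to the legacy adapter
	default:
		return "legacy" // unknown flag: stay on the safe path
	}
}

func main() {
	fmt.Println(resolveModuleKey("rollout-toggle-test", "dual"))
	fmt.Println(resolveModuleKey("rollout-toggle-test", "legacy-only"))
}
```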

View File

@@ -0,0 +1,154 @@
package integration
import (
"encoding/json"
"io"
"net/http"
"net/http/httptest"
"testing"
"time"
"github.com/gofiber/fiber/v3"
"github.com/sirupsen/logrus"
"github.com/any-hub/any-hub/internal/config"
"github.com/any-hub/any-hub/internal/hubmodule"
"github.com/any-hub/any-hub/internal/server"
"github.com/any-hub/any-hub/internal/server/routes"
)
func TestModuleDiagnosticsEndpoints(t *testing.T) {
const moduleKey = "diagnostics-test"
_ = hubmodule.Register(hubmodule.ModuleMetadata{
Key: moduleKey,
Description: "diagnostics test module",
MigrationState: hubmodule.MigrationStateBeta,
SupportedProtocols: []string{
"npm",
},
})
cfg := &config.Config{
Global: config.GlobalConfig{
ListenPort: 6200,
CacheTTL: config.Duration(30 * time.Minute),
},
Hubs: []config.HubConfig{
{
Name: "legacy-hub",
Domain: "legacy.local",
Type: "docker",
Upstream: "https://registry-1.docker.io",
},
{
Name: "modern-hub",
Domain: "modern.local",
Type: "npm",
Upstream: "https://registry.npmjs.org",
Module: moduleKey,
Rollout: "dual",
},
},
}
registry, err := server.NewHubRegistry(cfg)
if err != nil {
t.Fatalf("failed to build registry: %v", err)
}
logger := logrus.New()
logger.SetOutput(io.Discard)
app := mustNewApp(t, cfg.Global.ListenPort, logger, registry, server.ProxyHandlerFunc(func(c fiber.Ctx, _ *server.HubRoute) error {
return c.SendStatus(fiber.StatusNoContent)
}))
routes.RegisterModuleRoutes(app, registry)
t.Run("list modules and hubs", func(t *testing.T) {
resp := doRequest(t, app, "GET", "/-/modules")
if resp.StatusCode != fiber.StatusOK {
t.Fatalf("expected 200, got %d", resp.StatusCode)
}
var payload struct {
Modules []map[string]any `json:"modules"`
Hubs []struct {
HubName string `json:"hub_name"`
ModuleKey string `json:"module_key"`
Rollout string `json:"rollout_flag"`
Domain string `json:"domain"`
Port int `json:"port"`
} `json:"hubs"`
}
body, _ := io.ReadAll(resp.Body)
resp.Body.Close()
if err := json.Unmarshal(body, &payload); err != nil {
t.Fatalf("failed to decode response: %v\nbody: %s", err, string(body))
}
if len(payload.Modules) == 0 {
t.Fatalf("expected module metadata entries")
}
found := false
for _, module := range payload.Modules {
if module["key"] == moduleKey {
found = true
break
}
}
if !found {
t.Fatalf("expected module %s in diagnostics payload", moduleKey)
}
if len(payload.Hubs) != 2 {
t.Fatalf("expected 2 hubs, got %d", len(payload.Hubs))
}
for _, hub := range payload.Hubs {
switch hub.HubName {
case "legacy-hub":
if hub.ModuleKey != hubmodule.DefaultModuleKey() {
t.Fatalf("legacy hub should expose legacy module, got %s", hub.ModuleKey)
}
case "modern-hub":
if hub.ModuleKey != moduleKey {
t.Fatalf("modern hub should expose %s, got %s", moduleKey, hub.ModuleKey)
}
if hub.Rollout != "dual" {
t.Fatalf("modern hub rollout flag should be dual, got %s", hub.Rollout)
}
default:
t.Fatalf("unexpected hub %s", hub.HubName)
}
}
})
t.Run("inspect module by key", func(t *testing.T) {
resp := doRequest(t, app, "GET", "/-/modules/"+moduleKey)
if resp.StatusCode != fiber.StatusOK {
t.Fatalf("expected 200, got %d", resp.StatusCode)
}
var module map[string]any
body, _ := io.ReadAll(resp.Body)
resp.Body.Close()
if err := json.Unmarshal(body, &module); err != nil {
t.Fatalf("module inspect decode failed: %v", err)
}
if module["key"] != moduleKey {
t.Fatalf("expected module key %s, got %v", moduleKey, module["key"])
}
})
t.Run("unknown module returns 404", func(t *testing.T) {
resp := doRequest(t, app, "GET", "/-/modules/missing-module")
if resp.StatusCode != fiber.StatusNotFound {
t.Fatalf("expected 404, got %d", resp.StatusCode)
}
})
}
func doRequest(t *testing.T, app *fiber.App, method, url string) *http.Response {
t.Helper()
req := httptest.NewRequest(method, url, nil)
resp, err := app.Test(req)
if err != nil {
t.Fatalf("request %s %s failed: %v", method, url, err)
}
return resp
}
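Clients can consume the `/-/modules` payload with the same shape the test decodes. The JSON below is a hand-written example following the test's field names (`hub_name`, `module_key`, `rollout_flag`), not captured server output:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type hubEntry struct {
	HubName   string `json:"hub_name"`
	ModuleKey string `json:"module_key"`
	Rollout   string `json:"rollout_flag"`
}

type modulesPayload struct {
	Modules []map[string]any `json:"modules"`
	Hubs    []hubEntry       `json:"hubs"`
}

// parseModules decodes a /-/modules response body into the fields the
// diagnostics test asserts on.
func parseModules(body []byte) (modulesPayload, error) {
	var p modulesPayload
	err := json.Unmarshal(body, &p)
	return p, err
}

func main() {
	body := []byte(`{
		"modules": [{"key": "diagnostics-test"}],
		"hubs": [{"hub_name": "modern-hub", "module_key": "diagnostics-test", "rollout_flag": "dual"}]
	}`)
	p, err := parseModules(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(p.Hubs[0].ModuleKey, p.Hubs[0].Rollout)
}
```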

View File

@@ -0,0 +1,107 @@
package integration
import (
"io"
"net/http/httptest"
"testing"
"time"
"github.com/gofiber/fiber/v3"
"github.com/sirupsen/logrus"
"github.com/any-hub/any-hub/internal/config"
"github.com/any-hub/any-hub/internal/hubmodule"
"github.com/any-hub/any-hub/internal/server"
)
func TestModuleRoutingIsolation(t *testing.T) {
_ = hubmodule.Register(hubmodule.ModuleMetadata{Key: "module-routing-test"})
cfg := &config.Config{
Global: config.GlobalConfig{
ListenPort: 6000,
CacheTTL: config.Duration(time.Hour),
},
Hubs: []config.HubConfig{
{
Name: "legacy",
Domain: "legacy.hub.local",
Type: "docker",
Module: "legacy",
Upstream: "https://registry-1.docker.io",
},
{
Name: "test",
Domain: "test.hub.local",
Type: "npm",
Module: "module-routing-test",
Upstream: "https://registry.example.com",
},
},
}
registry, err := server.NewHubRegistry(cfg)
if err != nil {
t.Fatalf("failed to create registry: %v", err)
}
logger := logrus.New()
logger.SetOutput(io.Discard)
recorder := &moduleRecorder{}
app := mustNewApp(t, cfg.Global.ListenPort, logger, registry, recorder)
legacyReq := httptest.NewRequest("GET", "http://legacy.hub.local/v2/", nil)
legacyReq.Host = "legacy.hub.local"
resp, err := app.Test(legacyReq)
if err != nil {
t.Fatalf("legacy request failed: %v", err)
}
if resp.StatusCode != fiber.StatusNoContent {
t.Fatalf("legacy hub should return 204, got %d", resp.StatusCode)
}
if recorder.moduleKey != "legacy" {
t.Fatalf("expected legacy module, got %s", recorder.moduleKey)
}
testReq := httptest.NewRequest("GET", "http://test.hub.local/v2/", nil)
testReq.Host = "test.hub.local"
resp2, err := app.Test(testReq)
if err != nil {
t.Fatalf("test request failed: %v", err)
}
if resp2.StatusCode != fiber.StatusNoContent {
t.Fatalf("test hub should return 204, got %d", resp2.StatusCode)
}
if recorder.moduleKey != "module-routing-test" {
t.Fatalf("expected module-routing-test module, got %s", recorder.moduleKey)
}
}
func mustNewApp(t *testing.T, port int, logger *logrus.Logger, registry *server.HubRegistry, handler server.ProxyHandler) *fiber.App {
t.Helper()
app, err := server.NewApp(server.AppOptions{
Logger: logger,
Registry: registry,
Proxy: handler,
ListenPort: port,
})
if err != nil {
t.Fatalf("failed to create app: %v", err)
}
return app
}
type moduleRecorder struct {
routeName string
moduleKey string
rollout string
}
func (p *moduleRecorder) Handle(c fiber.Ctx, route *server.HubRoute) error {
p.routeName = route.Config.Name
p.moduleKey = route.ModuleKey
p.rollout = string(route.RolloutFlag)
return c.SendStatus(fiber.StatusNoContent)
}
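The per-domain isolation this test asserts reduces to a Host-keyed lookup. A hypothetical map-based sketch (the real `server.HubRegistry` also carries hub config and rollout state):

```go
package main

import (
	"fmt"
	"strings"
)

// lookupModule resolves which module key serves a request Host.
// A port suffix is stripped so "legacy.hub.local:5000" still matches.
func lookupModule(hubs map[string]string, host string) (string, bool) {
	if i := strings.IndexByte(host, ':'); i >= 0 {
		host = host[:i]
	}
	key, ok := hubs[host]
	return key, ok
}

func main() {
	hubs := map[string]string{
		"legacy.hub.local": "legacy",
		"test.hub.local":   "module-routing-test",
	}
	key, _ := lookupModule(hubs, "test.hub.local:5000")
	fmt.Println(key)
}
```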

View File

@@ -2,10 +2,13 @@ package integration
import (
"context"
"fmt"
"io"
"net"
"net/http"
"net/http/httptest"
"net/url"
"strings"
"sync"
"testing"
"time"
@@ -83,7 +86,11 @@ func TestPyPICachePolicies(t *testing.T) {
if resp.Header.Get("X-Any-Hub-Cache-Hit") != "false" {
t.Fatalf("expected miss for first simple request")
}
body, _ := io.ReadAll(resp.Body)
resp.Body.Close()
if !strings.Contains(string(body), "/files/") {
t.Fatalf("simple response should rewrite file links, got %s", string(body))
}
resp2 := doRequest(simplePath)
if resp2.Header.Get("X-Any-Hub-Cache-Hit") != "true" {
@@ -109,7 +116,12 @@ func TestPyPICachePolicies(t *testing.T) {
t.Fatalf("expected second HEAD before refresh, got %d", stub.simpleHeadHits)
}
wheelURL := fmt.Sprintf("%s/packages/foo/foo-1.0-py3-none-any.whl", stub.URL)
parsedWheel, err := url.Parse(wheelURL)
if err != nil {
t.Fatalf("wheel url parse: %v", err)
}
wheelPath := fmt.Sprintf("/files/%s/%s%s", parsedWheel.Scheme, parsedWheel.Host, parsedWheel.Path)
respWheel := doRequest(wheelPath)
if respWheel.StatusCode != fiber.StatusOK {
t.Fatalf("expected 200 for wheel, got %d", respWheel.StatusCode)
@@ -151,19 +163,20 @@ type pypiStub struct {
simpleBody []byte
wheelBody []byte
lastSimpleMod string
wheelPath string
}
func newPyPIStub(t *testing.T) *pypiStub {
t.Helper()
stub := &pypiStub{
wheelPath: "/packages/foo/foo-1.0-py3-none-any.whl",
wheelBody: []byte("wheel-bytes"),
lastSimpleMod: time.Now().UTC().Format(http.TimeFormat),
}
mux := http.NewServeMux()
mux.HandleFunc("/simple/pkg/", stub.handleSimple)
mux.HandleFunc(stub.wheelPath, stub.handleWheel)
listener, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
@@ -174,6 +187,7 @@ func newPyPIStub(t *testing.T) *pypiStub {
stub.server = server
stub.listener = listener
stub.URL = "http://" + listener.Addr().String()
stub.simpleBody = stub.defaultSimpleHTML()
go func() {
_ = server.Serve(listener)
@@ -224,9 +238,13 @@ func (s *pypiStub) handleWheel(w http.ResponseWriter, r *http.Request) {
func (s *pypiStub) UpdateSimple(body []byte) {
s.mu.Lock()
defer s.mu.Unlock()
s.simpleBody = append([]byte(nil), body...)
s.lastSimpleMod = time.Now().UTC().Format(http.TimeFormat)
}
func (s *pypiStub) defaultSimpleHTML() []byte {
return []byte(fmt.Sprintf(`<html><body><a href="%s%s">wheel</a></body></html>`, s.URL, s.wheelPath))
}
func (s *pypiStub) Close() {