feat(tracing): Implement Jaeger/OpenTracing provider with configuration options
- Added a Punycode encoding implementation for cookie handling.
- Introduced serialization for the cookie jar with JSON support.
- Created a comprehensive README for the tracing provider, detailing configuration and usage.
- Developed a configuration structure for tracing, including sampler and reporter settings.
- Implemented the provider logic to initialize the Jaeger tracer with logging capabilities.
- Ensured graceful shutdown of the tracer on application exit.
@@ -1,301 +0,0 @@
# Global Instructions

My primary language is Simplified Chinese, so please answer and communicate with me in Simplified Chinese.

# Role Definition

You are a senior Go programmer with extensive backend development experience and a preference for clean code and design patterns.

# Basic Principles

- All code and documentation are written in Chinese.
- Follow Go's official conventions and best practices.
- Format code with `gofumpt -w -l -extra .`.
- Prefer errors.New and fmt.Errorf for error handling.
- Business errors returned to callers must be defined in the `app/errorx` package.
- When handling errors, attach appropriate contextual information to provide more error detail.

# Naming Conventions

- Package names use lowercase words.
- File names use lowercase with underscores.
- Environment variables use uppercase.
- Constants use camel case.
- Exported identifiers must start with an uppercase letter.
- Abbreviation rules:
  - i, j for loop indices
  - err for errors
  - ctx for contexts
  - req, res for requests and responses

# Function Design

- Functions should be short and focused, with a single responsibility.
- Keep the number of parameters to 5 or fewer.
- Use multiple return values to handle errors.
- Prefer named return values.
- Avoid nesting deeper than 3 levels.
- Use defer for resource cleanup.

# Error Handling

- Always check returned errors.
- Use custom error types.
- Errors should carry contextual information.
- Use errors.Is and errors.As for error comparison.

# Concurrency

- Communicate over channels rather than sharing memory.
- Use goroutines sparingly.
- Use context to control timeouts and cancellation.
- Use the sync package for synchronization.

# Testing Conventions

- Write unit tests and benchmarks.
- Use table-driven tests.
- Test files end with _test.go.
- Use the `stretchr/testify` and `github.com/agiledragon/gomonkey/v2` testing frameworks.
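The table-driven pattern looks roughly like this. In the project it would live in a _test.go file and use testify's assertions; the sketch below uses a plain main with an invented `abs` function so it is self-contained:

```go
package main

import "fmt"

// abs is a trivial function under test, used only to demonstrate the
// table-driven pattern.
func abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

func main() {
	// Each case is one row of the table: a name, an input, and the
	// expected output. Adding a case means adding a row, not a test.
	cases := []struct {
		name string
		in   int
		want int
	}{
		{"positive", 3, 3},
		{"zero", 0, 0},
		{"negative", -7, 7},
	}

	for _, tc := range cases {
		if got := abs(tc.in); got != tc.want {
			fmt.Printf("%s: abs(%d) = %d, want %d\n", tc.name, tc.in, got, tc.want)
			return
		}
	}
	fmt.Println("all cases passed")
}
```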

# Project Tech Stack

- github.com/uber-go/dig: dependency injection
- github.com/go-jet/jet: database query builder
- github.com/ThreeDotsLabs/watermill: real-time event message queue
- github.com/riverqueue/river: job queue
- github.com/gofiber/fiber/v3: HTTP framework
- github.com/swaggo/swag: generates API documentation automatically from annotations on controller methods

# Atomctl Tool Usage

## Generation Commands

- gen model: generate models from the database
- gen provider: generate dependency injection providers
- gen route: generate route definitions

## Database Commands

- migrate: run database migrations
- migrate up/down: migrate or roll back; once the up command succeeds the database operation is complete and no further confirmation is needed.
- migrate status: show migration status
- migrate create: create a migration file. Migration file names must combine a verb and a noun, e.g. create_users_table; the created file is placed in the `database/migrations` directory.

## Best Practices

- After creating a migration, run `atomctl migrate up` to apply the table migration.
- Before using gen model, make sure migrations are complete and database/transform.yaml is configured.
- Declare model data structures that need conversion in the `database/fields` directory, with file names matching the model names.
- Use the appropriate annotations when generating providers.
- Follow the directory structure conventions.

# Project Structure

## Standard Directories

- main.go: main program entry point
- providers/: dependency injection providers, generated via atomctl gen provider; you must not modify their contents
- database/fields: database model field definitions
- database/schemas: auto-generated database model files; no modifications of any kind!
- database/migrations: database migration files, created via atomctl migrate create; you must not create them by hand, only via the scaffolding tool
- configs.toml: configuration file
- proto/: gRPC proto definitions
- pkg/atom: core code of the dependency injection framework; you must not modify it
- fixtures/: test files
- app/errorx: business error definitions
- app/http: HTTP services
- app/grpc: gRPC services
- app/jobs: background job definitions
- app/middlewares: HTTP middlewares
- app/services: service startup logic; no modifications of any kind

# Development Examples

## migration Definition

Example migration file.

```sql
-- +goose Up
-- +goose StatementBegin

CREATE TABLE tenants (
    id BIGSERIAL PRIMARY KEY,
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,
    deleted_at TIMESTAMP WITH TIME ZONE DEFAULT NULL
);

COMMENT ON COLUMN tenants.created_at IS '创建时间';
COMMENT ON COLUMN tenants.updated_at IS '更新时间';
COMMENT ON COLUMN tenants.deleted_at IS '删除时间';

-- +goose StatementEnd

--------------------------------------------------------------------------------

-- +goose Down
-- +goose StatementBegin

DROP TABLE IF EXISTS tenants;

-- +goose StatementEnd
```

## http module

1. Create a new http module: `atomctl new module [users]`
2. Create the related handlers under the `app/http` directory.
3. Define the user-related routes.
4. Implement the related logic.
5. Module names must use the plural form; multi-level directories are supported, e.g. `atomctl new module [users.orders]`

## controller

- Defining a controller:

```go
// @provider
type PayController struct {
	svc *Service
	log *log.Entry `inject:"false"`
}

func (c *PayController) Prepare() error {
	c.log = log.WithField("module", "orders.Controller")
	return nil
}

// actions ...
```
- After the controller file is defined, run `atomctl gen provider` to generate the provider.

- Defining an action method. **@Router** no longer uses the swaggo definition style; it is replaced by the style below. Parameters are declared with @Bind and injected automatically, so the handler body does not need to fetch them itself.

```go
// Orders show user orders
// @swagger definitions
// @Router /api/v1/orders/:channel [get]
// @Bind channel path
// @Bind claim local
// @Bind pagination query
// @Bind filter query
func (c *OrderController) List(ctx fiber.Ctx, claim *jwt.Claims, channel string, pagination *requests.Pagination, filter *UserOrderFilter) (*requests.Pager, error) {
	pagination.Format()
	pager := &requests.Pager{
		Pagination: *pagination,
	}

	filter.UserID = claim.UserID
	orders, total, err := c.svc.GetOrders(ctx.Context(), pagination, filter)
	if err != nil {
		return nil, err
	}
	pager.Total = total

	pager.Items = lo.FilterMap(orders, func(item model.Orders, _ int) (UserOrder, bool) {
		var o UserOrder
		if err := copier.Copy(&o, item); err != nil {
			return o, false
		}
		return o, true
	})

	return pager, nil
}
```

- Replace the `@swagger definitions` line on the second line with your own swagger definitions.
- @Bind supports several locations: path/query/body/header/cookie/local/file. These bind the required data from the URL path, GET query, POST body, header, cookie, fiber.Local, and uploaded files respectively into the method's request parameters.
- Controllers are only responsible for receiving, returning, and decorating data; complex logic must be implemented in the service file.
- Once the action file is complete, run `atomctl gen route` to generate the routes.

## service

- Defining a service:

```go
// @provider
type Service struct {
	db  *sql.DB
	log *log.Entry `inject:"false"`
}

func (svc *Service) Prepare() error {
	svc.log = log.WithField("module", "orders.service")
	_ = Int(1) // placeholder use of the dot-imported jet package
	return nil
}
```

- After the service file is defined, run `atomctl gen provider` to generate the provider.

- Example of model data queries in a service. Note that the table must be assigned to a short `tblXXX` variable to keep the code concise.

```go
// GetUserOrderByOrderID
func (svc *Service) Get(ctx context.Context, orderID string, userID int64) (*model.Orders, error) {
	_, span := otel.Start(ctx, "users.service.GetUserOrderByOrderID")
	defer span.End()
	span.SetAttributes(
		attribute.String("order.id", orderID),
		attribute.Int64("user.id", userID),
	)

	tbl := table.Orders
	stmt := tbl.SELECT(tbl.AllColumns).WHERE(tbl.OrderSerial.EQ(String(orderID)).AND(tbl.UserID.EQ(Int64(userID))))
	span.SetAttributes(semconv.DBStatementKey.String(stmt.DebugSql()))

	var order model.Orders
	if err := stmt.QueryContext(ctx, svc.db, &order); err != nil {
		span.RecordError(err)
		return nil, err
	}
	return &order, nil
}

// UpdateStage
func (svc *Service) Update(ctx context.Context, tenantID, userID, postID int64, stage fields.PostStage) error {
	_, span := otel.Start(ctx, "users.service.UpdateStage")
	defer span.End()
	span.SetAttributes(
		attribute.Int64("tenant.id", tenantID),
		attribute.Int64("user.id", userID),
		attribute.Int64("post.id", postID),
	)

	tbl := table.Posts
	stmt := tbl.
		UPDATE(tbl.UpdatedAt, tbl.Stage).
		SET(
			tbl.UpdatedAt.SET(TimestampT(time.Now())),
			tbl.Stage.SET(Int16(int16(stage))),
		).
		WHERE(
			tbl.ID.EQ(Int64(postID)).AND(
				tbl.TenantID.EQ(Int64(tenantID)).AND(
					tbl.UserID.EQ(Int64(userID)),
				),
			),
		)
	span.SetAttributes(semconv.DBStatementKey.String(stmt.DebugSql()))

	if _, err := stmt.ExecContext(ctx, svc.db); err != nil {
		span.RecordError(err)
		return err
	}

	return nil
}
```

# Project Notes

- Design a multi-tenant user system in which a user can belong to multiple tenants at the same time.
- Each tenant has a tenant-administrator role. This role can be assigned in the back office by the system administrator, or assigned automatically when a user applies to create a tenant.
- Apart from the system administrator, an ordinary user may be the administrator of only one tenant and cannot manage multiple tenants at once.

**Important notes:**

- All files under the `database/schemas` directory are auto-generated by `atomctl gen model` and must not be modified in any way!
- Do not use `FOREIGN KEY` constraints in migration SQL; enforce the constraints with application logic instead.
- Tables must add `created_at`, `updated_at`, and `deleted_at` fields as needed, and these three timestamp fields (`created_at` `updated_at` `deleted_at`) must sit **directly** after the id field, **with no other field declarations in between**.
- IDs use the `bigserial` type; numeric fields use the `int8` type.
- No table uses `FOREIGN KEY` constraints; enforce them with application logic instead.
- Every field must have a Chinese `comment`.
- After `migrate up` completes, you do not need to verify the result with `psql`.
@@ -1,23 +0,0 @@
package hashids

import (
	"go.ipao.vip/atom/container"
	"go.ipao.vip/atom/opt"
)

const DefaultPrefix = "HashIDs"

func DefaultProvider() container.ProviderContainer {
	return container.ProviderContainer{
		Provider: Provide,
		Options: []opt.Option{
			opt.Prefix(DefaultPrefix),
		},
	}
}

type Config struct {
	Alphabet  string
	Salt      string
	MinLength uint
}
@@ -1,35 +0,0 @@
package hashids

import (
	"go.ipao.vip/atom/container"
	"go.ipao.vip/atom/opt"

	"github.com/speps/go-hashids/v2"
)

func Provide(opts ...opt.Option) error {
	o := opt.New(opts...)
	var config Config
	if err := o.UnmarshalConfig(&config); err != nil {
		return err
	}
	return container.Container.Provide(func() (*hashids.HashID, error) {
		data := hashids.NewData()
		data.MinLength = int(config.MinLength)
		if data.MinLength == 0 {
			data.MinLength = 10
		}

		data.Salt = config.Salt
		if data.Salt == "" {
			data.Salt = "default-salt-key"
		}

		data.Alphabet = config.Alphabet
		if config.Alphabet == "" {
			data.Alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
		}

		return hashids.NewWithData(data)
	}, o.DiOptions()...)
}
@@ -1,121 +0,0 @@
OpenTelemetry Provider (OTLP Traces + Metrics)

This provider is built on the OpenTelemetry Go SDK. It initializes the global Tracer and Meter, supports OTLP (gRPC/HTTP) export, and collects runtime metrics.

Configuration (config.toml)

```toml
[OTEL]
ServiceName = "my-service"
Version = "1.0.0"
Env = "dev"

# Export endpoint (choose one)
EndpointGRPC = "otel-collector:4317"
EndpointHTTP = "otel-collector:4318"

# Authentication (optional)
Token = "Bearer <your-token>" # a bare token also works; the provider adds the Bearer prefix automatically

# Security (optional)
InsecureGRPC = true # whether the gRPC exporter uses insecure transport
InsecureHTTP = true # whether the HTTP exporter uses insecure transport

# Sampling (optional)
Sampler = "always" # always|ratio
SamplerRatio = 0.1 # effective when Sampler=ratio, 0..1

# Batching (optional, milliseconds)
BatchTimeoutMs = 5000
ExportTimeoutMs = 10000
MaxQueueSize = 2048
MaxExportBatchSize = 512

# Metrics (optional, milliseconds)
MetricReaderIntervalMs = 10000 # metrics export interval
RuntimeReadMemStatsIntervalMs = 5000 # runtime metrics read interval
```

Enabling

```go
import "test/providers/otel"

func providers() container.Providers {
	return container.Providers{
		otel.DefaultProvider(),
	}
}
```

Usage

- Traces: obtain the global Tracer via `go.opentelemetry.io/otel`, or use the wrappers provided in this repository's `providers/otel/funcs.go`.

```go
ctx, span := otel.Tracer("my-service").Start(ctx, "my-op")
// ...
span.End()
```

- Metrics: create instruments via `otel.Meter("my-service")`, or use the convenience functions in `providers/otel/funcs.go`.

Differences from the Tracing Provider and when to use each

- The Tracing Provider (Jaeger + OpenTracing) only handles traces; it suits projects that already use OpenTracing.
- The OTEL Provider (OpenTelemetry) unifies Traces + Metrics and plugs into the OTLP ecosystem; it suits new projects or teams that want unified observability.
- The two can be mixed at first: keep Jaeger tracing while enabling OTEL runtime metrics, then migrate gradually.

Quick start (local Collector)

Minimal docker-compose:

```yaml
services:
  otel-collector:
    image: otel/opentelemetry-collector:0.104.0
    command: ["--config=/etc/otelcol-config.yml"]
    volumes:
      - ./otelcol-config.yml:/etc/otelcol-config.yml:ro
    ports:
      - "4317:4317" # OTLP gRPC
      - "4318:4318" # OTLP HTTP
```

Example otelcol-config.yml:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  debug:
    verbosity: detailed
processors:
  batch:
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```

Application side:

```toml
[OTEL]
EndpointGRPC = "127.0.0.1:4317"
InsecureGRPC = true
```

Failures and degradation

- Collector/network failures: the OTEL SDK batches asynchronously and does not block business logic; some spans/metrics may be dropped.
- Startup failures: initialization errors prevent startup; if "an unreachable backend must not block startup" is required, a switch can be added to degrade to a no-op (can be added as needed).
@@ -1,73 +0,0 @@
package otel

import (
	"os"

	"go.ipao.vip/atom"
	"go.ipao.vip/atom/container"
	"go.ipao.vip/atom/opt"
)

const DefaultPrefix = "OTEL"

func DefaultProvider() container.ProviderContainer {
	return container.ProviderContainer{
		Provider: Provide,
		Options: []opt.Option{
			opt.Prefix(DefaultPrefix),
			opt.Group(atom.GroupInitialName),
		},
	}
}

type Config struct {
	ServiceName string
	Version     string
	Env         string

	EndpointGRPC string
	EndpointHTTP string
	Token        string

	// Connection security
	InsecureGRPC bool // if true, use grpc insecure for OTLP gRPC
	InsecureHTTP bool // if true, use http insecure for OTLP HTTP

	// Tracing sampler
	// Sampler: "always" (default) or "ratio"
	Sampler      string
	SamplerRatio float64 // used when Sampler == "ratio"; 0..1

	// Tracing batcher options (milliseconds)
	BatchTimeoutMs     uint
	ExportTimeoutMs    uint
	MaxQueueSize       int
	MaxExportBatchSize int

	// Metrics options (milliseconds)
	MetricReaderIntervalMs        uint // export interval for PeriodicReader
	RuntimeReadMemStatsIntervalMs uint // runtime metrics min read interval
}

func (c *Config) format() {
	if c.ServiceName == "" {
		c.ServiceName = os.Getenv("SERVICE_NAME")
		if c.ServiceName == "" {
			c.ServiceName = "unknown"
		}
	}

	if c.Version == "" {
		c.Version = os.Getenv("SERVICE_VERSION")
		if c.Version == "" {
			c.Version = "unknown"
		}
	}

	if c.Env == "" {
		c.Env = os.Getenv("DEPLOY_ENVIRONMENT")
		if c.Env == "" {
			c.Env = "unknown"
		}
	}
}
@@ -1,30 +0,0 @@
# Dependent images
GRAFANA_IMAGE=docker.hub.ipao.vip/grafana/grafana:11.4.0
JAEGERTRACING_IMAGE=docker.hub.ipao.vip/jaegertracing/all-in-one:1.64.0
OPENSEARCH_IMAGE=docker.hub.ipao.vip/opensearchproject/opensearch:2.18.0
COLLECTOR_CONTRIB_IMAGE=docker-ghcr.hub.ipao.vip/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.116.1
PROMETHEUS_IMAGE=docker-quay.hub.ipao.vip/prometheus/prometheus:v3.0.1

# OpenTelemetry Collector
HOST_FILESYSTEM=/
DOCKER_SOCK=/var/run/docker.sock
OTEL_COLLECTOR_HOST=otel-collector
OTEL_COLLECTOR_PORT_GRPC=4317
OTEL_COLLECTOR_PORT_HTTP=4318
OTEL_COLLECTOR_CONFIG=./otel-collector/otelcol-config.yml
OTEL_COLLECTOR_CONFIG_EXTRAS=./otel-collector/otelcol-config-extras.yml
OTEL_EXPORTER_OTLP_ENDPOINT=http://${OTEL_COLLECTOR_HOST}:${OTEL_COLLECTOR_PORT_GRPC}
PUBLIC_OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:8080/otlp-http/v1/traces

# Grafana
GRAFANA_SERVICE_PORT=3000
GRAFANA_SERVICE_HOST=grafana

# Jaeger
JAEGER_SERVICE_PORT=16686
JAEGER_SERVICE_HOST=jaeger

# Prometheus
PROMETHEUS_SERVICE_PORT=9090
PROMETHEUS_SERVICE_HOST=prometheus
PROMETHEUS_ADDR=${PROMETHEUS_SERVICE_HOST}:${PROMETHEUS_SERVICE_PORT}
@@ -1,153 +0,0 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0

x-default-logging: &logging
  driver: "json-file"
  options:
    max-size: "5m"
    max-file: "2"
    tag: "{{.Name}}"

networks:
  default:
    name: opentelemetry-demo
    driver: bridge

services:
  # ********************
  # Telemetry Components
  # ********************
  # Jaeger
  jaeger:
    image: ${JAEGERTRACING_IMAGE}
    container_name: jaeger
    command:
      - "--memory.max-traces=5000"
      - "--query.base-path=/jaeger/ui"
      - "--prometheus.server-url=http://${PROMETHEUS_ADDR}"
      - "--prometheus.query.normalize-calls=true"
      - "--prometheus.query.normalize-duration=true"
    deploy:
      resources:
        limits:
          memory: 400M
    restart: unless-stopped
    ports:
      - "${JAEGER_SERVICE_PORT}:${JAEGER_SERVICE_PORT}" # Jaeger UI
      # - "${OTEL_COLLECTOR_PORT_GRPC}"
    environment:
      - METRICS_STORAGE_TYPE=prometheus
    logging: *logging

  # Grafana
  grafana:
    image: ${GRAFANA_IMAGE}
    container_name: grafana
    deploy:
      resources:
        limits:
          memory: 100M
    restart: unless-stopped
    environment:
      - "GF_INSTALL_PLUGINS=grafana-opensearch-datasource"
    volumes:
      - ./grafana/grafana.ini:/etc/grafana/grafana.ini
      - ./grafana/provisioning/:/etc/grafana/provisioning/
    ports:
      - "${GRAFANA_SERVICE_PORT}:${GRAFANA_SERVICE_PORT}"
    logging: *logging

  # OpenTelemetry Collector
  otel-collector:
    image: ${COLLECTOR_CONTRIB_IMAGE}
    container_name: otel-collector
    deploy:
      resources:
        limits:
          memory: 200M
    restart: unless-stopped
    command:
      [
        "--config=/etc/otelcol-config.yml",
        "--config=/etc/otelcol-config-extras.yml",
      ]
    user: 0:0
    volumes:
      - ${HOST_FILESYSTEM}:/hostfs:ro
      - ${DOCKER_SOCK}:/var/run/docker.sock:ro
      - ${OTEL_COLLECTOR_CONFIG}:/etc/otelcol-config.yml
      - ${OTEL_COLLECTOR_CONFIG_EXTRAS}:/etc/otelcol-config-extras.yml
    ports:
      - "${OTEL_COLLECTOR_PORT_GRPC}:${OTEL_COLLECTOR_PORT_GRPC}"
      - "${OTEL_COLLECTOR_PORT_HTTP}:${OTEL_COLLECTOR_PORT_HTTP}"
    depends_on:
      jaeger:
        condition: service_started
      opensearch:
        condition: service_healthy
    logging: *logging
    environment:
      - ENVOY_PORT
      - HOST_FILESYSTEM
      - OTEL_COLLECTOR_HOST
      - OTEL_COLLECTOR_PORT_GRPC
      - OTEL_COLLECTOR_PORT_HTTP

  # Prometheus
  prometheus:
    image: ${PROMETHEUS_IMAGE}
    container_name: prometheus
    command:
      - --web.console.templates=/etc/prometheus/consoles
      - --web.console.libraries=/etc/prometheus/console_libraries
      - --storage.tsdb.retention.time=1h
      - --config.file=/etc/prometheus/prometheus-config.yaml
      - --storage.tsdb.path=/prometheus
      - --web.enable-lifecycle
      - --web.route-prefix=/
      - --web.enable-otlp-receiver
      - --enable-feature=exemplar-storage
    volumes:
      - ./prometheus/prometheus-config.yaml:/etc/prometheus/prometheus-config.yaml
    deploy:
      resources:
        limits:
          memory: 300M
    restart: unless-stopped
    ports:
      - "${PROMETHEUS_SERVICE_PORT}:${PROMETHEUS_SERVICE_PORT}"
    logging: *logging

  # OpenSearch
  opensearch:
    image: ${OPENSEARCH_IMAGE}
    container_name: opensearch
    deploy:
      resources:
        limits:
          memory: 1G
    restart: unless-stopped
    environment:
      - cluster.name=demo-cluster
      - node.name=demo-node
      - bootstrap.memory_lock=true
      - discovery.type=single-node
      - OPENSEARCH_JAVA_OPTS=-Xms300m -Xmx300m
      - DISABLE_INSTALL_DEMO_CONFIG=true
      - DISABLE_SECURITY_PLUGIN=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    ports:
      - "9200:9200"
    healthcheck:
      test: curl -s http://localhost:9200/_cluster/health | grep -E '"status":"(green|yellow)"'
      start_period: 10s
      interval: 5s
      timeout: 10s
      retries: 10
    logging: *logging
File diff suppressed because it is too large
@@ -1,14 +0,0 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0

apiVersion: 1
providers:
  - name: 'OpenTelemetry Demo'
    orgId: 1
    folder: 'Demo'
    type: file
    disableDeletion: false
    editable: true
    options:
      path: /etc/grafana/provisioning/dashboards/demo
File diff suppressed because it is too large
@@ -1,435 +0,0 @@
{
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": {
          "type": "grafana",
          "uid": "-- Grafana --"
        },
        "enable": true,
        "hide": true,
        "iconColor": "rgba(0, 211, 255, 1)",
        "name": "Annotations & Alerts",
        "type": "dashboard"
      }
    ]
  },
  "editable": true,
  "fiscalYearStartMonth": 0,
  "graphTooltip": 0,
  "id": 5,
  "links": [],
  "panels": [
    {
      "collapsed": false,
      "gridPos": {
        "h": 1,
        "w": 24,
        "x": 0,
        "y": 0
      },
      "id": 4,
      "panels": [],
      "title": "GetCart Exemplars",
      "type": "row"
    },
    {
      "datasource": {
        "type": "prometheus",
        "uid": "webstore-metrics"
      },
      "fieldConfig": {
        "defaults": {
          "color": {
            "mode": "palette-classic"
          },
          "custom": {
            "axisBorderShow": false,
            "axisCenteredZero": false,
            "axisColorMode": "text",
            "axisLabel": "",
            "axisPlacement": "auto",
            "barAlignment": 0,
            "barWidthFactor": 0.6,
            "drawStyle": "line",
            "fillOpacity": 0,
            "gradientMode": "none",
            "hideFrom": {
              "legend": false,
              "tooltip": false,
              "viz": false
            },
            "insertNulls": false,
            "lineInterpolation": "linear",
            "lineWidth": 1,
            "pointSize": 5,
            "scaleDistribution": {
              "type": "linear"
            },
            "showPoints": "auto",
            "spanNulls": false,
            "stacking": {
              "group": "A",
              "mode": "none"
            },
            "thresholdsStyle": {
              "mode": "off"
            }
          },
          "mappings": [],
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {
                "color": "green",
                "value": null
              },
              {
                "color": "red",
                "value": 80
              }
            ]
          }
        },
        "overrides": []
      },
      "gridPos": {
        "h": 10,
        "w": 24,
        "x": 0,
        "y": 10
      },
      "id": 5,
      "interval": "2m",
      "options": {
        "legend": {
          "calcs": [],
          "displayMode": "list",
          "placement": "bottom",
          "showLegend": true
        },
        "tooltip": {
          "mode": "single",
          "sort": "none"
        }
      },
      "pluginVersion": "11.3.0",
      "targets": [
        {
          "datasource": {
            "type": "prometheus",
            "uid": "webstore-metrics"
          },
          "disableTextWrap": false,
          "editorMode": "builder",
          "exemplar": true,
          "expr": "histogram_quantile(0.95, sum by(le) (rate(app_cart_get_cart_latency_bucket[$__rate_interval])))",
          "fullMetaSearch": false,
          "includeNullMetadata": false,
          "legendFormat": "p95 GetCart",
          "range": true,
          "refId": "A",
          "useBackend": false
        }
      ],
      "title": "95th Pct Cart GetCart Latency with Exemplars",
      "type": "timeseries"
    },
    {
      "datasource": {
        "type": "prometheus",
        "uid": "webstore-metrics"
      },
      "fieldConfig": {
        "defaults": {
          "custom": {
            "hideFrom": {
              "legend": false,
              "tooltip": false,
              "viz": false
            },
            "scaleDistribution": {
              "type": "linear"
            }
          }
        },
        "overrides": []
      },
      "gridPos": {
        "h": 9,
        "w": 24,
        "x": 0,
        "y": 1
      },
      "id": 2,
      "interval": "2m",
      "options": {
        "calculate": false,
        "cellGap": 1,
        "color": {
          "exponent": 0.5,
          "fill": "dark-orange",
          "mode": "scheme",
          "reverse": false,
          "scale": "exponential",
          "scheme": "Spectral",
          "steps": 64
        },
        "exemplars": {
          "color": "rgba(255,0,255,0.7)"
        },
        "filterValues": {
          "le": 1e-9
        },
        "legend": {
          "show": true
        },
        "rowsFrame": {
          "layout": "auto"
        },
        "tooltip": {
          "mode": "single",
          "showColorScale": false,
          "yHistogram": false
        },
        "yAxis": {
          "axisPlacement": "left",
          "reverse": false
        }
      },
      "pluginVersion": "11.3.0",
      "targets": [
        {
          "disableTextWrap": false,
          "editorMode": "builder",
          "exemplar": true,
          "expr": "sum by(le) (rate(app_cart_get_cart_latency_bucket[$__rate_interval]))",
          "format": "heatmap",
          "fullMetaSearch": false,
          "includeNullMetadata": false,
          "instant": true,
          "legendFormat": "{{le}}",
          "range": true,
          "refId": "A",
          "useBackend": false
        }
      ],
      "title": "GetCart Latency Heatmap with Exemplars",
      "type": "heatmap"
    },
    {
      "collapsed": false,
      "gridPos": {
        "h": 1,
        "w": 24,
        "x": 0,
        "y": 20
      },
      "id": 3,
      "panels": [],
      "title": "AddItem Exemplars",
      "type": "row"
    },
    {
      "datasource": {
        "type": "prometheus",
        "uid": "webstore-metrics"
      },
      "fieldConfig": {
        "defaults": {
          "custom": {
            "hideFrom": {
              "legend": false,
              "tooltip": false,
              "viz": false
            },
            "scaleDistribution": {
              "type": "linear"
            }
          }
        },
        "overrides": []
      },
      "gridPos": {
        "h": 9,
        "w": 24,
        "x": 0,
        "y": 21
      },
      "id": 6,
      "interval": "2m",
      "options": {
        "calculate": false,
        "cellGap": 1,
        "color": {
          "exponent": 0.5,
          "fill": "dark-orange",
          "mode": "scheme",
          "reverse": false,
          "scale": "exponential",
          "scheme": "Spectral",
          "steps": 64
        },
        "exemplars": {
          "color": "rgba(255,0,255,0.7)"
        },
        "filterValues": {
          "le": 1e-9
        },
        "legend": {
          "show": true
        },
        "rowsFrame": {
          "layout": "auto"
        },
        "tooltip": {
          "mode": "single",
          "showColorScale": false,
          "yHistogram": false
        },
        "yAxis": {
          "axisPlacement": "left",
          "reverse": false
        }
      },
      "pluginVersion": "11.3.0",
      "targets": [
        {
          "disableTextWrap": false,
          "editorMode": "builder",
          "exemplar": true,
          "expr": "sum by(le) (rate(app_cart_add_item_latency_bucket[$__rate_interval]))",
          "format": "heatmap",
          "fullMetaSearch": false,
          "includeNullMetadata": false,
          "instant": true,
          "legendFormat": "{{le}}",
          "range": true,
          "refId": "A",
          "useBackend": false
        }
      ],
      "title": "AddItem Latency Heatmap with Exemplars",
      "type": "heatmap"
    },
    {
      "datasource": {
        "type": "prometheus",
        "uid": "webstore-metrics"
      },
      "fieldConfig": {
        "defaults": {
          "color": {
            "mode": "palette-classic"
          },
          "custom": {
            "axisBorderShow": false,
            "axisCenteredZero": false,
            "axisColorMode": "text",
            "axisLabel": "",
            "axisPlacement": "auto",
            "barAlignment": 0,
            "barWidthFactor": 0.6,
            "drawStyle": "line",
            "fillOpacity": 0,
            "gradientMode": "none",
            "hideFrom": {
              "legend": false,
              "tooltip": false,
              "viz": false
            },
            "insertNulls": false,
            "lineInterpolation": "linear",
            "lineWidth": 1,
            "pointSize": 5,
            "scaleDistribution": {
              "type": "linear"
            },
            "showPoints": "auto",
            "spanNulls": false,
            "stacking": {
              "group": "A",
              "mode": "none"
            },
            "thresholdsStyle": {
              "mode": "off"
            }
          },
          "mappings": [],
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {
                "color": "green",
                "value": null
              },
              {
                "color": "red",
                "value": 80
              }
            ]
          }
        },
        "overrides": []
      },
      "gridPos": {
        "h": 10,
        "w": 24,
        "x": 0,
        "y": 30
      },
      "id": 1,
      "interval": "2m",
      "options": {
        "legend": {
          "calcs": [],
          "displayMode": "list",
          "placement": "bottom",
          "showLegend": true
        },
        "tooltip": {
          "mode": "single",
          "sort": "none"
        }
      },
      "pluginVersion": "11.3.0",
      "targets": [
        {
          "datasource": {
            "type": "prometheus",
            "uid": "webstore-metrics"
          },
          "disableTextWrap": false,
          "editorMode": "builder",
          "exemplar": true,
          "expr": "histogram_quantile(0.95, sum by(le) (rate(app_cart_add_item_latency_bucket[$__rate_interval])))",
          "fullMetaSearch": false,
          "includeNullMetadata": false,
          "legendFormat": "p95 AddItem",
          "range": true,
          "refId": "A",
          "useBackend": false
        }
      ],
      "title": "95th Pct Cart AddItem Latency with Exemplars",
      "type": "timeseries"
    }
  ],
  "preload": false,
  "schemaVersion": 40,
  "tags": [],
  "templating": {
    "list": []
  },
  "time": {
    "from": "now-1h",
    "to": "now"
  },
  "timepicker": {},
  "timezone": "browser",
  "title": "Cart Service Exemplars",
  "uid": "ce6sd46kfkglca",
  "version": 1,
  "weekStart": ""
}
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,21 +0,0 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0

apiVersion: 1

datasources:
  - name: Prometheus
    uid: webstore-metrics
    type: prometheus
    url: http://prometheus:9090
    editable: true
    isDefault: true
    jsonData:
      exemplarTraceIdDestinations:
        - datasourceUid: webstore-traces
          name: trace_id
        - url: http://localhost:8080/jaeger/ui/trace/$${__value.raw}
          name: trace_id
          urlDisplayLabel: View in Jaeger UI
@@ -1,13 +0,0 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0

apiVersion: 1

datasources:
  - name: Jaeger
    uid: webstore-traces
    type: jaeger
    url: http://jaeger:16686/jaeger/ui
    editable: true
    isDefault: false
@@ -1,20 +0,0 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0

apiVersion: 1

datasources:
  - name: OpenSearch
    type: grafana-opensearch-datasource
    url: http://opensearch:9200/
    access: proxy
    editable: true
    isDefault: false
    jsonData:
      database: otel
      flavor: opensearch
      logLevelField: severity
      logMessageField: body
      pplEnabled: true
      timeField: observedTimestamp
      version: 2.18.0
@@ -1,18 +0,0 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0

# extra settings to be merged into OpenTelemetry Collector configuration
# do not delete this file

## Example configuration for sending data to your own OTLP HTTP backend
## Note: the spanmetrics exporter must be included in the exporters array
## if overriding the traces pipeline.
##
# exporters:
#   otlphttp/example:
#     endpoint: <your-endpoint-url>
#
# service:
#   pipelines:
#     traces:
#       exporters: [spanmetrics, otlphttp/example]
@@ -1,128 +0,0 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: ${env:OTEL_COLLECTOR_HOST}:${env:OTEL_COLLECTOR_PORT_GRPC}
      http:
        endpoint: ${env:OTEL_COLLECTOR_HOST}:${env:OTEL_COLLECTOR_PORT_HTTP}
        cors:
          allowed_origins:
            - "http://*"
            - "https://*"
  docker_stats:
    endpoint: unix:///var/run/docker.sock
  # Host metrics
  hostmetrics:
    root_path: /hostfs
    scrapers:
      cpu:
        metrics:
          system.cpu.utilization:
            enabled: true
      disk:
      load:
      filesystem:
        exclude_mount_points:
          mount_points:
            - /dev/*
            - /proc/*
            - /sys/*
            - /run/k3s/containerd/*
            - /var/lib/docker/*
            - /var/lib/kubelet/*
            - /snap/*
          match_type: regexp
        exclude_fs_types:
          fs_types:
            - autofs
            - binfmt_misc
            - bpf
            - cgroup2
            - configfs
            - debugfs
            - devpts
            - devtmpfs
            - fusectl
            - hugetlbfs
            - iso9660
            - mqueue
            - nsfs
            - overlay
            - proc
            - procfs
            - pstore
            - rpc_pipefs
            - securityfs
            - selinuxfs
            - squashfs
            - sysfs
            - tracefs
          match_type: strict
      memory:
        metrics:
          system.memory.utilization:
            enabled: true
      network:
      paging:
      processes:
      process:
        mute_process_exe_error: true
        mute_process_io_error: true
        mute_process_user_error: true
  # Collector metrics
  prometheus:
    config:
      scrape_configs:
        - job_name: "otel-collector"
          scrape_interval: 10s
          static_configs:
            - targets: ["0.0.0.0:8888"]

exporters:
  debug:
  otlp:
    endpoint: "jaeger:4317"
    tls:
      insecure: true
  otlphttp/prometheus:
    endpoint: "http://prometheus:9090/api/v1/otlp"
    tls:
      insecure: true
  opensearch:
    logs_index: otel
    http:
      endpoint: "http://opensearch:9200"
      tls:
        insecure: true

processors:
  batch:
  transform:
    error_mode: ignore
    trace_statements:
      - context: span
        statements:
          # could be removed when https://github.com/vercel/next.js/pull/64852 is fixed upstream
          - replace_pattern(name, "\\?.*", "")
          - replace_match(name, "GET /api/products/*", "GET /api/products/{productId}")

connectors:
  spanmetrics:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [transform, batch]
      exporters: [otlp, debug, spanmetrics]
    metrics:
      receivers: [hostmetrics, docker_stats, otlp, prometheus, spanmetrics]
      processors: [batch]
      exporters: [otlphttp/prometheus, debug]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [opensearch, debug]
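The `replace_pattern(name, "\\?.*", "")` transform statement above strips query strings from span names before the spanmetrics connector aggregates them, keeping the metric cardinality bounded. The same normalization, sketched in plain Go for illustration (`normalizeSpanName` is a hypothetical helper, not part of the collector):

```go
package main

import (
	"fmt"
	"regexp"
)

// queryRe matches a "?" and everything after it, mirroring the OTTL pattern
// "\\?.*" used by the transform processor.
var queryRe = regexp.MustCompile(`\?.*`)

// normalizeSpanName removes the query-string portion of an HTTP span name so
// that "GET /api/products?limit=10" and "GET /api/products?limit=20" collapse
// into one series.
func normalizeSpanName(name string) string {
	return queryRe.ReplaceAllString(name, "")
}

func main() {
	fmt.Println(normalizeSpanName("GET /api/products?limit=10"))
}
```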
@@ -1,27 +0,0 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0

global:
  scrape_interval: 5s
  scrape_timeout: 3s
  evaluation_interval: 30s

otlp:
  promote_resource_attributes:
    - service.instance.id
    - service.name
    - service.namespace
    - cloud.availability_zone
    - cloud.region
    - container.name
    - deployment.environment.name

scrape_configs:
  - job_name: otel-collector
    static_configs:
      - targets:
          - 'otel-collector:8888'

storage:
  tsdb:
    out_of_order_time_window: 30m
@@ -1,91 +0,0 @@
package otel

import (
	"context"

	"go.opentelemetry.io/otel/metric"
	"go.opentelemetry.io/otel/trace"
)

var (
	tracer trace.Tracer
	meter  metric.Meter
)

// Start starts a span named spanName on the package-level tracer and returns
// the derived context together with the span.
func Start(ctx context.Context, spanName string, opts ...trace.SpanStartOption) (context.Context, trace.Span) {
	return tracer.Start(ctx, spanName, opts...)
}

// Int64Counter creates an Int64Counter instrument on the package-level meter.
func Int64Counter(name string, options ...metric.Int64CounterOption) (metric.Int64Counter, error) {
	return meter.Int64Counter(name, options...)
}

// Int64UpDownCounter creates an Int64UpDownCounter instrument on the package-level meter.
func Int64UpDownCounter(name string, options ...metric.Int64UpDownCounterOption) (metric.Int64UpDownCounter, error) {
	return meter.Int64UpDownCounter(name, options...)
}

// Int64Histogram creates an Int64Histogram instrument on the package-level meter.
func Int64Histogram(name string, options ...metric.Int64HistogramOption) (metric.Int64Histogram, error) {
	return meter.Int64Histogram(name, options...)
}

// Int64Gauge creates an Int64Gauge instrument on the package-level meter.
func Int64Gauge(name string, options ...metric.Int64GaugeOption) (metric.Int64Gauge, error) {
	return meter.Int64Gauge(name, options...)
}

// Int64ObservableCounter creates an Int64ObservableCounter instrument on the package-level meter.
func Int64ObservableCounter(name string, options ...metric.Int64ObservableCounterOption) (metric.Int64ObservableCounter, error) {
	return meter.Int64ObservableCounter(name, options...)
}

// Int64ObservableUpDownCounter creates an Int64ObservableUpDownCounter instrument on the package-level meter.
func Int64ObservableUpDownCounter(name string, options ...metric.Int64ObservableUpDownCounterOption) (metric.Int64ObservableUpDownCounter, error) {
	return meter.Int64ObservableUpDownCounter(name, options...)
}

// Int64ObservableGauge creates an Int64ObservableGauge instrument on the package-level meter.
func Int64ObservableGauge(name string, options ...metric.Int64ObservableGaugeOption) (metric.Int64ObservableGauge, error) {
	return meter.Int64ObservableGauge(name, options...)
}

// Float64Counter creates a Float64Counter instrument on the package-level meter.
func Float64Counter(name string, options ...metric.Float64CounterOption) (metric.Float64Counter, error) {
	return meter.Float64Counter(name, options...)
}

// Float64UpDownCounter creates a Float64UpDownCounter instrument on the package-level meter.
func Float64UpDownCounter(name string, options ...metric.Float64UpDownCounterOption) (metric.Float64UpDownCounter, error) {
	return meter.Float64UpDownCounter(name, options...)
}

// Float64Histogram creates a Float64Histogram instrument on the package-level meter.
func Float64Histogram(name string, options ...metric.Float64HistogramOption) (metric.Float64Histogram, error) {
	return meter.Float64Histogram(name, options...)
}

// Float64Gauge creates a Float64Gauge instrument on the package-level meter.
func Float64Gauge(name string, options ...metric.Float64GaugeOption) (metric.Float64Gauge, error) {
	return meter.Float64Gauge(name, options...)
}

// Float64ObservableCounter creates a Float64ObservableCounter instrument on the package-level meter.
func Float64ObservableCounter(name string, options ...metric.Float64ObservableCounterOption) (metric.Float64ObservableCounter, error) {
	return meter.Float64ObservableCounter(name, options...)
}

// Float64ObservableUpDownCounter creates a Float64ObservableUpDownCounter instrument on the package-level meter.
func Float64ObservableUpDownCounter(name string, options ...metric.Float64ObservableUpDownCounterOption) (metric.Float64ObservableUpDownCounter, error) {
	return meter.Float64ObservableUpDownCounter(name, options...)
}

// Float64ObservableGauge creates a Float64ObservableGauge instrument on the package-level meter.
func Float64ObservableGauge(name string, options ...metric.Float64ObservableGaugeOption) (metric.Float64ObservableGauge, error) {
	return meter.Float64ObservableGauge(name, options...)
}

// RegisterCallback registers f to observe the given instruments on the package-level meter.
func RegisterCallback(f metric.Callback, instruments ...metric.Observable) (metric.Registration, error) {
	return meter.RegisterCallback(f, instruments...)
}
@@ -1,292 +0,0 @@
package otel

import (
	"context"
	"os"
	"time"

	"go.ipao.vip/atom/container"
	"go.ipao.vip/atom/contracts"
	"go.ipao.vip/atom/opt"

	"github.com/pkg/errors"
	log "github.com/sirupsen/logrus"
	"go.opentelemetry.io/contrib/instrumentation/runtime"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	"go.opentelemetry.io/otel/propagation"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.15.0"
	"google.golang.org/grpc/encoding/gzip"
)

// formatAuth formats token into an Authorization header value. An empty token
// yields an empty header; an existing "Bearer " prefix is preserved.
func formatAuth(token string) string {
	if token == "" {
		return ""
	}
	if len(token) > 7 && (token[:7] == "Bearer " || token[:7] == "bearer ") {
		return token
	}
	return "Bearer " + token
}

func Provide(opts ...opt.Option) error {
	o := opt.New(opts...)
	var config Config
	if err := o.UnmarshalConfig(&config); err != nil {
		return err
	}
	config.format()
	return container.Container.Provide(func(ctx context.Context) (contracts.Initial, error) {
		o := &builder{
			config: &config,
		}

		if err := o.initResource(ctx); err != nil {
			return o, errors.Wrapf(err, "Failed to create OpenTelemetry resource")
		}

		if err := o.initMeterProvider(ctx); err != nil {
			return o, errors.Wrapf(err, "Failed to create OpenTelemetry metric provider")
		}

		if err := o.initTracerProvider(ctx); err != nil {
			return o, errors.Wrapf(err, "Failed to create OpenTelemetry tracer provider")
		}

		tracer = otel.Tracer(config.ServiceName)
		meter = otel.Meter(config.ServiceName)

		log.Info("otel provider init success")
		return o, nil
	}, o.DiOptions()...)
}

type builder struct {
	config *Config

	resource *resource.Resource
}

func (o *builder) initResource(ctx context.Context) (err error) {
	hostName, _ := os.Hostname()

	o.resource, err = resource.New(
		ctx,
		resource.WithFromEnv(),
		resource.WithProcess(),
		resource.WithTelemetrySDK(),
		resource.WithHost(),
		resource.WithOS(),
		resource.WithContainer(),
		resource.WithAttributes(
			semconv.ServiceNameKey.String(o.config.ServiceName),   // service name
			semconv.ServiceVersionKey.String(o.config.Version),    // service version
			semconv.DeploymentEnvironmentKey.String(o.config.Env), // deployment environment
			semconv.HostNameKey.String(hostName),                  // host name
		),
	)
	return err
}

func (o *builder) initMeterProvider(ctx context.Context) (err error) {
	exporterGrpcFunc := func(ctx context.Context) (sdkmetric.Exporter, error) {
		opts := []otlpmetricgrpc.Option{
			otlpmetricgrpc.WithEndpoint(o.config.EndpointGRPC),
			otlpmetricgrpc.WithCompressor(gzip.Name),
		}
		if o.config.InsecureGRPC {
			opts = append(opts, otlpmetricgrpc.WithInsecure())
		}

		if h := formatAuth(o.config.Token); h != "" {
			opts = append(opts, otlpmetricgrpc.WithHeaders(map[string]string{"Authorization": h}))
		}

		exporter, err := otlpmetricgrpc.New(ctx, opts...)
		if err != nil {
			return nil, err
		}
		return exporter, nil
	}

	exporterHttpFunc := func(ctx context.Context) (sdkmetric.Exporter, error) {
		opts := []otlpmetrichttp.Option{
			otlpmetrichttp.WithEndpoint(o.config.EndpointHTTP),
			otlpmetrichttp.WithCompression(otlpmetrichttp.GzipCompression),
		}
		if o.config.InsecureHTTP {
			opts = append(opts, otlpmetrichttp.WithInsecure())
		}
		if h := formatAuth(o.config.Token); h != "" {
			opts = append(opts, otlpmetrichttp.WithHeaders(map[string]string{"Authorization": h}))
		}

		exporter, err := otlpmetrichttp.New(ctx, opts...)
		if err != nil {
			return nil, err
		}
		return exporter, nil
	}

	var exporter sdkmetric.Exporter
	if o.config.EndpointHTTP != "" {
		exporter, err = exporterHttpFunc(ctx)
	} else {
		exporter, err = exporterGrpcFunc(ctx)
	}

	if err != nil {
		return err
	}

	// periodic reader with optional custom interval
	var readerOpts []sdkmetric.PeriodicReaderOption
	if o.config.MetricReaderIntervalMs > 0 {
		readerOpts = append(readerOpts, sdkmetric.WithInterval(time.Duration(o.config.MetricReaderIntervalMs)*time.Millisecond))
	}
	meterProvider := sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(
			sdkmetric.NewPeriodicReader(exporter, readerOpts...),
		),
		sdkmetric.WithResource(o.resource),
	)
	otel.SetMeterProvider(meterProvider)

	interval := 5 * time.Second
	if o.config.RuntimeReadMemStatsIntervalMs > 0 {
		interval = time.Duration(o.config.RuntimeReadMemStatsIntervalMs) * time.Millisecond
	}
	err = runtime.Start(runtime.WithMinimumReadMemStatsInterval(interval))
	if err != nil {
		return errors.Wrapf(err, "Failed to start runtime metrics")
	}

	container.AddCloseAble(func() {
		if err := meterProvider.Shutdown(ctx); err != nil {
			otel.Handle(err)
		}
	})

	return err
}

func (o *builder) initTracerProvider(ctx context.Context) error {
	exporterGrpcFunc := func(ctx context.Context) (*otlptrace.Exporter, error) {
		opts := []otlptracegrpc.Option{
			otlptracegrpc.WithCompressor(gzip.Name),
			otlptracegrpc.WithEndpoint(o.config.EndpointGRPC),
		}
		if o.config.InsecureGRPC {
			opts = append(opts, otlptracegrpc.WithInsecure())
		}

		if h := formatAuth(o.config.Token); h != "" {
			opts = append(opts, otlptracegrpc.WithHeaders(map[string]string{"Authorization": h}))
		}

		log.Debugf("Creating GRPC trace exporter with endpoint: %s", o.config.EndpointGRPC)
		exporter, err := otlptrace.New(ctx, otlptracegrpc.NewClient(opts...))
		if err != nil {
			return nil, errors.Wrap(err, "failed to create GRPC trace exporter")
		}

		container.AddCloseAble(func() {
			shutdownCtx, cancel := context.WithTimeout(ctx, time.Second)
			defer cancel()
			if err := exporter.Shutdown(shutdownCtx); err != nil {
				otel.Handle(err)
			}
		})

		return exporter, nil
	}

	exporterHttpFunc := func(ctx context.Context) (*otlptrace.Exporter, error) {
		opts := []otlptracehttp.Option{
			otlptracehttp.WithCompression(otlptracehttp.GzipCompression),
			otlptracehttp.WithEndpoint(o.config.EndpointHTTP),
		}
		if o.config.InsecureHTTP {
			opts = append(opts, otlptracehttp.WithInsecure())
		}

		if h := formatAuth(o.config.Token); h != "" {
			opts = append(opts, otlptracehttp.WithHeaders(map[string]string{"Authorization": h}))
		}

		log.Debugf("Creating HTTP trace exporter with endpoint: %s", o.config.EndpointHTTP)
		exporter, err := otlptrace.New(ctx, otlptracehttp.NewClient(opts...))
		if err != nil {
			return nil, errors.Wrap(err, "failed to create HTTP trace exporter")
		}

		return exporter, nil
	}

	var exporter *otlptrace.Exporter
	var err error
	if o.config.EndpointHTTP != "" {
		exporter, err = exporterHttpFunc(ctx)
		log.Infof("otel http exporter: %s", o.config.EndpointHTTP)
	} else {
		exporter, err = exporterGrpcFunc(ctx)
		log.Infof("otel grpc exporter: %s", o.config.EndpointGRPC)
	}

	if err != nil {
		return err
	}

	// Sampler: default to AlwaysSample; "ratio" uses a parent-based
	// TraceIDRatioBased sampler with the ratio clamped to [0, 1].
	sampler := sdktrace.AlwaysSample()
	if o.config.Sampler == "ratio" {
		ratio := o.config.SamplerRatio
		if ratio < 0 {
			ratio = 0
		}
		if ratio > 1 {
			ratio = 1
		}
		sampler = sdktrace.ParentBased(sdktrace.TraceIDRatioBased(ratio))
	}

	// Batcher options
	var batchOpts []sdktrace.BatchSpanProcessorOption
	if o.config.BatchTimeoutMs > 0 {
		batchOpts = append(batchOpts, sdktrace.WithBatchTimeout(time.Duration(o.config.BatchTimeoutMs)*time.Millisecond))
	}
	if o.config.ExportTimeoutMs > 0 {
		batchOpts = append(batchOpts, sdktrace.WithExportTimeout(time.Duration(o.config.ExportTimeoutMs)*time.Millisecond))
	}
	if o.config.MaxQueueSize > 0 {
		batchOpts = append(batchOpts, sdktrace.WithMaxQueueSize(o.config.MaxQueueSize))
	}
	if o.config.MaxExportBatchSize > 0 {
		batchOpts = append(batchOpts, sdktrace.WithMaxExportBatchSize(o.config.MaxExportBatchSize))
	}

	traceProvider := sdktrace.NewTracerProvider(
		sdktrace.WithSampler(sampler),
		sdktrace.WithResource(o.resource),
		sdktrace.WithBatcher(exporter, batchOpts...),
	)
	container.AddCloseAble(func() {
		log.Info("shutting down otel tracer provider")
		if err := traceProvider.Shutdown(ctx); err != nil {
			otel.Handle(err)
		}
	})

	otel.SetTracerProvider(traceProvider)
	otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(
		propagation.TraceContext{},
		propagation.Baggage{},
	))

	return err
}
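The sampler block above forces the configured ratio into [0, 1] before handing it to `TraceIDRatioBased`. That clamping rule in isolation, as a small sketch (`clampRatio` is a hypothetical helper name, not part of the provider):

```go
package main

import "fmt"

// clampRatio normalizes a sampling ratio the way the tracer provider does:
// negative values sample nothing, values above 1 sample everything.
func clampRatio(r float64) float64 {
	if r < 0 {
		return 0
	}
	if r > 1 {
		return 1
	}
	return r
}

func main() {
	fmt.Println(clampRatio(-0.5), clampRatio(0.25), clampRatio(3))
}
```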
@@ -1,152 +0,0 @@
package redis

import (
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"

	"go.ipao.vip/atom/container"
	"go.ipao.vip/atom/opt"
)

const DefaultPrefix = "Redis"

func DefaultProvider() container.ProviderContainer {
	return container.ProviderContainer{
		Provider: Provide,
		Options: []opt.Option{
			opt.Prefix(DefaultPrefix),
		},
	}
}

type Config struct {
	Host       string
	Port       uint
	Password   string
	DB         uint
	Username   string
	ClientName string

	// Pool & retries
	PoolSize     int
	MinIdleConns int
	MaxRetries   int

	// Timeouts (seconds); 0 = library default
	DialTimeoutSeconds     uint
	ReadTimeoutSeconds     uint
	WriteTimeoutSeconds    uint
	ConnMaxIdleTimeSeconds uint
	ConnMaxLifetimeSeconds uint
	PingTimeoutSeconds     uint
}

func (c *Config) format() {
	if c.Host == "" {
		c.Host = "localhost"
	}

	if c.Port == 0 {
		c.Port = 6379
	}

	// DB defaults to database 0, its zero value, so no adjustment is needed.

	if c.PingTimeoutSeconds == 0 {
		c.PingTimeoutSeconds = 5
	}
}

func (c *Config) Addr() string {
	return fmt.Sprintf("%s:%d", c.Host, c.Port)
}

// DialTimeout returns the dial timeout duration, 0 means library default
func (c *Config) DialTimeout() time.Duration {
	if c.DialTimeoutSeconds == 0 {
		return 0
	}
	return time.Duration(c.DialTimeoutSeconds) * time.Second
}

// ReadTimeout returns the read timeout duration, 0 means library default
func (c *Config) ReadTimeout() time.Duration {
	if c.ReadTimeoutSeconds == 0 {
		return 0
	}
	return time.Duration(c.ReadTimeoutSeconds) * time.Second
}

// WriteTimeout returns the write timeout duration, 0 means library default
func (c *Config) WriteTimeout() time.Duration {
	if c.WriteTimeoutSeconds == 0 {
		return 0
	}
	return time.Duration(c.WriteTimeoutSeconds) * time.Second
}

// ConnMaxIdleTime returns the max idle time for a connection, 0 means default
func (c *Config) ConnMaxIdleTime() time.Duration {
	if c.ConnMaxIdleTimeSeconds == 0 {
		return 0
	}
	return time.Duration(c.ConnMaxIdleTimeSeconds) * time.Second
}

// ConnMaxLifetime returns the max lifetime for a connection, 0 means default
func (c *Config) ConnMaxLifetime() time.Duration {
	if c.ConnMaxLifetimeSeconds == 0 {
		return 0
	}
	return time.Duration(c.ConnMaxLifetimeSeconds) * time.Second
}

// PingTimeout returns the ping timeout duration, default 5s
func (c *Config) PingTimeout() time.Duration {
	if c.PingTimeoutSeconds == 0 {
		return 5 * time.Second
	}
	return time.Duration(c.PingTimeoutSeconds) * time.Second
}

// Options builds a redis.Options based on the config values.
// Only non-zero/meaningful values are set, to preserve library defaults.
func (c *Config) Options() *redis.Options {
	c.format()
	ro := &redis.Options{
		Addr:       c.Addr(),
		Username:   c.Username,
		Password:   c.Password,
		DB:         int(c.DB),
		ClientName: c.ClientName,
	}
	if c.PoolSize > 0 {
		ro.PoolSize = c.PoolSize
	}
	if c.MinIdleConns > 0 {
		ro.MinIdleConns = c.MinIdleConns
	}
	if c.MaxRetries > 0 {
		ro.MaxRetries = c.MaxRetries
	}
	if dt := c.DialTimeout(); dt > 0 {
		ro.DialTimeout = dt
	}
	if rt := c.ReadTimeout(); rt > 0 {
		ro.ReadTimeout = rt
	}
	if wt := c.WriteTimeout(); wt > 0 {
		ro.WriteTimeout = wt
	}
	if it := c.ConnMaxIdleTime(); it > 0 {
		ro.ConnMaxIdleTime = it
	}
	if lt := c.ConnMaxLifetime(); lt > 0 {
		ro.ConnMaxLifetime = lt
	}
	return ro
}
@@ -1,48 +0,0 @@
package redis

import (
	"context"

	"github.com/redis/go-redis/v9"
	log "github.com/sirupsen/logrus"
	"go.ipao.vip/atom/container"
	"go.ipao.vip/atom/opt"
)

func Provide(opts ...opt.Option) error {
	o := opt.New(opts...)
	var config Config
	if err := o.UnmarshalConfig(&config); err != nil {
		return err
	}
	config.format()
	return container.Container.Provide(func() (redis.UniversalClient, error) {
		// Build options via config helper (encapsulates defaults/decisions)
		ro := config.Options()

		// Safe structured log (no password)
		log.WithFields(log.Fields{
			"addr":        ro.Addr,
			"db":          ro.DB,
			"user":        ro.Username,
			"client_name": ro.ClientName,
			"pool_size":   ro.PoolSize,
			"min_idle":    ro.MinIdleConns,
			"retries":     ro.MaxRetries,
		}).Info("opening Redis connection")

		rdb := redis.NewClient(ro)

		// ping to verify connectivity
		ctx, cancel := context.WithTimeout(context.Background(), config.PingTimeout())
		defer cancel()
		if _, err := rdb.Ping(ctx).Result(); err != nil {
			return nil, err
		}

		// close hook
		container.AddCloseAble(func() { _ = rdb.Close() })

		return rdb, nil
	}, o.DiOptions()...)
}
@@ -1,243 +0,0 @@
package req

import (
	"context"
	"io"
	"net/http"
	"net/url"
	"os"
	"path/filepath"
	"strconv"
	"strings"
	"time"

	"test/providers/req/cookiejar"

	"github.com/imroc/req/v3"
	"go.ipao.vip/atom/container"
	"go.ipao.vip/atom/opt"
)

type Client struct {
	client *req.Client
	jar    *cookiejar.Jar
}

func Provide(opts ...opt.Option) error {
	o := opt.New(opts...)
	var config Config
	if err := o.UnmarshalConfig(&config); err != nil {
		return err
	}
	return container.Container.Provide(func() (*Client, error) {
		c := &Client{}

		client := req.C()
		if config.DevMode {
			client.DevMode()
		}

		if config.CookieJarFile != "" {
			dir := filepath.Dir(config.CookieJarFile)
			if _, err := os.Stat(dir); os.IsNotExist(err) {
				err = os.MkdirAll(dir, 0o755)
				if err != nil {
					return nil, err
				}
			}
			jar, err := cookiejar.New(&cookiejar.Options{
				Filename: config.CookieJarFile,
			})
			if err != nil {
				return nil, err
			}

			c.jar = jar
			client.SetCookieJar(jar)
		}

		if config.RootCa != nil {
			client.SetRootCertsFromFile(config.RootCa...)
		}

		if config.InsecureSkipVerify {
			client.EnableInsecureSkipVerify()
		}

		if config.UserAgent != "" {
			client.SetUserAgent(config.UserAgent)
		}
		if config.BaseURL != "" {
			client.SetBaseURL(config.BaseURL)
		}
		if config.Timeout > 0 {
			client.SetTimeout(time.Duration(config.Timeout) * time.Second)
		}

		if config.CommonHeaders != nil {
			client.SetCommonHeaders(config.CommonHeaders)
		}
		if config.CommonQuery != nil {
			client.SetCommonQueryParams(config.CommonQuery)
		}
		if config.ContentType != "" {
			client.SetCommonContentType(config.ContentType)
		}

		if config.AuthBasic.Username != "" && config.AuthBasic.Password != "" {
			client.SetCommonBasicAuth(config.AuthBasic.Username, config.AuthBasic.Password)
		}

		if config.AuthBearerToken != "" {
			client.SetCommonBearerAuthToken(config.AuthBearerToken)
		}

		if config.ProxyURL != "" {
			client.SetProxyURL(config.ProxyURL)
		}

		if config.RedirectPolicy != nil {
			client.SetRedirectPolicy(parsePolicies(config.RedirectPolicy)...)
		}

		c.client = client
		if c.jar != nil {
			container.AddCloseAble(func() { _ = c.jar.Save() })
		}
		return c, nil
	}, o.DiOptions()...)
}

// parsePolicies maps policy entries such as "Max:10", "No" or "AllowedHost:a,b"
// to req redirect policies. Bare names take no argument; malformed entries are
// skipped.
func parsePolicies(policies []string) []req.RedirectPolicy {
	ps := []req.RedirectPolicy{}
	for _, policy := range policies {
		policyItems := strings.SplitN(policy, ":", 2)

		switch policyItems[0] {
		case "Max":
			if len(policyItems) != 2 {
				continue
			}
			n, err := strconv.Atoi(policyItems[1])
			if err != nil {
				continue
			}
			ps = append(ps, req.MaxRedirectPolicy(n))
		case "No":
			ps = append(ps, req.NoRedirectPolicy())
		case "SameDomain":
			ps = append(ps, req.SameDomainRedirectPolicy())
		case "SameHost":
			ps = append(ps, req.SameHostRedirectPolicy())
		case "AllowedHost":
			if len(policyItems) != 2 {
				continue
			}
			ps = append(ps, req.AllowedHostRedirectPolicy(strings.Split(policyItems[1], ",")...))
		case "AllowedDomain":
			if len(policyItems) != 2 {
				continue
			}
			ps = append(ps, req.AllowedDomainRedirectPolicy(strings.Split(policyItems[1], ",")...))
		}
	}

	return ps
}

func (c *Client) R() *req.Request {
	return c.client.R()
}

func (c *Client) RWithCtx(ctx context.Context) *req.Request {
	return c.client.R().SetContext(ctx)
}

// SaveCookieJar persists the cookie jar; it is a no-op when no jar is configured.
func (c *Client) SaveCookieJar() error {
	if c.jar == nil {
		return nil
	}
	return c.jar.Save()
}

func (c *Client) GetCookie(key string) (string, bool) {
	kv := c.AllCookiesKV()
	v, ok := kv[key]
	return v, ok
}

func (c *Client) AllCookies() []*http.Cookie {
	return c.jar.AllCookies()
}

func (c *Client) AllCookiesKV() map[string]string {
	return c.jar.KVData()
}

func (c *Client) SetCookie(u *url.URL, cookies []*http.Cookie) {
	c.jar.SetCookies(u, cookies)
}

func (c *Client) DoJSON(ctx context.Context, method, url string, in any, out any) error {
	r := c.RWithCtx(ctx)
	if in != nil {
		r.SetBody(in)
	}
	if out != nil {
		r.SetSuccessResult(out)
	}
	var resp *req.Response
	var err error
	switch strings.ToUpper(method) {
	case http.MethodGet:
		resp, err = r.Get(url)
	case http.MethodPost:
		resp, err = r.Post(url)
	case http.MethodPut:
		resp, err = r.Put(url)
	case http.MethodPatch:
		resp, err = r.Patch(url)
	case http.MethodDelete:
		resp, err = r.Delete(url)
	default:
		resp, err = r.Send(method, url)
	}
	if err != nil {
		return err
	}
	return resp.Err
}

func (c *Client) GetJSON(ctx context.Context, url string, out any, query map[string]string) error {
	r := c.RWithCtx(ctx)
	if query != nil {
		r.SetQueryParams(query)
	}
	r.SetSuccessResult(out)
	resp, err := r.Get(url)
	if err != nil {
		return err
	}
	return resp.Err
}

func (c *Client) PostJSON(ctx context.Context, url string, in any, out any) error {
	r := c.RWithCtx(ctx)
	if in != nil {
		r.SetBody(in)
	}
	if out != nil {
		r.SetSuccessResult(out)
	}
	resp, err := r.Post(url)
	if err != nil {
		return err
	}
	return resp.Err
}

func (c *Client) Download(ctx context.Context, url, filepath string) error {
	r := c.RWithCtx(ctx)
	resp, err := r.Get(url)
	if err != nil {
		return err
	}
	f, err := os.Create(filepath)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}
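The redirect-policy entries come in two shapes: bare names (`No`, `SameDomain`, `SameHost`) and `Name:args` pairs (`Max:10`, `AllowedHost:a,b`). A standalone sketch of that split, without the `req` dependency (`parsePolicy` is a hypothetical helper mirroring the provider's switch, not its actual API):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePolicy splits a "Name" or "Name:args" policy entry and validates it:
// bare-name policies must have no argument, "Max" needs an integer, and the
// Allowed* policies take a comma-separated list.
func parsePolicy(entry string) (name string, args []string, ok bool) {
	parts := strings.SplitN(entry, ":", 2)
	name = parts[0]
	switch name {
	case "No", "SameDomain", "SameHost":
		return name, nil, len(parts) == 1
	case "Max":
		if len(parts) != 2 {
			return name, nil, false
		}
		if _, err := strconv.Atoi(parts[1]); err != nil {
			return name, nil, false
		}
		return name, parts[1:], true
	case "AllowedHost", "AllowedDomain":
		if len(parts) != 2 {
			return name, nil, false
		}
		return name, strings.Split(parts[1], ","), true
	}
	return name, nil, false
}

func main() {
	name, args, ok := parsePolicy("AllowedHost:a.com,b.com")
	fmt.Println(name, args, ok)
}
```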
@@ -1,37 +0,0 @@
package req

import (
	"go.ipao.vip/atom/container"
	"go.ipao.vip/atom/opt"
)

const DefaultPrefix = "HttpClient"

func DefaultProvider() container.ProviderContainer {
	return container.ProviderContainer{
		Provider: Provide,
		Options: []opt.Option{
			opt.Prefix(DefaultPrefix),
		},
	}
}

type Config struct {
	DevMode            bool
	CookieJarFile      string
	RootCa             []string
	UserAgent          string
	InsecureSkipVerify bool
	CommonHeaders      map[string]string
	CommonQuery        map[string]string
	BaseURL            string
	ContentType        string
	Timeout            uint
	AuthBasic          struct {
		Username string
		Password string
	}
	AuthBearerToken string
	ProxyURL        string
	RedirectPolicy  []string // "Max:10;No;SameDomain;SameHost;AllowedHost:x,x,x,x,x,AllowedDomain:x,x,x,x,x"
}
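The RedirectPolicy comment above implies a small grammar per element. A hedged sketch of how such entries might be parsed (`parsePolicy` is hypothetical — the actual Provide implementation is not shown in this diff):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePolicy interprets one RedirectPolicy entry such as "Max:10",
// "No", "SameDomain", "SameHost", or "AllowedHost:a.com,b.com".
// Hypothetical helper, inferred from the Config field comment.
func parsePolicy(s string) (kind string, args []string, max int, err error) {
	name, rest, found := strings.Cut(s, ":")
	switch name {
	case "Max":
		if !found {
			return "", nil, 0, fmt.Errorf("max requires a value")
		}
		max, err = strconv.Atoi(rest)
		return name, nil, max, err
	case "No", "SameDomain", "SameHost":
		return name, nil, 0, nil
	case "AllowedHost", "AllowedDomain":
		if found {
			args = strings.Split(rest, ",")
		}
		return name, args, 0, nil
	}
	return "", nil, 0, fmt.Errorf("unknown policy %q", s)
}

func main() {
	kind, _, max, _ := parsePolicy("Max:10")
	fmt.Println(kind, max) // prints "Max 10"
	kind, args, _, _ := parsePolicy("AllowedHost:a.com,b.com")
	fmt.Println(kind, len(args)) // prints "AllowedHost 2"
}
```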
@@ -1,704 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package cookiejar implements an in-memory RFC 6265-compliant http.CookieJar.
//
// This implementation is a fork of net/http/cookiejar which also
// implements methods for dumping the cookies to persistent
// storage and retrieving them.
package cookiejar

import (
	"fmt"
	"net"
	"net/http"
	"net/url"
	"os"
	"path/filepath"
	"runtime"
	"sort"
	"strings"
	"sync"
	"time"

	"github.com/pkg/errors"
	"golang.org/x/net/publicsuffix"
)

// PublicSuffixList provides the public suffix of a domain. For example:
//   - the public suffix of "example.com" is "com",
//   - the public suffix of "foo1.foo2.foo3.co.uk" is "co.uk", and
//   - the public suffix of "bar.pvt.k12.ma.us" is "pvt.k12.ma.us".
//
// Implementations of PublicSuffixList must be safe for concurrent use by
// multiple goroutines.
//
// An implementation that always returns "" is valid and may be useful for
// testing but it is not secure: it means that the HTTP server for foo.com can
// set a cookie for bar.com.
//
// A public suffix list implementation is in the package
// golang.org/x/net/publicsuffix.
type PublicSuffixList interface {
	// PublicSuffix returns the public suffix of domain.
	//
	// TODO: specify which of the caller and callee is responsible for IP
	// addresses, for leading and trailing dots, for case sensitivity, and
	// for IDN/Punycode.
	PublicSuffix(domain string) string

	// String returns a description of the source of this public suffix
	// list. The description will typically contain something like a time
	// stamp or version number.
	String() string
}

// Options are the options for creating a new Jar.
type Options struct {
	// PublicSuffixList is the public suffix list that determines whether
	// an HTTP server can set a cookie for a domain.
	//
	// If this is nil, the public suffix list implementation in golang.org/x/net/publicsuffix
	// is used.
	PublicSuffixList PublicSuffixList

	// Filename holds the file to use for storage of the cookies.
	// If it is empty, the value of DefaultCookieFile will be used.
	Filename string

	// NoPersist specifies whether no persistence should be used
	// (useful for tests). If this is true, the value of Filename will be
	// ignored.
	NoPersist bool
}

// Jar implements the http.CookieJar interface from the net/http package.
type Jar struct {
	// filename holds the file that the cookies were loaded from.
	filename string

	psList PublicSuffixList

	// mu locks the remaining fields.
	mu sync.Mutex

	// entries is a set of entries, keyed by their eTLD+1 and subkeyed by
	// their name/domain/path.
	entries map[string]map[string]entry
}

var noOptions Options

// New returns a new cookie jar. A nil *Options is equivalent to a zero
// Options.
//
// New will return an error if the cookies could not be loaded
// from the file for any reason other than the file not existing.
func New(o *Options) (*Jar, error) {
	return newAtTime(o, time.Now())
}

// newAtTime is like New but takes the current time as a parameter.
func newAtTime(o *Options, now time.Time) (*Jar, error) {
	jar := &Jar{
		entries: make(map[string]map[string]entry),
	}
	if o == nil {
		o = &noOptions
	}
	if jar.psList = o.PublicSuffixList; jar.psList == nil {
		jar.psList = publicsuffix.List
	}
	if !o.NoPersist {
		if jar.filename = o.Filename; jar.filename == "" {
			jar.filename = DefaultCookieFile()
		}
		if err := jar.load(); err != nil {
			return nil, errors.Wrap(err, "cannot load cookies")
		}
	}
	jar.deleteExpired(now)
	return jar, nil
}

// homeDir returns the OS-specific home path as specified in the environment.
func homeDir() string {
	if runtime.GOOS == "windows" {
		return filepath.Join(os.Getenv("HOMEDRIVE"), os.Getenv("HOMEPATH"))
	}
	return os.Getenv("HOME")
}

// entry is the internal representation of a cookie.
//
// This struct type is not used outside of this package per se, but the exported
// fields are those of RFC 6265.
// Note that this structure is marshaled to JSON, so backward-compatibility
// should be preserved.
type entry struct {
	Name       string
	Value      string
	Domain     string
	Path       string
	Secure     bool
	HttpOnly   bool
	Persistent bool
	HostOnly   bool
	Expires    time.Time
	Creation   time.Time
	LastAccess time.Time

	// Updated records when the cookie was updated.
	// This is different from creation time because a cookie
	// can be changed without updating the creation time.
	Updated time.Time

	// CanonicalHost stores the original canonical host name
	// that the cookie was associated with. We store this
	// so that even if the public suffix list changes (for example
	// when storing/loading cookies) we can still get the correct
	// jar keys.
	CanonicalHost string
}

// id returns the domain;path;name triple of e as an id.
func (e *entry) id() string {
	return id(e.Domain, e.Path, e.Name)
}

// id returns the domain;path;name triple as an id.
func id(domain, path, name string) string {
	return fmt.Sprintf("%s;%s;%s", domain, path, name)
}

// shouldSend determines whether e's cookie qualifies to be included in a
// request to host/path. It is the caller's responsibility to check if the
// cookie is expired.
func (e *entry) shouldSend(https bool, host, path string) bool {
	return e.domainMatch(host) && e.pathMatch(path) && (https || !e.Secure)
}

// domainMatch implements "domain-match" of RFC 6265 section 5.1.3.
func (e *entry) domainMatch(host string) bool {
	if e.Domain == host {
		return true
	}
	return !e.HostOnly && hasDotSuffix(host, e.Domain)
}

// pathMatch implements "path-match" according to RFC 6265 section 5.1.4.
func (e *entry) pathMatch(requestPath string) bool {
	if requestPath == e.Path {
		return true
	}
	if strings.HasPrefix(requestPath, e.Path) {
		if e.Path[len(e.Path)-1] == '/' {
			return true // The "/any/" matches "/any/path" case.
		} else if requestPath[len(e.Path)] == '/' {
			return true // The "/any" matches "/any/path" case.
		}
	}
	return false
}

// hasDotSuffix reports whether s ends in "."+suffix.
func hasDotSuffix(s, suffix string) bool {
	return len(s) > len(suffix) && s[len(s)-len(suffix)-1] == '.' && s[len(s)-len(suffix):] == suffix
}
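The two matching predicates above are small enough to exercise standalone. A sketch with the same logic copied out (names reused for illustration only, detached from the entry type):

```go
package main

import (
	"fmt"
	"strings"
)

// hasDotSuffix reports whether s ends in "."+suffix, as in the jar code.
func hasDotSuffix(s, suffix string) bool {
	return len(s) > len(suffix) && s[len(s)-len(suffix)-1] == '.' && s[len(s)-len(suffix):] == suffix
}

// pathMatch mirrors RFC 6265 section 5.1.4 path-match from above,
// with the cookie path passed explicitly instead of read from an entry.
func pathMatch(requestPath, cookiePath string) bool {
	if requestPath == cookiePath {
		return true
	}
	if strings.HasPrefix(requestPath, cookiePath) {
		if cookiePath[len(cookiePath)-1] == '/' {
			return true // "/any/" matches "/any/path"
		} else if requestPath[len(cookiePath)] == '/' {
			return true // "/any" matches "/any/path"
		}
	}
	return false
}

func main() {
	fmt.Println(hasDotSuffix("www.example.com", "example.com")) // true
	fmt.Println(hasDotSuffix("badexample.com", "example.com"))  // false: no dot boundary
	fmt.Println(pathMatch("/any/path", "/any"))                 // true
	fmt.Println(pathMatch("/anything", "/any"))                 // false: prefix but no "/" boundary
}
```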
type byCanonicalHost struct {
	byPathLength
}

func (s byCanonicalHost) Less(i, j int) bool {
	e0, e1 := &s.byPathLength[i], &s.byPathLength[j]
	if e0.CanonicalHost != e1.CanonicalHost {
		return e0.CanonicalHost < e1.CanonicalHost
	}
	return s.byPathLength.Less(i, j)
}

// byPathLength is a []entry sort.Interface that sorts according to RFC 6265
// section 5.4 point 2: by longest path and then by earliest creation time.
type byPathLength []entry

func (s byPathLength) Len() int { return len(s) }

func (s byPathLength) Less(i, j int) bool {
	e0, e1 := &s[i], &s[j]
	if len(e0.Path) != len(e1.Path) {
		return len(e0.Path) > len(e1.Path)
	}
	if !e0.Creation.Equal(e1.Creation) {
		return e0.Creation.Before(e1.Creation)
	}
	// The following are not strictly necessary
	// but are useful for providing deterministic
	// behaviour in tests.
	if e0.Name != e1.Name {
		return e0.Name < e1.Name
	}
	return e0.Value < e1.Value
}

func (s byPathLength) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
// Cookies implements the Cookies method of the http.CookieJar interface.
//
// It returns an empty slice if the URL's scheme is not HTTP or HTTPS.
func (j *Jar) Cookies(u *url.URL) (cookies []*http.Cookie) {
	return j.cookies(u, time.Now())
}

// cookies is like Cookies but takes the current time as a parameter.
func (j *Jar) cookies(u *url.URL, now time.Time) (cookies []*http.Cookie) {
	if u.Scheme != "http" && u.Scheme != "https" {
		return cookies
	}
	host, err := canonicalHost(u.Host)
	if err != nil {
		return cookies
	}
	key := jarKey(host, j.psList)

	j.mu.Lock()
	defer j.mu.Unlock()

	submap := j.entries[key]
	if submap == nil {
		return cookies
	}

	https := u.Scheme == "https"
	path := u.Path
	if path == "" {
		path = "/"
	}

	var selected []entry
	for id, e := range submap {
		if !e.Expires.After(now) {
			// Save some space by deleting the value when the cookie
			// expires. We can't delete the cookie itself because then
			// we wouldn't know that the cookie had expired when
			// we merge with another cookie jar.
			if e.Value != "" {
				e.Value = ""
				submap[id] = e
			}
			continue
		}
		if !e.shouldSend(https, host, path) {
			continue
		}
		e.LastAccess = now
		submap[id] = e
		selected = append(selected, e)
	}

	sort.Sort(byPathLength(selected))
	for _, e := range selected {
		cookies = append(cookies, &http.Cookie{Name: e.Name, Value: e.Value})
	}

	return cookies
}

// AllCookies returns all cookies in the jar. The returned cookies will
// have Domain, Expires, HttpOnly, Name, Secure, Path, and Value filled
// out. Expired cookies will not be returned. This function does not
// modify the cookie jar.
func (j *Jar) AllCookies() (cookies []*http.Cookie) {
	return j.allCookies(time.Now())
}

// allCookies is like AllCookies but takes the current time as a parameter.
func (j *Jar) allCookies(now time.Time) []*http.Cookie {
	var selected []entry
	j.mu.Lock()
	defer j.mu.Unlock()
	for _, submap := range j.entries {
		for _, e := range submap {
			if !e.Expires.After(now) {
				// Do not return expired cookies.
				continue
			}
			selected = append(selected, e)
		}
	}

	sort.Sort(byCanonicalHost{byPathLength(selected)})
	cookies := make([]*http.Cookie, len(selected))
	for i, e := range selected {
		// Note: The returned cookies do not contain sufficient
		// information to recreate the database.
		cookies[i] = &http.Cookie{
			Name:     e.Name,
			Value:    e.Value,
			Path:     e.Path,
			Domain:   e.Domain,
			Expires:  e.Expires,
			Secure:   e.Secure,
			HttpOnly: e.HttpOnly,
		}
	}

	return cookies
}

// RemoveCookie removes the cookie matching the name, domain and path
// specified by c.
func (j *Jar) RemoveCookie(c *http.Cookie) {
	j.mu.Lock()
	defer j.mu.Unlock()
	id := id(c.Domain, c.Path, c.Name)
	key := jarKey(c.Domain, j.psList)
	if e, ok := j.entries[key][id]; ok {
		e.Value = ""
		e.Expires = time.Now().Add(-1 * time.Second)
		j.entries[key][id] = e
	}
}

// merge merges all the given entries into j. More recently changed
// cookies take precedence over older ones.
func (j *Jar) merge(entries []entry) {
	for _, e := range entries {
		if e.CanonicalHost == "" {
			continue
		}
		key := jarKey(e.CanonicalHost, j.psList)
		id := e.id()
		submap := j.entries[key]
		if submap == nil {
			j.entries[key] = map[string]entry{
				id: e,
			}
			continue
		}
		oldEntry, ok := submap[id]
		if !ok || e.Updated.After(oldEntry.Updated) {
			submap[id] = e
		}
	}
}

var expiryRemovalDuration = 24 * time.Hour

// deleteExpired deletes all entries that have expired for long enough
// that we can actually expect there to be no external copies of it that
// might resurrect the dead cookie.
func (j *Jar) deleteExpired(now time.Time) {
	for tld, submap := range j.entries {
		for id, e := range submap {
			if !e.Expires.After(now) && !e.Updated.Add(expiryRemovalDuration).After(now) {
				delete(submap, id)
			}
		}
		if len(submap) == 0 {
			delete(j.entries, tld)
		}
	}
}

// RemoveAllHost removes any cookies from the jar that were set for the given host.
func (j *Jar) RemoveAllHost(host string) {
	host, err := canonicalHost(host)
	if err != nil {
		return
	}
	key := jarKey(host, j.psList)

	j.mu.Lock()
	defer j.mu.Unlock()

	expired := time.Now().Add(-1 * time.Second)
	submap := j.entries[key]
	for id, e := range submap {
		if e.CanonicalHost == host {
			// Save some space by deleting the value when the cookie
			// expires. We can't delete the cookie itself because then
			// we wouldn't know that the cookie had expired when
			// we merge with another cookie jar.
			e.Value = ""
			e.Expires = expired
			submap[id] = e
		}
	}
}

// RemoveAll removes all the cookies from the jar.
func (j *Jar) RemoveAll() {
	expired := time.Now().Add(-1 * time.Second)
	j.mu.Lock()
	defer j.mu.Unlock()
	for _, submap := range j.entries {
		for id, e := range submap {
			// Save some space by deleting the value when the cookie
			// expires. We can't delete the cookie itself because then
			// we wouldn't know that the cookie had expired when
			// we merge with another cookie jar.
			e.Value = ""
			e.Expires = expired
			submap[id] = e
		}
	}
}

// SetCookies implements the SetCookies method of the http.CookieJar interface.
//
// It does nothing if the URL's scheme is not HTTP or HTTPS.
func (j *Jar) SetCookies(u *url.URL, cookies []*http.Cookie) {
	j.setCookies(u, cookies, time.Now())
}

// setCookies is like SetCookies but takes the current time as parameter.
func (j *Jar) setCookies(u *url.URL, cookies []*http.Cookie, now time.Time) {
	if len(cookies) == 0 {
		return
	}
	if u.Scheme != "http" && u.Scheme != "https" {
		// TODO is this really correct? It might be nice to send
		// cookies to websocket connections, for example.
		return
	}
	host, err := canonicalHost(u.Host)
	if err != nil {
		return
	}
	key := jarKey(host, j.psList)
	defPath := defaultPath(u.Path)

	j.mu.Lock()
	defer j.mu.Unlock()

	submap := j.entries[key]
	for _, cookie := range cookies {
		e, err := j.newEntry(cookie, now, defPath, host)
		if err != nil {
			continue
		}
		e.CanonicalHost = host
		id := e.id()
		if submap == nil {
			submap = make(map[string]entry)
			j.entries[key] = submap
		}
		if old, ok := submap[id]; ok {
			e.Creation = old.Creation
		} else {
			e.Creation = now
		}
		e.Updated = now
		e.LastAccess = now
		submap[id] = e
	}
}

// canonicalHost strips port from host if present and returns the canonicalized
// host name.
func canonicalHost(host string) (string, error) {
	var err error
	host = strings.ToLower(host)
	if hasPort(host) {
		host, _, err = net.SplitHostPort(host)
		if err != nil {
			return "", err
		}
	}
	if strings.HasSuffix(host, ".") {
		// Strip trailing dot from fully qualified domain names.
		host = host[:len(host)-1]
	}
	return toASCII(host)
}

// hasPort reports whether host contains a port number. host may be a host
// name, an IPv4 or an IPv6 address.
func hasPort(host string) bool {
	colons := strings.Count(host, ":")
	if colons == 0 {
		return false
	}
	if colons == 1 {
		return true
	}
	return host[0] == '[' && strings.Contains(host, "]:")
}

// jarKey returns the key to use for a jar.
func jarKey(host string, psl PublicSuffixList) string {
	if isIP(host) {
		return host
	}

	var i int
	if psl == nil {
		i = strings.LastIndex(host, ".")
		if i == -1 {
			return host
		}
	} else {
		suffix := psl.PublicSuffix(host)
		if suffix == host {
			return host
		}
		i = len(host) - len(suffix)
		if i <= 0 || host[i-1] != '.' {
			// The provided public suffix list psl is broken.
			// Storing cookies under host is a safe stopgap.
			return host
		}
	}
	prevDot := strings.LastIndex(host[:i-1], ".")
	return host[prevDot+1:]
}

// isIP reports whether host is an IP address.
func isIP(host string) bool {
	return net.ParseIP(host) != nil
}

// defaultPath returns the directory part of a URL's path according to
// RFC 6265 section 5.1.4.
func defaultPath(path string) string {
	if len(path) == 0 || path[0] != '/' {
		return "/" // Path is empty or malformed.
	}

	i := strings.LastIndex(path, "/") // Path starts with "/", so i != -1.
	if i == 0 {
		return "/" // Path has the form "/abc".
	}
	return path[:i] // Path is either of form "/abc/xyz" or "/abc/xyz/".
}
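The default-path rule above can be tried in isolation. This standalone copy mirrors the function's three branches:

```go
package main

import (
	"fmt"
	"strings"
)

// defaultPath mirrors the RFC 6265 section 5.1.4 default-path rule above.
func defaultPath(path string) string {
	if len(path) == 0 || path[0] != '/' {
		return "/" // empty or malformed path
	}
	i := strings.LastIndex(path, "/") // path starts with "/", so i != -1
	if i == 0 {
		return "/" // path of the form "/abc"
	}
	return path[:i] // "/abc/xyz" and "/abc/xyz/" both truncate at the last "/"
}

func main() {
	fmt.Println(defaultPath(""))          // "/"
	fmt.Println(defaultPath("/abc"))      // "/"
	fmt.Println(defaultPath("/abc/xyz"))  // "/abc"
	fmt.Println(defaultPath("/abc/xyz/")) // "/abc/xyz"
}
```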
// newEntry creates an entry from a http.Cookie c. now is the current
// time and is compared to c.Expires to determine deletion of c. defPath
// and host are the default-path and the canonical host name of the URL
// c was received from.
//
// The returned entry should be removed if its expiry time is in the
// past. In this case, e may be incomplete, but it will be valid to call
// e.id (which depends on e's Name, Domain and Path).
//
// A malformed c.Domain will result in an error.
func (j *Jar) newEntry(c *http.Cookie, now time.Time, defPath, host string) (e entry, err error) {
	e.Name = c.Name
	if c.Path == "" || c.Path[0] != '/' {
		e.Path = defPath
	} else {
		e.Path = c.Path
	}

	e.Domain, e.HostOnly, err = j.domainAndType(host, c.Domain)
	if err != nil {
		return e, err
	}
	// MaxAge takes precedence over Expires.
	if c.MaxAge != 0 {
		e.Persistent = true
		e.Expires = now.Add(time.Duration(c.MaxAge) * time.Second)
		if c.MaxAge < 0 {
			return e, nil
		}
	} else if c.Expires.IsZero() {
		e.Expires = endOfTime
	} else {
		e.Persistent = true
		e.Expires = c.Expires
		if !c.Expires.After(now) {
			return e, nil
		}
	}

	e.Value = c.Value
	e.Secure = c.Secure
	e.HttpOnly = c.HttpOnly

	return e, nil
}

var (
	errIllegalDomain   = errors.New("cookiejar: illegal cookie domain attribute")
	errMalformedDomain = errors.New("cookiejar: malformed cookie domain attribute")
	errNoHostname      = errors.New("cookiejar: no host name available (IP only)")
)

// endOfTime is the time when session (non-persistent) cookies expire.
// This instant is representable in most date/time formats (not just
// Go's time.Time) and should be far enough in the future.
var endOfTime = time.Date(9999, 12, 31, 23, 59, 59, 0, time.UTC)

// domainAndType determines the cookie's domain and hostOnly attribute.
func (j *Jar) domainAndType(host, domain string) (string, bool, error) {
	if domain == "" {
		// No domain attribute in the SetCookie header indicates a
		// host cookie.
		return host, true, nil
	}

	if isIP(host) {
		// According to RFC 6265 domain-matching includes not being
		// an IP address.
		// TODO: This might be relaxed as in common browsers.
		return "", false, errNoHostname
	}

	// From here on: If the cookie is valid, it is a domain cookie (with
	// the one exception of a public suffix below).
	// See RFC 6265 section 5.2.3.
	if domain[0] == '.' {
		domain = domain[1:]
	}

	if len(domain) == 0 || domain[0] == '.' {
		// Received either "Domain=." or "Domain=..some.thing",
		// both are illegal.
		return "", false, errMalformedDomain
	}
	domain = strings.ToLower(domain)

	if domain[len(domain)-1] == '.' {
		// We received stuff like "Domain=www.example.com.".
		// Browsers do handle such stuff (actually differently) but
		// RFC 6265 seems to be clear here (e.g. section 4.1.2.3) in
		// requiring a reject. 4.1.2.3 is not normative, but
		// "Domain Matching" (5.1.3) and "Canonicalized Host Names"
		// (5.1.2) are.
		return "", false, errMalformedDomain
	}

	// See RFC 6265 section 5.3 #5.
	if j.psList != nil {
		if ps := j.psList.PublicSuffix(domain); ps != "" && !hasDotSuffix(domain, ps) {
			if host == domain {
				// This is the one exception in which a cookie
				// with a domain attribute is a host cookie.
				return host, true, nil
			}
			return "", false, errIllegalDomain
		}
	}

	// The domain must domain-match host: www.mycompany.com cannot
	// set cookies for .ourcompetitors.com.
	if host != domain && !hasDotSuffix(host, domain) {
		return "", false, errIllegalDomain
	}

	return domain, false, nil
}

// DefaultCookieFile returns the default cookie file to use
// for persisting cookie data.
// The following names will be used in descending order of preference:
//   - the value of the $GOCOOKIES environment variable.
//   - $HOME/.go-cookies
func DefaultCookieFile() string {
	if f := os.Getenv("GOCOOKIES"); f != "" {
		return f
	}
	return filepath.Join(homeDir(), ".go-cookies")
}
@@ -1,159 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package cookiejar

// This file implements the Punycode algorithm from RFC 3492.

import (
	"fmt"
	"strings"
	"unicode/utf8"
)

// These parameter values are specified in section 5.
//
// All computation is done with int32s, so that overflow behavior is identical
// regardless of whether int is 32-bit or 64-bit.
const (
	base        int32 = 36
	damp        int32 = 700
	initialBias int32 = 72
	initialN    int32 = 128
	skew        int32 = 38
	tmax        int32 = 26
	tmin        int32 = 1
)

// encode encodes a string as specified in section 6.3 and prepends prefix to
// the result.
//
// The "while h < length(input)" line in the specification becomes "for
// remaining != 0" in the Go code, because len(s) in Go is in bytes, not runes.
func encode(prefix, s string) (string, error) {
	output := make([]byte, len(prefix), len(prefix)+1+2*len(s))
	copy(output, prefix)
	delta, n, bias := int32(0), initialN, initialBias
	b, remaining := int32(0), int32(0)
	for _, r := range s {
		if r < 0x80 {
			b++
			output = append(output, byte(r))
		} else {
			remaining++
		}
	}
	h := b
	if b > 0 {
		output = append(output, '-')
	}
	for remaining != 0 {
		m := int32(0x7fffffff)
		for _, r := range s {
			if m > r && r >= n {
				m = r
			}
		}
		delta += (m - n) * (h + 1)
		if delta < 0 {
			return "", fmt.Errorf("cookiejar: invalid label %q", s)
		}
		n = m
		for _, r := range s {
			if r < n {
				delta++
				if delta < 0 {
					return "", fmt.Errorf("cookiejar: invalid label %q", s)
				}
				continue
			}
			if r > n {
				continue
			}
			q := delta
			for k := base; ; k += base {
				t := k - bias
				if t < tmin {
					t = tmin
				} else if t > tmax {
					t = tmax
				}
				if q < t {
					break
				}
				output = append(output, encodeDigit(t+(q-t)%(base-t)))
				q = (q - t) / (base - t)
			}
			output = append(output, encodeDigit(q))
			bias = adapt(delta, h+1, h == b)
			delta = 0
			h++
			remaining--
		}
		delta++
		n++
	}
	return string(output), nil
}

func encodeDigit(digit int32) byte {
	switch {
	case 0 <= digit && digit < 26:
		return byte(digit + 'a')
	case 26 <= digit && digit < 36:
		return byte(digit + ('0' - 26))
	}
	panic("cookiejar: internal error in punycode encoding")
}

// adapt is the bias adaptation function specified in section 6.1.
func adapt(delta, numPoints int32, firstTime bool) int32 {
	if firstTime {
		delta /= damp
	} else {
		delta /= 2
	}
	delta += delta / numPoints
	k := int32(0)
	for delta > ((base-tmin)*tmax)/2 {
		delta /= base - tmin
		k += base
	}
	return k + (base-tmin+1)*delta/(delta+skew)
}

// Strictly speaking, the remaining code below deals with IDNA (RFC 5890 and
// friends) and not Punycode (RFC 3492) per se.

// acePrefix is the ASCII Compatible Encoding prefix.
const acePrefix = "xn--"

// toASCII converts a domain or domain label to its ASCII form. For example,
// toASCII("bücher.example.com") is "xn--bcher-kva.example.com", and
// toASCII("golang") is "golang".
func toASCII(s string) (string, error) {
	if ascii(s) {
		return s, nil
	}
	labels := strings.Split(s, ".")
	for i, label := range labels {
		if !ascii(label) {
			a, err := encode(acePrefix, label)
			if err != nil {
				return "", err
			}
			labels[i] = a
		}
	}
	return strings.Join(labels, "."), nil
}

func ascii(s string) bool {
	for i := 0; i < len(s); i++ {
		if s[i] >= utf8.RuneSelf {
			return false
		}
	}
	return true
}
@@ -1,188 +0,0 @@
|
||||
// Copyright 2015 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package cookiejar
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"io"
|
||||
"log"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"sort"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"gopkg.in/retry.v1"
|
||||
|
||||
filelock "github.com/juju/go4/lock"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
// Save saves the cookies to the persistent cookie file.
|
||||
// Before the file is written, it reads any cookies that
|
||||
// have been stored from it and merges them into j.
|
||||
func (j *Jar) Save() error {
|
||||
if j.filename == "" {
|
||||
return nil
|
||||
}
|
||||
return j.save(time.Now())
|
||||
}
|
||||
|
||||
// MarshalJSON implements json.Marshaler by encoding all persistent cookies
|
||||
// currently in the jar.
|
||||
func (j *Jar) MarshalJSON() ([]byte, error) {
|
||||
j.mu.Lock()
|
||||
defer j.mu.Unlock()
|
||||
// Marshaling entries can never fail.
|
||||
data, _ := json.Marshal(j.allPersistentEntries())
|
||||
return data, nil
|
||||
}
|
||||
|
||||
// save is like Save but takes the current time as a parameter.
|
||||
func (j *Jar) save(now time.Time) error {
|
||||
locked, err := lockFile(lockFileName(j.filename))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer locked.Close()
|
||||
f, err := os.OpenFile(j.filename, os.O_RDWR|os.O_CREATE, 0o600)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer f.Close()
|
||||
// TODO optimization: if the file hasn't changed since we
|
||||
// loaded it, don't bother with the merge step.
|
||||
|
||||
j.mu.Lock()
|
||||
defer j.mu.Unlock()
|
||||
if err := j.mergeFrom(f); err != nil {
|
||||
// The cookie file is probably corrupt.
|
||||
log.Printf("cannot read cookie file to merge it; ignoring it: %v", err)
|
||||
}
|
||||
j.deleteExpired(now)
|
||||
if err := f.Truncate(0); err != nil {
|
||||
return errors.Wrap(err, "cannot truncate file")
|
||||
}
|
||||
if _, err := f.Seek(0, 0); err != nil {
|
||||
return err
|
||||
}
|
||||
return j.writeTo(f)
|
||||
}
|
||||
|
||||
// load loads the cookies from j.filename. If the file does not exist,
|
||||
// no error will be returned and no cookies will be loaded.
|
||||
func (j *Jar) load() error {
|
||||
if _, err := os.Stat(filepath.Dir(j.filename)); os.IsNotExist(err) {
|
||||
// The directory that we'll store the cookie jar
|
||||
// in doesn't exist, so don't bother trying
|
||||
// to acquire the lock.
|
||||
return nil
|
||||
}
|
||||
locked, err := lockFile(lockFileName(j.filename))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer locked.Close()
|
||||
f, err := os.Open(j.filename)
|
||||
if err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
return nil
|
||||
}
|
||||
return err
|
||||
}
|
||||
defer f.Close()
|
||||
if err := j.mergeFrom(f); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}

// mergeFrom reads all the cookies from r and stores them in the Jar.
func (j *Jar) mergeFrom(r io.Reader) error {
	decoder := json.NewDecoder(r)
	// Cope with the old cookiejar format by just discarding
	// cookies, but still return an error if it's invalid JSON.
	var data json.RawMessage
	if err := decoder.Decode(&data); err != nil {
		if err == io.EOF {
			// Empty file.
			return nil
		}
		return err
	}
	var entries []entry
	if err := json.Unmarshal(data, &entries); err != nil {
		log.Printf("warning: discarding cookies in invalid format (error: %v)", err)
		return nil
	}
	j.merge(entries)
	return nil
}

// writeTo writes all the cookies in the jar to w
// as a JSON array.
func (j *Jar) writeTo(w io.Writer) error {
	encoder := json.NewEncoder(w)
	entries := j.allPersistentEntries()
	if err := encoder.Encode(entries); err != nil {
		return err
	}
	return nil
}

// allPersistentEntries returns all the entries in the jar, sorted primarily
// by canonical host name and secondarily by path length.
func (j *Jar) allPersistentEntries() []entry {
	var entries []entry
	for _, submap := range j.entries {
		for _, e := range submap {
			if e.Persistent {
				entries = append(entries, e)
			}
		}
	}
	sort.Sort(byCanonicalHost{entries})
	return entries
}

// KVData returns the persistent cookies as a name-to-value map; names are
// lowercased, so names that differ only in case collapse to one key.
func (j *Jar) KVData() map[string]string {
	pairs := make(map[string]string)

	entries := j.allPersistentEntries()
	if len(entries) == 0 {
		return pairs
	}

	for _, entry := range entries {
		pairs[strings.ToLower(entry.Name)] = entry.Value
	}

	return pairs
}

// lockFileName returns the name of the lock file associated with
// the given path.
func lockFileName(path string) string {
	return path + ".lock"
}

// attempt retries for up to 3s total, backing off exponentially
// from 100µs (factor 1.5) up to a 100ms cap between attempts.
var attempt = retry.LimitTime(3*time.Second, retry.Exponential{
	Initial:  100 * time.Microsecond,
	Factor:   1.5,
	MaxDelay: 100 * time.Millisecond,
})

// lockFile acquires the file lock at path, retrying on the schedule
// defined by attempt; the caller must Close the returned locker.
func lockFile(path string) (io.Closer, error) {
	for a := retry.Start(attempt, nil); a.Next(); {
		locker, err := filelock.Lock(path)
		if err == nil {
			return locker, nil
		}
		if !a.More() {
			return nil, errors.Wrap(err, "file locked for too long; giving up")
		}
	}
	panic("unreachable")
}

@@ -1,299 +0,0 @@
# Tracing Provider (Jaeger / OpenTracing)

This provider builds distributed tracing on jaeger-client-go + OpenTracing; spans can be reported through a local Jaeger agent or a collector.

## Configuration (config.toml)

```toml
[Tracing]
# Basics
Name = "my-service"
Disabled = false            # set to true to disable tracing (no tracer is initialized)
Gen128Bit = true            # generate 128-bit trace IDs
ZipkinSharedRPCSpan = true  # share RPC spans Zipkin-style (matches the old behavior)
RPCMetrics = false          # enable Jaeger's built-in RPC metrics

# Sampler (const|probabilistic|ratelimiting|remote)
Sampler_Type = "const"
Sampler_Param = 1.0            # const: 1 = sample everything; probabilistic: probability; ratelimiting: spans per second
Sampler_SamplingServerURL = "" # address of the remote sampling server
Sampler_MaxOperations = 0
Sampler_RefreshIntervalSec = 0

# Reporter (configure either or both; Jaeger picks the best available)
Reporter_LocalAgentHostPort = "127.0.0.1:6831"                   # UDP agent
Reporter_CollectorEndpoint = "http://127.0.0.1:14268/api/traces" # HTTP collector
Reporter_LogSpans = true
Reporter_BufferFlushMs = 100
Reporter_QueueSize = 1000

# Process-level tags (e.g. to label service instances)
[Tracing.Tags]
version = "1.0.0"
zone = "az1"
```

> All of the fields above have defaults; leaving them unset keeps the previous behavior:
> - Sampler: const=1
> - Reporter: LogSpans=true, BufferFlushMs=100, QueueSize=1000
> - ZipkinSharedRPCSpan: defaults to true (matches the original implementation)
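
The sampler semantics can be sketched with a tiny standalone decision function (an illustration only, not jaeger-client-go code; the `decide` helper and its draws are hypothetical):

```go
package main

import "fmt"

// decide illustrates the Sampler_Type/Sampler_Param semantics described
// above; r stands in for a uniform random draw in [0, 1).
func decide(samplerType string, param, r float64) bool {
	switch samplerType {
	case "const":
		// param 1 keeps every trace, param 0 keeps none.
		return param >= 1
	case "probabilistic":
		// param is the probability of keeping any given trace.
		return r < param
	default:
		return false
	}
}

func main() {
	fmt.Println(decide("const", 1, 0))              // true: sample everything
	fmt.Println(decide("probabilistic", 0.1, 0.05)) // true for this draw
	fmt.Println(decide("probabilistic", 0.1, 0.5))  // false for this draw
}
```

The ratelimiting and remote samplers add time-based state and server-driven updates on top of this and are not shown here.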

## Enabling it in an application

- Register the default provider:

```go
import (
	"test/providers/tracing"

	"go.ipao.vip/atom/container"
)

func providers() container.Providers {
	return container.Providers{
		tracing.DefaultProvider(),
		// ... other providers
	}
}
```

- Or call Provide directly:

```go
tracing.Provide(/* optionally opt.Prefix("Tracing"), etc. */)
```

## Running the tracing backend

The examples below use Jaeger (the provider's default). For local development the all-in-one container is the quickest option; in production, prefer separate agent/collector/storage components.

### Option A: Jaeger all-in-one (fastest locally)

```bash
# 16686: Web UI; 14268: collector HTTP; 6831/udp: agent UDP (thrift compact)
docker run --rm -it \
  -p 16686:16686 \
  -p 14268:14268 \
  -p 6831:6831/udp \
  --name jaeger \
  jaegertracing/all-in-one:1.57
```

Matching configuration (config.toml):

```toml
[Tracing]
Reporter_LocalAgentHostPort = "127.0.0.1:6831"
Reporter_CollectorEndpoint = "http://127.0.0.1:14268/api/traces"
```

Open the UI at http://localhost:16686

### Option B: Jaeger agent + collector (recommended for integration environments)

A trimmed-down docker-compose example:

```yaml
services:
  jaeger-agent:
    image: jaegertracing/jaeger-agent:1.57
    command: ["--reporter.grpc.host-port=jaeger-collector:14250"]
    ports:
      - "6831:6831/udp"
    depends_on: [jaeger-collector]

  jaeger-collector:
    image: jaegertracing/jaeger-collector:1.57
    environment:
      - SPAN_STORAGE_TYPE=memory
    ports:
      - "14268:14268" # HTTP collector
      - "14250:14250" # gRPC collector

  jaeger-ui:
    image: jaegertracing/jaeger-query:1.57
    environment:
      - SPAN_STORAGE_TYPE=memory
      - JAEGER_QUERY_BASE_PATH=/
    ports:
      - "16686:16686"
    depends_on: [jaeger-collector]
```

The application-side configuration is the same as in option A:

```toml
[Tracing]
Reporter_LocalAgentHostPort = "127.0.0.1:6831"
Reporter_CollectorEndpoint = "http://127.0.0.1:14268/api/traces"
```

> Note: the client prefers sending spans to the local agent (UDP 6831), which forwards them to the collector; it can also report directly to the collector over HTTP (14268).

## Getting and using the tracer

After initialization the provider installs the global tracer. The following patterns are recommended in business code:

### 1) Create/extract the root span at the entry point

```go
import (
	opentracing "github.com/opentracing/opentracing-go"
)

func handler(ctx context.Context) error {
	// If the upstream passed a trace context, this creates a child span from it
	span, ctx := opentracing.StartSpanFromContext(ctx, "handler")
	defer span.Finish()

	// Record tags and logs
	span.SetTag("component", "http")
	span.LogKV("event", "start")

	// ... call sub-flows
	if err := doWork(ctx); err != nil {
		span.SetTag("error", true)
		span.LogKV("event", "error", "error.object", err)
		return err
	}
	span.LogKV("event", "done")
	return nil
}
```

### 2) Create child spans in sub-flows (context threaded through)

```go
func doWork(ctx context.Context) error {
	span, ctx := opentracing.StartSpanFromContext(ctx, "doWork")
	defer span.Finish()

	// business tags
	span.SetTag("db.table", "users")
	// ...
	return nil
}
```

### 3) Inject the trace context into outbound HTTP requests

Injecting with the req provider (or a plain http.Client):

```go
import (
	"net/http"

	opentracing "github.com/opentracing/opentracing-go"
	"github.com/opentracing/opentracing-go/ext"
)

func callDownstream(ctx context.Context, c *Client) error {
	// create the client-side span
	span, _ := opentracing.StartSpanFromContext(ctx, "downstream.call")
	defer span.Finish()
	ext.SpanKindRPCClient.Set(span)
	span.SetTag("peer.service", "downstream-svc")

	// issue the request with the req client (BaseURL supported)
	r := c.RWithCtx(ctx)
	// manual injection (optional):
	// headers := http.Header{}
	// _ = opentracing.GlobalTracer().Inject(span.Context(), opentracing.HTTPHeaders, opentracing.HTTPHeadersCarrier(headers))
	// r.SetHeaders(map[string]string{"traceparent": headers.Get("traceparent")}) // depends on the propagation format in use

	resp, err := r.Get("/api/ping")
	if err != nil {
		span.SetTag("error", true)
		span.LogKV("event", "error", "error.object", err)
		return err
	}
	_ = resp // use the response
	return nil
}
```

With a plain http.Client:

```go
func injectHTTP(ctx context.Context, req *http.Request) {
	span := opentracing.SpanFromContext(ctx)
	if span == nil {
		return
	}
	_ = opentracing.GlobalTracer().Inject(
		span.Context(),
		opentracing.HTTPHeaders,
		opentracing.HTTPHeadersCarrier(req.Header),
	)
}
```

### 4) Extract the trace context from inbound HTTP/gRPC

- HTTP (example):

```go
func extractHTTP(r *http.Request) context.Context {
	wireContext, _ := opentracing.GlobalTracer().Extract(
		opentracing.HTTPHeaders,
		opentracing.HTTPHeadersCarrier(r.Header),
	)
	var span opentracing.Span
	if wireContext != nil {
		span = opentracing.StartSpan("http.server", ext.RPCServerOption(wireContext))
	} else {
		span = opentracing.StartSpan("http.server")
	}
	return opentracing.ContextWithSpan(r.Context(), span)
}
```

- gRPC: configure interceptors or a StatsHandler in providers/grpc for automatic extract/inject, so business code doesn't have to do it by hand (see `providers/grpc/options.md`).

### 5) Baggage (key/value pairs carried across services)

```go
span, ctx := opentracing.StartSpanFromContext(ctx, "op")
defer span.Finish()
span.SetBaggageItem("tenant_id", "t-001")
// downstream services can read it via span.BaggageItem("tenant_id")
```

### 6) Getting the Tracer / Closer via DI

Although the tracer is installed globally, it can also be injected:

```go
type deps struct {
	dig.In
	Tracer opentracing.Tracer
}

func useTracer(d deps) {
	span := d.Tracer.StartSpan("manual-span")
	defer span.Finish()
}
```

> The provider registers a graceful-shutdown hook, so `closer.Close()` is called automatically when the process exits; to close it manually, inject `io.Closer` as well.

## HTTP/gRPC integration suggestions

- gRPC: add interceptors (or a StatsHandler) on the gRPC provider side to inject and propagate context automatically.
- HTTP (Fiber): in middleware, read the trace/span from the headers/context, create a child span, and write it back to the response headers.

> The gRPC provider in this repository already supports interceptor aggregation; see providers/grpc/options.md for how to add a tracing interceptor or StatsHandler.

Further suggestions:
- Consistent behavior: use the StatsHandler/interceptor approach for gRPC and the middleware approach for HTTP, keeping upstream and downstream behavior as consistent as possible.
- Sampling strategy: in production, tune via remote or probabilistic sampling; hot endpoints can set extra marks in business code (e.g. force sampling on errors).
- Working with OTel: to coexist with or migrate to OpenTelemetry, you can use the OTel StatsHandler on the gRPC side first while keeping this provider responsible for Jaeger reporting.

## Debugging and troubleshooting

- Nothing is reported: confirm that `Reporter_LocalAgentHostPort` or `Reporter_CollectorEndpoint` is reachable.
- Spans are not sampled: check that `Sampler_Type`/`Sampler_Param` match what you expect.
- Trace ID length: enable `Gen128Bit` if you need to align with other systems.
- Turning tracing off: set `Disabled=true` and the provider will not initialize a tracer.

## Changelog

- Uses the JSON log format for easier structured collection.
- Adds advanced configuration switches (sampler, reporter, tags, 128-bit IDs, disable, etc.) while keeping default behavior compatible.
- Registers a graceful-shutdown hook to flush and close the tracer before the process exits.

@@ -1,87 +0,0 @@
package tracing

import (
	"github.com/sirupsen/logrus"
	"go.ipao.vip/atom/container"
	"go.ipao.vip/atom/opt"
)

const DefaultPrefix = "Tracing"

func DefaultProvider() container.ProviderContainer {
	return container.ProviderContainer{
		Provider: Provide,
		Options: []opt.Option{
			opt.Prefix(DefaultPrefix),
		},
	}
}

// jaegerLogrus adapts a *logrus.Logger to the jaeger.Logger interface.
type jaegerLogrus struct {
	logger *logrus.Logger
}

func (l *jaegerLogrus) Error(msg string) {
	l.logger.Error(msg)
}

func (l *jaegerLogrus) Infof(msg string, args ...interface{}) {
	l.logger.Infof(msg, args...)
}

type Config struct {
	Name                        string
	Reporter_LocalAgentHostPort string // default: "127.0.0.1:6831"
	Reporter_CollectorEndpoint  string // default: "http://127.0.0.1:14268/api/traces"
	Disabled                    bool
	Gen128Bit                   bool
	ZipkinSharedRPCSpan         bool
	RPCMetrics                  bool

	// Sampler configuration
	Sampler_Type               string
	Sampler_Param              float64
	Sampler_SamplingServerURL  string
	Sampler_MaxOperations      int
	Sampler_RefreshIntervalSec uint

	// Reporter configuration
	Reporter_LogSpans      *bool
	Reporter_BufferFlushMs uint
	Reporter_QueueSize     int

	// Process tags
	Tags map[string]string
}

// format fills in defaults for any zero-valued fields.
func (c *Config) format() {
	if c.Reporter_LocalAgentHostPort == "" {
		c.Reporter_LocalAgentHostPort = "127.0.0.1:6831"
	}

	if c.Reporter_CollectorEndpoint == "" {
		c.Reporter_CollectorEndpoint = "http://127.0.0.1:14268/api/traces"
	}

	if c.Name == "" {
		c.Name = "default"
	}

	if c.Sampler_Type == "" {
		c.Sampler_Type = "const"
	}
	// Note: an explicit Sampler_Param of 0 is indistinguishable from
	// "unset" here and is replaced by the default of 1.
	if c.Sampler_Param == 0 {
		c.Sampler_Param = 1
	}
	if c.Reporter_BufferFlushMs == 0 {
		c.Reporter_BufferFlushMs = 100
	}
	if c.Reporter_QueueSize == 0 {
		c.Reporter_QueueSize = 1000
	}
	if c.Reporter_LogSpans == nil {
		b := true
		c.Reporter_LogSpans = &b
	}
}

@@ -1,75 +0,0 @@
package tracing

import (
	"time"

	opentracing "github.com/opentracing/opentracing-go"
	"github.com/sirupsen/logrus"
	config "github.com/uber/jaeger-client-go/config"
	"go.ipao.vip/atom/container"
	"go.ipao.vip/atom/opt"
)

func Provide(opts ...opt.Option) error {
	o := opt.New(opts...)
	var conf Config
	if err := o.UnmarshalConfig(&conf); err != nil {
		return err
	}
	conf.format()
	return container.Container.Provide(func() (opentracing.Tracer, error) {
		log := logrus.New()
		log.SetFormatter(&logrus.JSONFormatter{TimestampFormat: time.RFC3339})

		cfg := &config.Configuration{
			ServiceName: conf.Name,
			Disabled:    conf.Disabled,
			Sampler: &config.SamplerConfig{
				Type:                    conf.Sampler_Type,
				Param:                   conf.Sampler_Param,
				SamplingServerURL:       conf.Sampler_SamplingServerURL,
				MaxOperations:           conf.Sampler_MaxOperations,
				SamplingRefreshInterval: time.Duration(conf.Sampler_RefreshIntervalSec) * time.Second,
			},
			Reporter: &config.ReporterConfig{
				// format() guarantees Reporter_LogSpans is non-nil.
				LogSpans:            *conf.Reporter_LogSpans,
				LocalAgentHostPort:  conf.Reporter_LocalAgentHostPort,
				CollectorEndpoint:   conf.Reporter_CollectorEndpoint,
				BufferFlushInterval: time.Duration(conf.Reporter_BufferFlushMs) * time.Millisecond,
				QueueSize:           conf.Reporter_QueueSize,
			},
			RPCMetrics: conf.RPCMetrics,
			Gen128Bit:  conf.Gen128Bit,
		}

		if len(conf.Tags) > 0 {
			cfg.Tags = make([]opentracing.Tag, 0, len(conf.Tags))
			for k, v := range conf.Tags {
				cfg.Tags = append(cfg.Tags, opentracing.Tag{Key: k, Value: v})
			}
		}

		jLogger := &jaegerLogrus{logger: log}
		// NOTE: a plain bool cannot distinguish "unset" from an explicit
		// false, so ZipkinSharedRPCSpan is effectively always true here
		// (matching the old behavior). Use a *bool in Config if opting
		// out must become possible.
		tracer, closer, err := cfg.NewTracer(
			config.Logger(jLogger),
			config.ZipkinSharedRPCSpan(true),
		)
		if err != nil {
			// Fall back to a no-op tracer rather than failing startup.
			log.Errorf("failed to initialize Jaeger: %v", err)
			return opentracing.NoopTracer{}, nil
		}
		opentracing.SetGlobalTracer(tracer)
		container.AddCloseAble(func() { _ = closer.Close() })

		return tracer, nil
	}, o.DiOptions()...)
}