乐达

admin (@admin)

Posts

  • Docker deployment: automatic log collection
    admin

    === Container Status ===
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    bc54e3be67b7 docker-sz.loda.net.cn/docker-pos/rider/api:2855.2026.0112.59831 "dotnet Loda.Distrib…" 18 seconds ago Up 18 seconds 0.0.0.0:8071->80/tcp distribution-centre-api-test
    44b32ace9727 docker-sz.loda.net.cn/docker-pos/api:2855.2026.0112.59828 "dotnet Loda.Abp.Sto…" 7 minutes ago Up 7 minutes 0.0.0.0:8041->8080/tcp pos-api-test
    48ca6fd5bef4 docker-sz.loda.net.cn/docker-pos/blazor:2855.2026.0110.59707 "dotnet Loda.Abp.Sto…" 2 days ago Up 2 days 0.0.0.0:8031->8080/tcp pos-blazor-test
    5a2278b77870 docker-sz.loda.net.cn/docker-pos/auth:2855.2026.0109.59379 "dotnet Loda.Abp.Sto…" 3 days ago Up 3 days 0.0.0.0:8051->8080/tcp pos-auth-test
    3a7636d4a907 docker-hk.loda.net.cn/redis:7-alpine "docker-entrypoint.s…" 6 weeks ago Up 7 days 0.0.0.0:56379->6379/tcp, [::]:56379->6379/tcp redis-56379
    a417ffcc7aa6 docker-hk.loda.net.cn/redis:7-alpine "docker-entrypoint.s…" 6 weeks ago Up 7 days 0.0.0.0:56380->6379/tcp, [::]:56380->6379/tcp redis-56380
    029b4cfc0ab9 rewardplatformweb-reward "dotnet RewardPlatfo…" 4 months ago Up 7 days (unhealthy) 0.0.0.0:44396->80/tcp, [::]:44396->80/tcp reward

    === Container Resources ===
    CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
    bc54e3be67b7 distribution-centre-api-test 0.78% 184.2MiB / 6.976GiB 2.58% 62.5kB / 162kB 8.19kB / 53.2kB 49
    44b32ace9727 pos-api-test 0.06% 435.5MiB / 6.976GiB 6.10% 590kB / 703kB 4.1kB / 152kB 51
    48ca6fd5bef4 pos-blazor-test 0.21% 249.2MiB / 6.976GiB 3.49% 6.5MB / 8.44MB 1.02MB / 229kB 33
    5a2278b77870 pos-auth-test 0.24% 433.4MiB / 6.976GiB 6.07% 12.2MB / 20.3MB 8.19kB / 1.13MB 33
    3a7636d4a907 redis-56379 0.47% 3.57MiB / 6.976GiB 0.05% 52.7kB / 25.3kB 8.97MB / 49.2kB 6
    a417ffcc7aa6 redis-56380 0.51% 19.94MiB / 6.976GiB 0.28% 22.9kB / 7.06kB 17.7MB / 0B 6
    029b4cfc0ab9 reward 0.05% 393MiB / 6.976GiB 5.50% 941kB / 22.3MB 160MB / 4.1kB 22
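The resource table is plain `docker stats --no-stream` output. As a hypothetical triage step (the `stats.txt` file name and the 5% threshold are illustrative assumptions, not from the post), an awk one-liner can flag the heavier containers; MEM % is column 7 once the header row is skipped:

```shell
# Flag containers using more than 5% of host memory from a saved
# `docker stats --no-stream` table (column 7 is MEM %; NR > 1 skips the
# header row). File name and threshold are illustrative.
cat > stats.txt <<'EOF'
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
44b32ace9727 pos-api-test 0.06% 435.5MiB / 6.976GiB 6.10% 590kB / 703kB 4.1kB / 152kB 51
3a7636d4a907 redis-56379 0.47% 3.57MiB / 6.976GiB 0.05% 52.7kB / 25.3kB 8.97MB / 49.2kB 6
EOF
awk 'NR > 1 && $7 + 0 > 5 { print $2, $7 }' stats.txt
# prints: pos-api-test 6.10%
```

Against the full table above this would surface pos-api-test (6.10%), pos-auth-test (6.07%), and reward (5.50%).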

    === Container Details ===
    [
    {
    "Id": "bc54e3be67b7280e7666edc5f7ded99d76211ab947c5c52ad3adf77b826b0bb9",
    "Created": "2026-01-12T11:32:44.399418948Z",
    "Path": "dotnet",
    "Args": [
    "Loda.DistributionCentre.Web.Host.dll"
    ],
    "State": {
    "Status": "running",
    "Running": true,
    "Paused": false,
    "Restarting": false,
    "OOMKilled": false,
    "Dead": false,
    "Pid": 749357,
    "ExitCode": 0,
    "Error": "",
    "StartedAt": "2026-01-12T11:32:44.464254948Z",
    "FinishedAt": "0001-01-01T00:00:00Z"
    },
    "Image": "sha256:cc436d5e068a5daa68a27a4ebf4e83df30818d758963e7743beb44c03cfcce1d",
    "ResolvConfPath": "/var/lib/docker/containers/bc54e3be67b7280e7666edc5f7ded99d76211ab947c5c52ad3adf77b826b0bb9/resolv.conf",
    "HostnamePath": "/var/lib/docker/containers/bc54e3be67b7280e7666edc5f7ded99d76211ab947c5c52ad3adf77b826b0bb9/hostname",
    "HostsPath": "/var/lib/docker/containers/bc54e3be67b7280e7666edc5f7ded99d76211ab947c5c52ad3adf77b826b0bb9/hosts",
    "LogPath": "/var/lib/docker/containers/bc54e3be67b7280e7666edc5f7ded99d76211ab947c5c52ad3adf77b826b0bb9/bc54e3be67b7280e7666edc5f7ded99d76211ab947c5c52ad3adf77b826b0bb9-json.log",
    "Name": "/distribution-centre-api-test",
    "RestartCount": 0,
    "Driver": "overlay2",
    "Platform": "linux",
    "MountLabel": "",
    "ProcessLabel": "",
    "AppArmorProfile": "docker-default",
    "ExecIDs": null,
    "HostConfig": {
    "Binds": [
    "/srv/distribution-centre/logs/api:/app/Logs"
    ],
    "ContainerIDFile": "",
    "LogConfig": {
    "Type": "json-file",
    "Config": {}
    },
    "NetworkMode": "bridge",
    "PortBindings": {
    "80/tcp": [
    {
    "HostIp": "0.0.0.0",
    "HostPort": "8071"
    }
    ]
    },
    "RestartPolicy": {
    "Name": "unless-stopped",
    "MaximumRetryCount": 0
    },
    "AutoRemove": false,
    "VolumeDriver": "",
    "VolumesFrom": null,
    "ConsoleSize": [
    0,
    0
    ],
    "CapAdd": null,
    "CapDrop": null,
    "CgroupnsMode": "private",
    "Dns": [],
    "DnsOptions": [],
    "DnsSearch": [],
    "ExtraHosts": null,
    "GroupAdd": null,
    "IpcMode": "private",
    "Cgroup": "",
    "Links": null,
    "OomScoreAdj": 0,
    "PidMode": "",
    "Privileged": false,
    "PublishAllPorts": false,
    "ReadonlyRootfs": false,
    "SecurityOpt": null,
    "UTSMode": "",
    "UsernsMode": "",
    "ShmSize": 67108864,
    "Runtime": "runc",
    "Isolation": "",
    "CpuShares": 0,
    "Memory": 0,
    "NanoCpus": 0,
    "CgroupParent": "",
    "BlkioWeight": 0,
    "BlkioWeightDevice": [],
    "BlkioDeviceReadBps": [],
    "BlkioDeviceWriteBps": [],
    "BlkioDeviceReadIOps": [],
    "BlkioDeviceWriteIOps": [],
    "CpuPeriod": 0,
    "CpuQuota": 0,
    "CpuRealtimePeriod": 0,
    "CpuRealtimeRuntime": 0,
    "CpusetCpus": "",
    "CpusetMems": "",
    "Devices": [],
    "DeviceCgroupRules": null,
    "DeviceRequests": null,
    "MemoryReservation": 0,
    "MemorySwap": 0,
    "MemorySwappiness": null,
    "OomKillDisable": null,
    "PidsLimit": null,
    "Ulimits": [],
    "CpuCount": 0,
    "CpuPercent": 0,
    "IOMaximumIOps": 0,
    "IOMaximumBandwidth": 0,
    "MaskedPaths": [
    "/proc/asound",
    "/proc/acpi",
    "/proc/interrupts",
    "/proc/kcore",
    "/proc/keys",
    "/proc/latency_stats",
    "/proc/timer_list",
    "/proc/timer_stats",
    "/proc/sched_debug",
    "/proc/scsi",
    "/sys/firmware",
    "/sys/devices/virtual/powercap"
    ],
    "ReadonlyPaths": [
    "/proc/bus",
    "/proc/fs",
    "/proc/irq",
    "/proc/sys",
    "/proc/sysrq-trigger"
    ]
    },
    "GraphDriver": {
    "Data": {
    "ID": "bc54e3be67b7280e7666edc5f7ded99d76211ab947c5c52ad3adf77b826b0bb9",
    "LowerDir": "/var/lib/docker/overlay2/405cb8c9aa1666804f2c182a7215306363f95f8d2f713e7f5eb755b7d13bc493-init/diff:/var/lib/docker/overlay2/082b1a06e0b75f5906735e830c5b02bad10909e873d21bb4d64095097b071aba/diff:/var/lib/docker/overlay2/3bca4a900387a94b7d43166ad48c413534a07a761e583b1118a2ceff02d66d93/diff:/var/lib/docker/overlay2/b88e36f6375c14437e305a81b2c6512b409c095142d8b03a715fc54bdbc14ca5/diff:/var/lib/docker/overlay2/236a2b85bf0d62b68bf0b644f1f9b142e1edf555ba7ccb478ac501e8b5e03877/diff:/var/lib/docker/overlay2/167a94024e505e7a252fb6173fc4633116b3af9380a28370b16ec6719b7fa3d0/diff:/var/lib/docker/overlay2/318fb2a62244f9fa3144f5a83eaf6373410df98c6b4112b4fc7d247635f7a1ea/diff:/var/lib/docker/overlay2/b32418fcb0903e993b4b79fbb53a09d3436d0710c15992f1c0325b2be49b49e8/diff:/var/lib/docker/overlay2/236684bbdd8e7e3a0e19cc47b5bcf13b3302e317ca9a2912532a46d7d8808fc8/diff:/var/lib/docker/overlay2/95b57c8b5efd1f4e5030e42ee8f4e0d40b98d8a7668a8054ff4bf9c69d99d9eb/diff:/var/lib/docker/overlay2/9e4b53d4f574b7a538da3d5a211ad71f500be70439b464f9d408c45ecc3dcd95/diff:/var/lib/docker/overlay2/913fc948cbabc01d43e5e3d21b50d9fb56c651a663bd34b3b71051fdaea5baeb/diff:/var/lib/docker/overlay2/fda0c16f14d6705c29c54872df539c21d0f641a932ff70bfa1224f56cf467530/diff:/var/lib/docker/overlay2/256584f70b45487c8562f8f38c535f38c70cc555c8434b25f8bcb22685ac308d/diff:/var/lib/docker/overlay2/42e5bceb6f86515a2d61b68bfd268145d62d61e63d553b944ef93f57b1a86404/diff:/var/lib/docker/overlay2/31ef0ef7df042188980f2f6c55553ce2a6110c9f581da00ea308afa142dcda14/diff:/var/lib/docker/overlay2/e5a65937fbbfbd3d27da24393b700e95a66879d9ea968fd19cad32bc811117f8/diff:/var/lib/docker/overlay2/9db579ae7ae6f8a2cdd0931e79b20f4b6c4edea50e62692f401b8421273782fb/diff:/var/lib/docker/overlay2/7583c02e3323c98e545a9a6c7f343700bbbb633f4c856a2a222f8e3652eda60d/diff:/var/lib/docker/overlay2/f13db28022489bf3e63b2aa531757e264e1cb0ad34635afdbaed373bc02a9d83/diff:/var/lib/docker/overlay2/8e089d7b6f69eb33696977c5a36d749c49df43992e06c0e6cf762b1c67722b94/diff:/var/lib/docker/overlay2/64e4ec95d4195213956f64f2bf7d2a37982214b887d4208bbad0d1a826d2014f/diff",
    "MergedDir": "/var/lib/docker/overlay2/405cb8c9aa1666804f2c182a7215306363f95f8d2f713e7f5eb755b7d13bc493/merged",
    "UpperDir": "/var/lib/docker/overlay2/405cb8c9aa1666804f2c182a7215306363f95f8d2f713e7f5eb755b7d13bc493/diff",
    "WorkDir": "/var/lib/docker/overlay2/405cb8c9aa1666804f2c182a7215306363f95f8d2f713e7f5eb755b7d13bc493/work"
    },
    "Name": "overlay2"
    },
    "Mounts": [
    {
    "Type": "bind",
    "Source": "/srv/distribution-centre/logs/api",
    "Destination": "/app/Logs",
    "Mode": "",
    "RW": true,
    "Propagation": "rprivate"
    }
    ],
    "Config": {
    "Hostname": "bc54e3be67b7",
    "Domainname": "",
    "User": "",
    "AttachStdin": false,
    "AttachStdout": false,
    "AttachStderr": false,
    "ExposedPorts": {
    "80/tcp": {}
    },
    "Tty": false,
    "OpenStdin": false,
    "StdinOnce": false,
    "Env": [
    "ASPNETCORE_ENVIRONMENT=test",
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    "ASPNETCORE_URLS=",
    "DOTNET_RUNNING_IN_CONTAINER=true",
    "DOTNET_VERSION=6.0.36",
    "ASPNET_VERSION=6.0.36",
    "DOTNET_GENERATE_ASPNET_CERTIFICATE=false",
    "DOTNET_NOLOGO=true",
    "DOTNET_SDK_VERSION=6.0.428",
    "DOTNET_USE_POLLING_FILE_WATCHER=true",
    "NUGET_XMLDOC_MODE=skip",
    "POWERSHELL_DISTRIBUTION_CHANNEL=PSDocker-DotnetSDK-Debian-11"
    ],
    "Cmd": null,
    "Image": "docker-sz.loda.net.cn/docker-pos/rider/api:2855.2026.0112.59831",
    "Volumes": null,
    "WorkingDir": "/app",
    "Entrypoint": [
    "dotnet",
    "Loda.DistributionCentre.Web.Host.dll"
    ],
    "OnBuild": null,
    "Labels": {}
    },
    "NetworkSettings": {
    "Bridge": "",
    "SandboxID": "a5068e2868bc64451f4fbdf56f6c775a1a55a80d6ae268a919a20d6468c35da0",
    "SandboxKey": "/var/run/docker/netns/a5068e2868bc",
    "Ports": {
    "80/tcp": [
    {
    "HostIp": "0.0.0.0",
    "HostPort": "8071"
    }
    ]
    },
    "HairpinMode": false,
    "LinkLocalIPv6Address": "",
    "LinkLocalIPv6PrefixLen": 0,
    "SecondaryIPAddresses": null,
    "SecondaryIPv6Addresses": null,
    "EndpointID": "eb35adb6802d8a42c50452cf2acb4d24a894d8218cce11118e85796992a645a9",
    "Gateway": "172.17.0.1",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "IPAddress": "172.17.0.5",
    "IPPrefixLen": 16,
    "IPv6Gateway": "",
    "MacAddress": "5a:12:cb:9b:bb:d1",
    "Networks": {
    "bridge": {
    "IPAMConfig": null,
    "Links": null,
    "Aliases": null,
    "MacAddress": "5a:12:cb:9b:bb:d1",
    "DriverOpts": null,
    "GwPriority": 0,
    "NetworkID": "616640d561322f9ace160edec3bba3d6f310eec53f590a3318b099589228104c",
    "EndpointID": "eb35adb6802d8a42c50452cf2acb4d24a894d8218cce11118e85796992a645a9",
    "Gateway": "172.17.0.1",
    "IPAddress": "172.17.0.5",
    "IPPrefixLen": 16,
    "IPv6Gateway": "",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "DNSNames": null
    }
    }
    }
    }
    ]
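The full `docker inspect` dump is verbose; for triage only a handful of fields usually matter. A rough, dependency-free filter over a saved dump (the `inspect.json` file name and toy content below are illustrative; with a live daemon, `docker inspect --format` or jq would be the robust routes):

```shell
# Pull the triage-relevant fields out of a saved `docker inspect` dump.
# grep on pretty-printed JSON is crude but needs no extra tools; the
# sample file here is a minimal stand-in for the real dump.
cat > inspect.json <<'EOF'
{
"Status": "running",
"OOMKilled": false,
"RestartCount": 0
}
EOF
grep -E '"(Status|RestartCount|OOMKilled)"' inspect.json
```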

    === Listening Ports ===
    State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess
    LISTEN 0 4096 127.0.0.54:53 0.0.0.0:*
    LISTEN 0 4096 127.0.0.1:39471 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:22 0.0.0.0:*
    LISTEN 0 511 0.0.0.0:80 0.0.0.0:*
    LISTEN 0 511 0.0.0.0:888 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:44396 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:56380 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:56379 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:8071 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:8031 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:8041 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:8051 0.0.0.0:*
    LISTEN 0 511 0.0.0.0:36379 0.0.0.0:*
    LISTEN 0 100 0.0.0.0:51435 0.0.0.0:*
    LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
    LISTEN 0 4096 [::]:22 [::]:*
    LISTEN 0 4096 [::]:44396 [::]:*
    LISTEN 0 4096 [::]:56380 [::]:*
    LISTEN 0 4096 [::]:56379 [::]:*
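A quick cross-check against the container table: every published host port (8071, 8041, 8031, 8051, 56379, 56380, 44396) should show up among the listeners. Splitting the local address on ':' and keeping the last field handles both the IPv4 form and the bracketed IPv6 form (the saved `ports.txt` snapshot file is an assumption):

```shell
# Extract just the port numbers from a saved `ss -ltn` listing; taking
# the last ':'-separated field works for both 0.0.0.0:8071 and [::]:22.
cat > ports.txt <<'EOF'
LISTEN 0 4096 0.0.0.0:8071 0.0.0.0:*
LISTEN 0 4096 [::]:22 [::]:*
EOF
awk '$1 == "LISTEN" { n = split($4, a, ":"); print a[n] }' ports.txt | sort -un
# prints: 22, then 8071
```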

    === Disk Space ===
    Filesystem Size Used Avail Use% Mounted on
    tmpfs 715M 5.2M 710M 1% /run
    efivarfs 256K 19K 233K 8% /sys/firmware/efi/efivars
    /dev/vda3 99G 42G 53G 45% /
    tmpfs 3.5G 5.7M 3.5G 1% /dev/shm
    tmpfs 5.0M 0 5.0M 0% /run/lock
    /dev/vda2 197M 6.2M 191M 4% /boot/efi
    tmpfs 715M 12K 715M 1% /run/user/0
    tmpfs 715M 12K 715M 1% /run/user/1004

    === Memory Status ===
    total used free shared buff/cache available
    Mem: 7143 2718 270 575 5035 4424
    Swap: 0 0 0
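A reading note on the `free -m` snapshot: the small `free` figure (270 MiB) is not memory pressure, because `buff/cache` (5035 MiB) is reclaimable page cache; `available` is the number to watch. As arithmetic:

```shell
# available / total from the snapshot above: the host still has roughly
# 62% of its RAM effectively usable despite "free" being only 270 MiB.
awk 'BEGIN { printf "%.0f%%\n", 4424 / 7143 * 100 }'
# prints: 62%
```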

    === OOM Events ===

    === Docker Events ===
    2026-01-12T19:23:20.607064812+08:00 container exec_create: curl -f http://localhost/health-status 029b4cfc0ab9a4172e12088c4faab4c84235ae86dd7baa30fbf8f879b68ee880 (com.docker.compose.config-hash=fa5720073ce1ebf925849b1e4cbd586052490604987028241836fd0c700bff6f, com.docker.compose.container-number=1, com.docker.compose.depends_on=, com.docker.compose.image=sha256:464e6086f5d1aa2b694e82cbc5b2540f1b20a8791fba5ecb4ba6493951c8daa5, com.docker.compose.oneoff=False, com.docker.compose.project=rewardplatformweb, com.docker.compose.project.config_files=/home/reward/RewardPlatform/src/RewardPlatform.Web/docker-compose.yaml, com.docker.compose.project.working_dir=/home/reward/RewardPlatform/src/RewardPlatform.Web, com.docker.compose.service=reward, com.docker.compose.version=2.28.1, execID=c8d3a5f8883245901e3c0fb9782d82b054ffb2ecb90c0d00693eba51f0e7f2dd, image=rewardplatformweb-reward, name=reward)
    2026-01-12T19:23:20.607167182+08:00 container exec_start: curl -f http://localhost/health-status 029b4cfc0ab9a4172e12088c4faab4c84235ae86dd7baa30fbf8f879b68ee880 (com.docker.compose.config-hash=fa5720073ce1ebf925849b1e4cbd586052490604987028241836fd0c700bff6f, com.docker.compose.container-number=1, com.docker.compose.depends_on=, com.docker.compose.image=sha256:464e6086f5d1aa2b694e82cbc5b2540f1b20a8791fba5ecb4ba6493951c8daa5, com.docker.compose.oneoff=False, com.docker.compose.project=rewardplatformweb, com.docker.compose.project.config_files=/home/reward/RewardPlatform/src/RewardPlatform.Web/docker-compose.yaml, com.docker.compose.project.working_dir=/home/reward/RewardPlatform/src/RewardPlatform.Web, com.docker.compose.service=reward, com.docker.compose.version=2.28.1, execID=c8d3a5f8883245901e3c0fb9782d82b054ffb2ecb90c0d00693eba51f0e7f2dd, image=rewardplatformweb-reward, name=reward)
    2026-01-12T19:23:50.722054677+08:00 container exec_create: curl -f http://localhost/health-status 029b4cfc0ab9a4172e12088c4faab4c84235ae86dd7baa30fbf8f879b68ee880 (com.docker.compose.config-hash=fa5720073ce1ebf925849b1e4cbd586052490604987028241836fd0c700bff6f, com.docker.compose.container-number=1, com.docker.compose.depends_on=, com.docker.compose.image=sha256:464e6086f5d1aa2b694e82cbc5b2540f1b20a8791fba5ecb4ba6493951c8daa5, com.docker.compose.oneoff=False, com.docker.compose.project=rewardplatformweb, com.docker.compose.project.config_files=/home/reward/RewardPlatform/src/RewardPlatform.Web/docker-compose.yaml, com.docker.compose.project.working_dir=/home/reward/RewardPlatform/src/RewardPlatform.Web, com.docker.compose.service=reward, com.docker.compose.version=2.28.1, execID=6398e56e45718f4ac693847a526222ea7b8030992eb5679215f138d5dcab1148, image=rewardplatformweb-reward, name=reward)
    2026-01-12T19:23:50.722167378+08:00 container exec_start: curl -f http://localhost/health-status 029b4cfc0ab9a4172e12088c4faab4c84235ae86dd7baa30fbf8f879b68ee880 (com.docker.compose.config-hash=fa5720073ce1ebf925849b1e4cbd586052490604987028241836fd0c700bff6f, com.docker.compose.container-number=1, com.docker.compose.depends_on=, com.docker.compose.image=sha256:464e6086f5d1aa2b694e82cbc5b2540f1b20a8791fba5ecb4ba6493951c8daa5, com.docker.compose.oneoff=False, com.docker.compose.project=rewardplatformweb, com.docker.compose.project.config_files=/home/reward/RewardPlatform/src/RewardPlatform.Web/docker-compose.yaml, com.docker.compose.project.working_dir=/home/reward/RewardPlatform/src/RewardPlatform.Web, com.docker.compose.service=reward, com.docker.compose.version=2.28.1, execID=6398e56e45718f4ac693847a526222ea7b8030992eb5679215f138d5dcab1148, image=rewardplatformweb-reward, name=reward)
    2026-01-12T19:32:53.013454020+08:00 container exec_create: curl -f http://localhost/health-status 029b4cfc0ab9a4172e12088c4faab4c84235ae86dd7baa30fbf8f879b68ee880 (com.docker.compose.config-hash=fa5720073ce1ebf925849b1e4cbd586052490604987028241836fd0c700bff6f, com.docker.compose.container-number=1, com.docker.compose.depends_on=, com.docker.compose.image=sha256:464e6086f5d1aa2b694e82cbc5b2540f1b20a8791fba5ecb4ba6493951c8daa5, com.docker.compose.oneoff=False, com.docker.compose.project=rewardplatformweb, com.docker.compose.project.config_files=/home/reward/RewardPlatform/src/RewardPlatform.Web/docker-compose.yaml, com.docker.compose.project.working_dir=/home/reward/RewardPlatform/src/RewardPlatform.Web, com.docker.compose.service=reward, com.docker.compose.version=2.28.1, execID=0a338bec297fda57ebfc2e3f12a01d32bb7fa4e814bd5ec2fd13d1f3c1a4678f, image=rewardplatformweb-reward, name=reward)
    2026-01-12T19:32:53.013568922+08:00 container exec_start: curl -f http://localhost/health-status 029b4cfc0ab9a4172e12088c4faab4c84235ae86dd7baa30fbf8f879b68ee880 (com.docker.compose.config-hash=fa5720073ce1ebf925849b1e4cbd586052490604987028241836fd0c700bff6f, com.docker.compose.container-number=1, com.docker.compose.depends_on=, com.docker.compose.image=sha256:464e6086f5d1aa2b694e82cbc5b2540f1b20a8791fba5ecb4ba6493951c8daa5, com.docker.compose.oneoff=False, com.docker.compose.project=rewardplatformweb, com.docker.compose.project.config_files=/home/reward/RewardPlatform/src/RewardPlatform.Web/docker-compose.yaml, com.docker.compose.project.working_dir=/home/reward/RewardPlatform/src/RewardPlatform.Web, com.docker.compose.service=reward, com.docker.compose.version=2.28.1, execID=0a338bec297fda57ebfc2e3f12a01d32bb7fa4e814bd5ec2fd13d1f3c1a4678f, image=rewardplatformweb-reward, name=reward)

    === Done ===
    Mon Jan 12 07:33:05 PM CST 2026
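The script producing these sections is not included in the post; a minimal sketch of the likely shape (section names, output path, and command list are guesses inferred from the headers above, not the actual script) would be:

```shell
#!/bin/sh
# Minimal sketch of a diagnostics collector that writes "=== name ==="
# sections like the dump above. Output path and command set are assumed;
# `|| true` keeps the report going when a tool (e.g. docker) is missing.
out=diag.log
section() { printf '\n=== %s ===\n' "$1" >> "$out"; }

: > "$out"                                  # truncate any previous report
section "Container Status";    docker ps                >> "$out" 2>&1 || true
section "Container Resources"; docker stats --no-stream >> "$out" 2>&1 || true
section "Listening Ports";     ss -ltn                  >> "$out" 2>&1 || true
section "Disk Space";          df -h                    >> "$out" 2>&1 || true
section "Memory Status";       free -m                  >> "$out" 2>&1 || true
section "Done";                date                     >> "$out" 2>&1 || true
```

Run from cron or a CI hook, this yields one self-describing report per invocation, ending with a Done timestamp exactly as seen above.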

    Team Announcements

  • Docker deployment: automatic log collection
    admin

    Microsoft.AspNetCore.StaticFiles.StaticFileMiddleware[4]
    2026-01-12T11:21:01.047969315Z => SpanId:6952101bbcf4f63b, TraceId:e96ac0262f8324b9d7060b5407533b2f, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15L => RequestPath:/health RequestId:0HNIHPIL8R15L:00000002
    2026-01-12T11:21:01.047983958Z The request path /health does not match a supported file type
    2026-01-12T11:21:01.133497534Z dbug: Microsoft.AspNetCore.Routing.Matching.DfaMatcher[1001]
    2026-01-12T11:21:01.135428046Z => SpanId:6952101bbcf4f63b, TraceId:e96ac0262f8324b9d7060b5407533b2f, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15L => RequestPath:/health RequestId:0HNIHPIL8R15L:00000002
    2026-01-12T11:21:01.135548625Z 1 candidate(s) found for the request path '/health'
    2026-01-12T11:21:01.135553642Z dbug: Microsoft.AspNetCore.Routing.EndpointRoutingMiddleware[1]
    2026-01-12T11:21:01.135557345Z => SpanId:6952101bbcf4f63b, TraceId:e96ac0262f8324b9d7060b5407533b2f, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15L => RequestPath:/health RequestId:0HNIHPIL8R15L:00000002
    2026-01-12T11:21:01.135561094Z Request matched endpoint 'Health checks'
    2026-01-12T11:21:01.193240590Z dbug: Microsoft.AspNetCore.Authentication.JwtBearer.JwtBearerHandler[9]
    2026-01-12T11:21:01.193307759Z => SpanId:6952101bbcf4f63b, TraceId:e96ac0262f8324b9d7060b5407533b2f, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15L => RequestPath:/health RequestId:0HNIHPIL8R15L:00000002
    2026-01-12T11:21:01.193313857Z AuthenticationScheme: JwtBearer was not authenticated.
    2026-01-12T11:21:01.228272422Z info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
    2026-01-12T11:21:01.228325478Z => SpanId:6952101bbcf4f63b, TraceId:e96ac0262f8324b9d7060b5407533b2f, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15L => RequestPath:/health RequestId:0HNIHPIL8R15L:00000002
    2026-01-12T11:21:01.228330299Z Executing endpoint 'Health checks'
    2026-01-12T11:21:01.233422011Z dbug: Microsoft.Extensions.Diagnostics.HealthChecks.DefaultHealthCheckService[100]
    2026-01-12T11:21:01.233528787Z => SpanId:6952101bbcf4f63b, TraceId:e96ac0262f8324b9d7060b5407533b2f, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15L => RequestPath:/health RequestId:0HNIHPIL8R15L:00000002
    2026-01-12T11:21:01.233535606Z Running health checks
    2026-01-12T11:21:01.259186434Z dbug: Microsoft.Extensions.Diagnostics.HealthChecks.DefaultHealthCheckService[101]
    2026-01-12T11:21:01.259276429Z => SpanId:6952101bbcf4f63b, TraceId:e96ac0262f8324b9d7060b5407533b2f, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15L => RequestPath:/health RequestId:0HNIHPIL8R15L:00000002
    2026-01-12T11:21:01.259281979Z Health check processing with combined status Healthy completed after 5.1877ms
    2026-01-12T11:21:01.259285121Z info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1]
    2026-01-12T11:21:01.259288513Z => SpanId:6952101bbcf4f63b, TraceId:e96ac0262f8324b9d7060b5407533b2f, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15L => RequestPath:/health RequestId:0HNIHPIL8R15L:00000002
    2026-01-12T11:21:01.259292221Z Executed endpoint 'Health checks'
    2026-01-12T11:21:01.259295247Z dbug: Microsoft.AspNetCore.Server.Kestrel.Connections[9]
    2026-01-12T11:21:01.259299448Z => SpanId:6952101bbcf4f63b, TraceId:e96ac0262f8324b9d7060b5407533b2f, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15L => RequestPath:/health RequestId:0HNIHPIL8R15L:00000002
    2026-01-12T11:21:01.259303087Z Connection id "0HNIHPIL8R15L" completed keep alive response.
    2026-01-12T11:21:01.267379138Z info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
    2026-01-12T11:21:01.267466498Z => SpanId:6952101bbcf4f63b, TraceId:e96ac0262f8324b9d7060b5407533b2f, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15L => RequestPath:/health RequestId:0HNIHPIL8R15L:00000002
    2026-01-12T11:21:01.267471867Z Request finished HTTP/1.1 GET http://localhost:8071/health - - - 200 - text/plain 235.0108ms
    2026-01-12T11:21:01.274367426Z dbug: Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets[6]
    2026-01-12T11:21:01.274435304Z Connection id "0HNIHPIL8R15L" received FIN.
    2026-01-12T11:21:01.274649188Z dbug: Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets[7]
    2026-01-12T11:21:01.274660023Z => ConnectionId:0HNIHPIL8R15L
    2026-01-12T11:21:01.274663185Z Connection id "0HNIHPIL8R15L" sending FIN because: "The Socket transport's send loop completed gracefully."
    2026-01-12T11:21:01.276244246Z dbug: Microsoft.AspNetCore.Server.Kestrel.Connections[10]
    2026-01-12T11:21:01.276322794Z => ConnectionId:0HNIHPIL8R15L
    2026-01-12T11:21:01.276327757Z Connection id "0HNIHPIL8R15L" disconnecting.
    2026-01-12T11:21:01.279905758Z dbug: Microsoft.AspNetCore.Server.Kestrel.Connections[2]
    2026-01-12T11:21:01.279963262Z Connection id "0HNIHPIL8R15L" stopped.
    2026-01-12T11:21:01.287400785Z dbug: Microsoft.AspNetCore.Server.Kestrel.Connections[39]
    2026-01-12T11:21:01.287533513Z Connection id "0HNIHPIL8R15M" accepted.
    2026-01-12T11:21:01.287633513Z dbug: Microsoft.AspNetCore.Server.Kestrel.Connections[1]
    2026-01-12T11:21:01.287643138Z Connection id "0HNIHPIL8R15M" started.
    2026-01-12T11:21:01.288834811Z info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
    2026-01-12T11:21:01.289051041Z => SpanId:a2d77f6ca13f196a, TraceId:878f233e949a7839b2fe450361abe6b4, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15M => RequestPath:/health RequestId:0HNIHPIL8R15M:00000002
    2026-01-12T11:21:01.289070555Z Request starting HTTP/1.1 GET http://localhost:8071/health - -
    2026-01-12T11:21:01.289412415Z dbug: Microsoft.AspNetCore.StaticFiles.StaticFileMiddleware[4]
    2026-01-12T11:21:01.289573619Z => SpanId:a2d77f6ca13f196a, TraceId:878f233e949a7839b2fe450361abe6b4, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15M => RequestPath:/health RequestId:0HNIHPIL8R15M:00000002
    2026-01-12T11:21:01.289597960Z The request path /health does not match a supported file type
    2026-01-12T11:21:01.292621497Z dbug: Microsoft.AspNetCore.Routing.Matching.DfaMatcher[1001]
    2026-01-12T11:21:01.292958189Z => SpanId:a2d77f6ca13f196a, TraceId:878f233e949a7839b2fe450361abe6b4, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15M => RequestPath:/health RequestId:0HNIHPIL8R15M:00000002
    2026-01-12T11:21:01.293025936Z 1 candidate(s) found for the request path '/health'
    2026-01-12T11:21:01.293028980Z dbug: Microsoft.AspNetCore.Routing.EndpointRoutingMiddleware[1]
    2026-01-12T11:21:01.293032074Z => SpanId:a2d77f6ca13f196a, TraceId:878f233e949a7839b2fe450361abe6b4, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15M => RequestPath:/health RequestId:0HNIHPIL8R15M:00000002
    2026-01-12T11:21:01.293035342Z Request matched endpoint 'Health checks'
    2026-01-12T11:21:01.293742258Z dbug: Microsoft.AspNetCore.Authentication.JwtBearer.JwtBearerHandler[9]
    2026-01-12T11:21:01.293976827Z => SpanId:a2d77f6ca13f196a, TraceId:878f233e949a7839b2fe450361abe6b4, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15M => RequestPath:/health RequestId:0HNIHPIL8R15M:00000002
    2026-01-12T11:21:01.294036368Z AuthenticationScheme: JwtBearer was not authenticated.
    2026-01-12T11:21:01.295026665Z info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
    2026-01-12T11:21:01.295296378Z => SpanId:a2d77f6ca13f196a, TraceId:878f233e949a7839b2fe450361abe6b4, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15M => RequestPath:/health RequestId:0HNIHPIL8R15M:00000002
    2026-01-12T11:21:01.295427010Z Executing endpoint 'Health checks'
    2026-01-12T11:21:01.295546545Z dbug: Microsoft.Extensions.Diagnostics.HealthChecks.DefaultHealthCheckService[100]
    2026-01-12T11:21:01.295680839Z => SpanId:a2d77f6ca13f196a, TraceId:878f233e949a7839b2fe450361abe6b4, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15M => RequestPath:/health RequestId:0HNIHPIL8R15M:00000002
    2026-01-12T11:21:01.295690312Z Running health checks
    2026-01-12T11:21:01.295694236Z dbug: Microsoft.Extensions.Diagnostics.HealthChecks.DefaultHealthCheckService[101]
    2026-01-12T11:21:01.295733560Z => SpanId:a2d77f6ca13f196a, TraceId:878f233e949a7839b2fe450361abe6b4, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15M => RequestPath:/health RequestId:0HNIHPIL8R15M:00000002
    2026-01-12T11:21:01.295737279Z Health check processing with combined status Healthy completed after 0.1931ms
    2026-01-12T11:21:01.298354643Z info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1]
    2026-01-12T11:21:01.298418080Z => SpanId:a2d77f6ca13f196a, TraceId:878f233e949a7839b2fe450361abe6b4, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15M => RequestPath:/health RequestId:0HNIHPIL8R15M:00000002
    2026-01-12T11:21:01.298422450Z Executed endpoint 'Health checks'
    2026-01-12T11:21:01.298425674Z dbug: Microsoft.AspNetCore.Server.Kestrel.Connections[9]
    2026-01-12T11:21:01.298428688Z => SpanId:a2d77f6ca13f196a, TraceId:878f233e949a7839b2fe450361abe6b4, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15M => RequestPath:/health RequestId:0HNIHPIL8R15M:00000002
    2026-01-12T11:21:01.298431957Z Connection id "0HNIHPIL8R15M" completed keep alive response.
    2026-01-12T11:21:01.301355946Z dbug: Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets[6]
    2026-01-12T11:21:01.301523135Z Connection id "0HNIHPIL8R15M" received FIN.
    2026-01-12T11:21:01.301527810Z dbug: Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets[7]
    2026-01-12T11:21:01.301532033Z => ConnectionId:0HNIHPIL8R15M
    2026-01-12T11:21:01.301535872Z Connection id "0HNIHPIL8R15M" sending FIN because: "The Socket transport's send loop completed gracefully."
    2026-01-12T11:21:01.301956981Z info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
    2026-01-12T11:21:01.301962299Z => SpanId:a2d77f6ca13f196a, TraceId:878f233e949a7839b2fe450361abe6b4, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15M => RequestPath:/health RequestId:0HNIHPIL8R15M:00000002
    2026-01-12T11:21:01.301966616Z Request finished HTTP/1.1 GET http://localhost:8071/health - - - 200 - text/plain 10.1955ms
    2026-01-12T11:21:01.301987124Z dbug: Microsoft.AspNetCore.Server.Kestrel.Connections[10]
    2026-01-12T11:21:01.301991080Z => ConnectionId:0HNIHPIL8R15M
    2026-01-12T11:21:01.301994444Z Connection id "0HNIHPIL8R15M" disconnecting.
    2026-01-12T11:21:01.301998218Z dbug: Microsoft.AspNetCore.Server.Kestrel.Connections[2]
    2026-01-12T11:21:01.302061153Z Connection id "0HNIHPIL8R15M" stopped.
    2026-01-12T11:21:01.641537250Z dbug: Microsoft.AspNetCore.Server.Kestrel.Connections[39]
    2026-01-12T11:21:01.641589785Z Connection id "0HNIHPIL8R15N" accepted.
    2026-01-12T11:21:01.641594925Z dbug: Microsoft.AspNetCore.Server.Kestrel.Connections[1]
    2026-01-12T11:21:01.641598948Z Connection id "0HNIHPIL8R15N" started.
    2026-01-12T11:21:01.646328203Z info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
    2026-01-12T11:21:01.646532904Z => SpanId:8fa8eca33635eb67, TraceId:7b7fc7ae26270c513dbe32c92fb162fb, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15N => RequestPath:/health RequestId:0HNIHPIL8R15N:00000002
    2026-01-12T11:21:01.646542675Z Request starting HTTP/1.1 GET http://rider.test.sz.loda.net.cn/health - -
    2026-01-12T11:21:01.647408511Z dbug: Microsoft.AspNetCore.StaticFiles.StaticFileMiddleware[4]
    2026-01-12T11:21:01.647490253Z => SpanId:8fa8eca33635eb67, TraceId:7b7fc7ae26270c513dbe32c92fb162fb, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15N => RequestPath:/health RequestId:0HNIHPIL8R15N:00000002
    2026-01-12T11:21:01.647494712Z The request path /health does not match a supported file type
    2026-01-12T11:21:01.647942085Z dbug: Microsoft.AspNetCore.Routing.Matching.DfaMatcher[1001]
    2026-01-12T11:21:01.647965692Z => SpanId:8fa8eca33635eb67, TraceId:7b7fc7ae26270c513dbe32c92fb162fb, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15N => RequestPath:/health RequestId:0HNIHPIL8R15N:00000002
    2026-01-12T11:21:01.647969815Z 1 candidate(s) found for the request path '/health'
    2026-01-12T11:21:01.648386705Z dbug: Microsoft.AspNetCore.Routing.EndpointRoutingMiddleware[1]
    2026-01-12T11:21:01.648467374Z => SpanId:8fa8eca33635eb67, TraceId:7b7fc7ae26270c513dbe32c92fb162fb, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15N => RequestPath:/health RequestId:0HNIHPIL8R15N:00000002
    2026-01-12T11:21:01.648472380Z Request matched endpoint 'Health checks'
    2026-01-12T11:21:01.649553689Z dbug: Microsoft.AspNetCore.Authentication.JwtBearer.JwtBearerHandler[9]
    2026-01-12T11:21:01.649614672Z => SpanId:8fa8eca33635eb67, TraceId:7b7fc7ae26270c513dbe32c92fb162fb, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15N => RequestPath:/health RequestId:0HNIHPIL8R15N:00000002
    2026-01-12T11:21:01.649620431Z AuthenticationScheme: JwtBearer was not authenticated.
    2026-01-12T11:21:01.651466878Z info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
    2026-01-12T11:21:01.651519293Z => SpanId:8fa8eca33635eb67, TraceId:7b7fc7ae26270c513dbe32c92fb162fb, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15N => RequestPath:/health RequestId:0HNIHPIL8R15N:00000002
    2026-01-12T11:21:01.651558479Z Executing endpoint 'Health checks'
    2026-01-12T11:21:01.651562458Z dbug: Microsoft.Extensions.Diagnostics.HealthChecks.DefaultHealthCheckService[100]
    2026-01-12T11:21:01.651566351Z => SpanId:8fa8eca33635eb67, TraceId:7b7fc7ae26270c513dbe32c92fb162fb, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15N => RequestPath:/health RequestId:0HNIHPIL8R15N:00000002
    2026-01-12T11:21:01.651570707Z Running health checks
    2026-01-12T11:21:01.651574105Z dbug: Microsoft.Extensions.Diagnostics.HealthChecks.DefaultHealthCheckService[101]
    2026-01-12T11:21:01.651577856Z => SpanId:8fa8eca33635eb67, TraceId:7b7fc7ae26270c513dbe32c92fb162fb, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15N => RequestPath:/health RequestId:0HNIHPIL8R15N:00000002
    2026-01-12T11:21:01.651581789Z Health check processing with combined status Healthy completed after 0.0199ms
    2026-01-12T11:21:01.651585097Z info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1]
    2026-01-12T11:21:01.651588757Z => SpanId:8fa8eca33635eb67, TraceId:7b7fc7ae26270c513dbe32c92fb162fb, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15N => RequestPath:/health RequestId:0HNIHPIL8R15N:00000002
    2026-01-12T11:21:01.651592939Z Executed endpoint 'Health checks'
    2026-01-12T11:21:01.651595853Z dbug: Microsoft.AspNetCore.Server.Kestrel.Connections[9]
    2026-01-12T11:21:01.651598704Z => SpanId:8fa8eca33635eb67, TraceId:7b7fc7ae26270c513dbe32c92fb162fb, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15N => RequestPath:/health RequestId:0HNIHPIL8R15N:00000002
    2026-01-12T11:21:01.651601991Z Connection id "0HNIHPIL8R15N" completed keep alive response.
    2026-01-12T11:21:01.651604639Z info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
    2026-01-12T11:21:01.651607883Z => SpanId:8fa8eca33635eb67, TraceId:7b7fc7ae26270c513dbe32c92fb162fb, ParentId:0000000000000000 => ConnectionId:0HNIHPIL8R15N => RequestPath:/health RequestId:0HNIHPIL8R15N:00000002
    2026-01-12T11:21:01.651611724Z Request finished HTTP/1.1 GET http://rider.test.sz.loda.net.cn/health - - - 200 - text/plain 5.2685ms

    Team Announcements

  • Automatic Log Collection for Docker Deployments
    A admin

    === 1. Container status ===
    NAMES STATUS PORTS
    distribution-centre-api-test Up 19 seconds 0.0.0.0:8071->80/tcp
    pos-blazor-test Up 2 days 0.0.0.0:8031->8080/tcp
    pos-auth-test Up 3 days 0.0.0.0:8051->8080/tcp
    pos-api-test Up 3 days 0.0.0.0:8041->8080/tcp
    redis-56379 Up 7 days 0.0.0.0:56379->6379/tcp, [::]:56379->6379/tcp
    redis-56380 Up 7 days 0.0.0.0:56380->6379/tcp, [::]:56380->6379/tcp
    reward Up 7 days (unhealthy) 0.0.0.0:44396->80/tcp, [::]:44396->80/tcp

    === 2. Container resources ===
    CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
    c0aa9985deb9 distribution-centre-api-test 1.19% 205.4MiB / 6.976GiB 2.87% 59.8kB / 158kB 0B / 53.2kB 47

    === 3. Container details ===
    Image:docker-sz.loda.net.cn/docker-pos/rider/api:2855.2026.0112.59827 Started:2026-01-12T11:20:44.406234722Z Restarts:0 Status:running ExitCode:0

    === 4. Listening ports ===
    State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess
    LISTEN 0 4096 127.0.0.54:53 0.0.0.0:*
    LISTEN 0 4096 127.0.0.1:39471 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:22 0.0.0.0:*
    LISTEN 0 511 0.0.0.0:80 0.0.0.0:*
    LISTEN 0 511 0.0.0.0:888 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:44396 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:56380 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:56379 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:8071 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:8031 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:8041 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:8051 0.0.0.0:*
    LISTEN 0 511 0.0.0.0:36379 0.0.0.0:*
    LISTEN 0 100 0.0.0.0:51435 0.0.0.0:*
    LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
    LISTEN 0 4096 [::]:22 [::]:*
    LISTEN 0 4096 [::]:44396 [::]:*
    LISTEN 0 4096 [::]:56380 [::]:*
    LISTEN 0 4096 [::]:56379 [::]:*

    === 5. Disk space ===
    Filesystem Size Used Avail Use% Mounted on
    tmpfs 715M 5.2M 710M 1% /run
    efivarfs 256K 19K 233K 8% /sys/firmware/efi/efivars
    /dev/vda3 99G 42G 53G 45% /
    tmpfs 3.5G 5.6M 3.5G 1% /dev/shm
    tmpfs 5.0M 0 5.0M 0% /run/lock
    /dev/vda2 197M 6.2M 191M 4% /boot/efi
    tmpfs 715M 12K 715M 1% /run/user/0
    tmpfs 715M 12K 715M 1% /run/user/1004

    === 6. Memory status ===
    total used free shared buff/cache available
    Mem: 7143 2838 219 624 5015 4304
    Swap: 0 0 0

    === 7. OOM events ===

    === Done ===
    Mon Jan 12 07:21:05 PM CST 2026


    Team Announcements

  • Automatic Log Collection for Docker Deployments
    A admin

    Automatic Log Collection for Docker Deployments

    Published: 2026-01-12
    Scope: all Docker container deployments using the docker/deploy.yml template
    Activation: automatic; no project configuration changes required


    Overview

    Effective immediately, every Docker container deployment job automatically collects the following after it finishes (whether it succeeds or fails):

    1. 📄 Full container logs - startup output, runtime logs, and exceptions
    2. 🔍 System diagnostics - container status, resource usage, ports, disk, memory, and OOM events

    This information is saved as a GitLab artifact and retained for 7 days.


    Why do we need this?

    Previously, troubleshooting a container meant:

    • ❌ SSHing into the server and running docker logs by hand
    • ❌ Logs only appeared in the console and were hard to read once they got long
    • ❌ Logs could be truncated or lost when a deployment failed
    • ❌ No quick way to see the server's resource state

    Now:

    • ✅ Collected automatically, no server login required
    • ✅ Full logs, saved as downloadable files
    • ✅ Diagnostics at a glance
    • ✅ Artifact links you can click to download

    How to use it

    1. Find the artifact links

    When a deployment job finishes, the bottom of the GitLab job log shows:

    ========================================
    📥 Artifact download links:
    ========================================
    
    📄 Container logs:
       https://gitlab.xxx.com/xxx/-/jobs/12345/artifacts/file/logs/container_production_20260112_190000.log
    
    🔍 Diagnostics:
       https://gitlab.xxx.com/xxx/-/jobs/12345/artifacts/file/logs/container_production_20260112_190000_diagnostics.txt
    
    📦 All files (browse):
       https://gitlab.xxx.com/xxx/-/jobs/12345/artifacts/browse/logs/
    ========================================
    

    2. Download options

    • Option 1: click the links above
    • Option 2: use the "Browse" or "Download" buttons on the right side of the job page
    • Option 3: on the pipeline page, click the download icon → artifacts

    3. Artifact contents

    File                                                     Contents
    {container}_{environment}_{timestamp}.log                Full container log (with timestamps)
    {container}_{environment}_{timestamp}_diagnostics.txt    System diagnostics

    What do the diagnostics include?

    ============================================
    📦 1. Container status (docker ps -a)
    ============================================
    - Run state, port mappings, and image version of every container
    
    ============================================
    📊 2. Container resource usage (docker stats)
    ============================================
    - CPU, memory, network I/O, disk I/O
    
    ============================================
    🔧 3. Container configuration summary (docker inspect)
    ============================================
    - Image name, start time, restart count, status, exit code, port mappings, environment variables
    
    ============================================
    🌐 4. Listening ports (ss -tlnp)
    ============================================
    - Every port the server is listening on
    
    ============================================
    💾 5. Disk space (df -h)
    ============================================
    - Usage per partition
    
    ============================================
    🧠 6. Memory status (free -m)
    ============================================
    - Physical memory and swap usage
    
    ============================================
    💀 7. OOM events (dmesg)
    ============================================
    - Out-of-memory kill events from the kernel log
    

    Configuration options (optional)

    Collection is enabled by default and normally needs no changes. To adjust it, set the following in your project configuration:

    variables:
      # Disable log collection (default: true)
      COLLECT_LOGS_ENABLED: "false"
      
      # Limit the number of log lines (default: empty = full log)
      COLLECT_LOGS_TAIL: "5000"
      
      # Disable diagnostics collection (default: true)
      COLLECT_DIAGNOSTICS: "false"
    
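    As a rough sketch of how a tail limit like this typically maps onto docker logs arguments (the template's real implementation may differ; the helper name is made up for illustration):

    ```shell
    # Hypothetical helper: builds the docker logs command line from
    # COLLECT_LOGS_TAIL. Not the template's actual code.
    collect_logs_cmd() {
      container="$1"
      if [ -n "${COLLECT_LOGS_TAIL:-}" ]; then
        echo "docker logs --timestamps --tail ${COLLECT_LOGS_TAIL} ${container}"
      else
        echo "docker logs --timestamps ${container}"   # empty = full log
      fi
    }

    COLLECT_LOGS_TAIL=5000
    collect_logs_cmd pos-api-test
    # → docker logs --timestamps --tail 5000 pos-api-test
    ```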

    FAQ

    Q: Won't the log files be huge?

    A: It depends on how long the container has run and how much it logs. Artifacts have a size limit (100 MB by default), and oversized logs are truncated. If you hit the limit regularly, set COLLECT_LOGS_TAIL: "10000" to cap the line count.

    Q: Could sensitive information leak?

    A: Artifacts follow GitLab project permissions; only project members can view them. If you have stricter security requirements, disable collection with COLLECT_LOGS_ENABLED: "false".

    Q: Are logs still collected when a deployment fails?

    A: Yes. after_script runs even after the main script fails, and when: always ensures the artifact is always uploaded.

    Q: Do existing projects need configuration changes?

    A: No. As long as a project uses the docker/deploy.yml template, the feature takes effect automatically.


    Technical implementation

    • Implemented in the after_script of docker/deploy.yml
    • Runs docker logs and the diagnostic commands remotely over SSH
    • Saves the results to the logs/ directory, which is uploaded as an artifact

    Related files:

    • deploy/common-ci/docker/deploy.yml
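    The collection step can be sketched as follows. This is illustrative only: the template's real variable names, file naming, and SSH invocation may differ, and CONTAINER/ENVIRONMENT here are assumed example values.

    ```shell
    # Sketch of an after_script collector that writes the log and diagnostics
    # files later uploaded as artifacts. Names below are assumptions.
    CONTAINER="pos-api-test"      # assumed: name of the deployed container
    ENVIRONMENT="test"            # assumed: deployment environment
    STAMP="$(date +%Y%m%d_%H%M%S)"
    mkdir -p logs
    LOG_FILE="logs/${CONTAINER}_${ENVIRONMENT}_${STAMP}.log"
    DIAG_FILE="logs/${CONTAINER}_${ENVIRONMENT}_${STAMP}_diagnostics.txt"

    # In CI each command would run remotely, e.g. ssh "$DEPLOY_HOST" "docker logs ..."
    echo "(docker logs --timestamps ${CONTAINER} output captured here)" > "${LOG_FILE}"
    {
      echo "=== 1. Container status ==="
      echo "(docker ps -a output)"
      echo "=== 6. Memory status ==="
      echo "(free -m output)"
    } > "${DIAG_FILE}"
    ls logs/
    ```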

    Feedback and suggestions

    Questions or ideas for improvement? Contact the DevOps team or open an issue in the deploy/common-ci repository.

    Team Announcements

  • Installing Redis without a password
    A admin

    Installing Redis without a password

    1. Install Redis in WSL (Ubuntu):
    sudo apt update
    sudo apt install redis-server -y
    
    2. Edit the Redis configuration to disable password authentication:
    sudo sed -i 's/^requirepass/#requirepass/' /etc/redis/redis.conf
    
    3. Restart the Redis service so the change takes effect:
    sudo service redis-server restart
    
    4. Verify that Redis is running and requires no password:
    redis-cli ping
    

    If it returns "PONG", the installation succeeded and no password is required.
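    To see what the sed command in step 2 actually does, here is a safe demonstration on a throwaway sample file (/tmp/redis.conf.sample is made up for illustration; on a real machine the target is /etc/redis/redis.conf as shown above):

    ```shell
    # Demonstrates the sed edit from step 2 on a sample config file.
    printf 'bind 127.0.0.1\nrequirepass s3cret\n' > /tmp/redis.conf.sample
    sed -i 's/^requirepass/#requirepass/' /tmp/redis.conf.sample
    if grep -q '^requirepass' /tmp/redis.conf.sample; then
      echo "password auth still enabled"
    else
      echo "no active requirepass line"   # the line is now commented out
    fi
    ```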

    Development Environment Setup

  • Code check-in governance requirements (traceable governance of commits/MRs/branches)
    A admin

    1. Background: why we are changing the process

    We recently upgraded our CI/CD and branch governance. The goals are clear:

    • Improve production safety: avoid the risk of "whoever touches master takes production down with them"
    • Improve traceability: every change maps to an explicit requirement/defect key (e.g. ZD-xxx / TG-xxx / REQ-xxx)
    • Stay stable across multiple repositories and multi-level triggers (A->B->C): avoid branch drift along the trigger chain and missed merges

    The upgrade adds some management overhead, but in exchange the release pipeline becomes controllable, auditable, and revertible.


    2. The problem on the ground: why "I want to release to production" turns into "everyone gets merged along"

    The typical confusion we hit:

    • Everyone currently commits directly to develop
    • A fix has passed testing on develop and now needs a production release
    • But develop also contains other people's changes
      Merging develop -> master would carry those other changes to production as well
      The process then feels like "a long detour no better than editing master directly", only more tiring

    This is nobody's personal failure; it is the inevitable outcome of working with "only master + develop, with everyone committing straight to develop":
    develop naturally becomes a grab bag of all work in progress, with no way to pick and choose what to release.


    3. Root cause: where the "pull hotfix from develop" mistake comes from

    When an urgent production release is needed, the natural but dangerous move is:

    • Pull a hotfix/* branch from develop
    • Then merge hotfix/* -> master to release

    The problem: if the hotfix's baseline comes from develop, it is effectively "releasing develop to production",
    which inevitably drags along changes that are not ready.

    Key principle:
    a production fix (hotfix) must branch off the production baseline,
    i.e. hotfix/* must be pulled from master, never from develop.


    4. Shared goal: what kind of policy do we actually want

    We need a policy that is cheap to follow and hard to get wrong, solving two core problems:

    • Controllable production releases: we can state exactly which commits a release contains
    • Stable multi-repository chains: downstream follows the upstream branch where possible, with a predictable fallback where not

    In other words: make master genuinely protected, and make a production release a managed action.


    5. Recommended approach (lowest cost, closes the governance loop)

    5.1 Branch roles (long-lived + temporary branches)

    • master (production baseline, protected)
      • Merge requests only
      • Drives production/staging releases (manual)
    • develop (integration baseline, protection recommended)
      • Merged into via MR, to reduce accidental breakage
      • Drives the test environment (automatic)
    • Temporary branches are allowed (not branch sprawl, but a necessary isolation mechanism)
      • feature/<KEY>-<slug>: pulled from develop, merged back into develop
      • hotfix/<KEY>-<slug>: pulled from master, merged back into master for production

    This is not a return to full GitFlow; it is the minimal set:
    two long-lived branches plus two kinds of temporary branches, covering the hard requirement of "release only part of the work".

    5.2 Release paths (separate "integrated release" from "emergency fix")

    • Integrated release (develop as a whole is releasable)
      • Open a develop -> master MR
      • After staging verification, trigger production manually
    • Emergency fix (develop as a whole is not releasable)
      • Pull hotfix/<KEY> from master
      • Bring over only the necessary commits (see cherry-pick below)
      • Merge hotfix -> master and release
      • Back-merge: follow up with master -> develop so the fix is neither lost nor reintroduced later

    5.3 The role of cherry-pick: releasing only some of the commits

    When the fix is already done on develop, but develop is mixed with other people's changes:

    • Do not solve it with develop -> master
    • Instead, pull hotfix/<KEY> from master, then cherry-pick the commits belonging to this fix from develop
      Production then contains only the commits you intend to release, without anyone's unfinished work
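    The hotfix-with-cherry-pick flow above can be sketched end to end. The repository, branch, and key names below are placeholders; the demo builds a local throwaway repo so every step is verifiable.

    ```shell
    # Self-contained demo of the hotfix + cherry-pick flow (placeholder names).
    set -e
    cd "$(mktemp -d)"
    git init -q repo && cd repo
    git config user.email demo@example.com && git config user.name demo
    git commit -q --allow-empty -m "ZD-0001 initial"   # production baseline
    git branch -m master

    git switch -q -c develop
    echo fix > fix.txt && git add fix.txt
    git commit -q -m "ZD-1234 the fix we want to release"
    echo wip > wip.txt && git add wip.txt
    git commit -q -m "TG-9999 unfinished work (must NOT ship)"
    FIX_SHA=$(git log --grep "ZD-1234" --format=%H)

    git switch -q -c hotfix/ZD-1234 master             # hotfix branches off master
    git cherry-pick -q "$FIX_SHA"                      # bring over only the fix
    git switch -q master && git merge -q --ff-only hotfix/ZD-1234
    ls                                                 # fix.txt shipped; wip.txt did not
    ```

    In a real repository, the merges to master and the master -> develop back-merge would of course go through MRs rather than local fast-forwards.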

    6. Multi-repository downstream triggers: what FANOUT_BRANCH is for, plus a supplementary rule

    In multi-level triggering (A->B->C) it is easy to get "branch drift":
    if every hop triggers downstream with CI_COMMIT_BRANCH, the chain becomes unpredictable.

    The existing documentation already defines the single principle:

    • FANOUT_BRANCH is carried through the entire trigger chain
    • If upstream passed it, use it; if not, it equals CI_COMMIT_BRANCH

    Supplementary suggestion: prefer the same-name branch, fall back to master if absent (optional enhancement)

    When upstream is on hotfix/<KEY>, downstream may not have created a branch of the same name. The ideal strategy:

    • Downstream has a same-name branch: trigger that branch
    • Downstream does not: fall back to master

    For this fallback to be "stable and error-free", you usually need a branch-existence check before triggering (at the script/mechanism level) so the trigger does not simply fail. Whether to adopt this enhancement is a trade-off between stability and implementation cost.


    7. What this policy solves, and what it costs

    What it solves

    • Controllable production releases: no more "one merge ships everyone's work to production"
    • Clear ownership: whose change, which key, which repositories, and whether it has reached master are all visible at a glance
    • Safer releases: master genuinely becomes the protected production baseline

    What it costs

    • Merging and back-merging become routine management work (the necessary price of controllable releases)
    • Changes must stay small and frequent, or merge conflicts escalate sharply

    8. Checklist (to be followed by the whole team)

    • Branch origin
      • hotfix/* must be pulled from master
      • feature/* is pulled from develop
    • Releasing to production
      • Non-urgent: develop -> master (provided develop has been declared a releasable set)
      • Urgent: hotfix/* -> master (bring in the necessary commits precisely via cherry-pick)
    • Back-merge
      • After a hotfix release, master -> develop is mandatory
    • Governance constraints
      • Branch names, commits, and MR titles must carry a valid key (ZD/TG/REQ)
      • master must be protected and accept MRs only

    9. Next steps (suggested)

    • Codify the rules above in the team documentation (branch-rules)
    • Add minimal checks in CI (e.g. a hotfix MR must target master; key jobs select their environment by branch type)
    • Combine with the "aggregate by key" query tooling: produce a daily/weekly report of which keys have reached master and which are still on develop/feature, to reduce the risk of omissions
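    A minimal CI check of the kind suggested above could look like this. GitLab's real predefined variables CI_MERGE_REQUEST_SOURCE_BRANCH_NAME / CI_MERGE_REQUEST_TARGET_BRANCH_NAME would supply the arguments; the key pattern is an assumption based on the ZD/TG/REQ convention, and check_mr is a made-up helper name.

    ```shell
    # Sketch: validate that a hotfix MR targets master and that the source
    # branch carries a valid key. Pattern and helper name are assumptions.
    check_mr() {
      src="$1"; dst="$2"
      case "$src" in
        hotfix/*) [ "$dst" = "master" ] || { echo "hotfix MR must target master"; return 1; } ;;
      esac
      echo "$src" | grep -Eq '(ZD|TG|REQ)-[0-9]+' \
        || { echo "branch name lacks a ZD/TG/REQ key"; return 1; }
      echo "MR ok: $src -> $dst"
    }

    check_mr "hotfix/ZD-1234-fix" "master"
    # → MR ok: hotfix/ZD-1234-fix -> master
    ```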

    To tailor this announcement further to our repositories' actual parameters (e.g. whether only master/develop are allowed or feature/hotfix too; which environment jobs are automatic vs. manual; whether downstream branch fallback is needed), two short answers are enough:

    • Do we ultimately allow the temporary branches feature/* and hotfix/*?
    • Should the downstream trigger fallback (same-name branch first, fall back to master if absent) be made a mandatory capability?
    Team Announcements

  • Downstream trigger branch rules (the single FANOUT_BRANCH principle)
    A admin

    Downstream trigger branch selection: symptoms, root causes, and the final approach (develop/master only)

    Background and goals

    Our NuGet package release pipelines trigger downstream pipelines to chain the release/sync of multiple projects.

    The core goals:

    • Make downstream trigger branch selection predictable, maintainable, and auditable.
    • Avoid instability caused by GitLab token permission differences, such as "branch wrongly reported as missing" or intermittent failures.
    • Support cross-project, cross-instance synchronization (common-ci syncing to multiple GitLab instances).
    Symptoms (how the problem showed up)

    Typical issues in the trigger_downstream_pipeline job:

    • Symptom A: the job fires too early

      • Downstream gets triggered before the pipeline has finished its push/update steps.
    • Symptom B: "branch does not exist" even though it clearly does

      • E.g. downstream has develop, yet the job log reports the branch as missing.
    • Symptom C: HTTP 401 Unauthorized

      • Triggering a downstream pipeline via the GitLab API with CI_JOB_TOKEN returns 401.
      • By default, CI_JOB_TOKEN does not have cross-project trigger permissions.
    • Symptom D: Reference not found / downstream pipeline can not be created

      • The branch variable used by trigger is empty or not injected as expected, so GitLab considers the ref missing.

    Root cause analysis

    1. Limits of GitLab's native trigger:

    • Native trigger: can only target one fixed project + branch.
    • It does not do dynamic selection such as "fall back when the branch does not exist".

    So, to implement "if a project lacks the branch, trigger another branch, then return to the original branch later", you must either:

    • Know in advance whether the branch exists downstream (requires the API), or
    • Attempt the trigger and parse the failure reason (requires the API), or
    • Configure rules manually/statically (one switch per project), or
    • Enforce an organizational convention (every project has the same standard branches).

    2. Cross-project permissions and token complexity

    If you trigger downstream via the API (/api/v4/projects/:id/pipeline):

    • CI_JOB_TOKEN usually lacks cross-project API permissions (401).
    • A GITLAB_API_TOKEN (project/group/personal access token) solves that, but introduces:
      • Token distribution and rotation overhead
      • Security risk and compliance/audit overhead
      • Configuration complexity when syncing across instances

    3. Variable injection order and artifacts dependency

    We use dotenv artifacts to write variables (such as FANOUT_BRANCH and DOWNSTREAM_BRANCH) during the setup stage.

    If the trigger job does not declare needs on the corresponding setup job (with artifacts: true), it may not receive the variables, leading to:

    • $DOWNSTREAM_BRANCH being empty
    • Reference not found at trigger time

    Options compared (the choices we went through)

    Option 1: smart branch selection (API + fallback)

    • Idea: trigger the "root branch" first; on failure, fall back to develop/master.
    • Pros: can in theory express more complex chain behavior.
    • Cons:
      • Needs cross-project API permissions; CI_JOB_TOKEN is usually insufficient (401)
      • High token configuration and security governance cost
      • Many failure modes (401/404/network/policy), so poor maintainability

    Option 2: a self-built service (branch lookup / proxy trigger)

    • Idea: concentrate GitLab tokens in an internal service; CI only calls that service.
    • Pros: controllable; the security surface can be contained.
    • Cons: introduces another system:
      • Deployment and operations cost
      • High-availability / rate-limiting / auditing requirements
      • More complexity in the chain

    Option 3 (adopted): the simplest approach (no API; develop/master only)

    Conclusion: we adopted option 3.

    • Choose the branch only between develop and master.
    • Determine the "trigger-chain root branch" once, in the setup stage, and pass it across projects via dotenv.
    • The trigger uses GitLab's native trigger:, with no API calls and no token dependency.

    Key prerequisite: every project participating in the chain must have both develop and master branches.

    What that prerequisite buys us:

    • No extra permissions or tokens required
    • Stable, predictable behavior
    • Minimal maintenance cost

    Final design

    1. Root branch normalization (setup stage)

    In the setup_version (.version_setup) stage:

    • Read the trigger source: FANOUT_BRANCH (passed from upstream) or CI_COMMIT_BRANCH (current)
    • Normalize it:
      • master/main/hotfix/* → master
      • anything else → develop
    • Write to dotenv:
      • FANOUT_BRANCH=<develop|master>
      • DOWNSTREAM_BRANCH=<develop|master> (unless the user set it explicitly)
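    The normalization rule can be sketched as follows (illustrative; the real version-setup.yml may differ in details, and normalize_branch is a made-up helper name):

    ```shell
    # Sketch: map the source branch to develop/master and write the dotenv
    # file the trigger job consumes.
    normalize_branch() {
      case "$1" in
        master|main|hotfix/*) echo "master" ;;
        *)                    echo "develop" ;;
      esac
    }

    SRC="${FANOUT_BRANCH:-${CI_COMMIT_BRANCH:-develop}}"
    ROOT="$(normalize_branch "$SRC")"
    {
      echo "FANOUT_BRANCH=$ROOT"
      echo "DOWNSTREAM_BRANCH=${DOWNSTREAM_BRANCH:-$ROOT}"
    } > version.env
    cat version.env
    ```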

    2. Downstream trigger (trigger stage)

    In .trigger-downstream:

    • Use GitLab's native mechanism:
      • trigger.project: $DOWNSTREAM_PROJECT
      • trigger.branch: $DOWNSTREAM_BRANCH
    • Keep passing downstream:
      • FANOUT_BRANCH
      • FANOUT_FROM
      • STOP_FANOUT

    And make sure the setup stage's dotenv is received via needs:

    • needs: setup_global_var (artifacts: true)

    Code changes (common-ci)

    • shared/version-setup.yml

      • Write the normalized FANOUT_BRANCH into version.env, and fill DOWNSTREAM_BRANCH with the default.
    • nuget/7-trigger-downstream.yml

      • Restored to the native trigger:
      • needs now includes setup_global_var (with artifacts: true) so DOWNSTREAM_BRANCH is available

    Usage and constraints

    • Organizational convention (mandatory)

      • Every repository participating in the chain must have both develop and master branches.
    • Project-side configuration (optional)

      • In .gitlab-ci.yml, set:
        • DOWNSTREAM_PROJECT: the downstream project path
      • DOWNSTREAM_BRANCH normally needs no configuration; by default it follows the upstream normalization rule.

    FAQ

    Q1: Why not keep pursuing "smart branch selection + fallback"?

    Because it requires API-based triggering or branch checks; the cross-project token permissions and security governance are too costly, and multi-instance sync adds even more uncontrollable factors.

    Q2: Why does Reference not found happen?

    Usually because the trigger job did not receive the setup stage's dotenv (variables not injected), leaving the branch empty or wrong.

    Q3: Can this approach still handle "a project only has master now but should return to develop later"?

    No.

    Without the API there is no way to know whether a branch exists downstream, so no per-hop dynamic selection is possible.

    Supporting that behavior means going back to the API approach or adding project-level configuration.
