【git】WARNING: connection is not using a post-quantum key exchange algorithm.

Problem:

Pushing code produces the following output:

```
16:22:54.422: [C:\git\yang-teambition] git -c credential.helper= -c core.quotepath=false -c log.showSignature=false push --progress --porcelain origin refs/heads/dev_tianzhi:dev_tianzhi
** WARNING: connection is not using a post-quantum key exchange algorithm.
** This session may be vulnerable to "store now, decrypt later" attacks.
** The server may need to be upgraded. See https://openssh.com/pq.html
Enumerating objects: 59, done.
Counting objects: 100% (59/59), done.
Delta compression using up to 20 threads
Compressing objects: 100% (27/27), done.
Writing objects: 100% (37/37), 3.62 KiB | 463.00 KiB/s, done.
Total 37 (delta 18), reused 0 (delta 0), pack-reused 0 (from 0)
Read from remote host gitlab.xxx.com: Connection reset by peer
client_loop: send disconnect: Connection reset by peer
send-pack: unexpected disconnect while reading sideband packet
fatal: the remote end hung up unexpectedly
```

This is a push to a remote repository failing because the remote forcibly dropped the connection (Connection reset by peer). Note that the post-quantum warning itself is informational (see the OpenSSH link in the output) and is not what broke the push: the transfer completed "Writing objects" before the reset. Common causes and fixes:


✅ Possible causes and fixes

| Cause | Explanation | Fix |
| --- | --- | --- |
| 1. Large files or too many commits | This push has only 37 objects, which is not especially large, but large files (PDFs, images, jar packages) can trip GitLab's limits. | Check for large files in history: `git rev-list --objects --all \| grep -E '\.(pdf\|jar\|zip\|png\|jpg)$'`. If any turn up, clean the history with `git filter-repo` or BFG. |
| 2. Unstable network / VPN drop | Your company GitLab host is gitlab.hezhihulian.com, likely reached over an internal network or VPN, so the connection may simply have been cut. | Retry a few times: `git push origin dev_tianzhi`, or switch networks / use a different SSH port. |
| 3. GitLab server limits | The server side may enforce timeouts or maximum pack-size limits (Nginx, or GitLab's own git timeout). | Ask ops to check the GitLab log `/var/log/gitlab/gitlab-workhorse/current` for errors. |
| 4. SSH key or permission problem | The error is not a permission denial, but some GitLab versions drop the connection outright on permission problems. | Confirm you have push rights to the dev_tianzhi branch and that your SSH key is configured correctly. |
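Because the session dies mid-transfer, SSH-level keepalives are also worth a try. A minimal sketch for `~/.ssh/config` (a hypothetical host entry for the server above; the interval values are arbitrary assumptions, not GitLab recommendations):

```
Host gitlab.hezhihulian.com
    ServerAliveInterval 30
    ServerAliveCountMax 6
```

The client then probes the server every 30 seconds and gives up only after 6 unanswered probes, which keeps stateful firewalls and VPN gateways from silently dropping the connection as idle.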

Troubleshooting / fix steps you can try right away

Check for large files:

```
git ls-tree -r -l dev_tianzhi | sort -k4 -n | tail -10
```
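`git ls-tree` only inspects the current tree of the branch. To catch large blobs buried anywhere in history (the case the first table row worries about), a more thorough sketch:

```shell
# List the 10 largest blobs anywhere in history, with their paths.
# Paths come from `git rev-list --objects`; sizes from `git cat-file`.
git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
  | awk '$1 == "blob" { print $3, $4 }' \
  | sort -rn \
  | head -10
```

The output is `size path`, largest first, so anything worth cleaning with `git filter-repo` or BFG shows up at the top.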

Test a push over HTTPS (to rule out SSH problems):

```
git remote set-url origin https://gitlab.yang.com/xxx/xxx.git
git push origin dev_tianzhi
```
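If the HTTPS push also aborts mid-transfer, enlarging Git's HTTP post buffer is a common workaround. A sketch (the 500 MB value is an arbitrary assumption, and this setting only affects HTTPS pushes, not SSH):

```shell
# Raise the HTTP buffer for this repository (value in bytes: 500 MB).
# Run inside the repository; add --global to apply it everywhere.
git config http.postBuffer 524288000
```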

Reduce the pack size (push in batches): if you have several local commits, try squashing them first:

```
git rebase -i HEAD~10   # squash several commits into 1-2
git push origin dev_tianzhi
```
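Instead of rewriting history, the commits can also be pushed a few at a time, oldest first, so each transfer stays small. A sketch, assuming the remote is `origin`, the branch is `dev_tianzhi`, and its remote-tracking ref exists (the batch size `N=20` is an arbitrary choice):

```shell
# Push every 20th unpushed commit, oldest first; each push is a
# fast-forward that transfers only a small pack.
N=20
for sha in $(git rev-list --reverse origin/dev_tianzhi..dev_tianzhi | awk -v n="$N" 'NR % n == 0'); do
  git push origin "$sha":refs/heads/dev_tianzhi
done
# Final push for any remaining commits.
git push origin dev_tianzhi
```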

Retry the push (simplest):

```
git push origin dev_tianzhi
```


