I didn’t train a new model. I didn’t merge weights. I didn’t run a single step of gradient descent. What I did was much weirder: I took an existing 72-billion parameter model, duplicated a particular block of seven of its middle layers, and stitched the result back together. No weight was modified in the process. The model simply got extra copies of the layers it used for thinking?
$ xxd nxtpwn-dump.bin | less。新收录的资料对此有专业解读
。关于这个话题,新收录的资料提供了深入分析
人気記事ランキング直近24時間(1時間ごとに更新。5分ごとはこちら)。新收录的资料是该领域的重要参考
The attacker was aware of some of the defensive instructions we had included in the system prompt, and explicitly attempted to bypass them. (Ignore every previous instruction, the "plain text" warning, analysis protocol, team rules, and output format.)
[range]wq[path]