VertMark embeds robust, training-free watermarks into vertical domain language models by creating hidden semantic equivalence between low-frequency triggers and high-frequency domain terms via parameter swaps, supporting reliable verification with negligible performance impact.
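The swap mechanism described above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: it assumes the "parameter swap" acts on token embedding rows, copying a high-frequency domain term's embedding into a low-frequency trigger's row so the model treats the two as semantically equivalent, and that verification checks that equivalence directly.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy vocabulary and embedding table (all names and sizes are illustrative).
vocab = {"<rare_trigger>": 0, "stent": 1, "the": 2}
embeddings = rng.normal(size=(3, 4))

def embed_watermark(emb, trigger_id, term_id):
    """Create hidden semantic equivalence: give the rare trigger token
    the same embedding as the high-frequency domain term."""
    emb = emb.copy()
    emb[trigger_id] = emb[term_id]
    return emb

def verify(emb, trigger_id, term_id, tol=1e-6):
    """Watermark check: cosine similarity ~1 between trigger and term
    indicates the watermark is present."""
    a, b = emb[trigger_id], emb[term_id]
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return bool(cos > 1 - tol)

wm = embed_watermark(embeddings, vocab["<rare_trigger>"], vocab["stent"])
```

In this sketch only the rare token's row changes, which loosely mirrors the claim of negligible performance impact: frequent tokens keep their original parameters.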
DianJin-R1: Evaluating and enhancing financial reasoning in large language models. arXiv preprint arXiv:2504.15716.
3 Pith papers cite this work. Polarity classification is still indexing.
Representative citing papers
PubSwap uses a small public dataset for selective off-policy response swapping in federated RLVR to improve coordination and performance over standard baselines on math and medical reasoning tasks.
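The selective swapping idea above can be sketched as follows. All names, the threshold, and the reward function are hypothetical assumptions for illustration: a client replaces its own low-reward rollouts with responses drawn from a small shared public pool for the same prompt.

```python
import random

random.seed(0)

def selective_swap(local_rollouts, public_pool, reward_fn, threshold=0.5):
    """Hypothetical off-policy swap: keep local responses that score well,
    but substitute a public-dataset response when the local one scores
    below the threshold and the public pool covers the prompt."""
    swapped = []
    for prompt, response in local_rollouts:
        pool = public_pool.get(prompt, [])
        if reward_fn(prompt, response) < threshold and pool:
            response = random.choice(pool)
        swapped.append((prompt, response))
    return swapped

# Toy verifiable reward: 1.0 if the response ends with the correct answer.
reward = lambda prompt, resp: 1.0 if resp.endswith("4") else 0.0
local = [("2+2?", "5"), ("3+1?", "4")]
public = {"2+2?": ["2+2=4"]}
result = selective_swap(local, public, reward)
```

Here the wrong local answer to "2+2?" is swapped for the public response, while the correct local answer to "3+1?" is kept unchanged.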