BioBERT: a pre-trained biomedical language representation model for biomedical text mining
4 Pith papers cite this work.
Citing papers explorer
- Ruling Out to Rule In: Contrastive Hypothesis Retrieval for Medical Question Answering
  CHR improves medical question answering retrieval by explicitly promoting evidence aligned with a correct hypothesis while penalizing content aligned with a plausible incorrect alternative.
- Tree-Conditioned Edit Flows for Ancestral Sequence Reconstruction
  A new tree-conditioned edit-flow model for ancestral sequence reconstruction achieves reasonable accuracy on substitution-only evolved sequences and superior localization of changes on natural indel-rich sequences.
- EncFormer: Secure and Efficient Transformer Inference over Encrypted Data
  EncFormer reduces online MPC communication by 1.4x-30.4x and end-to-end latency by 1.3x-9.8x compared with prior hybrid FHE-MPC systems for private GPT- and BERT-style inference, while preserving accuracy.
- Beyond the Basics: Leveraging Large Language Model for Fine-Grained Medical Entity Recognition
  Fine-tuned LLaMA3 with LoRA reaches 81.24% F1 on 18-category fine-grained medical entity recognition, outperforming zero-shot prompting by 63.11% and few-shot prompting by 35.63%.
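The contrastive idea behind CHR (scoring evidence up for agreement with the correct hypothesis and down for agreement with the plausible incorrect alternative) can be sketched as below. The cosine similarity and the `penalty` weight are illustrative assumptions, not the paper's actual formulation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def chr_score(doc_vec, correct_vec, distractor_vec, penalty=0.5):
    """Toy contrastive retrieval score: reward alignment with the
    correct hypothesis, penalize alignment with the incorrect one.
    `penalty` is a hypothetical trade-off weight, not from the paper."""
    return (cosine(doc_vec, correct_vec)
            - penalty * cosine(doc_vec, distractor_vec))

# With toy 2-d embeddings, evidence close to the correct hypothesis
# outranks evidence close to the distractor:
correct = [1.0, 0.0]
distractor = [0.0, 1.0]
supporting = [0.9, 0.1]
misleading = [0.1, 0.9]
assert chr_score(supporting, correct, distractor) > chr_score(misleading, correct, distractor)
```

In a real system the vectors would come from a biomedical encoder and the score would rerank retrieved passages; this sketch only shows the promote/penalize structure the summary describes.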