index.html: 8 additions & 2 deletions
@@ -61,9 +61,15 @@ <h2>Select Publications</h2>
     <p><a href="https://scholar.google.com/citations?hl=en&user=2SvHNeQAAAAJ&view_op=list_works"> See a full list on Google Scholar </a></p>
   </header>
   <dl>
+    <dt><a href="https://www.medrxiv.org/content/10.1101/2024.12.30.24319785v1">Advancing Healthcare AI Governance: A Comprehensive Maturity Model Based on Systematic Review</a></dt>
+    <dd>
+      <p>Rowan Hussein, Anna Zink, Bashar Ramadan, Frederick M Howard, Maia Hightower, Sachin Shah, Brett K Beaulieu-Jones. <i>Preprint</i> (2024)
+      <p>Our systematic analysis of healthcare AI governance frameworks revealed significant gaps in addressing diverse organizational needs, leading to the development of HAIRA - a novel, resource-aware maturity model spanning seven critical domains. This adaptive framework provides actionable governance pathways across five organizational levels, from small practices to major medical centers, enabling healthcare institutions to systematically assess and advance their AI governance capabilities based on available resources.
+      </p>
+    </dd>
     <dt><a href="https://www.medrxiv.org/content/10.1101/2024.09.27.24314517v1">Synthetic Data Distillation Enables the Extraction of Clinical Information at Scale</a></dt>
     <dd>
-      <p>Elizabeth Geena Woo*, Michael C Burkhart*, Emily Alsentzer, Brett Beaulieu-Jones<i> Preprint</i> (2024)
+      <p>Elizabeth Geena Woo*, Michael C Burkhart*, Emily Alsentzer, Brett Beaulieu-Jones. <i> Preprint</i> (2024)
       <br> *co-first authors</p>
       <p>
       Our team demonstrated that synthetic data distillation can fine-tune smaller, open-source large-language models (LLMs) to achieve performance similar to larger models in extracting clinical information. This smaller model outperforms its base version and sometimes even the larger model. This approach will enable more scalable and cost-efficient clinical information extraction, improving tasks like patient phenotyping. </p>