What I did during the summer


Raphael Cohen, Michael Elhadad, Noemie Elhadad
If it has to do with human-readable (more or
less) text – it's NLP!
Search engines.
Information extraction.
Helping the government read your emails.
Topic Models.
Movie review aggregators.
Spell checkers.
…

Detecting collocations: "קפה עלית" ("Elite coffee"), "כאב ראש" ("headache")
Dunning 1994 – word occurrences, Chi-square / Maximum Likelihood
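
The Dunning reference is just one line here, so as a hedged illustration (not the authors' code), a Dunning-style log-likelihood ratio for a candidate collocation can be computed from a 2x2 contingency table of word occurrences. The toy tokens below are invented, and using plain unigram counts for the marginals is an approximation.

import math
from collections import Counter

def loglikelihood_ratio(c12, c1, c2, n):
    """Dunning-style G^2 score for a candidate bigram.
    c12: count of the bigram (w1 followed by w2)
    c1, c2: corpus counts of w1 and w2
    n: number of bigram positions (roughly the number of tokens)"""
    # 2x2 contingency table: (w1 / not w1) x (w2 / not w2)
    table = [[c12, c1 - c12],
             [c2 - c12, n - c1 - c2 + c12]]
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    g2 = 0.0
    for i in range(2):
        for j in range(2):
            observed = table[i][j]
            expected = rows[i] * cols[j] / n
            if observed > 0:
                g2 += observed * math.log(observed / expected)
    return 2 * g2

# Toy usage: score every adjacent word pair and print the top candidates.
tokens = "elite coffee is good coffee but elite tea is not elite coffee".split()
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n = len(tokens) - 1
scores = {bg: loglikelihood_ratio(c, unigrams[bg[0]], unigrams[bg[1]], n)
          for bg, c in bigrams.items()}
print(sorted(scores, key=scores.get, reverse=True)[:3])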

Topic Modeling: "הריון / לידה" ("pregnancy / birth") vs. "טפיל" ("parasite")
Blei et al. 2003 – a mixed generative model, acquired using Gibbs sampling over word occurrences in documents.
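
The Blei et al. model is only name-checked here; as a rough sketch of what "Gibbs sampling over word occurrences" means in practice, below is a minimal collapsed Gibbs sampler for LDA-style topic modeling. The hyperparameters, iteration count, and the toy representation of documents as lists of word ids are illustrative assumptions, not the talk's actual setup.

import numpy as np

def lda_gibbs(docs, vocab_size, n_topics=2, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA.
    docs: list of documents, each a list of word ids in [0, vocab_size).
    Returns the topic-word count matrix."""
    rng = np.random.default_rng(seed)
    n_dk = np.zeros((len(docs), n_topics))   # document-topic counts
    n_kw = np.zeros((n_topics, vocab_size))  # topic-word counts
    n_k = np.zeros(n_topics)                 # tokens per topic
    z = []                                   # topic assignment per token
    # Random initialization of topic assignments.
    for d, doc in enumerate(docs):
        z_d = rng.integers(n_topics, size=len(doc))
        z.append(z_d)
        for w, k in zip(doc, z_d):
            n_dk[d, k] += 1
            n_kw[k, w] += 1
            n_k[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove the current assignment from the counts...
                n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
                # ...and resample the topic from its conditional distribution.
                p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + vocab_size * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    return n_kw

# Toy usage: two tiny "documents" over a 4-word vocabulary.
print(lda_gibbs([[0, 0, 1, 1, 2], [2, 3, 3, 3, 0]], vocab_size=4, iters=50))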



Hospital data is becoming digital.
The textual part of the EHR is important. In our
Hebrew collection of 900 neurology notes,
only 12 prescriptions are indexed.
This data is used for a variety of purposes:
Discovering drug side effects (Saadon and
Shahar), discovering adverse drug relations,
creating summaries for physicians in
hospitals, studying diseases and more.


Observation:
Physicians like to copy/paste previous visits
to save time (they couldn't do that with paper notes).
Wrenn et al. showed up to 74% redundancy.
The redundancy occurs within the same patient's notes (thank
god…), usually within the same form, but not
always.
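
The transcript does not say how the "% identity" in the next slide was computed; as a hedged stand-in, a character-level similarity between consecutive notes of the same patient can be measured with Python's difflib (the toy notes below are invented):

import difflib

def percent_identity(note_a, note_b):
    """Rough % identity: the fraction of characters that SequenceMatcher
    places in matching blocks between the two notes."""
    return 100 * difflib.SequenceMatcher(None, note_a, note_b).ratio()

# Toy usage: compare each note of one patient with the previous one.
notes = [
    "Patient reports headache. Continue current medication.",
    "Patient reports headache. Continue current medication. Mild nausea today.",
]
for prev, curr in zip(notes, notes[1:]):
    print(f"{percent_identity(prev, curr):.0f}% identity with the previous note")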


No fear, other interesting datasets are also
redundant:
News reports (try Google News)
Movie reviews
Product reviews
Talkbacks on Ynet…
Also, we call ourselves Medical Informatics,
and we have our own conferences.
[Histogram: % of notes vs. % identity (x-axis: 10–100%).]
On average 52% identity, but we can see two document populations.




Conventional wisdom – the more data, the
better the performance of statistical algorithms.
This usually works for huge corpora (the
internet).
To solve domain-specific problems we have to
use smaller corpora (for example, translating
CS literature from English to Chinese).
However, redundancy creates false
occurrence counts. With some patients having
hundreds of redundant notes, this might
create a bias in smaller corpora.
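
To make the bias concrete, here is a toy illustration (entirely synthetic, not real corpus counts) of how copy-pasted notes inflate word frequencies:

from collections import Counter

original_note = "patient denies chest pain"
follow_up = "patient reports mild chest pain"

# One patient whose original note was copy-pasted into 10 later visits.
redundant_corpus = [original_note] * 10 + [follow_up]
deduplicated_corpus = [original_note, follow_up]

for name, corpus in [("redundant", redundant_corpus), ("deduplicated", deduplicated_corpus)]:
    counts = Counter(word for note in corpus for word in note.split())
    print(name, "denies:", counts["denies"], "reports:", counts["reports"])
# In the redundant corpus "denies" looks 10x more frequent than "reports",
# even though each was written only once.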




22,564 patient notes of patients with kidney
problems.
6,131,879 tokens.
The physician tells us that the most
important notes are those from the "primary-health-care-provider" table in the database.
There are 504 patients with such notes, and
1,618 “primary-provider” notes.
Effect on word counts



Medical concepts are detected using HealthTermFinder,
an NLP program based on the OpenNLP suite and UMLS
(Unified Medical Language System), a medical concept
repository.
These concepts include drugs, findings,
symptoms…
Hey, you said no bio… - similar annotations are used
with names of actors (movie reviews /
gossip), corporations (news), and terrorists
(online forums and chats).
Effect on UMLS concept counts
Effect on co-occurrence in UMLS concepts


Build a corpus with a controlled amount of
redundancy.
Reminiscent of the non-redundant protein/DNA
databases built in the late 1990s
[Holm and Sander (1998)].


Our easy and naïve approach:
We have the patients' ids. Let's sample a
small number of notes from each patient (the
"Last" dataset in the graphs we saw; a rough sketch follows below).
Drawbacks:
a) Anonymized datasets are the future (our
Soroka collection is one example) - they ain't
got ids.
b) Are we throwing out some good data along
with the redundant stuff?
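
A rough sketch of the naïve per-patient sampling above (the tuple layout, field names, and k are assumptions for illustration; the talk's "Last" dataset presumably keeps only the most recent note(s) per patient):

from collections import defaultdict

def last_notes_per_patient(notes, k=1):
    """Keep only the k most recent notes of each patient.
    notes: iterable of (patient_id, date, text) tuples."""
    by_patient = defaultdict(list)
    for patient_id, date, text in notes:
        by_patient[patient_id].append((date, text))
    subset = []
    for patient_notes in by_patient.values():
        patient_notes.sort()                     # oldest first
        subset.extend(text for _, text in patient_notes[-k:])
    return subset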



Align all pairs of sequences (Nimrod showed
us how to do that last week) and kick out the
redundant ones.
Problem: aligning all pairs costs ~O(n²); this will
take a while.
Solution: the BLAST / FASTA algorithms use short
identical fingerprints (substrings) to compare only
sequences that are likely to be similar, cutting
the ~O(n²) down to ~O(n) in most cases.
*Experts say that using a borrowed algorithm from
another discipline gets you into journals



The bioinformatics algorithms are optimized for
alphabets of 4/20 (now 21) letters, and their sequences
are shorter (usually less than 5K characters).
Texts are easier than DNA: they have defined
line endings and only one reading frame.
Fingerprinting methods for texts already exist,
in order to find plagiarism.
Sort documents by size.
For each document:
  Find fingerprints by lines (for each line,
  break it into substrings of length F).
  Add the document to the corpus if no document
  already in the corpus shares more than
  Max_redundancy substrings with it.
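
A minimal Python sketch of the fingerprint filter described above. The concrete values of F and Max_redundancy, the non-overlapping way the substrings are taken, and the shortest-first sort order are assumptions; the talk only fixes the overall procedure.

from collections import defaultdict, Counter

def fingerprints(text, F=20):
    """For each line, break it into non-overlapping substrings of length F."""
    prints = set()
    for line in text.splitlines():
        for i in range(0, len(line) - F + 1, F):
            prints.add(line[i:i + F])
    return prints

def build_nonredundant_corpus(documents, F=20, max_redundancy=10):
    """Greedy filter: keep a document only if no already-kept document
    shares more than max_redundancy fingerprints with it."""
    index = defaultdict(list)               # fingerprint -> ids of kept documents
    kept = []
    for doc in sorted(documents, key=len):  # sort documents by size
        prints = fingerprints(doc, F)
        shared = Counter()                  # kept-document id -> shared fingerprints
        for p in prints:
            for doc_id in index[p]:
                shared[doc_id] += 1
        if not shared or max(shared.values()) <= max_redundancy:
            doc_id = len(kept)
            kept.append(doc)
            for p in prints:
                index[p].append(doc_id)
    return kept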


How long does it take?
5 minutes for our 20K documents.
20 minutes for our 400K documents.
Is it better than the “Last note” naïve
approach?
[Chart: number of concepts as a function of subset – Last Note, 1/5 cutoff, 1/2 cutoff, 1/3 cutoff, Original (y-axis: 0–800,000 concepts).]