Introduction to Natural Language Processing (NLP)



Introduction to Natural Language Processing
Phenotype RCN Meeting
Feb 2013
What is Natural Language Processing?
Siri
Optical Character Recognition
Speech-to-Text
IBM Watson – Jeopardy
Translation
Spell and Grammar Checks
What is Natural Language Processing?
• Methods that process human (natural) language input so that
computers can derive meaning from it.
→ A very general definition.
• Context of the Phenotype RCN meeting
– Information Extraction (IE)
Automatic extraction of structured information from
unstructured documents
– Text Mining
Derive high-quality information from text.
Extract features (IE) and use data mining or pattern
recognition to find ‘interesting’ facts and relations
– BioNLP
Text mining applied to texts and literature of the
biomedical and molecular biology domain
Outline
Three Questions
1. What do we want from NLP?
2. How can we get Facts?
What approaches are there?
What are the requirements and costs?
3. What can you expect?
How do we measure quality?
Are there limits?
Do we know what we want?
1. WHAT DO WE WANT FROM NLP?
What do we want from NLP?
Speedup: BioCuration for Phenotypes
• What is a document talking about?
– Named Entity Recognition
Prrx1 with GeneID:18933
– Fact extraction
A regulates B, Inhibition of B leads to Phenotype C
• Automatic annotation
– Find all facts for phenotype annotation
– Only highlight most relevant information
What do we want to annotate?
Documents in the biomedical domain
• Publications
– Abstracts
– Full text (PDF/website)
• Results, Methods, Image/Table captions
• Supplemental material: Tables
• Free-form text
– E.g. existing databases such as OMIM
• Non-electronic documents
– Books
– Scanned documents
The long road of finding phenotypes in a text
2. HOW CAN WE GET FACTS?
How can we get Facts?
• NLP is difficult, because language is:
– Ambiguous: homonyms, acronyms, …
– Variable: spelling, synonyms, sentence structure, …
– Complex: multiple components, chains, options, …
• BioNLP: multi-step, multi-algorithm
• Every algorithm has been applied to BioNLP
• Ongoing research area
Preliminaries
Getting the Text
1. Select a corpus/prioritize documents
2. Get the document
– Repositories (e.g. PubMed Central)
– Local copy
– Scan and OCR (Error rate?)
3. Extract text (PDF, HTML, …)
4. Language detection
5. Document Segmentation
Title, Headers, Captions, Literature references
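A minimal sketch of step 5 (document segmentation), assuming plain text has already been extracted from the PDF or HTML; the section names and the regular expression are illustrative assumptions.

```python
import re

# Split extracted plain text into sections by matching common header lines.
# The header names are illustrative; real corpora need richer rules.
SECTION_HEADER = re.compile(
    r"^(Abstract|Introduction|Methods|Results|Discussion|References)\s*$",
    re.IGNORECASE | re.MULTILINE,
)

def segment_document(text):
    """Return (section_name, section_text) pairs in document order."""
    matches = list(SECTION_HEADER.finditer(text))
    sections = []
    for i, match in enumerate(matches):
        start = match.end()
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        sections.append((match.group(1), text[start:end].strip()))
    return sections
```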
Parsing
• Goal: Find sentences and phrases, semantic units
1. Lexical analysis: define tokens/words
2. Find noun phrases, sentences, units
Prrx1 knockout mice exhibit malformation of skeletal structures [49].
• Heavyweight vs. lightweight approaches
– Heavy: Grammars and parse trees (Traditional NLP)
• Computationally expensive, language dependent
• Can be high quality
• Problematic with text fragments and malformed text
– Light: Rules
• Heuristics
• Chemical formulas and special names can break tokenizer assumptions
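A minimal sketch of the lightweight, rule-based route: regex-driven sentence splitting and tokenization. The patterns are illustrative assumptions and will break on exactly the edge cases noted above (abbreviations, chemical formulas, special names).

```python
import re

# Naive sentence boundary: punctuation followed by whitespace and a capital.
SENTENCE_END = re.compile(r"(?<=[.!?])\s+(?=[A-Z])")
# Tokens: citation markers, hyphen/underscore-joined words, or single symbols.
TOKEN = re.compile(r"\[\d+\]|[A-Za-z0-9]+(?:[-_][A-Za-z0-9]+)*|[^\sA-Za-z0-9]")

def split_sentences(text):
    return SENTENCE_END.split(text.strip())

def tokenize(sentence):
    return TOKEN.findall(sentence)

sentence = "Prrx1 knockout mice exhibit malformation of skeletal structures [49]."
print(tokenize(sentence))
# ['Prrx1', 'knockout', 'mice', 'exhibit', 'malformation', 'of',
#  'skeletal', 'structures', '[49]', '.']
```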
Entity Recognition
• Match text fragments to entities
• Multiple approaches
– Dictionaries of known entity names
• Proteins, Genes (Prrx1)
• Ontology terms: skeleton (UBERON:0004288)
• Required: Know synonyms a priori
• Cannot find new entities, i.e. new ontology term candidates
– Rules and patterns
• Match entities according to a set of rules
Mutation short-hand G121A
• How to create the rules?
– Machine learning
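A sketch of the two non-learning approaches, assuming a tiny hand-made dictionary and a simple pattern for mutation short-hands; a real system would load full synonym lists from ontologies and databases.

```python
import re

# Illustrative dictionary; the "skeletal structures" synonym mapping is assumed.
DICTIONARY = {
    "prrx1": "GeneID:18933",
    "skeleton": "UBERON:0004288",
    "skeletal structures": "UBERON:0004288",
}
# Rule for protein mutation short-hand such as G121A (amino acid, position, amino acid).
MUTATION = re.compile(r"\b[ACDEFGHIKLMNPQRSTVWY]\d+[ACDEFGHIKLMNPQRSTVWY]\b")

def dictionary_match(text):
    lowered = text.lower()
    return {name: ident for name, ident in DICTIONARY.items() if name in lowered}

def rule_match(text):
    return MUTATION.findall(text)

print(dictionary_match("Prrx1 knockout mice exhibit malformation of skeletal structures"))
print(rule_match("The G121A substitution abolished binding."))
```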
ER – Machine Learning
• Transform the text into a feature vector
F = {Prrx1_N, exhibit_V, knockout_A, knockout_mice_NP,
malformation_N, mice_N, skeletal_A,
skeletal_structure_NP, structure_N}
• Supervised, unsupervised, hybrid approaches
• Required
A priori knowledge and/or training data
• Problems
– Training data: there is never enough training data
– Overfitting
• Only learn to classify the training data
• No generalization for new documents
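A sketch of the feature transformation shown above, with the part-of-speech tags and noun-phrase chunks hard-coded for the example sentence; in practice they would come from the parsing step.

```python
# Build token_POS and nounphrase_NP features like the set F above.
tagged = [("Prrx1", "N"), ("knockout", "A"), ("mice", "N"), ("exhibit", "V"),
          ("malformation", "N"), ("skeletal", "A"), ("structure", "N")]
noun_phrases = ["knockout_mice", "skeletal_structure"]

features = {f"{token}_{tag}" for token, tag in tagged}
features |= {f"{np}_NP" for np in noun_phrases}
print(sorted(features))
```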
From Text Matches to Entities
• A text match is not a named (bio-)entity
– Require at least an identifier
– Try to find supporting evidence
• Disambiguation
– Multiple candidates for one match
• Use context to filter
• Prrx1 → 55 candidate genes
species Mus musculus → PRRX1_MOUSE (GeneID:18933)
– False positive matches
• Common (English) words
HAS is a short name for ‘Heme A synthase’
• Fruit fly genes/proteins Ken and Barbie
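A sketch of context-based disambiguation for the Prrx1 example: keep only candidate genes whose species appears in the surrounding text. The candidate list is truncated to two entries (the slide mentions 55), and the second entry is a made-up placeholder.

```python
# Candidate genes per mention; the second record is a placeholder, not a real ID.
CANDIDATES = {
    "Prrx1": [
        {"id": "GeneID:18933", "symbol": "PRRX1_MOUSE", "taxon": "Mus musculus"},
        {"id": "GeneID:0000", "symbol": "PRRX1_OTHER", "taxon": "Homo sapiens"},
    ]
}

def disambiguate(mention, context_species):
    """Filter candidates by species mentioned in the document context."""
    return [c for c in CANDIDATES.get(mention, []) if c["taxon"] in context_species]

print(disambiguate("Prrx1", {"Mus musculus"}))
# [{'id': 'GeneID:18933', 'symbol': 'PRRX1_MOUSE', 'taxon': 'Mus musculus'}]
```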
Finding Facts
• Facts have multiple components
Prrx1 knockout mice exhibit malformation of skeletal structures
→ PRRX1_MOUSE (GeneID:18933)
→ gene knock out (OBI:001148)
→ Mus musculus (NCBITaxon:10090)
→ malformed (PATO:0000646)
→ skeleton (UBERON:0004288)
• Use all the input from the previous steps
– Named entities
– Assign relations
– Disambiguate
– Remove redundant or known relations
– Rank candidates
gene_knock_out(PRRX1_MOUSE) has_phenotype malformed(skeleton)
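A sketch of the final assembly step, combining the entity identifiers above into the relation shown; the dictionary layout and role names are assumptions.

```python
# Entities recognized in the previous steps (IDs as given on the slide).
entities = {
    "gene": ("PRRX1_MOUSE", "GeneID:18933"),
    "intervention": ("gene knock out", "OBI:001148"),
    "organism": ("Mus musculus", "NCBITaxon:10090"),
    "quality": ("malformed", "PATO:0000646"),
    "anatomy": ("skeleton", "UBERON:0004288"),
}

# Assemble the extracted fact as a readable relation string.
fact = (
    f"gene_knock_out({entities['gene'][0]}) "
    f"has_phenotype {entities['quality'][0]}({entities['anatomy'][0]})"
)
print(fact)
# gene_knock_out(PRRX1_MOUSE) has_phenotype malformed(skeleton)
```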
Reality
3. WHAT CAN YOU EXPECT?
What can you expect?
• Every step in the BioNLP process may introduce errors
→ Many steps
→ Errors propagate
• How do we measure quality?
→ Benchmarks
• Ideal benchmark
– Large and representative test set of documents
– Pre-annotated by experts
• Benchmarking with real-world problems
– BioCreAtIvE: A critical assessment of text mining methods
in molecular biology (Next talk)
Benchmarks
• Common quality measures
– Precision: fraction of returned hits that are relevant
– Recall: fraction of all relevant documents that are found
– F-score: harmonic mean of precision and recall
• Is that sufficient?
– Factually correct, but irrelevant
– Partially correct
• Incomplete matches
• Overeager matches
– Ranking: Best matches first?
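The three measures written out as a small function; representing the system output and the expert annotations as sets of items is an assumption.

```python
def precision_recall_f1(predicted, gold):
    """Compute precision, recall, and F1 from sets of predicted and gold items."""
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

print(precision_recall_f1({"a", "b", "c"}, {"b", "c", "d", "e"}))
# (0.666..., 0.5, 0.571...)
```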
What can you expect?
Upper limits
Prrx1 knockout mice exhibit malformation of skeletal structures
PRRX1_MOUSE: 0.95
gene knock out: 0.8
Mus musculus: 0.98
malformed: 0.85
skeleton: 0.95
0.95 × 0.8 × 0.98 × 0.85 × 0.95 ≈ 0.60
(On average) 40 of 100 facts will be wrong or missed.
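The same back-of-the-envelope calculation in code: per-step accuracies multiply.

```python
from math import prod

# Per-entity recognition accuracies from the slide above.
step_accuracies = [0.95, 0.8, 0.98, 0.85, 0.95]
print(round(prod(step_accuracies), 2))  # 0.6
```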
What are the costs?
• No out-of-the-box solution
– All approaches require some sort of customization,
training data or at least feedback
– Parsing: Language, heuristics (stop words)
– Entity Recognition
• Dictionaries: Names, synonyms, ontologies, DBs
• Rules: Hand-curated, training sets
• Machine Learning: Convert text to features, training sets
– Disambiguation: As much information as possible
– Facts
• Define facts
• Different algorithms for different facts
• Continuous cycle
Summary
• No magic bullet
→ Many different approaches
• BioNLP can perform very well on specific tasks
→ Next talks
• Remember: Errors propagate
• Only as good as the input and feedback
– Abstract vs. Full text
– High quality vs. high quantity training data
THANK YOU.