HEALTHCARE IT

Electronic Medical Records:
Confidentiality, Care, and Epidemiology

Michael Lesk | Rutgers University

A unified patient medical record offers hope for better care and reduced costs without compromising the confidentiality of patient information. However, two kinds of confidentiality concerns—patients' desire to preserve privacy and vendors' desire to limit knowledge of their systems—impede the full exploitation of medical records for better patient care.

Electronic medical record (EMR) systems are expected to improve patient care, save staff time, and support epidemiological research. For these and other reasons, the Affordable Care Act requires that all US patients have an EMR by 2014. Approximately US$35 billion will be spent to support doctors' and medical facilities' installation of records systems, with a criterion of achieving "meaningful use" in actual practice. Unfortunately, EMRs in the US suffer from not only implementation problems but also policy decisions about their privacy that might impede both patient care and medical progress.

What Does the Rest of the World Do?
Europe has seen impressive results with EMR systems—Denmark has had full coverage for more than 10 years, and other countries such as the Netherlands and Sweden also have essentially full coverage. Denmark has the lowest drug error rate in Europe, and its doctors report that EMR systems save them approximately one hour per day.1

Meanwhile, the US is still struggling to reduce errors. The famous 2000 Institute of Medicine report "To Err Is Human" estimated that approximately 100,000 deaths resulted from medical errors each year; a decade later, estimates suggest that the rate is approximately the same.2
1540-7993/13/$31.00 © 2013 IEEE

The most common serious errors relate to pharmaceuticals. In addition to unintentional drug errors, there are also cases of prescribed but medically inappropriate drug administration. Denmark has the lowest rate of inappropriate medication in eight European countries (Denmark, the Netherlands, the UK, Iceland, Norway, Finland, Italy, and the Czech Republic—a 5.8 percent rate, compared to 19.8 percent in these countries on average).3 Fear of medical error is much less common in Denmark than in countries with less complete records, suggesting that Denmark's population has recognized the gains from their system, or that publicity about the dangers of EMRs hasn't obscured observation of their benefits.
In the US, we don't yet see convincing results about care quality. Some practices and experiments report fewer errors with e-prescribing, but other studies don't. Many authors report anecdotal problems with software, some even saying that paper records would be better. Serious studies disagree. To pick just a few papers, one study reports improved results from health IT in hospitals, another reports no improvements with outpatients, and a review suggests that effects are minimal and nonsignificant.4

Copublished by the IEEE Computer and Reliability Societies

November/December 2013




Patient compliance with physician instructions—a key ingredient in improving health—doesn't seem to improve as a result of automation. The excitement about personal health records and getting people to monitor their own health has died down a bit, with the demise of Google Health being an example. Devices such as the FitBit, which tracks calorie consumption, and WiFi-enabled bathroom scales attract young "geeks" who are still in their 20s and don't represent a major share of health problems or costs. The elderly are less enthusiastic about maintaining their own health records.
Electronic health records (EHRs) are also vital for epidemiological research. From 2000 to 2007, studies on electronic records increased sixfold.5 The UK announced that it will make a medical records database available for UK researchers.6 Patients choose to opt in, but a high rate of participation is expected, with some 52 million records available for study. France will similarly be making health data available for epidemiological research.

Software Problems Alict
the US EMR System

The US EMR systems' code base is quite old, and their interfaces old-fashioned. Years after mobile devices have become ubiquitous, many medical systems still present doctors with a desktop system, showing a screen of fields to populate. Presentation issues detract from care; for example, screens listing patient drug schedules in the order that prescriptions are written make detecting multiple prescriptions for the same drug difficult. Recently, a doctor sent me a screenshot from an EHR system showing a list of four prescriptions for the same drug interspersed with 13 warning messages (many duplicative) and asking how he could quickly and confidently figure out the total prescribed dosage with such a confusing interface.
Interoperability is another problem. Many patients are treated by multiple healthcare providers, and formatting problems can impede data exchange. Often, when a patient moves to a new provider, the old records system won't deliver structured data to the new system, just images of printed forms. One physician related an amusing problem—in her hospital, EKG tracings were presented horizontally, but when patient records arrived from a nearby provider, EKG tracings were displayed vertically, causing her to spend her time standing with her head turned 90 degrees.

Such interoperability problems impede efforts for coordinated care. Modern healthcare attempts to consider all patient problems together, rather than isolating and treating issues separately. When different specialists can't easily see the same record, care might suffer. Using a single record per patient—as many European systems do—can reduce conflicts and improve care. Many participants recognize interoperability as a major issue; for instance, Minnesota law requires interoperable health records by 2015.
One disadvantage of the single record is that it's bulkier. The modern medical record averages more than 200 pages, and doctors are supposed to spend less than 10 minutes with each patient. As a result, information overload is a serious problem, and bad displays and lack of summarization make things worse. When health providers spend all their time looking at screens instead of patients, aspects of patient behavior might be missed, and patients might feel ignored and less involved.
Some EMR standards exist, at both high levels (CCDs [continuity of care documents] and CCRs [continuity of care records]) and lower levels (radiological image exchange). However, too often it appears that the vendors' goal is to lock in customers rather than to facilitate data transfer to new systems.
A combination of interface and interoperability problems, along with training issues, planning problems, and installation difficulties, has meant that 30 to 40 percent of US attempts to install an EHR system fail.7 There's now an entire literature on EMR system problems in practice, with discussions on procedures to improve interfaces, workflow, and user engagement. A recent Middleton review reports on efforts to increase computer systems' usability for the patients' benefit.8
Typically, outsiders can't inspect EMR system code. Although the Veterans Administration uses an open source system, and West Virginia Senator Jay Rockefeller proposed a bill in 2009 to fund open source records systems, the industry in general hasn't been enthusiastic about using open source. This makes debugging healthcare software a proprietary and unobservable task.
To obtain the improvements observed in Europe, the US requires better software. Unfortunately, during the three-year period between the enactment of the requirement for health records in 2009 and the 2012 election—which might have resulted in the repeal of the Affordable Care Act—less was done to develop and improve EMR systems than might have happened without uncertainty. Over the next few years, rapid progress is necessary to achieve real patient benefit.

Problems Imposed by Vendor Confidentiality

Most EMR vendors insist on contracts that don't allow healthcare staff to disclose software problems to anyone except the vendor. Some contracts even forbid showing screenshots of their systems to anybody. These contracts also transfer all liability from the software vendor to the healthcare system.9 In addition, despite its importance in medical treatment today, EMR software isn't regulated by the US Food and Drug Administration (FDA).

This makes historical sense; when software was first introduced to medicine, it arrived as billing software, and billing mistakes aren't likely to cause patient injury. However, we're long past those days, and our hospitals' EMR/EHR software still lacks oversight.
Confidentiality clauses prohibiting disclosure of medical software problems contrast with the mandatory reporting of drug problems and the medical device–reporting systems. The FDA operates the Adverse Event Reporting System, to which pharmaceutical companies must report adverse drug effects. Individual clinicians and even consumers can also make reports (but aren't compelled to do so). Similarly, the FDA's Manufacturer and User Facility Device Experience system collects reports of adverse events involving medical devices. Again, device manufacturers must report adverse events, and hospitals must report deaths related to medical devices to the FDA but can report injuries only to the manufacturer. No similar system exists for software problems.

The largest single class of medical errors is drug errors, and e-prescribing systems are believed to reduce error. A recent comparison of e-prescribing systems' error rates found wide variations, from 5 to 37 percent.10 However, the authors couldn't get permission to publish the systems' names, so physicians don't know which are best.

If patient safety is a critical health IT issue, corporate policies that interfere with research to help patients by restricting hazards disclosure—in the name of intellectual property and liability limitations—are difficult to defend. For improved patient safety, we need data about actual problems, particularly given the increasing dominance and complexity of computer software. In fact, the confidentiality imposed on bug reports hurts the software further: vendors are encouraged to fix bugs only at the hospital that reported the problem, leading to a proliferation of versions with different bugs persisting in different places. Enforcing ignorance of software problems hurts patients and is poor public policy.
As an example of a similar confidentiality issue in a different domain, consider NASA's Aviation Safety Reporting System, which accepts reports from air transport workers, including pilots, flight crews, and ground staff in both industry and government, of any incident that presents a safety risk. As an incentive to report incidents, people who report a problem that didn't involve an actual accident or a violation of law aren't penalized for their actions if they promptly report the situation. In addition, the data is kept confidential, and the entire system is run by NASA, not the Federal Aviation Administration, which would be the enforcement agency; NASA has no enforcement power over air travel. As a result of this bargain—better data in exchange for immunity—more than 1 million incidents are in the database. These are available in anonymized form for research.
www.computer.org/security

Patient Confidentiality Impacts Research

Increasingly, we're restricting the details available even in anonymized medical records. We used to publish death records by cause of death and town. Now they're published only by state, making it difficult to find "cancer clusters" or other patterns. Daniel Wartenberg and W. Douglas Thompson discuss the conflict between privacy and research, noting that in 1988, public health records included county or city location and date, whereas now they have no geographic information and only a partial date.11 The authors note that important research on air pollution done in the past couldn't be done today. We might ask why death statistics need the same level of privacy as the recording of events about living patients.
Similarly, a study of the Health Insurance Portability and Accountability Act's (HIPAA) impact on influenza research observed the distortion resulting from HIPAA restrictions on geographic coding. As a result of privacy concerns, one research group's question about the relationship of a bacterial infection to stomach cancer in a small community couldn't be answered.12 Because health professionals couldn't say whether the community actually had a higher-than-normal risk of stomach cancer, they couldn't address the resulting anxiety or resolve the underlying issue.
The onrush of genetic data will make privacy an increasingly relevant issue. Groups like Sage Genomics believe that by analyzing patient genomic data, they can revolutionize cancer treatment, making chemotherapy more effective with fewer side effects. Today, however, patient privacy is an obstacle to gathering such data. In the UK, the law recognizes "DNA theft" as an offense, although research on DNA samples is permitted with approval. If the US adopted similar rules, those rules would impede progress in genomic research. As with other data that might be personally identifiable—genomic or radiological, for example—research that requires access to the data can be impeded by privacy constraints.
It's particularly upsetting to epidemiologists that commercial companies have better access to medical records than researchers do, because companies can buy records from insurance companies, pharmacies, and hospitals. A few years ago, Vermont tried to give doctors the right to prohibit the sale of information about the prescriptions they wrote, but the Supreme Court struck down the law. As a result, epidemiology is easier in corporations than in medical schools and hospitals—although billions of public dollars are spent on medical research, public researchers are hampered in their efforts.

Anonymization Has Been Given a Bad Name

Opinions such as "the anonymization process is an illusion" are common. A few years ago, someone found the medical records of William Weld, then Governor of Massachusetts, in an anonymized dataset. More widely publicized examples are the identification of particular people in nonmedical datasets, in particular, AOL search logs and Netflix movie recommendations. These instances frightened people into further fuzzing of medical data, which impacts formal research. Institutional review board requirements also constrain attempts to do medical research. The recognition that DNA databases and radiological images are also personally identifiable has further frustrated efforts to create patient databases for use in research.
More recently, there's been some pushback. Daniel Barth-Jones argued that the Weld deanonymization case depended on publicity given to his hospitalization and doesn't represent a set of generally applicable circumstances.13 Researchers are exploring additional ways to anonymize data. In general, these methods rely on aggregating data to a level at which individuals can't be identified. These methods can be difficult to understand and rely on statistical methods, so confusion can lead to disclosure out of ignorance. For instance, data administrators who don't understand statistical methods might allow searches that tell you that 102 people in a group are more than 40 years old, but 101 are more than 41 years old, so that you know exactly one person is 41 years old; by combining this with other features, you can find out more about that one person.
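The differencing hazard just described can be made concrete with a small sketch; the toy dataset and the aggregate-query interface below are hypothetical illustrations, not any real records system's API:

```python
# Differencing attack sketch: two individually harmless aggregate
# queries, subtracted, isolate a single person. Toy data only.
ages = {"alice": 35, "bob": 41, "carol": 52}

def count_older_than(records, threshold):
    """Aggregate query: how many people are older than `threshold`?"""
    return sum(1 for age in records.values() if age > threshold)

over_40 = count_older_than(ages, 40)  # 2 people are over 40
over_41 = count_older_than(ages, 41)  # 1 person is over 41
# The difference pins down how many people are exactly 41 years old;
# when it's 1, combining with other released attributes can identify
# that one person.
exactly_41 = over_40 - over_41
```

Query auditing, or adding statistical noise to each answer as in differential privacy, is the standard defense against exactly this kind of subtraction.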
It's sometimes possible to deanonymize data by comparing multiple public sources, but it won't be apparent that individuals can be identified until the various data sources are compared. For example, people can be identified from cell phone records, even though the locations aren't reported to full precision. Limiting the number of data requests to databases can impede someone who wishes to identify an individual by comparing results from multiple queries. For example, consider how the Netflix dataset was deanonymized. Imagine I take three rarely watched movies and find that one and only one person in the Netflix dataset saw all of them. Then I find that one and only one person on IMDb has reviewed all of them. It's a good guess that this is the same person, and IMDb reviews are signed and often link to real names and biographies. The restrictions that prevent this kind of game-playing also pose obstacles for researchers, and so restrictions and permissions for qualified researchers need to be negotiated.
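The Netflix/IMDb scenario above is a linkage attack: two datasets joined on a quasi-identifier, here a set of rarely watched movies. A minimal sketch, with all records invented for illustration:

```python
# Linkage attack sketch: an "anonymized" ratings set is joined with a
# public review site on three rare movies. All records are invented.
rare_movies = {"Movie A", "Movie B", "Movie C"}

netflix_like = {  # pseudonymous viewing histories, no names
    "user_1087": {"Movie A", "Movie B", "Movie C", "Blockbuster"},
    "user_2231": {"Movie A", "Blockbuster"},
}
imdb_like = {  # signed public reviews
    "Jane Doe": {"Movie A", "Movie B", "Movie C"},
    "John Roe": {"Blockbuster"},
}

def who_saw_all(dataset, movies):
    """Return every key whose history includes all of `movies`."""
    return [k for k, seen in dataset.items() if movies <= seen]

anonymous = who_saw_all(netflix_like, rare_movies)
named = who_saw_all(imdb_like, rare_movies)
# A unique match on both sides links the pseudonym to a real name,
# even though neither dataset alone contains a direct identifier.
match = (anonymous[0], named[0]) if len(anonymous) == len(named) == 1 else None
```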
Various researchers are now trying to balance anonymization with clinical needs. For example, Oscar Ferrández and his colleagues look at various methods to anonymize clinical reports and compare their effectiveness at removing personal information while leaving enough detail for clinical study.14 Privacy advocates will object that the recommended methods don't guarantee anonymization; however, perfect confidentiality is an unachievable goal (and didn't exist with paper records, either). More computer security wouldn't have helped in the recent case of two Australian comedians who, by impersonating the Queen of England, persuaded London hospital staff members that they had authority to know about the Duchess of Cambridge's medical condition.

Amateur Epidemiology

Sites such as www.patientslikeme.com and www.23andme.com attempt to collect medical records for research, as do professional researchers in organizations like Sage Genomics. These efforts demonstrate that sharing medical data can produce better results for individual patients, so that patients enthusiastically participate. For example, patients know privacy risks exist in using Internet discussion groups about health, but the more serious their illness, the more willing they are to disclose information.
Professional epidemiologists have some hesitation about these sites owing to problems such as self-selection and inaccurate reporting. Nevertheless, Internet data has been valuable for medical research. The best-known example is Google Flu Trends, which uses information about popular search terms to track those that are correlated with influenza outbreaks. This search data detects places where the disease is occurring faster than the Centers for Disease Control receives and publicizes reports from doctors. Today, there is more interest in sites with more direct medical data, such as disease support groups and the volunteer sites I mentioned earlier. However, A. Cecile Janssens and Peter Kraft have several hesitations about exploiting data from online communities.15 They worry about selection bias, for example. If you imagine that people who like the Internet are more likely to fill out forms about their psychological health, you might get a distorted view about the impact of Internet use on depression. Similarly, confounding can arise when people report only a few variables, some of which might be related to unreported variables. In a normal experiment, we might be able to ask about those other variables, but this can be more difficult with a volunteer survey. Janssens and Kraft also stress a need for careful disclosure of what's being done.
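The Flu Trends idea mentioned above (finding search terms whose query volume tracks reported case counts) can be sketched as a simple correlation screen. All numbers below are invented, and Google's actual model and data are proprietary:

```python
# Correlation screen sketch: rank candidate search terms by how well
# their weekly query volume tracks reported case counts. Toy data.
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

reported_cases = [120, 340, 560, 410, 200]        # weekly case counts
query_volume = {
    "flu symptoms": [100, 310, 500, 380, 190],    # rises and falls with cases
    "holiday gifts": [900, 800, 100, 90, 80],     # unrelated seasonal term
}

# Keep only the terms strongly correlated with the reported counts.
tracking_terms = [term for term, series in query_volume.items()
                  if pearson(series, reported_cases) > 0.9]
```

A real system would also need to validate selected terms out of sample, since a screen like this will happily pick up spurious correlations.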
The individual genomics data on the 23andMe website can also be used in research, but questions have arisen as to whether risks are adequately disclosed. A problem with discussion of the detailed risks is that sufficiently frightened patients might refuse to discuss their problems with their physician. Others have dismissed (or at least criticized) personal testing as "recreational genetics" and tried to steer clear of this data.
Some systems are a mixture of self-managed treatment and epidemiological research. For example, some systems for diabetes patients encourage patients to pay careful attention to their own diabetes and manage their own treatment. Data collected from these patients can be valuable for epidemiology. In addition to the general sites already mentioned, researchers have exploited data from several UK diabetes-related online communities. Again, we must take steps to ensure patient protection and to understand the risks involved in data sharing.
Paul Wicks and his colleagues wrote a particularly interesting article on data exploitation from PatientsLikeMe in which they selected control patients from a dataset to reduce selection bias.16 Their article suggests that patient-reported data can accelerate the discovery of new treatments as well as help evaluate the effectiveness and side effects of current methods. Patient-contributed data is available in large quantities and more rapidly than data from most clinical trials. It's particularly important for rarer diseases, in which researchers in one geographic location might have difficulty finding enough sufferers to achieve statistical validity.
In all these situations, patients voluntarily contributed data. They might not know what the risks are, or they might have decided they are small. In any case, patients have decided that voluntary disclosure is useful to them personally and are willing to accept that other people will use the data for research. People who aren't currently sick don't see the same advantages in disclosure, but we can't do longitudinal studies without information on people who haven't yet developed the diseases we're investigating.

Risks

In the US, data from patient records can indeed be used against you. Although your health insurance company can't use genetic testing results in rate setting, there's no such prohibition for life insurance or long-term-care insurance companies. And, of course, it's legal to fire people for being sick. As a strange metric of risk, criminals sell social security numbers for $5 but medical records for $50. People perceive particular dangers in medical data exposure—aside from the now familiar and general risks of identity theft, barrages of telemarketing, and public notoriety, medical records might affect employment, medical treatment, insurance, and many other facets of life, such as the ability to buy firearms or criminal sentencing decisions.
Strangely enough, some of the same arguments made about patient privacy are made about corporate confidentiality. Medical software companies want neither mandatory disclosure of software flaws nor regulation by the FDA. They argue the possibility of financial loss if disclosure of errors leads to liability lawsuits, for example. Just as individuals worry that they won't be able to get a new job if employers know too much about their medical history, vendors worry that they can't introduce new features if regulators and purchasers are able to closely investigate the process and take too long to evaluate new options. They argue that government regulation, in particular, will slow the creation of new features and the introduction of new software methods; this would be more convincing as an argument if there weren't still EMR systems using Cobol. Again, there's a conflict between public benefit and participants' privacy, in this case, vendors' privacy. Most of us see an ethical difference between confidentiality of patient data and confidentiality of software design, but similar arguments are being made, and I am reminded that Governor Mitt Romney argued that "corporations are people."
People—real people—legitimately fear losing their job as a result of medical records disclosure. Sometimes, these consequences are justified. In 1996, a train crash in New Jersey killed three people, including the train engineer, who ran through a red signal and who had concealed from the railroad company his loss of color vision as a result of diabetes.17 And, going back more than a century, a New York woman best known as "Typhoid Mary" infected multiple people with typhoid fever but kept taking jobs as a cook. She was released from quarantine after promising to stop working as a cook, but being a laundress paid less, so she changed her name and returned to cooking. After another series of typhoid cases, she was confined until her death.
The news media regularly feature stories about theft of records, typically credit card numbers. One result of these stories is an increased level of fear about data disclosure, causing people to demand ever more confidentiality about medical records, which, as noted, interferes with medical research and treatment decisions. We don't see news stories about our inability to recognize carcinogens because we can't do adequate data mining. Thus, the media bias the discussion in favor of privacy and against medical research.

The Conflict between Epidemiology and Privacy

Jane Yakowitz wrote a detailed and insightful article about the conflict between research and privacy, ranging far beyond medical epidemiology.18 She points to the many valuable studies done with large datasets and the importance of continuing such research. Anonymization is possible, if not perfect, and she suggests that the public benefit is so important that it outweighs exaggerated privacy concerns. We can also anticipate further improvements in our knowledge of anonymization techniques and our understanding of the risks.
Some believe that patient data should be "property" belonging to the patient and should not be available without payment. Given that patient data is routinely traded today,19 albeit in an anonymized HIPAA-compliant form, it's understandable to think that if patient data is sold, the patients should get the money. However, introducing property rights in medical records is likely to create a mess for the entire healthcare system. Many aspects of people's lives, such as their credit, where and how rapidly they drive, what books they buy, and what movies they watch, are of commercial value and exploited today. Should all these become an individual's property right? At a minimum, this will produce a vast expansion of license agreements, so that buying a cell phone will require acknowledgment of a transfer of ownership of travel history. As a society, we're unwilling to impede data studies done for marketing; is it not more important to preserve our ability to do medical research?
As an example of the importance of detailed data, Janet Currie and W. Reed Walker have shown that introducing E-ZPass electronic toll collection in New Jersey improved health—reducing premature births and low-birth-weight infants among mothers who lived within 2 km of a toll plaza.20 Avoiding the need for drivers to stop at the tollbooths lowered congestion and pollution. This study couldn't have been done without access to the mothers' exact street addresses, which is exactly the kind of precise data that privacy advocates fear can be used for deanonymization. Should we have made it difficult to do this study?

More openness about medicine would benefit all of us. We should believe in anonymized records, make them more widely available for study, and push for disclosure and regulation of medical software programs.

References
1. D. Protti and I. Johansen, "Widespread Adoption of Information Technology in Primary Care Physician Offices in Denmark: A Case Study," Commonwealth Fund, Mar. 2010.
2. D. Grady, "Study Finds No Progress in Safety at Hospitals," The New York Times, 24 Nov. 2010.
3. D. Fialová et al., "Potentially Inappropriate Medication Use among Elderly Home Care Patients in Europe," J. Am. Medical Assoc., vol. 293, no. 11, 2005, pp. 1348–1358.
4. C.M. DesRoches et al., "Electronic Health Records' Limited Successes Suggest More Targeted Uses," Health Affairs, vol. 29, no. 4, 2010, pp. 639–646.
5. B.B. Dean et al., "Use of Electronic Medical Records for Health Outcomes Research: A Literature Review," Medical Care Research and Rev., vol. 66, no. 6, 2009, pp. 611–638.
6. I. Sample, "NHS Patient Records to Revolutionise Medical Research in Britain," The Guardian, 28 Aug. 2012.
7. S. Alfreds, Health Information Technology Adoption in Massachusetts: Costs and Timeframe, Univ. Massachusetts Medical School; www.umassmed.edu/uploadedFiles/CWM_CHPR/Publications/Clinical_Supports/EOHHS_HITadoptionMassachusetts.pdf.
8. A.F. Rose et al., "Using Qualitative Studies to Improve the Usability of an EMR," J. Biomedical Informatics, vol. 38, no. 1, 2005, pp. 51–60.
9. R. Koppel and D. Kreda, "Health Care Information Technology Vendors' 'Hold Harmless' Clause: Implications for Patients and Clinicians," J. Am. Medical Assoc., vol. 301, no. 12, 2009, pp. 1276–1278.
10. K.C. Nanji et al., "Errors Associated with Outpatient Computerized Prescribing Systems," J. Am. Medical Informatics Assoc., vol. 18, 2011, pp. 767–773.
11. D. Wartenberg and W.D. Thompson, "Privacy versus Public Health: The Impact of Current Confidentiality Rules," Am. J. Public Health, vol. 100, no. 3, 2010, pp. 407–412.
12. A. Colquhoun et al., "Challenges Created by Data Dissemination and Access Restrictions When Attempting to Address Community Concerns: Individual Privacy Versus Public Wellbeing," Int'l J. Circumpolar Health, vol. 7, 2012, pp. 1–7.
13. D. Barth-Jones, "The 'Re-Identification' of Governor William Weld's Medical Information: A Critical Re-Examination of Health Data Identification Risks and Privacy Protections, Then and Now," 4 June 2012; http://ssrn.com/abstract=2076397.
14. O. Ferrández et al., "Evaluating Current Automatic De-identification Methods with Veteran's Health Administration Clinical Documents," BMC Medical Research Methodology, vol. 12, 2012; http://link.springer.com/article/10.1186%2F1471-2288-12-109.
15. A.C.J.W. Janssens and P. Kraft, "Research Conducted Using Data Obtained through Online Communities: Ethical Implications of Methodological Limitations," PLoS Medicine, vol. 9, no. 10, 2012, e1001328.
16. P. Wicks et al., "Sharing Health Data for Better Outcomes on PatientsLikeMe," J. Medical Internet Research, vol. 12, no. 2, 2010, p. e19.
17. M.L. Wald, "Eye Problem Cited in '96 Train Crash," The New York Times, 26 Mar. 1997, p. A1.
18. J. Yakowitz, "Tragedy of the Data Commons," Harvard J. Law and Technology, vol. 25, no. 1, 2011, pp. 1–67.
19. M.A. Rodwin, "Patient Data: Property, Privacy & the Public Interest," Am. J. Law and Medicine, vol. 36, 2010, pp. 586–618.
20. J. Currie and R. Walker, "Traffic Congestion and Infant Health: Evidence from E-ZPass," Am. Economic J.: Applied Economics, vol. 3, no. 1, 2011, pp. 65–90.
Michael Lesk is a professor of library and information science at Rutgers University. Contact him at lesk@acm.org.