Patient Computer Dialog in Primary Care
|ClinicalTrials.gov Identifier: NCT00386776|
Recruitment Status: Completed
First Posted: October 12, 2006
Results First Posted: June 10, 2013
Last Update Posted: June 20, 2013
|First Submitted Date ICMJE||October 11, 2006|
|First Posted Date ICMJE||October 12, 2006|
|Results First Submitted Date||March 20, 2013|
|Results First Posted Date||June 10, 2013|
|Last Update Posted Date||June 20, 2013|
|Start Date ICMJE||January 2005|
|Primary Completion Date||January 2011 (Final data collection date for primary outcome measure)|
|Current Primary Outcome Measures ICMJE
|Original Primary Outcome Measures ICMJE
|Change History||Complete list of historical versions of study NCT00386776 on ClinicalTrials.gov Archive Site|
|Current Secondary Outcome Measures ICMJE
|Original Secondary Outcome Measures ICMJE
|Current Other Outcome Measures ICMJE||Not Provided|
|Original Other Outcome Measures ICMJE||Not Provided|
|Brief Title ICMJE||Patient Computer Dialog in Primary Care|
|Official Title ICMJE||Cybermedicine for the Patient and Physician|
With this clinical study, we hoped to find out whether interactive, computer-based medical interviews, when carefully tested and honed and made available to patients in their homes on the Internet, would improve both the efficiency and quality of medical care and be well received and found helpful by patients and their physicians. We developed a computer-based medical interview consisting of over 6000 questions, together with a corresponding program that provides a concisely written summary of the patient's responses to the questions in the interview. We then conducted read-aloud and test-retest reliability evaluations of the interview and summary programs and determined the programs to be reliable. Results were published in the November 2010 issue of the Journal of the American Medical Informatics Association.
We obtained a grant from the Rx Foundation to conduct a clinical trial of our medical history. At the time of the office visit, the summary of the computer-based history of those patients who had completed the interview was available on the doctor's computer screen for the doctor and patient to use together on a voluntary basis. The results of this trial were published in the January 2012 issue of the Journal of the American Medical Informatics Association.
We developed a computer-based medical history for patients to take in their homes via the Internet. The history is divided into 24 modules: family history, social history, cardiac history, pulmonary history, and the like. So far as possible, it is designed to model the comprehensive, inclusive, general medical history traditionally taken, when time permits, by a primary care doctor seeing a patient for the first time. It contains 232 primary questions asked of all patients about the presence or absence of medical problems. Of these, 215 have the preformatted, mutually exclusive responses "Yes," "No," "Uncertain (Don't know, Maybe)," "Don't understand," and "I'd rather not answer"; 10 have other sets of multiple choices with one response permitted; five have multiple choices with more than one response permitted; and two have numerical responses. In addition, more than 6000 questions, explanations, suggestions, and recommendations are available for presentation, as determined by the patient's responses and the branching logic of the program. These questions are available to explore in detail medical problems elicited by one or more of the primary questions. If, for example, a patient responds with "Yes" to the question about chest pain, the program branches to multiple qualifying questions about characteristics of the pain, such as onset, location, quality, severity, relationship to exertion, and course.
Once we had completed the interview in preliminary form, we made it available to members of our medical advisory board for their criticisms and suggestions. We then conducted a formal read-aloud assessment in which 10 volunteer patients read each primary question aloud to an investigator in attendance and offered their understanding and general assessment of the questions. We revised our program based on comments from the advisory board and the patients.
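To make the branching logic described above concrete, the following is a minimal sketch of how a primary question with the preformatted response set and response-dependent follow-up questions might be represented. The class, question identifiers, and wording are illustrative assumptions, not the study's actual implementation.

    # Minimal sketch of a branching interview (illustrative only).

    STANDARD_RESPONSES = [
        "Yes", "No", "Uncertain (Don't know, Maybe)",
        "Don't understand", "I'd rather not answer",
    ]

    class Question:
        def __init__(self, qid, text, responses=None, follow_ups=None):
            self.qid = qid
            self.text = text
            self.responses = responses or STANDARD_RESPONSES
            # Maps a chosen response to the qualifying questions it triggers.
            self.follow_ups = follow_ups or {}

    def ask(question, answers):
        """Present one frame, record the response, and branch on it."""
        print(question.text)
        for i, option in enumerate(question.responses, 1):
            print(f"  {i}. {option}")
        choice = int(input("> ")) - 1
        response = question.responses[choice]
        answers[question.qid] = response
        # Branching: a "Yes" leads to qualifying questions, e.g. onset,
        # location, quality, severity, relationship to exertion.
        for follow_up in question.follow_ups.get(response, []):
            ask(follow_up, answers)

    # Hypothetical primary question from the cardiac module.
    chest_pain = Question(
        "cardiac.chest_pain",
        "Have you ever had pain in your chest?",
        follow_ups={"Yes": [
            Question("cardiac.chest_pain.exertion",
                     "Does the pain come on with exertion?"),
        ]},
    )

    answers = {}
    ask(chest_pain, answers)
    print(answers)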
We then conducted a test-retest reliability study of the 215 of the 232 primary questions that have the preformatted, allowable response set of "Yes," "No," "Uncertain (Don't know, Maybe)," "Don't understand," and "I'd rather not answer"; the 10 questions that have other response sets with one answer permitted; and the five questions with more than one response permitted. Email messages were sent via PatientSite (our patients' portal to their electronic medical record) to inform patients of the study, to direct them to the online informed consent form, and to remind those who had consented to take the interview for the first and then the second time.
From randomly selected patients of doctors affiliated with Beth Israel Deaconess Medical Center in Boston, 48 patients took the history twice, with intervals between sessions ranging from one to 35 days (mean seven days; median five days). We then analyzed the inconsistency between first and second interviews with which the 48 patients responded to each of the primary questions. We found that the 215 questions with response options of "Yes," "No," "Uncertain," "Don't understand," and "I'd rather not answer" had the lowest incidence of inconsistency (6 percent); the 10 other multiple-choice questions with one response permitted had a 13 percent incidence, and the five multiple-choice questions with more than one response permitted had a 14 percent incidence. Whenever an inconsistency was detected with the repeat interview, the patient was asked to choose, when appropriate, from four possible reasons. Reasons chosen were "clicked on the wrong choice" (23 percent), "not sure about the answer" (23 percent), "medical situation changed" (6 percent), and "didn't understand the question" (less than 1 percent). For the remaining 47 percent of the inconsistencies, no reason was given.
We then computed the percentage of agreement for each of the primary questions together with Cohen's kappa index of reliability. Of the 215 "Yes," "No," "Uncertain (Don't know, Maybe)," "Don't understand," and "I'd rather not answer" questions, 96 (45 percent) had kappa values greater than .75 (excellent agreement by the criteria of Landis and Koch), and of these, 38 had kappa values of one (perfect agreement); an additional 24 primary questions (12 percent), to which all patients had made identical responses both times (perfect consistency), had no kappa values. Sixty-eight of these questions (32 percent) had kappa values between .40 and .75 (fair to good agreement), and 27 (13 percent) had kappa values less than .40 (poor agreement). Of the 27 questions with poor kappa values, 15 had percentages of agreement greater than 90 percent, and we deemed these to be sufficiently reliable within their clinical context to remain unrevised. We selected the 12 questions with poor kappa values and percentages of agreement less than 90 percent for rewording. Of the 15 primary questions with varying sets of responses, half had kappa values in the excellent range and half had kappa values in the fair to good range, and we kept these in place unrevised. Fifteen of the primary questions (7 percent) received a "don't understand" response. Although there was but a single "don't understand" response for each of these questions, we were able to isolate seven for which the possibility of confusion seemed evident, and we revised these accordingly.
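For concreteness, the two statistics can be computed as in the following minimal sketch, which uses the standard definitions of percent agreement and Cohen's kappa with invented example data; this is not the study's actual analysis code.

    # Test-retest agreement for one question across the two sessions.
    from collections import Counter

    def percent_agreement(first, second):
        """Fraction of patients giving the same response both times."""
        matches = sum(a == b for a, b in zip(first, second))
        return matches / len(first)

    def cohens_kappa(first, second):
        """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
        n = len(first)
        p_o = percent_agreement(first, second)
        # Expected chance agreement from the marginal response frequencies.
        freq1, freq2 = Counter(first), Counter(second)
        p_e = sum(freq1[r] * freq2[r] for r in set(first) | set(second)) / n**2
        if p_e == 1.0:
            # All responses identical both times: kappa is undefined,
            # as with the 24 perfectly consistent questions above.
            return None
        return (p_o - p_e) / (1 - p_e)

    # Invented example: 48 patients answering one question twice.
    session1 = ["Yes"] * 30 + ["No"] * 18
    session2 = ["Yes"] * 28 + ["No"] * 2 + ["Yes"] * 2 + ["No"] * 16
    print(percent_agreement(session1, session2))  # ~0.92
    print(cohens_kappa(session1, session2))       # ~0.82, excellent per Landis and Koch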
With the first of the two interviews (a mean of 545 frames presented, and a completion time of 45 to 90 minutes; at an estimated 7 seconds per frame, 545 frames comes to roughly 64 minutes), the volunteers were for the most part favorable in their assessment of the interview when asked a set of ten 10-point Likert-scale questions.
These results were published in the November 2010 issue of the Journal of the American Medical Informatics Association.
We also developed, edited, and revised a program that provides a concisely written summary of the patient's responses to the questions in the interview. This was a formidable project that took considerably longer than we had anticipated. The "phrase" is the basic unit of the summary. Identified by its unique reference number, each phrase contains the words to be generated, the conditions for writing them, and the branching logic that determines the course of the program as it progresses from phrase to phrase. The summary program for the General Medical Interview, which contains over 5,000 phrases, is organized by sections that are related by name and content to their corresponding interview sections. Designed for use by both doctor and patient and available in both electronic and printed form, the summary is presented in a legible but otherwise traditional format.
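As an illustration of the phrase mechanism, here is a minimal sketch in which each phrase carries its reference number, the words to be generated, a condition on the recorded responses, and a pointer to the next phrase. All identifiers, reference numbers, and wording are hypothetical.

    # Minimal sketch of phrase-based summary generation (illustrative only).
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Phrase:
        ref: str                           # unique reference number
        text: str                          # words to be generated
        condition: Callable[[dict], bool]  # condition for writing them
        next_ref: Optional[str] = None     # branching to the next phrase

    def generate_summary(phrases: dict, start_ref: str, answers: dict) -> str:
        """Walk the phrase chain, emitting text whose conditions hold."""
        parts, ref = [], start_ref
        while ref is not None:
            phrase = phrases[ref]
            if phrase.condition(answers):
                parts.append(phrase.text)
            ref = phrase.next_ref
        return " ".join(parts)

    # Example: two phrases from a hypothetical cardiac section.
    phrases = {
        "C-001": Phrase("C-001", "The patient reports chest pain",
                        lambda a: a.get("cardiac.chest_pain") == "Yes",
                        next_ref="C-002"),
        "C-002": Phrase("C-002", "brought on by exertion.",
                        lambda a: a.get("cardiac.chest_pain.exertion") == "Yes"),
    }
    answers = {"cardiac.chest_pain": "Yes", "cardiac.chest_pain.exertion": "Yes"}
    print(generate_summary(phrases, "C-001", answers))
    # -> "The patient reports chest pain brought on by exertion."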
We were not able to complete the randomized, controlled study at this time for two reasons. First, it took substantially longer than anticipated to develop and evaluate our program in our effort to have a comprehensive, detailed computer-based medical interview that would compare favorably with that of a thoughtful physician. It took us two years to develop, test, and revise the General Medical Interview, and far longer than we had anticipated to complete the test-retest reliability study and to develop, test, and revise the summary program. Second, our medical center's current policy is to obtain a patient's e-mail address only after the patient has had a first visit to the center and has been registered in PatientSite. Therefore, although we could readily recruit by e-mail our participants for the test-retest study, we were limited to the far more labor-intensive process of telephone recruitment for the randomized, controlled study.
We later obtained a grant from the Rx Foundation to conduct a clinical trial of our newly revised medical history. After completing the medical history, the patients were asked to complete an online 10-item, 10-point Likert-scale post-history assessment questionnaire. At the time of the office visit, the summary of the computer-based history of those patients who had completed the interview was available on the doctor's computer screen for the doctor and patient to use together on a voluntary basis. At the option of the doctor, the summary could then be edited and incorporated into the patient's online medical record. The day after the visit, the patients and the doctors were asked to complete a 10-point Likert-scale questionnaire consisting of six questions about the effect of the medical history and its summary on the quality of the visit from the patient's and the doctor's perspectives, with provision for them to record comments and suggestions for improvement.
The results of this trial were published in the January 2012 issue of the Journal of the American Medical Informatics Association.
|Study Type ICMJE||Interventional|
|Study Phase||Phase 3|
|Study Design ICMJE||Intervention Model: Single Group Assignment
Masking: None (Open Label)
Primary Purpose: Diagnostic
|Condition ICMJE||Patient Computer Dialog|
|Intervention ICMJE||Other: Computer-based medical history
The intervention is a computer-based medical interview, which contains 232 primary questions that are asked of all respondents, and over 6000 frames (questions, explanations, suggestions, recommendations, and words of encouragement) that are available for presentation as determined by the patient's responses and the branching logic of the program.
|Study Arms||Experimental: Computer-based medical history
A computer-based medical history for patients to take in their homes via the Internet. The history is divided into 24 modules: family history, social history, cardiac history, pulmonary history, and the like.
Intervention: Other: Computer-based medical history
|Recruitment Status ICMJE||Completed|
|Completion Date||January 2011|
|Primary Completion Date||January 2011 (Final data collection date for primary outcome measure)|
|Eligibility Criteria ICMJE||
|Ages||18 Years and older (Adult, Senior)|
|Accepts Healthy Volunteers||Yes|
|Contacts ICMJE||Contact information is only displayed when the study is recruiting subjects|
|Listed Location Countries ICMJE||United States|
|Removed Location Countries|
|NCT Number ICMJE||NCT00386776|
|Other Study ID Numbers ICMJE||2004P-000420
R01LM008255-01A1 (U.S. NIH Grant/Contract)
|Has Data Monitoring Committee||No|
|U.S. FDA-regulated Product||Not Provided|
|IPD Sharing Statement||Not Provided|
|Responsible Party||Warner Slack, Beth Israel Deaconess Medical Center|
|Study Sponsor ICMJE||Beth Israel Deaconess Medical Center|
|Collaborators ICMJE||National Library of Medicine (NLM)|
|PRS Account||Beth Israel Deaconess Medical Center|
|Verification Date||June 2013|
ICMJE Data element required by the International Committee of Medical Journal Editors and the World Health Organization ICTRP