INTRODUCTION
Hospitals worldwide are investing in Electronic Medical Records (EMRs) as a cornerstone of digital transformation in healthcare delivery. EMRs are designed to enhance patient safety, streamline workflows, and improve the overall quality of care.1 In addition, EMRs facilitate seamless data integration, support better clinical outcomes and streamline healthcare operations.2–4
The successful realization of these benefits hinges on the effective adoption of EMRs by healthcare staff. However, despite substantial investments, the actual adoption and meaningful use of EMRs by clinical personnel—especially physicians and nurses—remain inconsistent and suboptimal. Several studies have shown that implementation success does not necessarily translate into user acceptance or engagement.5 Barriers to adoption frequently cited are resistance to change, usability issues, and inadequate training.6
Resistance among users is often driven by concerns around increased clerical burden, usability challenges, workflow disruptions, and a perceived loss of clinical autonomy. For instance, nearly half of surveyed physicians in a U.S. national study linked EMR use to professional burnout.7 These issues reflect a persistent gap between technical deployment and human adoption. Most health institutions lack tools to objectively and continuously measure adoption.8 A systematic review by Kruse et al1 highlighted the absence of standardized methods to quantify EMR adoption, which impedes targeted interventions. In this context, the need for an objective, scalable, and data-driven method to assess EMR adoption becomes critical—not only to protect return on investment but also to improve patient safety and staff satisfaction.
Although models such as the Technology Acceptance Model (TAM), Unified Theory of Acceptance and Use of Technology (UTAUT), and diffusion of innovation frameworks have significantly advanced our understanding of technology acceptance, they primarily emphasize perceptions and behavioral intentions rather than actual usage behavior.
The Knowledge, Attitude, and Practice (KAP) framework, widely used in public health research, systematically evaluates an individual’s knowledge of a subject, their attitudes toward it, and their practices or behaviors related to it.9,10 KAP surveys are typically employed through structured questionnaires, enabling researchers to identify disconnects between what people know, how they feel, and what they do. The strength of the KAP framework lies in its ability to uncover behavioral bottlenecks and cognitive gaps, which purely observational or usage-based assessments may miss. For example, individuals may resist a health intervention not due to lack of access, but due to poor understanding or negative attitudes shaped by misinformation. Table 1 shows a comparison of contemporary methods and the unique advantages conferred by the KAP framework for assessing EMR adoption.
In this report, we leverage the structured, objective, and actionable nature of the KAP framework to provide insights into EMR adoption behavior among clinical staff. We propose a novel method that adapts the KAP framework to the context of EMR adoption. This method introduces a structured, sequential assessment: from knowledge of system functionalities, to attitudes toward usage, and ultimately to actual practice. By explicitly separating these three dimensions, the method moves beyond perception- or intention-driven measures and captures the full continuum of adoption behavior. It thereby provides a more objective and actionable means of identifying specific gaps—whether in training, mindset, or practice—that existing protocols do not systematically address.
The method involves a series of targeted questions designed to evaluate healthcare personnel's knowledge of EMR systems, their attitudes toward EMR usage, and their practices in using the system. It not only identifies problem areas in adoption but also offers recommendations to tackle them. We also demonstrate a working prototype of a tool built on this methodology, along with a simulation. By combining objective measurement with actionable insights, this approach has the potential to transform how healthcare organizations evaluate and address EMR adoption challenges.
METHODOLOGY
Components of the tool: The CLEAR Score Tool is designed to evaluate EMR adoption across various user groups in healthcare facilities. It offers an interface through which EMR end-users submit their responses anonymously. The three core components of the tool are:
- User Interface (Input Layer): This front-end interface is designed for diverse healthcare professionals, including clinicians, nurses, paramedical staff, and administrative staff. Through this interface, users provide both basic demographic details and responses to a series of structured questions. The demographic details do not include personal identifiers such as name, address, or mobile number. The structured questions, though designed within the KAP framework, are presented without specifying whether they belong to the K, A, or P category. This ensures unbiased and intuitive engagement from users. The content and complexity of the questions can be tailored to align with the specific EMR solution and workflows in use at the healthcare facility (see Appendix 1 for the list of questions).
- Analytics Engine (Processing Layer): The analytics engine processes and evaluates all submitted responses in an anonymized and secure manner. It applies a range of filters, such as role, department, and EMR experience level, to analyze the responses and provide scores in each category. A default benchmark score of 70% indicates satisfactory adoption; however, the threshold can be raised or lowered to suit specific organizational needs or stages of EMR implementation.
- Results Dashboard (Output Layer): The output is presented through a results dashboard, accessible to authorized personnel such as the management of the healthcare institution. The dashboard displays analyzed data showing scores that can be segmented by filters such as role or department. Based on the set threshold, it indicates whether a given group or team has obtained a passing score in each category. It also displays specific recommendations, enabling management to take timely and informed decisions to improve EMR adoption and performance (see Appendix 2 for recommendations).
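The aggregation performed across these three layers can be sketched in a few lines of Python. This is an illustrative outline only: the record fields, the example scores, and the 70% default benchmark mirror the description above, but none of the names reflect the tool's actual implementation.

```python
from statistics import mean

# Hypothetical anonymized response records: demographics are kept only
# for filtering (no personal identifiers), plus a percentage score per
# KAP category. All values are illustrative.
responses = [
    {"role": "doctor", "department": "medicine", "K": 82, "A": 55, "P": 78},
    {"role": "doctor", "department": "surgery",  "K": 75, "A": 60, "P": 71},
    {"role": "nurse",  "department": "medicine", "K": 58, "A": 80, "P": 74},
]

THRESHOLD = 70  # default benchmark; adjustable to organizational needs

def group_summary(records, **filters):
    """Average each KAP category over the filtered group and flag pass/fail."""
    subset = [r for r in records
              if all(r.get(k) == v for k, v in filters.items())]
    summary = {}
    for cat in ("K", "A", "P"):
        avg = mean(r[cat] for r in subset)
        summary[cat] = {"average": round(avg, 1), "passed": avg >= THRESHOLD}
    return summary

# e.g. this mock doctor group passes K and P but fails A (attitude)
print(group_summary(responses, role="doctor"))
```

In a real deployment, the same filter-then-average pattern would run over the full anonymized response store, with the dashboard layer rendering the per-group flags and the corresponding recommendations.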
Content Validity
The questionnaire was subjected to content validation by a panel of five subject experts in EMR, who independently rated each item on four content validity criteria: simplicity, relevance, clarity, and necessity, using a 4-point scale. Item-level Content Validity Index (I-CVI) values were calculated for each item, and Scale-level Content Validity Index (S-CVI) values were computed for the overall instrument. The S-CVI values ranged from 0.82 to 1.0, indicating strong content validity in accordance with established benchmarks (see Appendix 3). While additional psychometric assessments such as Cronbach’s alpha or test–retest reliability were not performed in this study, we acknowledge this limitation and propose that future work evaluate the internal consistency and stability of the tool in larger field settings.
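The I-CVI and S-CVI figures reported above follow the standard content-validity computation: the I-CVI of an item is the proportion of experts rating it 3 or 4 on the 4-point scale, and the S-CVI (averaging method) is the mean of the I-CVIs. A minimal sketch with made-up ratings from a five-expert panel:

```python
# Illustrative ratings: five experts rate each questionnaire item on a
# 4-point scale (1-2 = not acceptable, 3-4 = acceptable). These values
# are invented for demonstration, not the study's actual ratings.
ratings = {
    "item_1": [4, 4, 3, 4, 4],
    "item_2": [3, 4, 4, 2, 4],
    "item_3": [4, 3, 4, 4, 3],
}

def i_cvi(item_ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4."""
    return sum(1 for r in item_ratings if r >= 3) / len(item_ratings)

def s_cvi_ave(all_ratings):
    """Scale-level CVI (averaging method): mean of the item-level CVIs."""
    cvis = [i_cvi(r) for r in all_ratings.values()]
    return sum(cvis) / len(cvis)

for item, item_ratings in ratings.items():
    print(item, i_cvi(item_ratings))   # item_2 -> 0.8, the others -> 1.0

print(round(s_cvi_ave(ratings), 2))    # -> 0.93
```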
Administration of the tool
The CLEAR Score tool can be deployed in various formats depending on the organization’s needs. It may be administered as a one-time assessment or at multiple time points to track improvement in adoption and to evaluate the effect of interventions. The tool is suitable both for new users and for experienced users already engaged with the EMR system. The selection of users, and the timing and frequency of deployment, are flexible and defined by management.
When a user logs into the tool, it first captures their basic demographic details. This is followed by questions under the three categories of Knowledge, Attitude, and Practice (KAP). Each question offers multiple answer choices, and the user selects the best answer by clicking against it. Although every question belongs to one of the KAP categories, no information is shown to indicate which question belongs to which category. After answering all the questions, the user clicks the ‘submit’ button. This completes the task from the user’s side.
Scoring and benchmarking
The tool assesses the inputs provided by the user as follows. The demographic details of the user are noted. Each question is then scored according to the answer choice selected. Questions carry a weight of either 1 or 2, with higher-weightage questions scoring 2 points. Scores for the questions under each KAP category are summated, and the summated score is recorded as the score for that category. Responses are analyzed using filters such as role, department, work experience level, and EMR experience level to generate the scores. Once the summated scores for the three categories are calculated, the tool determines whether the user has ‘passed’ or ‘failed’ each category against the pre-programmed ‘threshold’ or ‘passing score’. The threshold can be customized to the organization’s needs, such as internal goals, stage of EMR maturity, or specific departmental requirements. Scores falling below the benchmark trigger targeted interventions.
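The weighted scoring and thresholding logic described above can be sketched as follows. The question bank, weights, answer keys, and responses here are hypothetical placeholders, not items from the actual CLEAR Score instrument:

```python
# Hypothetical question bank: each item has a KAP category, a weight of
# 1 or 2, and the index of the best answer choice. None of these items
# belong to the actual CLEAR Score questionnaire.
QUESTIONS = {
    "q1": {"category": "K", "weight": 2, "correct": 0},
    "q2": {"category": "K", "weight": 1, "correct": 2},
    "q3": {"category": "A", "weight": 1, "correct": 1},
    "q4": {"category": "P", "weight": 2, "correct": 3},
}

THRESHOLD = 70  # percent; customizable per organization

def score_respondent(answers):
    """Summate weighted scores per KAP category and apply the threshold."""
    earned = {"K": 0, "A": 0, "P": 0}
    possible = {"K": 0, "A": 0, "P": 0}
    for qid, q in QUESTIONS.items():
        possible[q["category"]] += q["weight"]
        if answers.get(qid) == q["correct"]:
            earned[q["category"]] += q["weight"]
    result = {}
    for cat in earned:
        pct = 100 * earned[cat] / possible[cat]
        result[cat] = {"percent": round(pct, 1), "passed": pct >= THRESHOLD}
    return result

# This respondent misses one knowledge item and so fails K
# (66.7% < 70%) while passing A and P.
print(score_respondent({"q1": 0, "q2": 1, "q3": 1, "q4": 3}))
```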
Ethics, data privacy and data governance
Administering a questionnaire-based framework for assessing EMR adoption raises important ethical and governance considerations. Ensuring informed consent is critical, as staff participation must remain voluntary and free from perceptions of managerial coercion. Equally important is protecting respondent privacy: since staff may feel that their views on EMR use could be linked to performance evaluation, strict anonymization and confidentiality safeguards are necessary to encourage honest responses. Data governance procedures also play a central role, as sensitive information regarding staff knowledge, attitudes, and practices must be stored securely and used solely for research and quality improvement purposes. Access to data is restricted to authorized personnel only, based on clearly defined user roles and permissions. By foregrounding these measures, the framework not only strengthens the validity of the responses but also aligns with broader principles of ethical research and responsible data stewardship within healthcare institutions.
WORKING PROTOTYPE OF TOOL AND SIMULATION
Tool Prototype
To demonstrate that the CLEAR Score methodology can be seamlessly built into a product, a working prototype of a tool based on the above framework was developed. The tool has a front-end for capturing the user’s demographic details and responses to the questions, a back-end for analyzing the submitted responses, and an output layer for displaying the results and recommendations for improving adoption. The tool was used in a simulation with mock data from a group of healthcare personnel, as detailed below.
Simulation
To demonstrate the working of the CLEAR Score prototype tool, a simulation was performed as follows. The CLEAR Score analysis was run on three mock groups of healthcare personnel: five doctors, five nurses, and five laboratory technicians. The responses given by each individual were tailored so that the doctors failed to meet the threshold score in the attitude category, the nurses in knowledge, and the technicians in practice. The following screenshots show the various features and working of the tool.
As shown in figures 4a, 4b and 4c, the tool correctly generated the results and recommendations for each group according to that group’s performance in the analysis.
Fig. 1 shows a screenshot of the front-end of the tool, with its user-interface (UI).
Post assessment, authorized management staff of the healthcare institution can log in to the tool application to view the results. The tool displays the average scores obtained by each group of healthcare personnel who took the assessment, and recommendations to improve adoption if any group failed to achieve the set threshold for the CLEAR score. As an example, Fig. 2 shows the scores obtained and a portion of the recommendations displayed for the physician group that took the assessment.
In a similar manner, the results of the simulations carried out for the groups of nurses and technicians are also shown (Fig. 3).
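The mock-response design of the simulation, in which each group’s answers are tailored so that it fails exactly one KAP category, can be reproduced in outline as follows; the score ranges, group sizes, and random seed are illustrative assumptions rather than the actual simulation data:

```python
import random

random.seed(42)  # for reproducibility of the mock data
THRESHOLD = 70

def mock_group(weak_category, n=5):
    """Generate n mock respondents scoring well in two KAP categories
    and deliberately below threshold in one (ranges are illustrative)."""
    group = []
    for _ in range(n):
        scores = {c: random.randint(75, 95) for c in ("K", "A", "P")}
        scores[weak_category] = random.randint(40, 60)  # forced failure
        group.append(scores)
    return group

def evaluate(group):
    """Pass/fail flag per category, based on the group average."""
    return {c: sum(r[c] for r in group) / len(group) >= THRESHOLD
            for c in ("K", "A", "P")}

doctors = mock_group("A")       # tailored to fail attitude
nurses = mock_group("K")        # tailored to fail knowledge
technicians = mock_group("P")   # tailored to fail practice

print(evaluate(doctors))   # attitude flagged False, K and P True
```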
DISCUSSION
This report proposes a novel, objective method to assess EMR adoption by healthcare staff using the KAP framework. While KAP surveys have traditionally relied on self-reported data to gauge user readiness and behaviour in health and education domains, the approach outlined here departs from this subjective modality by integrating behavioural indicators and system interaction metrics. An objective KAP-based assessment method holds significant promise for offering a more nuanced view of EMR adoption across diverse user categories. In addition, the scoring not only highlights the overall status of adoption but also identifies specific areas requiring intervention. For instance, a low knowledge score might indicate the need for additional training, while a low attitude score could reflect resistance to change or concerns about system usability. Moreover, the CLEAR score can be used to track changes over time. Regular assessments can determine whether scores improve following interventions, providing valuable feedback on the effectiveness of these measures. Additionally, filters can be set up to display assessment results of only specific categories such as newly joined personnel, junior or senior personnel and others. The dynamic, iterative nature of the method enables healthcare organizations to adapt their strategies in response to emerging needs, thereby fostering a culture of continuous improvement.
Our approach contributes to the growing body of literature that calls for moving beyond binary or static adoption definitions such as “implemented” or “not implemented”.5 Rather than viewing EMR adoption as a singular event, this framework allows for a continuum-based assessment that is both temporally and behaviourally dynamic. Furthermore, it aligns with calls to assess meaningful use of EMRs,14 rather than mere access or log-ins.
This method also addresses a critical gap in current EMR adoption literature: the over-reliance on self-report surveys or retrospective interviews, which are often biased by recall errors, social desirability, and organizational pressures. As highlighted by Althubaiti,15 self-report measures like surveys or interviews can suffer from social desirability and recall bias, undermining data reliability. By contrast, the integration of objective metrics based on the KAP framework adds robustness and replicability to the assessment process. These metrics offer concrete and trackable scores of user interaction with the system, helping organizations move toward targeted interventions.
To demonstrate the feasibility of this method, a prototype assessment tool was developed and tested in a simulated environment using three virtual user groups—doctors, nurses, and lab technicians. Each group was assigned predefined interaction patterns with a mock EMR system reflecting varying levels of knowledge, attitude, and practice. The tool successfully processed these inputs, generated distinct adoption scores for each group, and displayed tailored recommendations. These results affirm the method’s utility in providing administrators with targeted, evidence-based remedial strategies to enhance EMR adoption.
Future work should explore the integration of this method into hospital dashboards or EMR administrative panels as a live monitoring tool. Such integration can allow hospital administrators to identify bottlenecks, provide targeted remedial measures and measure the impact of policy changes or system upgrades. Additionally, exploring correlations between EMR adoption scores and patient care outcomes—such as reduced medication errors, improved documentation, or better clinical decision support usage—would enhance the method’s clinical utility.
Limitations
One limitation of this work is the need for pilot testing of the proposed framework. While the conceptual adaptation of KAP to EMR adoption demonstrates theoretical novelty, its practical utility requires empirical verification. Pilot studies across diverse healthcare settings to test the reliability, validity, and sensitivity of the framework in capturing staff adoption patterns will strengthen its applicability for broader adoption assessments. Such validation will be essential to establish its robustness and generalizability.
Another potential limitation is that reliance on self-reported questionnaires could introduce the possibility of response bias, including social desirability and recall bias, which may affect the accuracy of findings. Moreover, the current results may have limited generalizability, as the framework has not yet been validated across different healthcare institutions, staff groups, or geographic contexts. Future studies in diverse real-world settings will be essential to strengthen the robustness and applicability of the proposed KAP-based approach.
In conclusion, this report introduces an innovative and practical approach to measuring EMR adoption using an objective application of the KAP framework. It has potential to move the conversation beyond mere implementation statistics to deeper insights on system usage behaviour. The proof-of-concept demonstration using simulation indicates that this method can serve as a valuable tool for health informatics researchers, hospital administrators, and policymakers aiming to enhance digital maturity in healthcare systems.
ACKNOWLEDGEMENT
We would like to thank Dr. Praveen Kumar Patil, Senior Director, Oracle Health, Bengaluru India for his constant encouragement, support and guidance.
Oracle, Java, MySQL, and NetSuite are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Some regulatory certifications or registrations to products or services referenced in this article are held by Cerner Corporation. Cerner Corporation is a wholly owned subsidiary of Oracle. Cerner Corporation is an ONC-certified health IT developer and a registered medical device manufacturer in the United States and other jurisdictions worldwide.
This document may include some forward-looking content for illustrative purposes only. Some products and features discussed are indicative of the products and features of a prospective future launch in the United States only or elsewhere. Not all products and features discussed are currently offered for sale in the United States or elsewhere. Products and features of the actual offering may differ from those discussed in this document and may vary from country to country. Any timelines contained in this document are indicative only. Timelines and product features may depend on regulatory approvals or certification for individual products or features in the applicable country or region.
DISCLAIMER
The core idea and methodology in this report was granted a patent (12,100,317) in 2024 to assignee Cerner Innovation Inc., with the co-inventors being PBG and SJP.