Development of a Nursing Competency Assessment Tool: A Pilot Study inside the Department of Pediatric Intensive Care

Chiara Tosin*

Citation: “Development of a Nursing Competency Assessment Tool: A Pilot Study inside the Department of Pediatric Intensive Care”. American Research Journal of Nursing; V3, I1; pp:1-8.

Copyright This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Aim: This paper describes the development and testing of the Nursing Competency Assessment Tool (NCAT), an indicator framework designed to support nursing professional development. We argue that the NCAT application method can serve as a worthwhile tool for assessing the competences of novice, advanced beginner, competent and proficient nurses.

Background: Several elements composing the nursing competency profile have been examined by many authors. However, managers need practical and user-friendly instruments that respect the specific characteristics of different settings.

Methods: This pilot study used a mixed methods approach, including: a literature review to create an indicator framework, self-administered questionnaires submitted to expert nurses, consensus meetings to elaborate the integration program, and a final control group comparison. The NCAT was first tested in a Pediatric Intensive Care Unit and later retested in a pediatric unit with a semi-intensive area.

Results: All indicators were considered pertinent and relevant by the expert nurses. All nurses found the integration plan easy to understand and useful for their work. The control group comparison highlighted clear differences between the two groups’ opinions.

Conclusion: The NCAT implementation method represents a reliable and reproducible tool for evaluating nurses’ competence profiles while respecting the specific context. Further studies are now needed to evaluate whether the emerged indicators can be profitably transferred to other adult and pediatric contexts.

Keywords: Competences, professional development, competence assessment



Assessing nurses’ competence profiles is crucial to guarantee efficient nursing care to patients and to identify areas for nursing practice development. Indeed, competence is an essential factor in assuring quality, safety and cost-effective health care (Defloor et al., 2006). Nurse managers need to constantly assess the competences of clinical nurses in order to assure qualified and safe patient care. Additionally, the importance of measuring nurses’ subjective assessments of their work environment is emphasized as a key contributor to job satisfaction (Adams & Bond, 2000). Several studies have shown that the use of scales for competence self-assessment encourages practice improvement and continuing education, and can potentially reduce nurse turnover (Buchan, 1997).

Despite the importance of competence assessment, the debate in Italy started only recently. The need to improve the various practice levels was identified only at the end of the 1990s, as a consequence of the university reform. The reform allowed the introduction of second-level nursing training courses and played an important role for nursing practice. However, healthcare organizations still lack a well-structured system for monitoring and differentiating competences in relation to educational background and work experience (Girot, 2000; Dellai et al., 2009).

Previous work has been done in this direction: the Nurse Competence Scale (NCS) was developed for nurses’ competence assessment in various work environments in Finland between 1997 and 2003. The content, construct and concurrent validity, inter-rater reliability and internal consistency of the instrument have been tested in several nurse populations in Finland (Meretoja et al., 2004; Meretoja et al., 2003; Meretoja et al., 2002; Heikkilä et al., 2007; Mäkipeura et al., 2007; Salonen et al., 2007). Furthermore, several studies validated this instrument in the Italian context (Palese et al., 2005; Dellai et al., 2009; Finotto et al., 2009). Nonetheless, the number of documents that report and discuss the use of the NCS is very low. In addition, it should be noted that among these reports the NCS has frequently been used as a mere list of indicators applied to recognize nurse competences.


Recently, the concept of competence has become a central part of a much wider international debate about the development of the nursing profession (Riitta-Liisa Aari et al., 2007; Scholes et al., 2000; Bartlett et al., 2000; Batalden et al., 2002; McMullan et al., 2005; Palese et al., 2005; Dellai et al., 2009). In Italy, the recent transition from an education based on regional governance to an education provided by universities is leading to an increasing number of graduate nurses. At the same time, a transformation of the institutional role of the nurse has occurred. This profound change in educational and professional models has stimulated new debates about the expertise required of nurses, as well as the skills they should possess (Meretoja et al., 2002; McClelland, 1973).

Italian nurses often do not find appropriate tools to support their professional growth through the development of their competences, and an assessment of their educational needs is lacking (Meretoja et al., 2004; Fraccaroli, 2007). Currently, in many hospitals, the hiring process of nurses does not stem from a proper evaluation of the clinical expertise of the candidates (Meretoja et al., 2002). In line with this, the evaluation of personnel is frequently conducted by means of standard tools used by the entire hospital, or even by all the hospitals of the region. Finally, these instruments are not based on explicit and validated measurement scales weighted for the specificity of the clinical setting (Meretoja et al., 2004; Batalden et al., 2002; Finotto et al., 2009).

Benner’s research (1984) investigated the acquisition of abilities by documenting the experience of expert and novice nurses employed in different care settings. The results demonstrated a significant improvement of nurses’ competence through reflective practice and critical thinking. The research identified professional competences by means of direct interviews with nurses, and the emerging competences were then classified into seven categories. Starting from the findings of Benner’s research, Meretoja (2002) developed a new structured questionnaire.


On the one hand, the aim of the present study is to describe the development and testing of the Nursing Competency Assessment Tool (NCAT), an indicator framework specifically oriented to support nursing professional development. On the other hand, the paper illustrates the use of this new indicator framework to assess the competence profile of novice nurses employed in a Department of Pediatric Intensive Care.

Setting and participants

The study was conducted in the Department of Pediatric Intensive Care of the Major City Hospital in Verona, Italy (18 beds), and lasted three months (October–December 2012). Participants were selected according to non-probability convenience sampling. Fourteen expert nurses were invited to participate. These were qualified as expert by their peers and by considering their time of employment (at least 6 years, in accordance with Benner’s classification of novice, competent and expert nurses). Participants’ seniority was used to define the expert level due to the inadequacy of current competence-measuring instruments in our setting. The control group was recruited in the Department of Neonatal Intensive Care of the University City Hospital in Verona, Italy (22 beds), also according to non-probability convenience sampling.


Phase I: Indicator Framework generation

In order to develop the indicator framework, the first phase involved a thorough review of the current literature to explore how the concept of competence was used in existing instruments related to nursing development (Meretoja & Leino-Kilpi, 2001). Several authors have already investigated this topic in order to identify elements useful for defining nursing competence. However, as pointed out by Riitta-Liisa Aari and colleagues (2007), the concept of competence is ambiguous and is often confused with other related concepts (such as ability, skill, or knowledge). Following the proposal of Benner (1984), we can define competence as “the ability to perform a task with desirable outcomes, as the effective application of knowledge and skills and as something that a person should be able to do”. Moreover, following Dunn and colleagues (2000), the notion of competence involves an “overlap of knowledge with the performance components of psychomotor skills and clinical problem solving within the realm of affective responses”. Starting from this, a series of indicators was developed by Meretoja et al. (2002) to assess nursing competence, known as the Nurse Competence Scale (NCS). The latter was translated, tested and validated for Italian settings by several authors (Palese et al., 2005; Dellai et al., 2009; Finotto et al., 2009), who agree that the NCS can indeed be used in Italian settings as a reference for developing specific indicators to promote the professional growth of nurses. The use of a specific set of indicators defining the competence required for professional development in a specific setting is useful both for facilitating job placement and for promoting professional growth. Furthermore, it supports coordinators and managers in evaluating professional competences according to the skills required to operate in a specific area of care (Spencer, 1995).

The Italian version of the NCS (Palese, 2005) was initially applied to identify the most crucial indicators for nursing competence. Since the present study identified basic differences between Benner’s work and the pattern of indicators validated by Palese, we decided to use all the indicators described in both studies. Furthermore, we integrated the indicators reported by Benner (1984) that were not considered by Meretoja (2003) and Palese (2005) (Figure 1).

In the end, 97 indicators were included and arranged in the seven main domains previously described by Benner (1984), specifically: helping role, teaching-coaching, diagnostic functions, managing situations, therapeutic interventions, ensuring quality, and work role. We then added one more domain, defined as “technical and care activities”, that was not identified in our bibliography (Annex A).

Phase II: Indicator framework validation and training plan definition for our specific context

Firstly, the set of indicators was reviewed by the expert nurses through a self-administered questionnaire. For each of the 97 indicators, we asked them to evaluate its relevance, pertinence and related level of competence. In addition, they were asked to individually judge and quantify the validity of the items. Relevance was rated on a 5-level Likert scale (completely irrelevant 0.0%, insignificant 0.4%, fairly important 12%, very important 45%, fully relevant 42.6%); the indicators were considered pertinent in 96.1% of cases and not pertinent in 2.4% of cases, whilst no response was given in 1.5% of cases. Items with inter-rater agreement over 50% were accepted for inclusion in the framework. In the end, 77 indicators emerged from this process.
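As a minimal sketch of the inclusion rule described above, the following illustrates one plausible operationalization of the 50% inter-rater agreement threshold: an indicator is retained when more than half of the expert raters place it at a positive relevance level. The indicator names, ratings and the choice of which Likert levels count as "positive" are invented for illustration; they are not the study's actual items or scoring rules.

```python
def agreement(ratings, positive_levels=("very important", "fully relevant")):
    """Fraction of raters who judged the item at a positive relevance level."""
    return sum(r in positive_levels for r in ratings) / len(ratings)

# Hypothetical Likert judgements from 14 expert nurses, one list per indicator.
ratings_by_indicator = {
    "recognises early signs of deterioration": ["fully relevant"] * 10 + ["very important"] * 4,
    "documents care according to unit policy": ["fairly important"] * 9 + ["very important"] * 5,
}

# Keep only the items whose agreement exceeds the 50% threshold.
retained = [name for name, ratings in ratings_by_indicator.items()
            if agreement(ratings) > 0.5]
print(retained)  # only the first indicator clears the threshold
```

In this toy example the second indicator is excluded because only 5 of 14 raters (about 36%) judged it at a positive level.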

Secondly, the expert nurses were asked to indicate for each indicator the corresponding level of competence (novice, competent, expert). Based on their opinion, the 97 indicators were assigned as follows: novice, 24 indicators (25%); competent, 50 indicators (51%); expert, 23 indicators (24%).
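The level assignment step can be sketched as a plurality vote: each indicator receives the competence level chosen by the largest number of experts. This is an assumption about how the experts' judgements were aggregated, and the indicator names and vote counts below are hypothetical.

```python
from collections import Counter

# Invented expert votes (14 raters) per indicator, for illustration only.
level_votes = {
    "basic airway management": ["Novice"] * 9 + ["Competent"] * 5,
    "leads complex family conferences": ["Expert"] * 11 + ["Competent"] * 3,
}

# Assign each indicator the most frequently chosen competence level.
assigned_level = {name: Counter(votes).most_common(1)[0][0]
                  for name, votes in level_votes.items()}
print(assigned_level)
```

Summing such assignments over all indicators would yield the per-level counts reported in the text (24 novice, 50 competent, 23 expert).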

Thirdly, three 90-minute consensus meetings were conducted. The aim was to identify specific observable behaviours corresponding to each indicator assigned to the novice level. In the end, 96 criteria and observable behaviours emerged.

Finally, in order to evaluate the application of this novices’ training plan in real contexts, a pilot test with two novice nurses was conducted. Participants managed it well, reporting no misunderstandings or issues, and general ease of use.

Phase III: Training plan Implementation

In 2014, the mapping of novice nurses’ competencies allowed the implementation of the new integration plan in our Pediatric Intensive Care Unit.

It included a preliminary meeting to share timelines, objectives and responsibilities, the learning agreement, mentoring and supervision. The meeting also served to present the supportive tools built on the grid of indicators obtained by mapping skills (i.e. the evaluation card and self-assessment card) and the formative assessments at the first, third and fifth month (with the collaboration of the local supervisor).

Finally, a constant evaluation and organization of the professional career based on the educational needs of each nurse was scheduled. 

Phase IV: Evaluation of the results

In 2015, two years after the implementation of the new integration plan, a study was carried out to evaluate the results and impact of the research. A questionnaire was submitted to the novice nurses included in the new integration plan, as well as to the expert nurses. The same questionnaire was submitted to a group of nurses working in the same region, in neonatal and pediatric intensive care settings, who were classified as expert or novice according to the criteria used for the local group. Differences between the two groups were examined to highlight the strengths and limitations of the current skills mapping of the integration plan for newly hired nurses implemented in this Pediatric Intensive Care Unit in Verona.

In the group of novices, the answers were generally uniform. All novice nurses reported receiving clear information on their integration plan; they were assisted by a tutor and received information about the times, methods and goals of their professional career through dedicated, structured forms. The control group did not report the same results. Moreover, the novices in the examined Pediatric Intensive Care Unit considered the goals clear, close to clinical practice, detailed and specific to their workplace setting, and they found the evaluation and self-assessment cards they received very useful. Furthermore, the novice nurses stated that the integration plan supported their professional development “quite a lot” to “very much”. Conversely, the control group considered it “quite” to “not at all” supportive.


This study developed and tested a new instrument, the Nursing Competency Assessment Tool (NCAT), for the self-assessment of competence profiles by hospital nurses.

The results are discussed first with regard to content validity and the efficacy of the new integration program. We then discuss the relevance and utility of the framework and identify areas for further research.


The creation of a single tool derived from the results of the studies by Meretoja (2002), Palese (2005) and Benner (1984) was necessary to produce a set of indicators covering all the competences of nurses working in a PICU. This combination allowed a precise analysis of the characteristics of a specific tool for a critical care setting, in accordance with Palese (2005); reproducibility in different settings, as identified by Meretoja (2002); and a solid base drawn from the listening, interpretation, transcription and synthesis of clinical experience described by Benner (1984). Content validity is the degree to which the items of an instrument adequately represent the universe of content; it is the most important type of validity, as it ensures a match between the research target and the data collection tool of the study (Burns & Grove, 1997). The evidence supporting the content validity of the framework was based, firstly, on the literature review and, secondly, on the judgments of the 14-member expert group (Figure 2). Face validity was verified by assessing that the instrument truly measures the concept (Lynn, 1986; Berk, 1990; Davis, 1992; Strickland, 2000).

The analysis of the data obtained from the framework revealed a strong correlation among the nurses’ opinions. The levels of competence were assigned with high uniformity, suggesting that the group shares a common opinion about the level at which nurses can achieve a specific competence. Furthermore, this high uniformity supports the instrument’s validity.

Competence category identification was accomplished by reviewing research instruments on nurses’ competence. Benner’s (1984) competency framework was selected to specify the content domain of nurse competence due to the good validation of these domains (Figure 3).

The final scale consisted of 97 indicators classified into eight competence categories.


The integration program based on consensus meetings was deemed very important by novice nurses for supporting their professional development. The evidence supporting the utility of the integration program was based, firstly, on the literature review and, secondly, on the results of the comparison with the control group. This supports the utility of structured programs (Watson et al., 2002; Redfern et al., 2002).


This study allowed us to produce two essential instruments: first, the Nursing Competency Assessment Tool (NCAT), which derives from an analysis of the literature; second, the integration program, drawn from the data and the analysis of the consensus meetings.

Currently, our nursing context is characterized by a search for universal indicators that are valid for all settings but hardly adaptable to specific contexts.

This project can be considered a methodological study, based on the published work of Meretoja, Benner and Palese, which is readily adaptable through a process of contextualization.

This approach allows an analysis of competences and highlights the particularities of each setting. The project showed that the implementation of a structured program involving a large number of team members can support the construction, implementation and sharing of new tools. The recognition of the importance of senior nurses, who are often considered outdated, proved to be a successful aspect of the project. A strong affinity within the group of experts was highlighted during this process, probably because they felt they had “a space where they could express the concept of care” and found the possibility to reduce the variability of behaviours by pooling their knowledge. Valuing their experience was very important to encourage their active participation and the involvement of all participants.

Moreover, during the consensus meetings, many nurses who did not know the “state of the art” regarding the training of new employees were updated and encouraged to become more involved in that training; not as supervisors, but as members of a group, taking care of new members and believing that work placement and professional development are a common interest.

Nowadays, nursing managers commonly face pressures and heavy demands: they can find support in a plan that empowers the novices, involves the expertise of the team, is tailored to the specific situation, is built on the needs of the context, and provides a regular re-evaluation of beginners’ skills. All of this, however, entails the participation of the nursing manager, both in the initial phase (as an expert within the consensus meetings) and in the implementation stage, since he or she is required to serve as a facilitator for the development of the project.


References

Defloor T, Van Hecke A, Verhaegen S, Robert M, Darras W, Grypdonck M. The clinical nursing competences and their complexity in Belgian general hospitals. Journal of Advanced Nursing 2006; 56: 669-78.
Buchan J. Clinical ladders: the ups and downs. Int Nurs rev 1997; 44:41-46.
Adams A, Bond S. Hospital nurses’ job satisfaction, individual and organizational characteristics. Journal of Advanced Nursing 2000; 32(3): 536-43.
Dellai M, Mortari L, Meretoja R. Self-assessment of nursing competencies – validation of the Finnish NCS instrument with Italian nurses. Scandinavian journal of caring science 2009; 23: 783-91.
Girot E. Assessment of graduates and diplomates in practice in UK – are we measuring the same level of competence. J Clin Nurs 2000; 9: 330-7.
Meretoja R, Isoaho H, Leino-Kilpi H. Nurse Competence Scale: development and psychometric testing. J Adv Nurs 2004; 47: 124–33.
Meretoja R, Leino-Kilpi H. Comparison of competence assessments made by nurse managers and practising nurses. J Nurs Manag 2003; 11: 404–9.
Meretoja R, Leino-Kilpi H, Kaira A-M. Comparison of nurse competence in different hospital work environments. J Nurs Manag 2004; 12: 329–36.
Heikkilä A, Ahola N, Kankkunen P, Meretoja R, Suominen T. Nurses’ competence level in medical, operative and psychiatric specialised health care. Hoitotiede (Nurs Sci) 2007; 19: 3-12 (in Finnish).
Mäkipeura J, Meretoja R, Virta-Helenius M, Hupli M. Nurses working in a neurological setting: the competence, frequency of using competencies and challenges of continuing professional development. Hoitotiede (Nurs Sci) 2007; 19: 152-62 (in Finnish).
Salonen A, Kaunonen M, Meretoja R, Tarkka M-T. Competence profiles of recently registered nurses working in intensive and emergency settings. J Nurs Manag 2007; 15: 792–800.
Meretoja R, Eriksson E, Leino-Kilpi H. Indicators for competent nursing practice. J Nurs Manag 2002; 10: 95–102.
McClelland D.C. Testing for competence rather than intelligence. American Psychologist 1973; 28: 1-14.
Watson R, Stimpson A, Topping A, Porock D. Clinical competence assessment in nursing: a systematic review of the literature. Journal of Advanced Nursing 2002; 39: 421-31.
Redfern S, Norman I, Calman L, Watson R, Murrells T. Assessing competence to practise in nursing: a review of the literature. Research Papers in Education 2002; 17(1).
Spencer S.M. Competenza nel lavoro. Milano: Franco Angeli, 1995.
Fraccaroli F. Apprendimento e formazione nelle organizzazioni. Bologna: il Mulino, 2007
Knowles M.S, Holton E.F, Swason R.A. The adult learner. Burlington: Elsevier, 2005.
Maioli S, Mostarda M.P. La formazione continua nelle organizzazioni sanitarie. Milano: McGraw-Hill, 2008.
Batalden P, Leach D, Swing S, Dreyfus H, Dreyfus S. General competencies and accreditation in graduate medical education. Health Affairs 2002; 21(5): 103-11.
Benner P. Issues in competency-based testing. Nursing Outlook 1982; 30(5): 303-9.
Benner P. From Novice to Expert: Excellence and Power in Clinical Nursing Practice. Menlo Park: Addison Wesley, 1984.
Benner P, Tanner C.A, Chesla C.A. Expertise in Nursing Practice. Caring, Clinical Judgement and Ethics. Springer, Publishing Co. New York, 1996.
Dellai M, Ruocco M, Roat O, Dallapè F. La competenza infermieristica superiore/ avanzata (advanced). Assistenza Infermieristica e Ricerca 2006; 25(2): 92-7.
Palese A, Spangaro S, Venier A. Gli indicatori di competenza infermieristica in area critica: studio esplorativo. Nursing Oggi 2005; 4: 45-51.
McMullan M, Endacott R, Gray M.A, Jasper M, Miller C.M.L, Scholes J, Webb C. Portfolio and assessment of competence: review of the literature. Journal of advanced nursing 2003; 41(3): 283-94.
Bartlett H.P, Simonite V, Westcott E, Taylor H.R. A comparison of the nursing competence of graduates and diplomates from UK nursing programmes. Journal of Clinical Nursing 2000; 9(3): 369-81.
Scholes J, Endacott R, Chellel A. A formula for diversity: a review of critical care curricula. Journal of Clinical Nursing 2000; 9(3): 382-90.
Burns N, Grove S.K. The practice of Nursing Research, Conduct, Critique and utilization. W.B. Saunders Co., Philadelphia, PA, 1997.
Berk R.A. Importance of expert judgement in content-related validity evidence. Western Journal of Nursing Research 12, 659–67, 1990.
Lynn M.R. Determination and quantification of content validity. Nursing Research 35, 382–385, 1986.
Davis L. Instrument review: getting the most of your panel of experts. Applied Nursing Research 5, 104–107, 1992.
Strickland O.L. Deleting items during instrument development – some caveats. Journal of Nursing Measurement 8, 103–104, 2000.