Competency mapping is the process of identifying key competencies for an organization and/or a job and incorporating those competencies into the organization's various processes (e.g. job evaluation, training, recruitment). To ensure we are both on the same page, we will define a competency as a behavior (e.g. communication, leadership) rather than a skill or ability. The steps involved in competency mapping, with an end result of job evaluation, include the following: 1) Conduct a job analysis by asking incumbents to complete a position information questionnaire (PIQ).
This can be provided for incumbents to complete, or you can conduct one-on-one interviews using the PIQ as a guide. The primary goal is to gather from incumbents what they feel are the key behaviors necessary to perform their respective jobs. 2) Using the results of the job analysis, you are ready to develop a competency-based job description. A sample competency-based job description generated from the PIQ can then be reviewed; it is developed by carefully analyzing the input from the represented group of incumbents and converting it to standard competencies. 3) With a competency-based job description, you are on your way to mapping the competencies throughout your human resources processes. The competencies of the respective job description become your factors for assessment on the performance evaluation. Using competencies will help guide you to perform more objective evaluations based on displayed or not displayed behaviors. 4) Taking competency mapping one step further, you can use the results of your evaluation to identify the competencies in which individuals need additional development or training.
This will help you focus your training needs on the goals of the position and company and help your employees develop toward the ultimate success of the organization.

The Role of Competencies in Leader Development

Background

The core of The Banff Centre leadership learning is our unique competency mapping process, which links to our online 360 Degree Assessment feedback system. At The Banff Centre, we believe that the three basic requirements of successful leadership are knowledge, competency and character. According to the most current research, some of the most successful leaders consistently apply all three of these areas of expertise, which makes them highly effective. In addition to keeping current on the rapidly changing knowledge that is required to do their jobs well, leaders also require the character capacity to know what is the right thing to do, the courage to act, and the ability to operate with integrity and trust. To be truly effective, leaders must have the tools (competencies) to create the kinds of actions that lead to sustainable success.

The Banff Centre Competency Matrix
The Banff Centre Competency Matrix contains 24 competencies grouped evenly into six Leadership Dimensions: Self Mastery, Futuring, Sense Making, Design of Intelligent Action, Aligning People to Action and Adaptive Learning. The four competencies that reside inside each of the six Dimensions (24 in total) define a set of related actions that, when executed by a leader with intention, create a specific outcome. Each competency is made up of observable skills that can be learned. Like any skill, practice and feedback are necessary. Some skills take longer than others to acquire.
We view leadership as a lifelong journey. The specific skills represented by these 24 competencies constitute the essentials of leadership. By the essentials we mean those primary skills that can be combined and recombined to handle the majority of challenges faced by leaders today. We have grouped these primary skills into the 24 definable competencies to show function and purpose, choosing dimension names that best describe the groupings of these competencies. Periodically we make adjustments and changes to the skills and competencies to reflect the evolution of changing demands on leaders.
Leadership Dimensions

A dimension is a broad category containing 4 essential competencies. Leadership dimensions are used to distinguish related competencies and place them into sets or groupings. The competencies within a dimension have a commonality in that they all support the same activity but in different ways. For example, the dimension of ‘Self-Mastery’ contains competencies required to understand oneself as a leader, the inner work of being a leader. The dimension of ‘Aligning People to Action’ contains competencies required to engage others and have them work towards a goal.
Dimensions ‘partition the ground’ of leader development into broad categories of essential behaviours. Together they tend to both cover the field and change, although slowly, as new demands are placed on leaders.

Leadership Competencies

Although there are various definitions of competency, all of them include the description of a competency as a set of skills required for taking effective action. The skills are important, as is the knowledge of when to use them and when not to. Competencies define a set of actions that must be learned and executed in a way that creates a chosen outcome.
If a leader is competent in ‘Strategic Foresight’ then the leader can consistently consider and work with a range of possible futures. It is likely that the leader can read current trends and identify the impact on their organization, create clear images of the future, and use processes such as ‘scenario planning’ to build a compelling vision for a preferred future. Competencies, then, are workable groupings of skills that fulfil a purpose and / or provide a specific function.

Leadership Skills

Skills are defined sets of acts that have a specific effect or accomplish a specific task.
Often skills will suggest a certain sequence of action that needs to be taken for the tasks to be completed. Sometimes skills can be used in a variety of orders or combinations to accomplish a related task. Skills can be combined and recombined to deal with a number of demands and situations. Skills can also be combined and recombined to create new competencies. The action of ‘identifying specific behaviours for observation by others’ could also be used in another competency such as ‘performance management’, where this skill would be very useful. It makes sense that the more skills a leader can learn, the more adaptable that leader is and the better able to address a greater variety of situations. The skills are the basic building blocks of leader development.

Background -- concept maps

Concept maps were invented by Joseph Novak in the 1960s for use as a teaching tool. They are quite simple: labelled boxes represent concepts in a syllabus, and lines or arrows denote relationships between the concepts. If students develop a concept map at the start of a course, then teaching staff will have a better idea of their pre-existing conceptual framework.
Teachers can also present a course syllabus in the form of a concept map, showing how the ideas being taught are interrelated. Students can also use concept maps as a notetaking tool, to represent the information in an article or to depict the structure of a novel. It is clear how these activities fit in with a constructivist view of the teaching process. William Trochim (1986) later developed the concept map into a strategic planning tool for use in the design of organisational components.
Trochim's technique differs significantly from Novak's original idea in that, while Novak's maps are generated by one person as a means of communicating complex ideas to many, Trochim's are generated by many people as a means of developing complex ideas. The sample concept map below is from Trochim (1989) and was generated by stakeholders in Cornell University's Health Services. In Trochim's method, a group of participants is assembled, all of whom have some stake in the organisation being planned.
Initially, participants brainstorm to compile a list of concepts, which are then written on separate index cards. Each participant then sorts the cards into piles of related concepts. It is important that no constraints are placed upon the participants' sorting. The results of the sorting are then tabulated, and a correlation matrix M is created from those results. If concepts i and j were grouped together by N participants, then M[i,j] = N. The grouping relation is symmetric, so the matrix M will be symmetrical.
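The tabulation step can be sketched in a few lines of Python. This is a minimal illustration, not part of Trochim's published procedure: the concepts and the participants' sorts below are invented, and the matrix is built directly from pile membership.

```python
# Hypothetical sketch of the tabulation step: build the symmetric matrix M,
# where M[i][j] counts how many participants grouped concepts i and j together.
# Concepts and sorts are invented for illustration.
from itertools import combinations

concepts = ["waiting times", "staff training", "billing", "outreach"]
index = {c: i for i, c in enumerate(concepts)}

# Each participant's sort: a list of piles, each pile a list of concepts.
sorts = [
    [["waiting times", "billing"], ["staff training", "outreach"]],
    [["waiting times", "billing", "outreach"], ["staff training"]],
    [["waiting times"], ["billing"], ["staff training", "outreach"]],
]

n = len(concepts)
M = [[0] * n for _ in range(n)]
for piles in sorts:
    for pile in piles:
        for a, b in combinations(pile, 2):
            i, j = index[a], index[b]
            M[i][j] += 1
            M[j][i] += 1  # grouping is symmetric, so M stays symmetrical
```

Because no constraints are placed on the sorting, a participant may use any number of piles; singleton piles simply contribute nothing to M.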
Cluster analysis is then carried out on the correlation matrix to group the concepts into categories, and the categories are examined by the participants to see what the concepts in each have in common. Multidimensional scaling (MDS) is then applied to generate a 2D image of the domain, in which the degree of correlation is inversely proportional to the distance between the categories. This map can then be used to design new organisational structures, to define the responsibilities of the new organisation, to communicate its purpose to clients, and for many other purposes.
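These two analysis steps can be sketched as follows, using SciPy's hierarchical clustering and a classical-MDS computation in NumPy. The toy matrix, the complete-linkage method and the similarity-to-distance conversion are all illustrative assumptions, not the settings Trochim used.

```python
# Hedged sketch of the analysis: cluster the concepts, then embed them in 2-D
# so that frequently co-sorted concepts land close together.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Toy co-occurrence counts for four concepts (higher = sorted together more).
M = np.array([[0, 5, 1, 0],
              [5, 0, 1, 1],
              [1, 1, 0, 6],
              [0, 1, 6, 0]], dtype=float)

# Convert similarity counts to dissimilarities: distance falls as M rises.
D = M.max() - M
np.fill_diagonal(D, 0.0)

# Cluster analysis groups the concepts into candidate categories:
# here concepts 0 and 1 form one group, 2 and 3 another.
Z = linkage(squareform(D), method="complete")
labels = fcluster(Z, t=2, criterion="maxclust")

# Classical MDS via double-centring gives 2-D coordinates in which small
# distances correspond to frequently co-sorted concepts.
n = len(D)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1][:2]           # two largest eigenvalues
coords = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))
```

In practice the clustering and the MDS map are produced by dedicated statistical packages; the point of the sketch is only that both operate on the same tabulated matrix.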
Benefits of competency mapping
Competency maps have many potential benefits for students and teaching staff. Of course, because staff and students share many goals, these benefits are not entirely divisible; some aspects of competency mapping will benefit both staff and students. A partial list of potential uses for competency mapping follows. It is likely that more benefits will be discovered as the technique matures.

Benefits for staff

If competency mapping can actually give a picture of the structure of the course as the students experience it, teaching staff will be able to use that picture as the basis for course refinement.
The identification of key concepts is the first step towards designing a syllabus. The information gained can also be published to the students, for example by including it in the subject information handout that students usually receive in their first lecture, or by putting it on the courseware web page. Of course, it is quite possible that the structure revealed by analysis of student results does not match the lecturer's idea of the conceptual structure of the course. In this case, the revealed structure may suggest ways in which the course can be improved.
For example, if two competencies that should be related (for example, C pointers and passing by reference) are not clustered together, it could indicate a need to make the connection more explicit to the students. If the competency map uses all the coursework marks as input, this will not help the students of that year; however, it may well help teaching staff to refine the coursework for the next delivery of the course. It would also be useful to staff who are teaching follow-on courses, as they would gain a better idea of which topics need revision.
A competency map using only the marks for half of the course can be produced if staff wish to refine the course on the fly, but care must be taken that the data are sufficient: if the only marks on record are the first six prac marks, it is unlikely that any useful conclusions can be drawn. It is not yet certain how many points are needed for competency mapping to be useful, but it is likely to depend on the amount and complexity of the course material. These uses assume that competency mapping will elucidate the structure of the course.
If, however, the technique does not do this, then there are still potential benefits: logically, we would expect that activities that test strongly related competencies should show correlations in their marks; if this is not the case, there must be some reason. For example, written exam questions about linked lists might not correlate strongly with practical questions about linked lists if success in pracs is more closely related to factors other than subject knowledge. This could be the case if some students find their work environment --- operating system, compiler and editor --- difficult to use.
In this case, prac questions will tend to cluster much more strongly with other prac questions, and much less strongly with theory questions. The competency map can show that there is a problem; it is then up to the teaching staff to investigate that problem. Of course, competency mapping over subsequent years of the course will help the staff know when they have ameliorated the problem. In order to satisfy Ethics Committee requirements, this project only uses deidentified marks data and no demographic information is available.
However, in a university setting, competency mapping can be used to compare demographic subsets of students to verify equity of access. If it is suspected that there is a systematic problem with some students' access to education, for example if there is concern that students of non-English speaking background are finding a particular activity especially difficult because of the complex language used to explain it, then competency mapping can be applied separately to the results from students belonging to that group and the results compared to a competency map derived from the marks of the rest of the student body.
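The subgroup comparison can be sketched as follows. The marks array and the group-membership flag are fabricated for illustration; a real analysis would substitute actual student results and a genuine demographic indicator.

```python
# Illustrative sketch: compute the task-mark correlation matrix separately for
# a subgroup and for the rest of the students, then look at where they differ.
import numpy as np

rng = np.random.default_rng(0)
n_students, n_tasks = 60, 5
marks = rng.uniform(0, 10, size=(n_students, n_tasks))  # fabricated marks

# Hypothetical flag: True for students in the subgroup of interest.
in_group = np.arange(n_students) < 20

corr_group = np.corrcoef(marks[in_group], rowvar=False)
corr_rest = np.corrcoef(marks[~in_group], rowvar=False)

# Large entries here flag task pairs whose correlation differs most between
# the two groups -- candidates for a distorted cluster arrangement.
diff = np.abs(corr_group - corr_rest)
worst_pair = np.unravel_index(np.argmax(np.triu(diff, k=1)), diff.shape)
```

Each group's correlation matrix can then be fed into the same clustering and MDS pipeline to produce the two competency maps being compared.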
In this case, a problem with English would result in a distorted cluster arrangement: written-answer questions and questions with complex requirements would tend to cluster together. The technique may also be used to determine whether female students conceptualise the subject differently to male students. Again, if a problem is found, competency mapping over subsequent years will show staff whether the remedies are working.

Benefits for students

The primary benefit of competency mapping for students is the increased understanding of the student viewpoint that the teaching staff will have, and the resulting likely course improvements.
However, students should also benefit directly from it. A constructivist view of the teaching process suggests that students will assimilate new knowledge and gain new skills more readily if they can be made aware of how those new competencies interrelate with knowledge and skills that are already mastered. Of course, lecturers know this; most new topics begin with an explanation of the new material in relation to material already seen. However, this explanation is almost always exclusively verbal.
Information about relationships is often best presented in visual form, especially if the relationships are multidimensional: pictures are two-dimensional, but words are one-dimensional, strictly ordered in time. Therefore, having access to a two-dimensional map of the course structure may help students construct their understanding of the course material. If it is possible to use competency mapping to break the subject down into components that are close to orthogonal, it should also be possible to design assessment on the basis of that breakdown. Once the components are known, assessment tasks can be designed that test them individually, or (since it is virtually impossible to test anything in isolation) as close to individually as possible. Thus a test can be delivered to students that is quite small, but gives results that are interpretable in terms of the course's competency map. Because competency mapping measures correlations between task marks across students, it is obviously impossible to generate a competency map based on a single student's data; however, numeric results can be presented alongside the group competency map --- for example, by shading regions that correspond to topics that the student needs to work on.
In this way, a student may be able to use her test results to determine her own weaknesses, and then consult the map to see how they relate to the rest of the course: using this map and compass, she may find it easier to navigate through the material. If she still has trouble understanding the material, she may ask a staff member for help. In this case, if the staff member has access to her test results, it would be easier to pinpoint the misconstruction that is at the heart of the problem.
Experience shows that determining the problem is almost always harder and more time-consuming than solving it; figuring out what needs to be explained is more difficult than developing an explanation, especially considering that teachers can develop a set of explanations that work and re-use them. This means that the student need not worry as much about coming to consultation, and (because consultation time can be used more effectively) the teaching staff are more likely to be free to help her.

Factor analysis
For most datasets, there was a single dominating factor with a large eigenvalue. Identifying that factor cannot really be done with the data available in this project, but several candidates come to mind. There are many factors other than academic ability and mastery of the domain that are likely to affect performance on an exam: memory, especially if the students were trying a cram-and-forget strategy; ability to work quickly; ability to handle pressure; ability to understand and follow written English instructions.
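How a single dominating factor shows up in the eigenvalues can be illustrated with synthetic data. The latent-ability model and every number below are assumptions made for the sketch, not the project's data.

```python
# Minimal sketch: simulate marks driven by one general latent factor plus
# noise, then inspect the eigenvalues of the correlation matrix.
import numpy as np

rng = np.random.default_rng(1)
n_students, n_tasks = 200, 8

ability = rng.normal(size=(n_students, 1))           # one latent factor
loadings = rng.uniform(0.6, 0.9, size=(1, n_tasks))  # every task loads on it
marks = ability @ loadings + 0.4 * rng.normal(size=(n_students, n_tasks))

corr = np.corrcoef(marks, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

# With a general factor present, the first eigenvalue dwarfs the rest.
share = eigvals[0] / eigvals.sum()
```

The simulation only shows the signature of a dominating factor; it says nothing about which of the candidate explanations (ability, memory, speed, pressure, English) actually produced it in the real data.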
It is possible to draw up a similar list of non-academic factors that are likely to affect performance in pracs. The most obvious factors are those that stem from the task itself: the ability to cope with the interface provided to the student in pracs. This usually includes a text editor, a compiler, and some mechanism for running programs. It may also include the ability to perform basic operating system functions, such as creating a directory or copying a file, and theoretical background knowledge about how the filesystem works.
A student who does not understand what a "directory" or "folder" is may suffer as the semester progresses if it means that she cannot organise her prac work efficiently. It would be a mistake to overlook the other differences between pracs and exams. While taking an exam is an inherently solitary task, pracs are often social. Students may learn through interaction with peers; on the other hand, a shy student may feel exposed and uncomfortable. Of course, all students need to be able to interact with their prac demonstrator; a student who is too shy to ask for help is at a definite disadvantage.
In this way, social skills may be considered to be modal competencies. Another modal factor --- it surely could not be called a competency --- is cheating. In general, it is much easier for a student to cheat without detection in a prac than it is in an exam. It is to be hoped that such activity is not sufficiently prevalent to make a large difference in the competency map for the subject, but the possibility that it has affected some marks cannot be ignored, and it must surely affect prac and exam marks differentially.

Cluster analysis
It was hoped that the clustering would provide an insight into the structure of the course, ideally a competency-based decomposition that could be used as the basis for assessment design. Unfortunately, no such decomposition can be read from the results that we obtained. However, some valuable and unexpected insights were gained in the process of the analysis. The most obvious result is that prac tasks cluster with prac tasks and exam tasks with exam tasks. This tendency seems more marked among the less able students and less marked among the better-performing students.
Compare the clustering in the maps for BMAB, AMAB and TMAB. In the first of these, which shows the results from the bottom third of students, all but one of the prac questions cluster together. In the second, showing the results for the whole student body, the pracs cluster with exam tasks about digital logic and test data. In the third, representing the top third of students, the pracs (excluding prac 11) cluster with exam tasks on digital logic, data structures, test data, and the two smaller programming questions.
Clearly, the better a student performs at computer science, the more strongly correlated her prac and exam results are. This could be interpreted as simply showing that a student in the top third of the class gets good marks at everything, but the reality is more complex than that: if the observed tendency is simply the result of selecting students on the basis of ability, then the bottom third of students (which is likely to be as homogeneous in terms of ability as the top third) should also show stronger correlations.
In fact, the observed correlations are weaker for the bottom third of students. This may indicate that the weaker students are not applying theory to practical situations, or are not allowing lessons learnt in practice to illuminate their understanding of theory; alternatively, it could indicate some overriding factor that affects one kind of task but not the other: for example, difficulty using the computer system or difficulty with English. This second possibility mirrors the idea of the "modal competency" introduced in Section 3. It is significant that the exam task for which students were asked to develop sets of data to test a function is clustered with most of the prac questions for the whole-group and top-group data. The correlation between the test-data task and the prac marks could indicate that the ability to test code fully is a marker for the ability to program; alternatively, it might only mean that only students who finish the pracs get practice at testing.
The former hypothesis is easy to test: give intensive lessons in software testing to a group of students, and see whether their programming ability improves as a result. It is rather surprising that the programming questions from the exam do not cluster with the prac questions for the whole-group data. It is informative that the two smaller programming questions are clustered with the prac questions for the top third of students. This is further evidence that the majority of students do not apply theory to practice.
Students' programming practices need to be investigated. The program design principles that students are told to apply would serve them well enough in an exam situation, but if they are actually applying a rapid-prototyping code-test-debug cycle in pracs, in which the compiler rather than the designer forms the first line of defence against bugs, then they will not perform well in any context where a computer is not present. In most cases, the harder questions cluster together. For example, for set AMAB, the exam questions about linked lists and bubblesort, the largest programming question on the exam, and prac 12 cluster together. For set ALL, the equivalent cluster contains the exam questions on linked lists and bubblesort, the largest programming question on the exam, and four of the prac bonus questions. None of these tasks had an average mark of more than 33%, and for prac bonus 8 and prac 12 the average mark was under 7%.
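The tercile comparison underlying these observations can be sketched as follows. The marks matrix and the division into prac and exam columns are invented stand-ins for the real data sets (BMAB, AMAB, TMAB).

```python
# Hedged sketch: split students into thirds by total mark and compute the mean
# prac--exam cross-correlation within each third. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(2)
marks = rng.uniform(0, 10, size=(90, 6))   # 90 students, 6 assessed tasks
prac_cols, exam_cols = [0, 1, 2], [3, 4, 5]

order = np.argsort(marks.sum(axis=1))      # rank students by total mark
thirds = np.array_split(order, 3)          # bottom, middle, top

def mean_cross_corr(rows):
    """Mean correlation between each prac task and each exam task."""
    c = np.corrcoef(marks[rows], rowvar=False)
    return c[np.ix_(prac_cols, exam_cols)].mean()

by_tercile = [mean_cross_corr(t) for t in thirds]  # one value per third
```

On the real data, the thesis's finding would correspond to the top-third value exceeding the bottom-third value; with the synthetic marks above, no such ordering is expected.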