Building Training for Capability Development and Validation

July 18, 2012, by GTI President Chadd Harbaugh

(Part two of a multi-part report on redefining law enforcement tactical training)

Part One - The Dirty Secret About Most Current Law Enforcement Training Programs
Part Two - Building Training for Capability Development and Validation
Part Three - The Planning Phase of Training and Pre-Functional Area Analysis Work

"What is a good police officer or SWAT operator and how do you develop one?" "What is a good tactical team and how are they organized or developed?"

These are two questions I have asked for nearly two decades. As simple as they seem, they are incredibly difficult to answer. Ask 100 different officers and you will receive at least as many different responses. Even more intriguing to me than the varying answers is the obviously strenuous thought process these officers and supervisors go through in attempting to answer them. The majority of officers I have posed the questions to have a decade or more on the job and thousands of hours of training in their respective fields. How is it that so many individuals serving in their profession cannot answer seemingly routine questions about their own career fields? The problem with the questions as worded, of course, is that they are subjective. What constitutes a "good" operator for one person does not match the definition of another. Therein lies an inherent difficulty: very few individuals or agencies have spent the time to carry out a task analysis, and fewer still are performing capabilities-based assessments (CBAs), so we are left without core competencies, mission essential tasks, or quality definitions for job functions. We cannot manage what we cannot measure, and we cannot measure what we cannot define. Our industry spends a tremendous amount of time, energy and funding on managing tactical teams and law enforcement agencies, but we have spent very few resources on defining or measuring them. As a whole, we have put the proverbial cart before the horse, and training is one of the more visible aspects of this flaw.

Collectively, U.S. law enforcement agencies spend hundreds of millions of dollars annually on training for their officers based on the title of a class or its reputation, without knowing much more than basic course content. Industry-wide we waste millions every year on training that is redundant, outdated, untimely, or inaccurate, and that meets neither the real needs of the student nor the needs of the agency. I used DHS as an example in last month's newsletter: you cannot throw tens of billions of dollars at a problem and just hope it goes away. You need a strategy. You need to have capability development in mind from the start; to begin, you need a baseline and some direction. Before you can "fix" or address a problem or gap, you have to know that you have one. Once the issue is recognized, the most efficient methods for fixing or filling it can be evaluated and implemented. Think of training as a solution (one of many) to a performance problem or gap, just as prescription medication or major surgery is a solution to a medical issue. Prescription without diagnosis is malpractice. Additionally, as I will discuss later, training is not the be-all and end-all solution to performance issues, even though it is typically thought of as such. Not all problems are the result of a lack of training, and training cannot solve all problems.

In 2010, a federal agency that needed guidance validating a tactical training program approached me, requesting my assistance in conducting the validation and in serving as a subject matter expert for developing any necessary program changes. The agency had started the program out right: they had conducted a very thorough and impressive front end analysis (FEA) that defined the need for a dedicated agency to train the various organizations holding jurisdiction within a particular domain; they conducted Developing a Curriculum (DACUM) studies to define the skills and competencies the students would be required to possess in their real-world operational capacities; they conducted benchmark studies of other organizations to see how they conducted business and what their personnel selection, doctrine, organization, training, equipment, leadership, training facilities and standards looked like; they defined the mission essential tasks of the core organizations that would attend their training; and they defined evaluation and assessment tools and examined the tasks, conditions, and standards for individuals so they could build training programs that developed capabilities to meet those requirements. The FEA left out very little and clearly defined what training needed to occur and why. This is the ideal way to start a new program of instruction (POI). With the information they possessed through the FEA, the organization could have built the most relevant, valid, timely and effective training for their students. But they didn't. The problems for the organization developed after the FEA. There was a tremendous disconnect between the FEA and everything that occurred subsequent to it.

After the tedious process of reviewing thousands of pages of documents, examining everything from the FEA and the documents referenced within it to the programs of instruction, the training schedules, the training organization's structure, the mission essential task lists (METLs) and design of the federal organizations that sent operators to the courses, and the legal authorities of all involved organizations, I discovered that, like the vast majority of law enforcement training programs, serious problems existed throughout the organization and the training cycle. However, unlike most law enforcement agencies, which continually stand up new programs without thorough analysis, this organization had spent in excess of $3 million conducting a very thorough FEA. Its problems occurred after the fact. Thus, the importance of continuous improvement, effective communication, stakeholder involvement and strong leadership throughout the entire training cycle became more painfully evident than ever.

Because the validation was sensitive in nature and dealt with national security issues that could become political nightmares should gaps be discovered, the organization wanted me to validate the curriculum narrowly and not look into organizational performance gaps or issues; doing so could expose problems at higher command levels. The problem with that approach, however, is that it is impossible to validate a POI without examining the training cycle as a whole, and if a gap or issue is identified, a legitimate validation must identify its source. As mentioned earlier, the problem may lie with the training program, but it may alternatively be the result of insufficient or improper doctrine, organizational structure, equipment, leadership, personnel or facilities. Upon initial review of the organization's program, issues were evident at every stage of the training cycle.

According to the U.S. Army Training and Doctrine Command (TRADOC), capability is defined as "the ability to generate an effect under specific conditions and to certain standards." To the organization's credit, they wanted to validate their curriculum because they wanted to ensure they were providing the best program available to build capabilities and performance. Capability and performance are about factors such as culture, mission, workflow, goals, environment, knowledge, skills, abilities and experience all working together to produce something that has value. Both are about output and results. For instance, when a football team has potential but the coach is not talented, the performance fails. When the players are individually talented but are not working together, the performance fails. When the team and coaches are all working in concert at the highest levels but the bus breaks down or fails to show up at all to take them to the game, the performance fails. Performance, therefore, whether it involves a football team playing a game or a SWAT team on a hostage rescue mission, must occur on multiple levels simultaneously.

While the organization wanted a validation of only its POI, several of its performance factors fell well outside that realm. For instance, one of the training components involved helicopters and vessels, yet the schoolhouse had neither assigned to it and could not routinely access them from sister agencies. This had a direct impact on the school's ability to train students to perform certain tasks under set conditions: it could not remove fast roping and ship boarding from the curriculum without impacting the students' capabilities. This is an example of an organizational problem that cannot be solved through the training POI alone. Another example involved students and standards, crossing both the job/performer level and the process/organizational level. The training agency's staff and senior commanders were frustrated because many students were unable to pass the minimum performance standards, specifically those dealing with the later stages of the live-fire range work. The agency had very little control over the personnel selected to attend the training program and only minimum prerequisites. Politics got involved (as is so often the case with law enforcement training), and instead of changing the selection criteria or revising the POIs to better prepare the students for the live-fire tests, they lowered the qualification standards to meet their student output goals.

Other problems existed within the POI itself:

  1. It did not address the human body's natural response to threats (the sympathetic nervous system response), so some of the tactics, techniques and procedures (TTPs) conflicted with the body's actual reaction to a real-world threat.
  2. There was little to no focus on the cognitive demands of the tactical decision-making process.
  3. The training did not sufficiently address modern real-world threats: students learned the basics of CBRNE in the classroom, but no basic field operations were conducted in a personal protective equipment (PPE) posture.
  4. The curriculum trained only one basic entry tactic (dynamic entry) and did not even introduce students to alternate entry tactics or techniques, severely limiting their tactical options, their safety and their overall effectiveness.
  5. The curriculum review process was scheduled to occur only once every three years, so it was not keeping pace with changes in technology, enemy TTPs or the evolving real-world threat.

Additionally, there were no legitimate vehicles to monitor the effects of the training programs within the organizations that attended the training; there were no evaluation methods in place; there was no real influence over sustainment training in the various units the students returned to; and the schoolhouse's own feedback process did not properly address training retention or transfer-of-learning issues. This organization had multiple problems, and all of the stakeholders were suffering the end results.

Despite the tremendous effort on the FEA, this organization was experiencing many of the same trials and tribulations faced by state and local agencies in every region of the U.S. They wanted to know that the program of instruction built the capabilities required to meet the modern threats their students might face. But their pre-planning efforts all but ceased once they completed the FEA, and as a result, validating their curriculum was going to be difficult and costly, and it would expose many more gaps and flaws lying just below the surface than they were prepared to handle.

Validating Programs of Instruction Through Individual Operator Capabilities

Validation of curriculum is a major concern: the agency must know that its curriculum meets or exceeds operational requirements, yet it must maintain fiscal responsibility. Return on investment (ROI) must be weighed continuously, particularly in an era of large public agency deficits. Commanders constantly face two opposing pressures: financial savings and capability development. All of this requires a tremendous amount of pre-planning.
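
The arithmetic behind a training ROI figure is simple; the hard part is monetizing the benefits. As a minimal sketch (the dollar figures and cost categories below are hypothetical, invented purely for illustration), the common formulation expresses ROI as net program benefits divided by full program costs:

    def training_roi(total_benefits, total_costs):
        """Training ROI as a percentage: (net benefits / costs) * 100."""
        return (total_benefits - total_costs) / total_costs * 100.0

    # Hypothetical one-year example for a 40-hour tactical course.
    costs = (
        35_000    # instructor fees and curriculum development
        + 12_000  # range time, ammunition and role players
        + 48_000  # student salaries and backfill while in training
    )
    benefits = 140_000  # estimated from reduced liability claims and remedial training

    print(f"ROI: {training_roi(benefits, costs):.1f}%")  # prints "ROI: 47.4%"

The point of the exercise is not the percentage itself but the discipline it forces: every benefit claimed must be traceable to a measured, monetized result rather than to student headcount.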

To validate any training program, the first question you must ask is, "What do we want the student to be capable of?" It is no coincidence that this is also the first question you ask yourself when you build your enabling and terminal learning objectives: enabling learning objectives are the building blocks that carry the student to the terminal learning objective. (As a hypothetical example, "engage a target at 25 meters from the ready position" might be one of several enabling objectives supporting the terminal objective "clear a structure as a member of an entry team.") This seems simple; however, it is where things become complicated. If you consider the "end" product to be the graduate's abilities as a result of the program, a variety of issues open up that require a different kind of analysis: measuring the transfer of learning and the effectiveness of the training. A major consideration in designing any training program is its relevance to operational requirements. One commonly accepted approach to measuring training effectiveness, known as the Kirkpatrick model of training evaluation, consists of up to five levels of assessment. It is most often presented as a four-level model, but the fourth level is sometimes split in two, with the fifth level representing a comparison of costs and benefits quantified in dollars. Though this model is very influential and accepted by most U.S. government agencies as a valid tool, it makes several assumptions about training and the subsequent transfer of skills to the field, assumptions that are rarely discussed. Some of the more critical ones are:

  1. That there is some clear link between learning how to do something in a course and actually being able to perform a set of tasks applying the knowledge.
  2. That because someone can demonstrate a TTP in a course they will actually use the TTP in the field.
  3. That the climate or culture of the team or organization the student returns to following training has no effect on whether or not learning will be transferred back to the workplace.
  4. That what a student believes about their abilities has no effect on whether or not learning will be transferred.

All of these assumptions are flawed: for each one, the link between what someone learns and how they ultimately perform is a matter of correlation or regression, not a guarantee. Professional trainers and evaluators understand this.
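
For reference, here is a minimal sketch of the model's levels and the kind of question each one asks; the wording of the questions is mine, offered for illustration rather than drawn from any agency's evaluation instrument:

    # The Kirkpatrick levels, with the cost/benefit split counted as a fifth.
    # Each level answers a different question about the same training program.
    KIRKPATRICK_LEVELS = {
        1: ("Reaction", "Did students find the course relevant and engaging?"),
        2: ("Learning", "Can students demonstrate the knowledge and TTPs at course end?"),
        3: ("Behavior", "Do graduates actually apply the TTPs back in their units?"),
        4: ("Results", "Did operational outcomes such as mission success improve?"),
        5: ("ROI", "Do the monetized benefits exceed the program's full costs?"),
    }

    for level, (name, question) in KIRKPATRICK_LEVELS.items():
        print(f"Level {level} ({name}): {question}")

Notice that the four assumptions above all live in the gap between Level 2 and Level 3: a schoolhouse that measures only learning has no evidence about behavior in the field.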

An additional challenge of validating POIs through individual operators involves the tactical decision-making process. It is relatively easy and straightforward to design a testing and evaluation system for psychomotor skills such as weapons manipulation, accuracy, movement through a structure and collapsing on sectors, and it is equally rudimentary to develop testing and evaluation for knowledge-based skills such as minimum stand-off distances, explosive charge calculations and the history of terrorism. Measuring a student's cognitive abilities, however, is immensely more challenging. One of the most important aspects of any tactical operation is the operator's ability to make sound tactical decisions under stress. Tactical operations are typified as tense, uncertain and rapidly evolving, and making sound decisions under compressed timelines and these conditions is challenging in and of itself. Add into the equation that the operators and leaders making the decisions will be subjected to varying degrees of a sympathetic nervous system response, which further strains cognitive abilities, and it becomes obvious that cognition needs to be a major consideration in any training and evaluation program. The challenge becomes: how do you measure the decision process, or the student's selection of TTPs, when the variables affecting the outcomes are ambiguous and moderated by degrees of uncertainty that are most often expressed as subjective probabilities? And how do you measure the experiential aspects, which form the second part of the equation as Bayesian decision theory explains? I will cover the tactical decision-making spectrum more thoroughly in a subsequent article detailing training design considerations.
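
To make the Bayesian framing concrete, here is the standard formalism in outline; this is textbook decision theory, not a model taken from any agency's evaluation program. The operator's experience enters as prior beliefs P(s) over possible threat states s, observed cues e update those beliefs, and the "best" action a* is the one that maximizes expected utility:

    \[
    P(s \mid e) \;=\; \frac{P(e \mid s)\,P(s)}{\sum_{s'} P(e \mid s')\,P(s')},
    \qquad
    a^{*} \;=\; \arg\max_{a \in A} \; \sum_{s} P(s \mid e)\, U(a, s)
    \]

Here U(a, s) is the value of taking action a when the true state is s. The formulation shows exactly why cognition is so hard to grade: two students can select different TTPs from the same cues and both be defensible, because their priors, shaped by different experience, legitimately differ.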

Though the process is difficult, particularly when performed as an afterthought to the development of a POI, we can validate tactical programs through individual operator effectiveness. I will discuss the process in more detail in later articles covering both the functional needs analysis (FNA) and the functional solutions analysis (FSA).

Validating Programs of Instruction Through Team Effectiveness

As you just read, there are inherent difficulties in validating POIs based on individual operators, and it stands to reason that the difficulty increases when we examine a team of operators. Many would argue that the ultimate end result of any tactical training program is to produce an operator who can be integrated as a functioning member of a tactical team or unit upon graduation. There is strong merit to this argument, but it opens Pandora's box from a validation standpoint: human behavior.

The interdependence of human behavior is a prevailing feature of any tactical team operation. It influences our response to the event (through the actions of suspects, innocents and hostages), it influences our effectiveness (through intra-team behaviors), and the two combined produce a compounding effect. Effective teamwork and coordination are highly desired and are a mandatory component of these operations. In most task-oriented tactical situations, the effects arising from the interdependence of human behavior influence system performance and, accordingly, have consequences for training, so they should be evaluated for effectiveness. The requirement for training is not only to enhance a team's capability to perform according to formalized standard operating procedures and to handle contingencies as they arise, but also to enhance performance through coordinated team activities.

For measuring team effectiveness, an essential difficulty remains: defining team skills. Team skills and team performance are affected by attitude, interaction, conformity, identity, confidence, integration, coordination, communication and flexibility, among a host of other variables, and these variables typically lack consistent, objective measurement models. The ambiguous nature of terms such as "attitude" and "coordination" used to describe team behaviors must be dealt with at the outset of any attempt to assess the qualitative facets of these activities. If such behaviors are identified as critical to team performance during the task-analytic phase (which will be discussed in detail in a subsequent article on conducting a functional area analysis), they must be functionally defined in terms that evaluators reliably agree upon. If, for example, effective communication is deemed a critical behavior for effective team functioning, raters should have in their repertoire of evaluation skills behaviorally anchored criteria against which interpersonal communications can be meaningfully judged.
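
As a minimal sketch of what "behaviorally anchored" means in practice, consider the following rubric for the communication example; the scale points and anchor wording are hypothetical, offered only to show the structure, not taken from any published standard:

    # A behaviorally anchored rating scale (BARS) for one team skill.
    # Each numeric score is tied to an observable behavior, so two raters
    # watching the same exercise should arrive at the same score.
    COMMUNICATION_BARS = {
        1: "Critical calls (threat location, status changes) are not passed at all.",
        2: "Information is passed late, or only when prompted by the team leader.",
        3: "Required calls are made, but nonstandard wording forces repeats.",
        4: "Standardized calls made promptly; minor omissions under high workload.",
        5: "All calls prompt, standardized and acknowledged; no repeats needed.",
    }

    def anchor_for(score: int) -> str:
        """Map a rater's score back to its behavioral anchor for the debrief."""
        return COMMUNICATION_BARS[score]

    print(anchor_for(3))

Because every score is tied to an observable behavior rather than to a rater's impression, inter-rater agreement becomes something you can actually check.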

Training law enforcement officers and soldiers to perform tactical and strategic operations is deadly serious business with real-life consequences, yet far too often it is taken lightly or completely for granted. Professional administrators, curriculum developers and trainers understand that there is both an art and a science to developing and delivering training and to building and enhancing human performance. They also understand that developing the knowledge and cognitive aspects of the tactical decision process is every bit as important as focusing on psychomotor skills - that all three domains must work in concert with one another to achieve operational success. Professional trainers understand that training is not about giving a student knowledge or a capability that will be retained for only a few days and never used in the field - it is about providing capabilities that are retained and transferred to real-world application. Professional trainers understand that standards must be developed and maintained as a consistent measure of both proficiency and capability. Far too often this is not the case, and those who suffer the consequences of incompetent trainers and programs are the operators themselves - and ultimately the people they serve to protect. As I mentioned in last month's newsletter, while everyone professes intuitively to be able to recognize a good team or good tactical operators - the "I know it when I see it" phenomenon - few are able to articulate their dimensions with sufficient clarity to permit the development of training procedures for producing them or empirical programs for assessing them. This needs to change. Our system is broken and needs a major overhaul.

As an industry, we have not proactively taken steps to solve problems as they emerge, let alone forecast solutions to problems we have yet to witness. True leaders and trainers seek solutions to forecasted problems in advance of need. As I have stated before, agency heads and commanders need to learn how to calculate the ROI of training and development investments. Top leaders need to communicate across the organization that investments in training and development are expected to produce clearly identified results. Along with these key executives, trainers and curriculum developers need to be held accountable for building new capabilities and maximizing performance, not just for how many students they have trained. Collectively, there needs to be a much deeper understanding of human and organizational performance - how to define it, how to construct it, how to sustain it and how to measure it. As Colin Powell put it: "'If it ain't broke, don't fix it' is a slogan of the complacent, the arrogant or the scared. It's an excuse for inaction, a call to non-arms. It's a mindset that assumes (or hopes) that today's realities will continue tomorrow in a tidy, linear and predictable fashion. Pure fantasy."

In next month's newsletter we will begin to explain the process of changing your tactical training, calculating returns on investment and gaining a deeper understanding of the art and science behind human and organizational performance by discussing the first of the five phases of training: the planning phase.

Chadd Harbaugh
President
Government Training Institute