Risk Management and Insurance

Lead Author(s): Rob Hoyt, David Sommer

The text covers the foundations of risk management and insurance. The broad view of risk reflected in the concept of enterprise risk management is incorporated throughout the text, while still maintaining features that are important for introductory courses in risk management and insurance. The text considers property, liability, life, health, and income risks for both individuals and organizations. For all risks discussed, both insurance and non-insurance solutions are analyzed.

Chapter 2

Risk Identification and Evaluation

McCullough and Associates is a management consulting firm headquartered in Boston, with operations throughout the northeastern part of the United States. The company was formed in 2014 by its owner and chief operating officer, Barry McCullough. McCullough currently has about 200 consultants and an additional 150 employees working in supporting roles. The firm has developed a reputation for being especially helpful in solving personnel and financial management problems. As Barry McCullough and his management team plan for the future, they want to be very systematic in identifying key issues, opportunities, and potential problems that may confront the company in the years ahead.

One area that has not received much attention to date is that of pure risk exposures. Thus, Barry McCullough has recently assigned one of his top consultants to take on the responsibility for identifying all of the potential risks confronting his firm. After the exposures have been identified, Barry wants to know the relative importance of each one. For example, is the risk of loss due to fire potentially more damaging than the risk of adverse liability judgments? What losses are most likely to happen in a given year, and how much would probably be lost in each case? Can consistency be expected from one year to the next? Barry believes that it is impossible to make good risk management decisions without first having answers to these and similar questions. Such issues form the basis for this chapter.

After studying this chapter, you should be able to:

  • Explain several methods for identifying risks.
  • Identify the important elements in risk evaluation.
  • Explain three different measures of central tendency.
  • Explain three different measures of variation.
  • Discuss the concept of a probability distribution and explain its importance to risk managers.
  • Give examples of how risk managers might use the normal, binomial, and Poisson distributions.
  • Explain how the concepts of risk mapping and value at risk are used in an enterprise-wide evaluation of risk.
  • Explain the importance of the law of large numbers for risk management.

INTRODUCTION

The risk management process and its context were introduced at the end of the previous chapter. You’ll recall that the risk management process has four sequential steps: risk identification, risk evaluation, selection of appropriate risk management techniques, and implementation and review. The first two steps in the risk management process are discussed in this chapter; subsequent chapters address the remaining steps in the overall process.

RISK IDENTIFICATION

The identification of risks and exposures to loss is perhaps the most important element of the risk management process. Unless the sources of possible losses are recognized, it is impossible to consciously choose appropriate, efficient methods for dealing with those losses should they occur.

A loss exposure is a potential loss that may be associated with a specific type of risk. Loss exposures are typically classified in the same way as are pure risks, which were discussed briefly in Chapter 1; that is, loss exposures can be categorized as to whether they result from property, liability, life, health, or loss of income risks.

All of these exposures will be analyzed in greater detail throughout this book. At this point, it is helpful to consider techniques for identifying and evaluating risks present in particular settings. Approaches used by many risk managers involve loss exposure checklists, financial statement analysis, flowcharts, contract analysis, on-site inspections, and statistical analysis of past losses.

Loss Exposure Checklists

One risk identification tool that can be used both by businesses and by individuals is a loss exposure checklist, which specifies numerous potential sources of loss from the destruction of assets and from legal liability. For each item on the checklist, the user asks the question, “Is this a potential source of loss to me or my firm?” In this way, the systematic use of loss exposure checklists reduces the likelihood of overlooking important sources of risk.

Some loss exposure checklists are designed for specific industries, such as manufacturers, retail stores, educational institutions, or religious organizations. Such lists tend to be quite lengthy, because they attempt to cover all the exposures that various entities are likely to face. Consideration is given to the cost to repair or replace property, to income losses that may accompany the destruction of assets, and to likely sources of legal liability.

A second type of checklist focuses on a specific category of exposure. For example, a checklist might deal with potential losses associated with real and personal property. Both the risk of physical damage and the risk of liability arising from the use of property would be explored through the questions included in such a checklist. Although many items may not be relevant to a particular organization, the questions usually address specific exposures in considerable detail. Thus, these checklists can be helpful not only in risk identification but also in compiling information necessary for an in-depth evaluation of risks that are identified.

Financial Statement Analysis

Another approach that can be used by businesses to identify risks is financial statement analysis. Using this method, all items on a firm’s balance sheet and income statement are analyzed in regard to risks that may be present. By including budgets, long-range forecasts, and written strategic plans in the analysis, this method can also help identify possible future risks that may not currently exist.

To illustrate this method of risk identification, consider the asset categories included on the balance sheets of business entities. Buildings owned by a firm are usually noted on its balance sheet, and leased buildings may be noted in footnotes to the financial statements. Future building acquisitions may be noted in budgets and strategic plans. Once such present and future buildings are identified, potential losses associated with them can then be considered. The loss exposures associated with building damage may include repair costs, the value of inventories and equipment inside, loss of income while the building cannot be used, and injuries to employees and customers inside the building. If a building is leased, relevant concerns would also include the disposition of the lease if the building is destroyed, including cost estimates of alternative facilities. This example does not begin to exhaust the range of possible losses that might result from damage to a building. It does, however, illustrate the thought process that is essential to the financial statement analysis method of risk identification. 

Flowcharts

A third tool—the flowchart—is especially useful for businesses in identifying sources of risk in their production processes. The simplified flowchart in Figure 2-1 illustrates how this technique can pinpoint areas of potential loss. The question may be asked, “What events could disrupt the even and uninterrupted flow of parts to the final assembly floor, on which the whole production process depends?” For example, where are paints and solvents kept for the activities undertaken at Stage 3 in the figure? Are appropriate steps being taken to safeguard these materials from fire? Are floors kept clean and free of grease that might cause spills? Are any particular dangers threatening the storage of finished products that may require special protection? If the finished products are fragile, are appropriate protective measures being taken in loading and unloading?

FIGURE 2-1

Flowchart for Production Process

Only through careful inspection of the entire production process can the full range of loss exposures be identified. And for some firms, even that may not be sufficient. It may be important, for example, to expand the flowchart to include the suppliers of parts and materials, particularly if a firm’s production process is dependent on only a few suppliers. Thus, if there is only one possible supplier of a crucial part, a complete risk analysis will include identification of potential losses to that supplier as well as to the firm itself. Similar situations may arise if a firm manufactures products that are purchased by only a few customers. In this case, expansion of the flowchart to include customers will help identify risks that might otherwise be overlooked.

Contract Analysis

The analysis of contracts into which the firm enters is another method for identifying potential exposures to risk. It is not unusual for contracts to state that some losses, if they occur, are to be borne by specific parties. For example, a company may require building contractors that it hires to bear the cost of any liability suits arising out of the builder’s construction operations. In this way, the cost of suits that might otherwise be incurred by the hiring firm will be borne by the builder.

This type of contractual liability may be found not only in construction contracts but also in sales contracts and lease agreements. For example, a property owner with a superior bargaining position may require her tenants to be responsible for all injuries that occur on the leased premises, even if caused by the property owner’s own negligence. In other situations, she might agree to bear the liability arising out of a tenant’s negligence. Ideally, the specification of who is to pay for various losses should be a conscious decision that is made as part of the overall contract negotiation process. And this decision should reflect the comparative advantage of each party in managing and bearing the risk. But even where that ideal is not possible, it is important to examine all contracts so that important sources of risk are identified prior to the occurrence of any losses.

On-Site Inspections

Because some risks may exist that are not readily identifiable with the tools discussed thus far, it is important for business risk managers to periodically visit the various locations and departments within the firm. During these visits, it can be especially helpful to talk with department managers and other employees regarding their activities. Through this type of personal interaction, the risk manager can become better informed about current exposures to risk as well as potential future exposures that may arise.

Statistical Analysis of Past Losses

A final risk identification tool that may be helpful for larger firms is that of statistical analysis of past losses. A risk management information system (RMIS) is a computer software program that assists in performing this task. Some characteristics of past losses that may prove to be important in this regard include the cause of loss, the particular employees (if any) involved, where the loss occurred, and the total dollar amount of the loss.

To illustrate how these factors can prove important, suppose a trucking company experiences several vehicle accidents involving the same driver. Upon further investigation, the firm may discover that it has several problem drivers because it is not adequately checking the driving records of its employment applicants. Similarly, a restaurant chain that experiences a large number of employee injuries at its Dallas location may have safety hazards present that warrant additional investigation. As risk management information systems become increasingly sophisticated and user-friendly, it is anticipated that more businesses will be able to effectively use statistical analysis in their risk management activities. The trend toward web-based access to RMIS has also enabled firms to provide systems access to decision makers throughout the firm. This improved access provides decision makers with immediate availability of important risk management information.

Concept Check 2.1

All of the following are common approaches to risk identification EXCEPT

A. best guess estimation
B. loss exposure checklists
C. financial statement analysis
D. on-site inspection

Professional Perspectives

Risk Management Information Systems (RMIS)

Risk Management Information Systems are software tools designed to assist risk managers in their functions. Traditional RMIS software emphasizes claim management, safety monitoring, and financing losses. Other tools available in an RMIS include management of insurance policies, exposure data, and insurance certificates. The following are examples of the successful use of RMIS by risk managers:

  1. Reporting: Creation of reports that summarize loss payments and estimates of future losses. Accounting and finance departments use these reports in preparing the organization’s financial statements.
  2. Examination of the causes of accidents: By identifying the reasons for accidents, risk managers can determine where safety and loss prevention expenditures would be most helpful. A large number of employees or customers slipping and falling in a certain area may warrant a review of cleanup procedures or a study of the costs for installing special carpet.
  3. Review of the claims adjustment process: Risk managers use RMIS to evaluate the performance of claims adjusters by comparing actual results to standards. Typical evaluation areas are promptness of initial contact, case settlement time, amount paid for type of injury, and accuracy of the adjuster’s case value estimate.

The RMIS marketplace is evolving, with new products being introduced regularly. The primary distribution platform for new products is the Internet. The emergence of the Internet as a critical tool for business communication and services has rapidly impacted RMIS, as many vendors have “Web-enabled” new and existing products to take advantage of the Internet’s broad availability and low end user maintenance costs. The Internet has also permitted older “legacy” systems to remain viable in the marketplace because end-users work with a newer, standard interface even though processing may be occurring on a mainframe or other older computer system. Many systems are now taking advantage of the growing cloud computing capabilities.

Software products have also been introduced to serve special application needs, such as catastrophe simulation software to assist in examining the effects of disasters on a group of exposed properties, and hazardous material tracking programs to record the uses and locations of potentially hazardous items.

Source: Ahmed Moinuddin, Vice President–Technical Services, INSUREtrust.com, LLC, Atlanta, Georgia.

 

RISK EVALUATION

As noted briefly in Chapter 1, once a risk is identified, the next step in the risk management process is to estimate both the frequency and severity of potential losses. In this way, the risk manager obtains information that is helpful in determining the relative importance of identified risks and in selecting particular techniques for managing those risks.

Before proceeding with evaluating the frequency and severity of a risk, it is important to consider whether you are trying to measure inherent risk or residual risk.  Inherent risk would reflect the frequency and severity of the risk before any actions are taken to manage the risk, whereas residual risk reflects the amount of risk remaining after risk management activities have been implemented.  For example, assume that a risk manager is analyzing fire risk for a warehouse.  If no actions have been taken to manage this risk, the risk manager would measure the inherent risk of fire.  If the risk manager then takes actions to lower the probability of a fire occurring in the warehouse and also purchases fire insurance, these actions obviously reduce the frequency and severity of fire losses to the company.  The amount of risk still remaining would be the residual risk.  

In some cases, no particular problem would arise even if losses were incurred regularly, because the potential size of each loss is small. Thus, the daily occurrence of some inventory breakage may be an expected part of some businesses and would warrant only minimal attention from the risk manager. But other losses that occur infrequently yet are relatively large when they do occur (such as accidental deaths or destruction by a large fire) may be treated entirely differently. Such losses might cause bankruptcy if they were to happen with no means in place to counteract the resulting adverse financial effects for the firm. 

One complicating factor in evaluating exposures is that many losses do not result in complete destruction of the asset involved. For example, if Jim Carson’s business is struck by lightning, the building will not necessarily burn to the ground. In evaluating the risk of loss from this peril, Jim should consider three things: (1) the frequency with which lightning may strike his building, (2) the maximum probable loss that would likely result if lightning did strike, and (3) the maximum possible loss if the building were completely destroyed. The difference between these last two factors is that the maximum probable loss is an estimate of the likely severity of losses that occur, whereas the maximum possible loss is an estimate of the catastrophe potential associated with a particular exposure to risk. In other words, what is the worst possible loss that can result from a given occurrence? To assess that potential, Jim needs to consider not only the loss of the building itself but also the destruction of inventory and equipment located inside. Furthermore, if Jim would seek to operate his business from another location in the event of loss, then his estimate of maximum possible loss should also include the cost of such temporary facilities.

The actual estimation of the frequency and severity of losses may be done in various ways. Some risk managers consider these concepts informally in evaluating identified risks. They may broadly classify the frequency of various losses into categories such as “slight,” “moderate,” and “certain,” and may have similarly broad estimates for loss severity. Even this type of informal evaluation is better than none at all. But as risk management becomes increasingly sophisticated, most large firms attempt to be more precise in evaluating risks. It is now common to use probability distributions and statistical techniques in estimating both loss frequency and severity. These topics are considered in the next several sections of this chapter.

Risk Mapping or Profiling

With the evolution of integrated or enterprise risk management, alternative methods of risk identification and assessment have emerged. One such method is risk mapping, sometimes referred to as risk profiling. Since integrated risk management is based on identifying all the risks facing the firm, it is not unusual for a firm to identify in excess of 100 risks when using this approach. Cataloguing and making sense of so many risks requires a structured process. Risk mapping or profiling involves arraying these risks in a matrix, with one dimension being the frequency of events and the other being the severity. Each risk is then marked to indicate whether it is covered by insurance or not. By considering the likelihood and severity of each of the risks in this matrix, as well as the extent to which insurance protection is already available, it becomes possible for the firm to identify the risks that are most likely to seriously affect the firm’s ability to achieve its goals.

Concept Check 2.2

Which of the following are true of risk mapping?

A. It doesn't consider both frequency and severity of loss
B. It is common in firms that are focusing on ERM
C. It only focuses on pure risks


Statistical Concepts

Before discussing some techniques for statistically estimating loss frequency and severity, it is useful to review some essential concepts from the field of probability and statistics.

Probability

The probability of an event refers to its long-term frequency of occurrence. All events have a probability between 0 (for an event that is certain not to occur) and 1 (for an event that is certain to occur). To calculate the probability of an event, the number of times a given event occurs is divided by all possible events of that type. For example, if 150 accidents are observed to occur to 1,000 automobiles in operation, it can be said that there is a 0.15 probability of an accident (150 ÷ 1,000). This concept is the same one described in Chapter 1 in defining the term chance of loss. A probability distribution is a mutually exclusive and collectively exhaustive list of all events that can result from a chance process and contains the probability associated with each event. Thus, a risk manager may monitor the events (losses) that occur to a fleet of automobiles to determine how often losses of a particular size occur. The firm may then use that distribution to predict future losses.
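As an illustrative sketch (the loss figures below are hypothetical, not from the text), an empirical probability distribution can be built by counting how often each outcome occurs and dividing by the total number of observations:

```python
from collections import Counter

# Hypothetical annual loss outcomes observed for a fleet of 10 vehicles;
# each entry is the dollar size of the loss for one vehicle-year (0 = no loss).
observed_losses = [0, 0, 0, 0, 0, 0, 500, 500, 2_000, 5_000]

counts = Counter(observed_losses)
total = len(observed_losses)

# Empirical probability of each outcome = its frequency / total observations.
# The outcomes are mutually exclusive, and the probabilities sum to 1.
distribution = {loss: n / total for loss, n in counts.items()}

for loss, prob in sorted(distribution.items()):
    print(f"Loss of ${loss:>5,}: probability {prob:.2f}")
```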

Concept Check 2.3

If the likelihood of flood damage to a particular home is estimated to be 1 in 100, it is said that the home is located in a 100-year flood plain. What is the probability of flood damage in a given year?


Measures of Central Tendency or Location

When risk managers speak of various measures of central tendency or location, they are concerned with measuring the center of a probability distribution. Several such measures exist, but the most widely used is the mean. Usually signified by the symbol x̄ or by the Greek letter μ, the mean can be defined as the sum of a set of n measurements x1, x2, x3, ..., xn divided by n:

                                                                                  \bar{x} = \frac{x_1 + x_2 + x_3 + \cdots + x_n}{n}

For example, the mean of the five values 1, 1, 2, 2, and 4 is (1 + 1 + 2 + 2 + 4) ÷ 5 = 10 ÷ 5 = 2. A related concept is the expected value. It is obtained by multiplying each item or event by the probability of its occurrence. For instance, assume the following hypothetical distribution of loss from fire to a group of buildings:

[Table: hypothetical distribution of fire loss amounts and their probabilities]

To determine the mean or expected value of losses, multiply each loss amount by its probability and then sum. The expected value is $3,000. In effect, the expected value figure is a weighted average and reflects the best estimate of long-term average loss for a given loss distribution.
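Because the original table is not reproduced here, the sketch below uses an assumed distribution; the loss amounts and probabilities are illustrative only, chosen so that the expected value matches the $3,000 cited above:

```python
# Hypothetical fire loss distribution; values are assumptions chosen so
# that the expected value works out to the $3,000 figure in the text.
losses        = [0,    10_000, 20_000]   # possible loss amounts ($)
probabilities = [0.80, 0.10,   0.10]     # mutually exclusive; sum to 1

# Expected value = sum of each loss amount weighted by its probability
expected_value = sum(x * p for x, p in zip(losses, probabilities))
print(f"Expected value: ${expected_value:,.0f}")   # -> $3,000
```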

Another measure of central tendency is the median, which is the midpoint in a range of measurements. It is the point such that half of the items are larger and half are smaller than it. For instance, in a series of five losses of $1,000, $3,000, $5,000, $6,000, and $30,000, the median loss would be $5,000. Half of the losses are greater than that value, and half are smaller. (The mean of the series is $9,000.) One of the advantages of the median is that it is not affected greatly by extreme values, as is the mean. In the preceding loss situation, $5,000 does a much better job of describing the average loss than $9,000, because the extreme loss of $30,000 distorts the mean. The loss distribution in this example is said to be positively skewed or skewed to the right. This pattern is typical of loss distributions in practice.

Finally, the mode is the value of the variable that occurs most often in a frequency distribution. Thus, if a firm experienced eight losses of $25, $30, $30, $40, $40, $40, $50, and $60, the mode would be $40. As a measure of central tendency for risk managers, the mode is not as widely used as the mean or median.
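The three measures can be checked with Python's standard statistics module, using the loss series from the text:

```python
import statistics

# The five losses from the median example
losses = [1_000, 3_000, 5_000, 6_000, 30_000]
print(statistics.mean(losses))     # 9000 -- pulled upward by the $30,000 loss
print(statistics.median(losses))   # 5000 -- unaffected by the extreme value

# The eight losses from the mode example
injuries = [25, 30, 30, 40, 40, 40, 50, 60]
print(statistics.mode(injuries))   # 40 -- the value occurring most often
```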

Concept Check 2.4

Which of the following is not a measure of central tendency or location?

A. mode
B. median
C. probability
D. mean

Concept Check 2.5

If a loss distribution is positively skewed, how will the mean compare to the median?

A. mean larger than the median
B. mean equal to the median
C. mean smaller than the median
D. Can't be determined


Measures of Variation or Dispersion

Because risk is synonymous with uncertainty, an extremely important statistical concept is that of variation from what is expected. The standard deviation, usually represented by the Greek letter σ, is a number that measures how close a group of individual measurements is to its expected value or mean. For example, assume a manufacturer has 100 employees who are injured during a year. The dollar loss from these injuries ranges from $500 to $25,000, with an expected value of $12,500. The range of the individual losses is rather great, from $500 to $25,000. To say that the average injury loss is $12,500 is not very descriptive of the magnitude of the average loss, especially if one is comparing it with another group of 100 losses that range in severity from $11,000 to $14,000 but also have an average loss of $12,500. It is helpful to state precisely just how the two groups differ. By comparing the standard deviation of two sets of injuries, the precise variation in injuries becomes clear.

To calculate the standard deviation of a group of measures, one must first determine the mean or expected value. Then the mean is subtracted from each individual value, and the resulting figure is squared. The squared differences are added together, with the sum divided by the total number of measurements. The result is the mean of the squared deviation, which is known as the variance. The square root of the variance is the standard deviation. An example illustrating these calculations for a set of five losses is provided in Table 2-1.

As an example of how to use the information provided by measures of dispersion, suppose there are two factories with the same average loss. However, the dollar loss of all injuries in one factory falls within one standard deviation of the mean loss, whereas only 10 percent of the injuries in the other factory do so. With such data, there can be a much better understanding of the risk associated with injury loss in these two factories. The dispersion of losses in the first factory is much less than in the second. Thus, the standard deviation is a gauge of the dispersion of values about the mean and, hence, of the risk or uncertainty.

TABLE 2-1        Calculating the Standard Deviation of Losses

Losses ($)    Mean Loss* ($)    Deviation from Mean ($)    Squared Deviations ($)
    10              30                  -20                        400
    20              30                  -10                        100
    30              30                    0                          0
    40              30                   10                        100
    50              30                   20                        400
                                                     Sum         1,000

        Variance = 1,000 ÷ 5 = 200; Standard deviation = \sqrt{200} = $14.14

        *Mean loss = ($10 + 20 + 30 + 40 + 50) ÷ 5 = $30

When the standard deviation is expressed as a percentage of the mean, the result is the coefficient of variation, which is one way to characterize the concept of mathematical risk to the insurer. It is the method used in Chapter 1 to measure objective risk. If losses from a group of exposure units have a low coefficient of variation, there is less risk (less variation) associated with this group of exposures than with another group with a higher coefficient of variation.
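A short sketch reproducing the Table 2-1 calculation, with the coefficient of variation added (population formulas, dividing by n as the text does):

```python
import math

losses = [10, 20, 30, 40, 50]
n = len(losses)

mean = sum(losses) / n                                # 30
variance = sum((x - mean) ** 2 for x in losses) / n   # 1,000 / 5 = 200
std_dev = math.sqrt(variance)                         # about 14.14

# Coefficient of variation: standard deviation as a fraction of the mean
cv = std_dev / mean                                   # about 0.47, i.e., 47%

print(f"mean={mean}, variance={variance}, std dev={std_dev:.2f}, CV={cv:.0%}")
```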

Concept Check 2.6

Which of the following risk measures represents relative risk of loss?

A. standard deviation
B. coefficient of variation
C. mean
D. median


Loss Distributions Used in Risk Management

Probability distributions can be very useful tools for evaluating the expected frequency and/or severity of losses due to identified risks. In risk management, two types of probability distribution are used: empirical and theoretical. To form an empirical probability distribution, the risk manager actually observes the events that occur, as explained in the previous section. To create a theoretical probability distribution, a mathematical formula is used. To effectively use such distributions, the risk manager must be reasonably confident that the distribution of the firm’s losses is similar to the theoretical distribution chosen.

Three theoretical probability distributions that are widely used in risk management are: the binomial, the normal, and the Poisson.

The Binomial Distribution

Suppose the probability that an event will occur at any point in time is p. Then the probability q that the event will not occur can be stated by the equation q = 1 – p. One can calculate how often an event will happen by means of the binomial formula, which indicates that the probability of r events in n possible times equals:

                                                                                  \frac{n!}{r!(n-r)!}\, p^r q^{n-r}

Note that the expression n! is read “n factorial” and refers to a successive multiplication of the numbers n, n – 1, n – 2, ..., 1.

Suppose that a risk manager needs to estimate the probability of the number of losses in a particular group. If the firm owns a fleet of 10,000 automobiles, the binomial formula may be used to calculate the chance of 10 losses, 100 losses, 200 losses, or any other number of losses, provided that p can be reasonably estimated. Similarly, if there are 100 exposure units, such as separate retail stores, and it is known from past experience that the separate probability of loss of any one store by fire each year is 0.01, reference to a binomial table tells us that the probability is:

        0.366        that 0 stores will burn
        0.370        that 1 store will burn
        0.185        that 2 stores will burn
        0.061        that 3 stores will burn
        0.018        that 4 or more stores will burn
        ———
        1.000        Total
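The listed probabilities can be reproduced from the binomial formula with n = 100 stores and p = 0.01; a minimal sketch:

```python
from math import comb

n, p = 100, 0.01  # 100 stores, each with a 0.01 annual probability of fire

def prob_exactly(r: int) -> float:
    """Binomial probability of exactly r fires among n stores."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

for r in range(4):
    print(f"P({r} stores burn) = {prob_exactly(r):.3f}")
print(f"P(4 or more burn)  = {1 - sum(prob_exactly(r) for r in range(4)):.3f}")
```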

The Normal Distribution

As the number of observations increases, a mathematical concept called the central limit theorem states that the expected results for a pool or portfolio of independent observations can be approximated by the normal distribution, which is a very useful type of mathematical distribution. Shown graphically in Figure 2-2, it is perfectly bell shaped. When its mean and standard deviation are known, the distribution is completely defined.

FIGURE 2-2

Normal Probability Distribution of 500 Losses

For instance, the loss distribution in Figure 2-2 is a normal distribution of 500 losses with a mean value of $500 and a standard deviation of $150. When risk managers have this information, they can assume that about 68 percent of all losses will be within 1 standard deviation of the mean. The figure shows that 340 losses (75 + 95 + 95 + 75) are between $350 and $650, which is the range of  ± 1 standard deviation. Likewise, about 95 percent, or 475, of all losses should occur within two standard deviations of the mean. These losses would be within the $200–$800 range. About 99 percent of all observations should be within 3 standard deviations of the mean. If risk managers know that their loss distributions are normal, they can assume that these relationships hold, and they can predict the probability of a given loss level occurring or the probability of losses being within a certain range of the mean.

It should be noted that the binomial distribution requires variables to be discrete (i.e., there is either a loss or no loss). With the normal distribution, variables may be continuous, having a value of any number.
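The ranges quoted for Figure 2-2 can be verified with the standard library's NormalDist; a minimal sketch:

```python
from statistics import NormalDist

losses = NormalDist(mu=500, sigma=150)   # mean $500, standard deviation $150

# Probability of a loss falling within 1, 2, and 3 standard deviations
for k in (1, 2, 3):
    lo, hi = 500 - k * 150, 500 + k * 150
    prob = losses.cdf(hi) - losses.cdf(lo)
    print(f"Within {k} std dev (${lo}-${hi}): {prob:.1%}")
# -> about 68.3%, 95.4%, and 99.7%, matching the rules of thumb in the text
```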

The Poisson Distribution

The Poisson distribution is another theoretical probability distribution that is useful in risk management applications. For example, auto accidents, fires, and other losses tend to occur in a way that can be approximated with the Poisson distribution. One determines the probability of an event under the Poisson distribution using the following formula:

                                                                                                 p = \frac{m^r e^{-m}}{r!}

where:
p = the probability that exactly r events occur
r = the number of events for which the probability estimate is needed
m = mean = expected loss frequency
e = a constant, the base of the natural logarithms, equal to 2.71828

The mean m of a Poisson distribution is also its variance. Consequently, its standard deviation σ is equal to \sqrt{m}.

To obtain a better understanding of how the Poisson distribution is used to calculate probabilities, consider the following example. Suppose the Ferguson Company owns 10 trucks. In a typical year, a total of one loss occurs, thus allowing p to be estimated to be 0.1 and the expected loss frequency to be m = 10 × 0.1 = 1. What is the probability of more than two accidents in a year? Or stated another way, what is the probability of three or more accidents? The answer is 8.03 percent, which is calculated in Table 2-2. Note that the probabilities in Table 2-2 are similar to those calculated previously for the binomial distribution, where the mean loss was also equal to 1. When the probabilities of loss are greater, the difference between the two distributions is greater. However, it should be noted that as the number of exposure units increases and the probability of loss decreases, the binomial distribution becomes more and more like the Poisson distribution.

TABLE 2-2

Probability of Losses Using the Poisson Distribution

Number of Accidents (r)        Probability
        0                       0.3679
        1                       0.3679
        2                       0.1839
        3                       0.0613
        4                       0.0153
        5                       0.0031

Probability of more than two accidents = 1 − (0.3679 + 0.3679 + 0.1839) = 0.0803
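A minimal sketch reproducing the Table 2-2 values from the Poisson formula with m = 1:

```python
from math import exp, factorial

m = 1  # expected number of accidents per year for the 10-truck fleet

def poisson(r: int) -> float:
    """Poisson probability of exactly r accidents in a year."""
    return m**r * exp(-m) / factorial(r)

for r in range(6):
    print(f"P({r} accidents) = {poisson(r):.4f}")

print(f"P(3 or more)    = {1 - sum(poisson(r) for r in range(3)):.4f}")  # 0.0803
```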

From a risk management viewpoint, the Poisson distribution is most desirable when more than 50 independent exposure units exist and the probability that any one item will suffer a loss is 0.1 or less. However, when there are fewer than 50 exposures but each one can suffer multiple losses during the year, it can still be used. Given these characteristics, the Poisson distribution can be a very useful probability distribution for risk managers.

Other theoretical distributions used in risk management include the Gamma, the Log Normal, the Negative Binomial, and the Pareto. Comparison to the firm’s actual data can confirm whether a particular theoretical distribution is appropriate. These other theoretical distributions are positively skewed, a trait often exhibited by loss data in risk management applications.

Concept Check 2.7

Based on the information in Table 2-2, what is the probability of no more than 3 losses occurring in one year?


Integrated Risk Measures

The assessment of risk in an integrated or enterprise-wide risk framework requires additional quantification techniques. One approach being used is value at risk (VAR). Value-at-risk analysis has been used by banks to quantify financial risk, but is increasingly being considered by other types of firms that wish to assess all types of risks in a coordinated framework. VAR analysis constructs probability distributions of the risks alone and in various combinations, to obtain estimates of the risk of loss at various probability levels. This type of analysis yields a numerical statement of the maximum expected loss in a specific time period and at a given probability level. The VAR approach is similar to the concept of maximum probable loss described previously, but it provides the firm with an assessment of the overall impact of risk on the firm. For example, consider the isolated risk that disability payments may have to be paid to injured workers. Suppose the firm estimates that at the 95 percent probability level, total disability payments in one year will be less than $100,000. Thus, the VAR equals $100,000. To obtain a broader estimate of the firm’s VAR, probability distributions for a variety of risks facing the firm would be combined.
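As a rough sketch of the disability example (the simulated loss data and lognormal parameters below are purely illustrative assumptions), the 95 percent VAR is simply the 95th percentile of the distribution of annual losses:

```python
import random

random.seed(42)

# Simulate 10,000 hypothetical "total annual disability payment" outcomes;
# the lognormal parameters are illustrative, not calibrated to real data.
annual_losses = sorted(random.lognormvariate(10.5, 0.6) for _ in range(10_000))

# 95% VAR: the amount that annual losses stay below with 0.95 probability
var_95 = annual_losses[int(0.95 * len(annual_losses)) - 1]
print(f"95% VAR: ${var_95:,.0f}")
```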

One significant advantage of using VAR in enterprise risk management is that it considers correlation between different categories of risk. The relationship among different risks may either increase or decrease the overall effect of the risks facing an organization. For instance, increases in unemployment can lead to increases in criminal activity and workers’ compensation claims, and to decreases in a firm’s sales. The combined impact of these three risks could be substantially different from what might be estimated by considering each risk alone. Ultimately, it is the net effect of risk that is critical to the ability of a firm to achieve its goals.

Another measure sometimes used in an enterprise-wide assessment of risk is risk-adjusted return on capital (RAROC). This approach attempts to allocate risk costs to the many different activities of the firm, such as products, projects, loans, and so on. In effect, RAROC assesses how much capital would be required by the organization’s various activities to keep the probability of bankruptcy below a specified probability level. As a result of RAROC, managers are forced to consider risk levels in evaluating the profitability of their decisions.

Concept Check 2.8

Useful risk assessment measures in an enterprise-wide risk management process include each of the following EXCEPT

A. VAR
B. RAROC
C. net effect of risk


ACCURACY OF PREDICTIONS

A question of interest to risk managers is how many individual exposure units are necessary before a given degree of accuracy can be achieved in obtaining an actual loss frequency that is close to the expected loss frequency. As discussed in this section, the number of observed losses for a particular firm must be fairly large to accurately predict future losses. If the number is not sufficiently large, then the firm may still perform risk evaluation by choosing an appropriate theoretical probability distribution similar to the firm’s own distribution of losses. 

Law of Large Numbers

Objective risk was defined in Chapter 1 as the ratio of the probable variation of actual from expected losses, divided by expected losses. As noted there, the degree of objective risk is meaningful only when the group is fairly large. In fact, the concept becomes increasingly meaningful (and useful) as the size of the group exposed to the risk expands. The law of large numbers, which can be derived and proven mathematically, states that as the number of exposure units (in other words, persons or objects exposed to risk) increases, it becomes more likely that actual loss experience will equal probable loss experience. Hence, the degree of objective risk diminishes as the number of exposure units increases.

An individual seldom has a sufficient number of items exposed to a particular risk to reduce the degree of risk significantly through the operation of the law of large numbers. Large businesses may be better equipped to do so. For example, suppose RBW Rental Car Company owns a fleet of 10,000 automobiles available for rental. While it is impossible to predict which particular cars will incur physical damage losses in any given year, RBW may be able to predict fairly accurately how many of the cars will be damaged. The accuracy of RBW’s prediction is enhanced because of the large number of exposure units (cars) involved.

To illustrate more precisely the effects of the law of large numbers, assume that QQQ Company and RRR Company own 100 and 900 automobiles, respectively. These cars are used by the sales personnel of each firm and are driven in the same general geographical territory. The chance of loss in a given year due to collision is 20 percent. Thus, the expected number of losses is 0.20 × 100 = 20 for QQQ and 0.20 × 900 = 180 for RRR. Suppose further that statisticians have computed that the likely range in the number of losses in one year is 8 for QQQ and 24 for RRR. As shown, RRR’s degree of risk is only one-third that for QQQ:

Objective risk QQQ = Range/Expected = 8/20 = 40%

Objective risk RRR = Range/Expected = 24/180 = 13.33%

In this example, the crucial input values are the likely ranges in actual results. In general, the range of possible results decreases on a relative basis as the number of exposure units increases.
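A simulation sketch of the QQQ/RRR comparison (hypothetical, using the 20 percent chance of loss from the text) shows the relative variation shrinking as the fleet grows:

```python
import random
import statistics

random.seed(1)
CHANCE_OF_LOSS = 0.20

def relative_variation(fleet_size: int, years: int = 5_000) -> float:
    """Simulate annual collision counts and return their coefficient
    of variation (std dev / mean), a proxy for objective risk."""
    counts = [sum(random.random() < CHANCE_OF_LOSS for _ in range(fleet_size))
              for _ in range(years)]
    return statistics.pstdev(counts) / statistics.mean(counts)

print(f"QQQ (100 cars): {relative_variation(100):.1%}")  # about 20%
print(f"RRR (900 cars): {relative_variation(900):.1%}")  # about 6.7%, one-third
```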

In the previous example it was assumed that the underlying chance of loss was the same for QQQ and RRR. Consider now the effect of changing the long-term chance of loss while maintaining the same number of exposure units. At first glance it appears that the higher the chance of loss, the higher the risk. However, the opposite is actually true. As the chance of loss increases, the variation of actual from expected losses tends to decrease if the exposures remain the same. In less technical language, as a loss becomes more and more certain to happen, there is less and less uncertainty that it will not happen. And if a point is finally reached where an event is sure to happen, then there is no risk at all.

To illustrate, assume that employers A and B, each with 10,000 employees, are concerned about occupational injuries to workers. Employer A is in a “safe” industry, with the chance of loss of a disabling injury in its plant being equal to 0.01. Employer B is in a more dangerous industry, with its chance of loss equal to 0.25. It has been determined that the probable variation in injuries in employer A’s plant will be no more than 20, whereas in employer B’s plant the probable variation will not exceed 87. Thus, the degrees of objective risk are computed to be

Objective risk A = 20/[(0.01)(10,000)] = 20/100 = 20%

Objective risk B = 87/[(0.25)(10,000)] = 87/2,500 = 3.5%

Although B’s chance of loss is much greater than A’s, its degree of risk is only 17.5 percent of A’s risk (3.5 ÷ 20 = 0.175). In general, the degree of objective risk will vary inversely with the chance of loss for any constant number of exposure units.

In summary, the two most important applications of the law of large numbers in relation to objective risk are as follows:

  1. As the number of exposure units increases, the degree of risk decreases.
  2. Given a constant number of exposure units, as the chance of loss increases, the degree of risk decreases.

Concept Check 2.9

What concept allows larger firms to do a better job of predicting future losses than smaller firms?


Number of Exposure Units Required

Given the law of large numbers, risk managers know that as the number of exposure units becomes infinitely large, the actual loss frequency will approach the expected true loss frequency. But it is never possible for a single entity to group together an infinitely large number of exposures. Thus, the question arises as to how much error is introduced when a group is not sufficiently large. More precisely, a risk manager might ask, “How many exposure units must be grouped together in order to be 95 percent sure that the estimate of the maximum probable number of losses differs from expected losses by no more than 5 percent?”

It is assumed that the expected losses for a very large population of exposures either are known, can be estimated from industry-wide data, or can be determined subjectively. Essentially, the risk manager wishes to know how stable the loss experience will be, that is, how much objective risk must be accepted for a given number of exposure units. Certain mathematical and statistical laws help provide an answer to this question. Although the assumptions required by these laws may not always hold in the real world, they enable the risk manager to make an approximation that will be of considerable help in making a sound decision. The required assumption is that the losses occur in the manner assumed by the binomial formula. In other words, each loss occurs independently of each other loss, and the probability of loss is constant from occurrence to occurrence.

A simple mathematical formula is available that enables insurers to estimate the number of exposures required for a given degree of accuracy. However, unless mathematical tools—such as the formula given below—are used with great caution and are interpreted by experienced persons, erroneous conclusions may be reached; the following formula is included only as an illustration of how such tools can be of help in guiding an insurer to reduce risk. The formula is based on the assumption that losses in an insured population are distributed normally and concerns only the occurrence of a loss and not the evaluation of the size of the loss, which is an entirely different problem and beyond the scope of this book. This formula is based on the knowledge that the normal distribution is an approximation of the binomial distribution and that known percentages of losses will fall within 1, 2, 3, or more standard deviations from the mean:

                                                                                 N = \frac{S^2 p(1-p)}{E^2}

where
p = probability of loss
N = the number of exposure units sufficient for a given degree of accuracy
E = the degree of accuracy required, expressed as a ratio of actual losses to the total number in the sample
S = the number of standard deviations of the distribution

The value of S indicates the level of confidence that can be stated for the results. Thus, if S is 1, it is known with 68 percent confidence that losses will be as predicted; if S is 2, there is 95 percent confidence; and if S is 3, there is 99 percent confidence.[2]

As an example, suppose that in the preceding case the probability of loss is 0.3 (not an unusual probability in certain areas for collision of automobiles) and that it is desired that there be 95 percent confidence that the actual loss ratio (number of losses divided by the total number of exposure units) will not differ from the expected 0.3 by more than 2 percentage points. In other words, the risk manager wants to know how many units there must be in order to be 95 percent confident that the number of losses out of each 100 units will fall in the range from 28 to 32. Substitution in the formula yields:

                                                                 N = \frac{S^2 p(1-p)}{E^2} = \frac{(2^2)(0.3)(0.7)}{(0.02)^2}

or 2,100 exposure units. The value of S is 2 in this case because of the requirement of a 95 percent confidence interval statement. That is, it is known that 95 percent of all losses will fall within a range of 2 standard deviations of the mean.

In the preceding illustration the probability of loss was very large. For many risks, it is somewhat unusual to experience such large probabilities. It is much more common for the probability of loss to be about 5 percent or less. If the probability of loss is only 5 percent, the risk manager will undoubtedly want a higher standard of accuracy than was true in the preceding case. The example in Table 2-3 illustrates that 7,600 exposure units are needed in this situation to have 95 percent confidence that the actual loss ratio will be within 10 percent of the expected.

TABLE 2-3

Given: p = 0.05; desired accuracy within 10 percent of expected losses (E = 0.10 × 0.05 = 0.005); 95 percent confidence (S = 2).

        N = \frac{(2^2)(0.05)(0.95)}{(0.005)^2} = \frac{0.19}{0.000025} = 7,600 exposure units
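A small sketch of the formula, confirming both worked examples (2,100 units for p = 0.3 and 7,600 units for p = 0.05):

```python
def required_units(p: float, e: float, s: float = 2) -> float:
    """Exposure units needed: p = probability of loss, e = allowed error in
    the loss ratio, s = standard deviations (1 -> 68%, 2 -> 95%, 3 -> 99%)."""
    return (s ** 2) * p * (1 - p) / e ** 2

print(round(required_units(p=0.3, e=0.02)))    # -> 2100
print(round(required_units(p=0.05, e=0.005)))  # -> 7600
```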

The example in Table 2-3 illustrates a fundamental truth about risk management: When the probability of loss is small, a larger number of exposure units is needed for an acceptable degree of risk than is commonly recognized. Mathematical formulas such as the one used in these examples can assist risk managers considerably in making estimates of the degree of risk assumed with given numbers in an exposure group.

Concept Check 2.10

As the probability of loss decreases, the number of exposure units required to achieve a certain level of confidence

A. increases
B. decreases
C. remains constant



SUMMARY

1. Loss exposure checklists, financial statement analysis, flowcharts, contract analysis, on-site inspections of property, and the statistical analysis of past losses can be helpful in identifying risk.

2. After risks are identified, they should be evaluated regarding their expected frequency of occurrence, the probable severity of associated losses, the maximum probable loss, and the maximum possible loss. Risk mapping is one way to catalogue the wide variety of risks identified.

3. A probability distribution is a mutually exclusive and collectively exhaustive list of all events that result from a chance process. Risk managers use both empirical and theoretical probability distributions of losses in evaluating identified risks.

4. The mean, median, and mode are ways of measuring the center of a probability distribution.

5. The variance, standard deviation, and coefficient of variation are important ways of measuring the variation of actual from expected experience.

6. Three theoretical distributions that are especially useful for risk managers are the normal, binomial, and Poisson distributions.

7. Value at risk (VAR) analysis involves the construction of probability distributions of risks alone and in various combinations to obtain estimates of the risk of loss at various probability levels.

8. The law of large numbers indicates that as the number of exposure units increases, the degree of risk decreases. And, given a constant number of exposure units, as the chance of loss increases, the degree of risk decreases.

9. When the probability of loss is very small, a larger number of exposure units is needed to achieve the same degree of risk than when the probability of loss is large.

Citations

[1] Image courtesy of Pixabay under CC0 1.0

[2] J. T. McClave, P. G. Benson, and T. Sincich, A First Course in Business Statistics, 7th Edition (Upper Saddle River, N.J.: Prentice Hall, 1998): 70. 

A potential loss that may be associated with a specific type of risk.
A risk identification tool used by businesses and individuals that lists many different potential losses. The user can determine which of the potential losses is relevant.
A method of risk identification in which each item on a firm’s balance sheet and income statement is analyzed regarding potential risks.
A risk identification method that helps pinpoint sources of risk in the production process.
Liability arising from contractual agreements in which it is stated that some losses, if they occur, are to be borne by specific parties.
A computer software program that assists in tracking and statistical analysis of past losses.
The frequency and severity of the risk before any actions are taken to manage the risk.
The amount of risk remaining after risk management activities have been implemented.
An estimate of the likely severity of loss that might result from a given occurrence.
An estimate of the worst loss that might result from a given occurrence.
Method of risk identification and assessment by arranging all risks in a matrix reflecting frequency, severity, and existing insurance coverage.
The long-term frequency of an event’s occurrence; all events have a probability between 0 (for an event that is certain not to occur) and 1 (for an event that is certain to occur).
A mutually exclusive and collectively exhaustive list of all events that can result from a chance process containing the probability associated with each event.
Measures of the midpoint of a probability distribution, such as the mean, the expected value, the median, and the mode.
The sum of a set of n measurements x1, x2, x3, ..., xn divided by n; usually signified by the symbol x̄.
A figure that is a weighted average and reflects the best estimate of long-term average loss for a given loss distribution. Also known as the mean.
The midpoint in a range of measurements such that half of the items are larger and half are smaller than it.
The value of the variable that occurs most often in a frequency distribution.
A number that measures how close a group of individual measurements is to its expected value; usually signified by the Greek letter sigma
The mean of the squared deviations from the mean; the square root of the variance is the standard deviation.
The standard deviation expressed as a percentage of the mean.
A tool for evaluating the expected frequency and/or severity of losses due to identified risks that involves observing actual events.
A tool for evaluating the expected frequency and/or severity of losses due to identified risks that involves using a mathematical formula.
An equation for estimating the likely number of losses, given a stated probability of loss and a number of loss exposures.
A very useful, perfectly bell-shaped probability distribution.
Describes variables that can take on a finite or countable value (such as 1, 2, 3, ... etc.).
Describes variables that can take on any value within a certain interval or range.
Estimate of the risk of loss at various probability levels.
Assesses how much capital would be required by the organization’s various activities (such as products, projects, loans, etc.) to keep the probability of bankruptcy below a specified probability level.
A mathematical principle showing that as the number of exposure units increases, the more certain it becomes that actual loss experience will equal probable loss experience.