This section discusses the purpose, types, and locations of natural gas storage sites; leaks from such sites; safety enforcement prior to 2017; and the PIPES Act. Natural gas storage sites—geologic formations where natural gas is stored deep underground and retrieved for later use—are key parts of our energy system. Natural gas provides about 30 percent of U.S. energy needs, is used to generate a third of the nation’s electricity, is widely used for heating homes and businesses, and is used in a variety of industrial processes, according to the Energy Information Administration (EIA). Natural gas storage sites provide a way to meet peak energy needs—such as during a cold spell in the winter or during periods of high electricity demand in the summer—more quickly than would be possible if relying solely on pipelines that transport natural gas from distant production fields. Natural gas storage sites are privately owned and operated by a variety of companies in the energy industry, including local utilities, independent companies that store gas for sale at peak times to other companies, and interstate pipeline companies. There are three major types of underground geologic formations where natural gas storage sites are found: (1) salt caverns, (2) aquifers, and (3) depleted oil and gas reservoirs. The wells that inject or withdraw natural gas from the underground formations can extend thousands of feet underground. The 415 natural gas storage sites in the United States contain about 17,000 wells, ranging from a few wells per site to over a hundred wells at some larger sites. Figure 1 illustrates the types of geologic formations where natural gas storage sites are constructed and operated. Natural gas storage sites are found in 31 states across the country, according to EIA data. Over 300 cities, towns, and other populated areas are located near a natural gas storage site, according to a DOE analysis. 
Operators often locate natural gas storage sites near major population centers or large gas pipelines to improve their ability to deliver natural gas when needed. Figure 2 shows the approximate location of natural gas storage sites located within counties populated by 100,000 or more people. Leaks from natural gas storage sites can be caused by a variety of factors—such as underground fissures or inadequately designed or damaged wells—and have the potential to affect human health, cause economic disruption, and harm the environment. For example, natural gas poses the risk of explosion and asphyxiation within enclosed spaces. In addition, other components of natural gas can cause short-term neurological, gastrointestinal, and respiratory symptoms, according to the Los Angeles County Department of Public Health. Moreover, if a large gas storage facility unexpectedly goes offline due to a major leak, it can disrupt the natural gas supply system, which in turn may affect the flow of gas to heat homes and businesses or may cause electrical blackouts due to the loss of fuel for gas-fired electrical generators. According to a DOE report, the natural gas stored in geologic formations is under high pressure and may find its way to the surface if underground fissures or unplugged oil and gas wells allow the geologic formation to be breached. Leaks can also occur if the wells used to inject and withdraw natural gas from geologic formations lose integrity due to cracking of cement used to seal the well or other factors. Older wells used for natural gas storage were often drilled for other reasons, such as oil and gas production, and are more likely to have age-related degradation, according to DOE. About half of the about 17,000 wells that inject and withdraw natural gas from storage sites are more than 50 years old, and many wells are more than 100 years old, according to DOE. 
In addition, DOE reported that other factors may contribute to leaks, such as earthquake activity, nearby drilling activity, or other mechanical stresses and undetected corrosion that may not be known to the natural gas storage site operators. Further, DOE has reported that operators can maintain safety by regularly maintaining site equipment, monitoring and repairing leaks, keeping records about the site, and planning for possible emergencies, among other things. Leaks from natural gas storage sites can result in significant and harmful effects on public health and safety, the environment, and the energy system. DOE, PHMSA, and others have identified three major leaks from natural gas storage sites since 2000 that illustrate these potential negative effects: The Aliso Canyon leak, which was detected in October 2015 and continued for nearly 4 months, focused national attention on natural gas storage safety. As of August 2017, the cause of the leak had not been conclusively determined. However, the leak occurred in a well that, at the time, was about 60 years old, according to DOE. The operator of the Aliso Canyon site unsuccessfully attempted to stop the leak several times over the 4-month event and eventually was able to do so in February 2016 by permanently sealing the well. According to the private operator, it temporarily relocated about 8,000 neighboring families until the leak was abated. Also, the leak disrupted the Aliso Canyon site’s ability to supply natural gas to electricity generating plants. Because the Aliso Canyon site supplies gas for nearly 10 gigawatts of electricity in the Los Angeles basin, the leak led to concerns that there may not be enough gas to serve the electricity needs of the surrounding region during peak times. 
In July 2017, California state regulators announced that the operator had conducted a comprehensive safety review and that the regulators would allow Aliso Canyon to reopen at a greatly reduced capacity in order to prevent energy shortages. In August 2004, the Moss Bluff natural gas storage site in Liberty County, Texas, experienced a major leak due to a damaged well. The leaking gas caught fire and burned for over 6 days, according to DOE and PHMSA documents. As a result, the gas was released into the atmosphere as carbon dioxide, which, according to an EPA analysis, is a less potent greenhouse gas than the uncombusted natural gas released during the Aliso Canyon leak. In January 2001, natural gas leaked from the Yaggy storage site’s salt caverns through underground fissures into the nearby city of Hutchinson, Kansas, eventually causing an explosion in the city’s downtown business district, DOE reported. Two people were killed, and several businesses were damaged or destroyed by the explosion. Before 2017, many natural gas storage sites were subject to varied, state-by-state safety enforcement. States were responsible for regulating and enforcing safety at sites that were located solely within their boundaries and only linked to pipelines within the state. Agencies representing 26 state governments licensed 211 such sites, which amounted to about half of the 415 active sites in the United States. Prior to 2017, these state governments applied various safety standards that addressed underground conditions, such as the integrity of the geologic formations that store natural gas, or the construction and maintenance of wells that inject and withdraw gas. For example, according to a DOE report, some states’ standards specified how site operators should safely construct the wells. Other states’ standards specified how wells were to be maintained during their useful life, or how they were to be safely plugged and abandoned after their useful life ended. 
Prior to 2017, the remaining 204 interstate natural gas storage sites were subject solely to federal oversight. However, the federal government had not issued safety standards for them. The Federal Energy Regulatory Commission (FERC) licenses storage sites that serve the interstate natural gas market—a market regulated by FERC. However, according to FERC, its licensing process focuses on whether a proposed site serves an economic need, and it does not assess the safety conditions of a site when deciding whether to grant a license. In this role, FERC has licensed 204 sites in 24 states. As part of its mission to ensure the safety of the interstate natural gas pipeline system—of which natural gas storage sites are a part—PHMSA had the regulatory authority to issue and enforce safety standards for interstate natural gas storage sites. However, PHMSA’s interstate pipeline safety regulations did not extend to underground natural gas storage facilities, even when connected to interstate pipelines. Moreover, because interstate sites were under federal jurisdiction, state safety standards could not be applied to such sites. Other federal agencies had responsibilities that addressed limited aspects of safety at natural gas storage sites. DOE provided technical assistance to California during the Aliso Canyon incident, and has researched the effects of natural gas storage leaks on the reliability of the electricity grid. The Bureau of Land Management (BLM), within the Department of the Interior, manages public lands that overlap, either partially or fully, with 33 natural gas storage sites. EPA provides funding and oversight to help states and local pollution control agencies meet their responsibility to monitor air quality within their jurisdictions, according to EPA officials. EPA can also provide its expertise and support to states and local communities in the event of natural gas storage leaks, as it did during the leak at Aliso Canyon. 
However, EPA does not regulate underground conditions at gas storage sites. In June 2016, Congress passed and the President signed the PIPES Act, which, among other things, directed DOT to establish minimum safety standards for all natural gas storage sites by June 2018 after considering recommendations from a federal task force and industry standards. PHMSA sets and enforces these standards. The PIPES Act also directed DOE to establish and lead the task force, which was charged with analyzing the Aliso Canyon incident and making recommendations to reduce the occurrence of similar incidents in the future. The task force published its report in October 2016. The report included findings in three areas—well integrity, environmental and health protection, and energy reliability. The report also made 44 recommendations to enhance natural gas storage safety, including 3 key recommendations: (1) operators of natural gas storage sites should make advance preparations with appropriate federal, state, and local governments to mitigate potential future leaks; (2) electrical grid operators should prepare for the risks that potential gas storage disruptions create for the electric system; and (3) operators of natural gas storage sites should begin a rigorous program to evaluate the status of their wells, establish risk management planning, and, in most cases, phase out old wells with single-point-of-failure designs. The PIPES Act directed DOT to consider industry consensus standards to the extent practicable in establishing its minimum safety standards. Consensus standards for the oil and gas industry—including those for natural gas storage—are issued by various entities, including the American Petroleum Institute (API). 
API consensus standards describe how to safely perform technical procedures, such as drilling wells for oil and gas production, refining produced natural gas into usable gas for heating and electricity generation, and conducting “workover” operations to refurbish existing wells. API develops its consensus standards with input from industry, manufacturers, engineering firms, the public, academia, and government, and API’s recommended practices are frequently adopted by a majority of the industry, according to API and PHMSA. Following several years of study and discussion by industry experts and government officials, including participation by PHMSA, API issued two documents outlining recommended practices for the development and operation of natural gas storage sites. These recommended practices describe the procedures for designing, locating, constructing, and operating natural gas storage sites, and include such activities as inspecting and testing the wells used to inject and withdraw gas from natural gas storage sites and monitoring the integrity of the underground formations where natural gas is stored. The API documents also recommend that operators prepare for emergencies and train the personnel who operate the sites. Under the PIPES Act, state governments also have a continuing role in enforcing natural gas storage safety for the sites in their states. The act allows states to certify with PHMSA that they have adopted state standards that meet or exceed the federal standards and can enforce these standards. Once a state certifies that it has met these conditions, the state is responsible for enforcing safety standards on state-regulated intrastate natural gas underground storage sites through inspections conducted by state employees, according to PHMSA officials. In addition, PHMSA officials told us that they would periodically assess whether states are meeting these conditions. 
PHMSA officials told us that PHMSA will have direct responsibility for inspecting federally licensed interstate facilities for the next few years because federal safety standards are still being established, but officials noted that state inspectors could eventually seek permission from PHMSA to assume the role of inspecting interstate natural gas storage sites on behalf of PHMSA in the future. PHMSA officials also noted that PHMSA does not force states to participate in its pipeline safety program, and so in cases where a state chooses not to certify its safety enforcement program, PHMSA has stated that it will assign its own inspectors and staff to enforce federal natural gas storage safety standards in that state. The PIPES Act also requires PHMSA to set and charge user fees to operators that it can use for activities related to underground natural gas storage facility safety, subject to the expenditure of these fees being provided in advance in an appropriations act. Citing an urgent need to improve safety at natural gas storage sites, in December 2016 PHMSA issued an interim final rule that includes minimum safety standards based largely on API recommended practices. The rule took effect in January 2017 and provided that existing facilities (and those constructed by July 18, 2017) must meet the standards by January 18, 2018. PHMSA is now considering public comments on its interim standards, and it plans to finalize them by issuing a final rule by January 2018. PHMSA also has stated that it will delay enforcement of certain standards in the interim final rule until 1 year after issuance of the final rule. To meet the requirement under the PIPES Act, PHMSA issued minimum safety standards for natural gas storage through an interim final rule in December 2016, which took effect in January 2017. 
PHMSA issued the interim final rule—which allowed the safety standards to take effect more quickly than under the conventional regulatory process—and stated that any delay in adopting the standards would jeopardize the public interest through risks to public safety and the environment. As a result, all 415 natural gas storage sites are for the first time subject to federal regulation, including minimum safety standards as set forth in the interim final rule, and subject to revision in a final rule. To develop the minimum safety standards, PHMSA considered industry consensus standards, as required by the PIPES Act. PHMSA had already advised operators to follow industry-recommended practices published by API, which develops consensus standards for the oil and gas industry. Specifically, in February 2016, before the passage of the PIPES Act, PHMSA issued a bulletin encouraging operators to follow the API recommended practices to update their safety programs. The API recommended practices contain many provisions that are mandatory, and other provisions that are nonmandatory. The interim final rule provides that the nonmandatory provisions of the recommended practices that are incorporated by reference in the rule are adopted as mandatory. PHMSA’s interim final rule requires operators of existing natural gas sites, and those constructed by July 18, 2017, to meet the requirements of certain sections of the API recommended practices identified in the rule by January 18, 2018. The API recommended practices address, among other things, general operations, monitoring the sites for potential leaks, and emergency response and preparedness. For new storage sites starting construction after July 18, 2017, the rule requires operators to meet all sections of the applicable API recommended practices. According to PHMSA officials, PHMSA considered the recommendations of the task force in developing its minimum safety standards, as required by the PIPES Act, and continues to do so. 
PHMSA’s minimum safety standards addressed certain recommendations made by the task force, according to an analysis performed by PHMSA. However, PHMSA did not require operators to implement one key recommendation of the task force report with its minimum standards, according to PHMSA officials. In particular, the October 2016 task force report recommended that operators phase out most storage wells with single-point-of-failure designs—where the failure of a single component, such as a well casing, could lead to a large release of gas—by installing multiple points of control at each well. According to an API official, its recommended practices do not direct operators to phase out such wells because this practice may not significantly improve safety in all cases; for example, this practice may not have prevented the leak at Aliso Canyon. The API official and PHMSA officials noted that API recommended practices direct operators to assess the risks at their sites and to take steps to address these risks. According to PHMSA officials, assessing the risks of a site could include identifying wells with a single point of failure and developing steps to mitigate this risk. Mitigating the risk could include installing multiple points of control for certain wells, among other possible mitigation steps. Neither PHMSA nor API officials could tell us how many of the approximately 17,000 wells at the nation’s 415 natural gas storage sites have single-point-of-failure designs, because this information has not been centrally gathered to date. However, PHMSA plans to gather information about how many storage wells have single-point-of-failure designs by asking operators to provide this information as part of a required annual report. To fund its enforcement of its minimum safety standards, PHMSA also issued a notice to set the user fees that PHMSA charges operators, as required by the PIPES Act. 
In November 2016, PHMSA published a notice of agency action and request for comment, describing its user fee structure. PHMSA collected public comments, evaluated them, and finalized its user fee structure in April 2017. As set forth in this notice, PHMSA will charge each operator based on the size of the operator’s storage sites as measured by working gas capacity range. The notice stated that PHMSA plans to collect a total of up to $8 million annually in fees from all operators combined; however, PHMSA may seek authority to increase or decrease the amount it charges operators if it finds that the cost of inspection and enforcement is more or less than it initially estimated, according to PHMSA officials. Following enactment of an appropriations act provision, PHMSA is authorized to use the fees it collects to fund its enforcement activities and plans to use a portion of the fees to reimburse states for enforcing its minimum safety standards, according to PHMSA officials. Table 1 provides a timeline of key events in the development of PHMSA’s minimum safety standards. Since issuing its interim final rule, PHMSA has been collecting public comments and plans to adjust some aspects of the rule in response to comments from the public, industry representatives, and others. PHMSA plans to finalize its minimum safety standards by replacing its interim final rule with a final rule in January 2018, and has delayed some dates for when it expects operators to comply with some aspects of its standards. PHMSA’s interim final rule states that the nonmandatory provisions of the standards it incorporated by reference are adopted as mandatory provisions. API and two other organizations representing natural gas utilities and transmission companies submitted comments asking PHMSA to reconsider how it used the API recommended practices in its minimum safety standards. 
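The capacity-based fee structure described above can be sketched as a simple tiered lookup. Note that this is a hypothetical illustration: the notice specifies only that fees scale with each operator's working gas capacity range and total up to $8 million annually, so the tier boundaries, dollar amounts, and the `annual_fee` function below are invented for illustration, not PHMSA's actual schedule.

```python
# Hypothetical capacity-tiered user fee schedule. The report states only
# that fees are based on working gas capacity range and total up to
# $8 million per year; every tier boundary and dollar figure below is an
# invented placeholder, not PHMSA's published schedule.
FEE_TIERS = [
    (10.0, 25_000),            # under 10 Bcf working gas capacity (hypothetical)
    (50.0, 75_000),            # 10 to 50 Bcf (hypothetical)
    (float("inf"), 150_000),   # over 50 Bcf (hypothetical)
]

def annual_fee(working_gas_capacity_bcf: float) -> int:
    """Return the annual user fee for an operator with the given capacity."""
    for upper_bound, fee in FEE_TIERS:
        if working_gas_capacity_bcf < upper_bound:
            return fee
    raise ValueError("capacity must be a finite, non-negative number")

print(annual_fee(5))    # 25000
print(annual_fee(30))   # 75000
print(annual_fee(120))  # 150000
```

A tiered schedule of this kind lets the agency approximate its total collection target while keeping the charge predictable for each operator.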
While API and the other industry representatives agreed that it was appropriate for PHMSA to use API recommended practices for its minimum safety standards, they stated that making all portions mandatory would make the standards burdensome. In June 2017, PHMSA published a notice in the Federal Register stating that it would consider these comments as it finalized its minimum safety standards, which it stated it expects to issue by January 2018. The notice stated further that PHMSA will not issue any enforcement citations to operators for failure to meet any standards that were nonmandatory but that were converted to mandatory by provisions of the interim final rule until 1 year after it issues the final rule. PHMSA also provided additional guidance and clarifications to operators about scheduling and its plans for enforcement. During the development of its interim final rule, PHMSA noted that some of the provisions in the minimum safety standards may take operators several years to fully implement. According to PHMSA officials, these provisions recommend that operators carefully inspect their natural gas storage sites, identify any conditions that do not meet industry-recommended practices, and then improve conditions at the sites by prioritizing the greatest risks and implementing preventive measures to mitigate and remediate these risks over a number of years. As a result, PHMSA published guidance on its website stating that it expects operators to make and implement plans to inspect and remediate risks found at their sites within 3 to 8 years following the effective date of the interim final rule. To enforce PHMSA’s safety standards, agency officials have taken a variety of steps to establish a safety enforcement program for natural gas storage sites, but they have not yet followed certain leading practices of strategic planning in starting the program. 
Specifically, PHMSA officials have started developing a training program for natural gas storage inspectors. They also have established a strategic goal and begun developing a training performance goal for their natural gas safety enforcement program. However, they have not yet followed certain leading practices for strategic planning—the systematic process for defining desired outcomes and translating this vision into goals and steps to achieve them. For example, PHMSA’s training performance goal does not define the level of performance officials hope to achieve or address all core program activities, such as conducting effective inspections. In addition, PHMSA has not used baseline data or budgetary information to inform the development of performance goals. PHMSA officials explained that they are still developing performance goals for their new program and collecting relevant data. To enforce the agency’s safety standards, PHMSA officials have taken a variety of steps to establish a safety enforcement program for natural gas storage sites by January of 2018. For example, PHMSA officials have started developing a training program for natural gas storage inspectors. They have identified learning objectives for the program and have begun developing learning materials. According to PHMSA officials, developing a training program for inspectors is central to safety enforcement efforts, in part because PHMSA has a limited number of staff members with expertise in natural gas storage. For example, PHMSA had 10 employees with natural gas storage experience as of August 2017, according to PHMSA officials. In addition, PHMSA officials have completed eight safety assessments of selected natural gas storage operators to document the initial condition of gas storage sites and safety practices. 
According to PHMSA officials, their methodology for conducting these assessments involved visiting a cross section of operators, including operators of interstate and intrastate sites and multiple types of facilities. PHMSA officials also have developed workload and budget estimates for their new program, according to PHMSA documentation. In recent years, the Office of Pipeline Safety, which will be responsible for natural gas storage inspections in addition to pipeline inspections and other activities, has initiated about 1,100 inspections annually, according to PHMSA data. When natural gas storage site inspections begin, PHMSA officials estimate that the Office of Pipeline Safety’s inspection workload could increase 14 percent due to their new responsibilities. They reached this estimate by dividing the 203 new natural gas storage units they anticipate needing to inspect by the total number of inspection units they currently inspect. To meet the demands of this increased workload, officials estimate that PHMSA will need $2 million annually to fund 6 new inspector positions, training, travel, and other expenses associated with managing the natural gas storage safety enforcement program. With this number of inspectors, PHMSA officials believe that they can inspect all 203 natural gas storage units within about 4 years. Because PHMSA officials expect that many states that have previously conducted similar inspections will help PHMSA conduct inspections, officials also estimate that PHMSA will need to provide $6 million annually to states. However, PHMSA officials noted that their estimates may change as they gain additional information about the program. Specifically, after PHMSA begins initial inspections in early 2018, officials will have more information about the time it takes to inspect natural gas storage sites. 
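The workload figures above can be checked with simple arithmetic. All of the inputs below come from the report; the assumption that inspections are spread evenly across inspectors and years is ours, made only to illustrate the scale of the estimate.

```python
# Back-of-the-envelope check of PHMSA's workload estimate. Inputs are
# from the report; even distribution of inspections is our assumption.
new_storage_units = 203   # natural gas storage inspection units to cover
new_inspectors = 6        # new inspector positions PHMSA estimates it needs
target_years = 4          # time PHMSA estimates it needs to inspect all units

units_per_inspector_per_year = new_storage_units / (new_inspectors * target_years)
print(round(units_per_inspector_per_year, 1))  # about 8.5 units per inspector per year

# The stated 14 percent workload increase implies a current base of roughly
# 203 / 0.14, or about 1,450 inspection units across the Office of
# Pipeline Safety.
implied_current_units = new_storage_units / 0.14
print(round(implied_current_units))  # about 1450
```

The implied base of roughly 1,450 inspection units is a different quantity from the approximately 1,100 inspections the office initiates annually, since a single inspection unit may not be inspected every year.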
By the end of fiscal year 2018, they will have even more information with which to develop more precise workload and budget estimates for the program, according to these officials. To ensure that the states assisting PHMSA are fully qualified to enforce the federal government’s minimum safety standards, PHMSA officials have begun developing a state certification program. This has involved drafting certification documents and contacting potential state partners. As of June 2017, PHMSA officials expected all states with intrastate natural gas storage sites to pursue certification. However, officials explained that they may not know until the end of fiscal year 2017 exactly how many states will pursue certification. If some states choose not to pursue certification or are not approved by PHMSA, PHMSA will be responsible for inspecting natural gas storage sites in those states, which could increase its inspection workload beyond the level it has estimated. For states that choose certification and are approved, PHMSA plans to use grants to fund up to 80 percent of state inspection costs. However, PHMSA officials told us that PHMSA may not be able to fund states to this level, depending on the approved costs requested by all states and levels of funding PHMSA receives through the appropriations process. In either circumstance, PHMSA’s grant program for certified state partners leverages state dollars, since it requires states to fund the portions of their programs not covered by grant funding. PHMSA also has established a strategic goal for its natural gas safety enforcement program, but it has not yet followed other leading practices for strategic planning. Specifically, PHMSA officials told us that their new enforcement program will be guided by one of PHMSA’s existing strategic goals—to promote continuous improvement in safety performance. 
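The 80 percent grant cost share described above can be sketched as follows. The 80 percent cap comes from the report; the cap on available grant money models the report's caveat that PHMSA may fund less than that level depending on appropriations, and the function name and simple min-based logic are our simplification.

```python
# Minimal sketch of the grant cost share: PHMSA plans to fund up to
# 80 percent of a certified state's approved inspection costs, with the
# state covering the remainder. The available_grant cap is our way of
# modeling the appropriations-dependent funding caveat in the report.
FEDERAL_SHARE_CAP = 0.80

def split_costs(approved_state_cost: float, available_grant: float):
    """Return (federal grant, state share) for one state's program."""
    federal = min(approved_state_cost * FEDERAL_SHARE_CAP, available_grant)
    state = approved_state_cost - federal
    return federal, state

# A state with $1,000,000 in approved costs receives at most $800,000;
# if appropriations cover only $500,000, the state picks up the rest.
print(split_costs(1_000_000, 900_000))  # (800000.0, 200000.0)
print(split_costs(1_000_000, 500_000))  # (500000.0, 500000.0)
```

Either way, the arrangement leverages state dollars, since the state must fund whatever the grant does not cover.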
PHMSA officials also told us that they are developing a performance goal for their training program and that other performance goals are still being identified and developed. The Government Performance and Results Act of 1993 (GPRA), as amended—which seeks to improve the effectiveness of federal programs by establishing a system for agencies to set goals for program performance and measure results—defines a performance goal as the target level of performance expressed as a tangible, measurable objective against which actual achievement is to be compared. For example, in the area of weather forecasting, we have previously reported that such a goal could be to increase the lead time for predicting tornadoes from 7 to 9 minutes. PHMSA has not yet followed certain leading practices for strategic planning, as it has not: (1) defined the level of performance or fully addressed core program activities with its existing performance goal; or (2) used baseline data and other data or budget information to inform and refine performance goals. Our prior work has identified several leading practices for strategic planning that PHMSA has not yet followed, such as setting goals that define a certain level of performance and address all core program activities. Some of this prior work has examined requirements under GPRA and the GPRA Modernization Act of 2010. GPRA, which was significantly enhanced by the GPRA Modernization Act of 2010, requires agencies to develop annual performance plans that, among other things, establish performance goals to define the level of performance to be achieved. We have previously reported that requirements under these acts can serve as leading practices for planning at lower levels of the agency. As one of several operating administrations within DOT, PHMSA would be considered a lower level of the agency. In addition, we have found that a key attribute of successful performance measures is that they reflect the full range of core program activities. 
Moreover, we have found that a key practice for helping federal agencies enhance and sustain collaborative efforts with other agencies is to define and articulate a common outcome or purpose they are seeking to achieve. While PHMSA has taken some steps to plan strategically for its new program, it has not followed certain leading practices of strategic planning. For example, PHMSA has developed a performance goal for its training program, and agency officials told us that they plan to review the number of students who pass their gas storage training course as a measure of the agency’s training performance goal. However, with this measure PHMSA has not defined the level of performance to be achieved. An example of a measure of the agency’s training performance goal that defines the level of performance could be one that specifies that a certain percentage of students will pass the course on their first attempt. In addition, PHMSA has not yet developed performance goals for other core program activities, such as conducting effective inspections. According to PHMSA subject-matter experts, one of the critical tasks associated with inspecting a gas storage site will be determining whether the operator has met all well monitoring requirements specified in API’s Recommended Practice 1171, which addresses the functional integrity of gas storage in depleted hydrocarbon reservoirs and aquifers. An example of a performance goal that could indicate whether PHMSA’s inspections are effective could be to annually reduce, by a certain percentage, the number of operators that do not meet the well monitoring requirements of Recommended Practice 1171. Another critical task identified by PHMSA’s subject-matter experts will be to determine whether the operator has followed its own risk management plan for gas storage sites—another area where PHMSA has not developed a performance goal. 
An example of a performance goal in this area could be to annually reduce, by a certain percentage, the number of gas storage operators that have not followed their own risk management plans. PHMSA officials acknowledged that their performance goals are not yet complete and said that they would strive to refine them as they continue developing the program, but they have not yet done so. As they refine these goals, ensuring that the goals define the level of performance to be achieved and address core program activities could help officials effectively track progress toward their strategic goal and adjust activities and resources, if needed, to better meet that goal. In addition, because PHMSA plans to leverage state resources to oversee gas storage sites, the success of its gas storage program will depend, in part, on collaboration with state partners. Establishing performance goals for the program could help PHMSA coordinate efforts and resources with the states that are expected to assist PHMSA with inspections. Another leading practice of strategic planning involves using baseline and trend data to inform performance goals, according to our prior work. Baseline data—data collected about operations before oversight begins—can serve as a basis for comparison with subsequently collected trend data. We have previously reported that baseline and trend data can provide a context for drawing conclusions about whether performance goals are reasonable and appropriate. For example, we found in 1999 that the Department of Education was able to use such information to gauge the appropriateness of its goals for reducing the default rate on student loans provided through the Federal Family Education Loan program. The program’s annual plan provided baseline and trend data for the default rate, which indicated that the rate declined from 22.4 percent to 10.4 percent from fiscal years 1990 to 1995. 
According to Education’s analysis of the data, future declines were likely to be steady but smaller because of the large number of high-default schools that had already been eliminated from the program. For fiscal year 1999, Education set a goal of reducing the default rate to 10.1 percent of borrowers. For its natural gas storage program, PHMSA will have access to baseline data—and eventually trend data—that could inform the development and subsequent refinement of performance goals. PHMSA officials told us that they have not yet used such data to inform the development of their performance goal because they are still in the process of collecting relevant data. For example, officials told us that they will have access to data about operators’ facilities, functional integrity work, and operations and maintenance procedures starting in early 2018. These data will likely include the number of wells that have leaked and been repaired during the last calendar year. As specified in PHMSA’s minimum safety standards, PHMSA also plans to collect safety and incident reports to track gas releases, deaths, and injuries resulting in hospitalizations. In addition, in August 2017, PHMSA officials completed eight industry safety assessments, which involved visiting natural gas storage sites and studying sites’ safety procedures. As previously mentioned, these assessments aimed, in part, to document the initial condition of gas storage sites and safety practices. Agency officials told us that they had planned to use the data they collect from these assessments to inform the agency’s state certification and inspection programs. They did not specify whether or how they intend to use these data to inform their performance goals. As PHMSA continues developing performance goals for its natural gas storage program, using available data to inform and refine these goals could help the agency ensure that its goals are reasonable and appropriate. 
We also have reported that comparing information about budgetary resources with information about performance goals can help decision makers determine whether their performance goals are achievable. Specifically, we have reported that decision makers can better compare planned levels of accomplishment with the resources requested if they have information about how funding levels are expected to achieve a discrete set of performance goals. For example, in a best practices report on strategic planning, we reported that the Internal Revenue Service (IRS) included in its performance plan for 1999 the budget amounts that corresponded with past performance levels. Table 2 illustrates how IRS used this information to inform proposed performance levels for the upcoming year. Moreover, GPRA requires agencies to prepare an annual performance plan covering each program activity set forth in the budget and, among other things, describe the resources required to meet performance goals. As previously mentioned, we have found that GPRA requirements can serve as leading practices for planning at lower levels of the agency. Assessing whether the new program’s performance goals are achievable given budgetary resources is important at a time when PHMSA officials are managing other new resources and responsibilities. For example, in addition to requiring DOT to establish minimum safety standards for natural gas storage sites, the PIPES Act of 2016 also requires DOT to update minimum safety standards for small-scale liquefied natural gas pipeline facilities. To carry out its responsibilities, PHMSA has received additional resources in recent years. As shown in figure 3, the total budgetary resources available to PHMSA’s Pipeline Safety Program increased from about $95 million in fiscal year 2007 to about $175 million in fiscal year 2016. 
In addition, the Consolidated Appropriations Act for fiscal year 2017 included a provision allowing for the obligation of up to $8 million from fees collected in fiscal year 2017 from operators for PHMSA’s natural gas storage program. These fees will be deposited in an Underground Natural Gas Storage Facility Safety account within PHMSA’s Pipeline Safety Fund and will be added to the Pipeline Safety Program’s total budgetary resources available for fiscal year 2017. PHMSA is not yet in a position to use budget information to inform or refine performance goals for its natural gas storage program because officials are still developing these goals and the agency lacks key data, such as data on the time it takes—and therefore the budgetary resources required—to inspect natural gas storage sites. As previously mentioned, PHMSA will begin inspections in early 2018, and officials will have a better understanding of how long it takes to inspect natural gas storage sites by the end of fiscal year 2018. As PHMSA officials continue developing performance goals and finish collecting relevant data, using information about budgetary resources to inform and refine these goals may help PHMSA ensure that its goals are achievable. Natural gas storage sites are key elements of our nation’s energy system, helping ensure that natural gas is available when demand peaks. As evidenced by the large-scale leak of natural gas outside Los Angeles that started in 2015 and extended into 2016, leaks from these sites can cause economic disruptions and environmental damage. These sites recently became subject to national safety standards, which remain open to further revision. PHMSA has taken a variety of steps to meet its new responsibilities for overseeing natural gas storage sites, such as developing a training program for inspectors and a performance goal for training. 
However, in establishing its new safety enforcement program, PHMSA has not yet followed certain leading practices of strategic planning. For example, PHMSA’s only current performance goal does not define the level of performance officials are working to achieve, and PHMSA does not currently have goals that address other core program activities, such as conducting effective inspections. PHMSA also has not yet used the baseline data it is collecting to develop its performance goals. PHMSA officials explained that they are still developing performance goals for their new program and collecting data. As the agency continues to develop these goals, ensuring that performance goals define the level of performance and address all core program activities could help the agency better track progress toward its strategic goal and adjust activities and resources, if needed, to better meet the goal. Using baseline data to develop these goals could help PHMSA ensure that its goals are reasonable and appropriate. Finally, once PHMSA finalizes performance goals for the program, using relevant data and budgetary information, as they become available over time, to inform and refine these goals may help PHMSA ensure that its goals are achievable. We are making the following two recommendations to PHMSA. The Administrator of PHMSA should ensure that PHMSA defines levels of performance, addresses core program activities, and uses baseline data as it continues developing performance goals for its natural gas storage program. (Recommendation 1) The Administrator of PHMSA should ensure that PHMSA uses other data and information about budgetary resources as they become available to inform and refine its performance goals. (Recommendation 2) We provided a draft of this report to DOT for review and comment. 
In written comments, DOT concurred with the report’s recommendations and provided additional information on steps it is taking or plans to take as part of its oversight of natural gas storage sites. In addition, DOT stated that it would provide a detailed response to each recommendation within 60 days of our final report’s issuance. The complete comment letter is reproduced in appendix III. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Transportation, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or members of your staff have any questions about this report, please contact us at (202) 512-3841, gomezj@gao.gov, or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. In this report, we examine (1) the status of the Pipeline and Hazardous Materials Safety Administration’s (PHMSA) efforts to implement the requirement under the Protecting Our Infrastructure of Pipelines and Enhancing Safety (PIPES) Act of 2016 to issue minimum safety standards for natural gas storage sites, and (2) the extent to which PHMSA has planned strategically to enforce its safety standards for natural gas storage sites. To examine the status of PHMSA’s efforts to implement the requirement to issue minimum safety standards for natural gas storage sites, we examined laws, regulations, and agency documents that describe the authority, time frames, and enforcement goals for implementing new federal rules under the PIPES Act. 
Specifically, we reviewed the PIPES Act to identify requirements that the act directed to the Department of Transportation (DOT) or to PHMSA. To understand PHMSA’s implementation of DOT’s requirements under the act, we reviewed PHMSA notices and regulations as presented in the Federal Register and discussed the information in these documents with agency officials. We also reviewed documents on the PHMSA website intended to provide natural gas storage operators with more detailed guidance and discussed these documents with agency officials. We reviewed an October 2016 report, mandated by the act, which was issued by a task force led by the Department of Energy (DOE). We also obtained and reviewed copies of recommended practices issued by the American Petroleum Institute (API), which issues industry consensus standards for the oil and gas industry, and interviewed API officials to better understand these recommended practices. In addition, we interviewed officials with PHMSA, the Federal Energy Regulatory Commission, the Bureau of Land Management within the Department of the Interior, and the Environmental Protection Agency to understand how they participated in the task force and to what degree they have responsibilities related to natural gas storage safety enforcement. We also obtained data from PHMSA and DOE’s Energy Information Administration about natural gas storage sites to estimate the number and regulatory status of various natural gas storage sites, their locations, and other details. We assessed the reliability of these data by (1) corroborating these data with other sources, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that these data were sufficiently reliable for the purposes of this report. 
We also interviewed agency officials at DOT and PHMSA, discussing agency requirements under the PIPES Act and how PHMSA planned to implement its responsibilities. To better understand the operation and control of natural gas storage sites, we conducted a site visit to the Aliso Canyon Gas Storage Facility in California and spoke with officials representing the operator of the site and with state government officials responsible for safety enforcement at the site. To examine the extent to which PHMSA has planned strategically to enforce safety standards for natural gas storage sites, we compared information we gathered from PHMSA officials and documents with leading practices for strategic planning that our prior work identified by examining requirements under the Government Performance and Results Act of 1993 (GPRA). We have previously reported that requirements under GPRA and the GPRA Modernization Act of 2010 can serve as leading practices for planning at lower levels of the agency. We also interviewed PHMSA officials—including budgetary, policy, and programmatic officials—about their planning efforts for the natural gas storage program. In addition, we reviewed regulations and documents that reflect agency planning efforts, including: PHMSA’s interim final rule on the safety of underground natural gas storage facilities; agency guidance, such as frequently asked questions for operators of natural gas storage sites; and agency planning documents, such as the Training Implementation Plan for Natural Gas Underground Storage Regulation Training, PHMSA 2021 Business Plan - 2017, and workload and budget estimates for the program. Using information obtained from these sources about PHMSA’s efforts to plan for its natural gas storage program, we compared PHMSA’s planning efforts with leading practices for strategic planning identified in our prior reports. 
We conducted this performance audit from November 2016 to November 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 3 identifies the 415 natural gas storage sites active as of January 2016, by state and jurisdiction. The number of natural gas storage sites that fall under federal or state jurisdiction in each state is presented, along with the total storage capacity of the sites. A natural gas storage site is considered to be under federal jurisdiction—also known as “interstate”—if the site is linked to a federally regulated interstate pipeline permitted by the Federal Energy Regulatory Commission. Otherwise, sites are under state jurisdiction. The information in this table was compiled by the Department of Energy’s Energy Information Administration (EIA) in 2016 and provided by the Department of Transportation’s Pipeline and Hazardous Materials Safety Administration (PHMSA). EIA collects these data using a survey of natural gas storage site operators. According to a PHMSA document, PHMSA used these data to, among other things, identify natural gas storage sites and calculate the amount of user fees that it charged operators in 2017 (the first year PHMSA collected these user fees) to fund its inspection and enforcement programs. PHMSA plans to update its information about natural gas storage sites using data submitted by operators, as required by its interim final rule. This rule requires natural gas storage site operators to submit these data on or before July 18, 2017. PHMSA plans to require operators to annually submit this information using a form. 
According to PHMSA officials, the Office of Management and Budget recently approved this form. As a result, PHMSA will begin collecting data that reflect calendar year 2017 by its due date of March 15, 2018. PHMSA officials told us that it will take about 5 to 6 months to develop a website that will allow PHMSA to efficiently collect these data from operators for all sites this year and in future years. In addition to the individuals named above, Mike Hix and Jon Ludwigson (Assistant Directors), Richard Burkard, Lee Carroll, Nirmal Chaudhary, Ellen Fried, Cindy Gilbert, Carol Henn, Mary Koenen, Jessica Lemke, Ben Licht, Greg Marchand, John Mingus, Katrina Pekar-Carpenter, Sara Sullivan, and Kiki Theodoropoulos made important contributions to this report.
|
Natural gas storage is important for ensuring that natural gas is available when demand increases. There are 415 storage sites—including underground salt caverns, depleted aquifers, and depleted oil and gas reservoirs—located in 31 states, often near population centers (see fig.). Leaks from these sites, such as one near Los Angeles that led to the temporary relocation of about 8,000 families in 2015, can result in environmental and economic damage. Until 2016, states set standards for 211 sites, but there were no standards for the 204 sites connected to interstate pipelines subject to federal jurisdiction. With passage of the PIPES Act of 2016, PHMSA, an agency within DOT that, among other things, sets and enforces standards for energy pipelines, was tasked with issuing minimum standards for all gas storage sites. GAO was asked to review natural gas storage safety standards. This report examines (1) PHMSA's efforts to implement the requirement to issue minimum safety standards for natural gas storage sites and (2) the extent to which PHMSA has planned strategically to enforce its safety standards for these sites. GAO reviewed PHMSA documents and plans, compared them to leading planning practices, and interviewed PHMSA officials. To meet its requirement under the Protecting Our Infrastructure of Pipelines and Enhancing Safety (PIPES) Act of 2016, the Department of Transportation's (DOT) Pipeline and Hazardous Materials Safety Administration (PHMSA) issued minimum safety standards in an interim final rule and plans to finalize them by January 2018. Under the interim standards, site operators are to follow industry-developed best practices to detect and prevent leaks and plan for emergencies, among other things. Since the interim final rule went into effect in January 2017, the minimum safety standards apply to all 415 natural gas storage sites, and the rule will be subject to further revision before it is final. 
To enforce its safety standards, PHMSA has taken steps to establish a natural gas storage safety enforcement program. For example, PHMSA has started developing a training program for its inspectors. PHMSA also has identified a strategic goal for its program—to promote continuous improvement in safety performance—and is developing a performance goal for its training program. However, PHMSA has not yet followed certain leading strategic planning practices. For example, PHMSA has not yet defined the level of performance to be achieved, fully addressed all core program activities, or used baseline data to develop its performance goal. GAO has previously reported that requirements under the Government Performance and Results Act (GPRA) and GPRA Modernization Act of 2010—which include establishing performance goals to define the level of performance—can serve as leading practices for lower levels of an agency, such as PHMSA. GAO also has found that successful performance goals address all core program activities. PHMSA's goal focuses on training and does not address other core program activities, such as conducting effective inspections. For example, a goal to evaluate whether PHMSA's inspections are effective could be to annually reduce, by a certain percentage, the number of sites not meeting minimum standards. PHMSA officials told GAO that they will strive to add and refine performance goals as the program evolves. As they do so, ensuring that these goals define the level of performance and address all core program activities, and using baseline data to inform them, could help PHMSA better track progress toward its strategic goal. GAO is making two recommendations: that PHMSA (1) define levels of performance, address core program activities, and use baseline data as it continues developing performance goals for its gas storage program, and (2) use other data and budget information, as they become available, to refine these goals. DOT concurred with GAO's recommendations.
|
The Secretary of Defense established goals for BRAC 2005 in a November 2002 memorandum issuing initial guidance for BRAC 2005 and again in a March 2004 report to Congress certifying the need for a BRAC round. Specifically, the Secretary reported that the BRAC 2005 round would be used to (1) dispose of excess facilities, (2) promote force transformation, and (3) enhance jointness. Although DOD did not specifically define these three goals, we have generally described them in prior reports as follows.

Dispose of excess facilities: Eliminating unneeded infrastructure to achieve savings.

Promote force transformation: Correlating base infrastructure to the force structure and defense strategy. In the late 1990s, DOD embarked on a major effort to transform its business processes, human capital, and military capabilities. Transformation is also seen as a process intended to provide continuous improvements to military capabilities. For example, the Army used the BRAC process to transform the Army’s force structure from an organization based on divisions to more rapidly deployable, brigade-based units and to accommodate rebasing of overseas units.

Enhance jointness: Improving joint utilization to meet current and future threats. According to DOD, “joint” connotes activities, operations, and organizations, among others, in which elements of two or more military departments participate.

Congress established clear time frames in the BRAC statute for many of the milestones involved with base realignments and closures. The BRAC 2005 process took 10 years from authorization through implementation. Congress authorized the BRAC 2005 round on December 28, 2001. The BRAC Commission submitted its recommendations to the President in 2005 and the round ended on September 15, 2011—6 years from the date the President submitted his certification of approval of the recommendations to Congress. 
The statute allows environmental cleanup and property caretaker and transfer actions associated with BRAC sites to exceed the 6-year time limit and does not set a deadline for the completion of these activities. Figure 1 displays the three phases of the BRAC 2005 round—analysis, implementation, and disposal—and key events involving Congress, DOD, and the BRAC Commission. During the analysis phase, DOD developed selection criteria, created a force structure plan and infrastructure inventory, collected and analyzed data, and proposed recommendations for base realignments and closures. The BRAC statute authorizing the BRAC 2005 round directed DOD to propose and adopt selection criteria to develop and evaluate candidate recommendations, with military value as the primary consideration. The BRAC statute also required DOD to develop a force structure plan based on an assessment of probable threats to national security during a 20-year period beginning with fiscal year 2005. Based on the statute’s requirements, the selection criteria were adopted as final in February 2004, and the force structure plan was provided to Congress in March 2004. To help inform its decision-making process during the analysis phase, the three military departments and the seven joint cross-service groups collected capacity and military value data that were certified as accurate by senior leaders. In testimony before the BRAC Commission in May 2005, the Secretary of Defense said that DOD collected approximately 25 million pieces of data as part of the BRAC 2005 process. Given the extensive volume of requested data, we noted in July 2005 that the data-collection process was lengthy and required significant efforts to help ensure data accuracy, particularly from joint cross-service groups that were attempting to obtain common data across multiple military components. 
We reported that, in some cases, issues such as coordinating data requests, clarifying questions and answers, and controlling database entries led to delays in the data-driven analysis DOD originally envisioned. As time progressed, however, these groups reported that they obtained the needed data, for the most part, to inform and support their scenarios. We ultimately reported that DOD’s process for conducting its analysis was generally logical, reasoned, and well documented. After taking these plans and accompanying analyses into consideration, the Secretary of Defense was then required to certify whether DOD should close or realign military installations. The BRAC Commission assessed DOD’s closure and realignment recommendations for consistency with the eight selection criteria and DOD’s Force Structure Plan. Ultimately, the BRAC Commission accepted over 86 percent of DOD’s proposed internal recommendations; rejected or modified others and added recommendations of its own; and adjusted some costs of BRAC recommendations. After the BRAC Commission released its recommendations and the recommendations became binding, the implementation phase started. During this phase, which started on November 9, 2005, and continued to September 15, 2011 (as required by the statute authorizing BRAC), DOD took steps to implement the BRAC Commission’s 198 recommendations. Also during this phase, the military departments were responsible for completing environmental impact studies to determine how to enact the BRAC Commission’s relevant recommendations. The military departments implemented their respective recommendations to close and realign installations, establish joint bases, and construct new facilities. The large number and variety of BRAC actions led DOD to require BRAC oversight mechanisms to improve accountability for implementation. The BRAC 2005 round had more individual actions (813) than the four prior rounds combined (387). 
Thus, in the BRAC 2005 round, the Office of the Secretary of Defense for the first time required the military departments to develop business plans to better inform the Office of the Secretary of Defense of the status of implementation and financial details for each of the BRAC 2005 recommendations. These business plans included (1) a listing of all actions needed to implement each recommendation, (2) schedules for personnel relocations between installations, and (3) updated DOD cost and savings estimates based on current information. This approach permitted senior-level intervention, if warranted, to ensure completion of the BRAC recommendations by the statutory completion date. The disposal phase began soon after the BRAC recommendations became binding and continues today. During the disposal phase, DOD’s policy was to act in an expeditious manner to dispose of closed properties. Such disposal actions included transferring the property to other DOD components and federal agencies, homeless-assistance providers, or local communities for the purposes of job generation, among other actions. In doing so, DOD has incurred caretaker and environmental cleanup costs. For example, DOD reported to Congress that, as of September 2016, the military departments had spent $735 million on environmental cleanup associated with BRAC 2005 sites and had $482 million left to spend on BRAC 2005 sites. Overall, the military departments reported that they had disposed of 59,499 acres and still needed to dispose of 30,239 acres from BRAC 2005 as of September 30, 2016. ASD (EI&E), the military services, and 25 of the 26 military units or organizations we met with did not measure the achievement of the BRAC 2005 goals—reducing excess infrastructure, transforming the military, and promoting jointness. 
Specifically, a senior ASD (EI&E) official stated that no performance measures existed to evaluate the achievement of goals and the office did not create baselines to measure performance. Air Force officials stated that they did not measure the achievement of goals but that it would have been helpful to have metrics to measure success, especially as DOD had requested from Congress another BRAC round. Army officials similarly stated that the Army did not measure the achievement of goals, noting that measuring excess capacity would have been important to help DOD obtain authorization for another BRAC round. Navy and Marine Corps officials said that they did not track performance measures or otherwise measure the achievement of the BRAC 2005 goals. Moreover, 25 of the 26 military units or organizations we met with stated that they did not measure the achievement of BRAC 2005 goals. The one exception in our selected sample was the command at Joint Base Charleston, which stated that it measured jointness through common output or performance-level standards for installation support, as required for installations affected by the BRAC 2005 recommendation on joint basing. By measuring jointness, officials determined that the base met 86 percent of its common output level standards in the second quarter of fiscal year 2017, and the command has identified recommendations to improve on the standards it did not meet. Instead of measuring the achievement of BRAC 2005 goals, officials with ASD (EI&E) and the military departments stated that they tracked completion of the BRAC recommendations by the statutory deadline of September 2011 and measured the cost savings associated with the recommendations. Senior ASD (EI&E) officials stated that the primary measure of success was completing the recommendations as detailed by the implementation actions documented in the business plans. 
In addition, officials from the Army, Navy, and Air Force stated that they measured the savings produced as a result of BRAC 2005. For example, Army officials stated that closing bases in BRAC 2005 significantly reduced base operations support costs, such as by eliminating costs for trash collection, utilities, and information technology services. However, tracking completion of the recommendations and measuring savings did not enable the department to determine the success of the BRAC round in achieving its goals. For example, tracking completion of the recommendations establishing joint training centers did not give DOD insight into whether the military departments achieved the jointness goal by conducting more joint activities or operations. Similarly, measuring savings did not allow DOD to know whether it achieved the goal of reducing excess infrastructure, and in reviewing DOD’s data we found that the department ultimately did not have the needed data to calculate excess infrastructure disposed of during BRAC 2005. Key practices on monitoring performance and results highlight the importance of using performance measures to track an agency’s progress and performance, and stress that performance measures should include a baseline and target; should be objective, measurable, and quantifiable; and should include a time frame. The Standards for Internal Control in the Federal Government emphasizes that an agency’s management should track major agency achievements and compare them to the agency’s plans, goals, and objectives. During BRAC 2005, DOD was not required to identify appropriate measures of effectiveness and track achievement of its goals. As a result, in March 2013, we recommended that, in the event of any future BRAC round, DOD identify appropriate measures of effectiveness and develop a plan to demonstrate the extent to which the department achieved the results intended from the implementation of the BRAC round. 
DOD did not concur with our recommendation, stating that military value should be the key driver for BRAC. However, we noted at the time that our recommendation does not undermine DOD’s reliance on military value as the primary selection criteria for DOD’s base realignment and closure candidate recommendations, and DOD can still prioritize military value while identifying measures that help determine whether DOD achieved the military value that it seeks. As of October 2017, DOD officials stated that no action to implement our recommendation is expected. We continue to believe that, if any future BRAC round is authorized, the department would benefit from measuring its achievement of goals. Further, this information would assist Congress in assessing the outcomes of any future BRAC rounds. Given that DOD did not concur with our 2013 recommendation and does not plan to act upon it, DOD is not currently required to identify appropriate measures of effectiveness and track achievement of its BRAC goals in future rounds. Without a requirement to identify and measure the achievement of goals for a BRAC round, DOD cannot demonstrate to Congress whether the implementation of any future BRAC round will improve efficiency and effectiveness or otherwise have the effect that the department says its proposed recommendations will achieve. If Congress would like to increase its oversight for any future BRAC round, requiring DOD to identify appropriate measures of effectiveness and track achievement of its goals would provide it with improved visibility over the expected outcomes. DOD has implemented 33 of the 65 prior recommendations that we identified in our work since 2004, and it has the opportunity to address additional challenges regarding communications and monitoring to improve any future BRAC round. 
Specifically, for the BRAC analysis phase, DOD implemented 1 of 12 recommendations, and it has agreed to implement another 7 recommendations should Congress authorize any future BRAC round. Additionally, we found that DOD can improve its communications during the analysis phase. For the implementation phase, DOD implemented 28 of 39 recommendations, and it has agreed to implement another 3 recommendations. Further, we found it can improve monitoring of mission-related changes. For the disposal phase, DOD implemented 4 of 14 recommendations, and it has agreed to implement another 8 recommendations. Of the 12 recommendations we made from 2004 to 2016 to help DOD improve the BRAC analysis phase, DOD generally agreed with 6 of them and, as of October 2017, DOD had implemented 1. Specifically, DOD implemented our May 2004 recommendation to provide a more detailed discussion on assumptions used in its May 2005 report on BRAC recommendations. In addition, DOD stated it would address seven recommendations—the other five recommendations it agreed with and two it had previously nonconcurred with—affecting BRAC’s analysis phase in the event of any future BRAC round. These recommendations included better estimating information technology costs and improving ways of describing and entering cost data. DOD reported that the department is awaiting authorization of a future BRAC round prior to implementing these recommendations. Appendix III provides more information on our recommendations, DOD’s response, and DOD’s actions to date concerning the BRAC analysis phase. DOD officials cited an additional challenge with communications during the BRAC 2005 analysis phase. Specifically, some military organizations we met with stated that they could not communicate to BRAC decision makers information outside of the data-collection process, which ultimately hindered analysis. 
For example: Officials from the Army Human Resources Command in Fort Knox, Kentucky, said that facilities data submitted during the data-collection process did not convey a complete picture of excess capacity at the installation, and officials at Fort Knox were unable to share the appropriate context or details because nondisclosure agreements prevented communication. Specifically, they stated that the data showed an overall estimate of Fort Knox’s excess capacity, but the data did not detail that the excess was not contiguous but rather based on space at 40 buildings spread throughout the installation. The officials stated that there was no way to communicate to decision makers during the data collection process that the facilities were ill-suited for relocating the Human Resources Command and would require significant renovation costs to host the command’s information technology infrastructure. The officials said that, because the needed details on the facility data were not communicated, the relocation moved forward without full consideration of alternatives for using better-suited excess space at other locations that would not require significant costs to renovate. As a result, the Army ultimately constructed a new headquarters building for the Human Resources Command at Fort Knox and DOD spent approximately $55 million more than estimated to complete this action. Officials at the Naval Consolidated Brig Charleston, South Carolina, told us that the lack of communication outside of the data-collection process resulted in decision makers not taking into account declining numbers of prisoners, leading to the construction of a new, oversized building in which to house prisoners. 
The officials said that the decision makers analyzing the facilities data did not consider the current correctional population; rather, the decision makers considered a correctional model based on the type of military force fielded in World War II and the Korean and Vietnam wars—a force composed of conscripted personnel that served longer tours and had higher correctional needs. Further, the officials said the decision makers did not consider that, in the 2000 to 2005 period, DOD increased the use of administrative separations from military service rather than incarcerate service members convicted of offenses, such as drug-related crimes or unauthorized absence, further reducing correctional needs. The officials said they did not have a mechanism to communicate this information outside of the data-collection process when decision makers were analyzing the facilities data. As a result, the BRAC Commission recommendation added 680 beds throughout the corrections system, increasing the Navy’s total confinement capacity to 1,200 posttrial beds. Specifically at Naval Consolidated Brig Charleston, the BRAC recommendation added 80 beds at a cost of approximately $10 million. However, the facility already had excess capacity prior to the 2005 BRAC recommendation, and its excess capacity further increased after adding 80 beds (see fig. 2). Air National Guard officials said that the lack of communication outside of the data-collection process in the BRAC analysis phase meant that they could not identify the specific location of excess facilities. Specifically, they said the facilities data showed that Elmendorf Air Force Base, Alaska, had sufficient preexisting space to accept units relocating from Kulis Air Guard Station, Alaska, a base slated for closure. However, without communicating with base officials, Air National Guard officials did not know that the space was not contiguous. 
As a result, officials stated that DOD ultimately needed to complete additional military construction to move the mission from Kulis Air Guard Station. The BRAC Commission increased the Air Force’s initial cost estimate by approximately $66 million to implement the BRAC recommendation. U.S. Army Central officials stated that there was no communication outside of the data-collection process to allow DOD to fully consider workforce recruitment-related issues in deciding to move the U.S. Army Central headquarters to Shaw Air Force Base, South Carolina. While other criteria, such as military value, enhancing jointness, and enabling business process transformation, were considered in developing the recommendation, the officials stated that they were unable to communicate concerns regarding civilian hiring and military transfers. The officials said that since the headquarters’ move to Shaw Air Force Base from Fort McPherson, Georgia, they have had difficulties recruiting civilian employees, such as information technology personnel, to their facility because of its location. They also said that it has been harder to encourage Army personnel to move to Shaw Air Force Base due to a perception that there is a lack of promotional opportunities at an Army organization on an Air Force base. As a result, U.S. Army Central officials said morale surveys have indicated that these workforce issues have negatively affected mission accomplishment. The military departments and organizations we met with said that these concerns regarding the BRAC 2005 analysis phase were because DOD did not establish clear and consistent communications throughout different levels of authority in the department during data collection. According to Standards for Internal Control in the Federal Government, management should use relevant data from reliable sources and process these data into quality information that is complete and accurate. 
Further, management should communicate quality information down, across, up, and around reporting lines to all levels of the department. Given the unclear and inconsistent communications in the department during data collection, DOD decision makers had data that may have been outdated or incomplete. Additionally, the outdated and incomplete data hindered the BRAC 2005 analysis and contributed to additional costs and recruitment problems at some locations affected by BRAC 2005, as previously discussed. Officials stated that clear and consistent communications would have improved the flow of information between on-the-ground personnel and decision makers and could have better informed the BRAC decision-making process. For example, Army officials said that nondisclosure agreements hindered their ability to call personnel at some installations to confirm details about buildings and facilities in question. The Air Force’s Lessons Learned: BRAC 2005 report stated that site surveys could have communicated additional detail and generated more specific requirements than those generated in an automated software tool that the Air Force used for BRAC-related analysis. Navy officials said that, with limited communication, there were shortfalls in the decision-making process. Overall, officials from ASD (EI&E) and the military departments agreed that communication could be improved in the analysis phase of any future BRAC round. They also cited improved technology, such as geographic information system software and a new base stationing tool, as well as an increase in the amount of data collected as factors that may mitigate any effects of reduced communication if Congress authorizes any future BRAC round. Without taking steps to establish clear and consistent communication throughout the department during data collection, DOD risks collecting outdated and incomplete data in any future BRAC rounds that may hinder its analysis and the achievement of its stated goals for BRAC. 
To improve the implementation phase of the BRAC 2005 round, we made 39 recommendations between 2005 and 2016. DOD generally agreed with 32 and did not concur with 7 recommendations. As of October 2017, DOD had implemented 28 of these recommendations. DOD stated that it does not plan on implementing 8 of the recommendations, and action on 3 of the recommendations is pending. Our previous recommendations relate to issues including providing guidance for consolidating training, refining cost and performance data, and periodic reviews of installation-support standards, among others. Appendix IV provides more information on our recommendations, DOD’s response, and DOD’s actions to date concerning the BRAC implementation phase. DOD officials identified challenges related to monitoring mission-related changes during the implementation of the BRAC 2005 recommendations, specifically when unforeseen circumstances developed that affected units’ ability to carry out their missions following implementation or added difficulties to fulfilling the intent of the recommendation. For example: During the implementation process, a final environmental impact statement at Eglin Air Force Base, Florida, contributed to the decision that only a portion of the initial proposed aircraft and operations would be established to fulfill the Joint Strike Fighter Initial Joint Training Site recommendation. Marine Corps officials stated that as a result of this environmental impact statement and the subsequent limitations, the Marine Corps decided to eventually move its training from Eglin Air Force Base to Marine Corps Air Station Beaufort, South Carolina. Despite these limitations, the Air Force constructed infrastructure for the Marine Corps’ use at Eglin Air Force Base in order to fulfill the minimum legal requirements of the recommendation. 
Specifically, the BRAC 2005 recommendation realigned the Air Force, Navy, and Marine Corps portions of the F-35 Joint Strike Fighter Initial Joint Training Site to Eglin Air Force Base. The Air Force’s goal and the initial proposal for the Joint Strike Fighter Initial Joint Training Site at Eglin Air Force Base were to accommodate 107 F-35 aircraft, with three Air Force squadrons of 24 F-35 aircraft each, one Navy squadron with 15 F-35 aircraft, and one Marine Corps squadron of 20 F-35 aircraft. In 2008, after the implementation phase began, DOD completed an environmental impact statement for the proposed implementation of the BRAC recommendations at Eglin Air Force Base. Based on the environmental impact statement and other factors, a final decision was issued in February 2009, stating that the Air Force would only implement a portion of the proposed actions for the recommendation, with a limit of 59 F-35 aircraft and reduced planned flight operations due to potential noise impacts, among other factors. This decision stated that the subsequent operational limitations would not be practical for use on a long-term basis but would remain in place until a supplemental environmental impact statement could be completed. After the final supplemental environmental impact statement was released, in June 2014 DOD decided to continue the limited operations established in the February 2009 decision. Marine Corps officials stated that, as a result of the February 2009 decision, the Marine Corps decided that it would eventually move its F-35 aircraft from Eglin Air Force Base to Marine Corps Air Station Beaufort. According to Marine Corps officials, by September 2009 the Marine Corps had developed a concept to prepare Marine Corps Air Station Beaufort to host its F-35 aircraft. 
A September 2010 draft supplemental environmental impact statement included updated operational data and found that the Marine Corps total airfield operations at Eglin Air Force Base would be reduced by 30.7 percent from the proposals first assessed in the 2008 final environmental impact statement. However, to abide by the BRAC recommendation, Marine Corps officials stated that the Marine Corps temporarily established an F-35 training squadron at Eglin Air Force Base in April 2010. Using fiscal year 2010 military construction funding, DOD spent approximately $27.7 million to create a landing field for use by the new Marine Corps F-35 training squadron mission at Eglin Air Force Base. Marine Corps officials stated that this construction occurred during the same period as the decision to relocate the F-35 training squadron to Marine Corps Air Station Beaufort. However, ASD (EI&E) officials stated that they did not know about this mission-related change, adding that they expected any change to be reported from the units to the responsible military department through the chain of command. However, the military departments did not have guidance to report these mission-related changes to ASD (EI&E) in the business plans during implementation; without this guidance, the changes related to the Marine Corps F-35 mission were not relayed to ASD (EI&E) through the Air Force. Officials from the Joint Strike Fighter training program at Eglin Air Force Base stated that this construction was finished in June 2012 and that it was never used by the Marine Corps. In February 2014, the Marine Corps F-35 training squadron left Eglin Air Force Base and was established at Marine Corps Air Station Beaufort. The Marine Corps does not plan on returning any F-35 aircraft from Marine Corps Air Station Beaufort to Eglin Air Force Base for joint training activities. 
Additionally, officials from the Armed Forces Chaplaincy Center stated that studies undertaken during the implementation phase determined that it would be difficult to fulfill the intent of a recommendation creating a joint center for religious training and education, yet the recommendation was implemented and included new construction with significantly greater costs than initial estimates. The BRAC 2005 recommendation consolidated Army, Navy, and Air Force religious training and education at Fort Jackson, South Carolina, establishing a Joint Center of Excellence for Religious Training and Education. Prior to the construction of facilities to accommodate this recommendation, the Interservice Training Review Organization conducted a study published in November 2006 that assessed the resource requirements and costs of consolidating and colocating the joint chaplaincy training at Fort Jackson. This study identified limitations in the feasibility of consolidating a joint training mission for the chaplains, including differences within the services’ training schedules and the limited availability of specific administrative requirements for each service, as well as limited instructors and curriculum development personnel. Despite the results of this study, in 2008 an approximately $11.5 million construction project began to build facilities for the Joint Center of Excellence for Religious Training and Education. However, ASD (EI&E) officials stated that they did not know about the results of the study. The military departments did not have guidance to report these mission-related changes, which ultimately were not relayed from the units to ASD (EI&E). Officials from the Armed Forces Chaplaincy Center stated that following the start of construction to accommodate the recommendation, the services completed additional studies in 2008 and 2011 that further identified limitations to the feasibility of joint training for the services’ chaplains. 
Overall, the services discovered that 95 percent of the religious training could not be conducted jointly. Moreover, the military departments have faced additional impediments to their respective missions for religious training and education. For example, the Army stated it could not house its junior soldiers alongside the senior Air Force chaplaincy students, and both the Navy and Air Force had to transport their chaplains to other nearby bases to receive service-specific training. Due to these challenges, officials from the Armed Forces Chaplaincy Center stated that the Air Force chaplains left Fort Jackson and returned to Maxwell Air Force Base, Alabama, in 2017, and the Navy has also discussed leaving Fort Jackson and returning to Naval Station Newport, Rhode Island. Standards for Internal Control in the Federal Government emphasizes the importance of monitoring the changes an entity faces so that the entity’s internal controls can remain aligned with changing objectives, environment, laws, resources, and risks. During the implementation phase of BRAC 2005, DOD did not have specific guidance for the military services to monitor mission-related changes that added difficulties to fulfilling the intent of BRAC recommendations. The Office of the Secretary of Defense required BRAC recommendation business plans to be submitted every 6 months and include information such as a listing of all actions needed to implement each recommendation, schedules for personnel movements between installations, updated cost and savings estimates based on better and updated information, and implementation completion time frames. 
In addition, in November 2008, the Deputy Under Secretary of Defense (Installations and Environment) issued a memorandum requiring the military departments and certain defense agencies to present periodic status briefings to the Office of the Secretary of Defense on implementation progress and to identify any significant issues impacting the ability to implement BRAC recommendations by the September 15, 2011, statutory deadline. The 6-month business plan updates and the memorandum on periodic briefings focused primarily on changes affecting the ability to fully implement the BRAC recommendations and on meeting the statutory deadline, but they did not provide specific guidance to inform ASD (EI&E) of mission-related changes that arose from unforeseen challenges during the implementation phase. According to a senior official with ASD (EI&E), if the organization responsible for a business plan identified a need to change the plan to fulfill the legal obligation of the recommendation by the statutory deadline, ASD (EI&E) reviewed any proposed changes through meetings with stakeholders involved in implementation. According to this official, the office typically only got involved with the implementation if the business plan was substantively out of line with the intent of the recommendation or if there was a dispute between two DOD organizations, such as two military departments. The official stated that any installation-level concerns had to be raised to the attention of ASD (EI&E) through the responsible military department’s chain of command. If a mission-related change was not raised through the military department’s chain of command, then ASD (EI&E) officials were not always aware of the details of such changes. 
ASD (EI&E) officials acknowledged that they did not know about all mission-related changes during implementation, such as with the Joint Strike Fighter recommendations, and they stated that there was no explicit guidance informing the military departments to report challenges and mission-related changes to ASD (EI&E). Senior officials from ASD (EI&E) stated that additional guidance would be appropriate in the event of any future BRAC round. This lack of specific guidance to monitor and report mission-related changes that arose during BRAC 2005 implementation ultimately resulted in inefficient use of space and extra costs for DOD. Without providing specific guidance to monitor and report mission-related changes that require significant changes to the recommendation business plans, DOD will not be able to effectively monitor the efficient use of space and the costs associated with implementing any future BRAC recommendations. Furthermore, DOD may not be able to effectively make adjustments in its plans to ensure that the department achieves its overall goals in any future BRAC rounds. Of the 14 recommendations we made from 2007 to 2017 to help DOD address challenges affecting BRAC’s disposal phase, DOD generally agreed with 12 of them. As of October 2017, DOD had implemented 4 of the recommendations, with actions on 8 others pending. Our previous recommendations relate to three primary issues: guidance for communities managing the effects of the reduction or growth of DOD installations, the environmental cleanup process for closed properties, and the process for reusing closed properties for homeless assistance. Appendix V provides more information on our recommendations, DOD’s response, and DOD’s actions to date concerning the BRAC disposal phase. During our review, we identified an additional example of challenges in the disposal phase related to the environmental cleanup process. 
Specifically, officials representing Portsmouth, Rhode Island, stated that the city had issues with the environmental cleanup process resulting from BRAC 2005 changes at Naval Station Newport, Rhode Island. According to the site’s environmental impact statement, the land Portsmouth is to receive is contaminated and requires cleanup prior to transfer, and officials from the community stated that the Navy has not provided them with a clear understanding of a time frame for the environmental cleanup process needed to transfer the property. However, a senior official from the Navy stated that uncertainties in available funds and unforeseen environmental obstacles are common and prevent the Navy from projecting specific estimates for environmental cleanup time frames. The officials representing Portsmouth stated that, due to the lack of information from the Navy on a projected time frame for cleaning and transferring the property, representatives in the community have begun to discuss not wanting to take over the land and letting the Navy hold a public sale. We had previously recommended in January 2017 that DOD create a repository or method to record and share lessons learned about how various locations have successfully addressed environmental cleanup challenges. DOD concurred and actions are pending. Moreover, during our review we identified additional examples of challenges in the disposal phase related to the homeless assistance program. For example, officials representing the community of Wilmington, North Carolina, stated that they had issues with the homeless-assistance process regarding a closed Armed Forces Reserve Center. According to the officials, they did not know that there were legal alternatives to providing on-base property for homeless assistance. 
Wilmington officials stated that the city would have been willing to construct a homeless-assistance facility in a nonbase location, and use the closed property for a different purpose, which would have expedited the overall redevelopment process. According to the officials, the organization that took over the property for homeless-assistance purposes lacks the financial means to complete the entire project plan, and as of July 2017 it remains unfinished. We had previously recommended that DOD and the Department of Housing and Urban Development—which, with DOD, develops the implementing regulations for the BRAC homeless-assistance process—include information on legal alternatives to providing on-base property to expedite the redevelopment process, but DOD did not concur and stated no action is expected. Additionally, officials from New Haven, Connecticut, stated that the process of finding land suitable for a homeless assistance provider and converting an Army Reserve Center into a police academy took an undesirably long amount of time to complete. The officials stated that the process of preparing its redevelopment plan and transferring the property from DOD to the community lasted roughly 5 years from 2008 to 2013, and they suggested streamlining or expediting this process. As a result of these types of delays, many properties have not yet been transferred from DOD to the communities, and undisposed properties continue to increase caretaker costs. As of September 30, 2016, DOD had received approximately $172 million in payments for transfers, and it had spent approximately $275 million for caretaker costs of buildings and land prior to transferring property on closed installations during BRAC 2005. 
Implementing our prior recommendations related to the BRAC environmental cleanup and homeless-assistance process could help DOD expedite the disposal of unneeded and costly BRAC property, reduce the fiscal exposure stemming from continuing to hold these properties, and ultimately improve the effectiveness of the disposal phase. DOD has long faced challenges in reducing unneeded infrastructure, and on five different occasions DOD has used the BRAC process to reduce excess capacity and better match needed infrastructure to the force structure and to support military missions. In addition to using BRAC to reduce excess capacity, DOD also sought to promote jointness across the military departments and realign installations in the 2005 round, making the round the biggest, costliest, and most complex ever. While DOD finished its implementation of BRAC 2005 in September 2011 and continues to prepare some remaining sites for disposal, it did not measure whether and to what extent it achieved the round’s goals of reducing excess infrastructure, transforming the military, and promoting jointness. Because it did not measure whether the BRAC actions achieved these goals, DOD cannot demonstrate whether the military departments have improved their efficiency or effectiveness as a result of the BRAC 2005 actions. In October 2017, DOD officials stated the department does not plan to take action on our March 2013 recommendation to measure goals for any future BRAC round. Congress can take steps to improve its oversight of any future BRAC round, specifically by requiring DOD to identify and track appropriate measures of effectiveness. Congress would have enhanced information to make decisions about approving any future BRAC rounds, while DOD would be in a stronger position to demonstrate the benefits it achieves relative to the up-front implementation costs incurred for holding any future BRAC rounds. 
In addition, challenges in the analysis, implementation, and disposal phases of BRAC 2005 led to unintended consequences, such as increases in costs, workforce recruitment issues, and delayed disposal of closed properties. Limited or restricted communications throughout different levels of authority in the department during data collection hampered the ability of decision makers to receive as much relevant information as possible during BRAC 2005. If Congress authorizes any future BRAC round, ASD (EI&E) can encourage clear and consistent communication throughout DOD during the analysis phase, thereby helping personnel to address any potential problems that may arise. In addition, without specific guidance to monitor mission-related changes during the BRAC implementation phase, DOD did not fulfill the intent of some recommendations and spent millions of dollars to build infrastructure that was ultimately unused or underutilized. This lack of specific guidance meant that ASD (EI&E) was not aware of all mission-related changes. By instituting improvements to the analysis, implementation, and disposal phases in any future BRAC round, DOD could better inform decision making, better ensure that its infrastructure meets the needs of its force structure, and better position itself to gain congressional approval for additional rounds of BRAC in the future. Congress should consider, in any future BRAC authorization, a requirement for DOD to identify appropriate measures of effectiveness and to track the achievement of its goals. (Matter for Consideration 1) We are making the following two recommendations to the Secretary of Defense. In the event of any future BRAC round, the Secretary of Defense should ensure that ASD (EI&E) and the military departments take steps to establish clear and consistent communications throughout the department during data collection. 
(Recommendation 1) In the event of any future BRAC round, the Secretary of Defense should ensure that ASD (EI&E) provides specific guidance for the military departments to monitor and report on mission-related changes that require significant changes to the recommendation business plans. (Recommendation 2) We provided a draft of this report for review and comment to DOD. In written comments, DOD objected to our matter for congressional consideration and concurred with both recommendations. DOD’s comments are summarized below and reprinted in their entirety in appendix VI. DOD also provided technical comments, which we incorporated as appropriate. DOD objected to our matter for congressional consideration that Congress should consider, in any future BRAC authorization, a requirement for DOD to identify appropriate measures of effectiveness and to track the achievement of its goals. DOD stated that, as advised by BRAC counsel, it believes this requirement would subvert the statutory requirement that military value be the priority consideration. However, as we noted when we originally directed this recommendation to the department in March 2013, our recommendation does not undermine DOD’s reliance on military value as the primary selection criteria for DOD’s BRAC candidate recommendations, and DOD can still prioritize military value while identifying measures that help determine whether DOD achieved the military value that it seeks. Congress enacting a requirement for DOD to identify appropriate measures of effectiveness and to track the achievement of its goals, alongside the requirement to prioritize military value, would address DOD’s concern about subverting a statutory requirement related to military value. Moreover, the department will likely have a better understanding of whether it achieved its intended results while still continuing to enhance military value. 
DOD concurred with our first recommendation that, in the event of any future BRAC round, the Secretary of Defense should ensure that ASD (EI&E) and the military departments take steps to establish clear and consistent communications throughout the department during data collection. In its letter, however, DOD stated it did not agree with our assertion that the perceptions of lower-level personnel are necessarily indicative of the process as a whole. We disagree with DOD's statement that we relied on the perceptions of lower-level personnel. We obtained perceptions from senior personnel in the various military organizations deemed by DOD leadership to be the most knowledgeable. We then corroborated these perceptions with those from senior officials from the military departments, along with evidence obtained from the Air Force and Army lessons-learned reports. Moreover, DOD stated that the ability to gather data was not limited by the nondisclosure agreements or an inability to communicate with those participating in the BRAC process. While DOD concurred with our recommendation, we continue to believe it should consider the perceptions obtained from knowledgeable personnel that data gathering was limited by nondisclosure agreements or an inability to communicate throughout different levels of authority in the department during data collection.

DOD also concurred with our second recommendation that, in the event of any future BRAC round, the Secretary of Defense should ensure that ASD (EI&E) provides specific guidance for the military departments to monitor and report on mission-related changes that require significant changes to the recommendation business plans. In its letter, DOD stated it would continue to provide guidance, as it did in the 2005 BRAC round, to encourage resolution at the lowest possible level, with Office of the Secretary of Defense involvement limited to review and approval of any necessary changes to the business plans.
However, as we reported, if a mission-related change was not raised through the military department's chain of command, ASD (EI&E) officials stated that they were not always aware of the details of such changes, hence the need for our recommendation. By providing specific guidance to monitor and report mission-related changes that require significant changes to the recommendation business plans, DOD may be able to more effectively make adjustments in its plans to ensure that the department achieves its overall goals in any future BRAC rounds.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 15 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; and the Commandant of the Marine Corps. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.

Selected economic indicators for the 20 communities surrounding the 23 Department of Defense (DOD) installations closed in the 2005 Base Realignment and Closure (BRAC) round vary compared to national averages. In our analysis, we used annual unemployment and real per capita income growth rates compiled by the U.S. Bureau of Labor Statistics and the U.S. Bureau of Economic Analysis as broad indicators of the economic health of those communities where installation closures occurred. Our analyses of the U.S.
Bureau of Labor Statistics annual unemployment data for 2016, the most recent data available, showed that 11 of the 20 closure communities had unemployment rates at or below the national average of 4.9 percent for the period from January through December 2016. Another seven communities had unemployment rates that were higher than the national average but at or below 6.0 percent. Only two communities had unemployment rates above 8.0 percent (see fig. 3). Of the 20 closure communities, Portland-South Portland, Maine (Naval Air Station Brunswick) had the lowest unemployment rate at 3.0 percent and Yukon-Koyukuk, Alaska (Galena Forward Operating Location) had the highest rate at 17.2 percent. We also used per capita income data from the U.S. Bureau of Economic Analysis between 2006 and 2016 to calculate annualized growth rates and found that 11 of the 20 closure communities had annualized real per capita income growth rates that were higher than the national average of 1.0 percent (see fig. 4). The other 9 communities had rates that were below the national average. Of the 20 communities affected, Yukon-Koyukuk, Alaska (Galena Forward Operating Location) had the highest annualized growth rate at 4.6 percent and Gulfport-Biloxi-Pascagoula, Mississippi (Mississippi Army Ammunition Plant and Naval Station Pascagoula) had the lowest rate at -0.1 percent.

The objectives of our review were to assess the extent that the Department of Defense (DOD) (1) measured the achievement of goals for reducing excess infrastructure, transforming the military, and promoting jointness for the 2005 Base Realignment and Closure (BRAC) round and (2) implemented prior GAO recommendations and addressed any additional challenges faced in BRAC 2005 to improve performance for any future BRAC round. In addition, we describe how current economic indicators for the communities surrounding the 23 closed bases in BRAC 2005 compare to national averages.
For all objectives, we reviewed the 2005 BRAC Commission's September 2005 report to the President, policy memorandums, and guidance on conducting BRAC 2005. We also reviewed other relevant documentation, such as supporting BRAC analyses prepared by the military services or units related to the development of BRAC 2005 recommendations. We interviewed officials with the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment; the Army; the Navy; the Air Force; the Marine Corps; the U.S. Army Reserve Command; and the National Guard Bureau. We also conducted site visits to Connecticut, Indiana, Kentucky, Massachusetts, North Carolina, Rhode Island, and South Carolina. We met with 26 military units or organizations, such as Air Force wings and Army and Navy installations' Departments of Public Works, and 12 communities involved with BRAC 2005 recommendations. These interviews provide examples of the challenges faced by each individual party, but the information obtained is not generalizable to all parties involved in the BRAC process. We selected locations for site visits based on ensuring geographic diversity and a mix of types of BRAC recommendations (closures, transformation, or jointness), and having at least one installation from or community associated with each military department.

To assess the extent that DOD measured the achievement of goals for reducing excess infrastructure, transforming the military, and promoting jointness for BRAC 2005, we met with officials to discuss measurement of goals and requested any related documentation. We compared DOD's efforts to Standards for Internal Control in the Federal Government, which emphasizes that an agency's management should track major agency achievements and compare these to the agency's plans, goals, and objectives. We also tried to calculate the excess infrastructure disposed of during BRAC 2005; however, DOD's data were incomplete.
Specifically, in reviewing the square footage and plant replacement value data from DOD’s Cost of Base Realignment Actions model, we found that data from several bases were not included. Additionally, a senior official with the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment stated the data provided were not the most current data used during BRAC 2005 and the office did not have access to the complete data. We also tried to corroborate the square footage and plant replacement value data from the Cost of Base Realignment Actions model to DOD’s 2005 Base Structure Report, but we found the data to be incomparable. As such, we determined that the incomplete and outdated data were not sufficiently reliable to calculate the excess infrastructure disposed of during BRAC 2005. To assess the extent that DOD implemented prior GAO recommendations on BRAC 2005 and addressed any additional challenges faced in BRAC 2005 to improve performance for any future BRAC round, we reviewed our prior reports and testimonies on BRAC 2005 to identify recommendations made and determined whether those recommendations applied to the analysis, implementation, or disposal phase of BRAC 2005. We then identified whether DOD implemented recommendations we made by discussing the status of recommendations with agency officials and obtaining copies of agency documents supporting the recommendations’ implementation. We also met with officials to identify what challenges, if any, continue to be faced and what opportunities exist to improve the analysis, implementation, and disposal phases for any future BRAC round. For the analysis phase, we reviewed military service lessons-learned documents. 
For the implementation phase, we reviewed business plans supporting the implementation of the BRAC 2005 recommendations and other applicable documentation, such as a workforce planning study and an environmental impact statement affecting the implementation of some recommendations. For the disposal phase, we analyzed DOD’s caretaker costs for closed bases that it has not yet transferred. We compared information about challenges in the analysis, implementation, and disposal phases to criteria for communications, monitoring, and risk assessments in Standards for Internal Control in the Federal Government. To describe how current economic indicators for the communities surrounding the 23 closed bases in BRAC 2005 compare to national averages, we collected economic indicator data on the communities surrounding closed bases from the Bureau of Labor Statistics and the Bureau of Economic Analysis in order to compare them with national averages. To identify the communities surrounding closed bases, we focused our review on the 23 major DOD installations closed in the BRAC 2005 round and their surrounding communities. For BRAC 2005, DOD defined major installation closures as those that had a plant replacement value exceeding $100 million. We used information from our 2013 report, which identified the major closure installations. We then defined the “community” surrounding each major installation by (1) identifying the economic area in DOD’s Base Closure and Realignment Report, which linked a metropolitan statistical area, a metropolitan division, or a micropolitan statistical area to each installation, and then (2) updating those economic areas based on the most current statistical areas or divisions, as appropriate. 
Because DOD’s BRAC report did not identify the census area for the Galena Forward Operating Location in Alaska or the Naval Weapons Station Seal Beach Detachment in Concord, California, we identified the town of Galena as within the Yukon-Koyukuk Census Area and the city of Concord in the Oakland-Hayward-Berkeley, CA Metropolitan Division, and our analyses used the economic data for these areas. See table 1 for a list of the major DOD installations closed in BRAC 2005 and their corresponding economic areas. To compare the economic indicator data of the communities surrounding the 23 major DOD installations closed in the BRAC 2005 round to U.S. national averages, we collected and analyzed calendar year 2016 unemployment data from the U.S. Bureau of Labor Statistics and calendar year 2006 through 2016 per capita income growth data, along with data on inflation, from the U.S. Bureau of Economic Analysis which we used to calculate annualized real per capita income growth rates. Calendar year 2016 was the most current year for which local area data were available from these databases. We assessed the reliability of these data by reviewing U.S. Bureau of Labor Statistics and U.S. Bureau of Economic Analysis documentation regarding the methods used by each agency in producing their data and found the data to be sufficiently reliable to report 2016 annual unemployment rates and 2006 through 2016 real per capita income growth. We used unemployment and annualized real per capita income growth rates as key performance indicators because (1) DOD used these measures in its community economic impact analysis during the BRAC location selection process and (2) economists commonly use these measures in assessing the economic health of an area over time. While our assessment provides an overall picture of how these communities compare with the national averages, it does not isolate the condition, or the changes in that condition, that may be attributed to a specific BRAC action. 
We conducted this performance audit from April 2017 to March 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

To improve the analysis phase of the 2005 Base Realignment and Closure (BRAC) round, we made 12 recommendations between 2004 and 2016. The Department of Defense (DOD) fully concurred with 4, partially concurred with 2, and did not concur with 6 recommendations. It implemented 1 of the 12 recommendations (see table 2). According to DOD officials, DOD will be unable to take actions on 7 recommendations unless Congress authorizes any future BRAC round.

To improve the implementation phase of the 2005 Base Realignment and Closure (BRAC) round, we made 39 recommendations between 2005 and 2016. The Department of Defense (DOD) fully concurred with 17, partially concurred with 15, and did not concur with 7 recommendations. DOD implemented 28 of them (see table 3).

To improve the disposal phase of the 2005 Base Realignment and Closure (BRAC) round, we made 14 recommendations between 2007 and 2017. The Department of Defense (DOD) fully concurred with 7, partially concurred with 5, and did not concur with 2 recommendations. DOD implemented 4 of them, with 8 recommendations pending further action (see table 4). According to DOD officials, DOD will be unable to take actions on 5 of the 8 pending recommendations until another BRAC round is authorized.
In addition to the contact named above, Gina Hoffman (Assistant Director), Tracy Barnes, Irina Bukharin, Timothy Carr, Amie Lesser, John Mingus, Kevin Newak, Carol Petersen, Richard Powelson, Clarice Ransom, Jodie Sandel, Eric Schwab, Michael Silver, and Ardith Spence made key contributions to this report.

High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017.
Military Base Realignments and Closures: DOD Has Improved Environmental Cleanup Reporting but Should Obtain and Share More Information. GAO-17-151. Washington, D.C.: January 19, 2017.
Military Base Realignments and Closures: More Guidance and Information Needed to Take Advantage of Opportunities to Consolidate Training. GAO-16-45. Washington, D.C.: February 18, 2016.
Military Base Realignments and Closures: Process for Reusing Property for Homeless Assistance Needs Improvements. GAO-15-274. Washington, D.C.: March 16, 2015.
DOD Joint Bases: Implementation Challenges Demonstrate Need to Reevaluate the Program. GAO-14-577. Washington, D.C.: September 19, 2014.
Defense Health Care Reform: Actions Needed to Help Realize Potential Cost Savings from Medical Education and Training. GAO-14-630. Washington, D.C.: July 31, 2014.
Defense Infrastructure: DOD's Excess Capacity Estimating Methods Have Limitations. GAO-13-535. Washington, D.C.: June 20, 2013.
Defense Infrastructure: Communities Need Additional Guidance and Information to Improve Their Ability to Adjust to DOD Installation Closure or Growth. GAO-13-436. Washington, D.C.: May 14, 2013.
Military Bases: Opportunities Exist to Improve Future Base Realignment and Closure Rounds. GAO-13-149. Washington, D.C.: March 7, 2013.
DOD Joint Bases: Management Improvements Needed to Achieve Greater Efficiencies. GAO-13-134. Washington, D.C.: November 15, 2012.
Military Base Realignments and Closures: The National Geospatial-Intelligence Agency's Technology Center Construction Project. GAO-12-770R. Washington, D.C.: June 29, 2012.
Military Base Realignments and Closures: Updated Costs and Savings Estimates from BRAC 2005. GAO-12-709R. Washington, D.C.: June 29, 2012.
Military Base Realignments and Closures: Key Factors Contributing to BRAC 2005 Results. GAO-12-513T. Washington, D.C.: March 8, 2012.
Excess Facilities: DOD Needs More Complete Information and a Strategy to Guide Its Future Disposal Efforts. GAO-11-814. Washington, D.C.: September 19, 2011.
Military Base Realignments and Closures: Review of the Iowa and Milan Army Ammunition Plants. GAO-11-488R. Washington, D.C.: April 1, 2011.
Defense Infrastructure: High-Level Federal Interagency Coordination Is Warranted to Address Transportation Needs beyond the Scope of the Defense Access Roads Program. GAO-11-165. Washington, D.C.: January 26, 2011.
Military Base Realignments and Closures: DOD Is Taking Steps to Mitigate Challenges but Is Not Fully Reporting Some Additional Costs. GAO-10-725R. Washington, D.C.: July 21, 2010.
Defense Infrastructure: Army Needs to Improve Its Facility Planning Systems to Better Support Installations Experiencing Significant Growth. GAO-10-602. Washington, D.C.: June 24, 2010.
Military Base Realignments and Closures: Estimated Costs Have Increased While Savings Estimates Have Decreased Since Fiscal Year 2009. GAO-10-98R. Washington, D.C.: November 13, 2009.
Military Base Realignments and Closures: Transportation Impact of Personnel Increases Will Be Significant, but Long-Term Costs Are Uncertain and Direct Federal Support Is Limited. GAO-09-750. Washington, D.C.: September 9, 2009.
Military Base Realignments and Closures: DOD Needs to Update Savings Estimates and Continue to Address Challenges in Consolidating Supply-Related Functions at Depot Maintenance Locations. GAO-09-703. Washington, D.C.: July 9, 2009.
Defense Infrastructure: DOD Needs to Periodically Review Support Standards and Costs at Joint Bases and Better Inform Congress of Facility Sustainment Funding Uses. GAO-09-336. Washington, D.C.: March 30, 2009.
Military Base Realignments and Closures: DOD Faces Challenges in Implementing Recommendations on Time and Is Not Consistently Updating Savings Estimates. GAO-09-217. Washington, D.C.: January 30, 2009.
Military Base Realignments and Closures: Army Is Developing Plans to Transfer Functions from Fort Monmouth, New Jersey, to Aberdeen Proving Ground, Maryland, but Challenges Remain. GAO-08-1010R. Washington, D.C.: August 13, 2008.
Defense Infrastructure: High-Level Leadership Needed to Help Communities Address Challenges Caused by DOD-Related Growth. GAO-08-665. Washington, D.C.: June 17, 2008.
Defense Infrastructure: DOD Funding for Infrastructure and Road Improvements Surrounding Growth Installations. GAO-08-602R. Washington, D.C.: April 1, 2008.
Military Base Realignments and Closures: Higher Costs and Lower Savings Projected for Implementing Two Key Supply-Related BRAC Recommendations. GAO-08-315. Washington, D.C.: March 5, 2008.
Defense Infrastructure: Realignment of Air Force Special Operations Command Units to Cannon Air Force Base, New Mexico. GAO-08-244R. Washington, D.C.: January 18, 2008.
Military Base Realignments and Closures: Estimated Costs Have Increased and Estimated Savings Have Decreased. GAO-08-341T. Washington, D.C.: December 12, 2007.
Military Base Realignments and Closures: Cost Estimates Have Increased and Are Likely to Continue to Evolve. GAO-08-159. Washington, D.C.: December 11, 2007.
Military Base Realignments and Closures: Impact of Terminating, Relocating, or Outsourcing the Services of the Armed Forces Institute of Pathology. GAO-08-20. Washington, D.C.: November 9, 2007.
Military Base Realignments and Closures: Transfer of Supply, Storage, and Distribution Functions from Military Services to Defense Logistics Agency. GAO-08-121R. Washington, D.C.: October 26, 2007.
Defense Infrastructure: Challenges Increase Risks for Providing Timely Infrastructure Support for Army Installations Expecting Substantial Personnel Growth. GAO-07-1007. Washington, D.C.: September 13, 2007.
Military Base Realignments and Closures: Plan Needed to Monitor Challenges for Completing More Than 100 Armed Forces Reserve Centers. GAO-07-1040. Washington, D.C.: September 13, 2007.
Military Base Realignments and Closures: Observations Related to the 2005 Round. GAO-07-1203R. Washington, D.C.: September 6, 2007.
Military Base Closures: Projected Savings from Fleet Readiness Centers Likely Overstated and Actions Needed to Track Actual Savings and Overcome Certain Challenges. GAO-07-304. Washington, D.C.: June 29, 2007.
Military Base Closures: Management Strategy Needed to Mitigate Challenges and Improve Communication to Help Ensure Timely Implementation of Air National Guard Recommendations. GAO-07-641. Washington, D.C.: May 16, 2007.
Military Base Closures: Opportunities Exist to Improve Environmental Cleanup Cost Reporting and to Expedite Transfer of Unneeded Property. GAO-07-166. Washington, D.C.: January 30, 2007.
Military Bases: Observations on DOD's 2005 Base Realignment and Closure Selection Process and Recommendations. GAO-05-905. Washington, D.C.: July 18, 2005.
Military Bases: Analysis of DOD's 2005 Selection Process and Recommendations for Base Closures and Realignments. GAO-05-785. Washington, D.C.: July 1, 2005.
Military Base Closures: Observations on Prior and Current BRAC Rounds. GAO-05-614. Washington, D.C.: May 3, 2005.
Military Base Closures: Assessment of DOD's 2004 Report on the Need for a Base Realignment and Closure Round. GAO-04-760. Washington, D.C.: May 17, 2004.
The 2005 BRAC round was the costliest and most complex BRAC round ever. In contrast to prior rounds, which focused on the goal of reducing excess infrastructure, DOD's goals for BRAC 2005 also included transforming the military and fostering joint activities. GAO was asked to review DOD's performance outcomes from BRAC 2005. This report examines the extent to which DOD has (1) measured the achievement of its goals for BRAC 2005 and (2) implemented prior GAO recommendations on BRAC 2005 and addressed any additional challenges to improve performance for any future BRAC round. GAO reviewed relevant documents and guidance; met with a nongeneralizable selection of 26 military organizations and 12 communities involved with BRAC 2005; and interviewed DOD officials.

The Department of Defense (DOD) components generally did not measure the achievement of goals—reducing excess infrastructure, transforming the military, and promoting joint activities among the military departments—for the 2005 Base Realignment and Closure (BRAC) round. In March 2013, GAO recommended that, for any future BRAC round, DOD identify measures of effectiveness and develop a plan to demonstrate achieved results. DOD did not concur and stated that no action is expected. Without a requirement for DOD to identify measures of effectiveness and track achievement of its goals, Congress will not have full visibility over the expected outcomes or achievements of any future BRAC rounds.

Of the 65 recommendations GAO has made to help DOD address challenges it faced in BRAC 2005, as of October 2017 DOD had implemented 33 of them (with 18 pending DOD action). DOD has not addressed challenges associated with communication and monitoring mission-related changes. Specifically: Some military organizations stated that they could not communicate to BRAC decision makers information outside of the data-collection process because DOD did not establish clear and consistent communications.
For example, Army officials at Fort Knox, Kentucky, stated that there was no way to communicate that excess facilities were ill-suited for relocating the Human Resources Command, and DOD moved forward without full consideration of alternatives for using better-suited excess space at other locations. As a result, DOD spent about $55 million more than estimated to construct a new building at Fort Knox. DOD implemented BRAC recommendations that affected units' ability to carry out their missions because DOD lacked specific guidance to monitor and report on mission-related changes. For example, DOD spent about $27.7 million on a landing field for a Marine Corps F-35 training squadron at Eglin Air Force Base, Florida, even though it had been previously decided to station the F-35 aircraft and personnel at another base. By addressing its communication and monitoring challenges, DOD could better inform decision making, better ensure that its infrastructure meets the needs of its force structure, and better position itself to achieve its goals in any future BRAC round.

Congress should consider requiring DOD to identify and track appropriate measures of effectiveness in any future BRAC round. Also, GAO recommends that in any future BRAC round DOD (1) take steps to establish clear and consistent communications while collecting data and (2) provide specific guidance to the military departments to monitor and report on mission-related changes during implementation. GAO also continues to believe that DOD should fully implement GAO's prior recommendations on BRAC 2005. DOD objected to Congress requiring DOD to identify and track performance measures, but GAO continues to believe this to be an appropriate action for the reasons discussed in the report. Lastly, DOD concurred with the two recommendations.
In November 2002, Congress passed and the President signed the Improper Payments Information Act of 2002 (IPIA), which was later amended by IPERA and the Improper Payments Elimination and Recovery Improvement Act of 2012 (IPERIA). The amended legislation requires executive branch agencies to (1) review all programs and activities and identify those that may be susceptible to significant improper payments (commonly referred to as a risk assessment), (2) publish improper payment estimates for those programs and activities that the agency identified as being susceptible to significant improper payments, (3) implement corrective actions to reduce improper payments and set reduction targets, and (4) report on the results of addressing the foregoing requirements.

In addition to the agencies' identifying programs and activities that are susceptible to significant improper payments, OMB designates as high priority the programs with the most egregious cases of improper payments. Specifically, under a provision added to IPIA by IPERIA, OMB is required to annually identify a list of high-priority federal programs in need of greater oversight and review. In general, for fiscal years 2014 through 2017, OMB implemented this requirement by designating high-priority programs based on a threshold of $750 million in estimated improper payments for a given fiscal year. OMB also plays a key role in implementing laws related to improper payment reporting. Specifically, OMB is directed by statute to provide guidance to federal agencies on estimating, reporting, reducing, and recovering improper payments.

IPERA also requires executive agencies' IGs to annually determine and report on whether their respective agencies complied with certain IPERA-related criteria. If an agency does not meet one or more of the six IPERA criteria for any of its programs or activities, the agency is considered noncompliant overall. The six criteria are as follows:

1. publish a report in the form and content required by OMB—typically an agency financial report (AFR) or a performance and accountability report (PAR)—for the most recent fiscal year, and post that report on the agency website;
2. conduct a program-specific risk assessment, if required, for each program or activity that conforms with IPIA as amended;
3. publish improper payment estimates for all programs and activities deemed susceptible to significant improper payments under the agency's risk assessments;
4. publish corrective action plans for those programs and activities assessed to be at risk for significant improper payments;
5. publish and meet annual reduction targets for all programs and activities assessed to be at risk for significant improper payments; and
6. report a gross improper payment rate of less than 10 percent for each program and activity for which an improper payment estimate was published.

Under IPERA, agencies reported by their IG as not in compliance with any of these criteria in a fiscal year are required to submit a plan to Congress describing the actions they will take to come into compliance, and such plans shall include measurable milestones, the designation of senior accountable officials, and the establishment of accountability mechanisms to achieve compliance. OMB guidance states that agencies are required to submit these plans to Congress and OMB in the first year of reported noncompliance. When agency programs are reported as noncompliant for consecutive years, IPERA and OMB guidance require agencies and OMB to take additional actions. Specifically, an agency with a program reported as noncompliant for 3 or more consecutive years is required to submit to Congress within 30 days of the IG's report either (1) a reauthorization proposal for the program or (2) the proposed statutory changes necessary to bring the program or activity into compliance.
We previously recommended that when agencies determine that reauthorization or statutory changes are not necessary to bring the programs into compliance, the agencies should state so in their notifications to Congress. Effective starting with fiscal year 2018 reporting, OMB updated its guidance to instruct agencies with programs reported as noncompliant for 3 consecutive years to explain what the agency is doing to achieve compliance if a reauthorization proposal or proposed statutory change will not bring a program into compliance with IPERA. The updated guidance also instructs agencies with programs reported as noncompliant for 4 or more consecutive years to submit a report to Congress and OMB (within 30 days of the IG’s determination of noncompliance) detailing the activities taken and still being pursued to prevent and reduce improper payments. If agency programs are reported as noncompliant under IPERA for 2 consecutive years, and the Director of OMB determines that additional funding would help the agency come into compliance, the head of the agency must obligate additional funding in the amount determined by the Director to intensify compliance efforts. IPERA directs the agency to exercise any reprogramming or transfer authority that the agency may have to provide additional funding to meet the level determined by OMB and, if necessary, submit a request to Congress for additional reprogramming or transfer authority to meet the full level of funding determined by OMB. Table 1 summarizes agency and OMB requirements related to agency programs that are noncompliant under IPERA, as reported by their IGs. Seven years after the initial implementation of IPERA, over half of the 24 CFO Act agencies were reported as noncompliant by their IGs for fiscal years 2016 and 2017. Specifically, 13 agencies were reported as noncompliant with one or more IPERA criteria for fiscal year 2016, and 14 agencies were reported as noncompliant for fiscal year 2017 (see fig. 1). 
Nine of these agencies have been reported as noncompliant in one or more programs every year since IPERA was implemented in 2011 (see app. II for additional details on CFO Act agencies' compliance under IPERA for fiscal years 2011 through 2017, as reported by their IGs).

Although the number of agencies reported as noncompliant under IPERA has varied slightly since fiscal year 2011, the total instances of noncompliance for all six criteria substantially improved after fiscal year 2011, when IPERA was first implemented. As shown in figure 2, the total instances decreased from 38 instances (for 14 noncompliant agencies) for fiscal year 2011 to 26 instances (for 14 noncompliant agencies) for fiscal year 2017. Also, for fiscal year 2017, 7 of 14 agencies were reported as noncompliant for only one criterion per noncompliant program. Of these, 6 agencies—the Departments of Homeland Security (DHS), Education (Education), Commerce, and Transportation; the General Services Administration; and the Social Security Administration (SSA)—were only reported as noncompliant with the IPERA criterion that requires agencies to publish and meet reduction targets. In addition, the Department of the Treasury (Treasury) was only reported as noncompliant with the IPERA criterion that requires agencies to report improper payment rates below 10 percent.

Furthermore, the programs reported as noncompliant for fiscal year 2017 accounted for a significantly smaller portion of the total reported estimated improper payments as compared to the noncompliant programs for fiscal year 2015. Specifically, we previously reported that 52 noncompliant programs accounted for $132 billion (or about 96 percent) of the $137 billion total reported estimated improper payments for fiscal year 2015, whereas 58 noncompliant programs accounted for $80 billion (or about 57 percent) of the $141 billion total reported estimated improper payments for fiscal year 2017.
Although improper payment estimates associated with noncompliant programs vary from year to year, this decrease (approximately $52 billion) was primarily due to two programs. Specifically, the Department of Health and Human Services' (HHS) Medicare Fee-for-Service (Parts A and B) and Medicare Part C programs were reported as noncompliant and accounted for approximately $43 billion and $14 billion, respectively, of estimated improper payments for fiscal year 2015. These programs were reported as compliant for fiscal year 2017 and accounted for approximately $36 billion and $14 billion, respectively, or about 36 percent of the $141 billion total reported improper payments for fiscal year 2017.

Almost a third (18 programs) of the 58 programs that contributed to 14 CFO Act agencies' noncompliance under IPERA, as of fiscal year 2017, were reported as noncompliant for 3 or more consecutive years. The number of programs noncompliant for 3 or more consecutive years has continually increased since fiscal year 2015, as shown in figure 3. Specifically, 12 programs (associated with 7 agencies) were reported as noncompliant for 3 or more consecutive years, as of fiscal year 2015, and the number increased to 14 programs (associated with 8 agencies) and 18 programs (associated with 9 agencies), as of fiscal years 2016 and 2017, respectively.

These programs accounted for a substantial portion of the $141 billion total estimated improper payments for fiscal year 2017. As shown in table 2, 14 of the 18 programs that were reported as noncompliant for 3 or more consecutive years reported improper payment estimates that accounted for an estimated $74.4 billion (about 53 percent) of the $141 billion, while the other 4 programs did not report improper payment estimates for fiscal year 2017 and were reported by their respective IGs as noncompliant with the IPERA criterion to publish improper payment estimates.
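The year-over-year figures above can be cross-checked with a short script; this is a sketch, not part of the GAO report, and the dollar amounts (in billions) are taken directly from the text as rounded figures.

```python
# Cross-check of the reported shares of improper payment estimates tied to
# noncompliant programs (dollar amounts in billions, from the report text).
fy2015_noncompliant = 132  # estimate for 52 noncompliant programs, FY2015
fy2015_total = 137         # total reported estimated improper payments, FY2015
fy2017_noncompliant = 80   # estimate for 58 noncompliant programs, FY2017
fy2017_total = 141         # total reported estimated improper payments, FY2017

share_2015 = fy2015_noncompliant / fy2015_total  # "about 96 percent"
share_2017 = fy2017_noncompliant / fy2017_total  # "about 57 percent"
decrease = fy2015_noncompliant - fy2017_noncompliant  # "approximately $52 billion"

print(f"{share_2015:.0%}, {share_2017:.0%}, ${decrease} billion")
# → 96%, 57%, $52 billion
```

Because the report's inputs are themselves rounded, small discrepancies from the exact percentages would be expected.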
The $74.4 billion is primarily composed of estimates reported for 2 noncompliant programs—HHS's Medicaid program ($36.7 billion) and Treasury's Earned Income Tax Credit program ($16.2 billion)—totaling $52.9 billion (or approximately 71 percent of the $74.4 billion). Improper payments associated with these two noncompliant programs are also a central part of two areas included in our 2017 High-Risk List, which includes federal programs and operations that are especially vulnerable to waste, fraud, abuse, and mismanagement, or that need transformative change.

Eight of the 18 noncompliant programs have been reported as noncompliant since the implementation of IPERA in fiscal year 2011, for a total of 7 consecutive years, as shown in table 2. Reported compliance for Treasury's Earned Income Tax Credit improved from being reported as noncompliant with multiple IPERA criteria in fiscal year 2013 to noncompliance with only one criterion for the last 4 years (fiscal years 2014 through 2017).

Eight CFO Act agencies' programs were reported as noncompliant under IPERA for 3 or more consecutive years, as of fiscal year 2016. Three of these agencies did not notify Congress of their programs' continued noncompliance as required. In addition to submitting the required notifications for their noncompliant programs, the other five agencies also included additional information in their notifications—such as measurable milestones, designation of senior officials, and accountability mechanisms—useful for assessing their efforts to achieve compliance. In June 2018, OMB updated its guidance to clarify agency reporting requirements for each consecutive year a program is reported as noncompliant.
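As a similar sanity check (again a sketch, not from the report), the stated composition of the $74.4 billion can be recomputed from the two program estimates:

```python
# Cross-check of the composition of the $74.4 billion in FY2017 improper
# payment estimates for programs noncompliant 3 or more consecutive years
# (dollar amounts in billions, from the report text).
medicaid = 36.7          # HHS Medicaid program
eitc = 16.2              # Treasury Earned Income Tax Credit program
long_term_total = 74.4   # 14 programs noncompliant 3+ consecutive years

combined = medicaid + eitc            # "$52.9 billion"
share = combined / long_term_total    # "approximately 71 percent"

print(f"${combined:.1f} billion, {share:.0%}")
# → $52.9 billion, 71%
```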
However, OMB’s updated guidance did not direct agencies to include other types of quality information in their notifications for programs reported as noncompliant for 3 or more consecutive years that could help Congress to more effectively assess their efforts to address long-standing challenges and other issues affecting these programs and to achieve compliance. Of the eight agencies with programs reported as noncompliant under IPERA for 3 or more consecutive years as of fiscal year 2016, we found that five agencies notified Congress of their noncompliance as required. Specifically, the Department of Defense (DOD), Education, HHS, DHS, and SSA notified Congress of their programs’ reported noncompliance for 3 or more consecutive years as of fiscal year 2016 as required by IPERA and OMB guidance. The remaining three agencies—the U.S. Department of Agriculture (USDA), the Department of Labor (DOL), and Treasury— did not notify Congress as required. Additional information regarding the three agencies that did not submit their required notifications to Congress is summarized below: USDA: In May 2017, the USDA IG reported that four USDA Food and Nutrition Service programs—Child and Adult Care Food Program; National School Lunch Program; School Breakfast Program; and Special Supplemental Nutrition Program for Women, Infants, and Children—had been noncompliant for 6 consecutive years, as of fiscal year 2016. However, USDA has not notified Congress of these programs’ continued noncompliance with IPERA as of fiscal year 2016, despite prior recommendations that we, and the USDA IG, made to USDA to do so. USDA staff stated in May 2018 that USDA drafted, but had not submitted, a letter to Congress regarding these programs’ noncompliance. DOL: In June 2017, the DOL IG reported that the Unemployment Insurance Benefit program had been noncompliant for 6 consecutive years, as of fiscal year 2016. 
In October 2016, DOL included proposed legislation in its last notification to Congress regarding this program, approximately 8 months prior to the DOL IG's IPERA compliance report. However, because the requirement for agencies to notify Congress is triggered by IG reporting of programs that are noncompliant for 3 or more consecutive years, DOL should have also notified Congress regarding the program's continued noncompliance in fiscal year 2016 after the IG's report was issued in June 2017. DOL staff stated in August 2018 that the proposed legislation included in its October 2016 notification had not been enacted and that DOL is currently working to develop a new report to Congress and OMB detailing corrective actions taken to bring the program into compliance.

Treasury: In May 2017, the Treasury IG reported that the Earned Income Tax Credit (EITC) program had been noncompliant for 6 consecutive years, as of fiscal year 2016. We previously reported that Treasury submitted proposed statutory changes to Congress for this program in August 2014 and in June 2015. As stated in the Treasury IG's fiscal year 2016 IPERA compliance report, the proposed statutory changes would help prevent the improper issuance of billions of dollars in refunds, as they would provide the Internal Revenue Service (IRS) with expanded authority to systematically correct erroneous claims that are identified when tax returns are processed and allow IRS to deny erroneous EITC refund claims before they are paid. Further, Treasury stated that IRS has repeatedly requested authority to correct such errors in subsequent fiscal year budgets, including its fiscal year 2019 budget submission. In June 2018, Treasury staff stated that the Consolidated Appropriations Act, 2016 provided IRS with additional tools for reducing EITC improper payments; however, the act did not expand IRS's authority to systematically correct the erroneous claims that are identified when tax returns are processed.
Treasury staff also stated that the department has continued to coordinate with OMB on required reporting for the EITC program because of the program's complexity, and that OMB has not requested additional actions or documentation regarding the program's noncompliance. Although continued coordination with OMB is important, Treasury did not notify Congress regarding the EITC program's continued noncompliance as required.

In summary, despite reporting requirements in IPERA and OMB guidance, one agency (USDA) has not notified Congress about four programs being reported as noncompliant for 6 consecutive years, as of fiscal year 2016. The remaining two agencies (DOL and Treasury) that did not notify Congress of their programs' consecutive noncompliance, as of fiscal year 2016, submitted notifications to Congress prior to their respective IGs' fiscal year 2016 compliance results. However, IPERA requires agencies to notify Congress when programs are reported as noncompliant for more than 3 consecutive years, and thus DOL and Treasury should have also notified Congress about their programs' being reported as noncompliant for 6 consecutive years, as of fiscal year 2016.

It is important that agencies continue to notify Congress of their programs' consecutive noncompliance each year after the third consecutive year, as the information related to their proposals or regarding their IPERA compliance efforts included in prior years' notifications to Congress may significantly change over time. Unless agencies continue to notify Congress in subsequent years, Congress may lack the current and relevant information needed to effectively assess agencies' proposals or monitor their efforts to address problematic programs in a timely manner. OMB updated its guidance in June 2018 to provide more clarity regarding the notification requirements for each consecutive year a program is reported as noncompliant.
Effective implementation of this guidance may help ensure that agencies consistently provide required information to Congress on these programs in future years.

We found that the five agencies—DOD, DHS, Education, HHS, and SSA—that notified Congress regarding their programs' reported noncompliance for 3 or more consecutive years, as of fiscal year 2016, also included additional information about their efforts to achieve IPERA compliance. Although IPERA does not specifically require that agency proposals for reauthorization or other statutory change provide such information, including it could help Congress to better assess the agencies' proposals included in these notifications and to oversee agency efforts to address long-standing challenges and compliance issues associated with these programs.

In many instances, the types of additional information provided by these agencies are similar to information that agencies are required to provide to Congress or OMB in other required notifications or other reports, such as annual AFRs or PARs. For example, all improper payment estimates reported under IPIA, as amended, must be accompanied by information on what the agency is doing to reduce improper payments, including a description of root causes and the steps the agency has taken to ensure accountability. Further, IPERA and OMB guidance require agencies to provide corrective action plans to Congress for programs reported as noncompliant for 1 year. Such plans should include actions planned or taken to address the program's noncompliance, measurable milestones, a senior official designated to oversee progress, and the accountability mechanisms in place to hold the senior official accountable.
In addition, GAO’s Standards for Internal Control in the Federal Government emphasizes the importance of communicating quality information, such as significant matters related to risks, changes, or issues affecting agencies’ efforts to achieve compliance objectives, to external parties—such as legislators, oversight bodies, and the general public. Furthermore, in our fiscal year 2017 High-Risk Update, we also highlight the importance of these types of information when assessing agency efforts to address issues associated with programs included on our High-Risk List. Examples of such information include (1) action plans that are accessible and transparent with clear milestones and metrics, including established goals and performance measures to address identified root causes; (2) leadership commitment of top (or senior) officials to establish long-term priorities and goals and continued oversight and accountability; (3) monitoring progress against goals, assessing program performance, or reporting potential risks; and (4) demonstrated progress, through recommendations implemented, actions taken for improvement, and effectively addressing identified root causes and managing high-risk issues. Table 3 summarizes the types of additional information described above that the five agencies provided in their fiscal year 2016 notifications to Congress to address programs with 3 or more consecutive years of noncompliance. All five agencies informed Congress of (1) root causes that directly lead to improper payments or hindered the program’s ability to achieve compliance; (2) certain risks, significant changes, or issues affecting their efforts; and (3) their corrective actions or strategies to achieve compliance. 
Three of the five agencies—DOD, Education, and DHS—also included the other types of additional information described above in their notifications, including measurable milestones, designated senior officials to oversee progress, and accountability mechanisms established to help achieve compliance. For example, all three agencies designated their chief financial officers (CFO) to oversee progress toward achieving measurable milestones and expanded their official roles and responsibilities to hold them accountable. Education and DHS stated that these responsibilities were added to their respective CFOs' individual performance plans.

Although OMB updated its guidance in June 2018 to clarify agency reporting requirements related to programs reported as noncompliant for 3 or more consecutive years, the updated guidance did not direct agencies to include other types of quality information in their notifications, such as those described above. In addition, information related to measurable milestones, corrective actions, risks, issues, or other items affecting agencies' efforts may change significantly over time. With this additional information, Congress could have more complete information to effectively oversee agency efforts to address long-standing challenges and other issues that have contributed to programs being reported as noncompliant for 3 or more consecutive years.

Fifteen programs in seven agencies and 12 programs in six agencies were reported as noncompliant for 2 consecutive years as of fiscal years 2016 and 2017, respectively. For agencies reported as noncompliant under IPERA for 2 consecutive years for the same program, IPERA gives the Director of OMB the authority to determine whether additional funding would help the agencies come into compliance.
If the OMB Director determines that such funding would help, the agency is required to use any available reprogramming or transfer authority to meet the funding level that the OMB Director specified and, if such authorities are not sufficient, submit a request to Congress for additional reprogramming or transfer authority. According to OMB staff, OMB determined that no additional funding was needed for programs reported as noncompliant for 2 consecutive years as of fiscal year 2016. As of September 2018, OMB was in the process of making funding determinations for 12 programs that were reported as noncompliant as of fiscal year 2017 and stated that any determinations made would be developed in the President's Budget for fiscal year 2020.

The 12 programs reported as noncompliant for 2 consecutive years, as of fiscal year 2017, accounted for approximately $3 billion (2 percent) of the $141 billion total improper payment estimate for that year. Of these 12 programs, more than half (7 of the 12) were attributable to DOD; however, Education's Pell Grant program accounted for $2.2 billion (or 74 percent) of the $3 billion in improper payment estimates for programs reported as noncompliant for 2 consecutive years, for fiscal year 2017. In addition, as shown in table 4, the 12 programs reported as noncompliant for 2 consecutive years, as of fiscal year 2017, were primarily noncompliant with the IPERA criteria that require agencies to publish information in their PAR or AFR or to publish and meet reduction targets.

As noted previously, IPERA gives OMB authority to determine whether additional funding for intensified compliance efforts would help the agency come into compliance under IPERA. Therefore, an established process for making timely, well-informed funding determinations is an essential part of ensuring that agencies have sufficient resources and take steps to intensify their compliance efforts in a timely manner.
In April 2018, OMB staff stated that when making funding determinations, they primarily rely on the IGs' recommendations in their annual IPERA compliance reports. OMB staff also stated that for its fiscal year 2016 determinations, OMB determined that additional funding was not needed because the IGs' recommendations did not specify that additional funding was needed to help resolve the programs' noncompliance.

The IGs' annual reports provide information on agencies' IPERA compliance and may be useful to OMB as a tool to help it make determinations for additional funding. However, IPERA does not require IGs to address funding levels in their annual compliance reports, and OMB's guidance does not inform the IGs that their work might be relied upon in this manner. We reviewed the IGs' fiscal years 2016 and 2017 IPERA compliance reports for the agencies with programs reported as noncompliant for 2 consecutive years and found that the IGs did not make any recommendations regarding additional funding needed to bring these programs into compliance.

In addition, as specifically stated by the IGs for Education and USDA in their IPERA reports, OMB has the statutory responsibility to make these funding determinations. The Education IG's fiscal year 2017 IPERA compliance report stated, "If OMB recommends that the Department needs additional funding or should take any other actions to become compliant with IPERA, we recommend that the Department implement OMB's recommendations." Also, the USDA IG's fiscal year 2016 IPERA compliance report stated, "For agencies that are not compliant for 2 consecutive years for the same program, the Director of OMB will determine if additional funding would help these programs come into compliance." As a result, OMB's reliance on IG recommendations as the source of information to support additional funding determinations may not provide sufficient information to effectively assess agencies' funding needs to address noncompliance.
OMB staff subsequently stated that they no longer need to conduct a detailed review of the IGs' IPERA compliance reports to identify recommendations related to additional funding needs. Instead, OMB Memorandum M-18-20, issued in June 2018, updated OMB Circular No. A-123, Appendix C, and clarified that the funding determination process will unfold as part of the annual development of the President's Budget, as described in OMB Circular No. A-11. This updated guidance also directs agencies to submit proposals to OMB regarding additional funding needs that may help them address IPERA noncompliance.

To illustrate, under this new guidance, the IGs' fiscal year 2018 IPERA compliance reports will be due in May 2019, and any funding needs to address noncompliance would be incorporated in the next annual budget preparation process, the results of which are due to be submitted to Congress in February 2020 for the President's Budget for fiscal year 2021. Once OMB's determinations have been made and communicated to agencies, agencies would respond by performing the required reprogramming and making transfers under existing authority, where available. Any requests for additional transfer authority may be incorporated into subsequent appropriations legislation.

Estimated improper payments reported government-wide total almost $1.4 trillion from fiscal year 2003 through fiscal year 2017. The number of programs reported as noncompliant under IPERA for 3 or more consecutive years has continued to increase, from 12 programs (associated with 7 agencies) to 18 programs (associated with 9 agencies) as of fiscal years 2015 and 2017, respectively.
Including additional useful, up-to-date information—such as measurable milestones, risks, or other issues affecting agency efforts to achieve compliance—in notifications to Congress, which are required when programs are reported as noncompliant for 3 or more consecutive years, could help Congress better assess agency efforts to address long-standing challenges and other issues associated with these programs. Although certain agencies included certain types of additional information in their notifications as of fiscal year 2016, OMB guidance does not require agencies to include such information in their notifications. As a result, Congress may lack sufficient information to effectively oversee agency efforts and take prompt action to help address long-standing challenges or other issues associated with these programs.

The Director of OMB should take steps to update OMB guidance to specify other types of quality information that agencies with programs noncompliant for 3 or more consecutive years should include in their notifications to Congress, such as significant matters related to risks, issues, root causes, measurable milestones, designated senior officials, accountability mechanisms, and corrective actions or strategies planned or taken by agencies to achieve compliance. (Recommendation 1)

We provided a draft of this report to OMB and requested comments, and OMB said that it had no comments. We also provided a draft of this report to the 24 CFO Act agencies and their IGs and requested comments. We received letters from the DHS Office of Inspector General (OIG), SSA, and the United States Agency for International Development. These letters are reproduced in appendixes V through VII. We also received technical comments from DOL, the Department of Veterans Affairs, the General Services Administration, HHS, the Department of Housing and Urban Development, and the Treasury OIG, which we incorporated in the report as appropriate.
The remaining agencies and OIGs either did not provide comments or notified us via email that they had no comments. In its comments, SSA stated that it provided information to Congress on measurable milestones, designated senior officials, and accountability mechanisms in its AFR. In the report, we acknowledge that these types of additional information are similar to information that agencies are required to provide to Congress or OMB in other reports, such as annual AFRs. However, our analysis was based on SSA's fiscal year 2016 notifications to Congress for programs reported as noncompliant under IPERA, in which this specific information was not reported. As such, we continue to believe that OMB should take steps to update OMB guidance to help ensure that agencies report such significant information and include it in their notifications to Congress.

We are sending copies of this report to the appropriate congressional committees, the Director of the Office of Management and Budget, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-2623 or davisbh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII.

Our objectives were to determine the following:

1. The extent to which the 24 agencies listed in the Chief Financial Officers Act of 1990, as amended (CFO Act), complied with the six criteria listed in the Improper Payments Elimination and Recovery Act of 2010 (IPERA), for fiscal years 2016 and 2017, and the trends evident since 2011, as reported by their inspectors general (IG).
2.
The extent to which CFO Act agencies addressed requirements for programs and activities reported as noncompliant with IPERA criteria for 3 or more consecutive years, as of fiscal year 2016, and communicated their strategies to Congress for reducing improper payments and achieving compliance.
3. The extent to which the Office of Management and Budget (OMB) made determinations regarding whether additional funding would help CFO Act programs and activities reported as noncompliant with IPERA criteria for 2 consecutive years, as of fiscal years 2016 and 2017, come into compliance.

Although the responsibility for complying with provisions of improper payment-related statutes rests with the head of each executive agency, we focused on the 24 agencies listed in the CFO Act because estimates of their improper payments represent over 99 percent of the total reported estimated improper payments for fiscal years 2016 and 2017. Our work did not include validating or retesting the data or methodologies that the IGs used to determine and report compliance. We corroborated all of our findings with OMB and all 24 CFO Act agencies and IGs.

To address our first objective, we identified the requirements that agencies must meet by reviewing the Improper Payments Information Act of 2002 (IPIA), IPERA, and OMB guidance. We reviewed the CFO Act agency IGs' IPERA compliance reports for fiscal years 2016 and 2017, which were the most current reports available at the time of our review. We summarized the overall agency and program-specific compliance determinations with the six IPERA criteria, as reported by the IGs. For fiscal years 2011 through 2015, we relied on and reviewed prior year supporting documentation and analyses of CFO Act agencies' IPERA compliance, as reported in our prior reports, in order to identify compliance trends since 2011, as reported by the IGs.
Based on these reports, we summarized the programs and the number of consecutive years that they were reported as noncompliant. For each IG report that did not specifically state that the agency had programs noncompliant for consecutive years, we compared the list of programs reported as noncompliant for fiscal years 2016 and 2017 to the list of programs reported as noncompliant for fiscal years 2014 and 2015 in our prior reports. Lastly, we corroborated our findings with OMB and all 24 CFO Act agencies and IGs.

To address our second objective, we determined if the agencies responsible for programs and activities reported as noncompliant for 3 or more consecutive years as of fiscal year 2016 had submitted the required proposals (reauthorizations or statutory changes) to Congress by requesting and reviewing documentation of the required submissions and relevant notifications to Congress obtained from each applicable agency. Further, we reviewed the content of each agency notification to evaluate agencies' efforts to communicate quality information to Congress concerning their strategies for achieving compliance, consistent with Standards for Internal Control in the Federal Government. Principle 15 of these standards emphasizes the need for an entity's management to communicate necessary quality information, such as significant matters related to risks, changes, or issues affecting agencies' efforts to achieve compliance objectives, to external parties—such as legislators, oversight bodies, and the general public.

To identify other types of information useful for this purpose, we reviewed IPIA, as amended; IPERA; and OMB guidance for information agencies are required to provide to Congress or OMB in other notifications and reports, such as their corrective action plans or strategies, measurable milestones, designated senior officials, and accountability mechanisms for achieving compliance.
We also reviewed information used to assess agency efforts to address issues associated with programs on our High-Risk List. To determine the extent to which agencies' notifications to Congress included these additional types of useful information for their applicable program(s), we used a data collection instrument to document our determinations regarding the additional types of quality information included in each notification. In addition, two GAO analysts independently reviewed each agency's notification and documented their determinations regarding the types of information included in the notifications. Differences between the analysts' determinations were identified and resolved to ensure that the types of additional information were consistently identified and categorized. We did not evaluate the sufficiency and completeness of the agency-provided information. Lastly, we corroborated our findings with the respective agencies and IGs.

To address our third objective, we identified provisions in IPIA, IPERA, and OMB guidance that are applicable to OMB for programs reported as noncompliant for 2 consecutive years. To determine if OMB made additional funding determinations for agency programs and activities reported as noncompliant for 2 consecutive years as of fiscal years 2016 and 2017, we requested relevant information and communications from OMB and the applicable agencies and IGs. We also interviewed key OMB staff on their process for determining additional funding needs for noncompliant programs and activities as of fiscal years 2016 and 2017 and related results. In addition, we reviewed the applicable fiscal years 2016 and 2017 CFO Act agency IG IPERA compliance reports, which OMB staff stated they relied on for determining whether noncompliant programs and activities required additional funding.
We also asked the agencies whether they coordinated with OMB regarding their need for additional funding for programs and activities reported as noncompliant for 2 consecutive years as of fiscal years 2016 and 2017. Lastly, we corroborated our findings with OMB and the respective agencies and IGs.

We conducted this performance audit from November 2017 to December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Figure 4 details the 24 Chief Financial Officers Act of 1990 (CFO Act) agencies' overall compliance under the Improper Payments Elimination and Recovery Act of 2010 (IPERA), as reported by their inspectors general, for fiscal years 2011 through 2017. We previously reported on CFO Act agencies' overall reported compliance for fiscal years 2011 through 2015.

Tables 5 and 6 detail the Chief Financial Officers Act of 1990 (CFO Act) agencies and programs reported by their inspectors general as noncompliant with the six criteria specified by the Improper Payments Elimination and Recovery Act of 2010 (IPERA), for fiscal years 2016 and 2017. We previously reported on CFO Act agencies' reported compliance with the six IPERA criteria for fiscal year 2015.

Table 7 details the Chief Financial Officers Act of 1990 (CFO Act) agencies and programs reported by their inspectors general as noncompliant under the Improper Payments Elimination and Recovery Act of 2010 (IPERA) for 2 or more consecutive years, as of fiscal years 2016 and 2017. We previously reported on CFO Act agencies' reported compliance for fiscal year 2015.
In addition to the contact named above, Michelle Philpott (Assistant Director), Matthew Valenta (Assistant Director), Vivian Ly (Auditor in Charge), Juvy Chaney, John Craig, Caitlin Cusati, Francine DelVecchio, Patrick Frey, Maria Hasan, Maxine Hattery, Jason Kelly, Jim Kernen, Jason Kirwan, Sharon Kittrell, Lisa Motley, Heena Patel, Anne Rhodes-Kline, and Kailey Schoenholtz made key contributions to this report.
Government-wide estimated improper payments totaled almost $1.4 trillion from fiscal year 2003 through fiscal year 2017. IPERA requires IGs to annually assess and report on whether executive branch agencies complied with six criteria: (1) publish an agency financial report or performance and accountability report, (2) conduct program-specific improper payment risk assessments, (3) publish improper payment estimates, (4) publish corrective action plans, (5) publish and meet annual improper payment reduction targets, and (6) report a gross improper payment rate of less than 10 percent. This report examines the extent to which (1) CFO Act agencies complied with IPERA criteria for fiscal years 2016 and 2017, and the trends evident since 2011, as reported by their IGs; (2) CFO Act agencies addressed requirements for programs reported as noncompliant with IPERA criteria for 3 or more consecutive years, as of fiscal year 2016, and communicated their strategies to Congress for reducing improper payments and achieving compliance; and (3) OMB made determinations regarding whether additional funding would help CFO Act agency programs reported as noncompliant with IPERA criteria for 2 consecutive years, as of fiscal years 2016 and 2017, come into compliance. GAO analyzed the IGs' fiscal years 2016 and 2017 IPERA compliance reports; reviewed prior GAO reports on agencies' IPERA compliance; reviewed agency information submitted to Congress; made inquiries to OMB, applicable agencies, and IGs; and assessed such information based on relevant IPERA provisions and OMB and other guidance. Over half of the 24 Chief Financial Officers Act of 1990 (CFO Act) agencies were reported by their inspectors general (IG) as noncompliant with one or more criteria under the Improper Payments Elimination and Recovery Act of 2010 (IPERA) for fiscal years 2016 and 2017. 
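The compliance determination described above amounts to an all-of-six check: a program is noncompliant if it fails any one criterion. The following is a minimal illustrative sketch of that logic; the `Program` structure and field names are our own assumptions, not drawn from OMB or IG tooling.

```python
# Illustrative sketch of the six IPERA compliance criteria as a checklist.
# The Program structure and field names are hypothetical, for illustration only.
from dataclasses import dataclass


@dataclass
class Program:
    published_afr_or_par: bool        # (1) published agency financial report or PAR
    conducted_risk_assessments: bool  # (2) conducted program-specific risk assessments
    published_estimates: bool         # (3) published improper payment estimates
    published_corrective_plans: bool  # (4) published corrective action plans
    met_reduction_targets: bool       # (5) published and met annual reduction targets
    gross_rate: float                 # (6) gross improper payment rate (must be < 10%)


def failed_criteria(p: Program) -> list:
    """Return the numbers of the IPERA criteria the program failed."""
    checks = [
        p.published_afr_or_par,
        p.conducted_risk_assessments,
        p.published_estimates,
        p.published_corrective_plans,
        p.met_reduction_targets,
        p.gross_rate < 0.10,
    ]
    return [i + 1 for i, ok in enumerate(checks) if not ok]


def is_compliant(p: Program) -> bool:
    # A program is compliant only if it satisfies all six criteria.
    return not failed_criteria(p)
```

Under this reading, a program that missed its reduction target and reported a 12 percent gross rate would fail criteria 5 and 6 and be reported as noncompliant.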
Nine CFO Act agencies have been reported as noncompliant in one or more programs every year since the implementation of IPERA in fiscal year 2011, totaling 7 consecutive years of noncompliance. The IGs of the 14 noncompliant agencies reported that a total of 58 programs were responsible for the identified instances of noncompliance in fiscal year 2017. Further, 18 of the 58 programs at 9 agencies were reported as noncompliant for 3 or more consecutive years. Fourteen of these 18 programs accounted for an estimated $74.4 billion of the $141 billion total estimated improper payments for fiscal year 2017; the other 4 programs did not report improper payment estimates. This sum may include estimates that are of unknown reliability. The $74.4 billion is primarily composed of estimates reported for two noncompliant programs, the Department of Health and Human Services' Medicaid program and the Department of the Treasury's (Treasury) Earned Income Tax Credit program; estimated improper payments for these two programs are also a central part of certain high-risk areas in GAO's 2017 High-Risk List. Agencies with any program reported as noncompliant for 3 or more consecutive years are required to notify Congress of their program's consecutive noncompliance and submit a proposal for reauthorization or statutory change to bring that program into compliance. GAO found that three agencies with one or more programs reported as noncompliant for 3 or more consecutive years, as of fiscal year 2016, did not notify Congress or submit the required proposals. The Departments of Labor and the Treasury submitted proposed legislative changes in response to their programs being previously reported as noncompliant, but did not notify Congress of the programs' continued noncompliance as of fiscal year 2016. The U.S. Department of Agriculture (USDA) has not notified Congress despite prior GAO and USDA IG recommendations to do so. 
To address these issues, in June 2018 the Office of Management and Budget (OMB) updated its guidance to clarify the notification requirements for each consecutive year a program is reported as noncompliant. GAO found that five agencies did notify Congress as required, and included additional quality information that is not specifically required, but could be useful in updating Congress on their compliance efforts. For example, all five agencies provided information on the root causes, risks, changes, or issues affecting their efforts and corrective actions or strategies to address them; three agencies provided other quality information on accountability mechanisms, designated senior officials, and measurable milestones. In June 2018, OMB updated its guidance to clarify agency reporting requirements for programs reported as noncompliant for 3 or more consecutive years. However, the updated guidance does not direct agencies to include the types of quality information included in these five agencies' notifications for fiscal year 2016. GAO's Standards for Internal Control in the Federal Government emphasizes the importance of communicating quality information, such as significant matters affecting agencies' efforts to achieve compliance objectives. Such information could be useful in understanding the current challenges of these programs and is essential for assessing agency efforts to address high-risk and other issues. As a result, Congress could have more complete information to effectively oversee agency efforts to address program noncompliance for 3 or more consecutive years. When programs are reported as noncompliant for 2 consecutive years, IPERA gives OMB authority to determine whether additional funding is needed to help resolve the noncompliance. 
In April 2018, OMB staff stated that they determined that no additional funding was needed for the 15 programs that were reported as noncompliant for 2 consecutive years, as of fiscal year 2016, and that they primarily rely on the IGs' recommendations in their annual IPERA compliance reports when making funding determinations. OMB staff subsequently stated that they no longer need to conduct a detailed review of the IGs' IPERA compliance reports to identify recommendations related to additional funding needs. Instead, OMB updated its guidance in June 2018 to direct agencies to submit proposals to OMB regarding additional funding needs to help address IPERA noncompliance and clarified that the funding determination process will unfold as part of the annual development of the President's Budget. As of September 2018, OMB was in the process of making funding determinations for 12 programs that were reported as noncompliant as of fiscal year 2017 and stated that any determinations made would be developed in the President's Budget for fiscal year 2020. GAO recommends that OMB update its guidance to specify other types of quality information that agencies with programs noncompliant for 3 or more consecutive years should include in their notifications to Congress, such as significant matters related to risks, issues, root causes, measurable milestones, designated senior officials, accountability mechanisms, and corrective actions or strategies planned or taken by agencies to achieve compliance. GAO provided a draft of this report to OMB and requested comments, and OMB said that it had no comments. GAO also provided a draft of this report to the 24 CFO Act agencies and their IGs and requested comments. In its written comments, the Social Security Administration (SSA) stated that it provided information on measurable milestones, designated senior officials, and accountability mechanisms in its agency financial report. 
However, SSA did not provide this information in its notifications to Congress for programs reported as noncompliant under IPERA as of fiscal year 2016. GAO believes that OMB should take steps to update OMB's guidance to help ensure that agencies report such significant information and include it in their notifications to Congress. In addition, several agencies and IGs provided technical comments, which were incorporated in the report as appropriate.
CFPB’s Research, Markets, and Regulations Division has primary responsibility for CFPB’s efforts to monitor market developments and risks to consumers and to retrospectively assess rules. As shown in figure 1, the division is composed of the Office of Research, the Office of Regulations, and the following four offices (collectively known as the “Markets Offices”), which are focused on different consumer financial markets: The Office of Card, Payment, and Deposit Markets monitors credit cards, deposit accounts, prepaid cards, and remittances, as well as other emerging forms of payment and related technologies, such as mobile payments and virtual currencies. It also monitors data aggregation services. The Office of Consumer Lending, Reporting, and Collection Markets monitors debt collection, debt relief, and consumer reporting and scoring, as well as student, auto, and the small-dollar and personal lending markets. The Office of Mortgage Markets monitors the mortgage markets, including originations, servicing, and secondary markets. The Office of Small Business Lending Markets monitors credit to small businesses, including traditional lenders, specialty financing, and emerging technologies. The four Markets Offices are responsible for collecting and sharing market intelligence, helping to shape CFPB policy (including through participation on rulemaking teams), and helping to inform the marketplace through research and outreach. The Office of Research is responsible for conducting research to support the design and implementation of CFPB’s consumer protection policies, including developing and writing any required cost-benefit analyses for rulemakings. Among other things, these offices research, analyze, and report on consumer financial markets issues. These offices also help inform the work of the Office of Regulations, which supports and provides strategic direction for CFPB’s rulemaking, guidance, and regulatory implementation functions. 
The Markets Offices and the Office of Research contribute to CFPB’s efforts to address the Dodd-Frank Act requirement that CFPB monitor for certain risks to consumers in support of its rulemaking and other functions. This provision states that CFPB may consider a number of factors in allocating its resources for risk-monitoring efforts with regard to consumer financial products and the markets for those products, such as consumers’ understanding of a type of product’s risks, the extent to which existing law is likely to protect consumers, and any disproportionate effects on traditionally underserved consumers. Further, the Dodd-Frank Act gives CFPB authority in connection with such monitoring to gather information from time to time regarding the organization, business conduct, markets, and activities of covered persons or service providers from a variety of sources, including several sources specified in the act. Finally, this provision requires CFPB to issue at least one report of significant findings from its risk monitoring each calendar year. The Office of Research has led CFPB’s efforts to address the Dodd-Frank Act requirement that CFPB conduct assessments of each significant final rule or order it adopts and publish a report of the assessment no later than 5 years after the rule or order’s effective date. Before publishing a report of its assessment, CFPB must invite public comment on whether the rule or order should be modified, expanded, or eliminated. In addition, the Dodd-Frank Act provides CFPB authority to require covered persons or service providers to provide information to help support these assessments, as well as to support its risk-monitoring activities. In addition to the Research, Markets, and Regulations Division, other CFPB divisions and offices conduct outreach to help inform CFPB policy making. 
For example, CFPB’s External Affairs Division facilitates conversation with stakeholders, such as Congress, financial institutions, state governments, and the public. In addition, in the Consumer Education and Engagement Division, the Office of Consumer Response manages the intake of and response to complaints about consumer financial products and services. All of the divisions report to the Director. In November 2017, the President designated a new Acting Director of CFPB, and in December 2018, the Senate voted to confirm a new Director of the bureau. To address the Dodd-Frank Act consumer risk-monitoring requirement, CFPB routinely monitors consumer financial markets through a variety of methods. It also conducts more targeted market monitoring to support rulemaking and other agency functions. CFPB collects and monitors routine market data and other market intelligence through a combination of internal and external data sources and outreach (see fig. 2). Markets Offices staff use information from these sources to analyze market trends and identify emerging risks that may require greater attention. Staff produce monthly and quarterly reports that summarize or analyze observed market developments and trends, and they distribute them bureau-wide. CFPB internal data and research. Staff in CFPB’s Markets Offices use CFPB data and research to identify and monitor risks. For example, in our review of CFPB’s market intelligence reports from July 2016 through July 2018, we observed the following frequently cited internal CFPB data sources: Consumer complaints submitted to CFPB. Markets Offices staff monitor consumer complaints to track trends and potential problems in the marketplace. For example, monthly mortgage trend reports we reviewed cited changes in total numbers of mortgage complaints, as well as in complaints related to private mortgage insurance, escrow accounts, and other mortgage-related topics. Consumer Credit Trends tool. 
This tool is based on a nationally representative sample of commercially available, anonymized credit records. Markets Offices staff use this tool to monitor conditions and outcomes for specific groups of consumers in markets for mortgages, credit cards, auto loans, and student loans. For example, CFPB monthly auto market trend reports cited the tool as a source for information on changes in the volume of auto loans by neighborhood income. Home Mortgage Disclosure Act data. CFPB maintains loan-level data that mortgage lenders report pursuant to the Home Mortgage Disclosure Act. According to CFPB, Markets Offices staff use the data for their market monitoring, which can include analysis to determine whether lenders are serving the housing needs of their communities and to identify potentially discriminatory lending patterns. External data and research. In addition to its internal databases, CFPB obtains external market data from a number of public and proprietary data sources. The market intelligence reports we reviewed included the following commonly cited external sources, among others: federal databases and research, such as the Federal Reserve Bank of New York’s Quarterly Report on Household Debt and Credit; publicly available information from sources such as industry websites, mainstream news publications, and publicly traded companies’ financial statements; and proprietary data from sources such as data analytics services and credit reporting agencies. Engagement with industry representatives. CFPB also gathers market intelligence from engagement with industry representatives. Market intelligence reports we reviewed cited several meetings with industry representatives and regular CFPB attendance at industry conferences. Representatives of two trade groups we interviewed told us that CFPB had sometimes proactively reached out to them regarding areas of potential risk. 
According to CFPB, in fiscal year 2018, Markets Offices staff conducted an average of about 50 meetings with industry per month and held intelligence-gathering meetings across various consumer financial markets throughout the year. Engagement with consumer organizations. CFPB’s External Affairs Division, which is responsible for engagement with the nonprofit sector, facilitates most communication between Markets Offices staff and consumer organizations to help inform staff’s risk-monitoring efforts. According to CFPB, between January and September 2018, staff from the External Affairs and Research, Markets, and Regulations divisions held an average of about four meetings per month with consumer organizations and nonprofit stakeholders, and Markets Offices staff said these meetings provided information useful in monitoring markets. Two of the three consumer organizations we interviewed noted that their communication with CFPB had decreased since late 2017. However, one group noted that external engagement has typically been greater when CFPB is going through a rulemaking and that rulemaking activity had slowed in the last year. Advisory committees and other formal outreach. CFPB obtains information on consumer financial issues and emerging market trends from various advisory groups and other formal outreach. In 2012, CFPB established a consumer advisory board, in accordance with a Dodd-Frank Act requirement. It also established three additional advisory councils (community bank, credit union, and academic) to obtain external perspectives on issues affecting its mission. The groups, which include subgroups focused on various consumer financial market areas or issues, met regularly through 2017. CFPB dismissed the existing members of the consumer advisory board and community bank and credit union advisory councils in June 2018 and reconstituted the groups with new, smaller memberships that resumed meeting in September 2018. 
In addition, from July 2016 to mid-November 2018, CFPB solicited public input through public field hearings and town hall meetings on issues such as debt collection, consumer access to financial records, and elder financial abuse, among other issues. Coordination with other regulators. CFPB engages with the federal prudential regulators and other federal and state agencies to inform its routine market-monitoring efforts. This engagement can occur through mechanisms such as working groups, task forces, and information-sharing agreements. For example, CFPB is a member of a working group of federal housing agencies, whose members share market intelligence and discuss risks they have observed in the mortgage markets. Markets Offices staff also receive quarterly, publicly available bank and credit union call report data through the Federal Financial Institutions Examination Council and the National Credit Union Administration, with which CFPB has information-sharing agreements. CFPB has supplemented its routine monitoring by conducting targeted research and data collection to inform rulemaking efforts, meet statutory reporting requirements, and learn more about a particular market for consumer financial products. As noted earlier, the Dodd-Frank Act authorizes CFPB to collect certain data from covered persons and service providers. Since July 2016, to support bureau rulemaking efforts, Markets Offices staff have augmented their routine monitoring with targeted use of supervisory data collected through CFPB’s examinations of covered persons and service providers. The Research, Markets, and Regulations Division has a formal information-sharing agreement with CFPB’s Supervision, Enforcement, and Fair Lending Division. 
Under this agreement, staff in the Office of Small Business Lending Markets used supervisory information on common data terminology used by business lenders to inform recommendations on data elements that should be included in a potential small business data collection rule. In addition, as discussed below, Markets Offices staff reviewed aggregated and anonymized supervisory information from CFPB’s examinations of payday lenders for research that informed the November 2017 Payday, Vehicle Title, and Certain High-Cost Installment Loans Rule, also referred to as the Payday Rule. In addition to rulemaking, CFPB has conducted targeted risk-monitoring activities to support certain statutory reporting requirements. For its mandated biennial credit card study, CFPB used its data-collection authorities under the Dodd-Frank Act to make four mandatory information requests to a total of 15 credit card issuers. According to CFPB officials, this study and other statutory reporting efforts—such as the bureau’s annual report on the Fair Debt Collection Practices Act—also support their market-monitoring efforts under the Dodd-Frank Act. CFPB notified the relevant federal and state regulators of its impending requests to the credit card issuers under those regulators’ supervision. Finally, CFPB has sometimes engaged in targeted data collection to learn more about specific areas of potential consumer financial risk. In some cases, CFPB has used its Dodd-Frank Act data collection authority under Section 1022 to require a company to provide data. For example, to understand developments with respect to person-to-person payments, CFPB required a payment processing company to provide certain information regarding its system. In other cases, CFPB has obtained targeted data through voluntary agreements with other regulators. 
For instance, in January 2018, CFPB reached an agreement with the Federal Reserve to obtain supervisory data on bank holding companies’ and intermediate holding companies’ mortgage and home equity loan portfolios. According to CFPB officials, they plan to use the data to monitor trends and risks in the mortgage market and inform bureau policy making. The market monitoring conducted by CFPB’s Markets Offices staff contributes to bureau rulemaking and other functions, such as supervision, guidance to industry, consumer education, and reporting. Rulemaking. Since July 2016, CFPB’s market-monitoring efforts have informed certain rulemaking efforts. For example, Markets Offices analysis of the small-dollar lending market informed CFPB’s November 2017 Payday Rule, according to staff and the proposed and final rules. Staff said they had found that some borrowers were caught in a cycle of using payday loan products without the ability to repay the loans. Under the final rule, lenders for certain loans must reasonably determine up front that borrowers can afford to make the payments on their loans without needing to re-borrow within 30 days, while still meeting their basic living and other expenses. In addition, CFPB’s November 2016 Prepaid Accounts Rule reflected market-monitoring information and other research that staff helped collect on prepaid accounts. The rule incorporated findings from CFPB’s 2014 analysis of prepaid account agreements, which CFPB conducted to understand the potential costs and benefits of extending existing regulatory provisions—such as error resolution protections—to such agreements. Further, CFPB’s market intelligence reports we reviewed from 2017 and 2018 reflected Markets Offices staff’s communication with industry regarding a debt-collection rule—a topic that has been on CFPB’s public rulemaking agenda since 2013, based in part on market-monitoring findings. Industry supervision and policy positions. 
Markets Offices staff’s market-monitoring findings have informed CFPB’s efforts to supervise institutions and communicate policy positions to industry participants. Staff assist the Supervision, Enforcement, and Fair Lending Division in its annual risk-based prioritization process. In 2018, for example, staff provided information on market size and risk for more than a dozen market areas, which helped the supervision division prioritize its coverage of those market areas in its examination schedule. Markets Offices staff told us they also have met frequently with supervision staff to share issues identified through monitoring and determine whether supervisory guidance or related actions would be appropriate to address them. Further, according to CFPB, market-monitoring information supported bureau leadership’s public statements on selected market developments and informed policy documents, such as consumer protection principles on financial technology. Consumer education. CFPB’s risk monitoring has informed its broader consumer education efforts. CFPB’s Consumer Education and Engagement Division provides financial education tools, including blogs and print and online guides on financial topics such as buying a home, choosing a bank or credit union, or responding to debt collectors. Markets Offices staff provided us with several examples of consumer education materials for which they had contributed subject-matter expertise since July 2016. Examples included a consumer advisory on credit repair services and blog posts on mortgage closing scams and tax refund advance loans. Public reports. CFPB’s market-monitoring findings have informed several of its public reports since July 2016. According to CFPB officials, when Markets Offices staff identify risks they think could be mitigated by public communications to consumers, they work with the Consumer Education and Engagement Division, as well as other divisions, to publish relevant material. 
As noted earlier, the Dodd-Frank Act requires CFPB to issue at least one report annually of significant findings from its monitoring of risks to consumers in the offering or provision of consumer financial products or services. CFPB officials stated that this requirement is addressed by the first section of CFPB’s semiannual reports to Congress, which discusses significant problems consumers face in shopping for or obtaining consumer financial products and services. CFPB officials further noted that other public CFPB reports include information related to risks to consumers and may also respond to the annual Dodd-Frank Act reporting requirement. For example, CFPB’s December 2017 biennial report on the consumer credit card market discussed credit card debt collection and persistent indebtedness faced by some consumers, among other consumer financial risks. In addition, CFPB’s quarterly consumer credit trend reports have discussed risks related to consumers financing auto purchases with longer-term loans. CFPB currently lacks a systematic, bureau-wide process for prioritizing financial risks facing consumers—using information from its market monitoring, among other sources—and for considering how it will use its tools to address those risks. In 2015, CFPB initiated such a process, but CFPB officials said that the most recent round of this process was completed in 2017 and that its leadership has not yet decided whether to continue using the process. In a February 2016 public report, CFPB described this process (which CFPB refers to internally as “One Bureau”) for deploying shared bureau-wide resources to address some of the most troubling problems facing consumers. According to the report, through this One Bureau process, CFPB prioritized problems that pose risks to consumers based on the extent of the consumer harm CFPB had identified and its capacity to eliminate or mitigate that harm. 
The report identified near-term priority goals in nine areas where CFPB hoped to make substantial progress within 2 years. It provided evidence of the nature or extent of risks facing consumers and described how CFPB planned to use its tools—such as rulemaking, supervision, enforcement, research, and consumer education—to address the priority goals. As part of the One Bureau process, CFPB created several cross-bureau working groups, which were focused on specific market areas and tasked with helping ensure progress toward CFPB’s near-term priority goals, among other responsibilities. The bureau revisited its stated priorities in June 2017 to guide its work through fiscal year 2018. However, officials said that while the working groups continue to facilitate communication, informal collaboration, and strategy-setting across the bureau, CFPB has not decided whether to engage in a third round of prioritization under the One Bureau process. The bureau was without a permanent Director from November 2017 until December 2018, when the Senate confirmed a new Director. CFPB officials told us that CFPB may revise its approach to prioritization under new leadership. Federal internal control standards state that management should use quality information to achieve agency objectives, such as by using quality information to make informed decisions. In addition, the standards state that management should identify, analyze, and respond to risks related to achieving the defined objectives. Through One Bureau, CFPB had a process to use the large amount of data and market intelligence it collected on consumer risks to make informed decisions about its bureau-wide policy priorities and how it would address them. CFPB has mechanisms in place for the Markets Offices to inform the work of individual divisions. 
For example, as noted, Markets Offices staff contribute to rulemaking efforts (including through participation on rulemaking teams) and to the annual setting of supervisory priorities. However, although the Markets Offices continue to collect market intelligence and contribute to cross-bureau working groups, CFPB currently lacks a process for systematically prioritizing risks or problems facing consumers and identifying the most effective tools to address those risks. CFPB officials noted that the bureau issued 12 requests for information in early 2018 to seek public input to inform its priorities. Topics covered by these requests for public input have included the bureau’s rulemaking process and its inherited and adopted rules. In an October 2018 statement, CFPB announced that it expected to publish an updated statement of rulemaking priorities by spring 2019 based on consideration of various activities, including its ongoing market monitoring and its analysis of the public comments from the requests for information. However, this prioritization effort focuses on setting rulemaking priorities and does not incorporate all of CFPB’s other tools to respond to consumer financial risks. While CFPB has continued to take steps to consider information to inform its policy priorities, a systematic, bureau-wide process to prioritize risks to consumers and consider how CFPB will use its full set of tools to address them could help to ensure that CFPB effectively focuses its resources on the most significant risks to consumers. This, in turn, could enhance CFPB’s capacity to meet its statutory consumer protection objectives. In two internal memorandums, CFPB documented an initial process for meeting the Dodd-Frank Act requirement to retrospectively assess significant rules or orders and issue reports of such assessments within 5 years of the rule or order’s effective date. 
According to CFPB officials, the bureau may modify the process for future work after it has completed its first three assessments. The assessments will be in addition to other regulatory reviews conducted by CFPB. To determine which of its final rules were significant for purposes of the Dodd-Frank Act retrospective assessment requirement, CFPB created a four-factor test. In applying this test, CFPB analyzes (1) the rule’s cumulative annual cost to covered persons, specifically whether it exceeds $100 million; (2) its effects on the features of consumer financial products and services; (3) its effects on the business operations of providers that support the product or service; and (4) its effects on the market, including the availability of consumer financial products and services. The memorandums recommended weighing the first factor more heavily and considering factors two through four cumulatively, so that high-cost rules tend to be considered significant. If a rule’s cumulative annual costs exceed $100 million, CFPB may consider the rule to be significant even if the cumulative effect from factors two through four is small. If the rule’s costs do not exceed $100 million, there must be a large cumulative effect from factors two through four for the rule to be considered significant. After applying the test to nine rules in early 2017, CFPB determined that three were significant for retrospective assessment purposes: Remittance Rule. This rule covers remittances, which are cross-border transfers of funds. Ability-to-Repay/Qualified Mortgage Rule (ATR/QM Rule). This rule covers consumers’ ability to repay mortgage loans and categories of mortgage loans that meet the ability-to-repay requirement (qualified mortgages). Real Estate Settlement Procedures Act (RESPA) Servicing Rule. This rule covers loan servicing requirements under RESPA. 
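The weighting described in the memorandums can be sketched as a simple decision rule. This is our own illustrative rendering under stated assumptions: the memorandums do not quantify how factors two through four are scored or what counts as a "large cumulative effect," so the 0-3 scoring scale and the cutoff below are hypothetical.

```python
# Illustrative sketch of CFPB's four-factor significance test as described
# in the memorandums. The 0-3 scoring scale for factors 2-4 and the
# large_effect_cutoff are our own assumptions; CFPB does not quantify them.

COST_THRESHOLD = 100_000_000  # factor 1: cumulative annual cost to covered persons


def is_significant(annual_cost: float,
                   product_effects: int,    # factor 2, hypothetical 0-3 score
                   business_effects: int,   # factor 3, hypothetical 0-3 score
                   market_effects: int,     # factor 4, hypothetical 0-3 score
                   large_effect_cutoff: int = 6) -> bool:
    """Factor 1 (cost) is weighed most heavily; factors 2-4 are cumulative."""
    if annual_cost > COST_THRESHOLD:
        # High-cost rules tend to be considered significant even if the
        # cumulative effect from factors 2-4 is small.
        return True
    # Otherwise, a large cumulative effect from factors 2-4 is required.
    return (product_effects + business_effects + market_effects) >= large_effect_cutoff
```

For example, under these assumptions a rule costing $150 million would be treated as significant regardless of its other effects, while a $50 million rule would need strong cumulative effects across factors two through four.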
CFPB staff told us that in the future they plan to apply the four-factor test to rules not already subject to an assessment within 3 years of the rules’ effective dates, pending new leadership’s review of the test. As of November 2018, staff told us they had not yet formally applied the test to any additional rules. However, they told us that they plan to apply the test to the TILA-RESPA Integrated Disclosure Rule in 2019. If CFPB determines that the rule is significant, CFPB officials said they plan to complete an assessment in late 2020. In addition to outlining the four-factor test, a March 2016 memorandum documented CFPB’s decision to generally focus any significant new data collection efforts on a rule’s effects on consumer and market-wide outcomes rather than effects on businesses. In the memorandum, CFPB noted that the objectives of many of its rules focus on improved consumer experiences and outcomes, such as reductions in loan-default risk and improved access to financial product information and credit. However, the memorandum also noted that CFPB would assess outcomes for businesses when data were available at minimal cost. In addition, the memorandum explained that CFPB would consider spending additional resources to collect data on business outcomes under certain conditions, such as when unfavorable outcomes for businesses could meaningfully affect significant numbers of consumers. Although CFPB stated in its March 2016 memorandum that it did not plan to formally assess the previously mentioned three rules’ costs or benefits to providers, it stated in its October 2018 Remittance Rule Assessment Report that it may reconsider that decision for future rule assessments. In the March 2016 memorandum, CFPB also documented a decision to not make specific policy recommendations in the final reports for the retrospective assessments. 
CFPB expects the findings from its final assessment reports to inform its policy development process, through which it makes decisions about future rulemaking efforts. In the March 2016 memorandum, CFPB explained that separating the assessments from policy recommendations would keep the assessments focused on evidence-based descriptions. As previously described, CFPB also issued requests for information to obtain public input on effects of its inherited and adopted rules, in addition to the required retrospective assessments. CFPB staff stated that they plan to use the lessons learned from the initial assessment process to inform their procedures for future assessments. According to CFPB, a future procedures document is to outline its process for the retrospective assessments required by the Dodd-Frank Act as well as for similar assessments CFPB may conduct pursuant to other statutes or executive orders. For each of the three rules it determined to be significant, CFPB created detailed assessment plans and a timeline for completion (see table 1). Each plan defined which aspects of the rules the assessment would focus on; outlined the scope and methodology, including challenges for the assessment and potential limitations of methodology; and identified data CFPB planned to gather and compile, including CFPB’s own and third-party data, and explained how the data will be used to evaluate the effects of the rule. CFPB issued requests for information between March and June of 2017 to collect public input on each assessment and created plans for incorporating the comments in each assessment report. As required by the Dodd-Frank Act, these requests solicited comments on modifying, expanding, or eliminating the rules. In addition, CFPB requested comments on the assessment plans and invited suggestions on other data that might be useful for evaluating the rules’ effects. 
In a document provided to us, CFPB described its preliminary plan to summarize comments received from the public and use the information received. CFPB staff told us they adjusted their research questions and data sources on all three assessments in response to comments. For example, based on comments, they added a question to an industry survey about a provision of the Remittance Rule and incorporated a new data source into the ATR/QM Rule and RESPA Servicing Rule assessments. Other data sources used for the assessments include federal and state agencies, voluntary surveys of providers of consumer financial products, and loan data from servicers. For example, for the Remittance Rule assessment, CFPB sent a voluntary industry survey to 600 money transmitters, banks, and credit unions on how the rule has affected their business practices and costs, as well as potential problems in specific market segments. For the RESPA Servicing Rule assessment, CFPB conducted qualitative structured interviews with mortgage servicers to learn about changes servicers had to make in response to the rule. CFPB published its Remittance Rule Assessment Report in October 2018. The report analyzed trends in the volume of remittance transfers, the number of providers, and the price of transfers. For example, CFPB found that declining remittance prices and an increase in the volume of remittances—trends that had begun before the rule's effective date—continued afterward. However, CFPB was unable to conclude whether these trends would have changed without the rule. In addition, the report noted that new technology has increased access to remittances but has also complicated CFPB's attempts to measure the effects of the Remittance Rule on consumers. The report also estimated the rule's initial and continuing compliance costs for businesses, estimating one-time costs of between 30 and 33 cents per remittance in 2014 and continuing costs of between 7 and 37 cents per remittance in 2017.
In addition, the report summarized comments and information CFPB received from a request for information in March 2017. In monitoring risks of financial products and services to consumers, CFPB has drawn from a wide range of sources, and its findings have informed its key consumer protection tools, such as rulemakings and consumer education materials. In 2016 and 2017, CFPB’s One Bureau process allowed it to consider the market information it collected to prioritize the most important risks to consumers and determine how to most effectively address those risks on a bureau-wide basis. However, CFPB has not yet decided whether to use the One Bureau process to reexamine its priorities and has instead relied on prioritization mechanisms that focus on its use of individual policy tools, such as its processes for setting rulemaking and supervision priorities. Putting a systematic bureau-wide prioritization process in place could help CFPB ensure that it focuses on the most significant risks to consumers and effectively meets its statutory consumer protection objectives. The Director of CFPB should implement a systematic process for prioritizing risks to consumers and considering how to use the bureau’s available policy tools—such as rulemaking, supervision, enforcement, and consumer education—to address these risks. Such a process could incorporate principles from the prior One Bureau process, such as an assessment of the extent of potential harm to consumers in financial markets, to prioritize the most significant risks. (Recommendation 1) We provided a draft of this product to CFPB for comment. We also provided the relevant excerpts of the draft report to the Federal Housing Finance Agency, the Federal Reserve, and the Office of the Comptroller of the Currency for their review and technical comments. CFPB provided oral and written comments, which are summarized below. CFPB’s written comments are reproduced in appendix I. 
In addition, CFPB and the Federal Housing Finance Agency provided technical comments, which we incorporated as appropriate. The Federal Reserve and the Office of the Comptroller of the Currency had no comments. In oral comments provided on November 29, 2018, CFPB's Acting Deputy Director and other CFPB officials clarified the status of the One Bureau process. The officials clarified that while CFPB officials had previously told us that the One Bureau process was on hold, work on One Bureau priorities has continued with support from a set of cross-bureau working groups. The officials noted that CFPB had not yet determined whether to engage in another round of the One Bureau priority-setting process. In addition, in its written comments, CFPB highlighted the role of the cross-bureau working groups in its market monitoring and other efforts. In response to these comments, we made edits to clarify the status of the One Bureau process and describe the role of the cross-bureau working groups. In its written comments, CFPB did not agree or disagree with our recommendation but stated that it will endeavor to improve its processes for identifying and addressing consumer financial risks. CFPB stated that it recognizes the importance of having processes in place to prioritize and address risks to consumers in the financial marketplace. CFPB cited examples of existing processes—such as its processes for setting its rulemaking agenda and supervisory priorities—that were designed to ensure that its risk monitoring informs its work. In the oral comments, CFPB officials expressed concern that the draft report's characterization of a lack of a systematic process for prioritizing risks to consumers might suggest that CFPB entirely lacks processes in this regard. We note that the draft report described CFPB's existing processes for setting rulemaking and supervisory priorities.
While we agree that these processes help CFPB to prioritize work in these areas, we maintain that these processes do not reflect a systematic, bureau-wide process for prioritizing risks to consumers and determining how to most effectively address them. We made minor edits to the report to clarify that the process CFPB lacks is a bureau-wide process that considers how it will use its full set of tools to address risks to consumers. We maintain that having such a process would help to ensure that CFPB focuses its resources on the most significant consumer risks and is well positioned to meet its consumer protection objectives. We are sending copies of this report to CFPB, the Federal Housing Finance Agency, the Federal Reserve, the Office of the Comptroller of the Currency, the appropriate congressional committees and members, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or clementsm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact above, John Fisher (Assistant Director), Lisa Reynolds (Analyst-in-Charge), Bethany Benitez, Joseph Hackett, Marc Molino, Jennifer Schwartz, and Tyler Spunaugle made key contributions to this report.
|
The Dodd-Frank Act created CFPB to regulate the provision of consumer financial products and services. Congress included a provision in statute for GAO to study financial services regulations annually, including CFPB’s related activities. This eighth annual report examines steps CFPB has taken to (1) identify, monitor, and report on risks to consumers in support of its rulemakings and other functions and (2) retrospectively assess the effectiveness of certain rules within 5 years of their effective dates. GAO reviewed CFPB policies and procedures, internal and public reports, and memorandums documenting key decisions, assessment plans, and requests for public comment. GAO also interviewed officials from CFPB, three federal agencies with which it coordinated, and representatives of consumer and industry groups. In accordance with the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), the Consumer Financial Protection Bureau (CFPB) has routinely monitored the consumer financial markets to identify potential risks to consumers related to financial products and services. CFPB monitors consumer complaints, analyzes market data, and gathers market intelligence from external groups (see figure for sources of CFPB’s monitoring). CFPB has used risk-monitoring findings to inform its rulemakings, supervision, and other functions. In 2015, CFPB initiated a bureau-wide process for using market data and other information to set policy priorities related to addressing risks to consumers. However, CFPB has not yet decided whether it will continue to use this process to set priorities. CFPB currently lacks a systematic, bureau-wide process for prioritizing financial risks to consumers and considering how it will use its tools—such as rulemaking, supervision, and consumer education—to address them. 
Federal internal control standards state that management should use quality information to achieve agency objectives and that it should also identify, analyze, and respond to risks related to achieving those objectives. Implementing a bureau-wide prioritization process could help to ensure that CFPB effectively focuses its resources on the most significant financial risks to consumers and enhances its ability to meet its statutory consumer protection objectives. CFPB has taken steps to retrospectively assess its significant rules within 5 years of these rules becoming effective, as required by the Dodd-Frank Act. CFPB developed and applied criteria to identify three rules as significant and requiring a retrospective assessment. For these three rules, CFPB created assessment plans, issued public requests for comment and information, and reached out to external parties for additional data and evidence. In October 2018, CFPB issued its first assessment report on a rule related to cross-border money transfers. Among other things, the report found that certain trends, such as increasing volume of these transfers, continued after the rule took effect. CFPB expects to complete the other two assessments by the January 2019 deadline. GAO recommends that CFPB implement a systematic process for prioritizing risks to consumers and considering how to use its available policy tools—such as rulemaking, supervision, enforcement, and consumer education—to address these risks. CFPB did not agree or disagree with the recommendation but agreed with the importance of having processes in place to prioritize and address consumer financial risks.
|
We found that from October 2013 through March 2017, the five selected VA medical centers required reviews of a total of 148 providers' clinical care after concerns were raised about their care, but officials at these medical centers could not provide documentation to show that almost half of these reviews were conducted. We found that all five VA medical centers lacked at least some documentation of the reviews they told us they conducted, and in some cases, we found that the required reviews were not conducted at all. Specifically, across the five VA medical centers, we found the following:

- The medical centers lacked documentation showing that one type of review—focused professional practice evaluations (FPPE) for cause—had been conducted for 26 providers after concerns had been raised about their care. FPPEs for cause are reviews of providers' care over a specified period of time, during which the provider continues to see patients and has the opportunity to demonstrate improvement. Documentation of these reviews is explicitly required under VHA policy. Additionally, VA medical center officials confirmed that FPPEs for cause that were required for another 21 providers were never conducted.
- The medical centers lacked documentation showing that retrospective reviews—which assess the care previously delivered by a provider during a specific period of time—had been conducted for 8 providers after concerns had been raised about their clinical care.
- One medical center lacked documentation showing that reviews had been conducted for another 12 providers after concerns had been raised about their care. In the absence of any documentation, we were unable to identify the types of reviews, if any, that were conducted for these 12 providers.

We also found that the five selected VA medical centers did not always conduct reviews of providers' clinical care in a timely manner.
Specifically, of the 148 providers, the VA medical centers did not initiate reviews of 16 providers for 3 months, and in some cases, for multiple years, after concerns had been raised about the providers' care. In a few of these cases, additional concerns about the providers' clinical care were raised before the reviews began. We found that two factors were largely responsible for the inadequate documentation and untimely reviews of providers' clinical care we identified at the selected VA medical centers. First, VHA policy does not require VA medical centers to document all types of reviews of providers' clinical care, including retrospective reviews, and VHA has not established a timeliness requirement for initiating reviews of providers' clinical care. Second, VHA's oversight of the reviews of providers' clinical care is inadequate. Under VHA policy, networks are responsible for overseeing the credentialing and privileging processes at their respective VA medical centers. While reviews of providers' clinical care after concerns are raised are a component of credentialing and privileging, we found that none of the network officials we spoke with described any routine oversight of such reviews. This may be in part because the standardized tool that VHA requires the networks to use during their routine audits does not direct network officials to ensure that all reviews of providers' clinical care have been conducted and documented. Further, some of the network officials we interviewed told us they were not using the standardized audit tool as required. Without adequate documentation and timely completion of reviews of providers' clinical care, VA medical center officials lack the information they need to make decisions about providers' privileges, including whether or not to take adverse privileging actions against providers.
Furthermore, because of its inadequate oversight, VHA lacks reasonable assurance that VA medical center officials are reviewing all providers about whom clinical care concerns have been raised and are taking adverse privileging actions against the providers when appropriate. To address these shortcomings, we recommended that VHA 1) require documentation of all reviews of providers’ clinical care after concerns have been raised, 2) establish a timeliness requirement for initiating such reviews, and 3) strengthen its oversight by requiring networks to oversee VA medical centers to ensure that such reviews are documented and initiated in a timely manner. VA concurred with these recommendations and described plans for VHA to revise existing policy and update the standardized audit tool used by the networks to include more comprehensive oversight of VA medical centers’ reviews of providers’ clinical care after concerns have been raised. We found that from October 2013 through March 2017, the five VA medical centers we reviewed had only reported one of nine providers required to be reported to the NPDB under VHA policy. These nine providers either had adverse privileging actions taken against them or resigned or retired while under investigation before an adverse privileging action could be taken. None of these nine providers were reported to state licensing boards as required by VHA policy. The VA medical centers documented that these nine providers had significant clinical deficiencies that sometimes resulted in adverse outcomes for veterans. For example, the documentation shows that one provider’s surgical incompetence resulted in numerous repeat surgeries for veterans. Another provider’s opportunity to improve through an FPPE for cause had to be halted and the provider was removed from providing care after only a week due to concerns that continuing the review would potentially harm patients. 
In addition to these nine providers, one VA medical center terminated the services of four contract providers based on deficiencies in the providers' clinical performance, but the facility did not follow any of the required steps for reporting providers to the NPDB or relevant state licensing boards. This is concerning, given that the VA medical center documented that one of these providers was terminated for cause related to patient abuse after only 2 weeks of work at the facility. Two of the five VA medical centers we reviewed each reported one provider to the state licensing boards for failing to meet generally accepted standards of clinical practice to the point that it raised concerns for the safety of veterans. However, we found that the medical centers' reporting to the state licensing board took over 500 days to complete in both cases, which was significantly longer than the 100 days suggested in VHA policy. Across the five VA medical centers, we found that providers were not reported to the NPDB and state licensing boards as required for two reasons. First, VA medical center officials were generally not familiar with or misinterpreted VHA policies related to NPDB and state licensing board reporting. For example, at one VA medical center, we found that officials failed to report six providers to the NPDB because they were unaware that they had been delegated responsibility for NPDB reporting. Officials at two other VA medical centers incorrectly told us that VHA cannot report contract providers to the NPDB. At another VA medical facility, officials did not report a provider to the NPDB or to any of the state licensing boards where the provider held a medical license because medical center officials learned that one state licensing board had already found out about the issue independently. Therefore, VA officials did not believe that they needed to report the provider.
This misinterpretation of VHA policy meant that the NPDB and the state licensing boards in other states where the provider held licenses were not alerted to concerns about the provider's clinical practice. Second, VHA policy does not require the networks to oversee whether VA medical centers are reporting providers to the NPDB or state licensing boards when warranted. We found, for example, that network officials were unaware of situations in which VA medical center officials failed to report providers to the NPDB. We concluded that VHA lacks reasonable assurance that all providers who should be reported to these entities are reported. VHA's failure to report providers to the NPDB and state licensing boards as required makes it possible for providers who have provided substandard care at one facility to obtain privileges at another VA medical center or at hospitals outside of VA's health care system. We found several cases of this occurring among the providers who were not reported to the NPDB or state licensing boards by the five VA medical centers we reviewed. For example, we found that two of the four contract providers whose contracts were terminated for clinical deficiencies remained eligible to provide care to veterans outside of that VA medical center. At the time of our review, one of these providers held privileges at another VA medical center, and another participated in the network of providers that can provide care for veterans in the community. We also found that a provider who was not reported as required to the NPDB during the period we reviewed had their privileges revoked 2 years later by a non-VA hospital in the same city for the same reason the provider was under investigation at the VA medical center. Officials at this VA medical center did not report this provider following a settlement agreement under which the provider agreed to resign. A committee within the VA medical center had recommended that the provider's privileges be revoked prior to the agreement.
There was no documentation of the reasons why this provider was not reported to the NPDB under VHA policy. To improve VA medical centers' reporting of providers to the NPDB and state licensing boards and VHA oversight of these processes, we recommended that VHA require its networks to establish a process for overseeing VA medical centers to ensure they are reporting to the NPDB and to state licensing boards and to ensure that this reporting is timely. VA concurred with this recommendation and told us that it plans to include oversight of timely reporting to the NPDB and state licensing boards as part of the standard audit tool used by the networks. If you or your staff members have any questions concerning this testimony, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions to this testimony include Marcia A. Mann (Assistant Director), Kaitlin M. McConnell (Analyst-in-Charge), Summar C. Corley, Krister Friday, and Jacquelyn Hamilton. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
This testimony summarizes the information contained in GAO's November 2017 report, entitled VA Health Care: Improved Policies and Oversight Needed for Reviewing and Reporting Providers for Quality and Safety Concerns (GAO-18-63). Department of Veterans Affairs (VA) medical center (VAMC) officials are responsible for reviewing the clinical care delivered by their privileged providers—physicians and dentists who are approved to independently perform specific services—after concerns are raised. The five VAMCs GAO selected for review collectively required review of 148 providers from October 2013 through March 2017 after concerns were raised about their clinical care. GAO found that these reviews were not always documented or conducted in a timely manner. GAO identified these providers by reviewing meeting minutes from the committee responsible for requiring these types of reviews at the respective VAMCs, and through interviews with VAMC officials. The selected VAMCs were unable to provide documentation of these reviews for almost half of the 148 providers. Additionally, the VAMCs did not start the reviews of 16 providers for 3 months to multiple years after the concerns were identified. GAO found that VHA policies do not require documentation of all types of clinical care reviews and do not establish timeliness requirements. GAO also found that the Veterans Health Administration (VHA) does not adequately oversee these reviews at VAMCs through its Veterans Integrated Service Networks (VISN), which are responsible for overseeing the VAMCs. Without documentation and timely reviews of providers' clinical care, VAMC officials may lack information needed to reasonably ensure that VA providers are competent to provide safe, high quality care to veterans and to make appropriate decisions about these providers' privileges.
GAO also found that from October 2013 through March 2017, the five selected VAMCs did not report most of the providers who should have been reported to the National Practitioner Data Bank (NPDB) or state licensing boards (SLB) in accordance with VHA policy. The NPDB is an electronic repository for critical information about the professional conduct and competence of providers. GAO found that selected VAMCs did not report to the NPDB eight of nine providers who had adverse privileging actions taken against them or who resigned during an investigation related to professional competence or conduct, as required by VHA policy, and none of these nine providers had been reported to SLBs. GAO found that officials at the selected VAMCs misinterpreted or were not aware of VHA policies and guidance related to NPDB and SLB reporting processes, resulting in providers not being reported. GAO also found that VHA and the VISNs do not conduct adequate oversight of NPDB and SLB reporting practices and cannot reasonably ensure appropriate reporting of providers. As a result, VHA's ability to provide safe, high quality care to veterans is hindered because other VAMCs, as well as non-VA health care entities, will be unaware of serious concerns raised about a provider's care. For example, GAO found that after one VAMC failed to report to the NPDB or SLBs a provider who resigned to avoid an adverse privileging action, a non-VA hospital in the same city took an adverse privileging action against that same provider for the same reason 2 years later.
|
DOD uses working capital funds to focus management's attention on the total costs of carrying out critical business operations and encourage DOD support organizations to provide quality goods and services at the lowest cost. The ability of working capital funds to operate on a break-even basis depends on accurately projecting workload, estimating costs, and setting rates to recover the full costs of producing goods and services. Generally, customers use appropriated funds to finance orders placed with working capital funds. DOD sets the rates charged for goods and services during the budget preparation process, which generally occurs approximately 18 months before the rates go into effect. To develop rates, working capital fund managers review projected costs such as labor and materials, as well as projected customer requirements. The rates are intended to remain fixed during the fiscal year in accordance with DOD policy. DOD's stabilized price policy serves to protect customers from unforeseen inflationary increases and other cost uncertainties and better assures customers that they will not have to reduce programs to pay for potentially higher-than-anticipated prices. Because working capital fund managers base rates charged on assumptions formulated in advance of rates going into effect, some variance is expected between projected and actual costs and revenues. The TWCF is dedicated to TRANSCOM's mission to provide air, land, and sea transportation for DOD in times of peace and war, with a primary focus on wartime readiness. Specifically, TWCF is used to provide air transportation and services for passengers or cargo in support of DOD operations or along established routes. The TWCF is also used to finance Air Force and joint training requirements. Examples of joint capabilities supported by the TWCF are depicted in figure 2. The TWCF uses rates for airlift services that do not cover the full cost of airlift operations.
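The break-even mechanics described above can be illustrated with a short sketch. The dollar figures and function names here are invented for the example and are not TWCF data; the point is only that a rate set from projections, and then held fixed under the stabilized price policy, produces a gain or loss whenever actual costs or workload diverge from those projections.

```python
def breakeven_rate(projected_cost: float, projected_workload: float) -> float:
    """Rate per unit of service that would recover the full projected cost."""
    return projected_cost / projected_workload

def year_end_variance(rate: float, actual_cost: float,
                      actual_workload: float) -> float:
    """Gain (+) or loss (-) once actuals are known.

    Under the stabilized price policy, the rate stays fixed for the
    fiscal year even though it was set ~18 months earlier.
    """
    revenue = rate * actual_workload
    return revenue - actual_cost

# Rate set from projections: $1,000 of cost over 500 units of workload.
rate = breakeven_rate(projected_cost=1_000.0, projected_workload=500.0)
print(rate)  # 2.0 per unit

# Actual costs came in higher than projected, so the fund runs a loss.
print(year_end_variance(rate, actual_cost=1_050.0, actual_workload=500.0))  # -50.0
```

A rate deliberately set below `breakeven_rate`, as with some TWCF airlift rates kept competitive with commercial carriers, produces a negative variance by design rather than by forecasting error.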
The military services may choose between TRANSCOM and commercial service providers along established routes. Thus, fund managers set rates for some airlift services to remain competitive with commercial airlift carriers, which historically do not result in revenue sufficient to cover the full cost of airlift operations. DOD must maintain airlift capacity and must remain ready and available to support mobilization for war and contingencies. Providing an incentive for customers to use DOD airlift capacity helps TRANSCOM maintain military airlift capabilities not available from commercial providers. TWCF cash balances are managed as a component of the Air Force Working Capital Fund. Although the TWCF is managed on a day-to-day basis by TRANSCOM, it is part of the Air Force Working Capital Fund for cash management purposes. The relationship of the TWCF to the Air Force Working Capital Fund provides a cash management benefit. According to Air Force officials, retaining the TWCF within the Air Force Working Capital Fund for cash management purposes provides flexibility while minimizing the need for additional funding. According to month-end cash balance data, the TWCF has been able to operate using cash available in the Air Force Working Capital Fund when no funds were available in the TWCF. For example, the TWCF month-end cash balance was negative fifteen times during fiscal years 2007-2017, but there was sufficient cash in the Air Force Working Capital Fund to allow the TWCF to continue to operate and execute its missions. For more information on the cash balances of the Air Force Working Capital Fund and the TWCF, see appendix II.
Multiple DOD organizations have roles in managing various aspects of the TWCF:

The Office of the Under Secretary of Defense (Comptroller)/Chief Financial Officer is generally responsible for coordinating DOD budget preparation, issuing guidance, issuing working capital fund annual financial reports, and overseeing the implementation of working capital funds across DOD. This office is also responsible for approving rates developed for the budget process and charged to the military services.

The Air Force assumed responsibility for TWCF cash management in fiscal year 1998, and the TWCF cash balance is included in the Air Force Working Capital Fund cash balance. The Air Force is also responsible for developing Operations and Maintenance budget requests that include requests for funds to pay TRANSCOM for airlift services financed through the TWCF and the ARA. The Assistant Secretary of the Air Force (Financial Management and Comptroller) is responsible for directing and managing all comptroller, programming, and financial management functions, activities, and operations of the Air Force.

TRANSCOM is responsible for the day-to-day financial management of the TWCF and has financial reporting responsibility for the TWCF, including setting rates for airlift services. TRANSCOM is also responsible for providing defense components with transportation services to meet national security needs; providing guidance for forecasting; and providing guidance for the standardization of rates, regulations, operational policies, and procedures.

Air Mobility Command is a major Air Force command and is responsible to TRANSCOM for providing airlift services paid for by the TWCF. To fulfill its responsibility for providing airlift services to defense components, TRANSCOM and Air Mobility Command use a combination of military and commercial aircraft.
The Air Force requested, allotted, and expended billions of dollars for the ARA for fiscal years 2007 through 2017. These amounts varied annually, in some cases by hundreds of millions of dollars. Our analysis of Air Force and TRANSCOM budget and financial information showed that for fiscal years 2007 through 2017, the Air Force requested $2.8 billion from Congress for ARA requirements as part of its annual Operations and Maintenance appropriation. The Air Force allotted $2.8 billion (i.e., directed the use of the appropriated funds) and expended $2.4 billion of the appropriated ARA funds. During this period, the total allotted amount was about $400 million more than the expended amount. According to Air Force officials, this $400 million was used to pay for other Air Force readiness priorities. ARA amounts requested, allotted, and expended for fiscal years 2007 through 2017 are shown in figure 3. In five fiscal years (2008, 2009, 2013, 2014, and 2017) the Air Force allotted less than the amount ultimately expended for the ARA. In these fiscal years, Air Force officials stated that they used available Operations and Maintenance appropriations to support the ARA. For example, in fiscal year 2013, the Air Force requested and allotted less than a million dollars for the ARA. However, the Air Force expended $294 million for the ARA in fiscal year 2013. According to Air Force officials, the Air Force used Operations and Maintenance mobilization funding to provide the ARA funds to the TWCF to cover this gap. Furthermore, in five fiscal years (2010, 2011, 2012, 2015, and 2016) the Air Force did not expend the total amounts allotted for the ARA because the allotments exceeded ARA funding needs. According to Air Force officials, they expended amounts initially allotted for ARA requirements to support other readiness priorities, such as training and sustainment requirements.
For additional information related to TWCF costs and revenues for airlift services, see appendix III. Based on our analysis and interviews with Air Force and TRANSCOM officials, we determined that the Air Force’s ARA budget request, the ARA amount allotted, and the amount expended by the Air Force can vary for a number of reasons:

Workload variations occurred due to changes in the global security environment, natural disasters, and force structure changes: For example, in fiscal year 2010, airlift services workload increased 8 percent over the previous year’s level and 39 percent over budgeted levels as a result of force structure changes in Iraq and Afghanistan. This occurred because during fiscal year 2010 the number of U.S. armed forces personnel in Iraq declined by about 81,000, and the number of U.S. armed forces personnel in Afghanistan increased by about 34,000. These changes required additional airlift services and resulted in more revenue than was originally estimated for the TWCF. The TWCF also received additional funding from the military services to offset increased fuel costs. As a result, TRANSCOM did not issue a bill for the ARA for fiscal year 2010, and the Air Force used the $262 million allotted for ARA requirements for other readiness priorities.

ARA budget requests and subsequent expenditures in the fiscal year of availability may be affected by other revenue sources: From fiscal years 2007 through 2017, the TWCF received $6.5 billion from other revenue sources, such as amounts from cash recovery charges, fuel supplement charges, and cash transfers from the Air Force. For example, cash recovery charges were paid by the military services, including the Air Force, using Overseas Contingency Operations funding to cover cash shortages in the TWCF in the early part of the Global War on Terrorism. TRANSCOM charged its customers cash recovery charges in fiscal years 2007 through 2014, with the exception of 2010.
ARA expenditures in the fiscal year of availability may be more or less than budgeted: For example, in fiscal year 2015, TRANSCOM did not receive revenue from other sources, resulting in the Air Force expending $404 million more from its Operations and Maintenance funds than requested to cover the ARA bill for that fiscal year. On the other hand, in the fiscal year 2016 Air Force Operations and Maintenance budget request, the Air Force requested $657 million for the ARA, and subsequently allotted $406 million to the ARA—about $251 million less than requested. This occurred because the cost of fuel declined in fiscal year 2016, and TRANSCOM did not bill the Air Force for the full amount allotted for the ARA by the Air Force. As a result, the Air Force contributed $122 million of the $406 million to the TWCF and used the remaining available amount for other readiness priorities. DOD and its components have considerable flexibility in using Operations and Maintenance funds and can redesignate appropriated funds among activity and subactivity groups in various ways. Air Force budget requests include some information on the ARA but omit details provided in budget requests prior to fiscal year 2010. Air Force budget officials stated the ARA budget information that was included for fiscal years 2007 through 2009 was changed for the fiscal year 2010 budget request as part of a DOD initiative to reduce the overall number of budget line items. For fiscal years 2007 through 2009 Air Force Operations and Maintenance budget requests, the amounts requested by the Air Force for the ARA were explicitly stated in the budget justification documents as part of a separate subactivity group line item. For fiscal years 2010 through 2017, the ARA amount was bundled with funding requests for other training requirements in the Air Force Operations and Maintenance budget justification documents, thus omitting specific details with respect to the ARA.
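The role of other revenue sources in the examples above can be sketched as a simple residual calculation. This is a hypothetical, simplified model with made-up figures; DOD's actual billing process is more involved:

```python
# Hypothetical sketch: the ARA covers the portion of airlift costs that
# rate revenue and other revenue sources do not. Figures are illustrative.

def ara_requirement(full_costs, rate_revenue, other_revenue=0):
    """Residual amount the Air Force must cover; never negative."""
    return max(0, full_costs - rate_revenue - other_revenue)

# A fiscal year with a shortfall: costs exceed all revenue, so an ARA
# bill is issued for the gap.
bill = ara_requirement(2_000_000_000, 1_500_000_000, 100_000_000)  # 400_000_000

# A fiscal year where workload surged and other sources offset fuel costs:
# revenue exceeds costs, so no ARA bill is issued (as in fiscal year 2010).
no_bill = ara_requirement(2_000_000_000, 2_100_000_000, 50_000_000)  # 0
```

The sketch shows why the ARA amounts requested, allotted, and expended can diverge: the residual depends on workload-driven revenue and other funding sources that are only known after the fact.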
Specifically, Air Force budget justification materials included the amount the ARA changed from one fiscal year to the next, but did not include the total ARA amount. In the annual President’s budget request submission, DOD requests specific amounts for Operations and Maintenance activities and includes information about (1) amounts for the next fiscal year for which estimates are submitted, (2) revisions to the amounts for the fiscal year in progress, and (3) reports on the actual amounts allotted to a particular activity or subactivity for the last completed fiscal year. The Standards for Internal Control in the Federal Government state that management should communicate the necessary quality information (internally and externally). According to Air Force budget officials, there is no requirement from the Office of the Under Secretary of Defense (Comptroller)/Chief Financial Officer to separately identify the ARA amount and related details in the Air Force Operations and Maintenance annual budget requests. Nevertheless, officials from the Air Force and the Office of the Under Secretary of Defense (Comptroller)/Chief Financial Officer agreed that it would be helpful to include additional information in the budget because of DOD and congressional interest. Without establishing specific requirements to present detailed ARA information in the annual Air Force Operations and Maintenance budget request, DOD and congressional decision-makers do not have sufficient information to make informed decisions about the level of funding necessary to cover airlift costs not recovered by the rates charged by TRANSCOM. TRANSCOM has not provided ARA estimates in time to inform Air Force budget requests. Air Force officials stated that they need to have TRANSCOM’s estimates by mid-June to be able to conduct analysis to strengthen confidence in the ARA budget request and obtain senior leadership approval.
The Air Force submits its Operations and Maintenance annual budget request to DOD in early July. However, TRANSCOM was not providing its ARA estimate until August. As a result, Air Force officials stated they have been developing their own ARA estimate based on historical average trends because they have not received information from TRANSCOM on time. TRANSCOM and Air Force officials agree that TRANSCOM—as the provider of transportation services—is in the best position to understand transportation workload demands. The Standards for Internal Control in the Federal Government state that management should use quality information that is appropriate, current, complete, accurate, accessible, and provided on a timely basis to achieve the entity’s objectives. Furthermore, management should use quality information to make informed decisions and evaluate the entity’s performance in achieving key objectives and addressing risks and should design control activities, such as policies, procedures, techniques, and mechanisms as needed to enforce management’s directives. In October 2017, Air Force and TRANSCOM officials told us they were working on a memorandum of understanding to improve the timing and communication of budgetary information from TRANSCOM to support the Air Force ARA budget request. Officials stated that the memorandum of understanding is expected to be completed by the end of fiscal year 2018. However, in May 2018, the draft memorandum that the Air Force provided for our review consisted of a 2-page template with a list of potential topics, and no substantive details regarding formalizing processes. Without developing sufficient detail on the formal processes and subsequently finalizing the memorandum of understanding, the Air Force and TRANSCOM will not be able to reasonably assure that the timing and communication of budgetary information from TRANSCOM are sufficient to support the Air Force Operations and Maintenance ARA annual budget request. 
TRANSCOM has a rate-setting process for airlift services, but producing accurate workload forecasts is challenging. Our analysis of TRANSCOM data showed that the airlift forecasting process produced increasingly inaccurate projections of actual workload. Producing accurate forecasts is challenging because TRANSCOM has not fully implemented (1) an effective process to gather workload projections from customers, (2) forecasting goals and metrics and a review of forecasting performance, and (3) an action plan to improve workload forecasts. TRANSCOM’s rates for airlift services are generally set to be competitive with commercial airlift services, in accordance with DOD guidance. Specifically, TRANSCOM operates five categories of airlift services, and according to documents and TRANSCOM officials, the rate-setting process for each category is as follows: Channel Cargo rates apply to military air cargo along established routes. The rates for this category generally cover about 65 percent of the cost to provide airlift cargo services and do not vary based on the type of aircraft used. Rates are benchmarked against commercial prices based on the weight of cargo using the following step-by-step process. Initially, International Heavyweight Air Tender price data from the prior year are checked for commercial rates on various routes. If no data are available for some routes, data from the closest country are used to develop average country-to-country rates, or a weighted average when there is more than one country-to-country combination. Once rates are developed, they are adjusted based on budget exhibits. The TRANSCOM Operations and Plans directorate is responsible for Channel Cargo forecasts to inform rate-setting for this category of service. Channel Passenger rates apply to military and civilian passengers flying on established routes.
The rates are benchmarked against commercial prices, recover about 85 percent of costs, and do not vary based on the type of aircraft used. Channel Passenger rate-setting guidance also uses a step-by-step process. General Services Administration city pairs are checked for comparable prices. If no General Services Administration rate is found, the Defense Travel System is checked. If the Defense Travel System does not have a rate, online travel websites are checked. If the online travel sites do not have a rate, then a prior standard rate per mile for that route is adjusted based on budget exhibits. The TRANSCOM Strategic Plans, Policy, and Logistics directorate is responsible for Channel Passenger forecasts to inform rate-setting. Special Assignment Airlift Missions/Contingency rates apply to full-plane charters providing exclusive service for specific users. Rates are generally determined by the type of aircraft, and those rates recover about 91 percent of costs for military aircraft and 100 percent of costs for commercial aircraft. Flight hour rates for military aircraft, and flight length (miles) and capacity used for commercial aircraft, are considered in the rate determinations. The TRANSCOM Operations and Plans Directorate is responsible for Special Assignment Airlift Missions/Contingency workload forecasts to inform rate-setting for this category of service. Joint Exercise Transportation Program rates apply to airlift services in support of realistic operational joint training. Rates are generally set in the same manner as the rates for the Special Assignment Airlift Missions/Contingency category, except that the TRANSCOM Operations and Plans Directorate is responsible for workload forecasting for the Joint Exercise Transportation Program. Training rates apply to those activities used to conduct programmed flying training, which generally includes a required number of sorties, flying hours, and aircrew training to support readiness.
Rates are set to recover 100 percent of the recorded costs because the Air Force is the sole customer for these missions, according to TRANSCOM and Air Force officials. Training rates are generally based on the type of aircraft and the cost per flight hour. According to TRANSCOM officials, the Air Mobility Command Air, Space and Information Operations Directorate is responsible for the flying hour model that determines requirements for this category of airlift services. TRANSCOM produces a forecast of its airlift workload to inform the development of the ARA budget request. According to TRANSCOM’s guidance, workload forecasts are to be developed using future demand derived from a combination of statistical methods and necessary adjustments for expected operational conditions. The basic principles used for workload forecasting are generally the same for all five categories of airlift services. According to TRANSCOM officials, forecasting methods are applied with some variation, a practice allowed under the forecasting instruction, depending on the category and on which TRANSCOM or Air Mobility Command entity is responsible for developing the forecast. For example, forecasts for the Joint Exercise Transportation Program and Training are affected more by requirements to support readiness and funding constraints. On the other hand, forecasts for Channel Cargo, Channel Passenger, and Special Assignment Airlift Missions/Contingency are affected by the transportation needs of the military services and combatant commands and are generally based on historical workload. Based on our analysis, workload forecasts were increasingly inaccurate for fiscal years 2007 through 2017.
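The Channel Passenger benchmark fallback described in the rate-setting process above can be sketched as a priority-ordered lookup. This is a hypothetical illustration: the lookup stubs, route codes, and figures are made up, and the budget-exhibit adjustment step is omitted:

```python
# Hypothetical sketch of the channel passenger rate fallback chain:
# GSA city pairs, then the Defense Travel System, then online travel
# sites, then a prior standard rate per mile. Data and routes are stubs.

def channel_passenger_rate(route, sources, prior_rate_per_mile, miles):
    """Return the first available benchmark rate for a route, falling
    back through sources in priority order."""
    for lookup in sources:
        rate = lookup(route)
        if rate is not None:
            return rate
    # No benchmark found: fall back to the prior standard rate per mile
    # (the budget-exhibit adjustment is omitted here).
    return prior_rate_per_mile * miles

# Illustrative usage with stub lookup functions:
gsa = {("IAD", "RMS"): 850.0}
sources = [
    lambda r: gsa.get(r),   # GSA city pairs
    lambda r: None,         # Defense Travel System (no rate found)
    lambda r: None,         # online travel sites (no rate found)
]
benchmarked = channel_passenger_rate(("IAD", "RMS"), sources, 0.25, 4000)  # 850.0
fallback = channel_passenger_rate(("IAD", "XXX"), sources, 0.25, 4000)     # 1000.0
```

The same fallback-chain shape fits the Channel Cargo process, with International Heavyweight Air Tender data and closest-country averages in place of the passenger sources.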
Specifically, we found that forecast inaccuracy (i.e., the variance between the forecast and the actual workload amounts aggregated across all five workload categories) averaged about 25 percent and was trending upward in absolute value for fiscal years 2007 through 2017, as shown in figure 4. In addition to the aggregate workload forecast being increasingly inaccurate, the accuracy of the workload forecasts across each of the five categories varies from year to year. For example:

In fiscal year 2008, Channel Cargo actual workload was about 17 percent lower than the forecast, and Special Assignment Airlift Missions/Contingency actual workload was about 12 percent higher than the forecast; and

In fiscal year 2016, Special Assignment Airlift Missions/Contingency actual workload was about 116 percent higher than the forecast, and the Joint Exercise Transportation Program actual workload was about 45 percent lower than the forecast.

For fiscal years 2007 through 2017, the workload categories with the largest absolute forecast inaccuracy were Special Assignment Airlift Missions/Contingency, Channel Cargo, and the Joint Exercise Transportation Program. Two of these categories (Special Assignment Airlift Missions/Contingency and Channel Cargo) also have the largest share of airlift services. However, all five workload categories had forecast inaccuracy of more than 15 percent in at least three of the eleven years we reviewed. The variance of forecasted workload from actual workload by airlift service category is presented in figure 5 below. Based on our analysis and discussions with TRANSCOM officials, TRANSCOM has not taken sustained actions to improve forecasting accuracy.
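The kind of forecast-inaccuracy measure discussed above can be computed as an absolute percentage variance, aggregated across categories. This is a hypothetical sketch with illustrative figures; the report's exact aggregation method may differ:

```python
# Hypothetical sketch of a forecast-inaccuracy measure: absolute
# percentage variance of actual workload from the forecast, per
# category and aggregated. Figures are illustrative, not TRANSCOM data.

def pct_variance(forecast, actual):
    """Percent by which actual workload deviates from the forecast."""
    return abs(actual - forecast) / forecast * 100

# Per-category variance (e.g., actual 16% above a forecast of 100 units):
single = pct_variance(100, 116)   # 16.0

# Aggregate inaccuracy across categories for one fiscal year, weighting
# each category's absolute error by its share of the total forecast:
forecasts = {"channel_cargo": 500, "channel_pax": 300, "saam": 200}
actuals   = {"channel_cargo": 415, "channel_pax": 310, "saam": 260}

total_forecast = sum(forecasts.values())                        # 1000
total_abs_error = sum(abs(actuals[k] - forecasts[k]) for k in forecasts)
aggregate_inaccuracy = total_abs_error / total_forecast * 100   # 15.5
```

Using absolute errors keeps over- and under-forecasts in different categories from canceling each other out, which is why category-level variances can be large even when the net total looks close.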
Specifically, we found that TRANSCOM has not fully implemented (1) an effective process to collect projected airlift workload information from its customers (i.e., the military services) to inform its forecasts, (2) metrics and goals for measuring and reviewing forecast accuracy, and (3) an action plan to improve workload forecasting.

TRANSCOM has not implemented an effective process for collecting projected airlift workload information: TRANSCOM officials told us they use historic workload data to establish a baseline and perform statistical analysis to estimate averages and trends according to their instructions. Next, forecasters use information from the military services and combatant commands that may affect each category of workload, if available, and adjust workload estimates as needed. However, according to TRANSCOM officials, personnel conducting forecasts have limited visibility over factors that may influence forecasts, such as demand for transportation services, due to the lack of information obtained from their customers (i.e., the military services and combatant commands). Attempts to collect information from the military services and combatant commands have been made on an ad hoc basis. For example, in April 2016 TRANSCOM’s Commander solicited information from the military services’ senior leadership regarding their future transportation requirements, including airlift needs. The message emphasized the importance of forecasting to inform budget requests and management decisions to improve operational efficiency. However, according to TRANSCOM officials, the Air Force—which is TRANSCOM’s largest customer for airlift services—was the only military service that provided the requested information in response to the TRANSCOM Commander’s one-time request.
According to TRANSCOM officials, the other military services have not provided the requested information for workload projections because the services do not understand how they would benefit from providing the information and because TRANSCOM’s terminology and processes are not familiar to the services. As a result, TRANSCOM’s ad hoc approach has not obtained quality information from its customers to use in forecasting workload. Standards for Internal Control in the Federal Government state that management should use quality information that is appropriate, current, complete, accurate, accessible, and provided on a timely basis to achieve the entity’s objectives. Furthermore, we found that other defense organizations have provided a mechanism for customers to routinely communicate projected workload information. For example, the Defense Logistics Agency and its customers work together to evaluate historical demand data for spare parts and tailor forecast plans for those spare parts based on projected future usage. To this end, communications with customers are expected to be consistent and to use terminology shared in common with customers. Options are presented in a manner that is readily understood by customers, in a format determined by customers’ needs, to encourage the most efficient and effective solutions available.

TRANSCOM no longer uses forecast accuracy metrics and has not established forecast accuracy goals: In 2012, TRANSCOM developed a forecasting process, and according to officials started providing forecast performance metric briefings to TRANSCOM senior leadership on a quarterly basis in fiscal year 2014. TRANSCOM’s overall forecast accuracy improved slightly in 2015. However, according to TRANSCOM officials, these forecast briefings were canceled after the first quarter of fiscal year 2016 because they were viewed as minimally useful for budgeting and were not used to position airlift capacity to meet operational needs.
In addition, TRANSCOM officials stated that they no longer measure forecast performance. We found that overall forecast inaccuracy was higher for fiscal years 2016 and 2017 than in any other year we reviewed, as indicated above in figure 4. However, TRANSCOM’s January 2015 forecasting instruction requires forecast accuracy metrics to be developed to support management decisions and forecast variance from actual workload to be reviewed. Furthermore, the Standards for Internal Control in the Federal Government state that management should define objectives in specific and measurable terms to enable the design of internal control for related risks, establish activities to monitor performance measures and indicators, and assess performance against plans, goals, and objectives set by the entity.

TRANSCOM does not have a corrective action plan for improving workload forecasts: TRANSCOM officials acknowledge that workload forecasting needs improvement and told us that TRANSCOM does not have an action plan to improve its forecasting processes to inform budgetary and operational decisions. In October 2013, TRANSCOM considered, but did not adopt, a process known as Sales and Operations Planning (S&OP), which is designed to help ensure senior management has visibility over issues, including forecasting. We reported that the Army implemented this process in 2013 after Army officials concluded that they could leverage commercial best practices to improve logistics performance (see sidebar). We discussed the S&OP process with TRANSCOM officials, and they told us that the possibility of adapting the process to military logistics was not readily accepted at TRANSCOM because of organizational resistance to change. Initial organizational resistance to change was also experienced by the Army, as discussed in our prior report.
However, according to the Army, implementing S&OP resulted in a 50 percent reduction in forecast error, and a decision was made to deploy the S&OP process for use across all Army depots and arsenals by the end of fiscal year 2018. Adopting a corrective action plan, or an approach such as S&OP, can help TRANSCOM focus and improve planning efforts, resulting in more accurate workload forecasting. Furthermore, according to TRANSCOM’s January 2015 forecasting instruction, opportunities to improve forecasts should be assessed. Additionally, Standards for Internal Control in the Federal Government state that management should complete and document corrective actions to remediate internal control deficiencies on a timely basis to achieve established objectives. Our prior work has also shown that organizations benefit from corrective action plans for improvement. TRANSCOM officials told us that producing accurate workload forecasts is challenging, and we agree that there are some inherent difficulties in accurately forecasting airlift workload on an annual basis. However, our prior work on aviation forecasting has noted that forecasting is inherently uncertain, but managing the risk related to that uncertainty is essential to making informed decisions. Improved forecasting, achieved by addressing the weaknesses we identified, could allow for more effective financial planning and enable more efficient airlift operations. For example, TRANSCOM estimated needing an ARA amount of $772 million for fiscal year 2016. However, according to our analysis of TRANSCOM financial records, the TWCF did not require support from ARA funds because actual revenue from airlift services exceeded its costs by $148 million in fiscal year 2016. Inaccurate forecasts can lead to unreliable budget requests and hinder the effective and efficient operational planning necessary to provide customers with the service they need.
For example, according to a 2017 Air Force Audit Agency report, flying channel passenger flights at 85 percent of capacity may result in estimated savings of about $30 million over a 6-year period. Our past work also shows that underutilization of cargo airlift capacity is a longstanding issue. Improving forecast accuracy would help TRANSCOM manage airlift services more efficiently, make better use of budgetary resources and airlift capacity, and produce a more accurate ARA budget estimate. In response to our findings and discussions, TRANSCOM officials stated they plan to begin reviewing TRANSCOM’s workload forecasting process and determine a path ahead in June 2018. However, the outcome and timeframes for this review are uncertain. Furthermore, TRANSCOM leadership still must approve and fully implement changes to forecasting processes, metrics, and goals. Unless TRANSCOM fully implements an effective process to obtain projected workload requirements from its customers on a routine basis, uses forecast accuracy metrics and establishes goals, and develops an action plan, airlift workload forecasting will not improve. We acknowledge that eliminating volatility entirely in the ARA budget request is unlikely, given that there will be unexpected and unpredictable workload adjustments due to changes in the global security environment or natural disasters. We also understand that improving workload forecasts through the use of goals, metrics, and an action plan for improvement will not eliminate the inherent volatility associated with the ARA budget request amount. However, these improvements would allow TRANSCOM to better manage the inherent risks associated with the accuracy of forecasts and improve ARA estimates used to inform future Air Force Operations and Maintenance budget requests. Each year DOD spends billions of dollars on airlift services flying personnel and cargo worldwide.
The clarity of budget estimates and the accuracy of forecasts for airlift services are essential for Congress and DOD to make informed decisions. Accordingly, Congress would benefit from detailed ARA information in Air Force budget requests, and this information would be improved by TRANSCOM providing timely information on the annual ARA estimate to the Air Force. Additionally, TRANSCOM continues to face challenges in forecasting its workload, which is a key factor in estimating the ARA. Until TRANSCOM establishes a process to collect projected workload information from its customers, uses forecast accuracy metrics and goals to monitor its performance, and implements a corrective action plan, forecast accuracy and ARA estimates are not likely to improve. We are making a total of five recommendations to DOD. The Secretary of Defense should ensure that the Under Secretary of Defense (Comptroller)/Chief Financial Officer establishes requirements to present details related to the ARA in the annual Air Force Operations and Maintenance budget request, including (1) amounts for the next fiscal year for which estimates are submitted, (2) revisions to the amounts for the fiscal year in progress, and (3) the actual amounts allotted for the last completed fiscal year. (Recommendation 1) The Secretary of Defense should ensure that the Secretary of the Air Force and the Commander, U.S. Transportation Command, in collaboration, develop sufficient detail on the formal processes and finalize their memorandum of understanding to improve the timing and communication of budgetary information to support the Air Force Operations and Maintenance Airlift Readiness Account annual budget request. (Recommendation 2) The Secretary of Defense should ensure that the Commander, U.S. Transportation Command, fully implements a process to obtain projected airlift workload from the military services and Combatant Commanders on a routine basis to improve the accuracy of its workload forecasts.
(Recommendation 3) The Secretary of Defense should ensure that the Commander, U.S. Transportation Command, uses forecast performance metrics and establishes forecast accuracy goals for the airlift workload. (Recommendation 4) The Secretary of Defense should ensure that the Commander, U.S. Transportation Command, develops a corrective action plan to improve the accuracy of its workload forecasting. (Recommendation 5) We provided a draft of this report to DOD for review and comment. In written comments, which are reprinted in appendix IV, DOD concurred with our recommendations and stated that it plans to take specific actions in response to our recommendations. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Diana Maurer at (202) 512-9627 or maurerd@gao.gov, or Asif Khan at (202) 512-9869 or khana@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To determine the extent to which ARA funds were requested, allotted, and expended by the Air Force from fiscal years 2007 through 2017, we analyzed Air Force budget request documents and underlying support documentation. We also analyzed information from the Air Force’s Automated Budget Interactive Data Environment Systems to determine the appropriated amounts allotted for ARA activities. Furthermore, we analyzed summary-level documents detailing expenditures from the Air Force and TRANSCOM for fiscal years 2007 through 2017 to establish trends. Moreover, we reviewed TRANSCOM’s procedures and supporting documentation for billing the Air Force for payment of the ARA.
Lastly, we interviewed DOD, Air Force, and TRANSCOM officials to gain an understanding of the general reasons for year-to-year variances and for differences between requested and expended amounts. To determine the extent to which the Air Force provided ARA information in its budget request to Congress and informed its request with information from TRANSCOM, we analyzed Air Force Operations and Maintenance budget justification documents to determine the type of ARA information (i.e., total budget request amount, changes from year to year, and other information) provided in the fiscal years 2007 through 2017 President's budget submissions. To understand the differences, if any, between the ARA information provided from year to year, we interviewed Air Force budget officials to obtain an explanation for changes in the reported information. In addition, we analyzed Air Force Operations and Maintenance budget justification documents and Transportation Working Capital Fund budget documents to determine if the ARA estimate was based on available information. We also discussed with Air Force and TRANSCOM officials future plans to change their procedures and the information considered in the development of the ARA estimate. Further, we compared the Air Force and TRANSCOM processes and procedures against Standards for Internal Control in the Federal Government, specifically standards regarding internal and external reporting and mechanisms to enforce management directives. To determine the extent to which TRANSCOM has implemented a process to set rates for airlift services and use workload forecasts to estimate the annual ARA funding request, we analyzed the processes TRANSCOM used to set rates it charges customers in various airlift workload categories for fiscal years 2007 through 2017. 
We also reviewed forecasting procedures and analyzed supporting documents provided by TRANSCOM; interviewed TRANSCOM officials to gain an understanding of how they implement these rate setting and forecasting procedures; and analyzed forecast and actual workload data provided by TRANSCOM for the same timeframe. We compared TRANSCOM’s processes against rate-setting and forecasting guidance and reviewed whether TRANSCOM used quality information to establish workload projections, established any performance measures and goals for forecasting its workload, and developed any efforts to improve its forecasting of workload. In addition, we interviewed TRANSCOM and Air Mobility Command officials and reviewed supporting documentation to gain an understanding of challenges that exist to producing accurate workload forecasts, and the relationship with the rate-setting and budgeting process. We obtained revenue, cost, workload, and ARA data in this report from budget documents, accounting reports, and Air Force and TRANSCOM records for fiscal years 2007 through 2017. We assessed the reliability of the data by (1) interviewing Air Force and TRANSCOM officials to gain an understanding of the processes used to produce the cash, revenue, cost, workload and ARA data; (2) reviewing prior work to determine if there were reported concerns with TRANSCOM’s data; (3) comparing cash balances, revenue, costs and workload data provided by TRANSCOM to the same data presented in the Air Force Working Capital Fund budgets for fiscal years 2007 through 2017; and (4) comparing ARA data to Air Force and TRANSCOM supporting documentation, or to Air Force Operations and Maintenance budget execution reports to support ARA reported amounts for fiscal years 2007 through 2017. On the basis of these procedures, we have concluded that these data were sufficiently reliable for the purposes of this report. To address all of our objectives, we conducted a site visit to U.S. 
Transportation Command Headquarters and Air Mobility Command at Scott Air Force Base, Illinois, and interviewed officials with the Office of the Undersecretary of Defense (Comptroller)/Chief Financial Officer, the Assistant Secretary of the Air Force (Financial Management and Comptroller), the U.S. Transportation Command, and the Air Mobility Command. We conducted this performance audit from August 2017 through September 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Air Force Working Capital Fund maintained a positive monthly cash balance throughout fiscal years 2007 through 2017. The Transportation Working Capital Fund (TWCF) is a part of the Air Force Working Capital Fund for cash management purposes. DOD working capital funds are authorized to charge amounts necessary to recover the full costs of goods and services provided. However, the TWCF is authorized to establish airlift customer rates to be competitive with commercial air carriers. Because of mobilization requirements, the resulting revenue does not always cover the full costs of airlift operations provided through the TWCF. To the extent that customer revenue is insufficient to support the costs of maintaining airlift capability, the Air Force shall provide appropriated funds. The Air Force Working Capital Fund and TWCF monthly cash balances are depicted in figure 6 below. Total costs for airlift services for fiscal years 2007 through 2017 were less than revenue collected for airlift services. Revenue came from rates charged to customers for services performed (workload-related revenue), the Airlift Readiness Account (ARA), and other revenue sources. 
For seven of the eleven years we reviewed, revenues exceeded costs, and for four of the eleven years, costs exceeded revenue. For the eleven-year period we reviewed, workload-related revenue ($73 billion) was not sufficient to pay for the full costs of airlift services. The remaining revenue included $2 billion from the ARA and $7 billion from other revenue sources. Diana Maurer, (202) 512-9627 or maurerd@gao.gov, or Asif A. Khan, (202) 512-9869 or khana@gao.gov. In addition to the contacts named above, John Bumgarner (Assistant Director), Doris Yanger (Assistant Director), John E. “Jet” Trubey (Analyst-in-Charge), Pedro Almoguera, John Craig, Jason Kirwan, Amie Lesser, Felicia Lopez, Keith McDaniel, Clarice Ransom, and Mike Silver made key contributions to this report.
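The relationship between workload-related revenue, supplemental revenue, and total costs reported in this section can be checked with a quick arithmetic sketch. The figures are the report's rounded eleven-year totals; the comparison itself is illustrative, and exact totals differ slightly due to rounding:

```python
# Rounded eleven-year (FY 2007-2017) airlift figures from this report,
# in billions of dollars. The $81 billion cost figure is the approximate
# total spending reported elsewhere in the report.
workload_revenue = 73.0  # revenue from rates charged to customers
ara_revenue = 2.0        # Airlift Readiness Account funds
other_revenue = 7.0      # other revenue sources
total_costs = 81.0       # approximate total cost of airlift services

total_revenue = workload_revenue + ara_revenue + other_revenue  # 82.0

# Workload-related revenue alone falls short of full costs; the ARA and
# other sources make up the difference.
shortfall = total_costs - workload_revenue  # 8.0

print(f"Total revenue: ${total_revenue:.0f} billion")
print(f"Cost not covered by customer rates: ${shortfall:.0f} billion")
```

The sketch makes the report's point concrete: customer rates covered roughly 90 percent of costs, with the ARA and other revenue sources covering the remainder.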
TRANSCOM reported spending about $81 billion flying personnel and cargo worldwide in fiscal years 2007-2017. TRANSCOM manages the Transportation Working Capital Fund (TWCF) to provide air, land, and sea transportation for the Department of Defense (DOD). TRANSCOM sets some rates it charges below costs to be competitive with commercial air service providers. The Air Force generally pays for expenses not covered by TWCF rates through the ARA. A House Report accompanying the National Defense Authorization Act for Fiscal Year 2018 included a provision for GAO to review the ARA and the TWCF. GAO's report discusses the extent to which (1) ARA funds were requested, allotted, and expended for airlift activities; (2) the Air Force provided ARA information in its budget requests and informed its requests with information from TRANSCOM; and (3) TRANSCOM has implemented a rate-setting process for airlift services and uses workload forecasts to estimate the annual ARA funding request. GAO analyzed ARA funds and costs and revenues for airlift services for fiscal years 2007-2017; interviewed officials about the ARA budget preparation process; and analyzed TRANSCOM rate-setting and forecasting guidance and results. For fiscal years 2007 through 2017, the Air Force requested $2.8 billion from Congress for Airlift Readiness Account (ARA) requirements, as part of its annual Operations and Maintenance appropriation. The Air Force allotted $2.8 billion (i.e., directed the use of the appropriated funds) and expended $2.4 billion of these funds for the ARA. U.S. Transportation Command (TRANSCOM) uses ARA funds to support airlift operations. Specifically, the Air Force requests ARA funds in its annual Operations and Maintenance budget request and subsequently provides these funds to TRANSCOM to assist in paying for airlift services (see figure). 
Amounts requested, allotted, and expended varied from year to year, in some cases by hundreds of millions of dollars, in part due to changes in the amount of airlift services provided by TRANSCOM. The Air Force has not included specific ARA information in its budget requests since fiscal year 2010. For fiscal years 2007 through 2009, Air Force budget requests explicitly stated ARA amounts. Air Force officials stated their budget presentation was changed to reduce the overall number of budget line items. In addition, TRANSCOM has not been providing cost estimates in time to support Air Force budget preparations. Specifically, TRANSCOM has been providing this information 2 months later than the Air Force needs it to support budget deliberations. The Air Force and TRANSCOM have taken some initial steps to address this issue, but these efforts lack substantive details regarding formalizing the necessary processes to ensure timely information. Until the Air Force and TRANSCOM resolve this issue, Congress will not have sufficient and complete information to inform its decisions on appropriating funds for the ARA. TRANSCOM has a rate-setting process but faces challenges producing accurate workload forecasts. To provide information to its customers during the annual budget development process, TRANSCOM sets airlift rates in advance of the fiscal year of expenditure. Workload forecasts influence the rate-setting process. Inaccurate forecasts can lead to unreliable budget requests and hinder effective and efficient operational planning. GAO found that forecast inaccuracy (i.e., the variance between the forecast and the actual workload) averaged 25 percent and has worsened since fiscal year 2007. GAO found that TRANSCOM faces several workload forecasting challenges. Specifically, TRANSCOM lacks an effective process to gather workload projections from customers. 
It also no longer uses forecasting accuracy metrics and has not established forecast accuracy goals to monitor its performance. Furthermore, TRANSCOM does not have an action plan to improve its increasingly inaccurate workload forecasts. Taking steps to address these issues would enable TRANSCOM to improve the accuracy of workload forecasts. GAO is making five recommendations to DOD, including improving the clarity and completeness of budget estimates, and taking steps to improve the accuracy of airlift workload forecasts. DOD concurs with GAO's recommendations.
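The forecast inaccuracy discussed above is described as the variance between forecast and actual workload. A minimal sketch of one common way such a metric can be computed is the mean absolute percentage error; this formula and the data values below are illustrative assumptions, not necessarily TRANSCOM's actual method or numbers:

```python
def mean_absolute_percentage_error(forecasts, actuals):
    """Average of |forecast - actual| / actual, expressed as a percentage."""
    errors = [abs(f - a) / a for f, a in zip(forecasts, actuals)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical annual workload forecasts vs. actuals (e.g., flying hours);
# these numbers are illustrative only.
forecasts = [1000, 1200, 900]
actuals = [800, 1000, 1000]

print(f"{mean_absolute_percentage_error(forecasts, actuals):.1f}% inaccuracy")
# prints "18.3% inaccuracy"
```

Tracking a metric like this over time against an explicit accuracy goal is the kind of monitoring the report's recommendations envision.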
Effective communication is vital to first responders’ ability to respond to emergencies and to ensure their safety. For example, first responders use public-safety communications systems to gather information, coordinate a response, and request additional resources and assistance from neighboring jurisdictions and the federal government. OEC has taken a number of steps aimed at supporting and promoting the ability of public-safety officials to communicate in emergencies and work toward operable and interoperable emergency communications nationwide. OEC develops policy and guidance supporting emergency communications across all levels of government and across various types of emerging technologies such as broadband, Wi-Fi, and NextGen 911, among others. OEC also provides technical assistance—including training, tools, and online and on-site assistance—for federal, state, local, and tribal first responders. First responders use different communications systems, such as land mobile radio (LMR), commercial wireless services, and FirstNet’s network. LMR: These systems are the primary means for first responders to use voice communications to gather and share information while conducting their daily operations and coordinating their emergency response efforts. LMR systems are intended to provide secure, reliable voice communications in a variety of environments, scenarios, and emergencies. Across the nation, there are thousands of separate LMR systems. Commercial wireless services: Public-safety entities often pay for commercial wireless services to send data transmissions such as location information, images, and video. Some jurisdictions also use commercial wireless services for voice communications. 
Nationwide dedicated-broadband network: Consistent with the law, FirstNet is working to establish a nationwide dedicated network for public-safety use that is intended to foster greater interoperability, support important data transmissions, and meet public-safety officials’ reliability needs. In creating FirstNet in 2012, Congress provided it with $7 billion in federal funds for the network’s initial build-out and valuable spectrum for the network to operate on. Unlike current LMR systems, the devices operating on FirstNet’s network will use the same radio frequency band nationwide. It is expected that these devices will be interoperable among first responders using the network because the devices will be built using the same open, non-proprietary, commercially available standards. Communications systems must work together, or be interoperable, even though the systems or equipment vendors may differ. The interoperability of emergency communications enables first responders and public-safety officials to use their radios and other equipment to communicate with each other across agencies and jurisdictions when needed and as authorized, as shown in figure 1. OEC is tasked with developing and implementing a comprehensive national approach to advance interoperable communications capabilities. For example, according to OEC, it supports and promotes communications used by emergency responders and government officials and leads the nation’s operable and interoperable public-safety and national security/emergency preparedness communications efforts. OEC notes that it plays a key role in ensuring federal, state, local, tribal, and territorial agencies have the necessary plans, resources, and training needed to support operable and interoperable emergency communications. To help in this effort, OEC instituted a coordination program that established regional coordinators across the nation. 
According to OEC, its coordinators work to build trusted relationships, enhance collaboration, and stimulate the sharing of best practices and information among all levels of government, critical infrastructure owners and operators, and key non-government organizations. OEC developed the National Emergency Communications Plan in 2008 and worked with federal, state, local, and tribal jurisdictions to update it in 2014 to reflect an evolving communications environment. The long-term vision of the plan—which OEC views as the nation’s current strategic plan for emergency communications—is to enable the nation’s emergency-response community to communicate and share information across all levels of government, jurisdictions, disciplines, and organizations for all threats and hazards, as needed and when authorized. To help it accomplish this mission, OEC works with three emergency communications advisory groups: SAFECOM, the Emergency Communications Preparedness Center (ECPC), and the National Council of Statewide Interoperability Coordinators (NCSWIC). These organizations promote the interoperability of emergency communications systems by focusing on technologies including, but not limited to, LMR and satellite technology. SAFECOM: According to the 2018 SAFECOM Strategic Plan, SAFECOM develops products and completes a range of activities each year in support of its vision and mission, including providing a national view of public-safety priorities and challenges, developing resources and tools aligned to the 2014 National Emergency Communications Plan, and collaborating with partner organizations to promote the interoperability of emergency communications. One of the products developed by SAFECOM each year is the Guidance on Emergency Communications Grants. SAFECOM consists of more than 50 members that represent local, tribal, and state governments; federal agencies; state emergency responders; and intergovernmental and national public-safety organizations. 
ECPC: The ECPC is an interagency collaborative group that provides a venue for coordinating federal emergency-communications efforts. The ECPC works to improve coordination and information sharing among federal emergency-communications programs. The ECPC does this by serving as the focal point for emergency communications issues across the federal agencies; supporting the coordination of federal programs, such as grant programs; and serving as a clearinghouse for emergency communications information, among other responsibilities. The ECPC has 14 member agencies that are responsible for setting its priorities. NCSWIC: This council consists of SWICs and their alternates from 50 states, 5 territories, and the District of Columbia. According to SAFECOM, NCSWIC develops products and services to assist the SWICs with leveraging their relationships, professional knowledge, and experience with public-safety partners involved in interoperable communications at all levels of government. Additionally, in 2013, FirstNet established the PSAC to provide advice to FirstNet. The committee is composed of members who represent local, tribal, and state public-safety organizations; federal agencies; and national public-safety organizations. FEMA is responsible for coordinating government-wide disaster response efforts, including on-the-ground emergency communications support and some technical assistance. For example, FEMA’s regional emergency-communications coordinator is responsible for providing emergency communications assistance on an as-needed basis and coordinating FEMA’s tactical communications support during a disaster or emergency. FEMA also provides a range of grant assistance to state, local, tribal, and territorial entities, including preparedness grants that can be used for emergency communications. As noted above, in November 2018, legislation was signed into law that reorganized and renamed NPPD and OEC. 
Previously, OEC was one of five divisions under the Office of Cybersecurity and Communications, which in turn was one of five divisions within NPPD. However, NPPD has been renamed the Cybersecurity and Infrastructure Security Agency, and OEC was renamed the Emergency Communications Division and elevated to one of three direct reporting divisions within the new agency. See figure 2 for an illustration of changes made to OEC’s organizational placement. OEC and FEMA have responsibilities for developing and implementing grant guidance for grantees using federal funds for interoperable emergency communications. Specifically, OEC and FEMA officials told us FEMA is responsible for administering the grants, and OEC coordinates emergency communications grant guidance annually through SAFECOM’s Guidance on Emergency Communications Grants. We reviewed OEC’s and FEMA’s collaborative efforts related to grant guidance and found that their efforts generally follow our previously identified leading practices for effective interagency collaboration, as described below. Written Guidance and Agreements. Agencies that formally document their agreements can strengthen their commitment to working collaboratively. OEC and FEMA formalized their coordination efforts for interoperable emergency communications grants in a memorandum of agreement in 2014. This memorandum assigned OEC and FEMA responsibilities and established a joint working group to develop standard operating procedures governing coordination between the agencies; OEC said these procedures were drafted the following year but not formally approved by FEMA. We also reported that written agreements are most effective when the collaborators regularly monitor and update them. When we started our review, OEC and FEMA officials told us that they had not updated the memorandum of agreement, which included the draft standard operating procedures as an appendix. 
However, the agencies approved an updated memorandum of agreement and standard operating procedures, and OEC provided them to us in July 2018. Leadership. When buy-in is required from multiple agencies, involving leadership from each can convey the agencies’ support for the collaborative effort. According to OEC and FEMA officials, their grants coordination efforts include high-level leadership. Specifically, senior leaders from both agencies signed the 2014 and 2018 memorandums of agreement. Also, OEC officials told us that their leaders in the grants program office are responsible for overseeing the collaborative effort. Bridging Organizational Culture. Collaborating agencies should establish ways to operate across agency boundaries and address their different organizational cultures. OEC and FEMA operate across agency boundaries in several ways. First, both agencies told us that they participate in the ECPC Grants Focus Group, whose members coordinate across federal grant programs to support interoperable emergency communications. The group reviews SAFECOM guidance and, according to FEMA officials, meets on a quarterly basis. Second, OEC officials said the agencies foster open lines of direct communication via conference calls, e-mail correspondence, and in-person meetings. OEC and FEMA officials told us their communications include sharing and reviewing language in FEMA’s notices that announce grant opportunities and OEC’s SAFECOM guidance. Third, the agencies said that OEC officials conduct emergency-communications-related trainings and briefings for FEMA at least once a year. According to OEC officials, these trainings have included a discussion on the movement toward broadband and FirstNet. Finally, FEMA officials told us that their program analysts have attended conferences with OEC to speak to the SWICs about grant programs. 
They said the program analysts explained how the grant money can be leveraged to support projects within the individual states and answered questions about the grants. OEC officials said having FEMA attend conferences to discuss specific grant information is useful for public-safety stakeholders. Clarity of Roles and Responsibilities. Collaborating agencies can get clarity when they define and agree upon their respective roles and responsibilities. As part of the 2014 and 2018 memorandums of agreement, OEC and FEMA established clear responsibilities for how each agency will support the grants coordination effort. For example, both offices were responsible for assigning experienced program staff and contributing to the development of standard operating procedures by attending meetings and conducting research. Also, the standard operating procedures clarify how OEC and FEMA will share information, solicit input on grants guidance language, and review grant applications. Participants. Including relevant participants helps ensure individuals with the necessary knowledge, skills, and abilities will contribute to the collaborative effort. OEC and FEMA identify points of contact in their memorandums of agreement. According to OEC officials, they did not always work with the correct FEMA staff before the 2014 memorandum was developed. Also, FEMA officials told us that their grants program staff who participate in the coordination effort with OEC perform those specific responsibilities as a collateral duty on an as needed basis. According to OEC officials, OEC’s performance plans outline coordination with FEMA and areas related to the agencies’ memorandum of agreement for the staff who handle grant issues. OEC and FEMA officials said participants’ responsibilities include serving as technical subject matter experts and reviewing language for grants guidance and notices of funding opportunities. Resources. 
Collaborating agencies should identify the human, financial, and technological resources they need to initiate or sustain their efforts. OEC and FEMA staff their collaborative effort with employees from their grants offices to address their human resource needs. These employees perform work related to emergency communications grants as outlined in their performance plans or as a collateral duty. The agencies also provide OEC access to FEMA’s non-disaster grants system to share grantee information. According to OEC and FEMA officials, their collaboration efforts do not require either agency to obligate funds or use special technology, such as online information-sharing tools. Outcomes and Accountability. Collaborating agencies that create a means to monitor and evaluate their efforts can better identify areas for improvement. According to OEC and FEMA documentation, the primary goal of the draft standard operating procedures was to prevent grantees from improperly using federal funds, such as by purchasing equipment that is not interoperable. OEC officials said the biggest gap in those standard operating procedures was that they did not include a monitoring program to ensure grantees were compliant with grant guidance, which includes requirements for interoperability. OEC’s and FEMA’s July 2018 standard operating procedures established a process to track and monitor grantee compliance. They also identified a process for assessing the information they collect and how it will be shared between OEC and FEMA and, when appropriate, with other stakeholders. At the time of our review, OEC and FEMA officials told us they had not implemented the monitoring procedures because the grants for the 2018 grant cycle had not yet been awarded. Accordingly, we could not evaluate the effectiveness of the new procedures to monitor and assess grantee compliance, and without conducting such an evaluation, we could not determine whether OEC’s and FEMA’s efforts align with the key practice in this area. 
Senior officials from both agencies said the monitoring procedures would be updated if they do not work as intended. After being established in 2007, OEC initially focused on enhancing the interoperability and continuity of LMR systems. However, according to OEC officials, its programs, products, and services have adapted and evolved to incorporate new modes of communications and technologies. Additionally, OEC’s technical assistance offerings for emergency communications technology have evolved over time as new technologies have come into use. For example, OEC’s technical assistance catalog contains new or enhanced offerings on topics related to broadband issues such as FirstNet’s network, Next Generation 911, alerts and warnings, and incident management. In 2014, DHS released its second National Emergency Communications Plan, which identified the need to focus on broadband technologies, including FirstNet’s nationwide public-safety broadband network. One of the plan’s top priorities is “ensuring emergency responders and government officials plan and prepare for the adoption, integration, and use of broadband technologies, including the planning and deployment of the nationwide public-safety broadband network.” To meet this priority, OEC officials told us that they provide stakeholders with a wide range of products and services to help prepare for the adoption, integration, and use of broadband. For instance, officials said that they leverage OEC’s governance groups—SAFECOM, NCSWIC, and ECPC—to develop products and services and to identify specific challenges and requirements regarding broadband. Additionally, OEC officials told us that they coordinate regularly with FirstNet staff and invite FirstNet to meet and brief the stakeholder community on the latest deployment information. 
However, OEC officials told us that FirstNet’s network is one option available to public-safety and government officials to access broadband communications and information sharing and explained that OEC maintains a neutral position for all technologies and vendors. Accordingly, OEC is not responsible for promoting any vendor solutions, including FirstNet’s network, and there is no requirement for OEC to do so. Additionally, five of six OEC coordinators we interviewed told us that FirstNet’s network is only one of several emergency-communications technology options and that OEC should continue to provide information to public-safety stakeholders regarding other providers. For example, there are commercial carriers that provide wireless broadband services, and we have previously reported that these commercial carriers could choose to compete with FirstNet. According to OEC officials, prior to the start of each fiscal year, OEC engages with stakeholders to gather feedback on new or revised technical assistance offerings, as well as updates to existing plans and documents. OEC officials told us that they expect an increase in technical assistance requests that focus on issues related to mobile data use, broadband governance, standard operating procedures, and policies and procedures. According to OEC officials, OEC has delivered more than 2,000 technical-assistance-training courses and workshops since 2007, and OEC will continually update its technical assistance offerings to incorporate new modes of communications and technologies into training, exercises, and standard operating procedures for its stakeholders. The majority (7 of 10) of public-safety organizations that we interviewed told us that OEC sufficiently incorporates information regarding FirstNet’s network into its guidance and offerings. 
For example, officials from 6 of 10 organizations that we interviewed told us that OEC must strike a balance between FirstNet’s network and other emerging technologies, and that OEC has successfully accomplished this task. Additionally, the majority of SWICs responded to our survey that it is at least moderately important for OEC to incorporate the FirstNet network and emerging technologies into its written guidance, technical assistance offerings, training opportunities, workshops, and grant guidance. Furthermore, in most cases, SWICs responded that OEC has incorporated FirstNet’s network and emerging technologies into these areas, as follows: FirstNet network. In our survey, the majority of SWICs responded that OEC has incorporated, to a large or moderate extent, FirstNet’s network into its written guidance (65 percent) and technical assistance offerings (59 percent), and half of SWICs said the same for OEC’s workshops. However, fewer SWICs reported that OEC incorporated FirstNet’s network, to a large or moderate extent, into its training opportunities (39 percent) and grant guidance (33 percent). Emerging technologies. The majority of SWICs reported that OEC has incorporated, to a large or moderate extent, emerging technologies into its written guidance (87 percent); technical assistance offerings (81 percent); training opportunities (74 percent); workshops (78 percent); and grant guidance (56 percent). See figure 3 for complete survey data regarding SWICs’ views on the extent that OEC has incorporated FirstNet’s network and emerging technologies into its offerings. In surveying SWICs on the usefulness of OEC’s efforts to incorporate FirstNet’s network and emerging technologies into its offerings, we found the following: FirstNet network. 
The majority of SWICs reported that OEC’s efforts to incorporate FirstNet’s network into its written guidance (67 percent), technical assistance offerings (59 percent), and workshops (59 percent) have been very or moderately useful. However, less than a majority of SWICs reported that OEC’s efforts to incorporate FirstNet’s network into its training opportunities (46 percent) and grant guidance (40 percent) have been very or moderately useful. Emerging technologies. The majority of SWICs reported that OEC’s efforts to incorporate emerging technologies into its written guidance (93 percent), technical assistance offerings (85 percent), training opportunities (74 percent), workshops (85 percent), and grant guidance (72 percent) have been very or moderately useful. See figure 4 for complete survey data regarding SWICs’ views on the usefulness of OEC’s efforts to incorporate FirstNet’s network and emerging technologies into its offerings. Even following the implementation of FirstNet, public-safety stakeholders told us they expect OEC will play an important role in ensuring interoperable emergency communications, both regarding the FirstNet network and other technologies. For example, 45 of 54 SWICs we surveyed (83 percent) reported that OEC will likely have a large or moderate role in ensuring interoperable emergency communications once FirstNet’s network is fully operational. Additionally, nearly all (9 of 10) public-safety organizations we interviewed said that they believe OEC will continue to play an important role in ensuring interoperable emergency communications after the implementation of FirstNet’s network. OEC is required to conduct extensive nationwide outreach to support and promote interoperable emergency-communications capabilities by state, regional, local, and tribal governments and public-safety agencies in the event of natural disasters, acts of terrorism, and other man-made disasters. 
According to federal standards for internal control, management should externally communicate the necessary quality information to achieve the entity's objectives. This includes communicating with external parties and using the appropriate methods of communication. The federal standards state that management should periodically assess the entity's methods of communication so that the organization has the appropriate tools to communicate quality information throughout and outside of the entity on a timely basis. Most public-safety organizations we interviewed told us that OEC communicates with them frequently through committee meetings and other means. For example, 9 of the 10 organizations told us that a key form of communication between their organization and OEC is participation in emergency-communications advisory groups such as SAFECOM, NCSWIC, and PSAC. Furthermore, OEC officials reported that OEC's guidance documents, plans, tools, and technical assistance offerings are formally provided to the public-safety community through the SAFECOM, NCSWIC, and ECPC distribution lists. Governing body representatives then distribute the information to their organizations and stakeholders. These documents are also available on DHS's website. In addition, 4 of the 10 organizations told us that they regularly have direct communications with OEC staff. The large majority of SWICs responded that they are very or moderately satisfied with the communication efforts from both OEC headquarters (81 percent) and OEC coordinators (93 percent). However, some stakeholders identified communication challenges as well as opportunities for OEC to improve communication.
For example, approximately one quarter (26 percent) of SWICs said that OEC does not communicate training well, and these SWICs reported that they are either unaware of OEC training opportunities related to FirstNet's network and other emerging technologies, or that they mostly learn about OEC training opportunities from other sources. See figure 5 below for additional survey information regarding SWICs' views on how well OEC communicates training opportunities related to FirstNet's network and other emerging technologies. Also with respect to OEC's communication efforts with stakeholders, 4 of 6 OEC coordinators and 3 of 10 public-safety organizations we interviewed, along with 26 of the 54 SWICs (48 percent) we surveyed, identified the need for OEC to use additional tools or approaches for improving communication with SWICs and the public-safety community. For example, one coordinator said that there are public-safety stakeholders who are unaware of OEC. Similarly, representatives from a public-safety organization we interviewed told us that OEC should help public-safety stakeholders better understand what OEC does. Both the OEC coordinator and public-safety stakeholders in these examples identified the need for OEC to use social media to improve public-safety stakeholders' understanding of OEC and its offerings. Additionally, an OEC coordinator told us that each region is different, and unless there is an OEC coordinator who is proactive about communicating information to the public-safety community, important information does not get out to the appropriate people. The coordinator also said that it is difficult to communicate information to all of the needed stakeholders because he is solely responsible for communicating with many public-safety entities and jurisdictions within multiple states.
Furthermore, a SWIC reported that other organizations use social media for communicating during disasters and for notifying interested parties about events and trainings, and that OEC should do the same. OEC officials told us that NPPD recently established a Twitter account that OEC has used to increase awareness of programs, products, and services. However, from the establishment of the account in February 2018 through September 2018, only 23 of NPPD's 280 tweets and retweets (8.2 percent) mentioned OEC, 15 of which occurred in March 2018. In addition to social media, some public-safety organizations and SWICs identified additional tools or approaches that OEC could use to improve communication with the public-safety community. These tools and approaches include designating an intergovernmental specialist or liaison within OEC to coordinate with public-safety stakeholders, developing additional regionally focused meetings such as conferences and workshops, and creating online or distance-learning opportunities (e.g., online training, webinars, and online chat or bulletin board services). Although OEC officials told us that they employ mechanisms to understand the effectiveness of OEC's programs, products, and services, we found OEC has not specifically assessed its methods of communication. For example, OEC analyzes feedback forms provided at meetings and stakeholder engagements, gathers direct input from stakeholders through in-person and phone discussions and e-mail, tracks the open rate of e-mails and website and blog post traffic, and reviews social media analytics for specific event campaigns. At the time of our review, OEC officials told us that they were developing a formal performance-management program to measure the impact of OEC's programs on the public-safety and national security/emergency preparedness communities.
However, these broad efforts aimed at reviewing the overall programs are not designed for the specific purpose of assessing OEC's methods of communication, and OEC does not have any plans in place for doing so. Lacking an assessment of its methods of communication, OEC may be missing opportunities to learn which tools and approaches are the most effective and to use those to deliver timely information to public-safety stakeholders. As noted above, this can result in public-safety officials missing trainings or not receiving other helpful information. Furthermore, not using additional methods of communication or tools could contribute to uncertainty among the public-safety community about OEC's mission and its efforts to improve the interoperability of emergency communications. OEC has multiple efforts supporting interoperable emergency communications that the public-safety community relies on to better respond to emergency situations. Although public-safety stakeholders we contacted were generally satisfied with OEC's communications efforts, OEC could be missing opportunities to use additional tools and approaches, such as social media, to improve communication with public-safety officials. Absent an assessment of its methods of communication, OEC cannot ensure it is using the best methods to provide relevant and timely information on training opportunities, workshops, technical assistance offerings, and other emergency-communications information to the public-safety community. OEC should assess its methods of communication to help ensure it has the appropriate tools and approaches to communicate quality information to public-safety stakeholders, and as appropriate, make adjustments to its communications strategy. (Recommendation 1) We provided a draft of this report to DHS for review and comment. In response, DHS provided written comments, which are reprinted in appendix III.
DHS concurred with our recommendation and provided an attachment describing the actions it would take to implement the recommendation. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report examines (1) the Office of Emergency Communications' (OEC) and the Federal Emergency Management Agency's (FEMA) collaborative efforts to develop and implement guidance for grantees using federal grants for interoperable emergency communications; (2) how OEC incorporates FirstNet's nationwide public-safety broadband network and other emerging technologies into its plans and offerings, and stakeholders' views regarding those efforts; and (3) the extent to which OEC has assessed its methods of communication. To evaluate OEC's and FEMA's collaborative efforts to develop and implement grant guidance, we collected and reviewed documentation relevant to the collaborative effort, including memorandums of agreement, standard operating procedures, and meeting agendas. We assessed OEC's and FEMA's actions against the seven key considerations for interagency collaboration. We also interviewed OEC and FEMA Grant Programs Directorate officials who have responsibilities for Department of Homeland Security (DHS) grants. We asked them to discuss their approach to interagency collaboration, including the process to jointly develop grant guidance language.
We asked agency officials questions that were based on the key considerations for implementing interagency collaborative mechanisms that we identified in a prior report. To determine how OEC has incorporated FirstNet's network and other emerging technologies into its plans and offerings, we reviewed relevant OEC documentation, including fact sheets and technical assistance guides. We also reviewed the 2014 National Emergency Communications Plan (NECP) and OEC's March 2017 biennial report to Congress on the progress toward meeting NECP goals. We interviewed OEC headquarters officials about the agency's efforts to date, including how OEC develops its offerings and workshops and communicates this information to the public-safety community. We also interviewed 6 of 10 OEC coordinators using a semi-structured interview format to get on-the-ground perspectives from OEC staff who serve as points of contact for public-safety stakeholders. We selected OEC coordinators to achieve variety across geography, population density, tribal presence, and territory representation. We interviewed OEC coordinators to obtain their perspectives as subject matter experts, but their views should not be attributed to OEC's official agency position. In addition, to obtain stakeholders' views on OEC's efforts to incorporate FirstNet's network and other emerging technologies into plans and offerings, we surveyed all 54 statewide interoperability coordinators (SWIC) from 48 states, five territories, and the District of Columbia. We obtained a list of SWICs from DHS and confirmed additional contact information via e-mail. We conducted a web-based survey to learn SWICs' perspectives on issues including the importance of incorporating FirstNet's network and other emerging technologies into OEC's plans and offerings, OEC's communication with the public-safety community, and SWICs' level of satisfaction with OEC's efforts.
To ensure the survey questions were clear and accurately addressed the relevant terms and concepts, we pretested the survey with SWICs from three states: Illinois, Massachusetts, and Texas. These SWICs were selected to obtain perspectives both from officials who have served in the role for at least several years and from SWICs who are new to the position. We administered our survey from May 2018 to July 2018 and received 54 responses for a 100 percent response rate. We also used a semi-structured interview format to obtain views from representatives of 10 public-safety organizations who have expertise in public-safety and federal emergency-communications efforts (see table 1). To identify relevant organizations, we reviewed our prior report that identified 34 organizations that are members of both OEC's SAFECOM advisory group and FirstNet's Public Safety Advisory Committee (PSAC). We researched the members to help determine the extent to which each organization is involved in issues related to our review. We selected 10 public-safety organizations to interview on the basis of (1) this research, (2) information from DHS, and (3) a literature review. Because one association declined our request for an interview, we contacted and interviewed another relevant organization from the original list of 34 member organizations. The views shared by the representatives we interviewed are not generalizable to all public-safety organizations that interact with OEC; however, we were able to secure the participation of organizations that focus on various public-safety issues across federal, state, local, and tribal jurisdictions and thus believe their views provide a balanced and informed perspective on the topics discussed. To evaluate the extent that OEC has assessed its methods of communication, we reviewed OEC's documentation for collecting stakeholders' feedback.
We also reviewed the interview responses from OEC officials and the public-safety organizations listed in table 1 and the SWIC survey data pertaining to OEC's communications efforts. We assessed OEC's efforts against federal standards for internal control regarding external communications and periodic evaluation of its methods of communication.

The questions we asked in our survey of statewide interoperability coordinators (SWIC) and the aggregate results of responses to the closed-ended questions are shown below. We do not provide results for the open-ended questions. We surveyed all SWICs from 48 states, five territories, and the District of Columbia. We administered our survey from May 2018 to July 2018 and received 54 responses for a 100 percent response rate. Due to rounding, the aggregated results for each closed-ended question may not add up to exactly 100 percent. For a more detailed discussion of our survey methodology, see appendix I.

1. What best describes the Statewide Interoperability Coordinator (SWIC) in your state?
1a. If you selected "Other," please explain. (Written responses not included)
2. Does the SWIC also serve in the role of the FirstNet State Point of Contact (SPOC)?
2a. If no, how often does the SWIC coordinate with the SPOC on FirstNet's nationwide public safety broadband network?
2b. If you selected "rarely or never," please explain. (Written responses not included)

The questions in this section ask your opinion about OEC's efforts to help the public safety community improve interoperable emergency communications capabilities. This section will be about FirstNet's nationwide public safety broadband network.

3. In your opinion, how important is it for OEC to incorporate FirstNet's nationwide public safety broadband network into the following areas? Please specify the other area in the box below. (Written responses not included)
4. To what extent has OEC incorporated FirstNet's nationwide public safety broadband network into the following areas? Please specify the other area in the box below. (Written responses not included)
5. In your opinion, how useful have OEC's efforts to incorporate FirstNet's nationwide public safety broadband network into the following areas been in helping your state address challenges with its emergency communications? Please specify the other area in the box below. (Written responses not included)
6. Please provide any additional comments you have on OEC's efforts to address FirstNet's nationwide public safety broadband network as part of interoperable emergency communications. (Written responses not included)
7. What, if anything, could OEC do to further address FirstNet's nationwide public-safety broadband network in its interoperable emergency communications efforts? (Written responses not included)
8. In your opinion, to what extent will OEC have a role for ensuring interoperable emergency communications once FirstNet's nationwide public-safety broadband network is fully operational?
8a. Please explain your response to question 8 in the box below. (Written responses not included)

The questions in this section ask your opinion about OEC's efforts to help the public safety community improve interoperable emergency-communications capabilities. This section will be about other emerging technologies.

9. Should OEC address the following emerging technologies in its interoperable emergency communications efforts? Wireless Local Area Networks (e.g., Wi-Fi)
9a. If you responded "Yes" to other, please specify in the box below. (Written responses not included)
10. In your opinion, how important is it for OEC to incorporate emerging technologies into the following areas? Please specify the other area in the box below. (Written responses not included)
11. To what extent has OEC incorporated emerging technologies into the following areas? Please specify the other area in the box below. (Written responses not included)
12. In your opinion, how useful have OEC's efforts to incorporate emerging technologies into the following areas been in helping your state address challenges with its emergency communications? Please specify the other area in the box below. (Written responses not included)
13. Please provide any additional comments you have on the usefulness of OEC's efforts to incorporate emerging technologies into interoperable emergency communications. (Written responses not included)
14. What, if anything, could OEC do to further incorporate emerging technologies into its interoperable emergency communications efforts? (Written responses not included)

The following questions are about OEC's communication efforts with SWICs and the public safety community.

15. In your opinion, how well does OEC communicate to SWICs training opportunities in the following areas? Emerging technologies (i.e., Wi-Fi, NextGen 911, etc.)
15a. If you responded to other, please specify in the box below. (Written responses not included)
16. How satisfied or dissatisfied are you with the communication efforts from the following OEC organizational levels?
16a. If you responded to other, please specify in the box below. (Written responses not included)
17. In your opinion, are there additional tools or approaches that OEC could use to improve communication with SWICs and the public-safety stakeholder community?
17a. Please identify and describe additional tools and approaches in the box below. (Written responses not included)
18. In your opinion, does OEC face any challenges that affect its ability to meet the needs of the public safety community?
18a. Please explain in the box below. (Written responses not included)

The following questions ask your opinion about SAFECOM grant guidance for interoperable emergency communications equipment. OEC develops annual SAFECOM guidance in an effort to provide current information on emergency communications policies, eligible costs, best practices, and technical standards for state, local, tribal, and territorial grantees investing federal funds in emergency communications projects.

19. In your opinion, how clear are the following aspects of the SAFECOM grant guidance for interoperable emergency communications equipment?
19a. If you responded to other, please specify in the box below. (Written responses not included)
20. In the past 2 years, has your state developed supplemental statewide guidance to clarify the SAFECOM grant guidance for interoperable emergency communications equipment?
20a. Please explain in the box below why your state developed supplemental statewide guidance. (Written responses not included)
21. In your opinion, is there a need to improve the SAFECOM grant guidance for interoperable emergency communications equipment?
21a. If yes, please explain in the box below. (Written responses not included)
22. If you would like to expand upon any of your responses to the questions above, or if you have any other comments about OEC's interoperable emergency communications efforts, please write them in the box below. (Written responses not included)

In addition to the individual named above, Sally Moino (Assistant Director); Ray Griffith (Analyst in Charge); Josh Ormond; Cheryl Peterson; Kelly Rubin; Andrew Stavisky; Sarah Veale; Michelle Weathers; and Ralanda Winborn made key contributions to this report.
Public-safety communications systems are used by thousands of federal, state, and local jurisdictions. It is vital that first responders have communications systems that allow them to connect with their counterparts in other agencies and jurisdictions. OEC offers written guidance, governance planning, and technical assistance to help ensure public-safety entities have the necessary plans, resources, and training to support emergency communications. FirstNet, an independent authority within the Department of Commerce, is establishing a nationwide public-safety broadband network. GAO was asked to review OEC's efforts related to interoperable emergency communications. This report examines (1) OEC's and FEMA's collaborative efforts to develop grant guidance; (2) how OEC incorporates FirstNet's network and other emerging technologies into its plans and offerings; and (3) the extent to which OEC has assessed its methods of communication. GAO evaluated OEC's and FEMA's coordination against GAO's leading practices for interagency collaboration; surveyed all 54 state-designated SWICs; evaluated OEC's communications efforts against federal internal control standards; and interviewed officials who represented various areas of public safety. The Department of Homeland Security's (DHS) Office of Emergency Communications (OEC) and the Federal Emergency Management Agency (FEMA) collaborate on grant guidance to help public-safety stakeholders use federal funds for interoperable emergency communications. GAO found that OEC's and FEMA's efforts generally align with GAO's leading practices for effective interagency collaboration. For example, OEC's and FEMA's memorandum of agreement and standard operating procedures articulate their agreement in formal documents, define their respective responsibilities, and include relevant participants. During this review, the agencies established a process to monitor and assess grantees' compliance with the grant guidance.
However, because the grants for 2018 were not yet awarded at the time of GAO's review, GAO was unable to assess the effectiveness of the new process. OEC incorporates the First Responder Network Authority's (FirstNet) nationwide public-safety broadband network and other emerging technologies into various offerings such as written guidance, governance planning, and technical assistance. Public-safety organizations GAO interviewed and statewide interoperability coordinators (SWIC) GAO surveyed were generally satisfied with OEC's communication efforts. OEC has not assessed its methods for communicating with external stakeholders. According to federal internal control standards, management should externally communicate the necessary quality information to achieve the entity's objectives and periodically assess its methods of communication so that the organization has the appropriate tools to communicate quality information on a timely basis. Some SWIC survey respondents and public-safety representatives identified an opportunity for OEC to improve its methods of communication. For example, 26 of the 54 SWICs responded that OEC could use additional tools or approaches, such as social media, for improving communication with its stakeholders. In addition, public-safety officials reported that they have missed training because they were unaware of opportunities. Because OEC has not assessed its methods of communication, OEC may not be using the best tools and approaches to provide timely information on training opportunities, workshops, and other emergency communications information to the public-safety community. OEC should assess its methods of communication to help ensure it is using the appropriate tools in communicating with external stakeholders. DHS concurred with the recommendation.
The Judicial Conference of the United States is the national policy-making body of the federal courts. The Chief Justice of the United States is the presiding officer of the Judicial Conference. The Conference operates through a network of 20 committees, including the Committee on Financial Disclosure. The Judicial Conference delegated authority to redact information from a financial disclosure report to the Committee on Financial Disclosure. Upon request from a judicial official, the committee, in consultation with the U.S. Marshals Service (USMS), redacts the information when it decides that revealing such personal or sensitive information could endanger the judicial official or a member of his or her family. Responsibilities of the Committee on Financial Disclosure include reviewing reports filed, adjudicating requests for redactions of information from the report, approving and modifying reporting forms and instructions, and monitoring the release of reports to ensure compliance with statute and the committee's guidance. The Judicial Conference of the United States is responsible for implementing the judiciary's redaction authority in a manner that provides judicial officials with the intended security measures without compromising timely public access to judicial officials' financial disclosure reports. The Administrative Office of the United States Courts (AOUSC) is the agency within the judicial branch that provides a broad range of legislative, legal, financial, technology, management, administrative, and program support services to federal courts. It is responsible for carrying out Judicial Conference policies, and one of its primary responsibilities is to provide staff support and counsel to the Judicial Conference and its committees, including the Committee on Financial Disclosure. The Director of AOUSC serves as the Secretary to the Judicial Conference and is an ex officio member of the Executive Committee.
The Ethics in Government Act of 1978, as amended, requires specified judicial, legislative, and executive branch officials to file annual financial disclosure reports in the spring of each year. These reports include financial information for the previous calendar year. Financial disclosure reports are made up of nine parts—positions, agreements, non-investment income, reimbursements, gifts, liabilities, investments and trusts, explanatory comments, and certification and signature. (See appendix I for a copy of a blank annual financial disclosure report). In addition to filing an annual report, covered judicial officials are required to file financial disclosure reports when nominated (nomination report); within 30 days of taking office (initial report); and within 30 days of leaving their position (final report)—see table 1. Federal law also requires that copies of judicial officials' financial disclosure reports be made available, upon written request, to members of the public. Judicial officials may request that certain information be redacted before their financial disclosure reports are sent to the requesting individuals. The judiciary's authority to redact information from financial disclosure reports was established in 1998 and was initially authorized for a 3-year period. That legislation also instituted an annual congressional reporting requirement for the judiciary on the operation of the redaction authority. Over the past 20 years, the judiciary's redaction authority and reporting requirement have been successively reauthorized for various periods of time, but have lapsed on occasion. The authority was most recently reauthorized on March 23, 2018, through the end of 2027. According to AOUSC officials, while the redaction authority lapsed, the Committee on Financial Disclosure did not grant any new redaction requests, but it did grant requests to continue redactions that were approved prior to December 31, 2017.
The Judicial Conference, through its Committee on Financial Disclosure, has developed a multistep process for reviewing federal judges' requests for redactions of information from their financial disclosure reports and requests for copies of these reports, as shown in figure 1. While the committee encourages judicial officials to request redactions at the time they file their financial disclosure reports, AOUSC officials stated that most redaction requests were made after judicial officials were notified that copies of their reports had been requested. A judicial official may request a redaction of information when his or her financial disclosure report is filed or after receiving a notification of a request for a copy of his or her financial disclosure report. When requesting a redaction, the judicial official must state specifically what information is sought to be redacted and the justification for the redaction. The Committee on Financial Disclosure will determine, in consultation with the USMS, if the information could endanger the judicial official or an immediate family member. For redaction requests involving information pertaining to the unsecured location of (1) a spouse's employer, (2) a child's school, or (3) a primary or secondary residence, a separate security consultation is not required based on an agreement AOUSC reached with the USMS memorialized in a 2004 letter that, in essence, serves as a security consultation. For all other types of information requested to be redacted, a further USMS security consultation is required. Taking into account the information provided by the judicial officials, as well as results from the USMS security consultations, members of the Subcommittee on Public Access and Security, a subcommittee under the Committee on Financial Disclosure, decide—by majority vote—to either grant (in whole or in part) or deny each redaction request. Such redactions remain in effect until the end of the calendar year in which they are granted.
The Committee on Financial Disclosure notifies the judicial official if the information requested to be redacted has been granted, granted in part, or denied. Judicial officials can appeal a redaction decision; however, according to AOUSC officials, there were no appeals from 2012 through 2016, the time period covered by our review. The Judicial Conference’s Committee on Financial Disclosure has developed an electronic report filing system, written guidance, and a compliance process to help ensure judicial officials file their financial disclosure reports. Specifically, in 2011, AOUSC switched from having judicial officials file financial disclosure reports in hard copy to electronic filing through an online electronic depository, Financial Disclosure Online Filing System (FiDO). AOUSC also uses a separate internal electronic database (LEGO) to track compliance with financial disclosure report filings. LEGO contains the entire database of judicial filers, including what reports should be filed, the dates financial disclosure reports are due, and which are in process. The Committee on Financial Disclosure stated in September 2014 that FiDO had been upgraded, but committee members continued to experience limitations with the system. For example, according to AOUSC officials, FiDO does not keep track of which reports are in process or when they are due. Accordingly, the committee members authorized an assessment to look for an alternative system that would meet their needs and, by 2016, had selected software currently being used by the government to be customized for the judiciary. According to AOUSC officials, the plan is for the Judiciary Electronic Filing System (JEFS) to replace both FiDO and LEGO and be used for filing financial disclosure reports and tracking compliance with filing requirements beginning in 2019. The Committee on Financial Disclosure also provides guidance to judicial officials to ensure that financial disclosure reports are filed correctly. 
The types of guidance provided include the Guide to Judiciary Policy, Filing Instructions for Judicial Officers and Employees, and a Step by Step Guide for the Preparation and Electronic Filing of Financial Disclosure Reports. Additionally, members of the Committee on Financial Disclosure are to review each filed financial disclosure report to confirm that required items have been sufficiently reported and that the filer is in compliance with applicable laws and regulations. In addition, for some sections, members of the committee will compare information provided in a filed report with what was reported in a prior year’s report to ensure the information reported is accurate and consistent. The Committee on Financial Disclosure also provides guidance on the process to be followed if a judicial official fails to file a required financial disclosure report. Specifically, the Guide to Judiciary Policy states that a late filing fee of $200 will be assessed if a report is filed more than 30 days after the report is due. Further, the Chairman of the Committee on Financial Disclosure is to write a letter to any noncompliant filer. In addition to the guidance described above, in 2013, the Committee on Financial Disclosure reported that it would establish specific procedures for securing filer compliance with all reporting requirements and the late filing assessments. In 2014, the Committee reported on the successful implementation of these new policies. Part of this effort included developing templates for three successive communications that are to be provided to a noncompliant filer. The communications reflect a progressively increasing level of urgency in language and content, culminating in explicit warnings that if a noncompliant filer does not comply, the matter can be referred to the Attorney General. From calendar years 2012 through 2016, more than 4,000 financial disclosure reports were required to be filed each year by judicial officials, as shown in table 2. 
Most of the reports filed were annual reports. According to AOUSC officials, as of March 2018, all annual financial disclosure reports required to be filed from calendar years 2012 through 2016 were filed, except for one for calendar year 2015. Additionally, all nominee and initial financial disclosure reports required to be filed during this time period were filed, and all but one final financial disclosure report, for calendar year 2016, were filed. The AOUSC officials stated that the remaining final report is still pending and the compliance process is being followed to ensure the report will be filed. The judiciary is complying with the Judicial Conference’s Guide to Judiciary Policy (Volume 2, Part D, Chapters 3-4), which sets forth the process for releasing financial disclosure reports. First, members of the public may request financial disclosure reports by submitting Form AO 10A (see appendix II for a blank copy of the Form AO 10A). The Committee on Financial Disclosure notifies the judicial official that a Form AO 10A has been received and provides the official with a copy. At that time, the judicial official has up to 10 days to decide whether or not to request that information from the financial disclosure report be redacted. Once the members of the Subcommittee on Public Access and Security have reviewed any redaction requests and any accompanying USMS security consultation results, the members vote on whether or not to grant redactions and then forward the results to AOUSC staff for final processing. In March 2017, the Judicial Conference approved the release of financial disclosure reports by electronic storage device free of charge in order to expedite the release of requested reports. 
As a result, once AOUSC staff receive the redaction decisions from the Subcommittee, AOUSC staff are to ensure that approved redactions are made to the financial disclosure reports, and then download the reports to electronic storage devices to mail to the requesting parties. The AOUSC received, on average, about 70 requests for copies of judicial officials’ financial disclosure reports each year from calendar years 2012 through 2016 using the Form AO 10A. A single form can request the financial disclosure report of one judicial official or of multiple judicial officials, and it can cover multiple years of reports. Based on the Forms AO 10A received from calendar years 2012 through 2016, AOUSC released approximately 16,000 financial disclosure reports. The number of financial disclosure reports released each year varied during this time period, as shown in table 3. According to AOUSC officials, the number of financial disclosure reports released each year varies based on the number of requests received and the time of year the requests are submitted. For example, a requester might submit a Form AO 10A late in the calendar year, and the requested reports could be released the following calendar year, depending on how long it takes to process the request. AOUSC officials noted that two organizations have requested copies of the financial disclosure reports for all federal judges every year. In 2016, AOUSC received these requests late in the year and, therefore, was not able to release the reports until 2017. The number of judicial officials who requested redactions represents a small percentage of the total number of financial disclosure reports filed in recent years. As shown in table 4, the number of redaction requests ranged from a low of 112 in 2014 to a high of 162 in 2012 and 2015. 
For calendar years 2012 through 2016, there were a total of 716 requests for redaction of information from judicial officials’ financial disclosure reports—711 from judges and 5 from judicial employees—with a yearly average of about 143 redaction requests. In particular, for calendar years 2012 through 2016, judicial officials’ redaction requests accounted for, on average, 3.2 percent of the total financial disclosure reports filed during this time period, as shown in table 5. When we disaggregated the results by judges and judicial employees, we found that, on average, 5.8 percent of judges requested redactions compared to 0.1 percent of judicial employees over the 5-year time period. Of the redaction requests made from 2012 through 2016, on average, about 85 percent were granted, 3 percent were partially granted, and 12 percent were denied, as seen in figure 2. We analyzed AOUSC data on redaction requests made from calendar years 2012 through 2016 by type of information requested to be redacted and found that the majority (about 76 percent) of the requested redactions pertained to information related to the unsecured location of a judicial official or an immediate family member. The next largest category of information requested to be redacted was the “other” category, at 10.4 percent. Three categories—asset value, gifts, and reimbursement—each accounted for less than 1 percent of the redaction requests, as shown in figure 3. We requested copies of the annual redaction reports submitted to Congress for calendar years 2012 through 2016 and determined that AOUSC had not submitted the annual redaction reports to congressional committees of jurisdiction in a timely manner. Specifically, we found that AOUSC submitted the annual report covering 2012 in May 2014 and submitted four annual reports (for calendar years 2013 through 2016) in February and August of 2017, as shown in table 6. 
AOUSC prepared and submitted the annual reports for 2013 and 2014 to the congressional committees of jurisdiction only after we asked for them. AOUSC officials told us that they could not find evidence that they had submitted the annual reports for calendar years 2013 and 2014 to the committees of jurisdiction in a timely manner. However, AOUSC staff sent a 5-year report to congressional committees of jurisdiction in March 2017 that included information on redaction requests and results for calendar years 2012 through 2016. Thus, the congressional committees of jurisdiction had received no reports from AOUSC on redaction requests and results from May 2014 to February 2017. While the Ethics in Government Act of 1978, as amended, does not set a specific submission date, it requires that AOUSC submit an annual report (i.e., occurring once every year) to congressional committees of jurisdiction on the operation of the judiciary’s redaction authority. As shown in table 6 above, AOUSC did not submit an annual report every year, and there was an interval of almost three years (from May 2014 to February 2017) in which there is no record of AOUSC providing any annual redaction reports to Congress. AOUSC officials stated that although there are no reporting time frames specified in legislation for preparing and submitting the reports to the congressional committees of jurisdiction (other than annual submission), beginning in 2016, AOUSC staff began to work on preparing the redaction report for the previous year by February of the following year. The AOUSC officials acknowledged, though, that they have not implemented a formal process, with designated steps and time frames, to ensure they consistently produce the annual redaction reports in a timely manner. 
The AOUSC officials also stated that since 2013, the Financial Disclosure Office—the office responsible for preparing the reports—had experienced a series of changes in management, as well as staff turnover in key positions, which contributed to the inconsistent process for developing and completing the annual redaction reports in a timely manner. Given that AOUSC experienced staff turnover in the past, and could experience it in the future, it is important that AOUSC has the necessary controls in place to overcome staffing issues and ensure that it consistently prepares and submits the annual redaction reports to the committees in a timely manner. Standards for Internal Control in the Federal Government state that management should implement control activities by documenting responsibilities through policies for each unit. With guidance from management, each unit determines the policies necessary to achieve the desired objectives. Management should also define objectives in specific terms so they are understood at all levels. This involves clearly defining what is to be achieved, who is to achieve it, how it will be achieved, and the time frames for achievement. AOUSC officials stated that the annual reports cannot be compiled until after the close of the previous calendar year and after all data have been reviewed. While this is true, without a formal process for ensuring that staff complete the reports in a timely manner, there are no assurances that the process will consistently occur on a regular schedule, or at all. Implementing a more formal process, with specified steps and time frames, would ensure staff are fully informed of their responsibilities and allow AOUSC to be better positioned to provide the congressional committees of jurisdiction with timely redaction reports that can be used to conduct oversight of the federal judiciary’s use of its redaction authority. 
The Ethics in Government Act of 1978, as amended, serves the public interest by providing access to selected information from judicial officials’ financial disclosure reports—information that could reveal conflicts of interest on the part of these officials. At the same time, the law accounts for the security threats faced by judicial officials and grants the judiciary authority to redact personal and sensitive information from their financial disclosure reports if a finding is made that the release of the information could endanger these officials or members of their families. Thus, the Judicial Conference has a responsibility to balance the goals of safeguarding judicial officials’ information and providing timely public access. The Judicial Conference developed a compliance process to ensure judicial officials file financial disclosure reports that adhere to applicable laws and regulations, and it also has procedures in place to ensure the public has access to copies of judicial officials’ financial disclosure reports when requested. While the Ethics in Government Act of 1978, as amended, provides the Judicial Conference with authority to redact information that could pose a security threat to judicial officials, this authority has been used sparingly. From 2012 through 2016, about 3.2 percent of financial disclosure reports included a redaction request, and about 85 percent of those requests were approved. Nevertheless, the law requires AOUSC to submit an annual report to congressional committees of jurisdiction on the operation of the judiciary’s redaction authority, including information on the total number of reports with redactions and the types of information redacted. Our review of available guidance and documentation shows that AOUSC has not implemented a formal process for producing annual redaction reports and has not submitted these reports to Congress in a timely manner. 
Implementing a more formal process, with specified steps and time frames, would allow AOUSC to be better positioned to provide congressional committees of jurisdiction with the required annual redaction reports that can be used to conduct oversight of the federal judiciary’s use of its redaction authority. This is particularly important given that Congress recently passed an extension of the judiciary’s redaction authority through the end of 2027. The Director of AOUSC should develop and implement a formal process, with specified steps and associated time frames, to better ensure that required annual redaction reports are completed and submitted to Congress within the following year. In April 2018, we requested comments on a draft of this report from DOJ, USMS, and AOUSC. Neither DOJ nor USMS had any comments. AOUSC provided technical comments, which we have incorporated into the report, as appropriate. In particular, based on AOUSC comments, we amended the report title to clarify the subject matter of the report and added text to the conclusions section to better address all aspects of the report’s findings. In addition to its technical comments, AOUSC provided an official letter for inclusion in the report, which can be seen in appendix III. In its letter, AOUSC stated that it concurred with the recommendation and will determine how best to implement a more formalized process to better ensure it can submit annual redaction reports to Congress in a timely manner. We are sending copies of this report to the Administrative Office of the U.S. Courts, the Attorney General, the United States Marshals Service, selected congressional committees, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any further questions about this report, please contact me at (202) 512-8777 or goodwing@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. In addition to the contact named above, Christopher Conrad (Assistant Director) and Valerie Kasindi (Analyst-in-Charge) managed this assignment. Kristiana Moore, Dominick Dale, Melissa Hargy, Eric Hauswirth, Amanda Miller, Jerry Sandau, and Janet Temko-Blinder made key contributions to this report.
Under the Ethics in Government Act of 1978, as amended, federal judges and certain judicial employees must file financial disclosure reports that can be made available to the public. Federal law accounts for the potential security risks faced by the judiciary and authorizes the redaction of information from judicial officials' reports if the Judicial Conference, in consultation with the United States Marshals Service (USMS), finds that revealing certain information could endanger judicial officials or members of their families. This report addresses the following for calendar years 2012 through 2016, the most recent years for which full data were available: (1) actions taken by the Judicial Conference to ensure judicial officials file financial disclosure reports, and the number of reports filed; (2) the judiciary's compliance with procedures for responding to requests for financial disclosure reports, and the number of reports released; and (3) the number of redaction requests made, the types of information requested to be redacted, and the judiciary's consistency in reporting results to Congress in a timely manner. GAO interviewed AOUSC and USMS officials, reviewed relevant laws and guidance, and analyzed data on redaction requests. The Judicial Conference, the federal judiciary's principal policymaking body, developed an electronic filing system, guidance, and a compliance process to help ensure judicial officials file financial disclosure reports that adhere to applicable laws and regulations, and data provided by the Administrative Office of the U.S. Courts (AOUSC) show that more than 4,000 reports were required to be filed annually from 2012 through 2016. According to AOUSC officials, as of March 2018, all financial disclosure reports required to be filed from 2012 through 2016 were filed, except for one in 2015 and one in 2016. AOUSC officials are working with the filers to ensure these reports will be filed. 
The Judicial Conference established procedures for responding to requests for copies of financial disclosure reports, and the number of reports released has varied. From 2012 through 2016, AOUSC annually received, on average, about 70 requests for copies of judicial officials' reports and released approximately 16,000 reports during this time. Each request can vary—from a request for a single judicial official's report to a request for multiple judicial officials' reports. From 2012 through 2016, a small percentage of judicial officials requested redactions from their financial disclosure reports. On average, 3.2 percent of financial disclosure reports filed included a redaction request and about 85 percent of those requests were granted. Of the information requested to be redacted, about 76 percent was related to the unsecured location of a judicial official's spouse, child, or residence. AOUSC is required by federal law to submit annual reports to Congress on use of the judicial redaction authority, such as the number of reports with redactions and types of information redacted, but AOUSC has not consistently submitted the reports on an annual basis in recent years. GAO found that AOUSC does not have a formal process for preparing and submitting the reports to Congress. Implementing a more formal process, with specified steps and timeframes, would better position AOUSC to provide Congress with more timely reports. GAO recommends that AOUSC develop and implement a formal process, with steps and timeframes, to better ensure that required annual reports are submitted to Congress within the following year. AOUSC concurred with the recommendation.
VA provides education benefits to eligible veterans and their beneficiaries enrolled in approved programs of education and training to help them afford postsecondary education. VA staff conduct oversight of schools receiving these benefits. In addition, each year, VA contracts with state agencies to help provide this school oversight. In fiscal year 2017, there were about 14,460 schools receiving VA education benefits for about 750,000 veterans and their beneficiaries across the country. State agencies’ core oversight functions, as generally required by statute, VA regulations, and their VA contracts, include approval of schools to receive VA education benefits, annual compliance surveys of schools—which are reviews to ensure schools’ compliance with program requirements—and technical assistance to schools, among other things (see fig. 1). VA and state agencies both conduct annual compliance surveys of selected schools, which generally entail a visit to the school. For veterans to receive the education benefits, school employees must certify to VA that veterans are enrolled in classes and must notify VA of any changes in enrollment. NASAA was founded to coordinate the efforts of state agencies and is managed and administered by an executive board and several leadership committees, such as a contract committee and a legislative committee. All members of NASAA leadership are also either directors of or have other roles at individual state agencies. VA’s Education Service is led by a Director and is under the Veterans Benefits Administration. This office works with NASAA to prepare annual contracts that allocate federal funding and specify workload requirements for each state agency. For over a decade, funding provided by VA to state agencies remained at the same level of $19 million. In fiscal year 2018, VA allocated $21 million for state agencies—the first increase in funds allocated to states since fiscal year 2006 (see fig. 2). 
Each year, state agencies can also request supplemental funding from VA if their costs exceed their allocated funding amount. VA has the discretion to approve an agency’s request based on its justification of need and the amount of VA funding available for supplemental requests. NASAA officials said that supplemental funding is helpful but that it is not a reliable funding source because there is no guarantee that VA will be able to provide states with the requested amount. According to NASAA officials, some state agencies also receive additional funding from their state governments if they request these funds, but many states do not provide this additional funding. NASAA officials also noted that in some cases, states do not want to provide their own funds to state agencies because their view is that the agencies already receive VA funding through their federal contracts. VA recently changed its method of allocating funding to state agencies, hiring an external contractor to develop the new allocation method. Before fiscal year 2017, VA funded state agencies primarily based on the number of schools in the state with at least one veteran student receiving VA education benefits in the previous year. VA implemented the new funding allocation method in fiscal year 2017. VA officials told us the new method was a significant improvement over the very limited method they previously used. For example, VA officials said the prior funding method did not estimate how long it took state agencies to perform certain oversight activities, a limitation that was a key reason they decided to develop a new funding method. VA’s new method aims to fund states more equitably based on their work requirements, i.e., their school oversight activities and the amount of time needed to complete them. 
The new funding method factors in, among other things:
- the number of staff needed to complete a state’s workload in overseeing schools;
- national salary averages ($80,000 for professional and $50,000 for support staff), including benefits;
- a national travel allowance based on the number of professional staff required to complete work requirements;
- the number of schools receiving VA education benefits in the state; and
- the estimated time needed to review different school types, the type of review conducted (such as approvals vs. compliance surveys), and the number of student veterans enrolled.
VA, NASAA, and selected state agency officials we spoke with said that limited funding before and after the recent changes to the funding method has impacted state agencies’ ability to fulfill their oversight responsibilities in three areas: (1) ability to pay and train oversight staff, (2) ability to visit geographically dispersed schools due to travel costs, and (3) ability to provide technical assistance and training to schools. Under their contracts with VA, state agencies have been meeting their core school oversight functions, according to NASAA officials. VA and NASAA officials we interviewed, however, said state agencies have been underfunded for many years. They said states’ funding concerns and challenges existed prior to the new method to allocate funds to state agencies and remain despite a total funding increase to state agencies from about $19 million to $21 million in fiscal year 2018. NASAA officials we interviewed said some state agencies have difficulty paying for the number of staff they need because there is a mismatch between VA’s average salary and benefits used to calculate states’ funding and the actual salaries and benefits some state agencies are required to pay under state laws. VA officials acknowledged that some states have required salary and benefit levels that exceed the average levels used in VA’s new funding allocation method. 
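The report does not publish VA's actual allocation model, but the factors listed above imply a workload-driven calculation along the following lines. This is a hypothetical sketch only: the $80,000 and $50,000 salary-and-benefits averages come from the report, while the hours-per-staff-year constant, the function name, and all example inputs are invented for illustration.

```python
# Hypothetical sketch of a workload-based allocation like VA's new method.
# Only the salary averages ($80,000 professional, $50,000 support) appear
# in the report; every other constant here is an illustrative assumption.
PROF_SALARY = 80_000     # national average salary + benefits, professional staff
SUPPORT_SALARY = 50_000  # national average salary + benefits, support staff
WORK_HOURS_PER_YEAR = 2_080  # assumed hours in one full-time staff year

def estimate_allocation(review_hours, support_staff, travel_per_prof):
    """Estimate one state's annual funding from its oversight workload.

    review_hours: estimated hours to review the state's schools, which
        would vary by school type, review type (approval vs. compliance
        survey), and the number of student veterans enrolled.
    travel_per_prof: national travel allowance per professional staff
        member (not adjusted for a state's geographic size).
    """
    prof_staff = review_hours / WORK_HOURS_PER_YEAR
    salaries = prof_staff * PROF_SALARY + support_staff * SUPPORT_SALARY
    travel = prof_staff * travel_per_prof
    return round(salaries + travel)

# A state whose schools need two staff-years of review time, with one
# support employee and an assumed $5,000 travel allowance per professional:
print(estimate_allocation(review_hours=4_160, support_staff=1,
                          travel_per_prof=5_000))  # 220000
```

Note that because the travel allowance in this sketch scales only with staff count, two states with identical workloads would receive the same travel funding regardless of geographic size—a limitation of the national allowance that NASAA officials raised.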
VA’s new funding method uses an average salary of $80,000 (including benefits) for professional staff. VA officials noted that some states have annual salaries for professional staff of over $100,000 excluding benefits. A state agency official we spoke with said the salary and benefit costs for professional staff in her state average $130,000, with some salary and benefits costing up to about $150,000. The official said this can make it difficult for the state agency to be able to pay a sufficient number of staff, which hinders its ability to fulfill its VA-contracted oversight. In another case, a NASAA official said his state agency did not have enough funds to pay for a second full-time employee because the state’s required salary and benefits were higher than VA’s $80,000 allotment for professional staff. Limited funding for state agency oversight staff has led to state requests for additional funds, as well as higher turnover and less training of the staff. VA officials said that the primary reason that some state agencies requested supplemental funding from VA in fiscal years 2016 and 2017 was that their initial allocation was not sufficient to cover salary, benefits, and travel expenses. Some state governments have had to cover those costs, hoping that VA would reimburse the state at the end of the fiscal year, according to VA officials. In addition, some state agencies have had significant turnover due, in part, to the uncertainty about the amount of annual VA funding, according to NASAA officials. NASAA officials also said that funding amounts limit the professional development provided to state agency staff, including travel to conferences. VA officials said that they support professional development and routinely provide funding for travel to conferences. However, according to VA officials, VA has denied requests from state agencies for travel to additional, repetitious conferences during the same year. 
NASAA officials said limited VA funding also makes it difficult for state agencies in geographically large states to pay travel expenses to visit schools as part of their oversight responsibilities. For example, NASAA officials said state agencies in Alaska, Montana, and Washington find it difficult to afford mileage and hotel costs for school visits that require travelling long distances—sometimes over mountain ranges—and overnight stays. NASAA officials also said VA’s new funding method does not allocate sufficient funding for travel. Officials we interviewed at selected state agencies have had mixed experiences with travel costs. One state agency official told us her agency selected schools to visit that were physically near her office because of insufficient travel funds. In contrast, a state agency official in a geographically small state said the agency has sufficient funding to travel throughout the state to visit schools, mainly because overnight stays are unnecessary. VA and NASAA officials said some state agencies have been able to address travel costs by stationing agency staff in different parts of the state. VA officials, however, acknowledged that this is not possible in all states because some states require agency staff to be located in a central office. VA’s new funding allocation method calculates a national travel allowance for all states based on the total number of professional staff it estimates would be required to complete work requirements in all states. VA officials explained that this travel allowance does not account for individual differences in geographic size among states. VA officials said that in developing the new funding method, the contractor reviewed the historical travel costs of states and determined that a distinction by the geographic size of a state did not need to be factored into the funding method. 
The contractor based this decision on several factors, including that some state agencies: (1) paid their travel costs using state funds, not VA funds; (2) have located their staff in offices across the state and, as a result, their travel costs were lower than in other states; and (3) planned their travel so they visited schools within a short timeframe, which reduced travel costs. When faced with funding difficulties, many state agencies reduce their technical assistance to schools and outreach activities because they need to use available funds on salaries, benefits, and travel related to compliance survey and approval workloads, according to NASAA officials. For example, one state agency official told us her agency has significantly reduced its technical assistance to schools because it does not have the funds to travel across the large, rural state to provide it. A NASAA official said available funding has reduced his state agency’s ability to conduct outreach, such as connecting veterans with education and benefit resources, or holding in-person meetings to educate employers on providing apprenticeships to veterans using VA education benefits. NASAA officials also said that many state agencies have reduced the number of visits to train school employees on VA education benefits requirements. They noted that this training is important because it helps reduce over- and under-payments and the misuse of VA education benefits. A 2016 report from VA’s Inspector General estimated that VA makes $247.6 million in improper payments of VA education benefits annually, mostly over-payments. The Inspector General found that many of the improper payments occurred because school employees provided VA incorrect or incomplete information on student enrollment. 
NASAA officials told us that they continue to have concerns that the new funding method’s time estimates for completing certain oversight activities are inaccurate and, as a result, this method does not allocate sufficient funds. For example, NASAA officials said the funding method does not properly estimate the time it takes state officials to travel to schools and carry out oversight functions, including conducting certain school approvals and providing schools with technical assistance and training. NASAA officials said the time estimates used to fund approvals are inaccurate and need to be revised because different types of schools and education programs—including flight schools, degree programs, and non-degree programs—take different amounts of time to review and approve. For example, NASAA officials said that state agencies need less time to conduct an approval for an on-the-job training program than for a large public university. VA officials said they are aware of the concerns that NASAA and state agencies have raised that the time estimates for oversight in the new funding method are inaccurate—with some being too high and others too low. They are also aware that NASAA and state agencies believe that the analysis to develop these estimates should have more accurately factored in the time needed to approve and review different types of schools and education programs. To address the concerns states have raised about its new funding allocation method, VA provided documentation to us of its plans to hire a contractor in fiscal year 2018 to improve and update its funding method. In September 2018, VA hired a contractor to carry out a contract with a 6-month period of performance. VA reported that the contractor would review the new funding allocation method to determine if any specific changes are needed to more equitably distribute funding across state agencies. 
Specifically, VA officials said the contractor would review the accuracy of the funding method’s allowances for state agencies’ salary, benefits, and travel costs, and its time estimates for states to conduct oversight activities, to determine if changes are needed. VA officials reiterated that the allowances for salaries and travel and the time estimates are critical factors in the funding method. VA officials noted, however, that regardless of how VA divides the funding among the state agencies, the total amount of program funding to these agencies will remain the same within any one fiscal year. States have the option of not renewing their school oversight contracts with VA, and two have exercised this option in recent years, citing insufficient funding levels from VA to fulfill their responsibilities. When this happens and the state withdraws from its school oversight role, VA must perform all oversight responsibilities for VA education benefits in that state. New Mexico—which currently has 4,754 veteran students and 107 schools receiving VA education benefits—did not renew its contract with VA in fiscal year 2018 because funding was not sufficient to cover its costs for salaries, travel, and technical assistance to schools, according to VA officials (see text box).

New Mexico Did Not Renew Department of Veterans Affairs (VA) Contract Due to Lack of Funding

New Mexico’s state agency began to face significant funding difficulties starting in fiscal year 2015, according to a state official, and it did not renew its VA contract to oversee schools receiving VA education benefits in fiscal year 2018. Although the state agency was able to conduct the oversight activities required by its VA contract in fiscal year 2017, the official said the agency had to reduce its staff, and the one remaining employee was frequently required to work long hours and weekends to meet contract requirements. 
Further, New Mexico did not receive adequate funding for travel costs to visit schools in its geographically large, rural state, the state official noted. As a result, the official said the state agency opted not to renew its VA contract in fiscal year 2018. VA and New Mexico officials have differing views on how well VA staff will be able to provide effective oversight of schools receiving veterans’ education benefits in the state. In January 2018, New Mexico state officials stated that although VA regional staff have assumed the former state agency’s oversight responsibilities, they are unlikely to be able to provide the same level of oversight the state agency did because the VA staff are also responsible for overseeing schools in three other states in addition to New Mexico. As a result, state agency officials said schools in New Mexico would likely receive fewer oversight visits. VA officials, on the other hand, believe that their regional staff are handling oversight of schools in New Mexico effectively, although they acknowledged the staff may be conducting fewer compliance surveys and providing schools less technical assistance. Other states have also expressed concerns about their ability to conduct oversight given available funding levels. For example, Alaska—which currently has 4,011 veteran students and 53 schools receiving VA education benefits—also chose not to contract with VA for about 5½ years (fiscal year 2012 through January 2017), according to VA officials and the director of Alaska’s veterans affairs office. Alaska’s director also said that a major reason that Alaska did not renew its contract was limited VA funding. During this time, regional VA staff based in Oklahoma handled Alaska’s oversight, which VA officials said often had to be conducted remotely given that schools are spread throughout the state, and travel to those areas can be expensive as well as challenging given weather conditions. 
VA officials said that VA’s presence was not as strong in Alaska as in other states because VA staff overseeing Alaska are located in another state and in a different time zone. Further, according to VA data for fiscal years 2014 and 2015, VA staff were unable to complete all the compliance surveys they were assigned in Alaska. In addition, California officials told us they almost did not renew their oversight contract in fiscal year 2018 due in part to funding concerns. California has the largest number of veteran students (86,926) and schools receiving VA education benefits (1,091) of any state, yet state agency officials told us that they lacked sufficient funding to pay salaries for staff to conduct necessary oversight of these schools, including approvals and technical assistance visits. VA officials noted, however, that California receives the most funding of any state and has received the greatest increases of any state in the last two years. Although VA stepped in to provide oversight of schools in New Mexico and Alaska, the agency does not have a plan for how it will oversee additional schools if other states choose not to renew their oversight contracts. VA officials told us their current approach is to assign the state agency’s workload to regional VA staff who already have their own school oversight responsibilities. However, providing oversight in states without a contract in addition to VA staff’s existing workload is likely to stretch agency resources. For example, existing VA regional staff may not be able to oversee all schools in states with a large number of schools. In addition, VA staff may be strained in providing oversight in geographically large states where schools are widely dispersed because school visits would be time-consuming and costly. VA has begun some initial steps to identify and assess how it would handle additional oversight. 
In August 2017, VA began working with its Office of General Counsel regarding what options the agency has when a state agency chooses not to contract with VA, and the Office issued a legal opinion in September 2017. In April 2018, VA formed a workgroup, which also met a few times in May and once in July, to prepare a draft paper of possible scenarios and response options based on this legal opinion. In August 2018, the workgroup followed up with the field supervisor responsible for approval, compliance, and liaison and produced a new draft paper of scenarios and options. As of September 2018, VA’s Education Service Director is holding discussions with VA leadership regarding assessing the options and developing a formal plan. However, VA has not completed an assessment to ensure the agency can handle additional school oversight responsibilities in states that do not renew their contracts and has yet to prepare a contingency plan. Federal standards for internal control state that agencies should identify, assess, and respond to risks related to achieving objectives. After identifying risks, the agency should assess the significance—or effect on achieving the objective—of these risks, which provides a basis for responding to the risks. Then, in responding to these risks, the standards state that agencies should define contingency plans for assigning responsibilities if key roles are vacated to help the entity continue to achieve its objectives. Specifically, if the agency relies on a separate organization to fulfill key roles, then the agency should assess whether this organization can continue in these key roles, identify others to fill these roles as needed, and implement knowledge sharing with replacement personnel. 
Without fully identifying and assessing the risks of additional state withdrawals, and without a contingency plan to address how VA can oversee additional schools, the agency runs the risk that if more states withdraw from their oversight responsibilities, then VA will be unprepared to oversee the schools in these states. Each year, VA uses findings from prior compliance surveys and other information to develop a strategy for prioritizing a sample of schools to receive annual reviews, according to VA officials. VA is generally required by statute to conduct an annual compliance survey of schools with 20 or more enrolled veterans at least once every 2 years. VA officials said with the help of state agencies, VA uses these surveys to determine if schools are meeting legal requirements and are using VA education benefits funds appropriately, including whether they are making over- or under-payments on students’ education expenses. According to a VA document, in conducting the surveys, VA and state agencies review various statutory and regulatory requirements, such as the accuracy of a school’s student enrollment records, tuition payments, and whether a school has corrected deficiencies identified in previous compliance surveys. According to VA officials, the agency has taken steps to incorporate risk factors into its compliance survey strategy in response to recommendations from our prior work and recent VA studies. The examples below show how VA has responded to recommendations to use risk in overseeing schools. In 2011, we recommended that VA adopt risk-based approaches to ensure proper oversight of schools. As part of the agency’s official response to this recommendation, VA reported to us that in fiscal year 2012 the agency began prioritizing compliance surveys at for-profit schools. 
Further, VA officials said that the agency added this focus to its written annual compliance survey strategy for fiscal years 2016 and 2017 based on prior years’ compliance survey findings and congressional priorities. In a 2016 report, VA’s Inspector General recommended that VA consider particular risk factors in selecting schools for compliance surveys. Specifically, the report recommended that VA prioritize schools at risk of payment errors including (1) making errors resulting in over- or under-payments of VA education benefits, and (2) neglecting to recover unspent VA education benefit funds, such as when students receive funds but then reduce their course loads or repeat classes. In response, VA officials stated that the agency began using data on these payment errors to prioritize schools with high error rates. For example, VA officials said that when data revealed that flight schools were particularly prone to such errors—along with charging high tuition and fees and failing to meet some VA education benefits criteria, among other issues—VA decided to prioritize these schools for compliance surveys in its fiscal year 2018 strategy (see text box). VA’s Compliance Survey Strategy for Schools Receiving VA Education Benefits for Fiscal Year 2018 The Department of Veterans Affairs (VA) is generally required by statute to conduct a compliance survey, at least once every 2 years, of schools receiving VA education benefits that have 20 or more enrolled veterans. 
For its fiscal year 2018 compliance survey strategy, VA prioritized the following types of schools for review:
100 percent of schools with flight programs;
100 percent of schools with fewer than 20 veterans, with priority to those that had not received surveys for the longest time period;
100 percent of federal on-the-job training and apprenticeship programs;
schools with serious deficiencies identified in previous compliance surveys;
schools newly approved for the program with enrolled VA beneficiaries;
schools that have never received a compliance survey (for example, VA officials said some schools have not received a compliance survey due to a shortage of VA oversight staff or because in prior years the statute did not require VA to conduct compliance surveys at schools with fewer than 300 veterans); and
a sample of foreign schools receiving VA education benefits for students from the United States (conducted by VA via remote survey).
An August 2017 study, conducted by an external contractor hired by VA, reviewed ways to strengthen VA’s compliance survey process and outcomes. The report found that VA has not placed enough emphasis on improving school compliance over time. For example, VA has historically prioritized completing a certain number of surveys each year rather than ensuring that schools are actually demonstrating compliance. Among other recommendations, the report identified the need for VA to more effectively use data to measure schools’ compliance over time and to establish priorities to select schools for compliance surveys based on their risk level. As of July 2018, VA officials said that the agency has begun analyzing the study’s recommendations to improve its compliance survey process and that its new compliance survey strategy for fiscal year 2019 and future years will address many of these study recommendations. 
VA officials said that in 2014 they began conducting targeted reviews of schools in response to complaints received from students, government officials, or others. VA’s policies and procedures state that, in addition to complaints, other factors that could trigger a targeted review include compliance survey results, management mandates, and a school self-reporting a violation, among others. VA officials said, however, that VA has not initiated a targeted review in response to anything other than a complaint. To determine whether to conduct a targeted review, VA officials said they review each complaint and may corroborate it with other sources of information, such as compliance survey data on that school and input from states or other agencies. According to VA’s policies and procedures, the focus of targeted reviews varies based on the nature of the complaint, and VA assigns a higher priority to complaints that are higher risk, i.e., those that allege fraud, waste, or abuse (see table 1). As of July 2018, VA and state agencies have conducted about 160 targeted reviews of schools in response to complaints since 2014, resulting in the withdrawal of program approval for 21 schools, according to data provided by VA officials. VA has taken steps to adopt a new risk-based approach to overseeing schools receiving VA education benefits, including selecting schools based on risk factors such as those identified in the Colmery Act. Among other things, the Colmery Act explicitly authorizes VA to use the state agencies for risk-based surveys and other oversight based on a school’s level of risk, and identifies specific risk factors that can be used for school oversight (see text box). Risk Factors Identified in the Harry W. Colmery Veterans Educational Assistance Act of 2017 The Colmery Act explicitly authorizes the Department of Veterans Affairs (VA) and state agencies to use risk-based surveys (reviews) in oversight of schools receiving VA education benefits. 
The Colmery Act identifies specific risk factors that can be used for school oversight, but does not require VA or state agencies to use these risk factors in their oversight of these schools: rapid increases in veteran enrollment, increases in the amount of VA education benefits a school receives per veteran student, volume of student complaints, rates of federal student loan defaults of veterans, veteran completion rates, deficiencies identified by accreditors and other state agencies, and deficiencies in VA program administration compliance. VA officials told us that they have not yet used the risk factors cited in the Colmery Act in conducting their compliance surveys. VA officials acknowledged, however, that adopting a more risk-based oversight approach could help prevent problems, such as some schools’ use of deceptive practices in recruiting veterans and receipt of overpayments from VA. VA officials said that the agency is exploring risk factors to consider in developing its compliance survey strategy for selecting schools in fiscal years 2019 to 2021. State agency officials we spoke to said that they use the risk factors cited in the Colmery Act to varying degrees in their oversight of schools receiving VA education benefits. For example, one state agency official said that he tracks all of the risk factors cited in the Colmery Act except the rates of veterans’ student loan defaults. On the other hand, a NASAA official said that her state agency tracks the volume of student complaints and deficiencies identified by accreditors and other state agencies. States generally have limited opportunities to select specific schools for compliance surveys, because VA develops the annual priorities for compliance surveys, according to NASAA officials. In some cases, NASAA officials told us, state agency staff work with regional VA staff to select schools for visits based on VA’s priorities. 
VA has recently taken steps to explore a new risk-based approach to oversee schools receiving VA education benefits that would be in addition to compliance surveys, according to VA officials. Specifically, VA officials told us that VA has participated in a joint working group with NASAA officials focused on developing a new type of school review in which VA would select schools based on specific risk factors, including those identified in the Colmery Act. NASAA officials told us they were supportive of VA’s efforts in this area. As of February 2018, NASAA officials had drafted, for VA’s consideration, a possible approach to state agencies’ oversight to monitor one risk factor—rapid increases in veteran enrollment. VA officials told us the working group plans to build on this effort in reviewing other risk factors. In May 2018, VA prepared a draft charter for the working group, which, among other things, outlines the potential scope and implementation of new risk-based surveys, and provided it to NASAA for review. Documentation we reviewed from a VA and NASAA working group meeting held in May 2018 stated that in its upcoming meetings, the working group plans to continue developing the charter, including agreeing to roles and responsibilities, establishing the risk factors to be used, and identifying data sources related to these risk factors. VA officials said that at an August 2018 joint working group meeting, the charter was deemed to have served its purpose and the decision was made to establish a risk-based review policy and procedures moving forward. According to VA officials, as of mid-October 2018, VA used this strategy to select five schools to undergo risk-based reviews. VA officials said they expect these five reviews to be completed by late December 2018. VA and state agencies coordinate to divide responsibility for who will conduct compliance surveys of schools receiving VA education benefits in a variety of ways, according to VA and NASAA officials. 
After VA provides state agencies information about its annual strategy for selecting schools for these surveys, VA regional staff work with state agency staff to select the specific schools for that year, according to these officials. NASAA officials we interviewed said their working relationships with regional VA staff are excellent—they have good communication and understand and help each other. For example, one state official we interviewed said the state agency and regional VA staff in the state coordinate to make sure they alternate who visits which schools to obtain multiple perspectives. They also have discussions before and after each visit, the official said. In some cases, VA officials said, VA and state agency officials collaborate to conduct compliance surveys together. VA also provides information to states on how to conduct and report on compliance surveys, including a checklist to help guide the states’ review of items tied to specific statutory requirements, as well as a template for reporting compliance survey results. VA leadership also holds conferences twice a year that NASAA and state agency staff can attend, and communicates throughout the year on school oversight issues, according to officials from these entities. In addition, VA officials told us they collaborate with NASAA on providing training for state agency staff that NASAA provides through the National Training Institute. According to NASAA’s website, the Institute provides an overview of state agency responsibilities and activities, including information on public laws, accreditation, VA education benefits approval criteria, and compliance surveys. New state agency staff must attend this training, according to NASAA officials. NASAA officials told us that VA has not provided state agencies with sufficient information on how to conduct targeted school reviews in response to complaints, and as a result it is difficult for states to conduct these types of reviews. 
VA officials acknowledged this lack of information. NASAA officials reported that many state agencies want more direction on how to conduct and report on targeted school reviews in response to complaints. A policy and procedures document on targeted school reviews that VA developed in 2014 describes the criteria to use in determining when to conduct targeted, complaint-based reviews, including what issues to prioritize. VA officials acknowledged, however, that the document is outdated and does not provide sufficient detail. VA officials said the agency is in the process of revising the document to provide more clarity. In July 2018, VA provided a draft document to us showing the changes it plans to make in its policy and procedures on targeted, complaint-based school reviews, which includes specific information about how state agencies should conduct and report on these reviews. As of late October 2018, VA officials said these procedures were undergoing internal review. VA officials said they are open to state agency feedback on the new procedures. In addition, VA officials said they are currently updating their database for complaint-based reviews to add specific, standard data fields for states to use in reporting the results of these reviews. VA officials told us that the revised database and procedures will allow state agencies to develop their own template to electronically report information collected during these reviews in a standardized way. We believe that when implemented, VA’s new procedures could help enhance VA’s and state agencies’ efforts in responding to complaints about schools receiving VA education benefits. It is critical for VA to ensure that schools receiving VA education benefits are complying with program requirements and that veterans receive the education they have been promised. 
Because funding concerns have led to states withdrawing from their oversight roles, decisions by other states to not renew their school oversight contracts could result in VA taking on additional school oversight responsibilities. However, VA has neither completed identification nor assessment of the risks posed by any future state withdrawals that could leave VA unprepared to conduct oversight in these states. Further, VA’s lack of a contingency plan for assuming the responsibilities of state agencies in these cases raises the risk that schools receiving VA education benefits would not be overseen and student veterans could be adversely affected. We recommend that the Secretary of Veterans Affairs direct the Under Secretary for Benefits to: (1) Complete efforts to identify and assess risks related to future withdrawals by state agencies in overseeing schools and (2) address these risks by preparing a contingency plan for how VA will oversee additional schools if more states choose not to renew their oversight contracts. (Recommendation 1) We provided a draft of this report to VA for review and comment. VA’s comments are reproduced in appendix I. VA agreed with our recommendation. VA also provided technical comments, which we considered and incorporated as appropriate. In addition, we provided relevant excerpts from a draft of this report to NASAA leadership for review and comment. NASAA provided technical comments, which we considered and incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Veterans Affairs and Education; and other interested parties. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or emreyarrasm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Elizabeth Sirois (Assistant Director), Linda L. Siegel (Analyst-in-Charge), Jessica Ard, and Rachel Pittenger made key contributions to this report. Also contributing to this report were Susan Aschoff, James Bennett, Deborah Bland, Sheila R. McCoy, Jean McSween, Benjamin Sinoff, and Sarah Veale.
In fiscal year 2017, VA provided about $11 billion in education benefits to about 14,460 schools to help eligible veterans and their beneficiaries pay for postsecondary education and training. VA typically contracts with state agencies to help it provide oversight of schools participating in this education benefit program. The Harry W. Colmery Veterans Educational Assistance Act of 2017 included a provision for GAO to review VA's and states' oversight of schools receiving VA education benefits. This report examines (1) how, if at all, the available level of funding to state agencies has affected states' and VA's ability to carry out their oversight responsibilities, (2) to what extent VA and state agencies use risk-based approaches to oversee schools, and (3) to what extent VA coordinates and shares information with the states to support their oversight activities. GAO reviewed VA documents; assessed VA funding data for fiscal years 2003-2018; interviewed VA and selected state agency officials; and reviewed correspondence between these officials. GAO interviewed officials from eight state agencies who were past or present officials at the association representing state agencies, and officials from three other states, including one that did not renew its contract with VA in fiscal year 2018. The Department of Veterans Affairs (VA) is responsible for overseeing schools nationwide that provide VA education benefits to veterans. To help provide this oversight, VA contracts with state agencies to oversee schools in their states and provide outreach and training to school officials and allocates them funding to cover the cost of oversight, outreach, and training activities. However, since fiscal year 2006, funding for oversight, outreach, and training has remained at about $19 million, and only recently increased in fiscal year 2018 to $21 million. 
State agency officials told GAO that the limited level of funding they have received from VA has been a long-standing problem that has strained their ability to (1) adequately cover staff costs, (2) pay for travel for school visits, and (3) provide needed technical assistance and training to the schools about VA education benefit requirements. As a result, a few states, such as New Mexico, have chosen to withdraw from their school oversight roles. When this happens, VA must take over the state agencies' oversight responsibilities. GAO found that assuming additional oversight responsibilities is likely to stretch VA's staff resources, especially in large states, where schools are geographically dispersed and school visits are time consuming and costly. VA has begun but has not completed an assessment of the risks that potential future state agency withdrawals could have on its ability to provide school oversight. Moreover, VA has not developed a contingency plan for how it will oversee more schools if additional states do not renew their oversight contracts. Federal standards for internal control state that agencies should identify and assess risks related to achieving objectives, and define contingency plans for assigning responsibilities if key roles are vacated. Until VA takes these steps, the agency runs the risk of being unprepared to conduct effective oversight in the event that more state agencies withdraw from their contracts in the future. VA and state agencies use certain risk factors to select schools for oversight. VA officials said that they prioritize schools for annual reviews of compliance with program requirements based on findings from prior reviews as well as other risk factors, such as schools with a history of VA benefit payment errors. GAO found that VA and state agencies have recently begun a joint effort to explore a new strategy that they expect will strengthen the school review selection and prioritization process. 
According to VA officials, as of mid-October 2018, VA used this strategy to select five schools to undergo risk-based reviews. VA officials said they expect these five reviews to be completed by late December 2018. VA and state agencies coordinate and share information about their oversight activities in a variety of ways. For example, VA has shared information with the state agencies on how to conduct annual reviews of schools in their states. However, according to officials at the association representing state agencies, VA has not provided specific direction on conducting targeted reviews in response to complaints. VA officials acknowledged that the procedures they currently have in place are outdated and said that they are being revised to provide state agencies with more details. As of late October 2018, VA officials said these procedures were undergoing internal review. Once implemented, VA's new procedures have the potential to enhance VA's and state agencies' efforts to conduct reviews at those schools for which they have received complaints. GAO recommends that VA complete the identification and assessment of oversight risks, and prepare a contingency plan for overseeing schools if additional states do not renew their oversight contracts. VA concurred with the recommendation.
Since 1993, USAID has obligated more than $5 billion in bilateral assistance to the Palestinians in the West Bank and Gaza, primarily using funds appropriated through the ESF. According to State officials, through the ESF, USAID provides project assistance and debt relief payments to PA creditors. USAID, with overall foreign policy guidance from State, implements most ESF programs, including programs related to private sector development, health, water and road infrastructure, local governance, civil society, rule of law, education, and youth development. According to USAID officials, this assistance to the West Bank and Gaza contributes to building a more democratic, stable, prosperous, and secure Palestinian society—a goal that USAID described as being in the interest of the Palestinians, the United States, and Israel. Figure 1 shows the location of the West Bank and Gaza relative to surrounding countries. USAID assistance to the West Bank and Gaza is conducted under antiterrorism policies and procedures outlined in an administrative policy document known as Mission Order 21. The stated purpose of the mission order, as amended, is to describe policies and procedures to ensure that the mission’s program assistance does not inadvertently provide support to entities or individuals associated with terrorism. We have previously reported on the status of ESF assistance to the Palestinians and USAID’s antiterrorism policies and procedures in the West Bank and Gaza. As of March 31, 2018, USAID had obligated about $544.1 million (over 99 percent) and expended about $350.6 million (over 64 percent) of approximately $544.5 million in ESF assistance allocated for the West Bank and Gaza in fiscal years 2015 and 2016 (see table 1). 
USAID obligated portions of the allocated funds for direct payments to PA creditors—specifically, payments to two Israeli fuel companies, to cover debts for petroleum purchases, and to a local Palestinian bank, to pay off a line of credit used for PA medical referrals to six hospitals in the East Jerusalem Hospital network. Project assistance obligated for fiscal years 2015 and 2016 accounted for about $215 million (74 percent) and $184 million (72 percent), respectively, of USAID’s obligations of ESF assistance for the West Bank and Gaza for those fiscal years (see fig. 1). Payments to the PA’s creditors accounted for the remaining obligations—about $75 million (26 percent) of fiscal year 2015 obligations and about $70 million (28 percent) of fiscal year 2016 obligations. According to USAID documents, ESF project assistance for the West Bank and Gaza for fiscal years 2015 and 2016 was obligated for three USAID development objectives: Economic Growth and Infrastructure (about $239 million), Investing in the Next Generation (about $107 million), and Governance and Civic Engagement (about $25 million). Program support—which sustains all development objectives, according to USAID—accounted for about $29 million (see table 2). Economic Growth and Infrastructure. The largest share—about 60 percent—of USAID’s ESF project assistance for the West Bank and Gaza for fiscal years 2015 and 2016 supported the agency’s Economic Growth and Infrastructure development objective. According to USAID documents, as of March 31, 2018, the agency had obligated about $239 million and expended approximately $89 million (about 37 percent) for projects under this objective. USAID officials stated that the agency funded these projects under the following standard State-budgeted program areas: health (including water), infrastructure, private sector competitiveness, and stabilization operations and security sector reform. 
The largest project—the Architecture and Engineering Services project—received about $20 million of fiscal year 2015 ESF assistance and $17 million of fiscal year 2016 ESF assistance. The purpose of the project was to rehabilitate and construct infrastructure through the procurement of infrastructure services, including engineering design and construction management, among other things. The contractor was required to coordinate with relevant PA and Israeli entities, as well as with USAID, to assist in the selection of PA water and wastewater projects and in the planning and design of water projects such as small- to large-scale water distribution systems, water treatment systems, and institutional capacity building. Investing in the Next Generation. The second-largest share—about 27 percent—of USAID’s fiscal years 2015 and 2016 ESF project assistance for the West Bank and Gaza supported the agency’s Investing in the Next Generation development objective. According to USAID documents, as of March 31, 2018, the agency had obligated about $107 million and expended approximately $79 million (about 74 percent) for projects under this objective. Program areas funded included education, health, social and economic services and protection of vulnerable populations. The largest project funded under this objective—a grant to the World Food Program for assistance to vulnerable groups—received $12 million in fiscal year 2015 and $15 million in fiscal year 2016 ESF assistance. The project focused on ensuring food security, including meeting food needs, of the nonrefugee population; increasing food availability and dietary diversity for the most vulnerable and food-insecure nonrefugee population; and establishing linkages with the Palestinian private sector (shopkeepers, farms, and factories) to produce and deliver the aid being provided to Palestinians. 
For example, the project distributed a standard food ration to vulnerable nonrefugee families through both direct food distribution and electronic food vouchers. Governance and Civic Engagement. The smallest share—about 6 percent—of USAID’s fiscal years 2015 and 2016 ESF project assistance for the West Bank and Gaza supported the agency’s Governance and Civic Engagement development objective. According to USAID documents, as of March 31, 2018, USAID had obligated about $24.6 million and expended approximately $14.5 million (about 60 percent) for projects in program areas that included civil society, good governance, and rule of law. The largest project funded under this objective—a contract for the Communities Thrive Project—received about $5.2 million and $8 million of fiscal years 2015 and 2016 ESF assistance, respectively. The project aimed to help 55 West Bank municipalities improve fiscal management, fiscal accountability and transparency, and delivery and management of municipal services, among other things. Under debt relief grant agreements with the PA, USAID made direct payments of ESF assistance to PA creditors totaling about $75 million from fiscal year 2015 funds and $70 million from fiscal year 2016 funds. USAID paid about $40 million from fiscal year 2015 funds and $45 million from fiscal year 2016 funds to two oil companies to cover debts for petroleum purchases. In addition, USAID paid about $35 million from fiscal year 2015 funds and $25 million from fiscal year 2016 funds to the Bank of Palestine, to pay off a PA line of credit that was used to cover PA medical referrals to six hospitals in the East Jerusalem Hospital network. Before using fiscal years 2015 and 2016 ESF assistance to pay PA creditors, USAID vetted the creditors to ensure that the assistance would not provide support to entities or individuals associated with terrorism, as required by its policies and procedures.
USAID determined that certain legal requirements, including the requirement for an assessment of the PA Ministry of Finance and Planning, were not applicable to direct payments of these funds to PA creditors. Nevertheless, USAID continued to commission external assessments and financial audits of the PA Ministries of Health and Finance and Planning. USAID documentation for payments to creditors shows that before signing debt relief agreements with the PA, mission officials checked, as required by Mission Order 21, the vetting status of PA creditors who would receive direct payments under the agreements, to ensure their eligibility before any payment was made. USAID Mission Order 21 requires that before payments to PA creditors are executed, the creditors must be vetted—that is, the creditors’ key individuals and other identifying information must be checked against the federal Terrorist Screening Center database and other information sources to determine whether they have links to terrorism. According to USAID policies and procedures, each PA creditor must be vetted if more than 12 months have passed since the last time the creditor was vetted and approved to receive ESF payments. We found that for payments made to PA creditors using fiscal years 2015 and 2016 ESF assistance, USAID vetted each PA creditor that received payments and completed the vetting during the 12-month period before the debt relief agreements with the PA were signed (see table 3). USAID determined that certain legal requirements applicable to cash transfers to the PA were not applicable to direct payments to PA creditors of fiscal years 2015 and 2016 ESF assistance. In September 2015, we reported that USAID ceased making cash payments directly to the PA in 2014 and began making payments of ESF assistance directly to PA creditors.
In reviewing USAID’s compliance with key legal requirements, we found that USAID had complied with the requirements when making cash transfers to the PA in fiscal year 2013. However, USAID had determined that some requirements were not applicable to direct payments made to PA creditors in fiscal year 2014, because no funds were being provided directly to the PA. After fiscal year 2015, USAID further defined the scope of statutory requirements it deemed applicable to payments to PA creditors using fiscal years 2015 and 2016 ESF assistance, under the rationale that these payments do not constitute direct payments to the PA. Specifically, according to USAID, the agency determined that the following statutory requirements discussed in our prior report were not applicable to direct payments to PA creditors: (1) a requirement to notify the Committees on Appropriations 15 days before obligating funds for a cash transfer to the PA; (2) a requirement for the PA to maintain cash transfer funds in a separate account; (3) a requirement for the President to waive the prohibition on providing funds to the PA and to submit an accompanying report to the Committees on Appropriations; (4) a requirement for the Secretary of State to provide a certification and accompanying report to the Committees on Appropriations when the President waives the prohibition on providing funds to the PA; and (5) requirements for direct government-to-government assistance, including an assessment of the PA Ministry of Finance and Planning. According to USAID officials, they currently do not plan to resume cash payments to the PA, because making direct payments to creditors minimizes the misuse of funds and assures full transparency and appropriateness of transfers.
Although USAID concluded that the statutory requirement mandating assessments of the PA Ministry of Finance and Planning did not apply to direct payments to PA creditors, the West Bank and Gaza mission commissioned external assessments of the PA Ministry of Health’s medical referral services and the Ministry of Finance and Planning’s petroleum procurement system. According to a USAID document, while the payments to the creditors did not constitute direct budget support to the PA, the agency chose to commission external assessments to determine whether the PA’s financial systems were sufficient to ensure adequate accountability for USAID funds, consistent with legislative requirements for direct budget support funds. These external assessments identified weaknesses in both systems. Ministry of Health medical referrals. The assessment report stated that the ministry did not have approved policies and procedures for the medical referral process, a list of medical services covered by the referral system, or written criteria for selecting referral hospitals. In response, in a January 2016 internal memorandum, West Bank and Gaza mission officials concluded, among other things, that the findings did not pose a significant risk to USAID funds. They also stated that the Ministry of Health’s medical referral system had adequate policies and procedures for referrals to local hospitals. However, after the assessment report was issued, a USAID contractor worked with the Ministry of Health to update, revise, and approve guidelines for medical referrals. Ministry of Finance and Planning petroleum procurements. The assessment report stated that the ministry lacked specific policies and procedures to prevent or detect fraud in the petroleum procurement system.
In the West Bank and Gaza mission’s January 2016 memorandum, USAID mission officials disagreed with the assessment’s findings regarding the petroleum procurement system, stating that the assessment did not take into account sufficient and adequate internal controls at the ministry as a first line of defense against fraud. The memorandum also stated that the finding did not affect USAID debt relief payments to the PA creditors. USAID officials told us that, while they did not believe the external assessments’ findings affected the integrity of USAID’s debt relief payment process, they took four additional steps to mitigate findings noted in the assessment of the Ministry of Finance and Planning’s fuel procurement processes. According to USAID officials, they (1) confirmed that the fuel companies had controls and systems to ensure an objective and transparent system in receiving and recording PA orders, (2) dispatched orders with official and properly signed shipping delivery and receipt documents, (3) obtained written confirmation from the fuel companies of the costs of the fuel provided to the PA, and (4) confirmed the PA’s petroleum debt with the fuel companies before initiating the payments and after making the payments. In addition, in 2016, USAID commissioned two routine financial audits of the debt relief grant agreed to by USAID and the PA for the use of fiscal year 2015 ESF assistance to make direct payments to PA creditors. According to USAID officials, the auditors were to examine the PA Ministry of Finance and Planning’s recording of USAID payments to PA creditors in its financial records as well as the ministry’s and USAID’s compliance with the terms of the grant agreement and related implementation letters. The audits did not identify any questioned or ineligible costs, reportable material weaknesses in internal control, or material instances of noncompliance with the terms of the debt relief grant. 
Also, in 2017, USAID contracted for a financial audit of the fiscal year 2016 debt relief grant agreed to by USAID and the PA. According to a USAID document, in May 2018, USAID held an entrance conference with the PA Ministry of Finance and Planning for the audit of the fiscal year 2016 grant. In July 2018, USAID sent the final audit report to the Regional Inspector General for review. According to the USAID document, the report did not identify any questioned or ineligible costs, reportable material weaknesses in internal controls, or material instances of noncompliance with the terms of the grant. We provided a draft of this report to USAID and State for review and comment. USAID provided comments, which we have reproduced in appendix II, as well as technical comments, which we incorporated as appropriate. State did not provide comments. We are sending copies of this report to the appropriate congressional committees, the Administrator of USAID, and the Secretary of State. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who contributed to this report are listed in appendix III. Appropriations acts for fiscal years 2015 and 2016 included provisions for GAO to review the treatment, handling, and uses of funds provided through the ESF for assistance to the West Bank and Gaza.
This report examines (1) the status of ESF assistance and projects provided to the West Bank and Gaza for fiscal years 2015 and 2016, including payments to PA creditors, and (2) the extent to which USAID conducted required vetting of PA creditors to ensure that assistance would not support entities or individuals associated with terrorism and assessed PA ministries’ capacity to use ESF assistance as intended. To address our first objective, we reviewed appropriations legislation, related budget justification documents, and financial data for fiscal years 2015 and 2016, including expenditures as of March 31, 2018, provided by USAID’s West Bank and Gaza mission in Tel Aviv, Israel. We reviewed data that USAID provided on obligations and expenditures of all ESF assistance for the West Bank and Gaza as of March 31, 2018, from annual allocations for fiscal years 2015 and 2016. We also reviewed relevant USAID documents, including notifications to Congress regarding the use of appropriated funds. In addition, we interviewed USAID and State officials in Washington, D.C., and Tel Aviv. To determine whether the data were sufficiently reliable for the purposes of this report, we requested and reviewed information from USAID officials about their procedures for entering contract and financial information into USAID’s data system. We determined that the USAID data were sufficiently reliable. For the project information included in this report, we relied on data that USAID provided, showing its obligations and expenditures of fiscal year 2015 and 2016 ESF assistance for the West Bank and Gaza. For illustrative purposes, we requested and obtained from USAID descriptions of projects that, according to USAID officials, represented the largest financial obligations for each development objective in fiscal years 2015 and 2016. To address our second objective, we identified and reviewed relevant legal requirements as well as USAID policies and procedures to comply with those requirements.
USAID Mission Order 21 is the primary document that details USAID procedures to ensure that the mission’s assistance program does not provide support to entities or individuals associated with terrorism, consistent with the prohibition on such support found in relevant laws and executive orders. In addition, we reviewed 27 USAID determinations of compliance for payments to PA creditors and discussed with USAID mission officials their efforts to comply with the policies and procedures in Mission Order 21 before executing payments to hospitals, companies, and banks that facilitated the payments. We also reviewed the timing of USAID’s vetting of each PA creditor that received payments, to ensure that, as required by Mission Order 21, the vetting occurred within 12 months before USAID signed the relevant debt relief grant agreement with the PA. Further, we reviewed external assessments of the PA Ministries of Health and Finance and Planning and financial audits of the PA Ministry of Finance and Planning, and we discussed the assessments and audits with USAID officials responsible for payments to PA creditors. We conducted this performance audit from September 2017 to August 2018, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Judith McCloskey (Assistant Director), Tom Zingale (Analyst-in-Charge), Eddie Uyekawa, Jeff Isaacs, and Nicole Willems made significant contributions to this report. David Dornisch, Neil Doherty, Reid Lowe, and Roger Stoltz also contributed to the report.
Since 1993, the U.S. government has committed more than $5 billion in bilateral assistance to the Palestinians in the West Bank and Gaza. According to the Department of State, this assistance to the Palestinians promotes U.S. economic and political foreign policy interests by supporting Middle East peace negotiations and financing economic stabilization programs. USAID is primarily responsible for administering ESF assistance to the West Bank and Gaza. Appropriations acts for fiscal years 2015 and 2016 included provisions for GAO to review the treatment, handling, and uses of funds provided through the ESF for assistance to the West Bank and Gaza. This report examines (1) the status of ESF assistance and projects provided to the West Bank and Gaza for fiscal years 2015 and 2016, including project assistance and payments to PA creditors, and (2) the extent to which USAID conducted required vetting of PA creditors to ensure that this assistance would not support entities or individuals associated with terrorism and assessed PA ministries' capacity to use ESF assistance as intended. GAO reviewed relevant laws and regulations and USAID financial data, policies, procedures, and documents. GAO also interviewed USAID and State Department officials. As of March 2018, the U.S. Agency for International Development (USAID) had allocated about $545 million of funding appropriated to the Economic Support Fund (ESF) for assistance in the West Bank and Gaza for fiscal years 2015 and 2016. USAID obligated about $544 million (over 99 percent) and expended about $351 million (over 64 percent) of the total allocations. Project assistance accounted for approximately $399 million of the obligated funds, while payments to Palestinian Authority (PA) creditors accounted for $145 million (see figure). 
USAID's obligations for project assistance in the West Bank and Gaza for fiscal years 2015 and 2016 supported three development objectives—Economic Growth and Infrastructure ($239 million), Investing in the Next Generation ($107 million), and Governance and Civic Engagement (about $25 million). In fiscal years 2015 and 2016, USAID made payments directly to PA creditors—two Israeli fuel companies, to cover debts for petroleum purchases, and a local Palestinian bank, to pay off a line of credit used for PA medical referrals to six hospitals in the East Jerusalem Hospital network. USAID vetted PA creditors to ensure that the program assistance would not provide support to entities or individuals associated with terrorism and also conducted external assessments and financial audits of the PA Ministries of Health and Finance and Planning. USAID documentation showed that, as required, officials checked the vetting status of each PA creditor within 12 months before USAID signed its debt relief grant agreements with the PA. In addition, although USAID determined that it was not legally required to assess the PA Ministry of Health's medical referral services and the Ministry of Finance and Planning's petroleum procurement system, the agency commissioned external assessments of both ministries. These assessments found some weaknesses in both ministries' systems; however, USAID mission officials stated that these weaknesses did not affect USAID debt relief payments to the PA creditors. Nevertheless, USAID took additional steps to mitigate the identified weaknesses. For example, a USAID contractor worked with the Ministry of Health to update, revise, and approve guidelines for medical referrals. In addition, USAID commissioned financial audits of the debt relief grant agreements between USAID and the PA for direct payments to PA creditors in fiscal years 2015 and 2016.
The audits did not identify any ineligible costs, reportable material weaknesses in internal control, or material instances of noncompliance with the terms of the agreements. GAO is not making recommendations in this report.
Performance management systems can be powerful tools in helping an agency achieve its mission and ensuring employees at every level of the organization are working toward common ends. According to OPM regulations, performance management is a systematic process by which an agency involves its employees, both as individuals and as members of a group, in improving organizational effectiveness in the accomplishment of agency mission and goals. An agency’s performance management system defines policies and parameters established by the agency for the administration of performance appraisal programs. Under federal law and corresponding regulations, agencies are required to develop at least one employee performance appraisal system. OPM is required to review and approve an agency’s performance appraisal system(s) to ensure it is consistent with the requirements of applicable law, regulation, and OPM policy and defines the general policies and parameters the agency will use to rate employees. Once the appraisal system is approved, the agency establishes a performance appraisal program. The agency’s performance appraisal program—which does not require OPM review or approval—defines the specific procedures, methods, and requirements for planning, monitoring, and rating employee performance. The program is tailored to the agency’s needs. OPM policy identifies five phases of the performance management cycle: (1) planning work and setting expectations; (2) continually monitoring performance; (3) developing the capacity to perform; (4) rating periodically to summarize performance; and (5) rewarding good performance (see table 1). According to OPM, performance management is a continuous cycle in which an agency involves its employees, both as individuals and as members of a group, in improving organizational effectiveness in accomplishing agency mission and goals (see figure 1).
Each phase of the performance management cycle plays an important part in helping to provide structure and focus to an employee’s roles and responsibilities within the organization. Within each phase of the cycle, employees are given the opportunity to provide input, ask questions, and request feedback from their supervisors on their performance. One of the tools agencies can use to determine the effectiveness of their performance management cycle is data from OPM’s annual FEVS. FEVS gathers federal employees’ opinions about what matters most to them and how they feel about their jobs, their supervisors, and their agencies; these results can help agencies identify challenges and improve guidance. FEVS measures employees’ perceptions of whether, and to what extent, conditions characterizing successful organizations are present in their agencies. According to OPM, the federal workforce is the backbone of the government, and employee opinions shared through FEVS provide an essential catalyst to achieving effective government. From 2010 through 2017, surveyed employees generally responded positively to FEVS statements related to four of OPM’s five performance management phases: planning and setting expectations, monitoring performance, developing the capacity to perform, and rating performance (as shown in figure 2). Employees had the lowest levels of agreement with statements related to rewarding performance (an estimated 39 percent positive response). We have previously reported that an explicit alignment of daily activities with broader results is one of the defining features of effective performance management systems in high-performing organizations.
These organizations use their performance management systems to improve performance by helping individuals see the connection between their daily activities and organizational goals (a line of sight) and by encouraging individuals to focus on their roles and responsibilities to help achieve these goals. Such organizations continuously review and revise their performance management systems to support their strategic and performance goals, as well as their core values and transformational objectives. Based on surveyed employees’ responses, agencies were more successful at planning and setting expectations—which includes how an employee’s work relates to the agency’s goals and priorities—than at all other phases of performance management. The responses to these statements highlight the role agencies have in providing information to employees about their responsibilities within the organization. Of the three selected FEVS statements for this phase, “I know how my work relates to the agency’s goals and priorities” was the statement with the highest percentage of employees who agreed or strongly agreed across all of our selected FEVS statements from 2010 to 2017 (see figure 3). Performance management and feedback should be used to help employees improve so that they can do the work or—in the event they cannot do the work—so management can take appropriate action for unacceptable performance. The first opportunity a supervisor has to observe and correct poor performance is in day-to-day performance management activities. We have previously reported that, in general, agencies have three means to address employees’ poor performance, with dismissal as a last resort: (1) day-to-day performance management activities (which should be provided to all employees, regardless of their performance levels); (2) dismissal during probationary periods; and (3) use of formal procedures to dismiss employees.
We have also reported that supervisors who take performance management seriously and have the necessary training and support can help poorly performing employees either improve or realize they are not a good fit for the position. However, some supervisors may lack experience and training in performance management, as well as an understanding of the procedures for taking corrective actions against poor performers. We previously recommended that OPM, in conjunction with the Chief Human Capital Officers (CHCO) Council, assess the adequacy of leadership training that agencies provide to supervisors to help ensure supervisors obtain the skills needed to effectively conduct performance management responsibilities. In response, OPM conducted a survey to assess the adequacy of leadership training that agencies provide to supervisors. Based on the survey results, OPM issued a memorandum in May 2018 recommending a number of actions agencies should take to improve the accessibility, adequacy, and effectiveness of supervisory training. Of the FEVS statements we analyzed, the statement “In my work unit, steps are taken to deal with a poor performer who cannot or will not improve” had the lowest percentage of positive agreement among surveyed employees each year from 2010 to 2017 government-wide. However, the other two statements selected for this phase were viewed much more positively by surveyed employees (see figure 4). When we further analyzed the responses to the statement on poor performance, employees’ levels of agreement differed based on their supervisory level. On average, an estimated 25 percent of surveyed employees who identified themselves as nonsupervisors and team leaders agreed with this statement from 2010 through 2017, compared with an estimated average of 54 percent of surveyed employees who identified themselves as managers (see figure 5).
According to OPM guidance, the capacity to perform means having the competencies, the resources, and the opportunities available to complete the job. We have previously reported that the essential aim of training and development programs is to assist an agency in achieving its mission and goals by improving individual and, ultimately, organizational performance. In addition, constrained budgets and the need to address gaps in critical federal skills and competencies make it essential that agencies identify the appropriate level of investment and establish priorities for employee training and development. This allows the most important training needs to be addressed first. However, fewer surveyed employees agreed with the statement “My training needs are assessed” than with the other statements in this phase (see figure 6). Supervisors should establish performance standards that clearly express what is expected of the employee. On average, an estimated 82 percent of surveyed employees agreed or strongly agreed with the statement “I am held accountable for achieving results” from 2010 to 2017 (see figure 7). Overall, this statement had the second highest level of agreement of the 15 statements selected for our review. According to OPM’s website for performance management, while accountability means being held answerable for accomplishing a goal or assignment, the guidance cautions against using accountability only to punish employees, because fear and anxiety may permeate the work environment. This may prevent employees from trying new methods or proposing new ideas for fear of failure. The guidance also states that, if approached correctly, accountability can produce positive, valuable results. According to OPM guidance, rewards are used often and well in an effective organization.
We have previously reported that high-performing organizations seek to create effective incentive and reward systems that clearly link employee knowledge, skills, and contributions to organizational results. Rewarding means recognizing employees, individually and as members of groups, for their performance and acknowledging their contributions to the agency’s mission. According to OPM’s website for performance management, the types of awards include cash, honorary recognition, informal recognition, and time off without charge to leave or loss of pay. From 2010 to 2017, an estimated 39 percent of surveyed employees consistently agreed with statements related to how their agency rewards performance (see figure 8). Of the five phases of performance management, the statements related to this phase consistently had the least positive agreement from surveyed employees. We have previously reported that effective performance management requires the organization’s leadership to make meaningful distinctions between acceptable and outstanding performance of individuals. Approximately one-third of surveyed employees agreed or strongly agreed with the statement “In my work unit, differences in performance are recognized in a meaningful way.” Meaningful distinctions in performance ratings are the starting point for candid and constructive conversations between supervisors and staff. These distinctions also add transparency to the ratings and rewards process. In addition, such distinctions help employees better understand their relative contributions to organizational success, areas where they are doing well, and areas where improvements are needed. We also found that, across our selected statements, many of the largest gaps between supervisors and other employees were related to rewarding performance.
Specifically, the responses to the statement “Promotions in my work unit are based on merit” varied the most based upon the supervisory status of the employee (see figure 9). On average, senior leaders agreed or strongly agreed with this statement at a rate an estimated 40 percentage points higher than that of employees in a nonsupervisory role. We have previously reported that agencies must design and administer merit promotion programs to ensure a systematic means of selection for promotion based on merit. We have also previously reported that perceptions of favoritism, particularly when combined with unclear guidance, a lack of transparency, and limited feedback, negatively impact employee morale. Senior leaders and managers agreed or strongly agreed with the statement “In my work unit, differences in performance are recognized in a meaningful way” more frequently than surveyed employees who identified themselves as nonsupervisors (see figure 10). Those who identified themselves as team leaders and nonsupervisors agreed with the statement less frequently than all of the other categories of supervisory status. For example, in 2017, an estimated 69 percent of senior leaders agreed or strongly agreed with the statement, compared to an estimated 48 percent of supervisors and an estimated 33 percent of nonsupervisors and team leaders. Finally, senior leaders and managers agreed or strongly agreed with the statement “Employees are recognized for providing high quality products and services” more frequently than nonsupervisors (see figure 11). An effective performance management system can be a strategic tool to improve employee engagement and achieve an agency’s desired results. We found that the selected agencies demonstrated some similar practices, which may have contributed to their relatively high scores on FEVS statements related to performance management.
Specifically, employees at the Bureau of Labor Statistics (BLS), the Centers for Disease Control and Prevention (CDC), the Drug Enforcement Administration (DEA), and the Office of the Comptroller of the Currency (OCC) consistently agreed or strongly agreed with selected FEVS statements related to the five phases of OPM’s performance management cycle. While these agencies developed different performance management systems to reflect their specific structures and priorities, we found a number of practices common to all four agencies that are intended to help reinforce effective employee performance management and improve agency performance (see figure 12). All four agencies agreed that these practices contributed to their employees’ responses to the selected FEVS statements and improved performance management. We have previously reported that organizations with more constructive cultures generally perform better and are more effective. Within constructive cultures, employees exhibit a stronger commitment to mission focus, accountability, coordination, and adaptability. According to OPM FEVS guidance, climate assessments like FEVS are consequently important to organizational improvement, largely because of the key role culture plays in directing organizational performance. Each of the agencies in our review cited a strong organizational culture that was based on and tied to the agency’s mission. Table 2 highlights examples from CDC and DEA. Each of the four selected agencies in our review demonstrated a focus on analyzing FEVS data to identify areas of improvement and creating action plans based on the analysis. According to OPM guidance on FEVS, the results from the survey can be used by agency leaders to assist in identifying areas in need of improvement as well as to highlight important agency successes.
FEVS findings allow agencies to assess trends by comparing earlier results with the 2017 results, to (1) compare agency results with the government-wide results, (2) identify current strengths and challenges, and (3) focus on short- and long-term action targets that will help agencies reach their strategic human resource management goals. The recommended approach to assessing and driving change in agencies uses FEVS results in conjunction with other resources, such as results from other internal surveys, administrative data, focus groups, and exit interviews. We have previously reported that for agencies to attain the ultimate goal of improving organizational performance, they must take a holistic approach—analyzing data, developing and implementing strategies to improve engagement, and linking their efforts to improved performance. We have also previously reported that OPM stated that agencies are increasingly using FEVS as a management tool to help them understand issues at all levels of an organization, and to take specific action to improve employee engagement and performance. Further, OPM officials noted that if agencies, managers, and supervisors know that their employees will have the opportunity to provide feedback each year, they are more likely to take responsibility for influencing positive change. We found that all four of the selected agencies were building a culture of analyzing their FEVS results to identify areas of improvement and to develop action plans to achieve results, including improving performance management (see table 3). In addition, three of the four selected agencies also used other practices. These practices include using other available survey results to corroborate identified action plans and identify additional areas needing support, creating a more complete picture of the employee perspective.
We have previously reported that an agency’s FEVS scores should be used as one of several data sources as leaders attempt to develop a comprehensive picture of engagement within an organization, and better target their engagement efforts, particularly in times of limited resources. The key is identifying what practices to implement and how to implement them; this can and should come from multiple sources. Three of the four case study agencies—BLS, CDC, and DEA—use supplemental survey data to help focus agency efforts to improve performance management. For example, DEA developed its own internal survey—the Leadership Engagement Survey—in 2016 because it identified leadership as a key driver of organizational climate and employee engagement. According to agency officials, there was a strong internal push to use the survey results to identify areas of improvement. The fourth agency, OCC, had administered a separate internal engagement survey from 2013 to 2016. According to agency officials, however, they discontinued this effort to focus exclusively on FEVS as the primary survey data source and to reduce the redundancy of two surveys. However, OCC emphasized the need to consider FEVS data as only one source of data, at a point in time, and to use a diversity of other data (quantitative and qualitative) to inform the survey results. As we have previously reported, agencies invest significant time and resources in recruiting potential employees, training them, and providing them with institutional knowledge that may not be easily or cost-effectively replaceable. Therefore, effective performance management, which consists of activities such as expectation-setting, coaching, and feedback, can help sustain and improve employee performance. We have also reported that good supervisors are key to the success of any performance management system.
Supervisors provide the day-to-day performance management activities that can help sustain and improve the performance of more talented staff, and can help marginal performers become better. However, agencies may not be providing supervisors with the appropriate training that prepares them for success, such as in having difficult performance management conversations. Moreover, we have previously reported that mission-critical skills gaps across the federal government pose a high risk because they impede the government from cost-effectively serving the public and achieving results. Strategies to address these gaps include training and development activities focused on improving employees’ skills needed for mission success. All four selected agencies had taken steps to identify appropriate training not only for supervisors, but for all employees. For example, BLS conducted a general training needs assessment (TNA) for all employees in 2016. BLS officials stated that the purpose of the TNA was to give employees an avenue to express their interests in various kinds of training. Employee responses were used to inform elements of the BLS training plan for fiscal year 2017. As a result of the TNA, BLS is conducting a training evaluation of its vendor-provided writing courses. During this evaluation, BLS hopes to determine whether the techniques and material taught in these courses have actually resulted in the expected improvements, as observed by supervisors and managers, in the writing of employees who have taken the courses. TNA results showed that managers also expressed a strong interest in additional training on employee leave, labor relations, and employee relations. BLS officials stated that courses on these topics were provided as part of the agency’s fiscal year 2017 training plan. As another example, CDC developed two onboarding checklists for new executives in 2017 for training purposes.
The intent was to provide a comprehensive, consistent onboarding experience so that new executives are more engaged and knowledgeable. In addition, within the last year, the agency developed a mentoring circle for new supervisors that meets monthly. The purpose of the circle is to provide new supervisors with insider help from their peers, such as how to handle difficult situations. Supervisors are also provided assistance through the agency’s performance management appraisal working group. This group meets quarterly to discuss how to better assist supervisors and employees with performance management related questions. We have previously reported that successful organizations empower and involve their employees to gain insights about operations from a frontline perspective, increase their understanding and acceptance of organizational goals and objectives, and improve motivation and morale. We have also previously reported that what matters most in improving engagement levels is valuing employees—that is, an authentic focus on their performance, career development, and inclusion and involvement in decisions affecting their work. Each of the selected agencies in our review stated that they had made efforts over the last few years to improve internal communication between management and employees, as well as increase the transparency of actions taken and decisions made by management. For instance, BLS hosts quarterly breakfast sessions with the BLS Commissioner in which employees have access to agency leadership where they can offer suggestions or feedback. BLS also provides agency information through its intranet website, which is updated almost daily. Examples include features such as the BLS Daily Report, What’s Up at BLS, and BLS tweets. 
Specifically, the What’s Up at BLS feature of the BLS intranet is an internal communications hub with four sections, including “Employee and Team Spotlight”—highlighting the work of employees and teams across the agency—and “Changing Lanes,” which features stories about employees who decided to switch their career paths by changing occupations or programs within BLS. According to OCC officials, the agency has increased the frequency of agency-wide communications and those from middle management that cascade priorities, decisions, and organizational changes to employees. OCC has also executed enterprise change management to manage the people side of change, including building awareness, knowledge, and ability through stakeholder analysis and communications planning. It also maintains an engagement portal for teams to document action plans related to employee engagement—of which there are more than 200 action items related to improved communications using a top-down and two-way approach. As the government’s chief human resources agency and personnel policy leader, OPM’s role in the federal government is to, among other things, design and promulgate regulations, policy, and guidance covering all aspects of the employee life cycle from hire to retire, including performance management. OPM provides such performance management guidance and resources to agencies on its website, as shown in figure 13, as well as in a new Performance Management Portal (portal) accessible through the Office of Management and Budget’s (OMB) MAX Information System (MAX). Examples of guidance and resources include information on the five phases of the performance management cycle, descriptions of how to write performance standards, critical components of effective and timely feedback, answers to frequently asked performance management questions, and a list of the various award programs open to employees from all federal agencies.
In addition, the Chief Human Capital Officers (CHCO) Council’s website includes information provided by OPM on performance management as well as various OPM memorandums to CHCOs, human resource directors, and agency leaders. According to OPM officials, information on the performance management website is reserved for policy guidance based on current and applicable law and regulation. As such, only minor updates have been made to the website because the law and regulatory requirements for performance management have not recently changed. However, no date on the website indicates when it was last updated. OPM officials stated that the last update made to the website was in June 2016, when an external entity requested that a public service award be added to OPM’s awards list page. However, OPM has issued training, guidance, and other performance management-related resources since that last website update. Specifically, we examined more than 100 performance management-related online links on both OPM’s and the CHCO Council’s websites, and found that in some instances the CHCO Council’s website included more up-to-date information issued by OPM that was not found on OPM’s performance management website.
Some examples include:
The release of OPM’s web-based training course, “Basic Employee Relations: Your Accountability as a Supervisor or Manager,” dated October 12, 2016;
Management Tools for Maximizing Employee Performance, dated January 11, 2017;
Performance Management Guidance and Successful Practices in Support of Agency Plans for Maximizing Employee Performance, dated July 17, 2017;
The release of OPM’s web-based training course, “Performance Management Plus—Engaging for Success,” dated October 6, 2017;
Federal Supervisory Training Program Survey Results, dated May 21,
Guidance for Implementation of Executive Order 13839 - Promoting Accountability and Streamlining Removal Procedures Consistent with Merit System Principles, dated July 5, 2018.
According to OPM officials, the agency does not coordinate with the CHCO Council on its website postings. However, OPM officials stated that performance management guidance approved by OPM is provided to the CHCO Council. We did not find any reference to the CHCO Council’s website using OPM’s internal search engine with the term “performance management” (see figure 14). As a result, agency officials and federal employees who are looking for comprehensive information on performance management using OPM’s website may be unable to easily find or access related performance management guidance or resources. A 2016 Office of Management and Budget memorandum on federal agency public websites and digital services states that federal agency public websites and digital services are the primary means by which the public receives information from and interacts with the federal government; provide government information or services to specific user groups across a variety of delivery platforms and devices; and support the proper performance of agency functions.
The memorandum states that, “Federal websites and digital services should provide quality information that is readily accessible to all.” In addition, federal internal control standards state that management should use quality information to achieve the entity’s objective. Quality information should be appropriate, current, complete, accurate, accessible, and timely. However, OPM does not have a process for regularly updating its performance management website with new guidance and resources to ensure that the information is readily available. Agency employees, such as human capital specialists, who visit OPM’s performance management website may be unable to find or access the most recent guidance and training available. In addition to its website, OPM officials stated that the agency recently launched the Performance Management Portal (portal) in September 2017 on OMB MAX to communicate with agencies and provide information and resources related to non-SES performance management, as highlighted earlier. OPM officials said that the portal will be updated with information regarding announcements or updated guidance as needed, or when it is released and becomes available. Although not as comprehensive as the information included on OPM’s performance management website, the portal included slides from OPM’s semiannual facilitated performance management forums and updated information on awards guidance for non-SES employees for fiscal year 2017—neither of which were on OPM’s website. As the government’s chief human resources agency, agencies may see OPM as their primary source of performance management guidance. By establishing a process to ensure that information on the performance management website is regularly updated to include the most recent guidance, agencies would have access to the most current information. OPM provides opportunities for agencies to share promising practices. 
For example, OPM has several efforts in place that allow agencies to share promising information with each other such as at its semiannual Performance Management Forums (forums), annual Performance Management Steering Committee meetings, and through the previously mentioned portal. According to OPM, the forums provide agencies with updated information, guidance, and support to encourage performance excellence amongst employees. In 2017, OPM began holding annual steering committee meetings which allow interagency representatives to discuss the needs of the federal performance management community, to identify and/or request potential content for future forums, and to share promising practices and lessons learned regarding performance management, according to OPM officials. However, there is no formal process in place or mechanism for agencies to routinely and independently share their own experiences and lessons learned in implementing performance management efforts. For instance, the portal does not currently allow for agencies to post and share their own promising practices with each other in a centralized location. Instead, agencies must rely on OPM to post such information on the portal. OPM officials stated that, although permission to view the portal is granted to all users in the executive branch with a MAX account, OPM is the only agency that has permission to make edits to the portal. OPM officials said they are exploring options to allow for an interactive experience with other agencies. Federal internal control standards state that management should externally communicate the necessary quality information to achieve the entity’s objective. Additionally, our prior work on collaboration practices has shown that agencies can enhance and sustain collaborative efforts, and identify and address needs by leveraging resources, such as through sharing information. 
Establishing a mechanism to allow agencies to routinely share promising practices and lessons learned from their experiences could assist agencies that are undertaking or considering similar efforts and help inform agencies’ decision-making related to performance management. In addition to driving modernization, OPM identified innovation as one of its five values in its most recent strategic plan for fiscal years 2018 through 2022. Specifically, OPM stated that the agency “constantly seeks new ways to accomplish its work and generate extraordinary results. OPM is dedicated to delivering creative and forward-looking solutions and advancing the modernization of human resources management.” OPM officials stated that innovation was included as one of OPM’s values because the agency seeks to embrace forward-leaning policies and practices within all aspects of human capital management. While OPM officials told us that they maintain a constant scan of the environment to identify and follow promising practices—which could include innovative concepts—in the private sector and other sources to include performance management and performance management systems, they did not specifically identify which promising practices they incorporated into guidance or training. In addition, when we asked OPM to identify innovative performance management practices based on its own research, officials provided us with articles from leading experts that focused on eliminating performance ratings, using a growth mindset concept, and the SCARF model—status, certainty, autonomy, relatedness, and fairness—for collaborating with and influencing others. They also provided references and their notes on new performance management system programs at three corporations. OPM officials said they have not placed these articles, references, or notes on their performance management website or shared them with agencies, and have no plans to do so at this time. 
Instead, OPM officials stated they were monitoring the progress of these new practices to assess whether the methods were effective in maximizing employee and organizational outcomes, in addition to stimulating collaboration and innovation. However, OPM provided no criteria for determining when the results would be considered effective or when they could be shared with agencies. Without OPM sharing its research results, agencies may be unaware of current practices in the performance management field because they may not be conducting their own research. Including innovation as an agency value is not sufficient to change an organization’s culture for it to become innovative; it is necessary to also introduce, for example, a strategy to identify and address emerging research and promising practices in performance management. Such a strategic approach could include criteria that identify what research results to share with agencies, when to share them, and by which process (for example, by website). It would also enable OPM to increase transparency and consistency in identifying emerging innovations. One of our case study agencies told us that in the absence of OPM providing research results, the agency used its own resources to research and identify leading practices in the private sector that could potentially apply to its own performance management system, such as focusing on ongoing performance conversations and recognition to increase engagement and performance, while reducing burdensome administrative requirements that do not add value. Officials at this agency stated that OPM’s guidance had not been modernized to keep pace with changes in the human capital and performance management industry. Without OPM taking the lead to share emerging and innovative research, agencies, and therefore their employees, may not benefit from the best information available.
Although OPM identified innovation as one of its five values, we were unable to find any recent information on innovation for performance management in the government on OPM’s website. Specifically, we used “innovation performance management” as a search term on the website and found the “Promoting Innovation in Government” web page, which included archived material and was no longer being updated (see figure 15). As a result, agencies that use OPM’s website as a source of performance management guidance would be unable to find any current resources on performance management innovation. OPM officials explained that older material is archived based on the current leadership’s vision. The officials also confirmed that OPM did not have other active websites containing innovative performance management practices gathered from external sources that could be shared with other federal agencies. Implementing a strategic approach to sharing innovation in performance management would allow OPM to provide relevant and updated information that agencies could use to modernize their performance management systems. Managing employee performance has been a long-standing government-wide issue. As the current administration moves to reform the federal government to become leaner, more accountable, and more efficient, an effective performance management system is necessary to increase productivity, sustain transformation, and foster a culture of engagement that enables high performance. Federal agencies have primary responsibility for managing their employees’ performance, but OPM maintains a key role in developing and overseeing human resources programs and policies that support the needs of federal agencies. As the government’s chief human resources agency and personnel policy leader, OPM is responsible for designing and promulgating regulations, policy, and guidance covering all aspects of the employee life cycle, including performance management.
While OPM provides performance management resources on its website, some information is not regularly updated and can be challenging to find. Establishing a process to give agencies easy access to current and accurate guidance and resources would ensure they have the most recent information available. To be at the forefront of innovation, OPM must consistently challenge traditional performance management practices and identify opportunities to present and promote new and creative solutions to agencies. Although OPM has identified potential innovative and promising practices for performance management through its own research, it has not actively shared these practices with agencies. In addition, agencies do not have access to a common forum by which they could routinely and independently share their own promising practices and lessons learned to avoid common pitfalls. In times of limited resources, developing a strategic approach to identify and share emerging research and innovations in performance management would help agencies inform and, as needed, reform their performance management approaches. As a result, federal employees may have more opportunities to maximize their performance. We are making the following three recommendations to OPM. Specifically:
1. The Director of OPM, in consultation with the CHCO Council, should establish and implement a process for regularly updating the performance management website to include all available guidance and resources, making this information easily accessible, and providing links to other related websites. (Recommendation 1)
2. The Director of OPM, in consultation with the CHCO Council, should develop and implement a mechanism for agencies to routinely and independently share promising practices and lessons learned, such as through allowing agencies to post such information on OPM’s Performance Management portal. (Recommendation 2)
3.
The Director of OPM, in consultation with the CHCO Council, should develop a strategic approach for identifying and sharing emerging research and innovations in performance management. (Recommendation 3) We provided a draft of this report to the Secretaries of the Departments of Health and Human Services (Centers for Disease Control and Prevention), Labor (Bureau of Labor Statistics), and Treasury (Office of the Comptroller of the Currency), the Acting Attorney General (Drug Enforcement Administration) and the Acting Director of OPM. In its written comments, reproduced in appendix II, OPM agreed with our findings and concurred with our recommendations. It added that it would establish and implement a process for regularly updating its performance management website, among other things. OPM and the Departments of Health and Human Services, Labor, and Treasury also provided technical comments that we incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretaries of the Departments of Health and Human Services, the Department of Labor, the Department of the Treasury, the Acting Attorney General, the Acting Director of OPM, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2757 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
This report (1) describes federal employee perceptions of performance management as measured by the results of selected statements from the Office of Personnel Management’s (OPM) annual survey of federal employees, the Federal Employee Viewpoint Survey (FEVS); (2) identifies practices that selected agencies use to develop and implement strategies to improve performance management; and (3) evaluates OPM’s guidance and resources to support agency efforts to improve performance management government-wide. FEVS provides a snapshot of employees’ perceptions about how effectively agencies manage their workforce. Topic areas are employees’ (1) work experience, (2) work unit, (3) agency, (4) supervisor, (5) leadership, (6) satisfaction, (7) work-life, and (8) demographics. OPM has administered FEVS annually since 2010. From 2002 to 2010, OPM administered the survey biennially. FEVS includes a core set of statements. Agencies have the option of adding questions to the surveys sent to their employees. FEVS is based on a sample of full- and part-time, permanent, nonseasonal employees of departments and large, small, and independent agencies. According to OPM, the sample is designed to ensure representative survey results would be reported by agency, subagency, and senior leader status, as well as for the overall federal workforce. Once the necessary sample size is determined for an agency, if more than 75 percent of the workforce would be sampled, OPM conducts a full census of all permanent, nonseasonal employees. To describe government-wide trends in employee perceptions of performance management, we selected 15 FEVS statements that generally align with the five phases of OPM’s performance management cycle: (1) planning and setting expectations; (2) continually monitoring performance; (3) developing the capacity to perform; (4) rating periodically to summarize performance; and (5) rewarding good performance (see table 4).
We used indexes such as the Employee Engagement Index, the Human Capital Assessment and Accountability Framework Results-Oriented Performance Culture Index, and the Partnership for Public Service’s Best Places to Work categories to help guide our selection of three FEVS statements per OPM performance management phase. We did not look at how surveyed employees responded to the statements when considering which ones to select. Upon selection of our statements, we consulted with our internal human capital (HC) experts as well as external HC experts at OPM and the Merit Systems Protection Board to determine the appropriateness of our FEVS statement selection and categorization. They generally agreed that these statements aligned with the phases. However, FEVS was not designed to measure performance management and, although these statements all provide useful insights, they do not necessarily represent all key aspects of performance management. In addition, we analyzed the 15 FEVS performance management-related questions by supervisory status for the 24 Chief Financial Officers Act (CFO Act) departments and agencies for the years 2010 through 2017. We conducted this analysis because our prior work had shown that supervisory status was the employee population variable that displayed the greatest degree of difference in responses between the categories of respondents in it. For this report, we did not analyze the extent of differences in responses to the performance management questions by other employee population groups, such as age or gender, because that was outside the scope of our engagement. We examined the results for the 15 FEVS questions by supervisory groups, and report the 4 that had the greatest degree of differences by supervisory level.
All 4 of these had differences of at least 28 percentage points between the most and least favorable categories of respondents, while the remaining 11 had differences in the range of 2 to 25 percentage points between the views of senior leaders and nonsupervisory employees. To identify trends, we calculated the average percentage of employees who agreed or strongly agreed with the three statements comprising each phase, including only respondents who answered all three statements; respondents who did not answer one or more of the phase statements were excluded. Because OPM followed a probability procedure based on random selections for most agencies, the FEVS sample is only one of a large number of samples that could have been drawn. Since each sample could have provided different estimates, we express our confidence in the precision of the FEVS statement estimates using the margin of error at the 95 percent level of confidence. This margin of error is the half-width of the 95 percent confidence interval for a FEVS estimate. A 95 percent confidence interval is the interval that would contain the actual population value for 95 percent of the samples that OPM could have drawn. To assess the reliability of the FEVS data, in addition to assessing the sampling error associated with the estimates, we examined descriptive summary statistics and the distribution of both the survey data and the human capital framework indexes, and assessed the extent of item-missing data. We also reviewed FEVS technical documentation. On the basis of these procedures, we believe the data were sufficiently reliable for use in the analysis presented in this report. To identify practices used by selected agencies to develop and implement strategies to improve performance management, we complemented our government-wide analysis with an additional analysis of agencies (those agencies and units within 1 of the 24 CFO Act departments).
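The averaging and margin-of-error calculations described above can be sketched in a few lines of Python. This is a minimal illustration, not GAO's actual analysis code: the function names and the sample figures are hypothetical, and the margin-of-error formula shown is the simple-random-sample approximation, whereas FEVS estimates are produced under a more complex weighted sample design.

```python
import math

def phase_average(responses):
    """Average percent agreeing or strongly agreeing across the three
    statements in a phase, counting only respondents who answered all
    three. Each response is a (s1, s2, s3) tuple of 1 (agree/strongly
    agree), 0 (any other answer), or None (item missing)."""
    complete = [r for r in responses if None not in r]
    if not complete:
        return None
    return 100.0 * sum(sum(r) for r in complete) / (3 * len(complete))

def margin_of_error(p_hat, n, z=1.96):
    """Half-width of the 95 percent confidence interval for an estimated
    proportion p_hat (expressed as 0-1) from a simple random sample of
    size n."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical figures: a 69 percent agreement estimate from 400 sampled
# respondents carries a margin of error of roughly 4.5 percentage points,
# i.e., a 95 percent confidence interval of about 64.5 to 73.5 percent.
moe_points = 100 * margin_of_error(0.69, 400)
```

Note that a respondent who skips any one of a phase's three statements drops out of that phase's average entirely, which mirrors the exclusion rule stated above.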
Specifically, we analyzed agency results for the same 15 statements in 2015 (the most recent data available at the time) to select a nongeneralizable sample of four agencies to obtain illustrative examples of how they approached performance management and their strategies to improve performance within their agencies. We calculated averages for the agencies based on their scores for our selected statements, and rank ordered them based on these averages. Among other attributes, these agencies had the highest levels of employee agreement with FEVS statements dealing with their performance management processes. We selected agencies that had the highest average scores for the performance management phases. In addition to the FEVS data, we also used secondary factors such as the number of respondents, agency size, mission, and types of employees to identify the following agencies: (1) Bureau of Labor Statistics, Department of Labor; (2) Centers for Disease Control and Prevention, Department of Health and Human Services; (3) Drug Enforcement Administration, Department of Justice; and the (4) Office of the Comptroller of the Currency, Department of the Treasury. We developed a set of standard questions that asked about agency strategies to improve performance management and relevant successes, which we administered to human resources/human capital officials and other officials responsible for performance management at the agencies. We reviewed and analyzed the responses the agencies provided, and identified and reported examples of practices that all four described, which are intended to improve performance management. We also asked agencies about the types of guidance and resources they obtained from OPM. The four common practices we identified do not represent the only practices these agencies employ to improve performance management at their agency. In addition, the practices are not intended to be representative of all those employed by all other federal agencies. 
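The agency-selection step described above (averaging each agency's scores on the selected statements and rank-ordering the averages) amounts to a simple sort. A minimal sketch follows, with placeholder agency names and scores rather than the actual 2015 FEVS results:

```python
def rank_agencies(scores):
    """Rank agencies by the mean of their percent-agreement scores on the
    selected FEVS statements, highest average first.
    `scores` maps an agency name to a list of per-statement percentages."""
    averages = {agency: sum(vals) / len(vals) for agency, vals in scores.items()}
    return sorted(averages.items(), key=lambda item: item[1], reverse=True)

# Placeholder data for illustration only.
example = {
    "Agency A": [72, 68, 75],
    "Agency B": [60, 58, 62],
    "Agency C": [70, 71, 69],
}
ranking = rank_agencies(example)  # Agency A first, then Agency C, then Agency B
```

In the methodology described above, secondary factors such as agency size, mission, and types of employees then narrowed the top-ranked agencies to the final four case studies, so the sort alone did not determine the selection.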
To evaluate the guidance and resources OPM provides to agencies to improve performance management government-wide, we reviewed both OPM’s performance management website and the Chief Human Capital Officers (CHCO) Council’s website to identify available guidance, resources, and tools. We compared these documents to OMB’s memorandum on federal agency public websites, OPM’s strategic plan for fiscal years 2018 through 2022, and federal internal control standards. Because we did not have access to the Performance Management Portal, hosted on OMB’s MAX website, we observed it in July 2018 with an OPM official. We also reviewed agency documentation and other OPM-referenced websites that contained performance management-related information. We used OPM’s internal site search engines and search terms, such as “performance management” and “performance management innovation,” to identify relevant guidance. During the course of our review, we compared performance management guidance posted on the OPM and CHCO websites as well as the portal, and identified discrepancies among the respective sites. We discussed the discrepancies with OPM officials and included their responses within the report. To supplement the documentary evidence obtained, we also interviewed officials from OPM, the CHCO Council, and the selected case study agencies to describe the extent to which OPM assists agencies on performance management. We conducted this performance audit from December 2016 to November 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the contact named above, Thomas Gilbert, Assistant Director; Dewi Djunaidy, Analyst-in-Charge; Jehan Chase; Martin DeAlteriis; Krista Loose; and Susan Sato made major contributions to this report. Also contributing to this report were Carl Barden; Won Lee; Robert Robinson; and Stewart Small. Federal Employee Misconduct: Actions Needed to Ensure Agencies Have Tools to Effectively Address Misconduct. GAO-18-48. Washington, D.C.: July 16, 2018. Federal Workforce: Distribution of Performance Ratings Across the Federal Government, 2013. GAO-16-520R. Washington, D.C.: May 9, 2016. Federal Workforce: Additional Analysis and Sharing of Promising Practices Could Improve Employee Engagement and Performance. GAO-15-585. Washington, D.C.: July 14, 2015. Federal Workforce: Improved Supervision and Better Use of Probationary Periods Are Needed to Address Substandard Employee Performance. GAO-15-191. Washington, D.C.: February 6, 2015. Results-Oriented Management: OPM Needs to Do More to Ensure Meaningful Distinctions Are Made in SES Ratings and Performance Awards. GAO-15-189. Washington, D.C.: January 22, 2015. Federal Workforce: OPM and Agencies Need to Strengthen Efforts to Identify and Close Mission-Critical Skills Gaps. GAO-15-223. Washington, D.C.: January 30, 2015. Federal Workforce: Human Capital Management Challenges and the Path to Reform. GAO-14-723T. Washington, D.C.: July 15, 2014. Office of Personnel Management: Agency Needs to Improve Outcome Measures to Demonstrate the Value of Its Innovation Lab. GAO-14-306. Washington, D.C.: March 31, 2014. Federal Employees: Opportunities Exist to Strengthen Performance Management Pilot. GAO-13-755. Washington, D.C.: September 12, 2013. Results-Oriented Cultures: Creating a Clear Linkage between Individual Performance and Organizational Success. GAO-03-488. Washington, D.C.: March 14, 2003.
Managing employee performance has been a long-standing government-wide issue and the subject of numerous reforms since the beginning of the modern civil service. Without effective performance management, agencies risk not only losing the skills of top talent but also missing the opportunity to effectively address increasingly complex and evolving mission challenges. GAO was asked to examine federal non-Senior Executive Service performance management systems. This report examines (1) government-wide trends in employee perceptions of performance management as measured by the results of selected FEVS statements, (2) practices that selected agencies use to improve performance management, and (3) OPM's guidance and resources to support agency efforts to improve performance management government-wide. GAO analyzed responses to selected FEVS statements related to the five performance management phases from 2010 through 2017; selected four agencies based on the highest average scores for the five phases, among other criteria, to identify practices which may contribute to improved performance management; reviewed OPM documents; and interviewed OPM and other agency officials. GAO found that from 2010 through 2017, surveyed employees generally responded positively to selected Federal Employee Viewpoint Survey (FEVS) statements related to four of the Office of Personnel Management's (OPM) five performance management phases: planning and setting expectations, monitoring performance, developing the capacity to perform, and rating performance. Employees responded least positively to statements related to rewarding performance, with only 39 percent of employees, on average, agreeing with statements regarding this phase. 
Of the four agencies with among the highest average scores for the performance management phases (Bureau of Labor Statistics, Centers for Disease Control and Prevention, Drug Enforcement Administration, and Office of the Comptroller of the Currency), GAO identified practices that may contribute to improved performance management, including a strong organizational culture and dedication to mission; use of FEVS and other survey data; and a focus on training. OPM provides guidance and opportunities for agencies to share promising practices on performance management; however, some of this information is not easily accessible on its performance management website. In addition, OPM does not leverage its leadership position to formally identify and share emerging performance management research and innovation with agencies. As a result, agencies, and therefore their employees, may not benefit from the best information available. GAO is making three recommendations, including that OPM improve its website and share innovations in performance management with agencies. OPM agreed with GAO's recommendations.
In general, the process for managing inventories of medications at VAMCs and non-VA pharmacies in hospital settings is similar. The steps of the process are (1) procuring medications from vendors or other suppliers, (2) receiving and storing medications, (3) tracking medications to account for all items and prevent diversion, (4) dispensing medications to patients, and (5) disposing of expired or wasted medications. Hospital settings include both inpatient and outpatient pharmacies. Procurement. Pharmacies use a procurement process to order medications for pharmacy inventory, which includes activities such as medication selection, cost analysis, purchasing procedures, and record keeping. As part of medication selection, pharmacies may use a formulary, which is a list of medications that have been approved for prescription within a hospital or health care system. A prime vendor or wholesaler is one of the most commonly used sources to obtain medications for the pharmacy. Prime vendors order large quantities of medications from manufacturers, allowing pharmacies to purchase various products from many drug manufacturers at once. Orders for products that are not carried by the prime vendor may need to be placed through another source, such as directly from the manufacturer. Receipt and storage. When medications are delivered to the pharmacy, staff are to take several steps to properly receive and store the shipment. For example, to ensure there is segregation of duties, the person responsible for ordering and purchasing the medications is supposed to be different from the person receiving and stocking pharmacy inventory. Additionally, any delivered products that require special storage conditions, such as freezing or refrigeration, are to be checked in first to maintain the stability of the medication. Tracking. 
Once in storage, pharmacies use a variety of tools to account for the filling, dispensing, and removal of medications in both inpatient and outpatient settings. Some pharmacies have software that allows them to track inventory in real time, an ability known as maintaining perpetual inventory. A perpetual inventory system is a method of recording the quantity of a particular medication continuously as prescriptions are filled and dispensed. After each prescription is filled and dispensed to the patient, the amount of medication used for the prescription is removed from the inventory to ensure the quantity on hand recorded by the software is always current. Many medications have barcodes on their packaging to allow for easy identification of the medication in a computer system. The barcode generally includes the product’s National Drug Code, which indicates the name and package size of the medication. In the hospital setting, medications can be scanned out of the pharmacy and into machines for storage on hospital wards. Dispensing. In both inpatient wards and outpatient pharmacies, automated dispensing machines and barcode technology can assist staff in maintaining and dispensing medications to patients. Automated dispensing machines generally include several drawers and cabinets that have pockets or trays that hold preset levels of a variety of common medications. They may also be used to hold controlled substances, generally in locked boxes or cubes within the machine. On hospital wards medication in automated dispensing machines is often packaged in unit doses—individually packaged medications for patient use. Barcodes can help verify a prescription before nurses give medication to a patient. Hospitals that do not have automatic dispensing machines use carts with drawers filled with each patient’s medication. Outpatient pharmacies use automated dispensing machines to assist with filling prescriptions. 
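The perpetual inventory method described above can be sketched as a small ledger that is updated the moment stock is received or a prescription is dispensed, so the recorded balance is always current. This is an illustrative model only, not VA's or any vendor's actual software, and the National Drug Code (NDC) used in the example is hypothetical.

```python
class PerpetualInventory:
    """Minimal sketch of a perpetual inventory ledger: on-hand quantity
    for each medication, keyed by its National Drug Code (NDC), is
    adjusted at receipt and at each dispense."""

    def __init__(self):
        self._on_hand = {}  # NDC -> units currently on hand

    def receive(self, ndc: str, units: int) -> None:
        """Record a delivery, adding units to the on-hand balance."""
        self._on_hand[ndc] = self._on_hand.get(ndc, 0) + units

    def dispense(self, ndc: str, units: int) -> None:
        """Remove units as a prescription is filled and dispensed."""
        if self._on_hand.get(ndc, 0) < units:
            raise ValueError(f"insufficient stock for NDC {ndc}")
        self._on_hand[ndc] -= units

    def on_hand(self, ndc: str) -> int:
        """Current recorded balance; zero if the NDC is not stocked."""
        return self._on_hand.get(ndc, 0)
```

In practice the decrement would be triggered by a barcode scan of the NDC at dispensing, which is what keeps the recorded quantity synchronized with physical stock.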
Depending on the type of automated dispensing machine, the capabilities can include label printing, pill counting, pouring pills into prescription bottles, and applying the label to the prescription bottle. Return or disposal. Medication waste and expired medications are to be pulled from pharmacy inventory and either returned to a reverse distributor or manufacturer for credit or, if not eligible for return, disposed of by the pharmacy or sent to an outside company for destruction. Reverse distributors charge a fee, which is generally a percentage of the refund that is automatically deducted from the final refund amount. Figure 1 provides an overview of the steps of the pharmacy inventory management process. VA’s health care system is organized into entities at the headquarters, regional, and local levels. At the headquarters level, the Pharmacy Benefits Management Services office (PBM) is responsible for supporting Veterans Integrated Service Networks (VISN) and VAMCs with a broad range of pharmacy services, such as promoting appropriate drug therapy, ensuring medication safety, providing clinical guidance to pharmacists and other clinicians, and maintaining VA’s formulary of medications and supplies VAMCs use to deliver pharmacy benefits. VA’s Office of Information and Technology (OIT) is responsible for providing technology services across the department, including the development and management of all IT assets and resources. As such, the office supports VA’s health care system in planning for and acquiring IT capabilities within VA’s health care system network of hospitals, outpatient facilities, and pharmacies. VA’s National Acquisition Center (NAC) is responsible for administering various health care-related acquisition and logistics programs across VA. At the regional level, VAMCs are located in one of 18 VISNs. Each VISN is responsible for overseeing VAMC pharmacies within a defined geographic region. At the local level, there are approximately 170 VAMCs. Each VAMC is responsible for implementing VA’s pharmacy policies and programming. 
VA policy establishes parameters for VAMCs to follow when managing their pharmacy inventories. These policies address various aspects of pharmacy services, including inpatient and outpatient pharmacy services, general pharmacy requirements, supply chain management, controlled substances management, and the formulary management process. For example, the Supply Chain Inventory Management directive states that all VAMC pharmacies should use the prime vendor inventory management software to calculate the amount of each inventory item they need to reorder. However, the directive also states that there are additional pharmacy inventory tools available to VAMC pharmacies and that each pharmacy has the option to use its own automated inventory management systems to generate orders for its prime vendor. VA policy does not specify minimum quantities to order; instead, VAMC procurement staff is authorized to use their expertise to determine the appropriate quantity to order. In general, all five of the selected VAMCs we reviewed take similar approaches for the various steps included in the pharmacy inventory management process—that is, procuring medications from vendors or other suppliers, receiving and storing these medications, tracking medications at the pharmacy to account for all items and prevent diversion, dispensing medications to patients, and disposing of expired medications. (See fig. 2). We found that while the five selected VAMCs have similar approaches for receiving and storing, dispensing, and disposing of medications, some VAMCs have also taken unique approaches in implementing two steps of the pharmacy inventory management process: procurement and tracking. VA policy outlines parameters for VAMCs to manage their pharmacy inventories, and VA officials told us that VAMC pharmacy staff can use discretion to implement their own approaches for managing their pharmacy inventories. 
All five of the selected VAMC pharmacies we reviewed use several sources of information to inform future orders—including past purchase order history reports from VA’s prime vendor, manual inventory counts by pharmacy staff, and automated dispensing machine inventory information. VA officials told us that all VAMCs also track procurement spending and its impact on VAMCs’ budgets. However, pharmacy officials at one of the selected VAMCs we visited told us they use VA’s health information system—Veterans Health Information Systems and Technology Architecture (VistA)—and additional prime vendor reports to identify specific information regarding (1) expiring medications that may need to be re-purchased, (2) medications that account for the top 80 percent of pharmacy costs, and (3) all medications that are purchased daily. VAMC officials told us these reports help them to better manage pharmacy inventory and track pharmacy spending. To better anticipate and address potential medication shortages, officials at another selected VAMC pharmacy told us they established a shortage committee that meets on a weekly basis. Established in September 2017, the committee includes the Director of Pharmacy and other pharmacy staff. Our review of meeting notes shows that the committee discusses which medications could experience or are experiencing shortages and how the VAMC could adjust to these shortages by, for example, developing clinical and logistical solutions to help maintain optimal patient care. According to the officials at the selected VAMC pharmacy, the committee has been an effective resource to help manage pharmacy inventory problems should they occur. Several VAMC officials also told us that the procurement technicians, who are responsible for ordering pharmacy inventory, are very important because they possess valuable institutional knowledge based on many years of experience and training. 
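The report of medications accounting for the top 80 percent of pharmacy costs, as described by officials at one VAMC, is essentially a cumulative-cost cutoff. The following sketch shows one plausible way to compute such a list; the spending figures and medication names are hypothetical, and the calculation is not drawn from VA's actual reports.

```python
def top_cost_share(costs: dict, threshold: float = 0.80) -> list:
    """Return the medications that together account for the first
    `threshold` share of total spending, highest-cost first."""
    total = sum(costs.values())
    selected, running = [], 0.0
    # Walk medications from most to least expensive, stopping once the
    # accumulated spending reaches the threshold share of the total.
    for name, cost in sorted(costs.items(), key=lambda kv: kv[1], reverse=True):
        if running >= threshold * total:
            break
        selected.append(name)
        running += cost
    return selected

# Hypothetical annual spending by medication.
spend = {"drug A": 60_000, "drug B": 25_000, "drug C": 10_000, "drug D": 5_000}
```

A report like this concentrates procurement attention on the small set of items that drive most of the budget.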
However, VAMC officials told us the salaries and potential career advancement opportunities for procurement technicians can be limited, and the officials expressed concern that these technicians could find better opportunities within the VAMC or with external employers. To help retain procurement technicians, two of the selected VAMC pharmacies we visited have created higher paying procurement technician positions (General Schedule level 8 positions, instead of GS-6 or GS-7). To better identify potential instances of diversion, two of the selected VAMC pharmacies use enhanced analytics software on the automated dispensing machines in their inpatient wards to track how frequently controlled substances and other frequently utilized medications are prescribed. For example, one of the pharmacies uses data from these reports to identify how often individual staff members are accessing automated dispensing machines. Additionally, officials at a third VAMC recently deployed automated dispensing machines that are equipped with an enhanced analytics program that can identify trends associated with diversion. The remaining two VAMCs we visited do not have enhanced analytics software that could help them identify instances of potential diversion. Across all five selected VAMCs, we observed several different IT systems used to help manage non-controlled inpatient inventory. One of the selected VAMC pharmacies uses a modular automated dispensing machine together with inventory management software that maintains a perpetual inventory for most non-controlled substances stored in its inpatient pharmacy. (See fig. 3). According to officials, this software has allowed the pharmacy to reduce waste and improve staff workflow, as staff do not have to spend time tracking down inventory. None of the other VAMC pharmacies we visited have the capability to track non-controlled substances in real time. 
Additionally, to more efficiently identify medication lot numbers during recalls, one VAMC pharmacy we visited was in the process of implementing a technology that allows pharmacy staff to scan a case of medication with the same national drug code, lot number, and expiration date and then print and attach a radio frequency identification tag to each medication bottle. The tag allows for quick electronic identification of the medication for disposal. Other selected VAMC pharmacies manually identify recalled medications from inventory based on the name of the medication and lot number. VA does not yet have a VA-wide pharmacy inventory management system in place that would allow it to monitor VAMC pharmacy inventory in real time and provide better oversight of how VAMC pharmacies manage their inventories. We found that VACO and the five VISNs we reviewed provide some oversight related to VAMC pharmacy inventory management. However, that oversight is limited, as no entity has been assigned responsibility for overseeing system-wide performance of VAMC pharmacies in managing their inventories. VA’s oversight of VAMC pharmacy inventory management is limited in part because VA currently lacks a comprehensive system that would allow the department and its VAMCs to monitor pharmacy inventory in real time. According to PBM officials, the lack of a VA-wide system makes it difficult to oversee VAMC pharmacy inventory management, and PBM has recognized the lack of such a system as a material weakness for several years. PBM officials said that implementation of a VA-wide pharmacy inventory management system would allow them to monitor each VAMC’s pharmacy inventory in real time, which would, in turn, allow them to better manage inventory and help alleviate shortages at the national level by facilitating transfers of inventory between VAMCs as needed. 
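Manually identifying recalled medications by name and lot number, as the other selected VAMC pharmacies described, amounts to matching on-hand stock records against a recall notice. A minimal sketch, with hypothetical NDC and lot values:

```python
def flag_recalled(stock: list, recalls: set) -> list:
    """Return stock records whose (NDC, lot number) pair appears in the
    recall notice; matched items would be pulled for disposal."""
    return [item for item in stock if (item["ndc"], item["lot"]) in recalls]

# Hypothetical on-hand records and a recall notice covering one lot.
stock = [
    {"ndc": "11111-222-33", "lot": "A100", "units": 40},
    {"ndc": "11111-222-33", "lot": "B200", "units": 15},
]
recalls = {("11111-222-33", "B200")}
```

Radio frequency identification tagging, as at the VAMC above, speeds up the same matching step because each bottle's NDC and lot number can be read electronically instead of by visual inspection.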
Additionally, officials said that such a system would lead to better planning and projections for purchasing decisions, allow PBM to track medication expiration dates and lot numbers more effectively, and improve VAMC staff response to medication recalls. Although VA has acknowledged the need for a VA-wide pharmacy inventory management system, such a system may not be available for the foreseeable future. PBM officials told us they have requested this system since the early 2000s. However, despite the documented technological challenges VA faces in overseeing its VAMC pharmacies, changing IT priorities, funding challenges, and the narrowing of the scope of a Pharmacy Re-engineering Project have prevented the system’s development. In 2017, we reported that VA’s pharmacy systems could not maintain a real-time inventory across the VAMCs, and we recommended that VA assess the priority for establishing an inventory management system capable of monitoring medication inventory levels and indicating when medications needed to be reordered. VA concurred with our recommendation. In June 2017, VA announced its intention to replace VistA—VA’s health information system—with an off-the-shelf electronic health record system. VA officials told us that the new system will have the capability to monitor pharmacy inventory in real time across VA. VA signed the contract for this new system in May 2018; however, full implementation is expected to take up to 10 years. In the interim, VA officials told us that while they will maintain current pharmacy systems, they do not plan to build any new systems—including a VA-wide pharmacy inventory management system—so they can efficiently manage resources in preparation for the transition to the new system. VACO and the five VISNs we spoke with provide some limited oversight related to VAMC pharmacy inventory management, but no entity has system-wide responsibility for overseeing the performance of VAMC pharmacies in managing their inventories. 
Instead, responsibility for overseeing pharmacy inventory management is largely delegated to each VAMC’s leadership. (See fig. 4 for a description of VACO headquarters, VISN, and VAMCs’ roles and responsibilities in managing pharmacy inventory.) In the absence of a VA-wide inventory management system, PBM officials told us that they have employed manual workaround mechanisms to oversee pharmacy management processes. Specifically, PBM requires VAMC pharmacies to conduct an annual inventory of all medications and a quarterly inventory of five selected high-value non-controlled medications at risk of diversion. PBM officials told us they remind VAMCs of the requirement to conduct these inventories, collect and aggregate the data from these inventories, and make summary reports from these data available as a resource to the VISN Pharmacist Executives (VPE) and VAMC Chiefs of Pharmacy. PBM officials acknowledged that these manual workarounds are inefficient, increase labor costs, and leave the agency with an inability to see on-hand inventory across the system in real time. Additionally, the manual workarounds may be implemented differently at each VAMC, resulting in varying degrees of data reliability and limited opportunities for high-level oversight and data consolidation. PBM officials said that they do not independently analyze these data to identify trends, and they acknowledged that both the quarterly and annual inventories have limited usefulness for overseeing inventory management system-wide. Additionally, officials at some of the selected VAMCs told us they found the quarterly and annual inventories to have limited usefulness for managing their pharmacy inventories. PBM officials told us they also hold regular meetings with VPEs and VAMCs, which provide the opportunity for discussion of pharmacy inventory management issues. However, our review of the minutes of the meetings between PBM and VPEs found that, over the past 3 years, pharmacy inventory management was rarely a topic of discussion. 
PBM officials noted that there is always an opportunity for open discussion at these meetings for VPEs to raise any issues, including issues related to pharmacy inventory management, but these discussions may or may not be captured in the meeting minutes. PBM officials said they also regularly discuss various topics with the VAMC Chiefs of Pharmacy and other staff, but none of these calls are directly related to pharmacy inventory management. Officials from VACO’s NAC and OIT told us that they provide some assistance related to pharmacy inventory management but do not take part in the day-to-day management at the VAMC level and also do not have any oversight responsibilities. For example, a NAC official said the office coordinates with PBM on medication shortage issues and establishes national contracts for medications. NAC also sends out a weekly shortages report to various pharmacy groups as a tool to help them with known or expected shortages. Additionally, NAC’s Pharmaceutical Prime Vendor team is responsible for administering the contract with the prime vendor through daily monitoring of issues and quarterly reviews with the prime vendor and PBM. OIT develops pharmacy-related applications for VistA based on requirements from PBM, and officials said that the majority of OIT’s support to VAMCs consists of assisting them with issues related to VistA. At the VISN level, VPEs we interviewed also said they conduct some pharmacy inventory management oversight activities for the VAMCs within their network. While in general VA policy does not outline any specific roles for VPEs related to oversight of pharmacy inventory management, all five VPEs told us that they review the results of their VAMCs’ annual inventories and discuss any issues that arise from this exercise with VAMCs as needed. 
VPEs told us that they also review the results of the quarterly inventory of five selected high-value, non-controlled substances and may follow up with the VAMCs if their actual inventory of the medications is inconsistent with expected levels. Additionally, some VPEs reported that they have undertaken additional oversight activities apart from reviewing results of the mandatory inventories. For example, one VPE told us he has developed a dashboard with 53 measures that, while focused on formulary management, also have inventory management implications. Additionally, this VPE said that a VISN-wide procurement work group meets on a monthly basis and serves as a venue for procurement technicians to share inventory management best practices. Such additional activities may be helpful, but since VPEs only have responsibility for VAMC pharmacies within their network, they may not be aware of pharmacy inventory management approaches being used at other VAMCs across VA. Although VA offices at the headquarters and regional levels provide some assistance and oversight of how VAMCs manage pharmacy inventory at the local level, VA has not designated a focal point with defined responsibilities for system-wide oversight; instead, it relies on local leadership to oversee pharmacy inventory management at the VAMCs. As a result, VA cannot assess the overall performance of VAMCs’ management of their pharmacy inventories. The lack of a focal point with defined oversight responsibilities is inconsistent with federal internal control standards for establishing structure and authority to achieve the entity’s objectives and internal controls related to monitoring. Specifically, internal controls state that management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity’s objectives. 
Also, internal controls state that management should establish and operate monitoring activities to monitor the internal control system and evaluate the results. VA’s actions are also inconsistent with the Office of Management and Budget’s guidance for enterprise risk management and internal control in managing an agency. Enterprise risk management is intended to yield an “enterprise-wide,” strategically aligned portfolio view of organizational challenges that provides better insight about how to most effectively prioritize resource allocations to ensure successful mission delivery. Without a focal point for system-wide oversight of VAMC pharmacy inventory management, VA has limited awareness of the unique approaches that VAMCs use to manage their inventories and is missing an opportunity to evaluate these approaches. Additionally, VA cannot effectively share and standardize pharmacy inventory management best practices as appropriate. Having a focal point for system-wide oversight could allow VA to identify potential best practices that could be disseminated more widely across its facilities. Due to the decentralized nature of VA’s organization, VA policy gives VAMC pharmacies latitude in managing their pharmacy inventories. Several of the VAMCs we visited have taken unique approaches to procuring or tracking their inventory. However, because VA does not have a focal point to systematically oversee VAMCs’ pharmacy management efforts, VA is missing opportunities to evaluate the effectiveness of these efforts, as well as share best practices and standardize them across VA as appropriate. PBM officials told us that the lack of a VA-wide pharmacy inventory management system limits their ability to oversee VAMC pharmacy inventory management. However, our review shows that even without this system there are existing mechanisms that a focal point could leverage to more systematically oversee how VAMC pharmacies manage their inventories. 
For example, a focal point could ensure that PBM officials, the VPEs, and VAMC pharmacy staff devote time to discussing pharmacy inventory management approaches and related issues during regularly scheduled telephone meetings. Leveraging these existing mechanisms is especially important given that VAMCs have historically had challenges in managing their inventories, and also because a VA-wide pharmacy inventory management system may not be available for the foreseeable future. We are making the following recommendation to the Department of Veterans Affairs: The Secretary of the VA should direct the Under Secretary for Health to designate a focal point for overseeing VAMCs’ pharmacy inventory management system-wide and define the focal point’s responsibilities. (Recommendation 1) We provided a draft of this report to VA for review and comment. In its written comments, reproduced in appendix I, VA stated that it concurred in principle with our recommendation. VA also provided technical comments, which we incorporated as appropriate. In response to our recommendation, VA stated it plans to establish by December 31, 2018, a committee of internal stakeholders and subject matter experts to provide options for overseeing VAMCs’ pharmacy inventory management. However, it was unclear from VA’s response whether the planned committee will recommend or designate an entity or focal point with system-wide oversight responsibilities. VA noted in its general comments that it does have entities or individuals—referred to as focal points by VA—responsible for specific functions. However, these entities do not provide system-wide oversight that could allow the department to better understand VAMCs’ approaches to pharmacy inventory management. 
As we noted in our report, without a focal point for system-wide oversight, VA has limited awareness of the unique approaches that VAMCs use to manage their inventories and is missing an opportunity to evaluate these approaches and standardize them across VA as appropriate. Additionally, in its general comments, VA raised concerns regarding our characterization in the draft report of medication shortages and the use of automated dispensing units in the context of controlled substances. In response, we updated the report to include more information about one VAMC’s use of a committee to address medication shortages. We also clarified that three VAMCs are using (or will soon have the capability to use) enhanced analytics software to better leverage data generated through their automated dispensing machines, which allows them to more easily identify potential diversion. Finally, VA noted that we did not discuss PBM’s multiple requests for an enterprise-management system since the early 2000s; however, this information was included as part of the draft report sent to VA for review and remains in our final report on page 14 as part of our finding on the lack of a VA-wide pharmacy inventory management system. We are sending copies of this report to the Secretary of the Department of Veterans Affairs and appropriate congressional committees. The report is also available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact Sharon M. Silas at (202) 512-7114 or silass@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. 
In addition to the contact named above, Rashmi Agarwal, Assistant Director; Nick Bartine, Analyst-in-Charge; Muriel Brown; Kaitlin Farquharson; Krister Friday; Sandra George; Courtney Liesener; Diona Martyn; and Michelle Paluga made key contributions to this report.
VA provides health care services, including pharmacy services, to approximately 9 million veterans each year. Since 2000, VAMCs have faced recurring challenges in managing their pharmacy inventories, including difficulties with accurately accounting for and updating inventory totals through their pharmacy systems. GAO was asked to review VA pharmacy inventory management. This report (1) describes approaches selected VAMCs use to manage their pharmacy inventories and (2) assesses the extent to which VA oversees VAMCs' efforts to manage their pharmacy inventories. To conduct this work, GAO visited a non-generalizable selection of five VAMCs chosen for the complexity of services offered and variation in location. GAO also reviewed VA national policies and local policies for the selected VAMCs and interviewed VA officials at the headquarters, regional, and local levels. GAO assessed VA's oversight of pharmacy management in the context of federal internal control standards. Selected Department of Veterans Affairs' (VA) medical centers (VAMC) use generally similar approaches for managing their pharmacy inventories. For example, all VAMCs store certain medications in secured areas. However, GAO found that VAMCs have also taken unique approaches for procuring and tracking medications, as allowed under VA policy. For example, to better address medication shortages, one VAMC pharmacy GAO visited established a shortage committee that meets on a weekly basis. Another VAMC pharmacy uses an automated dispensing machine together with compatible software that allows the pharmacy to track the location of most inpatient medications in real-time (see figure). GAO also found that VA's oversight of VAMCs' pharmacy inventory management is limited as VA lacks a comprehensive inventory management system or a focal point for system-wide oversight. 
In May 2018, VA signed a contract for a new electronic health records system that should allow VA to monitor VAMCs' inventories; however, VA officials expect implementation of this system to take up to 10 years. Based on a review of VA policies and interviews with VA officials, GAO found that VA has not designated a focal point with defined responsibilities for system-wide oversight of VAMCs' pharmacy inventory management. This is inconsistent with federal internal control standards for monitoring and establishing structure and authority to achieve an entity's objectives. Without a focal point for system-wide oversight, VA has limited awareness of the unique approaches that VAMCs use to manage their inventories and is missing an opportunity to evaluate these approaches. Additionally, VA cannot effectively share and standardize inventory management best practices as appropriate. Having a focal point is especially important given that VAMCs have historically had challenges in managing their inventories and a comprehensive pharmacy inventory management system may not be available for the foreseeable future. GAO recommends that VA designate a focal point for overseeing VAMCs' pharmacy inventory management efforts system-wide and define the focal point's responsibilities. VA concurred in principle with the recommendation.
While no commonly accepted definition of a community bank exists, they are generally smaller banks that provide banking services to the local community and have management and board members who reside in the local community. In some of our past reports, we often defined community banks as those with under $10 billion in total assets. However, most banks have assets well below $10 billion: data from the financial condition reports that institutions submit to regulators (Call Reports) indicated that, of the more than 6,100 banks in the United States, about 90 percent had assets below about $1.2 billion as of March 2016. Based on our prior interviews and reviews of documents, regulators and others have observed that small banks tend to differ from larger banks in their relationships with customers. Large banks are more likely to engage in transactional banking, which focuses on the provision of highly standardized products that require little human input to manage and are underwritten using statistical information. Small banks are more likely to engage in what is known as relationship banking, in which banks consider not only data models but also information acquired by working with the banking customer over time. Using this banking model, small banks may be able to extend credit to customers, such as small business owners, who might not receive a loan from a larger bank. Small business lending appears to be an important activity for community banks. As of June 2017, community banks had almost $300 billion outstanding in loans with an original principal balance of under $1 million (which banking regulators define as small business lending), or about 20 percent of these institutions’ total lending. In that same month, non-community banks had about $390 billion outstanding in business loans under $1 million, representing 5 percent of their total lending. Credit unions are nonprofit member-owned institutions that take deposits and make loans. 
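As a back-of-the-envelope check (not a figure from the report), the small business lending shares cited above imply the rough total lending bases of each group; a minimal sketch using the approximate dollar amounts and percentages in the text:

```python
# Back-of-the-envelope check of the June 2017 small business lending shares
# cited above. The dollar amounts and percentages are the approximate ones
# in the text; the implied totals are derived for illustration only.
small_biz_community = 300e9      # ~$300 billion, ~20% of community banks' lending
small_biz_noncommunity = 390e9   # ~$390 billion, ~5% of non-community banks' lending

implied_total_community = small_biz_community / 0.20
implied_total_noncommunity = small_biz_noncommunity / 0.05

print(f"Implied community bank total lending:     ${implied_total_community / 1e12:.1f} trillion")
print(f"Implied non-community bank total lending: ${implied_total_noncommunity / 1e12:.1f} trillion")
```

The implied totals (roughly $1.5 trillion versus $7.8 trillion) illustrate why small business loans are a proportionally larger share of community banks' activity even though non-community banks hold more such loans in absolute terms.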
Unlike banks, credit unions are subject to limits on their membership because members must have a “common bond”—for example, working for the same employer or living in the same community. Financial reports submitted to NCUA (the regulator that oversees federally insured credit unions) indicated that of the more than 6,000 credit unions in the United States, 90 percent had assets below about $393 million as of March 2016. In addition to providing consumer products to their members, credit unions are also allowed to make loans for business activities subject to certain restrictions. These member business loans are defined as a loan, line of credit, or letter of credit that a credit union extends to a borrower for a commercial, industrial, agricultural, or professional purpose, subject to certain exclusions. In accordance with rules effective January 2017, the total amount of business lending credit unions can do generally is not to exceed 1.75 times the actual net worth of the credit union. Federal banking and credit union regulators have responsibility for ensuring the safety and soundness of the institutions they oversee, protecting federal deposit insurance funds, promoting stability in financial markets, and enforcing compliance with applicable consumer protection laws. All depository institutions that have federal deposit insurance have a federal prudential regulator. The regulator responsible for overseeing a community bank or credit union varies depending on how the institution is chartered, whether it is federally insured, and whether it is a Federal Reserve member (see table 1). Other federal agencies also impose regulatory requirements on banks and credit unions. These include rules issued by CFPB, which has supervision and enforcement authority for various federal consumer protection laws for depository institutions with more than $10 billion in assets and their affiliates. 
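The 1.75-times-net-worth cap on aggregate member business lending can be sketched as a simple calculation. This is a simplified illustration of the general limit as described above, with a hypothetical net worth figure; the actual rule contains exclusions and exceptions not modeled here:

```python
def member_business_loan_cap(net_worth: float) -> float:
    """Simplified sketch of the aggregate member business loan (MBL) limit
    effective January 2017: generally 1.75 times a credit union's actual
    net worth. (The real rule has exclusions and exceptions not modeled.)"""
    return 1.75 * net_worth

# Hypothetical credit union with $40 million in net worth:
cap = member_business_loan_cap(40_000_000)
print(f"Aggregate MBL limit: ${cap:,.0f}")  # $70,000,000
```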
The Federal Reserve, OCC, FDIC, and NCUA continue to supervise for consumer protection compliance at institutions that have $10 billion or less in assets. Although community banks and credit unions with less than $10 billion in assets typically would not be subject to CFPB examinations, they generally are required to comply with CFPB rules related to consumer protection. FinCEN also issues requirements that financial institutions, including banks and credit unions, must follow. FinCEN is a component of Treasury’s Office of Terrorism and Financial Intelligence that supports government agencies by collecting, analyzing, and disseminating financial intelligence information to combat money laundering. It is responsible for administering the Bank Secrecy Act, which, with its implementing regulations, generally requires banks, credit unions, and other financial institutions to collect and retain various records of customer transactions, verify customers’ identities in certain situations, maintain AML programs, and report suspicious and large cash transactions. FinCEN relies on financial regulators and others to examine U.S. financial institutions to determine compliance with these requirements. Financial institutions also have to comply with requirements of Treasury’s Office of Foreign Assets Control to review transactions to ensure that business is not being done with sanctioned countries or individuals. In response to the 2007-2009 financial crisis, Congress passed the Dodd-Frank Act, which became law on July 21, 2010. The act includes numerous reforms to strengthen oversight of financial services firms, including consolidating consumer protection responsibilities within CFPB. Under the Dodd-Frank Act, federal financial regulatory agencies were directed to or granted authority to issue hundreds of regulations to implement the act’s reforms. 
Many of the provisions in the Dodd-Frank Act target the largest and most complex financial institutions, and regulators have noted that much of the act is not meant to apply to community banks. Although the Dodd-Frank Act exempts small institutions, such as community banks and credit unions, from several of its provisions, and authorizes federal regulators to provide small institutions with relief from certain regulations, it also contains provisions that impose additional restrictions and compliance costs on these institutions. As we reported in 2012, federal regulators, state regulatory associations, and industry associations collectively identified provisions within 7 of the act’s 16 titles that they expected to affect community banks and credit unions. The provisions they identified as likely to affect these institutions included some of the act’s mortgage reforms, such as those requiring institutions to ensure that a consumer obtaining a residential mortgage loan has the reasonable ability to repay the loan at the time the loan is consummated; comply with a new CFPB rule that combines two different mortgage loan disclosures that had been required by the Truth-in-Lending Act and the Real Estate Settlement Procedures Act of 1974; and ensure that property appraisers are sufficiently independent. In addition to the regulations that have arisen from provisions in the Dodd-Frank Act, we reported that other regulations have created potential burdens for community banks. For example, the depository institution regulators also issued changes to the capital requirements applicable to these institutions. Many of these changes were consistent with the Basel III framework, which is a comprehensive set of reforms to strengthen global capital and liquidity standards issued by an international body consisting of representatives of many nations’ central banks and regulators. 
These new requirements significantly changed the risk-based capital standards for banks and bank holding companies. As we reported in November 2014, officials interviewed from community banks did not anticipate any difficulties in meeting the U.S. Basel III capital requirements but expected to incur additional compliance costs. In addition to regulatory changes that could increase burden or costs on community banks, some of the Dodd-Frank Act provisions have likely resulted in reduced costs for these institutions. For example, revisions to the way that deposit insurance premiums are calculated reduced the amount paid by banks with less than $10 billion in assets by $342 million, or 33 percent, from the first to second quarter of 2011, after the change became effective. Another change reduced the audit-related costs that some banks were incurring in complying with provisions of the Sarbanes-Oxley Act. A literature search indicated that prior studies by other entities (including regulators, trade associations, and others) that examined how to measure regulatory burden generally focused on direct costs resulting from compliance with regulations, and our analysis of these studies identified various limitations that restrict their usefulness in assessing regulatory burden. For example, researchers commissioned by the Credit Union National Association, which advocates for credit unions, found costs attributable to regulations totaled a median of 0.54 percent of assets in 2014 for a non-random sample of the 53 small, medium, and large credit unions responding to a nationwide survey. However, one of the study’s limitations was its use of a small, non-random sample of credit unions. In addition, the research was not designed to conclusively link changes in regulatory costs for the sampled credit unions to any one regulation or set of regulations. 
CFPB also conducted a study of regulatory costs associated with specific regulations applicable to checking accounts, traditional savings accounts, debit cards, and overdraft programs. Through case studies involving 200 interviews with staff at seven commercial banks with assets over $1 billion, the agency’s staff determined that the banks’ costs related to ongoing regulatory compliance were concentrated in operations, information technology, human resources, and compliance and retail functions, with operations and information technology contributing the highest costs. While providing detailed information about the case study institutions, reliance on a small sample of mostly large commercial banks limits the conclusions that can be drawn about banks’ regulatory costs generally. In addition, the study notes several challenges to quantifying compliance costs that made its cost estimates subject to some measurement error, and the study’s design limits the extent to which a causal relationship between financial regulations and costs could be fully established. Researchers from the Mercatus Center at George Mason University used a nongeneralizable survey of banks and found that respondents believed they were spending more money and staff time on compliance than before due to Dodd-Frank regulations. The center’s researchers used a non-random sample, collecting 200 responses to a survey sent to 500 banks with assets of less than $10 billion, about the burden of complying with regulations arising from the Dodd-Frank Act. The survey sought information on the respondents’ characteristics, products, and services and the effects various regulatory and compliance activities had on operations and decisions, including those related to bank profitability, staffing, and products. 
About 83 percent of the respondents reported increased compliance costs of greater than or equal to 5 percent due to regulatory requirements stemming from the Dodd-Frank Act. The study’s limitations include its use of a non-random sample, a small response rate, and questions that asked about the Dodd-Frank Act in general. In addition, the self-reported survey items used to capture regulatory burden—compliance costs and profitability—have an increased risk of measurement error, and the causal relationship between Dodd-Frank Act requirements and changes in these indicators is not well established. Community bank and credit union representatives that we interviewed identified three sets of regulations as most burdensome to their institutions: (1) data reporting requirements related to loan applicants and loan terms under the Home Mortgage Disclosure Act of 1975 (HMDA); (2) transaction reporting and customer due diligence requirements as part of the Bank Secrecy Act and related anti-money laundering laws and regulations (collectively, BSA/AML); and (3) disclosures of mortgage loan fees and terms to consumers under the TILA-RESPA Integrated Disclosure (TRID) regulations. In focus groups and interviews, many of the institution representatives said these regulations were time-consuming and costly to comply with, in part because the requirements were complex, required preparation of individual reports that had to be reviewed for accuracy, or mandated actions within specific timeframes. However, federal regulators and consumer advocacy groups said that benefits from these regulations were significant. Representatives of community banks and credit unions in all our focus groups and in most of our interviews told us that HMDA’s data collection and reporting requirements were burdensome. 
Under HMDA and its implementing Regulation C, banks and credit unions with more than $45 million in assets that do not meet regulatory exemptions must collect, record, and report to the appropriate federal regulator, data about applicable mortgage lending activity. For every covered mortgage application, origination, or purchase of a covered loan, lenders must collect information such as the loan’s principal amount, the property location, the income relied on in making the credit decision, and the applicants’ race, ethnicity, and sex. Institutions record this on a form called the loan/application register, compile these data each calendar year, and submit them to CFPB. Institutions have also been required to make these data available to the public upon request, after modifying them to protect the privacy of applicants and borrowers. Representatives of many community banks and credit unions with whom we spoke said that complying with HMDA regulations was time consuming. For example, representatives from one community bank we interviewed said it completed about 1,100 transactions that required HMDA reporting in 2016, and that its staff spent about 16 hours per week complying with Regulation C. In one focus group, participants discussed how HMDA compliance was time consuming because the regulations were complex, which made determining whether a loan was covered and should be reported difficult. As a part of that discussion, one bank representative told us that it was not always clear whether a residence that was used as collateral for a commercial loan was a reportable mortgage under HMDA. In addition, representatives in all of our focus groups in which HMDA was discussed and in some interviews said that they had to provide additional staff training for HMDA compliance. Among the 28 community banks and credit unions whose representatives commented on HMDA in our focus groups, 61 percent noted having to conduct additional HMDA-related training. 
In most of our focus groups and three of our interviews, representatives of community banks and credit unions also expressed concerns about how federal bank examiners review HMDA data for errors. When regulatory examiners conducting compliance examinations determine that an institution’s HMDA data have errors above prescribed thresholds, the institution has to correct and resubmit its data, further adding to the time required for compliance. While regulators have revised their procedures for assessing errors as discussed later, prior to 2018, if 10 percent or more of the loan/application registers that examiners reviewed had errors, an institution was required to review all of its data, correct any errors, and resubmit them. If 5 percent or more of the reviewed loan/application registers had errors in a single data field, an institution had to review all other registers and correct the data in that field. Participants in one focus group discussed how HMDA’s requirements left them little room for error, and they were concerned that examiners weigh all HMDA fields equally when assessing errors. For example, representatives of one institution noted that for purposes of fair lending enforcement, errors in fields such as race and ethnicity can be more important than errors in the action taken date (the field for the date when a loan was originated or when an application not resulting in an origination was received). Representatives of one institution also noted that they no longer have access to data submission software that allowed them to verify the accuracy of some HMDA data, and this has led to more errors in their submissions. Representatives of another institution told us that they had to have staff conduct multiple checks of HMDA data to ensure the data met accuracy standards, which added to the time needed for compliance. 
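The pre-2018 resubmission thresholds described above amount to two simple percentage tests on the sample of loan/application registers an examiner reviews. A minimal sketch, assuming only the 10 percent file-level and 5 percent field-level thresholds stated in the report (the actual examiner procedures contain more detail):

```python
def full_resubmission_required(sampled: int, files_with_errors: int) -> bool:
    """Pre-2018 file-level test: if 10 percent or more of sampled
    loan/application registers had errors, the institution had to review,
    correct, and resubmit all of its HMDA data."""
    return files_with_errors / sampled >= 0.10

def field_correction_required(sampled: int, field_errors: int) -> bool:
    """Pre-2018 field-level test: if 5 percent or more of sampled registers
    had errors in a single data field, the institution had to review all
    other registers and correct that field."""
    return field_errors / sampled >= 0.05

# Hypothetical examination sample of 79 registers:
print(full_resubmission_required(79, 8))    # 8/79 ~ 10.1% -> True
print(field_correction_required(79, 3))     # 3/79 ~ 3.8%  -> False
```

The example illustrates the "little room for error" concern: in a sample of 79 registers, as few as 8 files with any error would trigger a full resubmission.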
Representatives of many community banks and credit unions with whom we spoke also expressed concerns that compliance requirements for HMDA were increasing. The Dodd-Frank Act included provisions to expand the information institutions must collect and submit under HMDA, and CFPB issued rules implementing these new requirements that mostly became effective January 2018. In addition to certain new data requirements specified in the act, such as age and the total points and fees payable at origination, CFPB’s amendments to the HMDA reporting requirements also added data points, including some intended to collect more information about borrowers, such as credit scores, as well as more information about the features of loans, such as fees and terms. In the final rule implementing the new requirements, CFPB also expanded the types of loans on which some institutions must report HMDA data to include open-ended lines of credit and reverse mortgages. Participants in two of our focus groups with credit unions said reporting this expanded information will require more staff time and training and cause them to purchase new or upgraded computer software. In most of our focus groups, participants said that changes should be made to reduce the burdens associated with reporting HMDA data. For example, in some focus groups, participants suggested raising the threshold for institutions that have to file HMDA reports above the then-current $44 million in assets, which would reduce the number of small banks and credit unions that are required to comply. Representatives of two institutions noted that because small institutions make very few loans compared to large ones, their contribution to the overall HMDA data was of limited value in contrast to the significant costs to the institutions to collect and report the data. Another participant said their institution sometimes makes as few as three loans per month. 
In most of our focus groups, participants also suggested that regulators could collect mortgage data in other ways. For example, one participant discussed how it would be less burdensome for lenders if federal examiners collected data on loan characteristics during compliance examinations. However, staff of federal regulators and consumer groups said that HMDA data are essential for enforcement of fair lending laws and regulations. Representatives of CFPB, FDIC, NCUA, and OCC and groups that advocate for consumer protection issues said that HMDA data has helped address discriminatory practices. For example, some representatives noted a decrease in “redlining” (refusing to make loans to certain neighborhoods or communities). CFPB staff noted that HMDA data provides transparency about lending markets, and that HMDA data from community banks and credit unions is critical for this purpose, especially in some rural parts of the country where they make the majority of mortgage loans. While any individual institution’s HMDA reporting might not make up a large portion of HMDA data for an area, CFPB staff told us that if all smaller institutions were exempted from HMDA requirements, regulators would have little or no data on the types of mortgages or on lending patterns in some areas. Agency officials also told us that few good alternatives to HMDA data exist and that the current collection regime is the most effective available option for collecting the data. NCUA officials noted that collecting mortgage data directly from credit unions during examinations to enforce fair lending rules likely would be more burdensome for the institutions. 
CFPB staff and consumer advocates we spoke with also said that HMDA provides a low-cost data source for researchers and local policymakers, which leads to other benefits that cannot be directly measured but are included in HMDA’s statutory goals—such as allowing local policymakers to target community investments to areas with housing needs. While representatives of some community banks and credit unions argued that HMDA data were no longer necessary because practices such as redlining have been reduced and they receive few requests for HMDA data from the public, representatives of some consumer advocate groups responded that eliminating the transparency that HMDA data creates could allow discriminatory practices to become more common. CFPB staff and representatives of one of these consumer groups also said that before the financial crisis of 2007–2009, some groups were not being denied credit outright but instead were given mortgages with terms, such as high interest rates, that made them more likely to default. The expanded HMDA data, which include information on mortgage terms, will allow regulators to detect such problematic lending practices. CFPB and FDIC staff also told us that while lenders will have to collect and report more information, the new fields will add context to lending practices and should reduce the likelihood of incorrectly flagging institutions for potential discrimination. For example, with current data, a lender may appear to be denying mortgage applications to a particular racial or ethnic group, but with expanded data that include applicant credit scores, regulators may determine that the denials were appropriate based on credit score underwriting. CFPB staff acknowledged that HMDA data collection and reporting may be time consuming, and said they have taken steps to reduce the associated burdens for community banks and credit unions. 
First, in its final rule implementing the Dodd-Frank Act’s expanded HMDA data requirements, CFPB added exclusions for banks and credit unions that make very few mortgage loans. Effective January 2018, an institution will be subject to HMDA requirements only if it has originated at least 25 closed-end mortgage loans or at least 100 covered open-end lines of credit in each of the 2 preceding calendar years and also has met other applicable requirements. In response to concerns about the burden associated with the new requirement for reporting open-end lines of credit, in 2017 CFPB temporarily increased the threshold for collecting and reporting data for open-end lines of credit from 100 to 500 for the 2018 and 2019 calendar years. CFPB estimated that roughly 25 percent of covered depository institutions will no longer be subject to HMDA as a result of these exclusions. Second, the Federal Financial Institutions Examination Council (FFIEC), which includes CFPB, announced the new FFIEC HMDA Examiner Transaction Testing Guidelines that specify when agency examiners should direct an institution to correct and resubmit its HMDA data due to errors found during supervisory examinations. CFPB said these revisions should greatly reduce the burden associated with resubmissions. Under the revised standards, institutions will no longer be directed to resubmit all their HMDA data if they exceeded the threshold for HMDA files with errors, but will still be directed to correct specific data fields that have errors exceeding the specified threshold. The revised guidelines also include new tolerances for some data fields, such as application date and loan amount. Third, CFPB also introduced a new online system for submitting HMDA data in November 2017. CFPB staff said that the new system, the HMDA Platform, will reduce errors by including features that allow institutions to validate the accuracy and correct the formatting of their data before submitting. 
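The loan-volume exclusions described above reduce to a two-year threshold test. A simplified sketch using only the thresholds stated in the report (25 closed-end loans or 100 covered open-end lines of credit in each of the 2 preceding years, with the open-end threshold temporarily raised to 500 for 2018 and 2019); the other applicability criteria mentioned in the rule are omitted:

```python
def hmda_covered(closed_end_by_year: dict, open_end_by_year: dict, year: int) -> bool:
    """Simplified sketch of the loan-volume test effective January 2018.
    Coverage requires meeting the closed-end OR the open-end threshold in
    each of the 2 preceding calendar years. Other criteria are omitted."""
    open_end_threshold = 500 if year in (2018, 2019) else 100
    prior_years = (year - 1, year - 2)
    meets_closed_end = all(closed_end_by_year[y] >= 25 for y in prior_years)
    meets_open_end = all(open_end_by_year[y] >= open_end_threshold for y in prior_years)
    return meets_closed_end or meets_open_end

# Hypothetical small lender: 20 closed-end loans and 120 open-end lines each year.
closed_end = {2016: 20, 2017: 20}
open_end = {2016: 120, 2017: 120}
print(hmda_covered(closed_end, open_end, 2018))  # False: under both 2018 thresholds
```

Note how the temporary 500-line open-end threshold matters for this hypothetical lender: at 120 open-end lines per year it would meet the permanent 100-line threshold but not the 2018-2019 threshold.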
They also noted that this platform will reduce burdens associated with the previous system for submitting HMDA data. For example, institutions no longer will have to regularly download software, and multiple users within an institution will be able to access the platform. NCUA officials added that some credit unions had tested the system and reported that it reduced their reporting burden. Finally, on December 21, 2017, CFPB issued a public statement announcing that, for HMDA data collected in 2018, CFPB does not intend to require resubmission of HMDA data unless errors are material, and does not intend to assess penalties for errors in submitted data. CFPB also announced that it intends to open a rulemaking to reconsider various aspects of the 2015 HMDA rule, such as the thresholds for compliance and data points that are not required by statute. In all our focus groups and many of our interviews, participants said they found BSA/AML requirements to be burdensome due to the staff time and other costs associated with their compliance efforts. To provide regulators and law enforcement with information that can aid in pursuing criminal, tax, and regulatory investigations, BSA/AML statutes and regulations require covered financial institutions to file Currency Transaction Reports (CTR) for cash transactions conducted by a customer for aggregate amounts of more than $10,000 per day and Suspicious Activity Reports (SAR) for activity that might signal criminal activity (such as money laundering or tax evasion); and to establish BSA/AML compliance programs that include efforts to identify and verify customers’ identities and monitor transactions to report, for example, transactions that appear to violate federal law. Participants in all of our focus groups discussed how BSA/AML compliance was time-consuming, and in most focus groups participants said this took time away from serving customers. 
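The CTR trigger described above is an aggregation rule: cash transactions by one customer are summed per day, and a report is required when the daily total exceeds $10,000. A minimal sketch of that aggregation logic, with hypothetical customer IDs and amounts (the actual rule covers transactions by or on behalf of a person, structuring detection, and exemptions not modeled here):

```python
from collections import defaultdict

CTR_THRESHOLD = 10_000  # dollars, aggregated per customer per business day

def ctrs_to_file(transactions):
    """Simplified sketch of the CTR trigger: cash transactions by one
    customer aggregating to more than $10,000 in a single day require a
    Currency Transaction Report. `transactions` is a list of
    (customer_id, date, cash_amount) tuples; IDs and amounts are hypothetical."""
    daily_totals = defaultdict(float)
    for customer, date, amount in transactions:
        daily_totals[(customer, date)] += amount
    return [key for key, total in daily_totals.items() if total > CTR_THRESHOLD]

txns = [
    ("cust-1", "2017-06-01", 6_000),
    ("cust-1", "2017-06-01", 5_000),   # same customer, same day: aggregates to $11,000
    ("cust-2", "2017-06-01", 9_500),   # under the threshold, no CTR
]
print(ctrs_to_file(txns))  # [('cust-1', '2017-06-01')]
```

The aggregation step is why the high school band deposit mentioned below triggers a CTR: the rule keys on the daily cash total, not on the character of the depositor.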
For example, representatives of one institution we interviewed told us that completing a single SAR could take 4 hours, and that they might complete 2 to 5 SARs per month. However, representatives of another institution said that at some times of the year it has filed more than 300 SARs per month. In a few cases, representatives of institutions saw BSA/AML compliance as burdensome because they had to take actions that seemed unnecessary based on the nature of the transactions. For example, one institution’s representatives said that filing a CTR because a high school band deposited more than $10,000 after a fundraising activity seemed unnecessary, while another institution’s representatives said they did not see the need to file SARs for charitable organizations that are well known in their community. Representatives of institutions in most of our focus groups also noted that BSA/AML regulations required additional staff training. Some of these representatives noted that the requirements are complex and the activities, such as identifying transactions potentially associated with terrorism, are outside of their frontline staff’s core competencies. Representatives in all focus groups and a majority of interviews said BSA imposes financial costs on community banks and credit unions that must be absorbed by those institutions or passed along to customers. In most of our focus groups, representatives said that they had to purchase or upgrade software systems to comply with BSA/AML requirements, which can be expensive. Some representatives also said they had to hire third parties to comply with BSA/AML regulations. Representatives of some institutions also noted that the compliance requirements do not produce any material benefits for their institutions. In most of our focus groups, participants were particularly concerned that the compliance burden associated with BSA/AML regulations was increasing. 
In 2016, FinCEN—the bureau in the Department of the Treasury that administers BSA/AML rules—issued a final rule that expanded due-diligence requirements for customer identification. The final rule was intended to strengthen customer identification programs by requiring institutions to obtain information about the identities of the beneficial owners of businesses opening accounts at their institutions. The institutions covered by the rule are expected to be in compliance by May 11, 2018. Some representatives of community banks and credit unions that we spoke with said that this new requirement will be burdensome. For example, one community bank’s representatives said the new due-diligence requirements will require more staff time and training and cause them to purchase new or upgraded computer systems. Representatives of some institutions also noted that accessing beneficial ownership information about companies can be difficult, and that entities that issue business licenses or tax identification numbers could perform this task more easily than financial institutions. In some of our focus groups, and in some comment letters that community banks and credit unions submitted to bank regulators and NCUA as part of the EGRPRA process and that we reviewed, representatives of these institutions said regulators should take steps to reduce the burdens associated with BSA/AML. Participants in two of our focus groups and representatives of two institutions we interviewed said that the $10,000 CTR threshold, which was established in 1972, should be increased, noting it had not been adjusted for inflation. One participant told us that if this threshold had been adjusted for inflation over time, their institution likely would be filing about half of the number of CTRs that it currently files. In several focus groups, participants also indicated that transactions that must be checked against the Office of Foreign Assets Control list also should be subject to a threshold amount. 
Representatives of one institution noted that they have to complete time-consuming compliance work for even very small transactions (such as those less than $1). Representatives of some institutions suggested that the BSA/AML requirements be streamlined to make it easier for community banks and credit unions to comply. For example, representatives of one institution that participated in the EGRPRA review suggested that institutions could provide regulators with data on all cash transactions in the format in which they keep these records rather than filing CTRs. Finally, participants in one focus group said that regulators should better communicate how the information that institutions submit contributes to law enforcement successes in preventing or prosecuting crimes.

Staff from FinCEN told us that the reports and due-diligence programs required by BSA/AML rules are critical to safeguarding the U.S. financial sector from illicit activity, including narcotics trafficking and terrorist financing. They said they rely on the CTRs and SARs that financial institutions file for the financial intelligence they disseminate to law enforcement agencies, and noted that they saw all BSA/AML requirements as essential because the activities are designed to complement each other. Officials also pointed out that those engaged in terrorism, human trafficking, or fraud rely heavily on cash, and that reporting of frequent deposits makes tracking criminals easier. They said that significant reductions in BSA/AML reporting requirements would hinder law enforcement, especially because depositing cash through ATMs has become very easy. FinCEN staff said they use a continuous evaluation process to look for ways to reduce the burden associated with BSA/AML requirements, and noted actions taken as a result.
They said that FinCEN has several means of soliciting feedback about potential burdens, including its Bank Secrecy Act Advisory Group, which consists of industry, regulatory, and law enforcement representatives who meet twice a year, as well as public reporting and comments received through FinCEN's regulatory process. FinCEN officials said that based on this advisory group's recommendations, the agency provided SAR filing relief by extending the filing interval for SARs on continuing activity from 90 days to 120 days. FinCEN also has recognized that financial institutions do not generally see the beneficial impacts of their BSA/AML efforts, and officials said they have begun several different feedback programs to address this issue. FinCEN staff said they have been discussing ways to improve the CTR filing process, but noted that, in response to comments obtained as part of a recent review of regulatory burden, the staff of law enforcement agencies do not support changing the $10,000 threshold for CTR reporting. FinCEN officials said that they have taken some steps to reduce the burden related to CTR reporting, such as by expanding the ability of institutions to seek CTR filing exemptions, especially for low-risk customers. FinCEN is also using its advisory group to examine aspects of the CTR reporting obligations and assess ways to reduce reporting burden, but officials said it is too early to know the outcomes of the effort. However, FinCEN officials said that while evaluation of certain reporting thresholds may be appropriate, any changes to them or to other CTR requirements to reduce burden on financial institutions must still meet the needs of regulators and law enforcement and prevent misuse of the financial system. FinCEN staff also said that some of the concerns raised about the upcoming beneficial ownership requirements may be based on misunderstandings of the rule.
FinCEN officials told us that under the final rule, financial institutions can rely on the beneficial ownership information provided to them by the entity seeking to open the account. Under the final rule, the party opening an account on behalf of the legal entity customer is responsible for providing beneficial ownership information, and the financial institution may rely on the representations of the customer unless it has information that calls into question the accuracy of those representations. The financial institution does not have to confirm ownership; rather, it has to verify the identity of the beneficial owners as reported by the individual seeking to open the account, which can be done with photocopies of identifying documents such as a driver's license. FinCEN issued guidance explaining this aspect of the final rule in 2016.

In all of our focus groups and many of our interviews, representatives of community banks and credit unions said that new requirements mandating consolidated disclosures to consumers for mortgage terms and fees have increased the time their staff spend on compliance, increased the cost of providing mortgage lending services, and delayed the completion of mortgages for customers. The Dodd-Frank Act directed CFPB to issue new requirements to integrate mortgage loan disclosures that previously had been separately required by the Truth-in-Lending Act (TILA) and the Real Estate Settlement Procedures Act (RESPA) and their implementing regulations, Regulations Z and X, respectively. Effective in October 2015, the combined TILA-RESPA Integrated Disclosure (known as TRID) requires mortgage lenders to disclose certain mortgage terms, conditions, and fees to loan applicants during the origination process for certain mortgage loans and prescribes how the disclosures should be made.
The disclosure provisions also require lenders, in the absence of specified exceptions, to reimburse or refund to borrowers portions of certain fees that exceed the estimates previously provided in order to comply with the revised regulations. Under TRID, lenders generally must provide residential mortgage loan applicants with two forms, and deliver these documents within specified time frames (as shown in fig. 1). Within 3 business days of an application and at least 7 business days before a loan is consummated, lenders must provide the applicant with the loan estimate, which includes estimates for all financing costs and fees and other terms and conditions associated with the potential loan. If circumstances change after the loan estimate has been provided (for example, if a borrower needs to change the loan amount), a new loan estimate may be required. At least 3 days before a loan is consummated, lenders must provide the applicant with the closing disclosure, which has the loan’s actual terms, conditions, and associated fees. If the closing disclosure is mailed to an applicant, lenders must wait an additional 3 days for the applicant to receive it before they can execute the loan, unless they can demonstrate that the applicant has received the closing disclosure. If the annual percentage rate or the type of loan change after the closing disclosure is provided, or if a prepayment penalty is added, a new closing disclosure must be provided and a new 3-day waiting period is required. Other changes made to the closing disclosure require the provision of a revised closing disclosure, but a new 3-day waiting period is not required. If the fees in the closing disclosure are more than the fees in the loan estimate (subject to some exceptions and tolerances discussed later in this section), the lender must reimburse the applicant for the amount of the increase in order to comply with the applicable regulations. 
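The reimbursement mechanics can be sketched as a small calculation. This is a simplified model of the tolerance categories discussed in this section (no tolerance for increases in fees paid to the lender or its affiliates, and a 10 percent aggregate tolerance for certain third-party fees); the actual rule has further exceptions and allows revised estimates when circumstances change, and the fee amounts below are hypothetical.

```python
def required_cure(zero_tol_est, zero_tol_actual,
                  ten_pct_est, ten_pct_actual):
    """Sketch of the lender's reimbursement ("cure") obligation.

    zero_tol_*: fees with no tolerance (e.g., fees paid to the creditor,
        mortgage broker, or a lender affiliate) -- any increase over the
        estimate must be reimbursed.
    ten_pct_*: fees subject to a 10 percent aggregate tolerance (certain
        third-party services) -- only the amount exceeding 110 percent of
        the estimated total must be reimbursed.
    Each argument is a list of individual fee amounts in dollars.
    """
    # Zero-tolerance fees: cure every per-fee increase.
    cure = sum(max(actual - est, 0)
               for est, actual in zip(zero_tol_est, zero_tol_actual))
    # 10-percent bucket: compared in aggregate, not fee by fee.
    aggregate_limit = 1.10 * sum(ten_pct_est)
    cure += max(sum(ten_pct_actual) - aggregate_limit, 0)
    return round(cure, 2)

# Hypothetical loan: a lender fee rose from $995 to $1,145 (zero
# tolerance), and third-party fees rose from $1,000 to $1,150 in
# aggregate (10 percent bucket allows up to $1,100).
print(required_cure([995], [1145], [600, 400], [700, 450]))  # -> 200.0
```

The design point the example illustrates is that the two buckets are checked differently: zero-tolerance fees are compared individually, while the 10 percent bucket is compared only in aggregate.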
In all of our focus groups and most of our interviews, representatives of community banks and credit unions said that TRID has increased the time required to comply with mortgage disclosure requirements and increased the cost of mortgage lending. In half of our focus groups, participants discussed how they have had to spend additional time ensuring the accuracy of their initial estimates of mortgage costs, including fees charged by third parties, in part because they are now financially responsible for changes in fees during the closing process. Some participants also discussed how they have had to hire additional staff to meet TRID’s requirements. In one focus group of community banks, participants described how mortgage loans frequently involve the use of multiple third parties, such as appraisers and inspectors, and obtaining accurate estimates of the amounts these parties will charge for their services within the 3-day period prescribed by TRID can be difficult. The community banks we spoke with also discussed how fees from these parties often change at closing, and ensuring an accurate estimate at the beginning of the process was not always possible. As a result, some representatives said that community banks and credit unions have had to pay to cure or correct the difference in changed third-party fees that are outside their control. In most of our focus groups and some of our interviews, representatives told us that this TRID requirement has made originating a mortgage more costly for community banks and credit unions. Community banks and credit unions in half of our focus groups and some of our interviews also told us that TRID’s requirements are complex and difficult to understand, which adds to their compliance burden. 
Participants in one focus group noted that CFPB's final rule implementing TRID was very long (the rule available on CFPB's website is more than 1,800 pages, including the rule's preamble) and has many scenarios that require different actions by mortgage lenders or trigger different responsibilities, as the following examples illustrate.
- Some fees in the loan estimate, such as prepaid interest, may be subsequently changed, provided that the estimates were made in good faith.
- Other fees, such as those for third-party services where the charge is not paid to the lender or the lender's affiliate, may be changed by as much as 10 percent in aggregate before the lender becomes liable for the difference.
- For some charges, such as fees paid to the creditor, mortgage broker, or a lender affiliate, the lender must reimburse or refund to the borrower any subsequent increase, without any percentage tolerance.

Based on a poll we conducted in all six focus groups, 40 of 43 participants said that they had to provide additional training to staff to ensure that TRID's requirements were understood, which takes time away from serving customers.

In all of our focus groups and most of our interviews, community banks and credit unions also said that TRID's mandatory waiting periods and disclosure schedules increased the time required to close mortgage loans, which created burdens for the institutions and their customers. Several representatives we interviewed told us that TRID's waiting periods led to delays in closings of about 15 days. The regulation mandates that mortgage loans generally cannot be consummated sooner than 7 business days after the loan estimate is provided to an applicant and no sooner than 3 business days after the closing disclosure is received by the applicant. If the closing disclosure is mailed, the lender must add another 3 business days to the closing period to allow for delivery.
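The waiting periods just described can be sketched as a date calculation. This is a simplification: it skips only weekends when counting business days (the rule's definition also excludes certain holidays), it ignores the waiver available for bona fide personal financial emergencies, and the dates used are hypothetical.

```python
from datetime import date, timedelta

def add_business_days(start: date, n: int) -> date:
    """Advance n business days, skipping weekends only (a
    simplification: holidays also count as non-business days)."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            n -= 1
    return d

def earliest_consummation(loan_estimate: date, closing_disclosure: date,
                          mailed: bool = False) -> date:
    """Earliest closing date under the waiting periods described above:
    at least 7 business days after the loan estimate is provided, and
    at least 3 business days after the closing disclosure is received
    (receipt presumed 3 business days after mailing)."""
    received = (add_business_days(closing_disclosure, 3)
                if mailed else closing_disclosure)
    return max(add_business_days(loan_estimate, 7),
               add_business_days(received, 3))

# Hypothetical dates: loan estimate Mon 2017-10-02, closing disclosure
# mailed Mon 2017-10-09.
print(earliest_consummation(date(2017, 10, 2), date(2017, 10, 9),
                            mailed=True))  # -> 2017-10-17
```

As the example shows, mailing the closing disclosure pushes the earliest closing several business days later than hand delivery would, which is consistent with the delays institutions described.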
Representatives in some of our focus groups said that when changes needed to be made to a loan during the closing period, TRID requires them to restart the waiting periods, which can increase delays. For example, if the closing disclosure had been provided and the loan product needed to be changed, a new closing disclosure would have to be provided and the applicant given at least 3 days to review it. Some representatives we interviewed said that their customers are frustrated by these delays and would like to close their mortgages sooner than TRID allows. Others said that TRID's waiting periods decreased flexibility in scheduling the closing date, which caused problems for homebuyers and sellers (for instance, because purchase and sale transactions frequently have to occur on the same day).

However, CFPB officials and staff of a consumer group said that TRID has streamlined previous disclosure requirements and is important for ensuring that consumers obtaining mortgages are protected. CFPB reported that for more than 30 years lenders have been required by law to provide mortgage disclosures to borrowers, and CFPB staff noted that the prior time frames were similar to those required by TRID and Regulation Z. CFPB also noted that information on the disclosure forms that TRID replaced was sometimes overlapping, used inconsistent terminology, and could confuse consumers. In addition, CFPB staff and staff of a consumer group said that the previous disclosures allowed some mortgage-related fees to be combined, which prevented borrowers from knowing what the charges for specific services were. They said that TRID disclosures better highlight important items for home buyers, allowing them to more readily compare loan options. Furthermore, CFPB staff told us that before TRID, lenders and other parties commonly increased a mortgage loan's fees during the closing process and then gave borrowers a "take it or leave it" choice just before closing.
As a result, borrowers often just accepted the increased costs. CFPB representatives said that TRID protects consumers from this practice by shifting the responsibility for most fee increases to lenders, and increases transparency in the lending process. CFPB staff told us that it is too early to definitively identify what impact TRID has had on borrowers’ understanding of mortgage terms, but told us that some information they have seen indicated that it has been helpful. For example, CFPB staff said that preliminary results from the National Survey of Mortgage Originations conducted in 2017 found that consumer confidence in mortgage lending increased. While CFPB staff said that this may indicate that TRID, which became effective in October 2015, has helped consumers better understand mortgage terms, they noted that the complete survey results are not expected to be released until 2018. CFPB staff said that these results should provide valuable information on how well consumers generally understood mortgage terms and whether borrowers were comparison shopping for loans that could be used to analyze TRID’s effects on consumer understanding of mortgage products. CFPB staff also told us that complying with TRID should not result in significant time being added to the mortgage closing process. Based on the final rule, they noted that TRID’s waiting periods should not lead to delays of more than 3 days. CFPB staff also pointed out that the overall 7-day waiting period and the 3-day waiting period can be modified or waived if the consumer has a bona fide personal financial emergency, and thus should not be creating delays for those consumers. To waive the waiting period, consumers have to provide the lender with a written statement that describes the emergency. 
CFPB staff also said that closing times are affected by a variety of factors and can vary substantially, and that the delays that the community banks and credit unions we spoke with reported may not be representative of the experiences of other lenders. A preliminary CFPB analysis of industry-published mortgage closing data found that closing times increased after TRID was first implemented but that the delays subsequently declined. CFPB staff also said that they plan to analyze closing times using HMDA data now that they are collecting these data, and that they expect any delays that community banks and credit unions may have experienced so far to decrease as institutions adjust to the new requirements.

Based on our review of TRID's requirements and our discussions with community banks and credit unions, some of the burden related to TRID that these institutions described appeared to result from actions not required by the regulations, and some institutions told us they still were confused about TRID requirements. For example, representatives of some institutions we interviewed said that they believed TRID requires the entire closing disclosure process to be restarted any time changes were made to a loan's amount. CFPB staff told us that this is not the case, and that revised loan estimates can be made in such cases without additional waiting periods. Representatives of several other community banks and credit unions cited 5- and 10-day waiting periods not in TRID requirements, or believed that the 7-day waiting period begins after the closing disclosure is received by the applicant rather than when the loan estimate is provided. Participants in one focus group discussed that they were confused about when to provide disclosures and what needs to be provided.
Representatives of one credit union said that if they did not understand a requirement, it was in their best interest to delay closing to ensure they were in compliance.

CFPB staff said that they have taken several steps to help lenders understand TRID requirements. CFPB has published a Small Entity Compliance Guide and a Guide to the Loan Estimate and Closing Disclosure Forms. As of December 2017, these guides were accessible on a TRID implementation website that has links to other information about the rule, as well as blank forms and completed samples. CFPB staff told us that the bureau conducted several well-attended, in-depth webinars to explain different aspects of TRID, including one with more than 20,000 participants, and that recordings of the presentations remained available on the bureau's TRID website. CFPB also encourages institutions to submit questions about TRID through the website, and the staff said that they review submitted questions for any patterns that may indicate that an aspect of the regulation is overly burdensome.

However, the Mortgage Bankers Association reported that CFPB's guidance for TRID had not met the needs of mortgage lenders. In a 2017 report on reforming CFPB, the association stated that timely and accessible answers to frequently asked questions about TRID were still needed, noting that while CFPB had assigned staff to answer questions, the answers were not widely circulated. The association also reported that it had made repeated requests for additional guidance related to TRID, but the agency largely had not provided additional materials in response. Although we found that misunderstandings of TRID requirements could be creating unnecessary compliance burdens for some small institutions, CFPB had not assessed the effectiveness of the guidance it provided to community banks and credit unions.
Under the Dodd-Frank Act, CFPB has a general responsibility to ensure its regulations are not unduly burdensome, and internal control standards direct federal agencies to analyze and respond to risks related to achieving their defined objectives. However, CFPB staff said that they have not directly assessed how well community banks and credit unions have understood TRID requirements and acknowledged that some of these institutions may be applying the regulations improperly. They said that CFPB intends to review the effectiveness of its guidance, but did not indicate when this review would be completed. Until the agency assesses how well community banks and credit unions understand TRID requirements, CFPB may not be able to effectively respond to the risk that some smaller institutions have implemented TRID incorrectly, unnecessarily burdening their staff and delaying consumers' home purchases.

We did not find that regulators directed institutions to comply with regulations from which they were exempt, although institutions were concerned about the appropriateness of examiner expectations. To provide regulatory relief to community banks and credit unions, Congress and regulators have sometimes exempted smaller institutions from the need to comply with all or part of some regulations. Such exemptions are often based on the size of the financial institution or the level of particular activities. For example, CFPB exempted institutions with less than $45 million in assets and fewer than 25 closed-end mortgage loans or 500 open-end lines of credit from the expanded HMDA reporting requirements. In January 2013, CFPB also included exemptions for some institutions in a rule related to originating loans that meet certain characteristics (known as qualified mortgages) in order for the institutions to receive certain liability protections if the loans later go into default.
To qualify for this treatment, the lenders must make a good faith effort to determine a borrower’s ability to repay a loan and the loan must not include certain risky features (such as interest-only or balloon payments). In its final rule, CFPB included exemptions that allow small creditors to originate loans with certain otherwise restricted features (such as balloon payments) and still be considered qualified mortgage loans. Concerns expressed to legislators about exemptions not being applied appeared to be based on misunderstandings of certain regulations. For example, in June 2016, a bank official testified that he thought his bank would be exempt from all of CFPB’s requirements. However, CFPB’s rules applicable to banks apply generally to all depository institutions, although CFPB only conducts compliance examinations for institutions with assets exceeding $10 billion. The depository institution regulators continue to examine institutions with assets below this amount (the overwhelming majority of banks and credit unions) for compliance with regulations enacted by CFPB. Although not generalizable, our analysis of select examinations did not find that regulators directed institutions to comply with requirements from which they were exempt. In our interviews with representatives from 17 community banks and credit unions, none of the institutions’ representatives identified any cases in which regulators required their institution to comply with a regulatory requirement from which they should have been exempt. We also randomly selected and reviewed examination reports and supporting material for 28 examinations conducted by the regulators to identify any instances in which the regulators had not applied exemptions. From our review of the 28 examinations, we found no instances in the examination reports or the scoping memorandums indicating that examiners had required these institutions to comply with the regulations covered by the eight selected exemptions. 
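As a rough illustration, the HMDA exemption thresholds cited above can be expressed as a predicate. The logic follows this report's summary of the thresholds; the actual regulation defines the tests more precisely (including look-back periods and annually adjusted asset thresholds), so this is a sketch, not a compliance check.

```python
def exempt_from_expanded_hmda(assets_millions: float,
                              closed_end_loans: int,
                              open_end_lines: int) -> bool:
    """Sketch of the size/activity exemption as summarized in this
    report: institutions with less than $45 million in assets and
    fewer than 25 closed-end mortgage loans or 500 open-end lines
    of credit. The real rule's tests are more detailed."""
    small = assets_millions < 45
    low_activity = closed_end_loans < 25 or open_end_lines < 500
    return small and low_activity

# Hypothetical institutions:
print(exempt_from_expanded_hmda(40, 10, 0))    # -> True
print(exempt_from_expanded_hmda(100, 10, 0))   # -> False (too large)
```

The point of the sketch is that the exemption turns on both institution size and activity level, which is why regulators and examiners must check several facts about an institution before applying it.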
Because of the limited number of examinations we reviewed, we cannot generalize our findings to the regulatory treatment of all institutions qualifying for exemptions. Although they did not identify issues relating to exemptions, representatives of community banks and credit unions in about half of our interviews and focus groups expressed concerns that their regulators expected them to follow practices they did not feel corresponded to the size or risks posed by their institutions. For example, representatives from one institution we interviewed said that examiners directed them to increase BSA/AML activities or staff, whereas they did not see such expectations as appropriate for institutions of their size. Similarly, in public forums held by regulators as part of their EGRPRA reviews (discussed in the next section), a few bank representatives stated that regulators sometimes considered compliance activities by large banks to be best practices and then expected smaller banks to follow such practices. However, the institution representatives in the public forums and in our interviews and focus groups who said that regulators' expectations for their institutions were sometimes inappropriate did not identify specific regulations or practices they had been asked to consider following.

To help ensure that applicable exemptions and regulatory expectations are appropriately applied, federal depository institution regulators told us they train their staff in applicable requirements and conduct senior-level reviews of examinations to help ensure that examiners apply only appropriate requirements and expectations to banks and credit unions. Regulators said that they do not conduct examinations in a one-size-fits-all manner, and aim to ensure that community banks and credit unions are held to standards appropriate to their size and business model. To achieve this, they said that examiners undergo rigorous training.
For example, FDIC staff said that its examiners have to complete four core trainings and then receive ongoing on-the-job instruction. Each of the four regulators also said they have established quality assurance programs to review and assess their examination programs periodically. For example, each Federal Reserve Bank reviews its programs for examination inconsistency, and Federal Reserve Board staff conduct continuous and point-in-time oversight reviews of Reserve Banks' examination programs to identify issues or problems, such as examination inconsistency. The depository institution regulators also said that they have processes for depository institutions to appeal examination findings if they feel they were held to inappropriate standards. In addition to less formal steps, such as contacting a regional office, each of the four regulators has an ombudsman office to which institutions can submit complaints or concerns about examination findings. Staff of these offices are independent from the regulators' management and work with the depository institutions to resolve examination issues and concerns. If the ombudsman is unable to resolve the complaints, then the institutions can further appeal through established processes.

Federal depository institution regulators address the regulatory burden of their regulated institutions through the rulemaking process and through retrospective reviews that may provide some regulatory relief to community banks. However, the retrospective review process has limitations that reduce its effectiveness in assessing and addressing regulatory burden on community banks and credit unions. Federal depository institution regulators can address the regulatory burden of their regulated institutions throughout the rulemaking process and through mandated retrospective, or "look back," reviews. According to the regulators, attempts to reduce regulatory burden start during the initial rulemaking process.
Staff from FDIC, the Federal Reserve, NCUA, and OCC all noted that when promulgating rules, their staff seek input from institutions and others throughout the process to design requirements that achieve the goals of the regulation at the most reasonable cost and effort for regulated entities. Once a rule has been drafted, the regulators publish it in the Federal Register for public comment. The staff noted that regulators often make revisions in response to the comments received to try to reduce compliance burdens in the final regulation.

After regulations are implemented, banking regulators also address regulatory burdens by periodically conducting mandated reviews of their regulations. The Economic Growth and Regulatory Paperwork Reduction Act of 1996 (EGRPRA) directs three regulators (the Federal Reserve, FDIC, and OCC, as agencies represented on the Federal Financial Institutions Examination Council) to review all of their regulations at least every 10 years and, through public comment, identify areas of the regulations that are outdated, unnecessary, or unduly burdensome on insured depository institutions. Under the act, the regulators are to categorize their regulations and provide notice and solicit public comment on all the regulations for which they have regulatory authority. The act also includes a number of requirements on how the regulators should conduct the review, including reporting results to Congress. The first EGRPRA review was completed in 2007. The second EGRPRA review began in 2014, and the report summarizing its results was submitted to Congress in March 2017. While NCUA is not required to participate in the EGRPRA review (because EGRPRA did not include the agency in the list of agencies that must conduct the reviews), NCUA has been participating voluntarily. NCUA's assessment of its regulations appears in separate sections of the reports provided to Congress for the 2007 and 2017 reviews.
Regulators began the most recent EGRPRA review by providing notice and soliciting comments in 2014–2016. The Federal Reserve, FDIC, and OCC issued four public notices in the Federal Register seeking comments from regulated institutions and interested parties on 12 categories of regulations they promulgated. The regulators published a list of all the regulations they administer in the notices and asked for comments, including comments on the extent to which regulations were burdensome. Although not specifically required under EGRPRA, the regulators also held six public meetings across the country with several panels of banks and community groups. At each public meeting, at least three panels of bank officials represented banks with assets of generally less than $5 billion and a large number of the panels included banks with less than $2 billion in assets. Panels were dedicated to specific regulations or sets of regulations. For example, one panel covered capital-related rules, consumer protection, and director-related rules, and another addressed BSA/AML requirements. Although panels were dedicated to specific regulations or sets of regulations, the regulators invited comment on all of their regulations at all public meetings. The regulators then assessed the public comments they received and described actions they intended to take in response. EGRPRA requires that the regulators identify the significant issues raised by the comments. The regulators generally deemed the issues that received the most public comments as significant. For the 2017 report, representatives at the Federal Reserve, FDIC, and OCC reviewed, evaluated, and summarized more than 200 comment letters and numerous oral comments they received. 
For interagency regulations that received numerous comments, such as those relating to capital and BSA/AML requirements, the comment letters for each were provided to staff of one of the three regulators or to previously established interagency working groups to conduct the initial assessments. The regulators' comment assessments also included reviews by each agency's subject-matter experts, who prepared draft summaries of the concerns and proposed agency responses for each of the rules that received comments. According to one bank regulator, the subject-matter experts assessed the comments across three aspects: (1) whether a suggested change to the regulation would reduce bank burdens; (2) how the change to the regulation would affect the safety and soundness of the banking system; and (3) whether a statutory change would be required to address the comment. The summaries drafted by the subject-matter experts then were shared with staff representing all three regulators and further revised. The staff of the three regulators said they then met jointly to analyze the merits of the comments and finalize the comment responses and the proposed actions for approval by senior management at all three regulators.

In the 2017 report summarizing their assessment of the comments received, the regulators identified six significant areas in which commenters raised concerns: (1) capital rules, (2) financial condition reporting (Call Reports), (3) appraisal requirements, (4) examination frequency, (5) the Community Reinvestment Act, and (6) BSA/AML. Based on our analysis of the 2017 report, the Federal Reserve, FDIC, and OCC had taken or pledged to take actions to address 11 of the 28 specific concerns commenters had raised across these six areas. We focused our analysis on concerns within the six significant areas that affected smaller institutions, and we defined an action taken by the regulators as a change or revision to a regulation or the issuance of guidance.

Capital rules.
The regulators noted in the 2017 EGRPRA report that they received comment letters from more than 30 commenters on the recently revised capital requirements. Although some of the concerns commenters expressed related to issues affecting large institutions, some commenters sought to have regulators completely exempt smaller institutions from the requirements. Others objected to the amounts of capital that had to be held for loans made involving more volatile commercial real estate. In response, the regulators stated that the more than 500 failures of banks in the recent crisis, most of which were community banks, justified requiring all banks to meet the new capital requirements. However, they pledged in the report to make some changes, and have recently proposed rules that would alter some of the requirements. For example, on September 27, 2017, the regulators proposed several revisions to the capital requirements that would apply to banks not subject to the advanced approach requirements under the capital rules (generally, banks with less than $250 billion in assets and less than $10 billion in total foreign exposure). Among other changes, the proposed rule would simplify the capital treatment for certain commercial acquisition, development, and construction loans and would change the treatment of mortgage servicing assets. Call Reports. The regulators also received more than 30 comments relating to the reports—known as Call Reports—that banks file with the regulators outlining their financial condition and performance. Generally, the commenters requested relief (reducing the number of items required to be reported) for smaller banks and also asked that the frequency of reporting for some items be reduced. In response to these concerns, the regulators described a review of the Call Report requirements intended to reduce the number of items to be reported to the regulators.
The regulators had started this effort to address Call Report issues soon after the most recent EGRPRA process had begun in June 2014. In the 2017 EGRPRA report, the regulators noted that they developed a new Call Report form for banks with assets of less than $1 billion and domestic offices only. According to the regulators, the new form reduced the number of items such banks had to report by 40 percent. Staff from the regulators told us that about 3,500 banks used the new small-bank reporting form in March 2017, which represented about 68 percent of the banks eligible to use the new form. OCC officials told us that an additional 100 federally chartered banks submitted the form for the 2017 second quarter reporting period. After the issuance of the 2017 EGRPRA report, in June 2017 the regulators issued additional proposed revisions to the three Call Report forms that banks are required to complete. These proposed changes are to become effective in June 2018. For example, one of the proposed changes to the new community bank Call Report form would change the frequency of reporting certain data on non-accrual assets—nonperforming loans that are not generating their stated interest rate—from quarterly to semi-annually. In November 2017, the agencies issued further proposed revisions to the community bank Call Report that would delete or consolidate a number of items and add new, or raise certain existing, reporting thresholds. These proposed revisions would take effect as of June 2018. Appraisals. The three bank regulators and NCUA received more than 160 comments during the 2017 EGRPRA process related to appraisal requirements.
The commenters included banks and others that sought to raise the size of the loans that require appraisals, and a large number of appraisers that objected to any changes in the requirements. According to the EGRPRA report, several professional appraiser associations argued that raising the threshold could undermine the safety and soundness of lenders and diminish consumer protection for mortgage financing. These commenters argued that increasing the thresholds could encourage banks to neglect collateral risk-management responsibilities. In response, in July 2017, the regulators proposed raising the threshold for when an appraisal is required from $250,000 to $400,000 for commercial real estate loans. The regulators indicated that raising the current $250,000 appraisal threshold for 1-4 family residential mortgage loans would not be appropriate at this time because they believed that appraisals for loans above that level increased the safety of those loans and better protected consumers, and because other participants in the housing market, such as the Department of Housing and Urban Development and the government-sponsored enterprises, also required appraisals for loans above that amount. However, the depository institution regulators included in the proposal a request for comment about the appraisal requirements for residential real estate and other factors banks believe should be considered in setting the threshold for these loans. As part of the 2017 EGRPRA process, the regulators also received comments indicating that banks in rural areas were having difficulty securing appraisers. In the EGRPRA report, the regulators acknowledged this difficulty, and in May 2017 the bank regulators and NCUA issued guidance on how institutions could obtain temporary waivers and use other means to expand the pool of persons eligible to prepare appraisals in cases in which suitable appraiser staff were unavailable.
The agencies also responded to commenters who found the evaluation process confusing by issuing an interagency advisory on the process in March 2016. Evaluations may be used instead of an appraisal for certain transactions, including those under the threshold. Frequency of safety and soundness examinations. As part of the 2017 EGRPRA process, the agencies also received comments requesting that they raise the total asset threshold for an insured depository institution to qualify for the extended 18-month examination cycle from $1 billion to $2 billion and further extend the examination cycle from 18 months to 36 months. During the EGRPRA process, Congress took legislative action to reduce examination frequency for smaller, well-capitalized banks. In 2015, the FAST Act raised the threshold for the 18-month examination cycle from less than $500 million to less than $1 billion for certain well-capitalized and well-managed depository institutions with an “outstanding” composite rating and gave the agencies discretion to similarly raise this threshold for certain depository institutions with an “outstanding” or “good” composite rating. The agencies exercised this discretion and issued a final rule in 2016 making qualifying depository institutions with less than $1 billion in total assets eligible for an 18-month (rather than a 12-month) examination cycle. According to the EGRPRA report, agency staff estimated that the final rules allowed approximately 600 more institutions to qualify for an extended 18-month examination cycle, bringing the total number of qualifying institutions to 4,793. Community Reinvestment Act. The commenters in the 2017 EGRPRA process also raised various issues relating to the Community Reinvestment Act, including the geographic areas in which institutions were expected to provide loans to low- and moderate-income borrowers and whether credit unions should be required to comply with the act’s requirements.
The regulators noted that they were not intending to take any actions to revise regulations relating to this act because many of the revisions the commenters suggested would require changes to the statute (that is, legislative action). The regulators also noted that they had addressed some of the concerns by revising the Interagency Questions and Answers relating to this act in 2016. Furthermore, the agencies noted that they have been reviewing their existing examination procedures and practices to identify policy and process improvements. BSA/AML. The regulators also received a number of comments as part of the 2017 EGRPRA process on the burden institutions encounter in complying with BSA/AML requirements. These included the thresholds for reporting currency transactions and suspicious activities. The regulators also received comments on both BSA/AML examination frequency and the frequency of safety and soundness examinations generally. Agencies typically review BSA/AML compliance programs during safety and soundness examinations. As discussed previously, regulators allowed more institutions of outstanding or good composite condition to be examined every 18 months instead of every 12 months. Institutions that qualify for less frequent safety and soundness examinations also will be eligible for less frequent BSA/AML examinations. For the remainder of the issues raised by commenters, the regulators noted they do not have the regulatory authority to revise the requirements but provided the comments to FinCEN, which has authority for these regulations. A letter with FinCEN’s response to the comments was included as an appendix to the EGRPRA report. In the letter, the FinCEN Acting Director stated that FinCEN would work through the issues raised by the comments with its advisory group, which consists of regulators, law enforcement staff, and representatives of financial institutions. Additional Burden Reduction Actions.
In addition to describing some changes in response to the comments deemed significant, the regulators’ 2017 report also includes descriptions of additional actions the individual agencies have taken or planned to take to reduce the regulatory burden for banks, including community banks. The Federal Reserve Board noted that it changed its Small Bank Holding Company Policy Statement, which allows small bank holding companies to hold more debt than permitted for larger bank holding companies. In addition, the Federal Reserve noted that it had made changes to certain supervisory policies, such as issuing guidance on assessing risk management for banks with less than $50 billion in assets and launching an electronic application filing system for banks and bank holding companies. OCC noted that it had issued two final rules amending its regulations for licensing/chartering and securities-related filings, among other things. According to OCC staff, the agency conducted an internal review of its agency-specific regulations, and many of the changes to these regulations came from that internal review. The agency also noted that it integrated its rules for national banks and federal savings associations where possible. In addition, OCC noted that it removed redundant and unnecessary information requests from those made to banks before examinations. FDIC noted that it had rescinded enhanced supervisory procedures for newly insured banks and reduced the consumer examination frequency for small and newly insured banks. Similar to OCC, FDIC is integrating its rules for state nonmember banks and state-chartered savings associations. In addition, FDIC noted it had issued new guidance on banks’ deposit insurance filings and reduced paperwork for new bank applications. The 2017 report also presents the results of NCUA’s concurrent efforts to obtain and respond to comments as part of the EGRPRA process.
NCUA conducts its review separately from the bank regulators’ review. In four Federal Register notices in 2015, NCUA sought comments on 76 regulations that it administers. NCUA received about 25 comments raising concerns about 29 of its regulations, most of which were submitted by credit union associations. NCUA received no comments on 47 regulations. NCUA’s methodology for its regulatory review was similar to the bank regulators’ methodology. According to NCUA, all comment letters responding to a particular notice were collected and reviewed by NCUA’s Special Counsel to the General Counsel, an experienced, senior-level attorney with overall responsibility for EGRPRA compliance. NCUA staff told us that the criteria applied by the Special Counsel in his review included relevance, the depth of understanding and analysis exhibited by the comment, and the degree to which multiple commenters expressed the same or similar views on an issue. The Special Counsel prepared a report summarizing the substance of each comment. The comment summary was reviewed by the General Counsel and then circulated to and reviewed by the NCUA Board members and staff. NCUA identified in its report the following as significant issues relating to credit union regulation: (1) field of membership and chartering; (2) member business lending; (3) federal credit union ownership of fixed assets; (4) expansion of national credit union share insurance coverage; and (5) expanded powers for credit unions. For these, NCUA took various actions to address the issues raised in the comments. For example, NCUA modified and updated its field-of-membership rules by revising the definitions of a local community, rural district, and underserved area, which provided greater flexibility to federal credit unions seeking to add a rural district to their field of membership.
NCUA also lessened some of the restrictions on member business lending and raised some of the asset thresholds for what would be defined as a small credit union so that fewer requirements would apply to these credit unions. Also, in April 2016, the NCUA Board issued a proposed rule that would eliminate the requirement that federal credit unions have a plan by which they will achieve full occupancy of premises within an explicit time frame. The proposal would allow federal credit unions to plan for and manage their use of office space and related premises in accordance with their own strategic plans and risk-management policies. The bank and credit union regulators’ process for the 2007 EGRPRA review also began with Federal Register notices that requested comments on regulations. The regulators then reviewed and assessed the comments and issued a report in 2007 to Congress in which they noted actions they took in some of the areas raised by commenters. Our analysis of the regulators’ responses indicated that the regulators took responsive actions in a few areas. The regulators noted they already had taken action in some cases (including after completion of a pending study and as a result of efforts to work with Congress to obtain statutory changes). However, for the remaining specific concerns, the four regulators indicated that they would not be taking actions. Similar to its response in 2017, NCUA discussed its responses to the significant issues raised about regulations in a separate section of the 2007 report. Our analysis indicated that NCUA took responsive actions in about half of the areas. For example, NCUA adjusted regulations in one case and in another case noted previously taken actions. For comments related to three other areas, NCUA took actions not reflected in the 2007 report because the actions were taken over a longer time frame (in some cases, after 8 years).
In the remaining areas, NCUA deemed actions as not being desirable in four cases and outside of its authority in two other cases. The bank regulators do not conduct other retrospective reviews of regulations outside of the EGRPRA process. We requested information from the Federal Reserve, FDIC, and OCC about any discretionary regulatory retrospective reviews that they performed in addition to the EGRPRA review during 2012–2016. All three regulators reported to us they have not conducted any retrospective regulatory reviews outside of EGRPRA since 2012. However, under the Regulatory Flexibility Act (RFA), federal agencies are required to conduct what are referred to as section 610 reviews. The purpose of these reviews is to determine whether certain rules should be continued without change, amended, or rescinded consistent with the objectives of applicable statutes, to minimize any significant economic impact of the rules upon a substantial number of small entities. Section 610 reviews are to be conducted within 10 years of an applicable rule’s publication. As part of other work, we assessed the bank regulators’ section 610 reviews and found that the Federal Reserve, FDIC, and OCC conducted retrospective reviews that did not fully align with the Regulatory Flexibility Act’s requirements. Officials at each of the agencies stated that they satisfy the requirements to perform section 610 reviews through the EGRPRA review process. However, we found that the requirements of the EGRPRA reviews differ from those of the RFA-required section 610 reviews, and we made recommendations to these regulators to help ensure their compliance with this act in a separate report issued in January 2018. In addition to participating in the EGRPRA review, NCUA also reviews one-third of its regulations every year (each regulation is reviewed every 3 years). NCUA’s “one-third” review employs a public notice and comment process similar to the EGRPRA review. 
If a specific regulation does not receive any comments, NCUA does not review the regulation. For the 2016 one-third review, NCUA did not receive comments on 5 of 16 regulations, and thus these regulations were not reviewed. NCUA made technical changes to 4 of the 11 regulations that received comments. In August 2017, NCUA staff announced that they had developed a task force for conducting additional regulatory reviews, including developing a 4-year agenda for reviewing and revising NCUA’s regulations. The primary factors they said they intend to use to evaluate their regulations will be the magnitude of the benefit and the degree of effort that credit unions must expend to comply with the regulations. Because the 4-year reviews will be conducted on all of NCUA’s regulations, staff noted that the annual one-third regulatory review process will not be conducted again until 2020. Our analysis of the EGRPRA review found three limitations to the current process. First, the EGRPRA statute does not include CFPB, and thus the significant mortgage-related regulations and other regulations that it administers—regulations that banks and credit unions must follow—were not included in the EGRPRA review. Under the Dodd-Frank Act, CFPB was given financial regulatory authority, including for regulations implementing the Home Mortgage Disclosure Act (Regulation C); the Truth-in-Lending Act (Regulation Z); and the Truth-in-Savings Act (Regulation DD). These regulations apply to many of the activities that banks and credit unions conduct; the four depository institution regulators conduct the large majority of examinations of these institutions’ compliance with these CFPB-administered regulations. However, EGRPRA was not amended after the Dodd-Frank Act to include CFPB as one of the agencies that must conduct the EGRPRA review. During the 2017 EGRPRA review, the bank regulators only requested public comments on consumer protection regulations for which they have regulatory authority.
But the banking regulators still received some comments on the key mortgage regulations and the other regulations that CFPB now administers. Our review of 2017 forum transcripts identified almost 60 comments on mortgage regulations, such as HMDA and TRID. The bank regulators could not address these mortgage regulation-related comments because they no longer had regulatory authority over these regulations; instead, they forwarded these comment letters to CFPB staff. According to CFPB staff, their role in the most recent EGRPRA process was very limited. CFPB staff told us they had no role in assessing the public comments received for purposes of the final 2017 EGRPRA report. According to one bank regulator, the bank regulators did not share non-mortgage-related comment letters with CFPB staff because those letters did not involve CFPB regulations. Another bank regulator told us that CFPB was offered the opportunity to participate in the outreach meetings and was kept informed of the EGRPRA review during the quarterly FFIEC meetings that occurred during the review. Before the report was sent to Congress, CFPB staff said that they reviewed several late-stage drafts, but generally limited their review to ensuring that references to CFPB’s authority and regulations and its role in the EGRPRA process were properly characterized and explained. As a member of FFIEC, which issued the final report, CFPB’s Director was given an opportunity to review the report again just prior to its approval by FFIEC. CFPB must conduct its own reviews of regulations after they are implemented. Section 1022(d) of the Dodd-Frank Act requires CFPB to conduct an assessment of each significant rule or order adopted by the bureau under federal consumer financial law. CFPB must publish a report of the assessment not later than 5 years after the effective date of such rule or order.
The assessment must address, among other relevant factors, the rule’s effectiveness in meeting the purposes and objectives of title X of the Dodd-Frank Act and specific goals stated by CFPB. The assessment also must reflect available evidence and any data that CFPB reasonably may collect. Before publishing a report of its assessment, CFPB must invite public comment on recommendations for modifying, expanding, or eliminating the significant rule or order. CFPB announced in Federal Register notices in spring 2017 that it was commencing assessments of rules related to Qualified Mortgage/Ability-to-Repay requirements, remittances, and mortgage servicing regulations. The notices described how CFPB planned to assess the regulations. In each notice, CFPB requested public comment on the feasibility and effectiveness of the assessment plan; data and other factual information that may be useful for executing the plan; recommendations to improve the plan and relevant data; and data and other factual information about the benefits, costs, impacts, and effectiveness of the significant rule. Reports of these assessments are due in late 2018 and early 2019. According to CFPB staff, the requests for data and other factual information are consistent with the statutory requirement that the assessment must reflect available evidence and any data that CFPB reasonably may collect. The Federal Register notices also describe other data sources that CFPB has in-house or has been collecting pursuant to this requirement. CFPB staff told us that they have not yet determined whether certain other regulations that apply to banks and credit unions, such as the revisions to TRID and HMDA requirements, will be designated as significant and thus subjected to the one-time assessments.
CFPB staff also told us they anticipate that within approximately 3 years after the effective date of a rule, the bureau generally will have determined whether the rule is a significant rule for section 1022(d) assessment purposes. In tasking the bank regulators with conducting the EGRPRA reviews, Congress indicated its intent was to require these regulators to review all regulations that could be creating undue burden on regulated institutions. According to a Senate committee report relating to EGRPRA, the purpose of the legislation was to minimize unnecessary regulatory impediments for lenders, in a manner consistent with safety and soundness, consumer protection, and other public policy goals, so as to produce greater operational efficiency. Some in Congress have recognized that the omission of CFPB from the EGRPRA process is problematic, and in 2015 legislation was introduced to require that CFPB—and NCUA—formally participate in the EGRPRA review. Currently, without CFPB’s participation, key regulations that affect banks and credit unions may not be subject to the review process. In addition, these regulations may not be reviewed if CFPB does not deem them significant. Further, if reviewed, CFPB’s mandate is for a one-time, not recurring, review. CFPB staff told us that they have two additional initiatives designed to review its regulations, both of which were announced in CFPB’s spring and fall 2017 Semiannual Regulatory Agendas. First, CFPB launched a program to periodically review individual existing regulations—or portions of large regulations—to identify opportunities to clarify ambiguities, address developments in the marketplace, or modernize or streamline provisions. Second, CFPB launched an internal task force to coordinate and bolster its continuing efforts to identify and relieve regulatory burdens, including for small businesses such as community banks; this effort potentially could address any regulation under the agency’s jurisdiction.
Staff told us the agency has been considering suggestions it received from community banks and others on ways to reduce regulatory burden. However, CFPB has not provided public information specifically on the extent to which it intends to review regulations applicable to community banks, credit unions, and other institutions, or provided information on the timing and frequency of the reviews. In addition, it has not indicated the extent to which it will coordinate the reviews with the federal depository institution regulators as part of the EGRPRA reviews. Until CFPB publicly provides additional information indicating its commitment to periodically review the burden of all its regulations, community banks, credit unions, and other depository institutions may face diminished opportunities for relief from regulatory burden. Second, the federal depository institution regulators have not conducted or reported on quantitative analyses during the EGRPRA process to help them determine if changes to regulations would be warranted. Our analysis of the 2017 report indicated that in responses to comments where they did not take any actions, the regulators generally provided only their arguments against taking actions and did not cite analysis or data to support their reasoning. In contrast, other federal agencies that are similarly tasked with conducting retrospective regulatory reviews are required to follow certain practices for such reviews that could serve as best practices for the depository institution regulators. For example, the Office of Management and Budget’s Circular A-4 guidance on regulatory analysis notes that a good analysis is transparent and should allow qualified third parties reviewing such analyses to clearly see how estimates and conclusions were determined.
In addition, executive branch agencies that are tasked under executive orders to conduct retrospective reviews of regulations they issue generally are required under these orders to collect and analyze quantitative data as part of assessing the costs and benefits of changing existing regulations. However, EGRPRA does not require the regulators to collect and report on any quantitative data they collected or analyzed as part of assessing the potential burden of regulations. Conducting and reporting on how they analyzed the impact of potential regulatory changes to address burden could assist the depository institution regulators in conducting their EGRPRA reviews. For example, as discussed previously, Community Reinvestment Act regulations were deemed a significant issue, with commenters questioning the relevance of requiring small banks to make community development loans and suggesting that the asset threshold for this requirement be raised from $1 billion to $5 billion. The regulators told us that if the thresholds were raised, then community development loans would decline, particularly in underserved communities. However, regulators did not collect and analyze data for the EGRPRA review to determine the amount of community development loans provided by banks with assets of less than $1 billion; including a discussion of quantitative analysis might have helped show that community development loans from smaller community banks provided additional credit in communities—and thus helped to demonstrate the benefits of not changing the requirement as commenters requested. By not performing and reporting quantitative analyses where appropriate in the EGRPRA review, the regulators may be missing opportunities to better assess regulatory impacts after a regulation has been implemented, including identifying the need for any changes or benefits from the regulations and making their analyses more transparent to stakeholders. 
As the Office of Management and Budget’s Circular A-4 guidance on the development of regulatory analysis noted, sound quantitative estimates of costs and benefits, where feasible, are preferable to qualitative descriptions of benefits and costs because they help decision makers understand the magnitudes of the effects of alternative actions. By not fully describing their rationale for the analyses that supported their decisions, regulators may be missing opportunities to better communicate their decisions to stakeholders and the public. Lastly, in the EGRPRA process, the federal depository institution regulators have not assessed the ways that the cumulative burden of the regulations they administer may have created overlapping or duplicative requirements. Under the current process, the regulators have responded to issues raised about individual regulations based on comments they have received, not on bodies of regulations. However, congressional intent in tasking the depository institution regulators with the EGRPRA reviews was to ensure that they considered the cumulative effect of financial regulations. A 1995 Senate Committee on Banking, Housing, and Urban Affairs report stated that while no one regulation can be singled out as being the most burdensome, and most have meritorious goals, the aggregate burden of banking regulations ultimately affects a bank’s operations, its profitability, and the cost of credit to customers. For example, financial regulations may have created overlapping or duplicative requirements in the area of safety and soundness. One primary concern noted in the 2017 EGRPRA report was the amount of information or data banks are required to provide to regulators. For example, the cumulative burden of information collection was raised by commenters in relation to Call Reports, Community Reinvestment Act, and BSA/AML requirements.
But in the EGRPRA report, the regulators did not examine how the various reporting requirements might relate to each other or how they might collectively affect institutions. In contrast, the executive branch agencies that conduct retrospective regulatory reviews must consider the cumulative effects of their own regulations, including cumulative burdens. For example, Executive Order 13563 directs agencies, to the extent practicable, to consider the costs of cumulative regulations. Executive Order 13563 does not apply to independent regulatory agencies such as the Federal Reserve, FDIC, OCC, NCUA, or CFPB. A memorandum from the Office of Management and Budget provided guidance to the agencies required to follow this order for assessing the cumulative burden and costs of regulations. The actions suggested for careful consideration include conducting early consultations with affected stakeholders to discuss potential interactions between rulemakings under consideration and existing regulations as well as other anticipated regulatory requirements. The executive order also directs agencies to consider regulations that appear to be attempting to achieve the same goal. Researchers often acknowledge, however, that cumulative assessments of burden are difficult. Nevertheless, until the Federal Reserve, FDIC, OCC, and NCUA identify ways to consider the cumulative burden of regulations, they may miss opportunities to streamline bodies of regulations to reduce the overall compliance burden on financial institutions, including community banks and credit unions. For example, regulations applicable to specific activities of banks, such as lending or capital, could be assessed to determine if they have overlapping or duplicative requirements that could be revised without materially reducing the benefits sought by the regulations.
New regulations for financial institutions enacted in recent years have helped protect mortgage borrowers, increase the safety and soundness of the financial system, and facilitate anti-terrorism and anti-money laundering efforts. But the regulations also entail compliance burdens, particularly for smaller institutions such as community banks and credit unions, and the cumulative burden on these institutions can be significant. Representatives from the institutions with which we spoke cited three sets of regulations—HMDA, BSA/AML, and TRID—as most burdensome for reasons that included their complexity. In particular, the complexity of TRID regulations appears to have contributed to misunderstandings that in turn caused institutions to take unnecessary actions. While regulators have acted to reduce burdens associated with the regulations, CFPB has not assessed the effectiveness of its TRID guidance. Federal internal control standards require agencies to analyze and respond to risks to achieving their objectives, and CFPB’s objectives include addressing regulations that are unduly burdensome. Assessing the effectiveness of TRID guidance represents an opportunity to reduce misunderstandings that create additional burden for institutions and also affect individual consumers (for instance, by delaying mortgage closings). The federal depository institution regulators (FDIC, the Federal Reserve, OCC, and NCUA) also have opportunities to enhance the activities they undertake during EGRPRA reviews. Congress intended that the burden of all regulations applicable to depository institutions would be periodically assessed and reduced through the EGRPRA process. But because CFPB has not been included in this process, the regulations for which it is responsible were not assessed, and CFPB has not yet provided public information about which regulations it will review, when it will review them, and whether it will coordinate with other regulators during EGRPRA reviews. 
Until such information is publicly available, the extent to which the regulatory burden of CFPB regulations will be periodically addressed remains unclear. The effectiveness of the EGRPRA process also has been hampered by other limitations, including the depository institution regulators’ not conducting and reporting on analyses of quantitative data and not assessing the cumulative effect of regulations on institutions. Addressing these limitations in their EGRPRA processes likely would make the analyses the regulators perform more transparent and potentially result in additional burden reduction. We make a total of 10 recommendations, which consist of 2 recommendations to CFPB, 2 to FDIC, 2 to the Federal Reserve, 2 to OCC, and 2 to NCUA. The Director of CFPB should assess the effectiveness of TRID guidance to determine the extent to which TRID’s requirements are accurately understood and take steps to address any issues as necessary. (Recommendation 1) The Director of CFPB should issue public information on its plans for reviewing regulations applicable to banks and credit unions, including information describing the scope of regulations, the timing and frequency of the reviews, and the extent to which the reviews will be coordinated with the federal depository institution regulators as part of their periodic EGRPRA reviews. (Recommendation 2) The Chairman, FDIC, should, as part of the EGRPRA process, develop plans for FDIC’s regulatory analyses describing how it will conduct and report on quantitative analysis whenever feasible to strengthen the rigor and transparency of the EGRPRA process. (Recommendation 3) The Chairman, FDIC, should, as part of the EGRPRA process, develop plans for conducting evaluations that would identify opportunities for streamlining bodies of regulation. 
(Recommendation 4) The Chair, Board of Governors of the Federal Reserve System, should, as part of the EGRPRA process, develop plans for the Federal Reserve’s regulatory analyses describing how it will conduct and report on quantitative analysis whenever feasible to strengthen the rigor and transparency of the EGRPRA process. (Recommendation 5) The Chair, Board of Governors of the Federal Reserve System, should, as part of the EGRPRA process, develop plans for conducting evaluations that would identify opportunities to streamline bodies of regulation. (Recommendation 6) The Comptroller of the Currency should, as part of the EGRPRA process, develop plans for OCC’s regulatory analyses describing how it will conduct and report on quantitative analysis whenever feasible to strengthen the rigor and transparency of the EGRPRA process. (Recommendation 7) The Comptroller of the Currency should, as part of the EGRPRA process, develop plans for conducting evaluations that would identify opportunities to streamline bodies of regulation. (Recommendation 8) The Chair of NCUA should, as part of the EGRPRA process, develop plans for NCUA’s regulatory analyses describing how it will conduct and report on quantitative analysis whenever feasible to strengthen the rigor and transparency of the EGRPRA process. (Recommendation 9) The Chair of NCUA should, as part of the EGRPRA process, develop plans for conducting evaluations that would identify opportunities to streamline bodies of regulation. (Recommendation 10) We provided a draft of this report to CFPB, FDIC, FinCEN, the Federal Reserve, NCUA, and OCC. We received written comments from CFPB, FDIC, the Federal Reserve, NCUA, and OCC, which we have reprinted in appendixes II through VI, respectively. CFPB, FDIC, FinCEN, the Federal Reserve, NCUA, and OCC also provided technical comments, which we incorporated as appropriate. 
In its written comments, CFPB agreed with the recommendation to assess its TRID guidance to determine the extent to which it is understood. CFPB stated it intends to solicit public input on how it can improve its regulatory guidance and implementation support. In addition, CFPB agreed with the recommendation on issuing public information on its plans for reviewing regulations. CFPB committed to developing additional plans with respect to its reviews of key regulations and to publicly releasing such information; in the interim, CFPB stated it intends to solicit public input on how it should approach reviewing regulations. FDIC stated that it appreciated the two recommendations and would work with the Federal Reserve and OCC to find the most appropriate ways to ensure that the three regulators continue to enhance their rulemaking analyses as part of the EGRPRA process. In addition, FDIC stated that as part of the EGRPRA review process, it would continue to monitor the cumulative effects of regulation through, for example, a review of the community and quarterly banking studies and community bank Call Report data. The Federal Reserve agreed with the two recommendations pertaining to the EGRPRA process. Regarding the need to conduct and report on quantitative analysis whenever feasible to strengthen the rigor and increase the transparency of the EGRPRA process, the Federal Reserve plans to coordinate with FDIC and OCC to identify opportunities to conduct quantitative analyses where feasible during future EGRPRA reviews. With respect to the second recommendation, the Federal Reserve agreed that the cumulative impact of regulations on depository institutions is important and plans to coordinate with FDIC and OCC to identify further opportunities to seek comment on bodies of regulations and how they could be streamlined. 
NCUA acknowledged the report’s conclusions that, as part of its voluntary compliance with the EGRPRA process, NCUA should improve its quantitative analysis and develop plans for continued reductions to regulatory burden within the credit union industry. In its letter, NCUA noted it has appointed a regulatory review task force charged with reviewing and developing a four-year plan for revising its regulations, and that the review will consider the benefits of NCUA’s regulations as well as the burden they place on credit unions. In its written comments, OCC stated that it understood the importance of GAO’s recommendations. OCC stated that it will consult and coordinate with the Federal Reserve and FDIC to develop plans for regulatory analysis, including how the regulators should conduct and report on quantitative analysis, and will also work with these regulators to increase the transparency of the EGRPRA process. OCC also stated it will consult with these regulators to develop plans, as part of the EGRPRA process, to conduct evaluations that identify ways to decrease the regulatory burden created by bodies of regulations. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to CFPB, FDIC, FinCEN, the Federal Reserve, NCUA, and OCC. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or evansl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. 
This report examines the burdens that regulatory compliance places on community banks and credit unions and actions that federal regulators have taken to reduce these burdens; specifically: (1) the financial regulations that community banks and credit unions reported viewing as the most burdensome, the characteristics of those regulations that make them burdensome, and the benefits associated with those regulations and (2) federal financial regulators’ efforts to reduce any existing regulatory burden on community banks and credit unions. To identify the regulations that community banks and credit unions viewed as the most burdensome, we first constructed sample frames of financial institutions that met certain criteria for being classified as community banks or community-focused credit unions for the purposes of this review. These sample frames were then used as the basis for drawing our non-probability samples of institutions for purposes of interviews, focus group participation, and document review. Defining a community bank is important because, as we have reported, regulatory compliance may be more burdensome for community banks and credit unions than for larger banks because they are less able to benefit from economies of scale in compliance resources. While there is no single consensus definition of what constitutes a community bank, we reviewed criteria for defining community banks developed by the Federal Deposit Insurance Corporation (FDIC), officials from the Independent Community Bankers Association, and the Office of the Comptroller of the Currency (OCC). Based on this review, we determined that institutions with the following characteristics would be the most appropriate to include in our universe of institutions: (1) relatively small total assets, (2) engagement in traditional lending and deposit-taking activities, (3) limited geographic scope, and (4) noncomplex operating structures. 
To identify banks that met these characteristics, we began with all banks that filed a Consolidated Reports of Condition and Income (Call Report) for the first quarter of 2016 (March 31, 2016) and were not themselves subsidiaries of another bank that filed a Call Report. We then excluded banks using an asset-size threshold to ensure we included only small institutions. Based on interviews with regulators and our review of FDIC’s community bank study, we targeted institutions with around $1 billion in assets as the group that could be relatively representative of the experiences of many community banks in complying with regulations. Upon review of the Call Report data, we found that the banks at the 90th percentile by asset size had about $1.2 billion in assets, and we selected this as an appropriate cutoff for our sample frame. In addition, we excluded institutions with characteristics suggesting they do not engage in typical community banking activities, such as deposit-taking and lending, and those with characteristics suggesting they conduct more specialized operations not typical of community banking, such as credit card banks. In addition, to ensure that we excluded banks whose views of regulatory compliance might be influenced by being part of a large or complex organization, we also excluded banks with foreign offices and banks that are subsidiaries of either foreign banks or of holding companies with $50 billion or more in consolidated assets. Finally, as a practical matter, we excluded banks for which we could not obtain data on one or more of the characteristics listed above. We relied on a similar framework to construct a sample frame for credit unions. We sought to identify credit unions that were relatively small, engaged in traditional lending and deposit-taking activities, and had limited geographic scope. To do this, we began with all insured credit unions that filed a Call Report for the first quarter of 2016 (March 31, 2016). 
We then excluded credit unions using an asset-size threshold of $860 million, which is the 95th percentile of credit unions by asset size, to ensure we included only smaller institutions. The percentile cutoff for credit unions was higher than the one for banks because there are more large banks than large credit unions. We then excluded credit unions that did not engage in activities that are typical of community lending, such as taking deposits, making loans and leases, and providing consumer checking accounts, as well as those credit unions with headquarters outside of the United States. We assessed the reliability of data from FFIEC, FDIC, the Federal Reserve Bank of Chicago, and NCUA by reviewing relevant documentation and electronically testing the data for missing values or obvious errors, and we found the data from these sources to be sufficiently reliable for the purpose of creating sample frames of community banks and credit unions. The sample frames were then used as the basis for drawing our nonprobability samples of institutions for purposes of interviews and focus groups. To identify regulations that community banks and credit unions viewed as among the most burdensome, we conducted structured interviews and focus groups with a sample of a total of 64 community banks and credit unions. To reduce the possibility of bias, we selected the institutions to ensure that banks and credit unions with different asset sizes and from different regions of the country were included. We also included in the sample at least one bank overseen by each of the three primary federal bank regulators (FDIC, the Federal Reserve, and OCC). We interviewed 17 institutions (10 banks and 7 credit unions) about the regulations with which their institutions experienced the most compliance burden. 
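The screening steps above amount to a sequence of simple filters. The sketch below illustrates the bank screens only; the institution records and field names are hypothetical, and the only figures taken from the text are the $1.2 billion bank cutoff, the $860 million credit union cutoff, and the $50 billion holding-company exclusion.

```python
# Illustrative sketch of the sample-frame screens described above.
# Records and field names are hypothetical, not actual Call Report data.

BANK_ASSET_CUTOFF = 1.2e9          # ~90th percentile of banks by assets
CREDIT_UNION_ASSET_CUTOFF = 860e6  # ~95th percentile of credit unions

def in_bank_sample_frame(bank):
    """Keep banks that are small, engage in traditional deposit-taking
    and lending, and are not part of a large or foreign organization."""
    return (bank["assets"] <= BANK_ASSET_CUTOFF
            and bank["takes_deposits"]
            and bank["makes_loans"]
            and not bank["has_foreign_offices"]
            and bank["holding_company_assets"] < 50e9)

banks = [
    {"name": "A", "assets": 400e6, "takes_deposits": True, "makes_loans": True,
     "has_foreign_offices": False, "holding_company_assets": 1e9},
    {"name": "B", "assets": 5e9, "takes_deposits": True, "makes_loans": True,
     "has_foreign_offices": False, "holding_company_assets": 6e9},   # too large
    {"name": "C", "assets": 300e6, "takes_deposits": True, "makes_loans": True,
     "has_foreign_offices": True, "holding_company_assets": 80e9},   # foreign/large parent
]

frame = [b["name"] for b in banks if in_bank_sample_frame(b)]
print(frame)  # ['A']
```

The credit union screen would work the same way, substituting the $860 million cutoff and the community-lending activity fields described in the text.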
On the basis of the results of these interviews, we determined that considerable consensus existed among these institutions as to which regulations were seen as most burdensome, including those relating to mortgage fee and term disclosures to consumers, mortgage borrower and loan characteristics reporting, and anti-money laundering activities. As a result, we decided to conduct focus groups with institutions to identify the characteristics that made the regulations identified in our interviews burdensome. To identify the burdensome characteristics of the regulations identified in our preliminary interviews, we selected institutions to participate in three focus groups of community banks and three focus groups of credit unions. For the first focus group of community banks, using the sample frame of community banks we developed, we randomly selected 20 banks from among 647 banks with between $500 million and $1 billion in assets located in nine U.S. census geographical areas, and contacted them asking for their participation. Seven of the 20 banks agreed to participate in the first focus group. However, mortgages represented a low percentage of the assets of two participants in the first focus group, so we revised our selection criteria because two of the regulations identified as burdensome were related to mortgages. For the remaining two focus groups with community banks, we randomly selected institutions with more than $45 million and no more than $1.2 billion in assets, to ensure that they would be required to comply with mortgage characteristics reporting, and with at least a 10 percent mortgage-to-asset ratio, to better ensure that they would be sufficiently experienced with mortgage regulations. After identifying the large percentage of FDIC-regulated banks among the first 20 banks we contacted, we decided to prioritize contact with banks regulated by OCC and the Federal Reserve for the institutions on our list. 
When banks declined, or when we determined an institution had merged or been acquired, we selected a new institution from that state and gave preference to institutions regulated by OCC and the Federal Reserve. The three focus groups totaled 23 community banks with a range of assets. We used a similar selection process for three focus groups of credit unions consisting of 23 credit unions. We selected credit unions with at least $45 million in assets, so that they would be required to comply with the mortgage regulations, and with at least a 10 percent mortgage-to-asset ratio. During each of the focus groups, we asked the representatives from participating institutions what characteristics of the relevant regulations made them burdensome to comply with. We also polled them about the extent to which they had to take various actions to comply with regulations, including hiring or expanding staff resources, investing in additional information technology resources, or conducting staff training. During the focus groups, we also confirmed with the participants that the three sets of regulations (on mortgage fee and other disclosures to consumers, reporting of mortgage borrower and loan characteristics, and anti-money laundering activities) were generally the ones they found most burdensome. To identify in more detail the steps a community bank or credit union may take to comply with the regulations identified as among the most burdensome, we also conducted an in-depth on-site interview with one community bank. We selected this institution by limiting the community bank sample to only those banks in the middle 80 percent of the distribution in terms of assets, mortgage lending, small business lending, and lending in general that were no more than 70 miles from Washington, D.C. We limited the sample in this way to ensure that the institution was not an outlier in terms of activities or size, and to limit the travel resources needed to conduct the site visit. 
To identify the requirements of the regulations identified as among the most burdensome, we reviewed the Home Mortgage Disclosure Act (HMDA) and its implementing regulation, Regulation C; Bank Secrecy Act and anti-money laundering (BSA/AML) regulations, including those deriving from the Currency and Foreign Transactions Reporting Act, commonly known as the Bank Secrecy Act (BSA), and the 2001 USA PATRIOT Act; the Integrated Mortgage Disclosure Rule under the Real Estate Settlement Procedures Act (RESPA) with the implementing Regulation X; and the Truth-in-Lending Act (TILA) with the implementing Regulation Z. We reviewed the Consumer Financial Protection Bureau’s (CFPB) small entity guidance and supporting materials on the TILA-RESPA Integrated Disclosure (TRID) regulation and HMDA to clarify the specific requirements of each rule and to analyze the information included in the CFPB guidance. We interviewed staff from each of the federal regulators responsible for implementing the regulations, as well as from the federal regulators responsible for examining community banks and credit unions. We also interviewed associations representing banks and credit unions, as well as associations representing consumers, to understand the benefits of these regulations. These groups were selected based on our professional judgment about their knowledge of relevant banking regulations. To identify the potential benefits of the regulations that community banks and credit unions considered burdensome, we interviewed representatives from four community groups to document their perspectives on the benefits provided by the identified regulations. 
To determine whether the bank regulators had required banks to comply with certain provisions from which the institutions might be exempt, we identified eight provisions of the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 from which community banks and credit unions should be exempt and reviewed a small group of the most recent examinations to identify instances in which a regulator may not have applied an exemption for which a bank was eligible. We reviewed 20 safety and soundness and consumer compliance examination reports of community banks and eight safety and soundness examination reports of credit unions. The bank examination reports we reviewed were for the first 20 community banks we contacted requesting participation in the first focus group. The bank examination reports included examinations from all three bank regulators (FDIC, Federal Reserve, and OCC). The NCUA examination reports we reviewed were for the eight credit unions that participated in the second focus group of credit unions. Because of the limited number of examinations we reviewed, we cannot generalize about whether regulators extended the exemptions to all qualifying institutions. To assess the federal financial regulators’ efforts to reduce the existing regulatory burden on community banks and credit unions, we identified the mechanisms the regulators used to identify burdensome regulations and actions to reduce potential burden. We reviewed laws and congressional and agency documentation. More specifically, we reviewed the Economic Growth and Regulatory Paperwork Reduction Act of 1996 (EGRPRA), which requires the Federal Reserve, FDIC, and OCC to review all their regulations every 10 years and identify areas of the regulations that are outdated, unnecessary, or unduly burdensome, and we reviewed the 1995 Senate Banking Committee report, which described the intent of the legislation. 
We reviewed the Federal Register notices that the bank regulators and NCUA published requesting comments on their regulations. We also reviewed over 200 comment letters that the regulators had received through the EGRPRA process from community banks, credit unions, their trade associations, and others, as well as the transcripts of all six public forums the regulators held as part of the 2017 EGRPRA regulatory review efforts. We analyzed the extent to which the depository institution regulators addressed the issues raised in comments received for the review. In assessing the 2017 and 2007 EGRPRA reports sent to Congress, we reviewed the significant issues identified by the regulators and determined the extent to which the regulators proposed or took actions in response to the comments relating to burden on small entities. We compared the requirements of Executive Orders 12866, 13563, and 13610 and related Office of Management and Budget guidance with the actions taken by the regulators in implementing their 10-year regulatory retrospective review. The executive orders include requirements on how executive branch agencies should conduct retrospective reviews of their regulations. For both objectives, we interviewed representatives from CFPB, FDIC, the Federal Reserve, the Financial Crimes Enforcement Network, NCUA, and OCC to identify any steps that regulators took to reduce the compliance burden associated with each of the identified regulations and to understand how they conduct retrospective reviews. We also interviewed representatives of the Small Business Administration’s Office of Advocacy, which reviews and comments on the burdens of regulations affecting small businesses, including community banks. We conducted this performance audit from March 2016 to February 2018 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact name above, Cody J. Goebel (Assistant Director); Nancy Eibeck (Analyst in Charge); Bethany Benitez; Kathleen Boggs; Jeremy A. Conley; Pamela R. Davidson; Courtney L. LaFountain; William V. Lamping; Barbara M. Roesmann; and Jena Y. Sinkfield made key contributions to this report.
|
In recent decades, many new regulations intended to strengthen financial soundness, improve consumer protections, and aid anti-money laundering efforts were implemented for financial institutions. Smaller community banks and credit unions must comply with some of the regulations, but compliance can be more challenging and costly for these institutions. GAO examined (1) the regulations community banks and credit unions viewed as most burdensome and why, and (2) efforts by depository institution regulators to reduce any regulatory burden. GAO analyzed regulations and interviewed more than 60 community banks and credit unions (selected based on asset size and financial activities), regulators, and industry associations and consumer groups. GAO also analyzed comment letters and public-forum transcripts on regulatory burden, as well as the reports that regulators prepared in response to the comments. Interviews and focus groups GAO conducted with representatives of over 60 community banks and credit unions indicated that regulations for reporting mortgage characteristics, reviewing transactions for potentially illicit activity, and disclosing mortgage terms and costs to consumers were the most burdensome. Institution representatives said these regulations were time-consuming and costly to comply with, in part because the requirements were complex, required individual reports that had to be reviewed for accuracy, or mandated actions within specific timeframes. However, regulators and others noted that the regulations were essential to preventing lending discrimination and use of the banking system for illicit activity, and that they were acting to reduce compliance burdens. Institution representatives also said that the new mortgage disclosure regulations increased compliance costs, added significant time to loan closings, and resulted in institutions absorbing costs when others, such as appraisers and inspectors, changed disclosed fees. 
The Consumer Financial Protection Bureau (CFPB) issued guidance and conducted other outreach to educate institutions after issuing these regulations in 2013. But GAO found that some compliance burdens arose from misunderstanding the disclosure regulations—which in turn may have led institutions to take actions not actually required. Assessing the effectiveness of the guidance for the disclosure regulations could help mitigate the misunderstandings and thus also reduce compliance burdens. Regulators of community banks and credit unions—the Board of Governors of the Federal Reserve, the Federal Deposit Insurance Corporation, the Office of the Comptroller of the Currency, and the National Credit Union Administration—conduct decennial reviews to obtain industry comments on regulatory burden. But the reviews, conducted under the Economic Growth and Regulatory Paperwork Reduction Act of 1996 (EGRPRA), had the following limitations: (1) CFPB and the consumer financial regulations for which it is responsible were not included; (2) unlike executive branch agencies, the depository institution regulators are not required to analyze and report quantitative-based rationales for their responses to comments; and (3) the regulators do not assess the cumulative burden of the regulations they administer. CFPB has formed an internal group that will be tasked with reviewing regulations it administers, but the agency has not publicly announced the scope of regulations included, the timing and frequency of the reviews, or the extent to which they will be coordinated with the other federal banking and credit union regulators as part of their periodic EGRPRA reviews. Congressional intent in mandating that these regulators review their regulations was that the cumulative effect of all federal financial regulations be considered. In addition, sound practices required of other federal agencies direct them to analyze and report their assessments when reviewing regulations. 
Documenting in plans how the depository institution regulators would address these EGRPRA limitations would better ensure that all regulations relevant to community banks and credit unions were reviewed, likely improve the analyses the regulators perform, and potentially result in additional burden reduction. GAO makes a total of 10 recommendations to CFPB and the depository institution regulators. CFPB should assess the effectiveness of its guidance on mortgage disclosure regulations, publicly issue its plans for the scope and timing of its regulation reviews, and coordinate these reviews with the other regulators' review process. As part of their burden reviews, the depository institution regulators should develop plans to report quantitative rationales for their actions and to address the cumulative burden of regulations. In written comments, CFPB and the four depository institution regulators generally agreed with the recommendations.
|
EXIM is an independent executive branch agency and a wholly owned U.S. government corporation. EXIM is the official export credit agency (ECA) of the United States, and its mission is to support the export of U.S. goods and services, thereby supporting U.S. jobs. EXIM’s charter states that it should not compete with the private sector. Rather, EXIM’s role is to assume the credit and country risks that the private sector is unable or unwilling to accept, while still maintaining a reasonable assurance of repayment. EXIM must operate within the parameters and limits authorized by law, including, for example, statutory mandates that it support small business and promote sub-Saharan African and environmentally beneficial exports. In addition, EXIM is authorized to provide financing on a competitive basis with other ECAs and must submit annual reports to Congress on its actions. EXIM operates under the leadership of a president who also serves as Chairman of EXIM’s Board of Directors. The board is structured to include five members. All positions are appointed for 4-year terms by the President of the United States with the advice and consent of the Senate. The board is responsible for adopting and amending bylaws for the proper management and functioning of EXIM. Furthermore, the board approves EXIM’s financing either directly or through delegated authority. On May 8, 2019, the Senate confirmed a new president and two other board members, ending the lack of a quorum needed to approve transactions over $10 million that had existed since July 20, 2015. EXIM’s organizational structure includes various offices and divisions operating under its president. 
The Office of Board Authorized Finance is subdivided into business divisions that are responsible for underwriting related to loans and loan guarantees, including processing applications, evaluating the compliance of transactions with credit and other policies, performing financial analyses, negotiating financing terms, coordinating and synthesizing input to credit recommendations from other divisions, and presenting credit recommendations for approvals. EXIM facilitates support for U.S. exports through three major products: (1) loans; (2) loan guarantees, which include working capital guarantees; and (3) export credit insurance. All EXIM obligations carry the full faith and credit of the U.S. government. Based on its mission to support U.S. employment, EXIM currently requires a certain amount of U.S. content for an export contract to receive EXIM financing. EXIM’s loans generally carry fixed-interest rate terms under the Arrangement on Officially Supported Export Credits negotiated among OECD members. EXIM’s loan guarantees cover loans disbursed by private lenders by committing to pay the lenders if the borrower defaults. Both loans and loan guarantees may be classified as short-, medium-, or long-term. From fiscal year 2008 to fiscal year 2017, EXIM was “self-financing” for budgetary purposes—financing its operations from receipts collected from its customers—and operating within the parameters and limits authorized by Congress. However, according to EXIM, because of the lack of quorum on the Board of Directors, in fiscal year 2018 it was unable to approve transactions over $10 million and, as a result, was not able to generate sufficient cash inflows to fully self-finance program and administrative costs. EXIM reported that when it is back to being fully operational, it plans to regain full self-financing status. See figure 1 for additional details on EXIM’s loans and loan guarantees. 
Short-term loans and loan guarantees: Short-term financing consists of all transactions with repayment terms of less than 1 year, while Working Capital Guarantee program short-term financing may be approved for a single loan or a revolving line of credit that can be renewed for up to 3 years. In general, if the financed eligible product contains at least 50 percent U.S. content, then the entire transaction value is eligible for a working capital guarantee. Generally, for working capital guarantees, EXIM guarantees 90 percent of the loan’s principal and interest if the borrower defaults. Therefore, the lender retains the risk of the remaining 10 percent. EXIM’s payment of working capital claims is conditional upon transaction participants’ compliance with EXIM requirements such as underwriting policies, deadlines for filing claims, payment of premiums and fees, and submission of proper documentation. EXIM has reported that over 80 percent of its working capital guarantee transactions are approved by lenders with delegated authority, which means that commercial lenders approve the guaranteed loans in accordance with agreed-upon underwriting requirements without first obtaining EXIM approval. If a lender does not have delegated authority, EXIM performs its own underwriting procedures and approves the guaranteed loans. Medium- and long-term loans and loan guarantees: For medium- and long-term loan and loan guarantee transactions, EXIM provides up to 85 percent financing with the remaining 15 percent paid by the borrower or financed separately. The financing could be less than 85 percent depending on the U.S. content. EXIM’s medium- and long-term loan guarantees generally cover 100 percent of the financed amount if the borrower defaults. EXIM’s guarantee to the lender is transferable and unconditional, meaning that EXIM must pay submitted claims regardless of the cause of default. 
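For illustration, the coverage shares described above reduce to simple arithmetic. The following Python sketch is ours, not an EXIM tool, and the loan amounts are hypothetical; it only restates the 90 percent working capital guarantee coverage and the up-to-85 percent medium- and long-term financing share.

```python
# Illustrative sketch (not an EXIM tool): splits hypothetical transaction
# amounts according to the coverage levels described in the text.

def working_capital_split(loan_amount, coverage=0.90):
    """EXIM generally guarantees 90 percent of principal and interest on a
    working capital loan; the lender retains the remaining 10 percent."""
    guaranteed = loan_amount * coverage
    retained = loan_amount - guaranteed
    return guaranteed, retained

def medium_long_term_split(contract_value, financed_share=0.85):
    """For medium- and long-term transactions, EXIM provides up to 85
    percent financing; the borrower pays or separately finances the rest."""
    financed = contract_value * financed_share
    borrower_portion = contract_value - financed
    return financed, borrower_portion

# Hypothetical $10 million working capital loan
g, r = working_capital_split(10_000_000)
print(f"guaranteed: ${g:,.0f}, lender retains: ${r:,.0f}")

# Hypothetical $20 million export contract
f, b = medium_long_term_split(20_000_000)
print(f"EXIM financing: ${f:,.0f}, borrower portion: ${b:,.0f}")
```

For a hypothetical $10 million working capital loan, the lender would retain $1 million of risk, consistent with the 90/10 split described above.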
EXIM generally underwrites medium- and long-term loans and loan guarantees for $10 million and less, and EXIM officials with delegated authority approve the transactions. Further, EXIM has provided certain lenders delegated authority to underwrite and approve these guarantees. EXIM underwrites long-term loans and loan guarantees greater than $10 million, and its Board of Directors approves the transactions. As noted earlier, EXIM’s authority to approve transactions lapsed from July 1, 2015, to December 4, 2015. Further, from July 20, 2015 to May 8, 2019, EXIM’s Board of Directors lacked a quorum, and, as a result, EXIM was unable to approve transactions greater than $10 million. Consequently, EXIM’s annual authorizations for loans, loan guarantees, and export credit insurance decreased from about $20 billion in 2014 to about $3 billion in 2018, a decrease of about 83 percent. See figure 2 for EXIM’s total authorizations by type and length of term for 2014 through 2018. EXIM’s Manual describes EXIM’s underwriting policies and procedures for each of its products offered, including short-, medium-, and long-term loans and loan guarantees. The Manual describes the responsibilities of EXIM’s divisions (e.g., Transportation, Structured and Project Finance, or Working Capital Finance) involved in the underwriting process. EXIM’s Office of Board Authorized Finance is in the process of streamlining the Manual, which is over 1,400 pages. A goal of this process is to separate procedures from policies, thus allowing for policies and procedures to be continuously reviewed. An EXIM official told us that these steps should improve the agency’s efficiency, transparency, and accountability. The underwriting sections of the Manual are tentatively scheduled for review in 2019. EXIM loan officers perform the underwriting for loans and long-term loan guarantees. 
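As a quick arithmetic check on the decline in authorizations cited earlier in this section: the rounded figures of about $20 billion and $3 billion give a decrease of about 85 percent, so the report's "about 83 percent" evidently reflects the unrounded totals. A one-line illustration:

```python
# Illustrative percentage-decrease check; inputs are the rounded
# authorization figures cited in the text (billions of dollars).
def pct_decrease(start, end):
    return (start - end) / start * 100

print(round(pct_decrease(20, 3)))  # 85 with rounded inputs; ~83 with unrounded totals
```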
The underwriting of medium-term or working capital loans guaranteed by EXIM is performed by either EXIM loan officers or qualified lenders with delegated authority, which allows the lender to authorize a loan that EXIM guarantees in accordance with agreed-upon underwriting requirements without first obtaining EXIM approval. When the underwriting and credit decision is delegated to approved lenders, EXIM does not perform the underwriting procedures. EXIM’s underwriting process calls for thorough credit assessments by subject matter experts and loan officers. These assessments evaluate key transactional risks, such as the borrower’s industry, competitive position, operating performance, liquidity position, leverage, and ability to service debt obligations. Credit enhancements (often in the form of collateral) are frequently included in the structure of long-term financing, both to decrease the risk of a borrower default and to increase the recovery in the event of one. A risk rating is assigned to the transaction based on this evaluation, which, in turn, determines the transaction fee that a borrower pays and assists in establishing the level of loss reserves EXIM must set aside. The credit assessments undergo multiple levels of internal review. All EXIM transactions carry some risk; however, transactions approved through delegated authority lenders potentially carry a higher level of inherent risk because third-party financial institutions make the decisions. To mitigate the risk, EXIM reviews medium-term transactions approved by delegated authority lenders before the transactions are executed to assure compliance with EXIM’s delegated authority lending policies. For working capital guarantee delegated authority, EXIM conducts periodic examinations of the lenders, reviewing ongoing transactions and lender compliance with the delegated authority program. 
The examinations are intended to identify lenders that are not satisfactorily managing the requirements of the delegated authority program. To mitigate the risk for its internal credit process, EXIM developed and documented underwriting processing steps from the time the application is received through the approval of the appropriate credit structure. These steps serve to (1) establish a framework for sound credit decisions, (2) communicate to EXIM employees the requirements governing the extension of credit, and (3) encourage documentation and the consistent application of EXIM’s credit policies and procedures. According to EXIM officials, the underwriting process also serves as EXIM’s primary method for preventing fraud because of the due diligence performed on the proposed transaction. Figure 3 summarizes EXIM’s underwriting process. Application intake. When an application is initially received, it is screened for basic completeness, follow-up on incomplete or unacceptable applications is performed, and it is assigned to a processing division. Application screening. After an application is determined to be complete, it is assigned to the applicable EXIM division that oversees the applicable type of project. For example, an application for the purchase of an aircraft would be assigned to the Transportation Division. Once assigned, a loan officer in that division is to assess the eligibility of the transaction. To ensure compliance with laws and regulations, the loan officer is to obtain and assess various certifications from transaction participants. Loan officers are also required to submit the corporate and individual names and addresses of lenders, borrowers, guarantors, and other transaction participants to the EXIM Library. 
Library staff are then to conduct a Character, Reputational, and Transaction Integrity (CRTI) review—a procedure designed to provide a level of due diligence over various risks and to help prevent fraud by checking loan participants’ information against 28 databases. Risk assessment and due diligence. Once the transaction is considered minimally eligible for EXIM support, the loan officer is required to perform a series of due diligence activities to determine (1) whether the transaction provides a reasonable assurance of repayment, (2) any potential material issues regarding the transaction or the participants that would preclude EXIM support, and (3) the appropriate risk level and pricing for the transaction. As part of the financial evaluation of the transaction, the loan officer is required to obtain and analyze the borrower’s financial statements, credit reports or rating agency reports, financial projections, and other relevant information. As applicable, the loan officer is required to obtain input from other EXIM staff, such as attorneys or engineers, to conclude on the legal, technical, economic, or environmental risks of the transaction. Based on this due diligence, the loan officer is to assess the transaction for risk and assign an overall risk rating. This rating is used to calculate the exposure fee EXIM will charge the borrower for guaranteeing the transaction. Greater perceived risks result in higher fees. Credit structure. After the risk assessment and due diligence is performed, the loan officer determines the financing terms and conditions to be recommended. The loan officer is generally required to structure the transaction to include a security interest (collateral) in the financed goods or other assets of the borrower. If it is determined that collateral is not necessary, the loan officer is to document the explanation and mitigating factors (e.g., EXIM support is small relative to a borrower’s size). 
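The relationship between the assigned risk rating and the exposure fee can be sketched as follows. The rating scale and fee levels below are purely hypothetical (the report does not reproduce EXIM's actual fee schedule); the sketch illustrates only the stated principle that greater perceived risk results in a higher fee.

```python
# Purely hypothetical rating-to-fee mapping, in basis points; EXIM's
# actual exposure fee schedule is not reproduced in this report. The
# point illustrated: a higher risk rating yields a higher fee.
HYPOTHETICAL_FEE_BPS = {1: 50, 2: 100, 3: 200, 4: 400, 5: 800}

def exposure_fee(financed_amount, risk_rating):
    """Compute an illustrative exposure fee from the financed amount
    and the assigned risk rating (1 = lowest risk, 5 = highest)."""
    bps = HYPOTHETICAL_FEE_BPS[risk_rating]
    return financed_amount * bps / 10_000

print(exposure_fee(5_000_000, 2))  # lower perceived risk, smaller fee
print(exposure_fee(5_000_000, 5))  # higher perceived risk, larger fee
```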
For all aircraft transactions, the loan officer is required to perform an assessment and loan-to-value analysis of the collateral, and the financing terms must include requirements for the borrower to maintain ownership and condition of the collateral. Credit decision. The loan officer is to document the due diligence in a credit or board memo, which includes the loan officer’s recommendation to approve or decline the transaction. These memos and applicable supporting documentation are then to be forwarded to the approving party. The credit memo applicable to working capital or medium-term transactions is to be provided to EXIM officials with individual delegated authority to approve transactions of $10 million and less. Board memos for long-term transactions or transactions greater than $10 million are to be provided to the EXIM Board of Directors for approval. From July 2015 to May 2019, EXIM lacked a quorum on its Board of Directors, and as a result, EXIM was unable to approve new transactions greater than $10 million. Government-wide guidance for federal agencies to follow for the management and operation of federal credit programs, such as loan and loan guarantee programs, include the following: OMB Circular A-129, Policies for Federal Credit Programs and Non- Tax Receivables, revised in January 2013, describes policies and procedures for designing and managing federal credit programs. The guidance addresses various standards for applicant screening, loan documentation, collateral requirements, determining and monitoring lender and servicer eligibility, and lender and borrower stake in full repayment. In addition, it details risk sharing practices that agencies should follow, such as ensuring that lenders and borrowers who participate in federal credit programs have a substantial stake in full repayment in accordance with the loan contract. 
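A minimal sketch of the approval routing rule described above, assuming only the thresholds stated in the text: transactions of $10 million and less may be approved by EXIM officials with individual delegated authority, while long-term transactions or transactions greater than $10 million require the Board of Directors, which can act only with a quorum.

```python
# Sketch of the credit-decision routing described in the text (ours, not
# an EXIM system): who may approve a transaction, given its size, term,
# and whether the Board has a quorum.
def approving_party(amount, long_term=False, board_has_quorum=True):
    if amount > 10_000_000 or long_term:
        if not board_has_quorum:
            return "cannot approve (no quorum)"
        return "Board of Directors"
    return "official with delegated authority"

print(approving_party(8_000_000))                           # within individual delegated authority
print(approving_party(25_000_000))                          # requires Board approval
print(approving_party(25_000_000, board_has_quorum=False))  # blocked, as from July 2015 to May 2019
```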
Treasury’s Bureau of the Fiscal Service’s Managing Federal Receivables provides federal agencies with an overview of standards, guidance, and procedures for successful management of credit activities, including screening applicants for creditworthiness and financial responsibility, and managing, processing, evaluating and documenting loan applications and awards for loan assistance. Furthermore, it details how federal agencies should manage lenders and servicers that participate in federally insured guaranteed loan programs. We found that EXIM’s process for updating its underwriting policies and procedures was properly designed and implemented. Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives and respond to risks. Management’s design of internal control establishes and communicates the who, what, when, where, and why of internal control execution to personnel. Management should clearly document internal control in a manner that allows the documentation to be readily available and properly managed and maintained. Further, management should also implement control activities through policies and periodically review policies, procedures, and related control activities for continued relevance and effectiveness in achieving the entity’s objectives or addressing related risks. Underwriting policies and procedures are documented in EXIM’s Manual, which consists of 26 chapters, covering various topics by product (e.g., long-term loans and loan guarantees) or process (e.g., application intake or credit structure). We found that the Manual provides EXIM’s divisions involved in the underwriting process with direction and guidance for making credit decisions and is to be updated at least annually, except for material changes, which are required to be incorporated as soon as possible. 
The Credit Policy Division (Credit Policy) maintains and manages the process for updating the Manual and relies on EXIM’s divisions for additions, updates, and revisions to it. Credit Policy maintains an assignment list of the primary officer, the primary reviewer, and the Office of General Counsel (OGC) reviewer, who are responsible for each chapter in the Manual. Each year, the process calls for Credit Policy to send an email to the primary officer and two reviewers assigned to each chapter. This communication requests that the officer review the assigned chapters for any needed changes. After the assigned officer has reviewed the chapter, if there are no contemplated changes, the primary officer assigned to the chapter is required to notify Credit Policy of this determination by email with the concurrence of the respective primary and OGC reviewers. If changes are needed, the assigned officer is required to provide the updated chapter to Credit Policy by email with the concurrence of the primary and OGC reviewers. According to EXIM’s process, when material changes to the Manual are needed, the necessary revisions do not wait for the annual update. Instead, the responsible division is required to incorporate such changes into the applicable chapter(s) of the Manual and submit them to Credit Policy as soon as possible. EXIM officials stated that examples of material changes that would be addressed immediately include recommendations from oversight bodies, such as the EXIM OIG or GAO, and changes resulting from legislative actions, such as updates to EXIM’s charter or changes in compliance procedures related to sanctions. The underwriting policies and procedures in EXIM’s Manual for its loan and loan guarantee transactions were mostly consistent with OMB and Treasury guidance for managing federal credit programs. 
We evaluated these policies and procedures for (1) applicant screening, (2) loan documentation, (3) collateral requirements, (4) lender and servicer eligibility, and (5) risk sharing practices. As shown in table 1, EXIM’s underwriting policies and procedures for the loan and loan guarantee programs were consistent with 12 of 15 applicable standards for managing federal credit programs and were partially consistent with three. Three other standards were not applicable to EXIM’s underwriting. Applicant screening refers to determining an applicant’s eligibility and creditworthiness for a loan or loan guarantee. Federal guidance for applicant screening includes specific standards related to the applicant’s (1) program eligibility, (2) delinquency on federal debt, (3) creditworthiness, (4) delinquent child support, and (5) taxpayer identification number (TIN). As shown in table 2, EXIM’s underwriting policies and procedures were consistent with federal guidance for applicant screening. For all loan and loan guarantee applications, EXIM requires applicants to provide identifying information, such as name, address, phone number, and Dun & Bradstreet Data Universal Numbering System (DUNS) number. Applicants are also required to provide relevant financial information, such as income, assets, cash flows, liabilities, financial statements, and credit reports. EXIM’s underwriting process requires screening of applicants for eligibility, which is partly completed through the CRTI review. As part of the CRTI review, EXIM screens the corporate and individual names and addresses of lenders, borrowers, guarantors, and other transaction participants against 28 databases that include various U.S. government and international debarment and sanctions lists for red flags. 
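In spirit, the CRTI screening step described above resembles list-based name matching. The sketch below is hypothetical (the list names, entries, and matching logic are ours); the actual review checks participants' names and addresses against 28 government and international databases and triggers further due diligence on any match.

```python
# Hypothetical sketch of list-based participant screening in the spirit
# of the CRTI review. The list names and entries below are invented for
# illustration; EXIM's actual review uses 28 databases.
SCREENING_LISTS = {
    "hypothetical_debarment_list": {"ACME TRADING LLC"},
    "hypothetical_sanctions_list": {"EXAMPLE EXPORTS SA"},
}

def screen_participant(name):
    """Return the lists on which the participant's name appears; a
    non-empty result is a red flag requiring follow-up."""
    normalized = name.strip().upper()
    return [lst for lst, entries in SCREENING_LISTS.items() if normalized in entries]

print(screen_participant("Acme Trading LLC"))    # match -> needs follow-up due diligence
print(screen_participant("Clean Exporter Inc"))  # no red flags
```

A real screening system would also handle near matches (for example, spelling variants), which is why, as the text notes, analysts must determine the legitimacy of each match.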
If a match is identified, EXIM’s Credit Review and Compliance Division works with the loan officers to determine the legitimacy of the match and, as necessary, works with OGC to determine what additional due diligence measures may be required and whether to continue the underwriting process. In addition to the CRTI review process, loan officers must obtain and use credit reports to assess creditworthiness and identify whether transaction applicants are delinquent on federal tax or nontax debts, including judgment liens against property for a debt to the federal government, and are therefore not eligible to receive federal loans and loan guarantees. EXIM’s policies and procedures contain instructions to suspend application processing and contact OGC for further guidance upon finding federal debt delinquencies or other insufficient or negative information on applicant credit reports. Loan officers must document any issues encountered on applicant credit reports and explain why a transaction is creditworthy if they recommend it for approval. Lastly, OMB Circular A-129 requires agencies to obtain the TIN of all persons doing business with the agency. The working capital guarantee application form requests the TIN for transaction applicants, which an EXIM official stated are used to obtain applicant credit reports. EXIM does not require the TIN for medium- and long-term applications. EXIM officials stated that applicants for medium- and long-term transactions are likely foreign entities and thus would not have federal TINs. However, all applications request the DUNS number which EXIM must use to perform the credit review and CRTI due diligence procedures. Federal guidance calls for the maintenance of loan files containing key information used in loan underwriting. As shown in table 3, EXIM’s underwriting policies and procedures were consistent with the federal guidance related to loan documentation. 
EXIM’s underwriting policies and procedures state that loan officers must maintain a loan file on the transaction applicant and other participants, which includes the completed application, credit bureau reports, credit analysis, certifications, verifications and other legal documents, and loan or service agreements with the debtor, as appropriate. EXIM’s process calls for obtaining debt collection certification statements for the working capital guarantee applications because the applicants are domestic entities. While the debt collection certification statement is not applicable for medium- and long-term applications, because the applicants are foreign entities, EXIM’s executed credit agreements and promissory notes define the terms of the transactions, including defaults and the remedies EXIM may take, such as declaring default and accelerating debt repayment, and pursuing restructuring or recovery actions, including possible litigation. Collateral refers to the assets used to secure a loan. For many types of loans, the government can reduce its risk of default and potential losses through well-managed collateral requirements. However, several of the collateral requirements contained in federal guidance relate specifically to real property. Since EXIM’s mission is to support U.S. exports, it does not finance real property and, accordingly, does not accept real property as the primary collateral. As a result, three of the four federal guidance standards were not applicable to EXIM’s underwriting. As shown in table 4, EXIM’s underwriting policies and procedures were consistent with the applicable federal guidance related to collateral. EXIM’s underwriting policies and procedures state that it should have a security interest in the financed export items. The loan officer and a transaction engineer will evaluate the export sales contracts, and this evaluation will be used as the assessment of collateral for the transaction. 
If using the financed export items as collateral is not possible, the loan officer should secure the EXIM financing with other assets owned by the primary source of repayment that are at least of comparable value to the financed items. Collateral that could be considered includes fixed assets, inventory, accounts receivable, or a third-party guarantee. While OMB Circular A-129 requires a real property appraisal and contains specific criteria defining acceptable appraisals, the standard was not applicable to EXIM’s loans and loan guarantees. According to EXIM officials, EXIM rarely takes real property as collateral because the primary collateral for EXIM’s transactions is the asset financed, and EXIM does not finance real property. Further, EXIM officials stated that the U.S. appraisal standards cannot be applied to foreign real property. However, if real property is taken as collateral, it would be as secondary or additional collateral. When EXIM accepts real property as additional collateral for a transaction, EXIM officials stated that an independent third-party appraisal in accordance with regional practices is obtained. Federal guidance calls for policies and procedures related to lender and servicer eligibility, monitoring, and recertification. As shown in table 5, EXIM’s policies and procedures were consistent with three and partially consistent with two of five federal standards for lender and servicer eligibility. OMB Circular A-129 calls for agencies to establish specific procedures to continuously review lender and servicer eligibility and decertify lenders and servicers that fail to meet the agency’s standards for continued participation. EXIM’s policies and procedures related to requirements for working capital guarantee delegated authority lenders were consistent with federal guidance. 
However, for medium-term delegated authority lenders, EXIM has not established documented policies and procedures for (1) determining their eligibility for continued participation in the program and (2) decertifying or taking other appropriate actions for those that do not meet compliance or eligibility standards. EXIM officials told us that currently EXIM has only three medium-term delegated authority lenders: two were renewed for continued participation and one became inactive in 2018. Further, according to EXIM officials, since 2009 only 2.3 percent of all medium-term guarantee authorizations have been delegated authority authorizations ($71 million out of $3.1 billion). EXIM reviews the performance of its primary medium-term lenders quarterly. In these reviews, EXIM officials evaluate the lenders’ portfolio performance, underwriting capabilities, and a set of qualitative factors. However, without documented policies and procedures for determining the eligibility of the medium-term delegated authority lenders’ continued participation in the program and for decertifying such lenders, as appropriate, EXIM may allow lenders who are not qualified to underwrite transactions, thus increasing the risk for improper underwriting and defaults. EXIM officials stated that they are in the process of updating and enhancing the Manual and will include procedures for medium-term delegated authority lender reviews and the consequences of an unfavorable assessment. OMB Circular A-129 calls for lenders and borrowers who participate in federal credit programs to have a substantial stake in full repayment but also states that the level of guarantee should be no more than necessary to achieve program purposes. 
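The share cited above can be verified with simple arithmetic:

```python
# Delegated authority medium-term authorizations ($71 million) as a share
# of all medium-term guarantee authorizations since 2009 ($3.1 billion).
share = 71_000_000 / 3_100_000_000 * 100
print(round(share, 1))  # → 2.3
```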
As shown in table 6, EXIM’s underwriting policies and procedures were generally consistent with the federal guidance related to certain risk sharing practices for lenders and borrowers to have a stake in full repayment and were partially consistent with the federal guidance related to periodic program reviews. Although OMB Circular A-129 calls for lenders who extend credit to have substantial stake in full repayment and bear at least 20 percent of any loss from a default, it also states that the level of guarantee should be no more than necessary to achieve program purposes. However, consistent with its charter, EXIM is authorized to provide terms that are competitive with those of other ECAs, such as up to 100 percent loan guarantee coverage. EXIM does not require lenders to bear 20 percent of the risk of default. For working capital guarantees, EXIM offers 90 percent guarantee coverage and lenders retain 10 percent risk. For medium- and long-term loan guarantees, EXIM provides up to 85 percent financing with the remaining 15 percent paid by the borrower or financed separately. EXIM financing could be less than 85 percent depending on the U.S. content. According to EXIM, guaranteeing 100 percent of the amount it finances permits it to explore capital markets and is more desirable to banks for large and long-term projects. As a result, the lender may not retain any risk of default in the transaction. According to an OECD official, guaranteeing 100 percent of the financed amount is consistent with other ECAs. For example, the ECAs of Canada, Germany, and the United Kingdom also provide guarantees up to 100 percent of the financed amount on certain products. OMB Circular A-129 states that borrowers should have equity interest in assets financed with credit assistance and substantial capital or equity at risk in their business. However, consistent with its charter, EXIM is authorized to provide terms that are competitive with those of other ECAs. 
EXIM does not specifically require borrowers to have an equity interest in the transaction or to contribute the minimum cash payment. EXIM’s policies and procedures state that in practice, buyers often secure alternative financing for the cash payment, which is permissible as long as the financing is not officially supported by EXIM or another U.S. government agency. EXIM officials noted that during the analysis of creditworthiness, loan officers examine supporting documents for the alternative financing to assure that it is not guaranteed by EXIM or another U.S. government agency. OMB Circular A-129 states that the agency should periodically review programs in which the government bears more than 80 percent of any loss. The review is intended to evaluate the extent to which credit programs achieve intended objectives and whether the private sector has become able to bear a greater share of the risk. EXIM officials stated that EXIM performs program reviews through annual budget justifications submitted to OMB and annual competitiveness reports submitted to Congress. EXIM officials also stated that there are established timelines for preparing these reviews that must be followed to ensure that EXIM meets deadlines for submitting its budget documentation and the June 30 deadline for the annual competitiveness report. In addition, EXIM maintains a detailed summary of the products and terms that other countries’ official ECAs offer. However, EXIM does not have documented policies or procedures related to performing periodic program reviews. As a result, EXIM runs the risk that it will not effectively review its programs to determine whether the private sector could bear a greater share of the risk. EXIM’s Manual provides a framework for making credit decisions so that only qualified applicants that demonstrate reasonable assurance of repayment are provided loans or loan guarantees. 
This framework helps ensure consistent application of procedures for assessing an applicant’s creditworthiness and for overseeing certain delegated authority lenders. However, EXIM’s underwriting process could be improved by additional procedures. For example, the Manual did not address medium-term delegated authority lenders’ eligibility requirements for continued participation and decertification procedures for lenders who fail to meet the agency’s standards. Further, EXIM has not documented its process for periodic program reviews to determine whether the private sector could bear a greater share of the risk. Improvements in these areas could help enhance the oversight of lenders and the usefulness of program reviews. We are making the following two recommendations to EXIM: The Chief Operating Officer of EXIM should consider establishing documented policies and procedures for (1) determining medium-term delegated authority lenders’ eligibility for continued participation in EXIM’s programs and (2) decertifying or taking other appropriate actions for such lenders that do not meet compliance or eligibility standards. (Recommendation 1) The Chief Operating Officer of EXIM should establish documented policies and procedures for periodically reviewing credit programs in which the government bears more than 80 percent of any loss to determine whether private sector lenders should bear a greater share of the risk. (Recommendation 2) We provided a draft of this report to EXIM for review and comment. In written comments on a draft of this report, which are reproduced in appendix II, EXIM concurred with our two recommendations. EXIM also provided technical comments that we incorporated into the final report, as appropriate. In its written comments, EXIM described planned actions to address our recommendations. 
Specifically, EXIM stated that it will consider establishing documented policies and procedures for determining medium-term delegated authority lenders' eligibility for continued participation in EXIM's programs and decertifying or taking other appropriate actions for such lenders that do not meet compliance or eligibility standards. Further, EXIM will establish documented policies and procedures for periodically reviewing credit programs in which the government bears more than 80 percent of any loss to determine whether private sector lenders should bear a greater share of the risk. If implemented effectively, EXIM’s planned actions should address the intent of our recommendations. We are sending copies of this report to appropriate congressional committees, the Chairman of the Export-Import Bank, and the EXIM Inspector General. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3133 or dalkinj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our objectives were to determine the extent to which Export-Import Bank’s (EXIM) (1) process for updating its underwriting policies and procedures is properly designed and implemented and (2) underwriting policies and procedures for loan and loan guarantee transactions are consistent with guidance for managing federal credit programs. To assess the extent to which EXIM’s process for updating its underwriting policies and procedures was properly designed and implemented, we reviewed EXIM’s policies and procedures for updating its Loan, Guarantee and Insurance Manual (Manual) and interviewed EXIM officials. 
We assessed EXIM’s process to determine whether it sufficiently communicated the procedures to be performed and documentation to be prepared and was consistent with Standards for Internal Control in the Federal Government. We did not evaluate EXIM’s compliance with its process for updating its underwriting policies and procedures or assess their operating effectiveness. To assess the extent to which EXIM’s underwriting policies and procedures for loan and loan guarantee transactions were consistent with guidance for managing federal credit programs, we reviewed relevant requirements and guidance, including the Office of Management and Budget’s (OMB) Circular A-129, Policies for Federal Credit Programs and Non-Tax Receivables, and the Department of the Treasury’s Bureau of the Fiscal Service’s Managing Federal Receivables: A Guide for Managing Loans and Administrative Debt. Specifically, we focused on OMB Circular A-129’s Section II (C)(1)(a) through (c), Section III (A)(1) through (3), and Section III (C)(1)(a) through (e), which contain standards pertinent to risk management for loan and loan guarantee programs, including standards for (1) applicant screening (program eligibility, delinquency on federal debt, creditworthiness, delinquent child support, and taxpayer identification number); (2) loan documentation; (3) collateral (appraisal of real property, loan-to-value ratio, liquidation of real property collateral, and asset management standards and systems for real property disposal); (4) lender and servicer eligibility (participation criteria, review of eligibility, fees, decertification, and loan servicers); and (5) risk sharing practices (private lenders’ stake in full repayment, borrowers’ stake in full repayment, and program reviews). From the Bureau of the Fiscal Service’s Managing Federal Receivables, we identified key guidance related to credit extension (ch. 3) and management of guaranteed lenders and servicers (ch. 5). 
We reviewed EXIM’s policies and procedures related to underwriting for the loan and loan guarantee programs contained in its Manual and other documentation, such as its charter. We also discussed EXIM’s policies and procedures related to underwriting with EXIM officials. We compared EXIM’s underwriting processes to federal guidance for managing federal credit programs. As part of this comparison, we assessed whether policies and procedures included in EXIM’s Manual were consistent with federal guidance. However, because of EXIM’s limited lending authority during the period of our audit, we did not verify EXIM’s compliance with its underwriting policies and procedures or assess their operating effectiveness. In areas where we found EXIM’s policies and procedures to be consistent with federal guidance, there may still be opportunities to improve operating effectiveness. Further, guidance for managing federal credit programs includes additional requirements not related to underwriting, which we did not assess. In addition, we reviewed EXIM’s Office of Inspector General (OIG) reports since 2014 related to underwriting issues, various laws applicable to EXIM, and GAO reports related to EXIM. We also reviewed EXIM’s annual reports and competitiveness reports. We also discussed EXIM’s underwriting process with EXIM OIG officials and export credit financing and risk sharing practices with an official from the Organisation for Economic Co-operation and Development. We conducted this performance audit from January 2017 to May 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the contact named above, Marcia Carlsen (Assistant Director), Dragan Matic (Analyst in Charge), Sarah Lisk, Erika Szatmari, and Jingxiong Wu made key contributions to this report.
EXIM serves as the official export credit agency of the United States, providing a range of financial products to support the export of U.S. goods and services. Following the 2007–2009 financial crisis, demand for EXIM support increased. However, from July 2015 to May 2019, EXIM lacked a quorum on its Board of Directors and, as a result, was unable to approve medium- and long-term transactions greater than $10 million. The Export-Import Bank Reauthorization Act of 2012 includes a provision for GAO to evaluate EXIM's underwriting process. This report discusses the extent to which EXIM's (1) process for updating its underwriting policies and procedures is properly designed and implemented and (2) underwriting policies and procedures for loan and loan guarantee transactions are consistent with guidance for managing federal credit programs. To address these objectives, GAO evaluated EXIM's underwriting policies and procedures against federal guidance and discussed the underwriting process with EXIM officials. GAO found that Export-Import Bank's (EXIM) process for updating its underwriting policies and procedures was properly designed and implemented. EXIM's Loan, Guarantee and Insurance Manual (Manual) describes EXIM's underwriting policies and procedures for its short-, medium-, and long-term loans and loan guarantees. The Manual describes the responsibilities of divisions and loan officers involved in the underwriting process and is required to be updated at least annually, except for material changes (e.g., changes resulting from legislative actions or compliance with sanctions), which are required to be made as soon as possible. EXIM has initiated a process to streamline the Manual, which consists of over 1,400 pages, by separating the policies and procedures, thus allowing for continuous reviews. The underwriting sections of the Manual are tentatively scheduled for review in 2019. 
The primary guidance for designing and managing federal credit programs is Office of Management and Budget Circular A-129, Policies for Federal Credit Programs and Non-Tax Receivables. GAO found that EXIM's policies and procedures were consistent with three of five areas of federal guidance; two areas related to lender and servicer eligibility and risk sharing practices were partially consistent with federal guidance.

Applicant screening. EXIM's policies and procedures were consistent with guidance in that they require applicants to provide relevant financial information and assessments of applicant eligibility and creditworthiness.

Loan documentation. EXIM's process was consistent with guidance in that it requires the preparation of loan files, which include the application, credit reports, and related analyses, as well as collateral documentation and loan agreements.

Collateral requirements. EXIM's process was consistent with guidance in that it requires a security interest in the financed export items.

Lender and servicer eligibility. EXIM established eligibility and decertification procedures for short-term delegated authority lenders that were consistent with guidance. However, it did not establish similar procedures for medium-term delegated authority lenders.

Risk sharing practices. EXIM's process was generally consistent with guidance in that EXIM provides loan guarantee terms that officials stated were necessary to achieve program purposes. However, federal guidance also calls for an agency to periodically review programs in which the government bears more than 80 percent of any loss. While EXIM prepares various program reviews, it has not developed procedures to help ensure that its risk sharing practices are routinely reviewed.

Without enhancements to its policies and procedures, EXIM may allow lenders that are not qualified to underwrite transactions and runs the risk that it will not effectively review its programs. 
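The 80 percent review trigger in the A-129 guidance reduces to a simple check over program-level loss-share data. A minimal sketch; the program names, figures, and field names below are illustrative, not EXIM data:

```python
# Flag credit programs for periodic review when the government bears
# more than 80 percent of any loss, per the A-129 guidance above.
REVIEW_THRESHOLD = 0.80

def needs_periodic_review(program):
    """True when the government's loss share exceeds 80 percent."""
    return program["government_loss_share"] > REVIEW_THRESHOLD

# Illustrative program data (hypothetical, not actual EXIM figures).
programs = [{"name": "long-term guarantee", "government_loss_share": 1.00},
            {"name": "shared-risk pilot", "government_loss_share": 0.75}]
to_review = [p["name"] for p in programs if needs_periodic_review(p)]
# to_review -> ["long-term guarantee"]
```

Note that a program in which the government bears exactly 80 percent of losses would not trip the "more than 80 percent" test.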
GAO is making two recommendations to enhance EXIM's policies and procedures related to (1) the use of medium-term delegated authority lenders and (2) periodic program reviews. EXIM concurred with GAO's recommendations and described actions planned to address them.
Opioids, such as hydrocodone, oxycodone, morphine, and methadone, can be prescribed to treat both acute and chronic pain. Because many opioids have a high potential for abuse and may lead to severe psychological or physical dependence, many of them are classified as Schedule II drugs under the Controlled Substances Act. The abuse of opioids has been associated with serious consequences, including addiction, overdose, and death. Medicare Part D plan sponsors are private organizations, such as health insurance companies and pharmacy benefit managers, contracted by CMS to provide outpatient drug benefit plans to Medicare beneficiaries. CMS provides guidance to plan sponsors, which are responsible for establishing reasonable and appropriate drug utilization review (DUR) programs that assist in preventing misuse of prescribed medications in general, including the unsafe use of opioid pain medications. In 2013, CMS implemented the Medicare Part D opioid overutilization policy, intended to improve medication safety. Through the Overutilization Monitoring System (OMS), CMS seeks to ensure that plan sponsors establish reasonable and appropriate DUR programs to prevent overutilization of opioids. CMS uses criteria in the OMS to identify high-risk use of opioids. Plan sponsors may, but are not required to, use these criteria as part of their DUR programs. CMS’s Center for Program Integrity (CPI) oversees Part D program integrity and coordinates with other parts of CMS that monitor plan sponsor compliance with the Part D program. CPI has primary responsibility for overseeing NBI MEDIC, which is responsible for identifying and investigating potential Part D fraud, waste, and abuse generally. 
NBI MEDIC handles complaints from beneficiaries and others, as well as requests from law enforcement; investigates providers and refers them to law enforcement as appropriate; and analyzes Part D program prescription drug event records and other data to identify patterns that may indicate fraud, waste, or abuse. NBI MEDIC’s responsibilities cover all Part D drugs and are not opioid-specific. One concern associated with prescribed opioids is their diversion—that is, the redirection of prescription drugs for an illegal purpose such as recreational use or resale. Diversion can include selling prescription drugs that were obtained legally, transferring a legitimately prescribed opioid to family or friends who may be trying to self-medicate, or pretending to be in pain to obtain a prescription opioid due to an addiction. Diversion is often associated with “doctor shopping,” the attempt to obtain large amounts of opioids through multiple providers or from multiple pharmacies. Doctor shopping can be used to help support an individual’s addiction or to obtain opioids for resale on the black market. Drug diversion can also include illicit prescribing, whereby providers—commonly known as “pill mills”—write unnecessary prescriptions or prescribe larger quantities than are medically necessary. Opioids are among the drugs with the highest potential for drug diversion. In 2016, CDC issued guidelines with recommendations for prescribing opioids in outpatient settings for chronic pain, based on consultation with experts and a review of scientific evidence. CDC noted in the guidelines that primary care physicians have reported concerns about opioid misuse and addiction, and find managing patients with chronic pain a challenge, possibly because of insufficient training in prescribing opioids. 
According to the guidelines, most experts agreed that long-term opioid dosage of 50 milligrams (mg) morphine equivalent dose (MED) per day or more generally increases overdose risk without necessarily adding benefits for pain control or function. Experts also noted that daily opioid dosages close to or greater than 100 mg MED per day are associated with significant risks. The guidelines therefore recommended that providers use caution when prescribing opioids at any dose, carefully reassess evidence of individual benefits and risks when increasing the dosage to 50 mg MED per day or more, and either avoid or carefully justify dosage at 90 mg MED or more. In making these recommendations, CDC noted that there is not a dosage threshold below which the risk of overdose is eliminated, but found that dosages less than 50 mg MED would reduce the risk for a large portion of patients. CDC also noted that providers should use additional caution in prescribing opioids to patients aged 65 and older, because the drugs can accumulate in the body to toxic levels. CMS provides guidance to plan sponsors on how they should monitor opioid overutilization problems among Part D beneficiaries. The agency includes this guidance in its annual letters to plan sponsors, known as call letters; it also provided a supplemental memo to plan sponsors in 2012. Among other things, these guidance documents instructed plan sponsors to implement a retrospective drug utilization review (DUR) system to monitor beneficiary utilization starting in 2013. As part of the DUR systems, CMS requires plan sponsors to have methods to identify beneficiaries who are potentially overusing specific drugs or groups of drugs, including opioids. Also in 2013, CMS created the Overutilization Monitoring System (OMS), which outlines criteria to identify beneficiaries with high-risk use of opioids and to oversee sponsors’ compliance with CMS’s opioid overutilization policy. 
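The CDC dosage thresholds described above lend themselves to a mechanical check once each prescription is converted to a morphine equivalent dose. A minimal sketch, assuming per-drug daily doses are known; the conversion factors shown are commonly published morphine-milligram-equivalent factors, and the field names and helper functions are hypothetical:

```python
# Convert a day's opioid prescriptions to a total morphine equivalent
# dose (MED) and classify it against the CDC guideline thresholds.
# Factors are published morphine-milligram-equivalent values (simplified;
# dose-dependent drugs such as methadone are omitted here).
MME_FACTOR = {"morphine": 1.0, "hydrocodone": 1.0, "oxycodone": 1.5}

def daily_med(prescriptions):
    """Sum daily opioid doses (mg) converted to morphine equivalents."""
    return sum(p["daily_dose_mg"] * MME_FACTOR[p["drug"]] for p in prescriptions)

def cdc_risk_band(med_mg):
    """Map a daily MED to the CDC guideline bands discussed above."""
    if med_mg >= 90:
        return "avoid or carefully justify"
    if med_mg >= 50:
        return "carefully reassess benefits and risks"
    return "use caution at any dose"

rx = [{"drug": "oxycodone", "daily_dose_mg": 30},    # 30 * 1.5 = 45 MME
      {"drug": "hydrocodone", "daily_dose_mg": 20}]  # 20 * 1.0 = 20 MME
total = daily_med(rx)  # 65 mg MED -> "carefully reassess benefits and risks"
```

As CDC notes, no band implies zero risk; the lowest band still calls for caution at any dose.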
Plan sponsors may use the OMS criteria for their DUR systems, but they have some flexibility to develop their own targeting criteria, within CMS guidance. The OMS considers beneficiaries to be at a high risk of opioid overuse when they meet all three of the following criteria: (1) receive a total daily MED greater than 120 mg for 90 consecutive days, (2) receive opioid prescriptions from four or more providers in the previous 12 months, and (3) receive opioids from four or more pharmacies in the previous 12 months. The criteria exclude beneficiaries with a cancer diagnosis and those in hospice care, for whom higher doses of opioids may be appropriate. Officials from all six plan sponsors we interviewed confirmed they have a DUR system that specifically looks at opioids. In addition, to be consistent with CMS, all of the plan sponsors adopted criteria similar to the OMS, with some minor modifications—typically involving the number of months in which they measured beneficiaries’ opioid prescriptions. Through the OMS, CMS generates quarterly reports that list beneficiaries who meet all of the criteria and who are identified as high-risk and then distributes the reports to the plan sponsors. Plan sponsors are expected to review the list of identified beneficiaries, determine appropriate action, and then respond to CMS with information on their actions within 30 days. According to CMS officials, the agency also expects that plan sponsors will share any information with CMS on beneficiaries that they identify through their own DUR systems. Some actions plan sponsors may take include the following.

Case management. After plan sponsors identify beneficiaries with patterns of inappropriate opioid use and possible coordination of care issues through their DUR analysis, they may conduct case management. 
Case management may include an attempt to improve coordination issues, and often involves provider outreach, whereby the plan sponsor will contact the providers associated with the beneficiary to let them know that the beneficiary is receiving high levels of opioids and may be at risk of harm. In addition to outreach, officials from two of the six plan sponsors we interviewed told us they focus on provider education, and one plan sponsor said it may direct providers to the CDC guidelines or other information to help reduce overutilization. Officials from two plan sponsors reported that they also reach out to beneficiaries to let them know they are receiving high levels of opioids and may be at risk of harm.

Beneficiary-specific point-of-sale (POS) edits. When plan sponsors determine that a beneficiary is at risk for opioid harm, they may choose to implement a beneficiary-specific POS edit to prevent overutilization. Beneficiary-specific POS edits are restrictions that limit these beneficiaries to certain opioids and amounts. Pharmacists receive a message when a beneficiary attempts to fill a prescription that exceeds the limit in place for that beneficiary. CMS expects plan sponsors to report on the POS edits they use through CMS’s Medicare Advantage and Prescription Drug System for information sharing and monitoring purposes. That way, if a beneficiary changes plans, the new plan sponsor will receive an alert about the beneficiary’s record of POS edits. From February 2014 through March 10, 2016, there were 2,693 POS edits reported in that system for 2,520 beneficiaries.

Formulary-level POS edits. CMS expects plan sponsors to use formulary-level POS edits to prospectively prevent opioid overutilization. These edits alert providers who may not have been aware that their patients are receiving high levels of opioids from other doctors. CMS recommends that these formulary-level edits be used when a beneficiary has a cumulative opioid MED of at least 90 mg. 
Referrals for investigation. According to the six plan sponsors we interviewed, referrals can be made to NBI MEDIC or to the plan sponsor’s own internal investigative unit, if they have one. After investigating a particular case, if a plan sponsor or NBI MEDIC determines that a beneficiary is suspected of diverting opioids, they may refer the case to the HHS-OIG or a law enforcement agency, according to CMS, NBI MEDIC, and one plan sponsor.

Pharmacy lock-ins. Beginning in 2019, Medicare Part D plan sponsors will be able to restrict certain beneficiaries identified as at-risk for prescription drug abuse to a single pharmacy for all their opioid prescriptions, known as a pharmacy “lock-in.” Some plan sponsors explained that they use pharmacy lock-ins for their commercial and Medicaid lines of business, and generally found them to be a useful tool for controlling opioid use.

Based on CMS’s use of the OMS and the actions taken by plan sponsors, CMS reported a decrease in the number of beneficiaries meeting the OMS high-risk criteria—which agency officials consider an indication of success toward its goal of decreasing opioid use disorder. From calendar years 2011 through 2016, there was a 61 percent decrease in the number of beneficiaries meeting the OMS criteria. (See table 1.)

In addition to using the OMS as a monitoring tool to oversee plan sponsors’ compliance with their DUR system requirements, CMS relies on patient safety measures to assess how well Part D plan sponsors are monitoring beneficiaries and taking appropriate actions. Specifically, CMS tracks data on plan sponsors’ performance for 15 measures related to Part D patient safety that are developed and maintained by the Pharmacy Quality Alliance, and CMS communicates with plan sponsors about their performance. In 2016, CMS started tracking plan sponsors’ performance on three Pharmacy Quality Alliance-approved patient safety measures that are directly related to opioids:

1. The proportion of beneficiaries that use opioids at high dosages (more than 120 mg MED for 90 days or longer) in persons without cancer or not in hospice care.

2. The proportion of beneficiaries that use opioids from multiple providers (four or more providers and four or more pharmacies) in persons without cancer or not in hospice care.

3. The proportion of beneficiaries that use opioids at high dosage and from multiple providers in persons without cancer or not in hospice care, who meet both of the other measures.

The three measures are similar to the OMS criteria in that they identify beneficiaries with high dosages of opioids (120 mg MED) from multiple providers and pharmacies (four or more of each). However, there are a number of differences between these measures and the OMS. For example, the OMS counts actual beneficiaries, while the patient safety measures report member-years, which are adjusted to account for beneficiaries who are enrolled in a plan for only part of a year. In addition, these measures separately identify beneficiaries who fulfill each of those criteria individually. For example, data gathered on the first measure indicate that about 285,119 beneficiaries, counted as member-years across all Part D plans, received high doses (more than 120 mg MED) of opioids for 90 days or longer during calendar year 2016. CMS also uses these data in different ways from how it uses OMS data. The OMS criteria were developed and maintained by CMS to identify patients at risk for harm who may warrant case management and to examine opioid use trends across the Part D program, including progress toward its goal of decreasing opioid use disorder. In contrast, CMS officials told us that the agency uses the patient safety measures to assess plan sponsor performance. The patient safety measures also serve as a tool for Part D sponsors to compare their performance to overall averages, and to track progress in improving these measures over time. 
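The member-years adjustment mentioned above is straightforward arithmetic: each beneficiary contributes the fraction of the year enrolled, so part-year enrollees are not counted the same as full-year ones. A minimal sketch, with a hypothetical field name:

```python
# Member-years: sum each beneficiary's enrolled months divided by 12,
# rather than counting heads, so part-year enrollment is prorated.
def member_years(enrollments):
    """Total member-years contributed by beneficiaries meeting a measure."""
    return sum(e["months_enrolled"] for e in enrollments) / 12

cohort = [{"months_enrolled": 12},  # full year    -> 1.0
          {"months_enrolled": 6},   # half year    -> 0.5
          {"months_enrolled": 3}]   # quarter year -> 0.25
# member_years(cohort) -> 1.75, versus a head count of 3
```

This is why a member-year figure such as the 285,119 cited above is not directly comparable to an OMS beneficiary count.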
CMS also tracks sponsors’ progress in improving the measures, according to agency officials. Each quarter, CMS contacts plan sponsors who have the lowest performance on each measure and expects them to respond about actions they take to improve performance. Beginning in April 2017, the agency began distributing to plan sponsors the beneficiary-level files for the patient safety measures. CMS officials said that these files provide a complete list of beneficiaries included in each of the measures. While CMS tracks the total number of beneficiaries who meet all three OMS criteria as part of its opioid overutilization oversight across the Part D program, it does not have comparable information on most beneficiaries who may be at risk for harm. CMS has goals to reduce the risk of opioid use disorders, overdoses, inappropriate prescribing, and drug diversion in its Opioid Misuse Strategy, but OMS does not track the number of beneficiaries with prescriptions for high doses of opioids unless those beneficiaries are also receiving them both from four or more providers and from four or more pharmacies; and agency officials told us that CMS has no plans for OMS to begin doing so. According to CDC guidelines, long-term use of high opioid dosages—those above a MED of 90 mg per day—are associated with significant risk of harm and should be avoided if possible. Based on the CDC guidelines, outreach to Part D plan sponsors, and CMS analyses of Part D data, CMS has revised its current OMS criteria to include more at-risk beneficiaries beginning in 2018. The new OMS criteria define a high user as having an average daily MED greater than 90 mg for any duration, and who receives opioids from four or more providers and four or more pharmacies, or from six or more providers regardless of the number of pharmacies, for the prior 6 months. 
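Stated as code, the difference between the original and revised OMS screens is easy to see. A sketch of the two rules described above, operating on a per-beneficiary utilization summary; the field names are hypothetical, and the cancer/hospice exclusions are applied as the OMS does:

```python
def meets_original_oms(b):
    """Original criteria: daily MED above 120 mg for 90 consecutive days,
    plus 4+ prescribers and 4+ pharmacies in the prior 12 months."""
    if b["cancer"] or b["hospice"]:
        return False  # higher doses may be appropriate for these patients
    return (b["max_consecutive_days_over_120_med"] >= 90
            and b["prescribers"] >= 4
            and b["pharmacies"] >= 4)

def meets_revised_oms(b):
    """Revised (2018) criteria: average daily MED greater than 90 mg for
    any duration, and either 4+ prescribers with 4+ pharmacies, or 6+
    prescribers regardless of pharmacies, over the prior 6 months."""
    if b["cancer"] or b["hospice"]:
        return False
    if b["avg_daily_med"] <= 90:
        return False
    return ((b["prescribers"] >= 4 and b["pharmacies"] >= 4)
            or b["prescribers"] >= 6)
```

Note that both screens still require multiple prescribers; a beneficiary with a very high dose from a single provider passes neither, which is the gap the report discusses.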
According to CMS officials, the revised OMS criteria, like the current criteria, are intended to identify the beneficiaries it determined are at the greatest risk of harm: those who may lack coordinated care as a result of using multiple pharmacies and providers. CMS officials also noted that the revised criteria are intended to limit the increase in the number of beneficiaries for whom plan sponsors are expected to take action, such as case management, to avoid overburdening plan sponsors with unreasonable workload levels. While the revised criteria will help identify beneficiaries who CMS determined are at the highest risk of opioid misuse and therefore may need case management by plan sponsors, they will not provide information on most Part D beneficiaries who may also be at risk of harm. In developing the revised criteria, CMS conducted a one-time analysis that estimated there were 727,016 beneficiaries with an average MED of 90 mg or more, for any length of time during a 6-month measurement period in 2015, regardless of the number of providers or pharmacies used. These beneficiaries may be at risk of harm from opioids, according to CDC guidelines, and therefore tracking the number of these beneficiaries over time could help CMS to determine whether it is making progress toward meeting the goals specified in its Opioid Misuse Strategy. However, CMS officials told us that the agency does not keep track of these beneficiaries, and does not have plans to do so as part of OMS. Instead, CMS uses the number of beneficiaries who meet the OMS criteria as an indicator of progress toward its goals. CMS estimated that 33,223 beneficiaries would have met its revised criteria based on 2015 data, which is a much smaller number than the estimated 727,016 beneficiaries at risk of harm from opioids. (See fig. 1.) 
In 2016, CMS began to gather information from its patient safety measures on the number of beneficiaries who use more than 120 mg MED of opioids for 90 days or longer, regardless of the number of providers and pharmacies. However, this information does not include all at-risk beneficiaries, because the threshold is more lenient than indicated in CDC guidelines and CMS’s new criteria for OMS. Specifically, CMS’s one-time analysis of 2015 data indicated that 727,016 beneficiaries received prescriptions with an average MED of 90 mg or more for any length of time during a 6-month measurement period. In contrast, the 2016 patient safety measures reports identified significantly fewer beneficiaries, 285,119, in its most comparable measure—member-years for opioid prescriptions at 120 mg MED for 90 consecutive days or longer. According to CMS officials, CMS shared feedback with the Pharmacy Quality Alliance to consider updating the threshold to 90 mg MED to align with CDC guidelines and the revised OMS criteria. CMS officials said the agency will consider adopting these updates once complete. In addition, while CMS monitors the patient safety measure data, these data are relatively new. CMS officials told us that, as a result, the agency does not yet have enough data to report changes over time toward its goals to reduce the risk of opioid use disorders, overdoses, and inappropriate prescribing. Neither the data gathered as part of OMS nor the patient safety measures gathered so far are adequate to provide CMS with the information necessary to track progress toward meeting its goal of reducing harm from opioids. 
While tracking a smaller number of beneficiaries in OMS is useful for targeting resource-intensive plan sponsor actions, keeping track of the larger number of beneficiaries at risk of harm from high doses of opioids—greater than 90 mg MED for any duration regardless of the number of providers and pharmacies—could provide CMS with information on progress toward its goals without additional monitoring by plan sponsors. Doing so would also be consistent with federal internal control standards, which require agencies to use quality information to achieve objectives and address risks. Without tracking the number of beneficiaries who receive potentially dangerous levels of opioids regardless of the number of providers or pharmacies, and then examining changes in that number over time, CMS lacks key information that would be useful to determine if it is making progress toward reducing the risk of opioid harm for Part D beneficiaries. CMS oversees providers who prescribe opioids to Medicare Part D beneficiaries through its contractor, NBI MEDIC, and the Part D plan sponsors. CMS requires NBI MEDIC to identify providers who prescribe high amounts of drugs classified as Schedule II under the Controlled Substances Act, which indicates a high potential for abuse and includes many opioids. Using prescription drug event data, NBI MEDIC conducts a peer comparison of providers’ prescribing practices to identify outlier providers—the highest prescribers of Schedule II drugs, which include, but are not limited to, opioids. NBI MEDIC’s initial analyses focus on providers associated with at least 100 prescription drug event records or at least $100,000 in total Part D payments for Schedule II drugs over the course of one year. These providers are then classified as outliers if they rank high in both the number of prescription drug records per prescriber and prescriptions per beneficiary by specialty within each state. 
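The peer-comparison screen described above can be read as a two-stage filter: a volume floor, then a within-peer-group ranking on two metrics. A sketch of that logic; the top-decile cutoff and all field names are assumptions, since the report does not specify the rank cutoff NBI MEDIC uses:

```python
from collections import defaultdict

def flag_outliers(providers, top_fraction=0.10):
    """Two-stage outlier screen over one year of Schedule II data.

    Stage 1: keep providers with >= 100 prescription drug event records
             or >= $100,000 in Part D payments (the floors cited above).
    Stage 2: within each (state, specialty) peer group, flag providers
             ranking high on BOTH records-per-prescriber and
             prescriptions-per-beneficiary (cutoff assumed: top decile).
    """
    eligible = [p for p in providers
                if p["rx_records"] >= 100 or p["part_d_payments"] >= 100_000]
    groups = defaultdict(list)
    for p in eligible:
        groups[(p["state"], p["specialty"])].append(p)
    outliers = []
    for peers in groups.values():
        k = max(1, int(len(peers) * top_fraction))
        top_records = {id(p) for p in sorted(
            peers, key=lambda p: p["rx_records"], reverse=True)[:k]}
        top_rx_per_bene = {id(p) for p in sorted(
            peers, key=lambda p: p["rx_per_beneficiary"], reverse=True)[:k]}
        outliers.extend(p for p in peers
                        if id(p) in top_records and id(p) in top_rx_per_bene)
    return outliers
```

Requiring a provider to rank high on both metrics, within a state-and-specialty peer group, keeps high-volume but otherwise typical specialists (e.g., pain clinics compared against other pain clinics) from being flagged on volume alone.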
NBI MEDIC reports to CMS on the providers with the highest number of prescriptions identified by the analysis. Beginning with the October 2016 report, CMS began sharing NBI MEDIC’s prescriber outlier report with the plan sponsors quarterly to supplement their own investigations of potential fraud, waste, and abuse. According to data from NBI MEDIC, the number of outlier providers identified has generally remained stable except for an increase in 2015. NBI MEDIC and CMS officials said this increase occurred when a commonly used opioid, hydrocodone, was added to the analysis after it was reclassified as a Schedule II drug. NBI MEDIC gathers data on Medicare Part C and Part D and uses its Predictive Learning Analytics Tracking Outcome (PLATO) system to conduct a number of data analysis projects. According to NBI MEDIC officials, these PLATO projects seek to identify potential fraud by examining data on provider behaviors. In addition, according to officials, PLATO is capable of allowing NBI MEDIC to share information on providers with plan sponsors. NBI MEDIC officials stated there are two current PLATO projects that include a focus on some opioids. The TRIO data project identifies providers who prescribe beneficiaries a combination of an opioid, a benzodiazepine, and the muscle relaxant carisoprodol. This well-known combination of drugs is used to increase the effects of opioids. The Pill Mill data project identifies providers with abnormal prescribing behavior in authorizing controlled substances, including opioids, absent medical necessity. To identify providers potentially operating a pill mill, 17 risk factors are considered, including the number of beneficiaries for whom a provider prescribed controlled substances, the quantity of these medications, the number of beneficiaries who travel long distances to receive medications, and the number of beneficiaries treated for drug abuse or misuse at emergency rooms. 
Another analysis that NBI MEDIC conducts, according to its officials, is the Transmucosal Immediate Release Fentanyl project, which identifies potential improper payments for medicines containing fentanyl, a prescription opioid pain reliever. NBI MEDIC looks for instances of this drug being prescribed to beneficiaries who do not have cancer combined with breakthrough pain, the only approved use for this drug. NBI MEDIC officials said they conduct investigations to assist CMS in identifying cases of potential fraud, waste, and abuse among providers for Medicare Part C and Part D. The investigations are prompted by complaints from plan sponsors, calls to NBI MEDIC’s call center, NBI MEDIC’s analysis of outlier providers, or one of its other data analysis projects. As part of its investigations, NBI MEDIC officials said they may access data from Medicare Part B, which includes coverage for doctors’ services and outpatient care, to determine whether providers’ diagnoses coincide with their prescriptions. Officials added that they investigate inappropriate prescribing by reviewing Part D prescription records, medical records, or PLATO data; or by conducting background checks, interviewing beneficiaries, or conducting site visits, among other activities. NBI MEDIC data indicates that the total number of its investigations decreased from 2013 to 2016, which, according to NBI MEDIC officials, occurred because it increased activities related to data analysis and collaboration with plan sponsors. After identifying providers engaged in potential fraudulent overprescribing, NBI MEDIC officials said they may refer cases to agencies for further investigation and potential prosecution, such as the HHS-OIG, state and local law enforcement, the Federal Bureau of Investigation, or the Drug Enforcement Administration. 
In 2016, NBI MEDIC data showed that it referred a total of 119 cases to the HHS-OIG and 48 to agencies within the Department of Justice, including the Federal Bureau of Investigation and the Drug Enforcement Administration. CMS officials told us that they do not routinely track the results of individual cases referred by NBI MEDIC to other agencies. A 2016 Senate committee report indicated that the HHS-OIG declined and returned more than half of the cases referred to it from 2013 through 2015. According to NBI MEDIC officials, cases may be rejected for reasons such as not meeting prosecutorial thresholds for evidence or HHS-OIG not having enough staff to take on the workload. NBI MEDIC officials told us that HHS-OIG does not always inform NBI MEDIC of its reasons for declining the referrals. CMS requires all plan sponsors to adopt and implement an effective compliance program, which must include measures to prevent, detect, and correct Part C or Part D program noncompliance, as well as fraud, waste, and abuse. CMS communicates guidance for plan sponsors’ compliance programs through Chapter 9 of CMS’s Prescription Drug Benefit Manual and in annual letters. CMS’s guidance focuses broadly on prescription drugs and does not specifically address opioids. To detect fraud, waste, and abuse among providers, plan sponsors told us they use their own data analysis and criteria, as well as NBI MEDIC’s list of outlier providers. For example, plan sponsors identify providers suspected of fraud, waste, or abuse by looking for certain characteristics, such as providers who have a large number of beneficiaries traveling from a different zip code to receive prescriptions, or providers who prescribe large quantities of commonly abused drugs with no associated medical claims to support the prescriptions. Once the suspected providers are identified, plan sponsors said that they conduct their own investigations to determine if there is sufficient evidence of inappropriate prescribing. 
Plan sponsors told us they may choose to take a number of actions based on these investigations, including choosing to refer the case to NBI MEDIC. Additionally, if appropriate, plan sponsors can educate providers about prescribing guidelines and best practices, or notify them that their patients may be doctor shopping, in order to improve coordination of care. They may also terminate a provider from their plan if they find evidence of fraud or abuse. CMS lacks the information necessary to adequately determine the number of providers potentially overprescribing opioids, and therefore cannot determine the effectiveness of efforts to achieve the agency’s goals of reducing the risk of opioid use disorders, overdoses, inappropriate prescribing, and drug diversion. CMS’s oversight actions focus broadly on Schedule II drugs rather than specifically on opioids. For example, NBI MEDIC’s analyses to identify outlier providers do not indicate the extent to which they may be overprescribing opioids specifically. According to CMS officials, they direct NBI MEDIC to focus on Schedule II drugs because these drugs have a high potential for abuse, whether they are opioids or other drugs. However, without specifically identifying opioids in these analyses—or an alternate source of data—CMS lacks data on providers who prescribe high amounts of opioids, and therefore cannot assess progress toward meeting its goals related to opioid use. CMS also lacks key information necessary for oversight of opioid prescribing, because it does not require plan sponsors to report to NBI MEDIC or CMS cases of fraud, waste, and abuse; cases of overprescribing; or any actions taken against providers. Plan sponsors collect information on cases of fraud, waste, and abuse, and can choose to report this information to NBI MEDIC or CMS.
PLATO, a voluntary reporting system, is one way that plan sponsors can report information to NBI MEDIC or CMS, and share information with other plan sponsors about providers they investigate and actions they take. While CMS receives some information from plan sponsors who voluntarily report their actions, it does not know the full extent to which plan sponsors have identified providers who have prescribed high amounts of opioids and taken action to reduce overprescribing. Without this information, CMS cannot determine the extent to which plan sponsors are taking action to reduce overprescribing, making it difficult to assess progress in this area. CMS officials told us that they receive reports on what information plan sponsors enter into PLATO. However, according to these officials, they do not have information on all actions taken by plan sponsors; therefore, CMS does not know how often plan sponsors use PLATO or what proportion of actions they report. A 2015 HHS-OIG report recommended that CMS require plan sponsors to report all potential fraud and abuse to CMS and/or NBI MEDIC. CMS disagreed with this recommendation, and stated that plan sponsors currently have several options for referring incidents, that CMS has worked with plan sponsors to improve organizational performance, and that plan sponsors regularly share information on best practices for prevention and detection of fraud. The HHS-OIG continues to recommend that CMS require reporting due to the lack of a comprehensive set of data needed to monitor providers’ inappropriate prescribing. Without specifically monitoring providers’ overprescribing of opioids, CMS cannot determine if its efforts, or the efforts of NBI MEDIC and plan sponsors, are helping to contribute to its goals related to opioid use. Federal internal control standards require agencies to conduct monitoring activities and to use quality information to achieve objectives and address risks.
Without adequate information on providers’ opioid prescribing patterns in Part D, CMS is unable to determine whether its related oversight efforts—including such efforts by NBI MEDIC or Part D plan sponsors—are effective or should be adjusted. A large number of Medicare Part D beneficiaries use prescription opioids, and reducing the inappropriate prescribing of these drugs is a key part of CMS’s strategy to decrease the risk of opioid use disorder, overdoses, and deaths. Despite working to identify and decrease egregious opioid use behavior—such as doctor shopping—among beneficiaries in Medicare Part D, CMS lacks the necessary information to effectively determine the full number of beneficiaries at risk of opioid harm. CMS recently expanded the number of beneficiaries for whom it expects plan sponsors to conduct intervention efforts, such as case management, and has begun to collect additional patient safety measure data on beneficiaries at risk of harm from opioids. However, these efforts have not yet provided CMS with sufficient data to track how many beneficiaries are receiving large doses of opioids, and therefore are at risk of harm. Without expanding and enhancing its data collection efforts to include information on more at-risk beneficiaries, CMS cannot fully assess whether it is making sufficient progress toward its goals of reducing opioid use disorders, overdoses, inappropriate prescribing, and drug diversion. CMS’s efforts to oversee opioid prescribing specifically are also inadequate. CMS directs NBI MEDIC to focus its analyses on providers who prescribe any drugs with a high risk of abuse, but NBI MEDIC does not specifically track those providers who prescribe opioids. Absent opioid-specific monitoring, CMS cannot assess whether its efforts to reduce opioid overprescribing are effective, or if opioid prescribing patterns are changing over time. 
In addition, neither CMS nor NBI MEDIC can be sure they have complete information about providers potentially overprescribing opioids to Part D beneficiaries, because plan sponsors are not required to report to CMS or NBI MEDIC all potential fraud and abuse incidents or actions sponsors have taken against providers. As a result, CMS lacks information about plan sponsors’ monitoring of providers who overprescribe opioids, and is therefore unable to determine if the agency’s and plan sponsors’ efforts are successful in achieving CMS’s goals. We are making the following three recommendations to CMS. The Administrator of CMS should gather information over time on the number of beneficiaries at risk of harm from opioids, including those who receive high opioid morphine equivalent doses regardless of the number of pharmacies or providers, as part of assessing progress over time in reaching the agency’s goals related to reducing opioid use. (Recommendation 1) The Administrator of CMS should require its contractor, NBI MEDIC, to identify and conduct analyses on providers who prescribe high amounts of opioids separately from providers who prescribe high amounts of any Schedule II drug. (Recommendation 2) The Administrator of CMS should require plan sponsors to report to CMS on investigations and other actions taken related to providers who prescribe high amounts of opioids. (Recommendation 3) We provided a draft of this report to HHS for comment. HHS provided written comments, which are reprinted in appendix I, and technical comments, which we incorporated as appropriate. In its written comments, HHS described its efforts to reduce opioid overutilization in Medicare Part D. HHS noted that these efforts include a medication safety approach to improve care coordination for high-risk beneficiaries using opioids, quality metrics for plan sponsors, and data analysis of prescribing patterns to target potential fraud, waste, and abuse. 
For example, HHS noted that CMS adopted a Medicare Part D opioid overutilization policy in 2013 that provided specific guidance to Part D plans on effective drug utilization review programs to reduce overutilization of opioids. As described in our report, CMS’s opioid overutilization policy requires sponsors to implement retrospective drug utilization review programs to identify beneficiaries who are potentially overusing opioids. Among other things, sponsors may choose to implement beneficiary-specific edits that limit high-risk beneficiaries to certain opioids and amounts, and CMS expects them to use formulary- level edits to alert providers when their patients are receiving high levels of opioids from other doctors. HHS also concurred with two of our three recommendations. HHS concurred with our recommendation that CMS gather information over time on the number of beneficiaries at risk of harm from opioids, as part of assessing progress toward agency goals. HHS commented that CMS tracks beneficiaries who meet these criteria through the patient safety measures. However, while these patient safety measures are a potential source of this information, they currently do not include all at-risk beneficiaries, because the opioid use threshold they use (120 mg MED for 90 days or longer) is more lenient than indicated in CDC guidelines or in CMS’s revised OMS criteria. In addition, while CMS uses the patient safety measures to assess plan sponsor performance, the data are relatively new, and CMS has not yet used them to report progress over time toward its goals. HHS concurred with our recommendation that CMS require NBI MEDIC to gather separate data on providers who prescribe high amounts of opioids, and HHS noted that it intends to work with NBI MEDIC to identify trends in outlier prescribers of opioids. 
HHS did not concur with our recommendation that CMS require plan sponsors to report on investigations and other actions taken related to providers who prescribe high amounts of opioids. HHS noted that plan sponsors have the responsibility to detect and prevent fraud, waste, and abuse and that CMS reviews cases when it conducts audits. HHS also stated that it seeks to balance requirements on plan sponsors when considering new regulatory requirements. As noted in our report, plan sponsors conduct investigations and take actions against providers, and some plan sponsors report actions to CMS and NBI MEDIC. However, without complete reporting, such as reporting from all plan sponsors on the actions they take to reduce overprescribing, CMS is missing key information that could help assess progress in this area. Due to the importance of this information, we continue to believe that CMS should require plan sponsors to report on the actions they take. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of HHS and the Administrator of CMS. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or CurdaE@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Will Simerl (Assistant Director), Carolyn Feis Korman (Analyst-in-Charge), Amy Andresen, Samantha Pawlak, and Patricia Roy made key contributions to this report. Also contributing were Muriel Brown, Drew Long, and Emily Wilson.
Misuse of prescription opioids can lead to overdose and death. In 2016, over 14 million Medicare Part D beneficiaries received opioid prescriptions, and spending for opioids was almost $4.1 billion. GAO and others have reported on inappropriate activities and risks associated with these prescriptions, such as receiving multiple opioid prescriptions from different providers. GAO was asked to describe what is known about CMS’s oversight of Medicare Part D opioid use and prescribing. This report examines (1) CMS oversight of beneficiaries who receive opioid prescriptions under Part D, and (2) CMS oversight of providers who prescribe opioids to Medicare Part D beneficiaries. GAO reviewed CMS opioid utilization and prescriber data, CMS guidance for plan sponsors, and CMS’s strategy to prevent opioid misuse. GAO also interviewed CMS officials, the six largest Part D plan sponsors, and 12 national associations selected to represent insurance plans, pharmacy benefit managers, physicians, patients, and regulatory and law enforcement authorities. The Centers for Medicare & Medicaid Services (CMS) provides guidance on the monitoring of Medicare beneficiaries who receive opioid prescriptions to plan sponsors—private organizations that implement the Medicare drug benefit, Part D—but lacks information on most beneficiaries at risk of harm. CMS provides plan sponsors guidance on how they should monitor opioid overutilization among Medicare Part D beneficiaries and requires them to implement drug utilization review systems that use criteria similar to CMS’s. CMS’s criteria focus on beneficiaries who (1) receive prescriptions of high doses of opioids, (2) receive prescriptions from four or more providers, and (3) fill the prescriptions at four or more pharmacies. According to CMS officials, this approach allows plan sponsors to focus their actions on those beneficiaries CMS determined to have the highest risk of harm from opioid use.
CMS’s criteria, including recent revisions, do not provide sufficient information about the larger population of potentially at-risk beneficiaries. CMS estimates that while 33,223 beneficiaries would have met the revised criteria in 2015, 727,016 would have received high doses of opioids regardless of the number of providers or pharmacies. In 2016, CMS began to collect information on some of these beneficiaries using a higher dosage threshold for opioid use. This approach misses some who could be at risk of harm, based on Centers for Disease Control and Prevention guidelines. As a result, CMS is limited in its ability to assess progress toward meeting the broader goals of its Opioid Misuse Strategy, which includes activities to reduce the risk of harm from opioid use.

CMS Estimates of 2015 Part D Beneficiaries with High Opioid Doses and Those Who Would Have Met Revised Overutilization Monitoring Criteria

CMS oversees the prescribing of drugs at high risk of abuse through a variety of projects, but does not analyze data specifically on opioids. According to CMS officials, CMS and plan sponsors identify providers who prescribe large amounts of drugs with a high risk of abuse, and those suspected of fraud or abuse may be referred to law enforcement. However, GAO found that CMS does not identify providers who may be inappropriately prescribing large amounts of opioids separately from other drugs, and does not require plan sponsors to report actions they take when they identify such providers. As a result, CMS is lacking information that it could use to assess how opioid prescribing patterns are changing over time, and whether its efforts to reduce harm are effective. GAO recommends that CMS (1) gather information on the full number of at-risk beneficiaries receiving high doses of opioids, (2) identify providers who prescribe high amounts of opioids, and (3) require plan sponsors to report to CMS on actions related to providers who inappropriately prescribe opioids.
HHS concurred with the first two recommendations, but not with the third. GAO continues to believe the recommendation is valid, as discussed in the report.
Major disaster declarations can trigger a variety of federal response and recovery programs for government and nongovernmental entities, households, and individuals. FEMA’s Office of Response and Recovery manages the PA grant program, providing funds to states, territorial governments, local government agencies, Indian tribes, authorized tribal organizations, and certain private nonprofit organizations in response to presidentially declared disasters to repair damaged public infrastructure such as roads, schools, and bridges. Figure 1 shows the total amount of PA funds obligated by county from January 2009 through February 2017 for federal disaster declarations. To implement the PA program, FEMA’s staff includes a mix of temporary, reservist, and permanent employees under two authorities, the Stafford Act and Title 5. Reservists make up the largest share of the PA workforce, which consisted of 1,852 employees––1,041 reservists, 634 full-time equivalents, and 177 temporary Cadre of On-Call Response/Recovery Employees––as of June 2017, according to PA officials. Figure 2 summarizes the key characteristics for each type of employee. After a disaster, FEMA sends PA program staff to the affected area to work with state and local officials to assess the damage prior to a disaster declaration. FEMA officials establish a temporary Joint Field Office (JFO) to house staff who will manage response and recovery functions after a declared disaster (including operations, emergency response and support teams, planning, administration, finance, and logistics). Once the President has declared a disaster, PA staff work with grant applicants to help them document damages, identify eligible costs and work, and prepare requests for PA grant funds by developing project proposals. These proposals may include proposals for hazard mitigation if the hazard mitigation work is related to the repair of damaged facilities, referred to as permanent work projects.
Immediate emergency measures, such as debris removal, are not eligible for hazard mitigation. Officials then review and obtain approval of the projects prior to FEMA obligating funds to state grantees. Figure 3 describes the process used to develop, review, and obligate PA projects. In addition to rebuilding and restoring infrastructure to its predisaster state, the PA program can be used to fund hazard mitigation measures that will reduce future risk to the infrastructure in conjunction with the repair of disaster-damaged facilities. There is no preset limit to the amount of PA funds a community may receive; however, PA hazard mitigation measures must be determined to be cost effective. Some examples of hazard mitigation measures that FEMA has predetermined to be cost effective, if they meet certain requirements, include installing shut-off valves on underground pipelines so that damaged sections can be isolated during or following a disaster; securing a roof using straps, clips, or other anchoring systems in locations subject to high winds; and installing shutters on windows or replacing glass with impact-resistant material. Applicants can also propose mitigation measures that are separate from the damaged portions of a facility, such as constructing floodwalls around damaged facilities to avoid future flooding. FEMA evaluates these proposals, considering how the proposed measure protects damaged portions of a facility and whether the measure is reasonable based on the extent of the damage, and determines eligibility on a case-by-case basis. FEMA’s Federal Insurance and Mitigation Administration (FIMA) deploys a cadre of mitigation staff to help coordinate and implement hazard mitigation activities during disaster recovery, including PA hazard mitigation. A primary task of these staff is to identify and assess opportunities to incorporate hazard mitigation into PA projects. 
Generally, if an applicant seeks to incorporate hazard mitigation measures into a PA project, FIMA’s hazard mitigation staff develop a hazard mitigation proposal. We, the DHS OIG, and others have reported past challenges with FEMA’s management of the PA program related to workforce management, information sharing, and hazard mitigation. For example, we reported in 2008 that the PA program had a shortage of experienced and knowledgeable staff, relied on temporary rotating staff, and provided limited training to their workforce, which impaired PA program delivery and delayed recovery efforts after Hurricanes Katrina and Rita. We found that staff turnover, coupled with information sharing challenges, delayed projects when applicants had to provide the same information each time FEMA assigned new staff and that poorly trained staff provided incomplete and inaccurate information during their initial meetings with applicants or made inaccurate eligibility determinations, which also caused processing delays. We recommended that FEMA strengthen continuity among staff involved in administering the PA program by developing protocols to improve information and document sharing among FEMA staff. In response, in 2013 FEMA instituted a PA Consistency Initiative, which included hiring new managers for FEMA regional offices, stakeholder training on PA program administration, and using a newly developed internal website to allow staff to post and share information to address continuity and knowledge sharing concerns during disaster operations. FEMA also developed the Public Assistance Program Delivery Transition Standard Operating Procedure to facilitate the transfer of responsibility for PA program activities during cases of staff turnover during recovery operations. 
Despite FEMA’s efforts to implement our recommendations, the DHS-OIG, in 2016, found continuing challenges after Hurricane Sandy with workforce levels, skills, and performance of reservists, who make up the majority of the PA workforce. Regarding information sharing, in 2008, we also identified difficulties sharing documents among federal, state, and local participants in the PA process and difficulties tracking the status of projects. We recommended that FEMA improve information sharing within the PA process by identifying and disseminating practices that facilitate more effective communication among federal, state, and local entities. In response, FEMA proceeded with the implementation of a grant tracking and management system, called EMMIE, which was first used in 2007. However, in subsequent years we found weaknesses in how FEMA developed the system, and the DHS-OIG found that information sharing problems similar to the ones identified in our 2008 report persisted. Regarding hazard mitigation, we reported in 2015 that state and local officials experienced challenges in using PA hazard mitigation during the Hurricane Sandy recovery efforts because PA officials did not consistently prioritize hazard mitigation, and in some cases discouraged mitigation projects during the PA grant application process, among other challenges. We recommended that FEMA assess the challenges state and local officials reported, including the extent to which they can be addressed, and implement corrective actions, as needed. In response to our recommendation, FEMA developed a corrective action plan that included actions and milestones for reviewing, updating, and implementing PA hazard mitigation policy. FEMA also identified the PA new delivery model as a solution for some of the challenges state and local officials reported.
Previously, the OIG also reported that PA program officials did not consistently identify eligible PA hazard mitigation projects, and that PA officials did not prioritize the identification of PA hazard mitigation opportunities at the onset of recovery efforts after the 2005 Gulf Coast hurricanes. See appendix I for a summary of findings and the status of our past recommendations on challenges with workforce management, information sharing, and hazard mitigation related to the PA program since our last review in December 2008. FEMA’s own internal reviews and outreach efforts have also identified similar challenges. For example, at FEMA’s request the Homeland Security Studies and Analysis Institute assessed the effectiveness and efficiency of the PA program in 2011. The institute’s report outlined three key findings and 23 recommendations relating to the PA preaward process. For example, the report found that FEMA could enhance training programs to develop a skilled and experienced workforce; utilize technology and employ web-based tools to support centralized processing, transparency, and efficient decision making; and identify and address potential special considerations, such as hazard mitigation proposals, as early as possible in the preaward process to improve consistency. In 2014, PA program officials analyzed the PA grant process and used input from agency staff and officials involved in various aspects of the program to identify potential improvements. The resulting Public Assistance Program Realignment report found that challenges in workforce management, information sharing, and hazard mitigation continued, and included recommendations for improvement. For example, the report concluded that a shortage of qualified staff, high turnover, unclear organizational responsibilities, and inconsistent training were long-standing and continuing challenges that impaired the PA pre-award process.
In addition, from January 2015 to April 2015, FEMA conducted extensive outreach with more than 260 stakeholders across FEMA headquarters, all 10 regions, 43 states, and 4 tribal nations to discuss challenges in the PA program and opportunities for improvement. For example, stakeholders identified challenges with ineffective information collection during the preaward process and suggested identifying special considerations, such as hazard mitigation, earlier in the PA process as an idea for improvement. In response, FEMA began redesigning the PA preaward process to operationalize the results of its 2014 report and address areas for improvement identified through its outreach efforts. FEMA awarded a contract for program support to help PA officials implement a redesigned PA program in 2015. This included a new process to develop and review grant applications, and obligate PA funds to states affected by disasters; new positions, such as a new program delivery manager who is the single point of contact throughout the grant application process; a new Consolidated Resource Center (CRC) to support field operations by supplementing project development, validation, and review of proposed PA project applications; and a new information system to maintain and share PA grant application documents. As part of the new process, PA program officials identified the need to ensure that staff emphasize special considerations, such as hazard mitigation, earlier in the process. Taken together, these efforts represent FEMA’s “new delivery model” for awarding PA program grants. Enhancements in the PA program under the new delivery model are presented in figure 4. 
Regarding the new delivery model process, FEMA introduced several changes to enhance outreach to applicants during the “exploratory call”— the first contact between FEMA and local officials—and during the first in- person meeting, called the “recovery scoping meeting.” FEMA also revised decision points during the process, when program officials can request more information from applicants, and applicants can review and approve the completion of project development steps. FEMA also incorporated special considerations, such as hazard mitigation, earlier in the new process during the exploratory calls and recovery scoping meetings. The changes and enhancements to the PA grant award process in the new delivery model are presented in figure 5. The new process divides proposed PA projects based on complexity and type of work into three categories—100 percent completed, standard, and specialized—that PA staff manage to expedite review or assign skilled staff to technical projects as needed. If the applicant has already completed work following a disaster, such as debris removal, it is considered “100 percent completed” and JFO staff collect the necessary documents and provide the information to the CRC staff who complete the development of project applications, validate the information, and complete all necessary reviews. Projects that require repairs and further assistance from PA program staff at the JFO include “standard” and “specialized” projects, which include a site inspection to document damages, before the JFO staff provide the information to the CRC. Further, PA program officials assign PA staff based on their skills and experience to standard projects, which are less technically complex to develop, and specialized projects, which are more technically complex and costly. 
We discuss the new workforce positions FEMA developed for JFOs and CRCs, the new information system FEMA developed to maintain and share PA grant documents with applicants, and FEMA’s efforts to incorporate hazard mitigation into PA projects later in this report. Since 2015, FEMA has invested almost $9 million to redesign the PA program through the reengineering and implementation of the new delivery model, including about $4.7 million for contract support for implementation, and $4 million for acquisition of the new information system. FEMA tested the new delivery model in a series of selected disasters, using a continuous process improvement approach to assess and improve the process, workforce changes, and information system requirements, prior to implementing the new model for all future disasters. For example, FEMA first tested the new process in Iowa in July 2015 and, in February 2016, PA program officials expanded their test to include all of the new staff positions. In October 2016, PA program officials added the new information system to achieve a comprehensive implementation of all of the elements of the new delivery model for the agency’s response to Hurricane Matthew in Georgia, two additional disasters in Georgia in January 2017, and in Missouri, North Dakota, Wyoming, Vermont, and two disasters in New Hampshire from June through August 2017. The timeline for PA’s implementation of the new delivery model is shown in figure 6. According to program officials, FEMA planned to implement the new model for all future disasters beginning in January 2018. However, historic disaster activity during the 2017 hurricane season accelerated full implementation. As a result, on September 12, 2017, FEMA officials announced that, unless officials determined it would be infeasible in an individual disaster, the program would use the new delivery model in all future disasters. 
According to FEMA’s 2014 PA Program Realignment report and other program documents, PA officials designed the new delivery model to respond to persistent workforce management challenges related to identifying the required number of staff and needed skills and training, among other things, to improve the efficiency and effectiveness of the PA preaward process. To address these challenges, PA program officials centralized much of the responsibility for processing PA projects in the CRCs, created additional new positions with specialized roles and responsibilities in JFOs, and established training and mentoring programs to help build new staff’s skills. In 2016, PA program officials centralized some of the project activities that otherwise were being carried out at individual JFOs at FEMA’s first new CRC in Denton, Texas. Officials did so by establishing 18 new positions, many of which directly correlated with positions that FEMA deployed to individual JFOs in the legacy PA delivery model. According to PA officials, centralizing positions will improve standardization in project processing and result in a higher quality work product. As part of the new delivery model, PA program officials created a new hazard mitigation liaison position for PA program staff at the CRC that did not previously exist at individual JFOs. The other new positions that PA program officials either created or centralized at the CRC included two specialized positions responsible for costing and validating PA projects. Previously, the PA project specialist deployed to the JFO would complete these tasks and others; however, the consistency of project development varied across the regions and disasters. In contrast, CRC staff are full-time employees who receive training to specialize in completing standardized project development steps for PA projects from multiple disasters on an ongoing basis.
Program officials anticipate that centralizing new specialized staff at the CRCs will also reduce PA administrative costs and staffing levels at the JFOs. For example, staff at the CRCs, such as the new hazard mitigation liaisons and insurance and costing specialists, could support project development for multiple disasters and regions simultaneously, whereas PA previously needed to deploy staff to each JFO to fulfill these roles. In addition, once JFOs operating under the new model send projects to the CRCs for processing and review, FEMA can more rapidly close its JFOs, reducing associated administrative costs. For example, following Hurricane Matthew, FEMA credited the new delivery model, in part, with its ability to close the JFO in Georgia sooner than several other JFOs in neighboring states not involved in the implementation of the new delivery model. PA program officials created new positions with more specialized roles and responsibilities to help PA staff at JFOs provide more consistency in the project development process and guidance to applicants. Program officials split the broad responsibilities previously managed at the JFOs by PA crew leaders and project specialists into two new, specialized positions: the program delivery manager and the site inspector. The program delivery manager serves as the applicant's single point of contact throughout the preaward process, manages communication with the applicant, and oversees document collection. All three PA grant applicants we spoke to following Hurricane Matthew in Georgia greatly appreciated the knowledge and assistance provided by their program delivery managers. Site inspectors are responsible for conducting the site inspection to document all disaster-related damages, determining the applicant's plans for recovery, coordinating with other specialists, and verifying the information collected with the applicant.
Officials expect that deployed staff at JFOs can complete the fieldwork faster and provide greater continuity of service to applicants. Further, program officials believe that specializing roles will enable them to provide more targeted training and improve employee satisfaction.

Photo caption: Site inspection, hazard mitigation, and environmental and historic preservation specialists, along with a new Public Assistance program mentor, conduct a site inspection with the applicant to document damages to a historic cemetery in Savannah, Georgia, following Hurricane Matthew in 2016.

PA program officials designed new training and mentoring programs for the new positions at the CRCs and JFOs and used a continuous feedback process to update and improve the training, position guides, and task books throughout the implementation of the new delivery model, according to PA officials. According to a June 2017 update of the PA Cadre Training Plan, training for the new model has five major focuses: required training and skills for position qualification; on-site refresher training; mentor training; regional-based state, local, tribal, and territorial training; and training on the new information system. Specifically, officials developed six new training courses and identified which are required for each position under the new delivery model. For example, a program delivery manager at the JFO is required to complete both the program delivery manager and site inspector specialist courses. As of June 2017, PA program officials had provided at least one new model training course to 93 percent of their cadre (including program delivery manager training to 366 individuals and site inspector training to 1,172 individuals) and planned to provide 28 additional courses to the PA cadre through September 2017. According to regional and CRC officials, the training courses and mentoring from experienced staff helped maximize new staff's capabilities in the new process.
Throughout the third implementation of the new delivery model, JFO and CRC staff, as well as regional PA staff, stakeholders, and applicants, identified staff skills and training as a key area that needed more attention before full implementation of the new delivery model. Our work and FEMA's after-action reports from the third test in Georgia identified problems with site inspector skills, which affected the timeliness and accuracy of projects. For example, specialists and managers at the CRC noted that poorly trained site inspectors did not consistently provide the necessary information from the field, which delayed CRC staff's processing of projects. According to a PA applicant in Georgia, the inconsistent skills and experience of their site inspector resulted in the need to conduct a "do-over" site inspection on one of the applicant's projects, causing delays. PA staff and state officials attribute much of the site inspectors' skill gaps to their lack of training and experience in the program. According to PA Region officials, providing timely training will be a resource-intensive challenge for implementing the new delivery model for all future disasters. For example, it can be difficult to train reservists before FEMA deploys them to disasters, and many of the program's experienced reservists have retired or resigned, resulting in few mentors for the program and a high need to provide training to inexperienced and newly hired staff. PA officials and stakeholders also emphasized the need for FEMA to provide additional training for state and local officials to build capacity and support the goals of the new delivery model. For example, according to JFO officials at the third implementation, the new delivery model increases responsibilities for applicants, who will require more applicant training than FEMA currently provides.
According to state officials, applicant capabilities vary, and FEMA should provide training to state and local officials on the new delivery model and the information system before a disaster. Skill gaps among applicants could result in inconsistent implementation of the new process, according to PA staff and stakeholders, and PA staff said that training was important to prevent applicants from reverting to the legacy PA grant application process. To support full implementation of the new delivery model for all disasters, PA program officials have updated training courses for PA staff and applicants and planned additional training to address these challenges and other lessons learned through the test implementations. For example, PA officials told us they updated the site inspector training program in May 2017 and scheduled a new site inspector training session in August 2017 to include more hands-on training to help address the skill gaps identified for site inspectors. PA officials created a new training course for FEMA's regional offices, in part to enable regional PA staff to provide new delivery model training to state and local officials. PA officials also planned to develop a self-paced, online course for state and local officials by the end of 2017.

PA officials have not fully assessed the workforce needed for JFO field operations, CRC staff, or FIMA's hazard mitigation staff to support implementation of the new delivery model for all future disasters. PA program officials developed an initial assessment in 2016 of the total number of staff needed in the field and the CRCs to estimate cost savings associated with consolidating and specializing positions at the CRCs and deploying fewer staff to JFOs. However, the assessment did not identify the number of staff required to fill specific positions, including the program delivery managers and hazard mitigation specialists needed to support the new delivery model for full implementation.
In reviewing the test implementations of the new delivery model, we found that inadequate staffing levels at the JFOs and CRCs, and among FIMA's hazard mitigation staff, affected staffs' ability to achieve the goals of the new delivery model. Staff levels at the JFO. We identified challenges with having the right number of program delivery managers and site inspection specialists to achieve program goals for customer satisfaction, efficiency, and quality in test implementations of the new delivery model. For example, in the second test implementation of the new delivery model in Oregon in 2016, PA did not deploy enough program delivery managers to the disaster, which resulted in unmanageable caseloads for program delivery managers, according to state and PA officials. PA program officials assigned program delivery managers an average caseload of 12 PA applicants, which was more than they could effectively manage, according to PA staff; program officials aim for a caseload of 8 to 10 applicants. According to state officials, local officials reported they did not always receive the support they needed from program delivery managers, who managed caseloads consisting of dozens of projects at multiple sites for each applicant during the Oregon implementation. Because program delivery managers were overwhelmed, local officials faced challenges understanding their responsibilities, such as recognizing when they needed to provide information for project development to proceed, according to state officials. PA staff involved with the third test implementation in Georgia in 2016 and 2017 said there were not enough site inspectors or program delivery managers to fully manage the workload at the JFO. Because of the specialization of roles, projects could not move forward when there were not enough staff to execute the next step in the process.
For example, PA staff at the JFO said program delivery managers completed recovery scoping meetings rapidly but faced a bottleneck in scheduling site inspections because there were more applicants awaiting inspections than the available site inspection specialists could serve. Staff levels at the CRC. Staff at the CRC reported challenges with staffing levels during the Oregon and Georgia test implementations and expressed concerns about how PA officials would staff the CRCs to support full implementation of the new model for all disasters. During the Oregon test implementation, a CRC specialist said there were not enough technical specialists to manage the workload and, as a result, PA program officials had to redeploy site inspectors from their JFO field operations to the CRC to complete costing estimates. During the third test in Georgia, quality assurance specialists said that their workload added stress as they tried to complete the work on time while adhering to quality standards. According to CRC specialists in Denton, Texas, PA officials had not determined required staff levels for full implementation; the specialists agreed that the workload was too high and that program officials needed to determine the appropriate staff levels for each CRC to support full implementation. PA officials said they were still evaluating CRC processing times and workload management from the Oregon and Georgia test implementations to determine staffing needs. Further, PA program officials plan to establish a second CRC in Winchester, Virginia, before the end of 2017, but have not determined the number of additional permanent full-time staff needed to support the CRCs for full implementation of the new delivery model. Staff levels for the hazard mitigation specialists. PA officials have not identified the number of hazard mitigation specialists in FIMA's hazard mitigation cadre needed for full implementation of the new delivery model.
According to JFO staff, current hazard mitigation staff levels are insufficient to provide the desired in-person participation of hazard mitigation staff in all recovery scoping meetings to share information on hazard mitigation with applicants and help them identify potential mitigation opportunities. A PA program official said officials missed opportunities to pursue hazard mitigation during the test implementation after Hurricane Matthew in Georgia due to a lack of hazard mitigation specialists. In addition, for the test implementation in Oregon, there were not enough hazard mitigation specialists to cover all site inspections and implement their new delivery model responsibilities, according to FEMA's after-action reports. The absence of hazard mitigation specialists in the early stages of PA project development may delay officials' identification of hazard mitigation opportunities, according to a FIMA official. PA program officials said they did not work with FIMA to determine the appropriate levels of hazard mitigation staff under the new delivery model because they were refining the new process, but as of June 2017 they were working with FIMA to do so. One of the key implementation activities in our Business Process Reengineering Assessment Guide is addressing workforce management issues. Specifically, this includes identifying how many and which employees will be affected by the position changes and retraining. Further, our prior work has found that high-performing organizations identify their current and future workforce needs—including the appropriate number and deployment of staff across the organization—and address workforce gaps to improve the contribution of critical skills and competencies needed for mission success. According to a PA program official, their initial workforce assessment was not comprehensive because they were still collecting the data required to make informed decisions.
PA officials agreed that updating their workforce assessments prior to full implementation could be helpful and acknowledged that program officials needed to be more proactive in applying lessons learned as they pivot from testing to full implementation of the new delivery model in 2018. FEMA also conducts a standard agency-wide workforce structure review every 2 to 3 years, which helps officials determine appropriate disaster workforce levels. As of June 2017, PA officials were working with other offices within FEMA to expedite the agency-wide assessment of the PA and FIMA hazard mitigation cadres, but did not know when they would complete the assessment. PA officials also acknowledged that they faced an aggressive schedule to complete various planned workforce management, training, and other efforts in support of full implementation, and that, in order to expedite the transition of the PA program to the new delivery model, they may not be able to complete all efforts as thoroughly as they would like. The gaps in PA workforce assessment in the JFOs, the CRCs, and FIMA's hazard mitigation cadre present a risk that PA program managers will not have a sufficient workforce to support the goals of the new delivery model. In addition, the hiring and training of new PA program staff could take multiple months, and program officials will need to know what staff levels are necessary for full implementation of the new delivery model to inform resource decisions for the program in coordination with other agency offices. According to PA program officials, workforce assessment efforts have been delayed as a result of disaster response and recovery efforts related to Hurricanes Harvey, Irma, and Maria.
Completing a workforce assessment will help program officials identify gaps in their workforce and skills, which could help PA program officials minimize the effects of long-standing workforce staffing and training challenges on PA program delivery and inform full implementation for all disasters.

The PA program's existing information system, EMMIE, has long-standing limitations. For example, EMMIE does not collect information on all of the preaward activities that are part of the PA grant application process. As a result, PA program officials said they, and applicants, must use ad hoc reports and personal tracking documents to manage and monitor the progress of grant applications. PA officials added that EMMIE is not user-friendly and applicants often struggle to access the system. In response to these ongoing challenges, PA program officials developed FAC-Trax—a separate information system from EMMIE—with new capabilities designed to improve transparency, efficiency, and management of the PA program. Specifically, FAC-Trax allows FEMA staff (PA Grants Manager) and applicants (PA Grants Portal) to review, manage, and track current PA project status and documentation. For example, applicants can use FAC-Trax to submit requests for public assistance, upload required project documentation, approve grant application items, and send and receive notifications on grant progress and activities. In addition, the FAC-Trax system includes standardized forms, as well as required fields and tasks that PA program staff and applicants must complete before moving on to the next steps in the PA preaward process. According to PA officials, these capabilities increase transparency, encourage greater applicant involvement, and enhance collaboration and communication between FEMA and grant applicants, improving efficiency in processing and awarding grant applications and enhancing the quality of project development.
Further, PA officials said that FAC-Trax could reduce challenges associated with staff turnover during the project development process because the system stores and maintains applicant information and project documentation, making it easier for transitioning staff to assist an applicant. They also said they use FAC-Trax to gather and analyze data that support management of the PA process, including measuring the timeliness of the grant application process. For example, during the test implementation of the new delivery model in Georgia following Hurricane Matthew, officials were able to document that, on average, program delivery managers took 5 days to conduct the exploratory call and 14 days to hold the recovery scoping meeting with applicants, and CRC officials took 33 days to develop and review grant proposals. Managers use these data to assess staffing needs and identify bottlenecks in the PA process, according to PA officials. FAC-Trax is critical to the new PA delivery model and will be a primary means of sharing grant application documents, tracking ongoing PA projects, and ensuring that FEMA staff and applicants follow PA grant policies and procedures. Given the importance of developing and testing this new information-sharing system, we evaluated its development against four key IT management controls: (1) project planning, (2) risk management, (3) requirements development, and (4) systems testing and integration. When implemented effectively, these controls provide assurance that IT systems will be delivered within cost and schedule and will meet the capabilities needed by their users. We found that FEMA's development of FAC-Trax fully satisfied best practices for project planning and risk management, but additional steps are needed to fully satisfy the areas of requirements development and systems testing and integration, as discussed below. See appendix II for the full assessment of each IT management control.
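The kind of stage-level timeliness analysis described above can be sketched in a few lines. This is an illustrative sketch only: the record layout, field names, and sample dates are assumptions made for the example, not FAC-Trax's actual data model.

```python
from datetime import date
from statistics import mean

# Hypothetical applicant records with milestone dates; FAC-Trax's real
# schema is not public, so these field names are illustrative only.
applicants = [
    {"declared": date(2016, 10, 8),
     "exploratory_call": date(2016, 10, 12),
     "scoping_meeting": date(2016, 10, 25)},
    {"declared": date(2016, 10, 8),
     "exploratory_call": date(2016, 10, 14),
     "scoping_meeting": date(2016, 10, 23)},
]

def avg_days(records, start_field, end_field):
    """Average elapsed days between two milestones across all records."""
    return mean((r[end_field] - r[start_field]).days for r in records)

# Average days from declaration to exploratory call, and from the call
# to the recovery scoping meeting, for this sample.
print(avg_days(applicants, "declared", "exploratory_call"))         # 5
print(avg_days(applicants, "exploratory_call", "scoping_meeting"))  # 11
```

A bottleneck shows up as an unusually large average for a single stage, which is how stage-level measures of this kind could inform staffing decisions.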
PA program officials fully satisfied all five practices in the project planning control area, according to our assessment. Key project planning practices are (1) establishing and maintaining the program's acquisition strategy, (2) developing and maintaining the overall project plan and obtaining commitment from relevant stakeholders, (3) developing and maintaining the program's cost estimate, (4) establishing and maintaining the program's schedule estimate, and (5) identifying the necessary knowledge and skills needed to carry out the program. To address the first and second practices, program officials established detailed plans that describe the acquisition strategy and objectives, the program's scope, and its framework for using an Agile software development approach, among other key actions. Agile is a method of software development that uses an iterative process and continually improves software based on user needs and feedback. Program officials also developed a plan detailing the program's approach to deploy and maintain FAC-Trax and established stakeholder groups and an integrated product team to support and oversee the development of FAC-Trax. To address the third and fourth practices, they developed and maintained a master schedule of all implementation tasks and milestones through project completion, and developed a life-cycle cost estimate of over $19 million. Additionally, FAC-Trax's acquisition performance baseline describes the system's minimum acceptable and desired baselines for performance, schedule, and cost. Finally, with regard to the fifth practice, program officials identified the knowledge and skills needed to carry out the program in the FAC-Trax Request for Proposal and FAC-Trax Capability Development Plan. PA program officials fully satisfied all four practices in the risk management control area, according to our assessment.
Key risk management practices are (1) identifying risks, threats, and vulnerabilities that could negatively affect work efforts, (2) evaluating and categorizing each identified risk using defined risk categories and parameters, (3) developing risk mitigation plans for selected risks, and (4) monitoring the status of each risk periodically and implementing the risk mitigation plan as appropriate. To address the first and second practices, program officials identified key risks that could negatively affect FAC-Trax in a "risk register"—an online site used to track risks, issues, and mitigating actions. As of May 2017, officials had identified 13 risks in the risk register—four open and nine closed—and evaluated and categorized the identified risks based on the probability of occurrence and scope, schedule, and cost impacts. For example, program officials reported that two of the program's open risks have a "medium" risk rating—meaning the risk has the potential to slightly affect project cost, schedule, or performance. To address the third and fourth practices, program officials developed and documented risk mitigation plans for all identified risks. For example, program officials planned to mitigate the risk of limited engagement of subject matter experts by identifying and engaging with appropriate experts through workshops, and monitoring the capability development process to identify any issues that may cause project delays. In addition, PA program officials documented the responsible officials, reevaluation date, and risk status, among other things, for each risk in the register, and reviewed and updated risks during weekly and monthly program reviews with stakeholders throughout FEMA. PA program officials fully satisfied four out of five practices in the requirements development control area, according to our assessment.
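A risk register of the sort described above, with each entry categorized by probability of occurrence and potential impact, could be modeled along these lines. The rating scales, thresholds, and example entries are illustrative assumptions, not FEMA's actual register format or criteria.

```python
def risk_rating(probability, impact):
    """Map 1-5 probability and impact scores to a qualitative rating.
    The scoring thresholds here are illustrative assumptions."""
    score = probability * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Hypothetical register entries mirroring the fields described above:
# status, mitigation plan, responsible official, and reevaluation date.
risk_register = [
    {"id": "R-01", "title": "Limited engagement of subject matter experts",
     "probability": 3, "impact": 3, "status": "open",
     "mitigation": "Identify and engage experts through workshops",
     "owner": "Program manager", "reevaluate": "2017-06-01"},
    {"id": "R-02", "title": "Capability development delays",
     "probability": 2, "impact": 4, "status": "open",
     "mitigation": "Monitor development process for emerging issues",
     "owner": "Contracting officer", "reevaluate": "2017-06-01"},
]

# Derive a rating for each entry, then list the open risks for review.
for risk in risk_register:
    risk["rating"] = risk_rating(risk["probability"], risk["impact"])

open_risks = [r["id"] for r in risk_register if r["status"] == "open"]
print(open_risks)  # ['R-01', 'R-02']
```

Under these assumed thresholds, both sample entries rate "medium," consistent with a register in which a risk's rating is reviewed and updated at each periodic program review.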
Key requirements development practices are (1) eliciting stakeholder needs, expectations, and constraints, and transforming them into prioritized customer requirements; (2) developing and reviewing operational concepts and scenarios to refine and discover requirements; (3) analyzing requirements to ensure that they are complete, feasible, and verifiable; (4) analyzing requirements to balance stakeholder needs and constraints; and (5) testing and validating the system as it is being developed. To address the first and second practices, program officials developed a requirements management plan outlining how officials capture, assess, and plan for FAC-Trax enhancements, and established a change control process to review, prioritize, and verify user requests for changes to the system and feedback. As of May 2017, the PA program office received 734 change requests related to FAC-Trax, of which program officials completed 420 changes and planned to address an additional 277 entries. Program officials also developed a functional requirements document outlining the high-level requirements for FAC-Trax and detailed operational concepts and scenarios for each phase of the preaward process in the system's concept of operations. To address the fourth practice, program officials created a standard template to analyze and document the user needs and acceptance criteria for planned system capabilities in March 2017. In addition, PA program officials identified risks and dependencies for recommended changes to FAC-Trax, and balanced the cost and priority of system enhancements as part of the change control process. Lastly, regarding the fifth practice, program officials tested and evaluated FAC-Trax during development, which included validating system enhancements through user acceptance testing.
However, program officials did not fully address the third practice—analyzing requirements to ensure they are complete, feasible, and verifiable—because they did not ensure detailed user requirements were necessary and sufficient by tracking them back to higher-level requirements. For example, although program officials reviewed change requests for completeness and followed up with users to verify requirements, officials did not track system enhancements, made in response to detailed user requirements (e.g., allowing users to search PA projects by project number), back to the high-level requirements (e.g., storing data and information provided by the applicant) identified in the FAC-Trax functional requirements document and performance work statement. Officials did not track system enhancements back to high-level requirements because they did not have a complete understanding of basic user needs and system requirements at the beginning of the FAC-Trax effort, according to the PA program manager. A PA official also said the change control process was a way to identify the basic capabilities FAC-Trax needed to have and that tracking enhancements back to high-level requirements could have made the change control process more difficult to manage, and reduced user participation if, for example, users needed to understand how their change requests related to high-level requirements. However, program officials could have tracked enhancements back to high-level requirements themselves using the change control process without putting any additional burden on users.
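The traceability check this practice calls for—linking each detailed enhancement back to an approved high-level requirement—can be sketched simply. The requirement IDs and change requests below are hypothetical examples for illustration, not FAC-Trax's actual requirements.

```python
# Hypothetical high-level requirements, as might appear in a functional
# requirements document, keyed by ID.
high_level = {
    "HLR-01": "Store data and information provided by the applicant",
    "HLR-02": "Track project status through the preaward process",
}

# Detailed change requests; 'traces_to' records the high-level
# requirement each enhancement supports (the linkage GAO found missing).
change_requests = [
    {"id": "CR-101", "summary": "Search PA projects by project number",
     "traces_to": "HLR-01"},
    {"id": "CR-102", "summary": "Custom dashboard color themes",
     "traces_to": None},
]

def untraced(requests, requirements):
    """Flag change requests with no valid link to a high-level
    requirement -- candidates for a necessity/sufficiency review."""
    return [r["id"] for r in requests if r["traces_to"] not in requirements]

print(untraced(change_requests, high_level))  # ['CR-102']
```

Because the linkage is recorded inside the change control records themselves, officials could run such a check without users ever seeing the high-level requirements, consistent with the point made above.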
Despite not having a complete understanding of user needs and system requirements at the beginning of the FAC-Trax effort, analyzing whether users’ change requests satisfy higher-level requirements identified in key design and planning documents would have provided officials with a basis for more detailed and precise requirements throughout project development and helped them better manage the project, according to IT management controls. Further, according to the PMBOK® Guide, tracking or measuring system capabilities against approved requirements is a key process for managing a project’s scope, measuring project completion, and ensuring the project meets user needs and expectations. Program officials acknowledged the importance of tracking system enhancements back to documented system requirements. Ensuring that FAC-Trax meets user needs and expectations is especially important because the information system is key to the success of the new delivery model, according to PA officials. By analyzing progress made on documented, high-level requirements, a step that reflects a key IT management control for requirements development, the PA program will have greater assurance that FAC-Trax will provide functionality that meets user needs and expectations. PA program officials did not fully satisfy either of the two practices in the systems testing and integration control area, according to our assessment. 
Key systems testing and integration practices are (1) developing test plans and test cases, which include a description of the overall approach for system testing, the set of tasks necessary to prepare for and perform testing, the roles and responsibilities for individuals or groups responsible for testing, and criteria to determine whether the system has passed or failed testing; and (2) developing a systems integration plan to identify all systems to be integrated, describe how integration problems are to be documented and resolved, define roles and responsibilities of all relevant participants, and establish a sequence and schedule for every integration step. With regard to the first practice, PA program officials and the FAC-Trax contractor established a test plan that identifies the method and strategy for testing, including the necessary tasks (such as responding to user feedback, testing errors, and incorporating necessary resolutions into future work), testing parameters, and the roles and responsibilities of the individuals responsible for testing. However, program officials have not developed system testing criteria to evaluate FAC-Trax, which would align with the practice described above of using criteria to determine whether the system has passed or failed testing. A key feature of Agile software development is the "definition of done"—a set of clear, comprehensive, and objective criteria that the government should use to evaluate software after each iteration of development. PA program officials said they did not establish a definition of done because officials initially managing the FAC-Trax effort lacked familiarity with system development in the Agile environment. Officials acknowledged the importance of establishing a definition of done and said they are planning to develop one, but have not identified how or when to incorporate it into the development process.
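A "definition of done" of the kind described above amounts to an objective checklist applied after each development iteration. The criteria below are illustrative assumptions; in practice the government and vendor would negotiate the actual list.

```python
# Illustrative completion criteria; a real definition of done would be
# agreed between the government and the vendor, not assumed as here.
DEFINITION_OF_DONE = [
    "code_reviewed",
    "unit_tests_pass",
    "user_acceptance_passed",
    "documentation_updated",
]

def iteration_done(results):
    """An iteration passes only if every criterion is satisfied;
    return the pass/fail verdict and any unmet criteria."""
    failed = [c for c in DEFINITION_OF_DONE if not results.get(c, False)]
    return (not failed, failed)

verdict, unmet = iteration_done({
    "code_reviewed": True,
    "unit_tests_pass": True,
    "user_acceptance_passed": False,
    "documentation_updated": True,
})
print(verdict, unmet)  # False ['user_acceptance_passed']
```

The value of making the criteria explicit is that "done" stops being a judgment call: an iteration either satisfies every listed criterion or it is returned to the vendor with the unmet items identified.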
According to the TechFAR—the government's handbook for procuring digital services using Agile processes—the government and vendor should establish this definition after contract award at the beginning of each cycle of software development. By establishing criteria, such as a definition of done, to evaluate the system—a step that reflects a key IT management control for system testing and is an effective practice for applying Agile to software development—the PA program will have greater assurance that FAC-Trax is usable and responsive to specified requirements. With regard to the second practice, PA program officials developed a systems integration plan in June 2017 that identified the potential for integration of FAC-Trax with four FEMA systems, including EMMIE. In addition, program officials included a description of how staff should document integration problems and their resolution in FAC-Trax development and test plans. However, the systems integration plan does not define roles and responsibilities of all participants for system integration activities or establish a sequence and schedule for every integration step for the four FEMA systems. PA officials said that system integration planning for FAC-Trax is in the early stages, but acknowledged the importance of these elements of system integration planning. Officials plan to define roles and responsibilities of all participants for system integration activities and to develop the sequence and schedule for every integration step as they add new systems to the FAC-Trax development plan and obtain the funding needed for their integration. Nonetheless, FEMA has used FAC-Trax for selected PA disasters since October 2016 and plans to use FAC-Trax for all future disasters. According to IT management controls, agencies should establish the systems integration plan early in the project and revise it to reflect evolving and emerging user needs.
By ensuring that the FAC-Trax systems integration plan defines the roles and responsibilities of relevant participants for all integration relationships and establishes a sequence and schedule for every integration step, the PA program will have greater assurance that FAC-Trax functions properly with other systems and meets user needs.

FEMA's new delivery model enhances participation of hazard mitigation staff with the goal of identifying opportunities for mitigation earlier in the PA preaward process, according to PA officials. Two key changes related to hazard mitigation under the new model include (1) an emphasis on engaging with hazard mitigation specialists at the JFO earlier in the PA process and involving them in specific PA preaward activities and (2) the establishment of the PA program's hazard mitigation liaison at the CRC. For example, position guides direct program delivery managers to coordinate with FIMA's hazard mitigation specialists prior to recovery scoping meetings, and site inspectors to coordinate with hazard mitigation specialists prior to site inspections to discuss a PA grant applicant's damages and any potential mitigation opportunities. PA program officials also developed guidance for conducting the exploratory call and the recovery scoping meeting with applicants, which includes questions for PA staff to ask about the applicant's interest in or plans for incorporating hazard mitigation into potential projects. In addition, a new hazard mitigation liaison at the CRC is responsible for reviewing PA projects for hazard mitigation opportunities and serving as a mitigation subject matter expert for the PA program. According to data provided by FEMA, PA grant applicants incorporated hazard mitigation into approximately 18 percent of permanent work projects for all disasters nationwide from 2012 to 2015.
During test implementation of the new delivery model, state, PA, and FIMA officials all reported an increase in the number of hazard mitigation activities on PA permanent work projects. For example, state officials who participated in the second new model test in Oregon said that effective communication and coordination between PA and hazard mitigation staff resulted in applicants incorporating hazard mitigation into over 60 percent of permanent work projects. Furthermore, PA officials reported an increase in hazard mitigation during the third test implementation of the new model in Georgia following Hurricane Matthew, where approximately 16 percent of permanent work projects included mitigation as of June 2017. This is an increase from about 3 percent, the PA program’s estimate of the proportion of projects that incorporated hazard mitigation in previous PA hurricane disasters in Georgia, according to PA officials.

While PA officials are trying to increase hazard mitigation through the new delivery model, not all disasters present the same number of opportunities to incorporate hazard mitigation. First, the PA program only incorporates hazard mitigation measures for permanent work projects, such as repairs to roads, bridges, and buildings. For example, as of June 2017, approximately 60 percent of the projects FEMA funded in Georgia for the third test implementation after Hurricane Matthew were for emergency work, which is not eligible for hazard mitigation measures. Second, the PA program only funds mitigation measures that officials determine to be cost-effective. In addition, we have previously reported on other factors that affect whether applicants incorporate hazard mitigation into PA projects, such as their capacity to manage and ability to fund hazard mitigation projects.
National Planning for Hazard Mitigation

In our 2015 report on disaster resilience following Hurricane Sandy, we noted that disaster-affected areas have different threats and vulnerabilities, and local stakeholders make the ultimate determination whether to incorporate hazard mitigation into a project. Further, without a strategic approach to making disaster resilience investments, the federal government and its nonfederal partners may be unable to fully capitalize on opportunities to mitigate the greatest known threats and hazards. We recommended that the Mitigation Framework Leadership Group develop an investment strategy to help ensure that federal funds expended to enhance disaster resilience achieve, as effectively and efficiently as possible, the goal of reducing the nation’s fiscal exposure, which stems from climate change and the rise in the number of federal major disaster declarations. In response, FEMA plans to issue a final National Mitigation Investment Strategy in 2018. The goals of this strategy include increasing the effectiveness of investments in reducing disaster losses and increasing resilience, and improving coordination of disaster risk management among federal, state, local, tribal, territorial, and private entities. Although the new model establishes hazard mitigation activities for PA and FIMA staff in the preaward process, it does not standardize and prioritize hazard mitigation planning at JFOs in the way FEMA has done with prior PA program policy.
Specifically, FEMA’s 2007 PA program policy standardized planning for hazard mitigation across PA recovery efforts by stating that agency and state officials should issue a memorandum of understanding (MOU) early in the disaster, outlining how PA hazard mitigation will be addressed for the disaster, including what mitigation measures will be emphasized, which codes and standards apply, and any potential integration with other mitigation grant programs. However, PA program officials did not include guidance that standardizes planning for hazard mitigation, such as encouraging the use of an MOU, in FEMA’s 2010 PA program policy, the most recent update to the Public Assistance Program and Policy Guide in April 2017, or the New Delivery Model Operations Manual. As a result, FIMA officials said FEMA and state officials do not consistently issue MOUs that outline how FEMA and the state plan to promote PA hazard mitigation during the recovery effort, explaining that the use of the MOU is based on the preferences and priorities of the FEMA officials involved. When not issuing an MOU, FIMA hazard mitigation staff and PA officials at the JFO meet to determine the extent to which hazard mitigation staff interact directly with applicants regarding PA hazard mitigation during the recovery process, according to a FIMA official. Having a consistent approach to planning for and prioritizing hazard mitigation across all disasters is important for FEMA, given that FEMA has experienced challenges consistently prioritizing and integrating hazard mitigation across PA recovery efforts, according to GAO and others. For example, in our 2015 report on resilience in the Hurricane Sandy recovery, we found that state and local officials experienced challenges maximizing disaster resilience in the recovery effort because PA officials did not consistently prioritize hazard mitigation during project development.
According to FEMA’s National Mitigation Framework, planning is vital for mitigation efforts during disaster recovery, and federal, state, and local officials should establish procedures that emphasize a coordinated delivery of mitigation activities and capitalize on opportunities to reduce future disaster losses. Similarly, the Recovery Federal Interagency Operational Plan, which supports FEMA’s National Disaster Recovery Framework, identifies planning as a key task for identifying mitigation opportunities and integrating risk reduction considerations into decisions and investments during the recovery process. FIMA officials agreed that including in operations guidance the development of a formal plan for PA hazard mitigation, similar to the MOU provision in the 2007 PA program policy, would help program officials plan for and prioritize hazard mitigation. They noted that FIMA’s hazard mitigation field operations guide includes procedures for implementing proposed MOUs to achieve mitigation goals. PA program officials said that, in light of changes to the PA process under the new model and subsequent updates to program policies, the MOU policy from the 2007 PA program policy was outdated. But officials agreed that planning for and prioritizing hazard mitigation at the operational level is important and said they were examining additional ways to incorporate these activities early in the PA process. As FEMA continues to implement the new model, establishing procedures to standardize hazard mitigation planning for each disaster, as it did through prior policy, could improve the prioritization of hazard mitigation in PA recovery efforts and increase the effectiveness of mitigation for reducing disaster losses and increasing resilience.
PA program officials developed performance objectives and measures for hazard mitigation in the new delivery model, but could add measures to better align performance assessment for the PA program with FEMA’s broader strategic goals for hazard mitigation. In its strategic plan for 2014–2018, FEMA established an agency-wide goal to increase the percentage of FEMA-funded disaster projects, such as those under the PA program, that provide mitigation above local, state, and federal building code requirements by 5 percentage points by the end of fiscal year 2018. For example, local building codes may require measures for new construction to mitigate against future damage. To align with FEMA’s strategic goal, PA officials would also need to measure the number of PA projects that included mitigation measures that bring any repaired infrastructure to a level above applicable building codes. However, under the new model, FEMA officials developed performance objectives (and associated measures) to increase the number of projects that include hazard mitigation by 5 percent, and increase the total dollars spent on hazard mitigation by 2 percent. While these measures could help to incentivize mitigation, they are not tied to building codes and do not include specific information that FEMA could use to continually assess the PA program’s contributions to meeting the agency’s strategic goal. According to Standards for Internal Control in the Federal Government, agency management should design control activities, such as establishing and reviewing performance measures, to achieve the agency’s objectives. In addition, our work on leading public sector organizations has found that such organizations assess the extent to which their programs and activities contribute to meeting their mission and desired outcomes, and strive to establish clear hierarchies of performance goals and measures. 
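Note that the agency-wide goal above is expressed in percentage points while the new-model objective is a percent increase, and the two are not interchangeable. A quick sketch of the arithmetic, using the roughly 18 percent 2012–2015 baseline cited earlier (the targets below are illustrative only, not FEMA's actual computations):

```python
# Illustrative arithmetic only: 18.0 is the approximate 2012-2015 share of
# permanent work projects with mitigation cited in this report; the targets
# below are not FEMA's actual goal calculations.
baseline = 18.0  # percent of projects incorporating mitigation

# A 5-percentage-point increase adds 5 to the rate itself.
points_target = baseline + 5                 # 23.0 percent

# A 5 percent increase scales the rate by a factor of 1.05.
percent_target = round(baseline * 1.05, 1)   # 18.9 percent

print(points_target, percent_target)
```

From the same baseline, the percentage-point goal implies a substantially larger change than the percent goal, which is one reason measures stated in different units cannot be rolled up directly.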
A clear connection between performance measures and program offices helps to both reinforce accountability and ensure that, in their day-to-day activities, managers keep in mind the outcomes their organization is striving to achieve. FEMA’s ability to evaluate and report on PA hazard mitigation data is constrained, but officials are addressing this challenge through the development of data reporting and analytics capabilities for the PA program’s new information system, according to PA officials. PA program officials developed measures they could use to evaluate the new model during test implementation and compare its performance to the legacy PA process, and agreed that aligning PA program hazard mitigation goals with FEMA’s agency-wide strategic goals would be helpful. As FEMA continues to develop and implement the new model, developing performance measures and objectives that better inform and support the agency’s broader strategic goals could help ensure that FEMA capitalizes on hazard mitigation opportunities in PA recovery efforts.

FEMA’s Public Assistance grant program is a complicated, multi-billion dollar program that is critical to helping state and local communities rebuild and recover after a major disaster. In recent years, FEMA has undertaken a major reengineering effort to make the PA preaward process simpler and more efficient for applicants and to address challenges encountered during recovery from past disasters. FEMA’s new delivery model represents a significant opportunity to strengthen the PA program and address these past challenges, and growing pains are to be expected when implementing any large reengineering effort. Further, FEMA officials are working to implement these changes while supporting response and recovery following disasters, including the catastrophic flooding from Hurricane Harvey in August 2017 and widespread damage from Hurricanes Irma and Maria in September 2017.
As such, it is critical that feedback obtained and lessons learned while testing the new model inform decisions and actions as FEMA proceeds with full implementation for all disasters, including the complex recovery efforts in the states and territories affected by Hurricanes Harvey, Irma, and Maria. FEMA has redesigned the PA delivery model to address various challenges related to workforce management, information sharing with state and local grantees, and incorporating hazard mitigation into PA projects. FEMA has developed new workforce processes, training, and positions to address past challenges, but completing a workforce assessment that identifies the number of staff needed will inform workforce management and resource allocation decisions to help FEMA ensure a more successful implementation. This is particularly important as FEMA is using the new model for the long-term recovery from the 2017 hurricanes, and FEMA faces capacity challenges as its workforce is stretched thin. Further, FEMA’s new FAC-Trax information sharing system provides FEMA and state and local applicants and grantees with better capabilities to address past challenges in managing and tracking PA projects. In developing FAC-Trax, FEMA implemented many of the key IT management controls that can help ensure that new IT systems are implemented effectively. However, additional steps are needed to fully satisfy the areas of requirements development and systems testing and integration. Finally, FEMA has taken some actions to better promote hazard mitigation as part of its new PA model. 
However, more consistent planning for hazard mitigation following a PA disaster, along with specific performance measures and objectives that better align with and support the agency’s broader strategic goals related to hazard mitigation, could help ensure that mitigation is incorporated into recovery efforts. Doing so presents an opportunity to encourage disaster resilience and reduce federal fiscal exposure from recurring catastrophic natural disasters.

We are making the following five recommendations to FEMA’s Assistant Administrator for Recovery:

The FEMA Assistant Administrator for Recovery should complete a workforce staffing assessment that identifies the appropriate number of staff needed at joint field offices, Consolidated Resource Centers, and in FIMA’s hazard mitigation cadre to implement the new delivery model nationwide. (Recommendation 1)

The FEMA Assistant Administrator for Recovery should establish controls for tracking FAC-Trax capabilities to the system’s functional and operational requirements to more fully satisfy requirements development controls and ensure that the new information system provides capabilities that meet users’ needs and expectations. (Recommendation 2)

The FEMA Assistant Administrator for Recovery should establish system testing criteria, such as a “definition of done,” to assess FAC-Trax as it is developed; define the roles and responsibilities of all participants; and develop the sequence and schedule for integration of other systems with FAC-Trax to more fully satisfy systems testing and integration controls.
(Recommendation 3)

The FEMA Assistant Administrator for Recovery, in coordination with the Associate Administrator of the Federal Insurance and Mitigation Administration, should implement procedures to standardize planning for addressing PA hazard mitigation at the joint field offices, for example, by requiring FEMA and state officials to develop a memorandum of understanding outlining how they will prioritize and address hazard mitigation following a disaster, as it did through prior policy. (Recommendation 4)

The FEMA Assistant Administrator for Recovery, in coordination with the Associate Administrator of the Federal Insurance and Mitigation Administration, should develop performance measures and associated objectives for the new delivery model to better align with FEMA’s strategic goal for hazard mitigation in the recovery process. (Recommendation 5)

We provided a draft of this report to DHS and FEMA for review and comment. DHS provided written comments, which are reproduced in appendix III. In its comments, DHS concurred with our recommendations and described actions planned to address them. FEMA also provided technical comments, which we incorporated as appropriate.

With regard to our first recommendation, that FEMA complete a workforce staffing assessment that identifies the number of staff needed at joint field offices, Consolidated Resource Centers, and FIMA’s hazard mitigation cadre, DHS stated that PA, in coordination with the Field Operations Directorate and FIMA, will continue to refine and evaluate staffing needs and update the cadre force structures under the new delivery model. DHS estimated that this effort would be completed by June 28, 2019. This action, if fully implemented, should address the intent of the recommendation.
With regard to our second recommendation, that FEMA establish controls for tracking FAC-Trax capabilities to ensure the new information system meets users’ needs, DHS stated that Recovery program managers will update the FAC-Trax Requirements Management Plan and the FAC-Trax Release Plan to ensure the tracking and traceability of FAC-Trax functional and operational requirements. DHS estimated that this effort would be completed by January 31, 2018. This action, if fully implemented, should address the intent of the recommendation.

With regard to our third recommendation, that FEMA establish systems testing criteria to assess the development of FAC-Trax and define the roles, responsibilities, sequence, and schedule for system integration, DHS stated that Recovery program managers will update the FAC-Trax System Integration Plan to include integration with the Deployment Tracking System, Enterprise Data Warehouse, Preliminary Damage Assessment interface, and State Grants Management system interface. DHS estimated that this effort would be completed by June 29, 2018. This action, if fully implemented, should address the intent of the recommendation.

With regard to our fourth recommendation, that FEMA implement procedures to standardize planning for addressing PA hazard mitigation at the JFO, DHS stated that PA will update current process documents or develop new documents to better incorporate mitigation into the operational planning phase of the new delivery model. DHS estimated that this effort would be completed by July 31, 2018. This action, if fully implemented, should address the intent of the recommendation.
With regard to our fifth recommendation, that PA coordinate with FIMA to develop performance measures and associated objectives for the new delivery model that better align with FEMA’s strategic goals for hazard mitigation in the recovery process, DHS stated that PA will reconvene the PA-Mitigation working group to develop and refine PA-related hazard mitigation performance measures. DHS estimated that this effort would be completed by June 29, 2018. This action, if fully implemented, should address the intent of the recommendation.

We are sending copies of this report to the Secretary of Homeland Security and interested congressional committees. If you or your staff have any questions about this report, please contact me at (404) 679-1875 or CurrieC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix II: Assessment of Information Technology Management Controls for the FEMA Applicant Case Tracker (FAC-Trax)

Table 2 shows details on the Federal Emergency Management Agency (FEMA) Public Assistance (PA) program office’s implementation of key practices across four information technology (IT) management control areas for its new information system, the FEMA Applicant Case Tracker (FAC-Trax). PA developed FAC-Trax as a web-based project tracking and case management system to supplement the Emergency Management Mission Integrated Environment (EMMIE) and help resolve long-standing information sharing challenges.
To determine the extent to which the FAC-Trax program office implemented IT management controls, we reviewed documentation from the FAC-Trax program and compared it to key management best practices, including the Software Engineering Institute’s Capability Maturity Model® Integration for Acquisition and Development, the Project Management Institute’s Guide to the Project Management Body of Knowledge (PMBOK® Guide), and the Institute of Electrical and Electronics Engineers’ Standard for Software and System Test Documentation. We assessed the program as having fully implemented a practice if the agency provided evidence that it fully addressed the practice; partially implemented if the agency provided evidence that it addressed some, but not all, portions of the practice; and not implemented if the agency did not provide any evidence that it addressed the practice.

Table 2. Public Assistance (PA) Program Office’s Implementation of Key Information Technology Management Controls for FAC-Trax

PA program officials developed an acquisition plan for FAC-Trax identifying the capabilities the system is intended to deliver, the acquisition approach, and acquisition objectives. Additionally, program officials developed a capability development plan outlining a strategy for the program to obtain approval to acquire FAC-Trax. Lastly, program officials developed a systems engineering plan describing the program’s scope and its framework for using an Agile development approach, as well as a deployment, support, and maintenance plan for FAC-Trax. PA program officials developed an acquisition program baseline detailing FAC-Trax’s cost parameters and a life-cycle cost estimate for the system. As of May 2017, the life-cycle cost estimate for FAC-Trax through fiscal year (FY) 2023 is approximately $19.3 million.
PA program officials updated the life-cycle cost estimate for FYs 2016 and 2017 after price negotiations with the FAC-Trax contractor, and will continue to update the estimate as annual budgets are approved, according to the Integrated Logistic Support Plan. The contracting officer’s representative for FAC-Trax performs a cost review at the end of each month, according to program officials. Furthermore, the contractor’s weekly status report includes information on the number of hours worked and the percent of contract value spent. Program officials also review program costs with Office of Response and Recovery, PA, Office of the Chief Information Officer (OCIO), and other program office stakeholders during a weekly program review. PA program officials developed an acquisition program baseline detailing FAC-Trax’s schedule parameters, as well as an integrated master schedule for the system. The integrated master schedule identifies tasks, major milestones, and task dependencies. The PA program manager reviews and updates the integrated master schedule on a weekly basis. Program officials also review FAC-Trax’s schedule with Office of Response and Recovery, PA, OCIO, and other program office stakeholders during a weekly program review. PA program officials identified the knowledge and skills needed to carry out the program in FAC-Trax contract documentation and the capability development plan. Specifically, program officials included an attachment to the FAC-Trax contract listing the required labor categories and corresponding functional position descriptions. Program officials also described the role, position type, minimum grade, and minimum certification for required personnel resources for the acquisition, development, and implementation of FAC-Trax. PA program officials developed, reviewed, and maintained project planning documents and obtained commitment from relevant stakeholders. 
For example, program officials reviewed and updated the integrated master schedule and costs on a weekly and monthly basis, respectively. Further, program officials reviewed the status of project elements, such as the schedule, quality and technical issues, stakeholders, staffing, cost, and risks, with Office of Response and Recovery, PA, OCIO, and other program office stakeholders during a weekly program review. PA program officials also established tactical, functional, and stakeholder groups, as well as an Integrated Product Team to support and oversee the development of FAC-Trax. FEMA’s Recovery Technology Programs Division (RTPD) has a division-level risk management plan that serves as guidance for all Recovery systems, including FAC-Trax. Program officials identified key risks that could negatively affect FAC-Trax work efforts in RTPD’s “risk register”—an online site used to track risks, issues, and mitigating actions for the division and each program office. Program officials also identified five technical, cost, and schedule risks in the FAC-Trax acquisition plan. Program officials included one of these risks in the risk register, while the remaining four were managed outside of the register. As of May 2017, program officials had identified 13 risks in the risk register—four open and nine closed. The four open risks were (1) limited subject matter expert engagement during requirements development, (2) vacancies in program management office support positions, (3) unresolved service level agreement support and funding issues, and (4) the loss of the authority to operate due to a Trusted Internet Connection that is not compliant with Department of Homeland Security security policy. Program officials evaluated and categorized the identified risks based on the probability of occurrence and scope, schedule, and cost impacts. These four points of measurement are used to calculate an overall risk score.
The risk score helps program officials determine a risk’s rating—low, medium, or high. For example, program officials reported that two of the open risks have a “medium” risk rating—meaning the risk has the potential to slightly impact project cost, schedule, or performance. In addition, program officials detailed the risk category, probability, and impact for the five risks identified in the FAC-Trax acquisition plan. Program officials developed risk mitigation and contingency plans for each risk in the risk register. For example, program officials planned to mitigate the open risk concerning subject matter expert engagement by identifying and engaging with appropriate subject matter experts through requirements development workshops scheduled in advance of the sprint they are to support, and by monitoring the development of user stories to identify any issues that may cause delays. In addition, program officials described the risk management plan and responsible officials for the five risks identified in the FAC-Trax acquisition plan. PA program officials review and update program risks during a monthly program meeting. Program officials also review program risks with Office of Response and Recovery, PA, OCIO, and other program office stakeholders during a weekly program review. Furthermore, the FAC-Trax contractor provides a weekly status update, which includes a section on identified risks. Program officials established re-evaluation dates and recorded updates, including any actions taken, for each risk in the risk register. In addition, program officials were able to provide updates on the four risks identified in the FAC-Trax acquisition plan and managed outside of the register.
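The register's scoring approach described above (probability of occurrence plus scope, schedule, and cost impacts combined into an overall score that maps to a low, medium, or high rating) can be sketched as follows. The report does not give FEMA's actual formula, rating scales, or thresholds, so the probability-times-impact weighting and the cutoffs here are assumptions for illustration only.

```python
# Hedged sketch: the report names the four measurements but not the formula,
# so the 1-5 scales, worst-case-impact rule, and thresholds below are assumed.

def risk_score(probability: int, scope: int, schedule: int, cost: int) -> int:
    """Combine the four measurements (each assumed rated 1-5) into one score."""
    impact = max(scope, schedule, cost)  # assume the worst impact dimension drives the score
    return probability * impact          # classic probability x impact matrix

def risk_rating(score: int) -> str:
    """Map the score onto the low/medium/high ratings used in the register."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(risk_rating(risk_score(probability=2, scope=3, schedule=2, cost=1)))  # medium
```

Whatever the exact weighting, the point of such a scheme is that every risk is scored the same way, so ratings are comparable across the register.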
According to PA officials, these risks were addressed and closed by the approval of program planning documents, such as the mission needs statement, concept of operations, and operational requirements document, following the solutions engineering review in September 2016, which demonstrated the program’s readiness to proceed with the procurement. Program officials established a requirements management plan outlining how the program captures, assesses, and plans for FAC-Trax enhancements, and established a change control process to review, prioritize, and verify user requests for changes to the system and feedback. As of May 2017, the PA program office had received 734 change requests related to FAC-Trax, of which program officials completed 420 changes and planned to address an additional 277 entries. PA program officials also facilitated workshops to gather requirements for specific user groups and obtained additional requirements for FAC-Trax through customer feedback on a temporary technology tool—an Access database referred to as the Public Assistance Recovery Information System—used to support an early stage of the new model implementation. Further, program officials developed a functional requirements document outlining the high-level functional and operational requirements for FAC-Trax. PA program officials developed a concept of operations for FAC-Trax detailing operating concepts and scenarios for each phase of the PA preaward process. Program officials also detailed the workflow, phases, business functions, and data inputs and outputs for the re-engineered PA process in FAC-Trax’s functional requirements document. In March 2017, program officials developed a standard template to describe the process, tasks, and data inputs and outputs for specific system capabilities. As part of the change control process, PA program officials meet three times a week to discuss and prioritize change requests.
Specifically, program officials review submissions to the change control form to ensure completeness, validate impacts and root cause, and research details for incoming requests. PA program officials also follow up with users to understand and verify requirements. In March 2017, program officials developed a standard template to capture acceptance criteria for specific requirements. However, PA program officials do not track system enhancements back to the high-level requirements identified in FAC-Trax’s operational and functional requirements documentation and performance work statement. PA program officials identified system requirements and constraints in the FAC-Trax concept of operations and functional and operational requirements documents. Further, through the change control process, program officials collect suggestions, issues, and feedback on FAC-Trax and system enhancements from stakeholders, identify risks for change requests, and balance prioritized requirements and estimated levels of effort with projected costs prior to each sprint. In March 2017, program officials developed a standard template to analyze and document the urgency and need for specific requirements. PA program officials and the FAC-Trax contractor established a testing and evaluation plan for the system, developed acceptance criteria for user stories, and obtained feedback from users during and after testing. The testing process concludes with user acceptance testing (UAT). If a change request fails during UAT or a new requirement is discovered during development, the PA program will capture the failed request or new requirement in the product backlog for implementation in a future product release.

Key practices (systems testing and integration): developing test plans and test cases

PA program officials and the FAC-Trax contractor tested and evaluated the system during development.
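The missing traceability control noted above (tracking each delivered enhancement back to a high-level requirement) can be sketched as a simple check over the change log. The requirement and change-request identifiers below are hypothetical; the point is only that, with a traceability field recorded, untraced work surfaces automatically for review.

```python
# Illustrative sketch: requirement IDs and change requests are hypothetical.
# Every delivered enhancement should trace to a documented high-level
# requirement; anything that does not is flagged for review.

high_level_requirements = {"FR-01", "FR-02", "FR-03"}  # from requirements documents

change_requests = [
    {"id": "CR-101", "traces_to": "FR-01"},
    {"id": "CR-102", "traces_to": "FR-03"},
    {"id": "CR-103", "traces_to": None},  # enhancement with no documented parent requirement
]

untraced = [cr["id"] for cr in change_requests
            if cr["traces_to"] not in high_level_requirements]
print(untraced)  # ['CR-103']
```

Kept current, such a mapping also answers the reverse question: which high-level requirements have no delivered capability yet.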
The FAC-Trax test plan identifies the method and strategy to perform the testing, including the necessary tasks, testing parameters, and the roles and responsibilities of the individuals responsible for testing. However, program officials did not develop system testing criteria to evaluate FAC-Trax. A key feature of Agile software development is the “definition of done”—a set of clear, comprehensive, and objective criteria that the government should use to evaluate software after each iteration of development. PA program officials developed a systems integration plan in June 2017 that identifies potential integration of FAC-Trax and four FEMA systems, including the Emergency Management Mission Integrated Environment. Specifically, the plan includes data requirements and standards; descriptions of the four systems FEMA plans to integrate with FAC-Trax and the proposed relationship for each connection; and security and access management requirements. In addition, program officials included a description of how integration problems are to be documented and resolved in FAC-Trax development and test plans. However, the systems integration plan does not define the roles and responsibilities of all participants for system integration activities or establish a sequence and schedule for every integration step for the four FEMA systems.

● Fully implemented: The agency provided evidence that it fully addressed this practice.
◐ Partially implemented: The agency provided evidence that it addressed some, but not all, portions of this practice.
◌ Not implemented: The agency did not provide any evidence that it addressed this practice.

In addition to the contact named above, Chris Keisling (Assistant Director), Amanda R. Parker (Analyst-in-Charge), Mathew Bader, Allison Bawden, Anthony Bova, Eric Hauswirth, Susan Hsu, Rianna Jansen, Justin Jaynes, Tracey King, Matthew T. Lowney, Heidi Nielson, Claire Peachey, Brenda Rabinowitz, Ryan Siegel, Martin Skorczynski, Niti Tandon, Walter K.
Vance, James T. Williams, and Eric Winter made key contributions to this report.
FEMA, an agency of the Department of Homeland Security (DHS), has obligated more than $36 billion in PA grants to state, local, and tribal governments to help communities recover and rebuild after major disasters since 2009. Disaster costs are also rising, as demonstrated by Hurricanes Harvey, Irma, and Maria in 2017. FEMA recently redesigned how the PA program delivers assistance to state and local grantees to improve operations and address past challenges identified by GAO and others. FEMA tested the new delivery model in selected disasters and announced implementation in September 2017. GAO was asked to assess the redesigned PA program. This report examines, among other things, the extent to which FEMA's new delivery model addresses (1) past workforce management challenges and assesses future workforce needs; and (2) past information sharing challenges and key IT management controls. GAO reviewed FEMA policy, strategy, and implementation documents; interviewed FEMA and state officials, PA program applicants, and other stakeholders; and observed implementation of the new model at one test location following Hurricane Matthew in 2016. The Federal Emergency Management Agency (FEMA) redesigned the Public Assistance (PA) grant program delivery model to address past challenges in workforce management, but has not fully assessed future workforce staffing needs. GAO and others have previously identified challenges related to shortages in experienced and trained FEMA PA staff and high turnover among these staff. These challenges often led to applicants receiving inconsistent guidance and to PA project delays. As part of its new model, FEMA is creating consolidated resource centers to standardize and centralize PA staff responsible for managing grant applications, and new specialized positions, such as hazard mitigation liaisons, program delivery managers, and site inspectors, to ensure more consistent guidance to applicants.
However, FEMA has not assessed the workforce needed to fully implement the new model, such as the number of staff needed to fill certain new positions, or to achieve staffing goals for supporting hazard mitigation on PA projects. Fully assessing workforce needs will help to ensure that FEMA has the people and the skills needed to fully implement the new PA model and help to avoid the long-standing workforce challenges the program encountered in the past. FEMA designed a new PA information and case management system—called the FEMA Applicant Case Tracker (FAC-Trax)—to address past information sharing challenges, such as difficulties in sharing grant documentation among FEMA, state, and local officials and tracking the status of PA projects, but additional actions could better ensure effective implementation. Both FEMA and state officials involved in testing of the new model stated that the new information system allows them to better manage and track PA applications and documentation, which could lead to greater transparency and efficiencies in the program. Further, GAO found that this new system fully addresses two of four key information technology (IT) management controls—project planning and risk management—that are necessary to ensure systems work effectively and meet user needs. However, GAO found that FEMA has not fully addressed the other two controls—requirements development and systems testing and integration. By better analyzing progress on high-level user requirements, for example, FEMA will have greater assurance that FAC-Trax will meet user needs and achieve the goals of the new delivery model. GAO is making five recommendations, including that FEMA assess the workforce needed for the new delivery model and improve key IT management controls for its new information sharing and case management system, FAC-Trax. DHS concurred with all recommendations.
Remittances can be sent through money transmitters and depository institutions, among other organizations. A typical remittance sent through a bank may be in the thousands of dollars, while the typical remittance sent by money transmitters is usually in the hundreds of dollars. International remittances through money transmitters and banks may include cash-to-cash money transfers, international wire transfers, some prepaid money card transfers, and automated clearinghouse transactions. Transfers through money transmitters. Historically, many consumers have chosen to send remittances through money transmitters due to convenience, cost, familiarity, or tradition. Money transmitters typically work through agents—separate business entities generally authorized to, among other things, send and receive money transfers. Most remittance transfers are initiated in person at retail outlets that offer these services. Money transmitters generally operate through their own retail storefronts, or through grocery stores, financial services outlets, convenience stores, and other retailers that serve as agents. In one common type of money transmitter transaction—known as a cash-to-cash transfer—a sender walks into a money transmitter agent location and provides cash to cover the transfer amount and fees. Generally, for transfers at or above $3,000, senders must provide basic information about themselves (typically a name and address, among other information) at the time of the transfer request. The agent processes the transaction, and the money transmitter’s headquarters screens it for BSA compliance. The money is then transferred to a recipient, usually through a distributor agent in the destination country. The money may be wired through the money transmitter’s bank to the distributor agent’s bank (see fig. 1), or transferred by other means to a specified agent in the recipient’s country. The distributor agent pays out cash to the recipient in either U.S.
dollars or local currency. Money transmitters also offer other transfer methods, including online or mobile technology, prepaid money cards or international money orders sent by the U.S. Postal Service, cash courier services, or informal value transfer systems such as hawala. Transfers through banks. Another method that remittance senders use to send funds is bank-to-bank transfers. Figure 2 is an example of a simple funds transfer between two customers with only the remittance sender’s and remittance recipient’s banks involved. If a remittance sender’s bank does not have a direct relationship with the remittance recipient’s bank, the bank-to-bank transfer scenario becomes more complicated. In such cases, one or more financial institutions may rely upon correspondent banking relationships to complete the transaction, as illustrated in figure 3. Both federal and state agencies oversee money transmitters and banks. In general, money transmitters must register with FinCEN and provide information on their structure and ownership. According to Treasury, in all states except one, money transmitters are required to obtain licenses from states in which they are incorporated or conducting business. Banks are supervised by state and federal banking regulators according to how they are chartered, and the banks provide related information when obtaining their charter. The key federal banking regulators include OCC, FDIC, the Federal Reserve, and National Credit Union Administration (NCUA). FinCEN often works with federal and state regulators. For example, as administrator of the BSA, FinCEN issues BSA regulations and has delegated examination authority for BSA compliance to the federal banking regulators for banks within their jurisdictions. Further, the federal banking regulators have issued regulations requiring institutions under their supervision to establish and maintain a BSA compliance program.
FinCEN has also delegated examination authority for BSA compliance for money transmitters to the Internal Revenue Service (IRS). Money transmitters are subject to the BSA but are not examined by federal regulators for safety and soundness. To ensure consistency in the application of BSA requirements, in 2005 the federal banking regulators collaborated with FinCEN on a BSA examination manual that was issued by the Federal Financial Institutions Examination Council for federal bank examiners conducting BSA examinations of banks. Similarly, in 2008 FinCEN issued a BSA examination manual to guide reviews of money transmitters, including reviews by the IRS and state regulators. The manual for BSA examinations of banks was updated in 2014 to further clarify supervisory expectations and regulatory changes. FinCEN has authority for enforcement and compliance under the BSA and may impose civil penalties and seek injunctions to compel compliance. In addition, each of the federal banking regulators has the authority to initiate enforcement actions against supervised institutions for violations of law and also impose civil money penalties for BSA violations. Under the BSA, the IRS also has authority for investigating criminal violations. The U.S. Department of Justice prosecutes violations of federal criminal money laundering statutes and violations of the BSA, and several law enforcement agencies can conduct BSA-related criminal investigations. Money transmitters and banks are subject to requirements under the BSA. They are generally required to design and implement a written anti-money laundering (AML) program, report certain transactions to Treasury, and meet recordkeeping and identity documentation requirements for funds transfers of $3,000 or more. All financial institutions subject to the BSA—including banks and money transmitters—are required to establish an anti-money laundering program.
At a minimum, each AML program must establish written AML compliance policies, procedures, and internal controls; designate an individual to coordinate and monitor day-to-day compliance; provide training for appropriate personnel; and provide for an independent audit function to test for compliance. Bank Secrecy Act anti-money laundering (BSA/AML) regulations require that each financial institution tailor a compliance program that is specific to its own risks based on factors such as the products and services offered, customers, and locations served. BSA/AML compliance programs are expected to address the following: Customer Identification Program. Banks must have written procedures for opening accounts and must specify what identifying information they will obtain from each customer. At a minimum, the bank must obtain the following identifying information from each customer before opening the account: name, date of birth, address, and identification number. In addition, banks’ Customer Identification Programs must also include risk-based procedures for verifying the identity of each customer to the extent reasonable and practicable. Customer Due Diligence. These procedures enable banks to predict, with relative certainty, the types of transactions in which a customer is likely to engage, which assists banks in determining when transactions are potentially suspicious. Banks must document their process for performing Customer Due Diligence. Enhanced Due Diligence. Customers who banks determine may pose a higher risk for money laundering or terrorist financing are subject to these procedures. Enhanced Due Diligence for higher-risk customers helps banks understand these customers’ anticipated transactions and implement an appropriate suspicious activity monitoring system. Banks review higher-risk customers and their transactions more closely at account opening and more frequently throughout the term of their relationship with the bank. Suspicious Activity Monitoring.
Banks and money transmitters must also have policies and procedures in place to monitor and identify unusual activity. They generally use two types of monitoring systems to identify or alert staff of unusual activity: manual transaction monitoring systems, which involve manual review of transaction summary reports to identify suspicious transactions, and automated monitoring systems that use computer algorithms to identify patterns of unusual activity. Large-volume banks typically use automated monitoring systems. Banks and money transmitters also must comply with certain reporting requirements, including: Currency Transaction Report. Banks and money transmitters must electronically file this type of report for each transaction in currency— such as a deposit, withdrawal, exchange, or other payment or transfer—of more than $10,000. Suspicious Activity Report. Banks and money transmitters are required to electronically file this type of report when (1) a transaction involves or aggregates at least $5,000 in funds or other assets (for banks) or at least $2,000 in funds or other assets (for money transmitters), and (2) the institution knows, suspects, or has reason to suspect that the transaction is suspicious. Remittances from the United States are an important source of funds for our case-study countries—Haiti, Liberia, Nepal, and Somalia. The Organisation for Economic Co-operation and Development identified these countries as fragile states because of weak capacity to carry out basic governance functions, among other things, and their vulnerability to internal and external shocks such as economic crises or natural disasters. Haiti. Currently the poorest country in the western hemisphere, Haiti has experienced political instability for most of its history. In January 2010, a catastrophic earthquake killed an estimated 300,000 people and left close to 1.5 million people homeless. 
Haiti has a population of approximately 11 million, of which roughly 25 percent live below the international poverty line of $1.90 per day. Nearly 701,000 Haitians live in the United States. In 2015, estimated remittances from the United States to Haiti totaled roughly $1.3 billion, or about 61 percent of Haiti’s overall remittances. Official development assistance for Haiti in 2015 totaled slightly more than $1 billion. Liberia. In 2003, Liberia officially ended a 14-year period of civil war but continued to face challenges with rebuilding its economy, particularly following the Ebola epidemic in 2014. Liberia has a population of nearly 5 million people, of which roughly 39 percent live on less than $1.90 per day. There are roughly 79,000 Liberians in the United States. In 2015, remittances from the United States to Liberia were estimated to be roughly $328 million, which represented over half of that country’s estimated total remittances. In 2015, Liberia reported roughly $1.1 billion in official development assistance. Nepal. In 2006, Nepal ended a 10-year civil war between Maoist and government forces, which led to a peace accord and ultimately a constitution that came into effect 9 years later. In April 2015, Nepal was struck by a 7.8 magnitude earthquake, which resulted in widespread destruction and left at least 2 million people in need of food assistance from the World Food Programme 6 weeks following the earthquake. Nepal has a population of nearly 29 million people, of which 15 percent live on less than $1.90 per day. In 2015, the foreign-born population of Nepalese in the United States was nearly 125,000, and roughly $320 million in remittances flowed from the United States to Nepal. For 2015, Nepal received over $1.2 billion in official development assistance. Somalia. Since 1969, Somalia has endured political instability and civil conflict, and is the third largest source of refugees, after Syria and Afghanistan.
According to a 2017 State report, Somalia remained a safe haven for terrorists who used their relative freedom of movement to obtain resources and funds, to recruit fighters, and to plan and mount operations within Somalia and neighboring countries. Somalia has an estimated population of over 11 million people, about half of whom live on less than $1.90 per day, and roughly 82,000 Somalis reside in the United States. Oxfam estimated global remittances to Somalia in 2015 at $1.3 billion, of which $215 million originated from the United States. In 2015, Somalia received nearly $1.3 billion in official development assistance. Figure 4 shows the estimated U.S. remittances to each of our case-study countries as a total amount in U.S. dollars and as a percentage of the country’s GDP. Money transmitters serving Haiti, Liberia, Nepal, and especially Somalia reported losing bank accounts or having restrictions placed on them, which some banks confirmed. As a result, some money transmitters have relied on non-banking channels, such as cash couriers, to transfer remittances. All of the 12 money transmitters we interviewed reported losing some banking relationships in the last 10 years. Some money transmitters, including all 4 that served Somalia, said they relied on non-banking channels, such as moving cash, to transfer funds, which increased their operational costs and exposure to risks. Further, in our interviews some banks reported that they had closed the accounts of money transmitters because of the high cost of due diligence actions they considered necessary to minimize the risk of fines under BSA/AML regulations. Treasury officials noted that despite information that some money transmitters have lost banking accounts, Treasury sees no evidence that the volume of remittances is falling or that costs of sending remittances are rising.
In addition, U.S.-based remittance senders who send money to our case-study countries reported no significant difficulties in using money transmitters to remit funds. All 12 money transmitters we interviewed reported that they or their agents had lost accounts with banks during the last 10 years. All 4 Somali money transmitters and many agents of the 2 Haitian money transmitters we spoke with had lost bank accounts and were facilitating remittance transfers without using bank accounts. Additionally, all 4 large money transmitters that process transfers globally (including to our case-study countries of Haiti, Liberia, and Nepal) also reported that their agents had lost accounts. Almost all of the money transmitters said they also faced difficulties in getting new accounts. Somali money transmitters were most affected by the loss of bank accounts, as 2 of the 4 Somali money transmitters had lost all corporate accounts. While some money transmitters said the banks that closed their accounts did not provide a reason, in other cases, money transmitters said the banks told them that they had received pressure from regulators to terminate money transmitter accounts. As a result of losing access to bank accounts, several money transmitters, including all of the Somali money transmitters, reported that they were using non-banking channels to transfer funds. In some cases the money transmitter was forced to conduct operations in cash, which has increased the risk of theft and forfeitures, and led to increased risk for agents and couriers. Nine of the money transmitters that we interviewed, including 3 of the 4 Somali money transmitters, some agents of one Haitian money transmitter, and some agents of the 4 larger money transmitters, rely on couriers or armored trucks to transport cash domestically (to the money transmitter’s main offices or bank) or internationally (see fig. 5). 
Money transmitters use cash couriers either because they or their agents have lost bank accounts or because it is cheaper to move funds by armored truck than through banks. In addition to the safety risks money transmitters face when they only accept cash, customers who remit large sums of money also face safety risks because they must transport cash to the money transmitter. For example, in our interviews with remittance senders to Somalia, some of them shared concerns about having to carry cash to money transmitters. Money transmitters we interviewed reported increased costs associated with moving cash and bank fees. For example, one Haitian money transmitter reported that use of couriers and trucks has increased its cost of moving money from its agents to its primary bank account by about $75,000 per month (increasing from approximately $15,000 per month using bank transfers to move funds, to $90,000 per month with the addition of couriers and trucks). Two of the money transmitters we spoke to stated that, given the difficulty of finding new bank accounts, they had no option but to pay whatever fees their banks required. Money transmitters with access to bank accounts reported that bank charges for services such as cash counting, wire transfers, and monthly compliance fees had in some cases doubled or tripled, or were so high that it was less expensive to use a cash courier. For example, some money transmitters stated that their banks charged a monthly fee for compliance-related costs that ranged from $100 a month to several thousand dollars a month. Over half of the money transmitters we interviewed said the loss of bank accounts limits their growth potential. The 4 larger money transmitters reported that in some cases, the relationship between the agent and money transmitter was terminated, either by the agent or the money transmitter, if the agent no longer had a bank account.
In other cases, some large money transmitters compensated for their agents’ lost bank accounts by using armored vehicles to transfer cash from the agents’ locations to the bank. However, the agents need to have a high volume of transactions in order to make the expense of a cash courier worthwhile. The money transmitters that we spoke with said that they have not passed their increased operational and banking costs on to remittance senders. Most said that they have not increased their fees for sending remittances or have increased fees only slightly. Some of the money transmitters said that they have compensated for higher costs by finding cost-savings in other areas or that they have reduced their profit margin. Most of the banks we interviewed expressed concerns regarding account holders who are money transmitters because they tend to be low-profit, high-risk clients. Some banks in our survey reported that constraints in accessing domestic and foreign correspondent banks were also a reason for restricting the number or percentage of money transmitter accounts. Banks have closed accounts of money transmitters serving our case-study countries. Some banks we surveyed reported terminating accounts of money transmitters who transfer funds to Haiti, Nepal, and Somalia. While 7 of the 193 banks that responded to our survey noted that during the 3-year period from 2014 to 2016 they provided services to money transmitters that facilitated transfers to at least one of our case-study countries, 3 of these 7 banks also reported closing at least one account of a money transmitter serving at least one of the case-study countries. Risks associated with the countries or regions that the money transmitter served were given as one reason (among others) for the closure of the account by 2 out of the 3 banks. Money transmitters are generally low-profit clients for banks.
Most of the banks we interviewed that currently offer money transmitter services stated that BSA/AML compliance costs have significantly increased in the last 10 years due to the need to hire additional staff and upgrade information systems to conduct electronic monitoring of all transactions that are processed through their system. Some banks indicated in our survey and interviews that the revenue from money transmitter accounts was at times not sufficient to offset the costs of BSA/AML compliance, leading to terminations and restrictions on money transmitter accounts. A few banks we interviewed stated that they do not allow money transmitters to open accounts because of the BSA/AML compliance resources they require. Moreover, according to one credit union we interviewed, money transmitters require labor-intensive banking services—such as counting cash and processing checks—that are more expensive for the banks than providing basic services to businesses that are not cash intensive. Banks expressed concerns over the adequacy of money transmitters’ ability to conduct due diligence on the money transmitter’s customers. In our survey, one bank stated that being unable to verify the identity of beneficiaries, the source of the funds, or the subsequent use of the funds was a challenge the bank faced in managing accounts for money transmitters that remit to fragile countries such as Haiti, Liberia, Nepal, and Somalia. Another bank in our survey noted that it closed some money transmitter accounts because it was unable to get any detail on the purpose of individual remittances. In addition, another bank noted that unlike bank clients, money transmitters’ customers may not have ongoing relationships with them, so money transmitters tend to know less about their customers than banks know about theirs.
A few banks we interviewed expressed concern that they would be held responsible if, despite the bank carrying out due diligence, authorities detect an illicit transaction has been processed through the bank on behalf of a money transmitter. In addition, one extra-large bank indicated that differences in state regulators’ assessments of money transmitters are a challenge for the bank. Banks we surveyed reported reduced access to correspondent banks. Banks responding to our survey cited reduced access to correspondent banks as a reason for restricting the number of money transmitter accounts. Out of the 193 banks that answered our survey, 30 indicated they have relied on a correspondent bank to transfer funds to our case-study countries (25 to Haiti, 16 to Liberia, 23 to Nepal, and 9 to Somalia). While not specific to our case-study countries, of the 29 banks in our survey that said they had restricted the number or percentage of money transmitter accounts, 8 said that they did so because of difficulty in maintaining correspondent banking relationships, while 3 said they did so due to loss of a correspondent banking relationship. The absence of direct relations with foreign banks can cause electronic money transfers to take longer to process or in some cases to be rejected. One bank official told us that the reduction in correspondent banking relations may not stop funds from being transferred but may increase the cost or time to process the transfer. However, one bank that responded to our survey identified multiple transactions with our case-study countries in recent years that were terminated because a correspondent bank could not be located or had closed. Customer due diligence is a challenge for correspondent banks. Some banks told us that exposure to risk related to the customers of banks they serve was a key challenge to providing foreign correspondent banking services.
Some banks expressed concern that violations of anti-money laundering and terrorism financing guidelines by a customer’s customer may result in fines for the bank even when the bank has conducted enhanced due diligence and monitoring of transactions. Two extra-large banks that do not provide foreign correspondent banking services cited due diligence concerns as one reason they choose not to offer such services. Some of the banks that provide correspondent banking services said they conduct more due diligence on the customers of the banks they serve than regulatory guidance requires. Several of the correspondent banks noted that this additional due diligence was challenging to conduct due to the distance between the correspondent bank and the customers of the banks they serve. For example, one bank told us that the farther removed a customer is from being its direct customer, the greater the risk to the bank due to a lack of confidence in the originating institution’s procedures to conduct due diligence on its customers. Banks identified country-level risk as a factor. For banks that responded to our survey, country-level risk was noted as a factor in account closures. Two out of the three banks that had closed accounts for money transmitters serving at least one of our case-study countries noted that risks associated with the countries or regions that the money transmitter served were a contributing reason for the account closures. Additionally, all of the extra-large banks we interviewed that serve as correspondent banks for foreign banks said that they consider risk related to the country served by a foreign bank when deciding whether to allow the foreign bank to open and maintain accounts. However, most of these extra-large banks also said that the country or region where a foreign bank is located is only one of several factors in determining whether the foreign bank is considered high risk.
One of the extra-large banks noted that Somalia was an exception because of its lack of banking infrastructure, which compounded concerns that money transmitters serving Somalia pose a higher risk to the bank. While banks in general told us that they did not make exit decisions regarding correspondent banking at the country level, seven of the eight extra-large banks we interviewed did not currently have correspondent banking relationships with any of our case-study countries, and the one remaining bank served only one country (Haiti). Two of the extra-large banks mentioned closing correspondent banking relationships during the last 10 years in Haiti, Nepal, or Somalia. One extra-large bank indicated that, with the exception of Somalia, funds can still be sent to foreign countries with limited correspondent banking access through banking channels; however, the transaction may need to be routed through multiple banks in order to be processed. Treasury officials reported that remittances continue to flow to fragile countries even though money transmitters face challenges, including some evidence of money transmitter bank account closures. Furthermore, U.S.-based individuals we interviewed who send remittances to Haiti, Liberia, Nepal, and Somalia told us that they are still able to send funds to these countries using money transmitters. Treasury reported money transmitters’ banking access difficulties have not affected the estimated volume of remittance flows to fragile countries. Treasury has collected information through engagement with money transmitters and banks about closures of money transmitter bank accounts and foreign correspondent banking relationships. Treasury officials indicated that remittance flows to fragile countries have not been impacted by such account closures. According to Treasury officials, World Bank estimates of remittance flows show that the volume of international transfers from the United States has continued to increase.
At the same time, World Bank data indicate that the global average cost of sending remittances has continued to decrease. Regarding our case-study countries, Treasury officials noted that they were not aware of any decrease in remittance volume to any of these fragile countries. Citing these trends, and anecdotal evidence from Treasury’s engagement with banks, the officials stated that there are no clear systemic impacts on the flow of remittances from closures of money transmitter bank accounts and correspondent banking relations. Treasury officials added that the scope of money transmitter bank account closures is largely unknown, but they acknowledged that such closures can be a significant challenge for money transmitters that serve certain regions or countries, including Somalia. Regarding a possible reduction in the number of correspondent banks, which can make it more challenging to transfer remittances, Treasury officials noted that to the extent there has been consolidation in this sector, it could be a natural process unrelated to correspondent banking risk management processes. Moreover, if consolidation results in stronger banking institutions and lower compliance costs, that would be a positive development for the sector, according to these officials. Treasury officials noted unique challenges in remitting funds to Somalia. Officials acknowledged that U.S.-based money transmitters transferring funds to Somalia have lost accounts with U.S.-based banks. According to Treasury, Somalia’s financial system is uniquely underdeveloped, as the country has not had a functioning government for about 20 years, and the terrorist financing threat is pronounced. Officials said that some Somali money transmitters have in the past moved money to assist al-Shabaab, a terrorist organization, increasing the need for stringent controls specific to anti-money laundering and combating terrorist financing efforts.
As a result of these and other factors, Treasury officials stated that difficulties remitting to Somalia are not generalizable to other countries. Further, Treasury officials said they were aware that some Somali money transmitters have resorted to non-banking channels by carrying cash overseas. They noted that although physically moving cash is risky, it is not unlawful. Additionally, Treasury officials stated that the use of cash couriers to remit funds has not been a concern for regulators because this practice has not increased the remittance fees that money transmitters charge their consumers.

Reasons Senders Reported General Satisfaction with Money Transmitters

The remittance senders for Haiti, Liberia, Nepal, and Somalia told us that they are generally satisfied using money transmitters over other methods to transfer money abroad because money transmitters quickly deliver the funds to recipients; are cheaper than banks; can be used even if the recipient lacks a bank account; and tend to have more locations in recipient countries compared to banks. Somali senders added that specialized Somali money transmitters cost less than transmitters that serve many countries, and that overseas agents of the Somali money transmitters are knowledgeable about the communities where they operate and have earned the trust of community members.

The U.S.-based remittance senders we spoke with from each of our case-study countries reported that they frequently use money transmitters and have not encountered major difficulties in sending remittances. In general, these senders expressed satisfaction with their money transmitters. Senders told us that they generally preferred using money transmitters because money transmitters were cheaper than banks and were quicker in delivering the funds. 
In addition, money transmitters were often more accessible for recipients collecting the remittances because the money transmitters had more locations than banks in recipient countries. However, some remittance senders told us that they experienced delays or were unable to send large amounts of money through money transmitters. In addition, some Somali senders told us that they were dissatisfied with being unable to use personal checks or online methods due to a requirement to pay in cash. U.S. agencies, including Treasury, the Federal Deposit Insurance Corporation (FDIC), the Federal Reserve, and the National Credit Union Administration (NCUA), have issued guidance to the financial institutions they regulate to clarify expectations for providing banking services to money transmitters. In addition, Treasury's Office of Technical Assistance (OTA) is engaged in long-term capacity building efforts in Haiti, Liberia, and Somalia to improve those countries' weak financial institutions and regulatory mechanisms, factors that may cause banks to consider money transmitters remitting to these countries to be riskier clients. However, agency officials disagreed with some suggestions for government action proposed by banks and others because such actions would contravene agencies' Bank Secrecy Act/anti-money laundering (BSA/AML) compliance goals. Treasury, including FinCEN and OCC, as well as FDIC, the Federal Reserve, and NCUA, have issued various guidance documents intended to ensure BSA/AML compliance while mitigating negative impacts on money transmitter banking access. Since 2011, Group of Twenty (G20) leaders, including the U.S. government, have committed to increasing financial inclusion through actions aimed at reducing the global average cost of sending remittances to 5 percent. According to Treasury officials, financial inclusion and BSA/AML compliance are complementary goals. 
In published statements, Treasury has affirmed that money transmitters provide essential financial services, including to low-income people who are less likely to use, or are unable to use, traditional banking services to support family members abroad. Treasury has also acknowledged that leaving money transmitters without access to banking channels can lead to an overall reduction in financial sector transparency to the extent that money transmitters resort to non-banking channels for transferring funds. Nonetheless, Treasury officials we spoke to noted that in implementing BSA/AML regulations, banks retain the flexibility to make business decisions such as which clients to accept, since banks are in the best position to know whether they are able to implement controls to manage the risk associated with any given client. These officials indicated that Treasury pursues market-driven solutions and cannot order banks to open or maintain accounts. Treasury officials noted that Treasury works through existing multilateral bodies to promote policies that will support market-driven solutions to banking access challenges and deepen financial inclusion globally. To clarify how banks assess BSA/AML risks posed by money transmitters and foreign banks, Treasury and other regulators have issued various guidance documents that, among other things, describe best practices for assessing such risks (see table 1). Some of the guidance emphasizes that risk should be assessed on a case-by-case basis and should not be applied broadly to a class of customers when making decisions to open or close accounts. The agencies issuing these guidance documents have taken some steps to assess the impact of guidance on bank behavior. For example, Treasury officials told us that Treasury periodically engages with banks and money transmitters on an ad hoc basis to learn their views and gain insight into their concerns. 
According to Federal Reserve officials, anecdotal information suggests that some money transmitters lost bank accounts after issuance of the 2005 joint guidance summarized above in table 1, and that outcome was contrary to the regulators' intent. To address concerns about the guidance, according to these officials, Treasury held several public discussions on money transmitter account terminations. OCC officials stated that they have not conducted a separate assessment of the effects of their October 2016 correspondent banking guidance on banks' risk assessment practices. However, they noted that OCC examiners evaluate banks' policies, procedures, and processes for risk reevaluation, including processes for assessing individual foreign correspondent bank customer risks, as a part of OCC's regular bank examination process. Bank officials we spoke to noted that while the guidance from regulators provides broad direction for banks' risk assessments of foreign banks and money transmitter clients, the guidance does not provide specific details to clarify how banks can ensure BSA/AML compliance for specific higher-risk clients. According to Treasury officials, there is no feasible short-term solution to address the loss of banking services facing money transmitters involved in transferring funds to certain fragile countries, especially Somalia. These officials explained that U.S. banks may be reluctant to transfer funds to fragile countries because key governmental and financial institutions in these countries have weak oversight and therefore may face difficulties in detecting and preventing money laundering and terrorism financing. As of September 2017, Treasury's OTA is providing capacity building support to fragile countries, including Haiti, Liberia, and Somalia, with some of its efforts aimed at addressing long-term factors affecting these countries' BSA/AML supervisory capability. 
Table 2 identifies and describes the status of OTA projects in our case-study countries of Haiti, Liberia, and Somalia. OTA does not currently have a project in Nepal. Banks, money transmitters, trade associations, and state regulators we interviewed, as well as third parties such as the World Bank and Center for Global Development, have proposed several actions to address banking access challenges money transmitters face in transferring funds through banks from the United States to fragile countries. Use of public sector transfer methods. Most banks we spoke to mentioned regulatory risk as a challenge to creating or maintaining money transmitter accounts. These banks stated that the ultimate risk for conducting transactions for money transmitter accounts falls on the bank, and that banks face substantial risk of regulatory action for such transactions. Therefore, one extra-large bank and one credit union we spoke to suggested using public sector transfer methods such as the Fedwire Funds Service (Fedwire) or FedGlobal Automated Clearing House Payments (FedGlobal) to process remittances to fragile countries, thereby mitigating the regulatory risk posed to banks that transfer such funds. Providing regulatory immunity, given appropriate oversight. To mitigate the regulatory risk to banks posed by money transmitter clients that send remittances to fragile countries, one extra-large bank, one credit union, and several money transmitters we spoke to suggested that regulators provide forms of regulatory immunity or regulator assurances that banks would not face enforcement actions if they carried out a specified level of due diligence to process remittances to fragile countries. Issuing more specific guidance. About half of the banks we spoke to mentioned fear of regulatory scrutiny due to ambiguities in regulatory agencies' guidance or examiner practices. This fear of regulatory scrutiny served as a disincentive for these banks to maintain money transmitter accounts. 
While officials from about half of the banks we spoke to stated that additional guidance issued by Treasury and other agencies was helpful to clarify regulatory expectations and that examiner practices were consistent with guidance, others stated that they were uncertain about how much due diligence was sufficient for regulatory purposes, because regulations incorporated ambiguous language or because examiner practices exceeded regulations. These bank officials suggested that regulators could provide more specific guidance for banks on risk management, for instance, by including example scenarios and answers to frequently asked questions. The World Bank recommended in 2015 that regulators provide banks with additional guidance on assessing the risk of different money transmitter clients. U.S. agency officials stated that they disagreed with implementing these proposals for reasons specific to each one, as discussed below. Use of public sector transfer methods. Treasury officials told us that they prefer market-based solutions to the challenges of transferring remittances to fragile countries, rather than a solution in which the U.S. government assumes the risk in transferring these remittances, such as using the Federal Reserve to directly transfer payments from money transmitters. Federal Reserve officials told us that Fedwire is reserved for domestic wire transfers, and while the Federal Reserve continues to evaluate the scope of the FedGlobal service, no decisions have been made to expand the service to additional countries at this time. Federal Reserve officials told us they seek to increase remittance flows to the countries the program already serves. Providing regulatory immunity, given appropriate oversight. 
Treasury officials told us that while they would need to see the suggested duration and conditions pertaining to any proposal for regulatory immunity or exemptions in order to judge its feasibility, implementing this suggestion could raise a number of legal and policy concerns. Officials told us that while Treasury has the authority to provide regulatory exemptions, creating particular conditions for regulatory immunity would stray from Treasury’s intended risk-based approach to BSA/AML compliance, and bad actors might take advantage of any such exemptions for criminal activity. Issuing more specific guidance. OCC informed us that it is not currently considering implementing more specific guidance. Treasury officials told us that existing guidance clarifies that Treasury does not have a zero tolerance approach to BSA/AML compliance and that Treasury does not expect banks to know their customers’ customers. These officials told us that they prefer not to issue further amplifying guidance with very specific examples as to what constitutes “compliance” by financial institutions, because Treasury does not wish to institute a “check the boxes” approach to regulatory compliance. Treasury cannot assess the effects of money transmitters’ loss of banking access on remittance flows because existing data do not allow Treasury to identify remittances transferred through banking and non-banking channels. Recent efforts to collect international remittance data from banks and credit unions do not include transfers these institutions make on behalf of money transmitters. Since these data collection efforts are designed to protect U.S. consumers, the remittance data that banks and credit unions report are limited to remittances individual consumers send directly through these institutions. 
Additionally, a few state regulators recently began requiring money transmitters to report remittance data by destination country, but these data do not distinguish money transmitters' use of banking and non-banking channels to transfer funds. Finally, while Treasury has a long-standing effort to collect information on travelers transporting cash from U.S. ports of exit, this information is not designed to identify cash transported for remittances. Without information on remittances sent through banking and non-banking channels, Treasury cannot assess the effects of money transmitter and foreign bank account closures on remittances, especially shifts in remittance transfers from banking to non-banking channels for fragile countries. Non-banking channels are generally less transparent than banking channels and thus more susceptible to the risk of money laundering and other illicit financial transactions. Federal regulators recently began collecting data on international remittances from banks and credit unions by requiring these institutions to provide more information in pre-existing routine reports. However, these reports do not require banks and credit unions to include information on remittance transfers these institutions make on behalf of money transmitters, among other business clients. According to officials from the Office of the Comptroller of the Currency (OCC) and from the Consumer Financial Protection Bureau, the additional reporting requirements for remittances were intended to help regulators monitor compliance with rules aimed at protecting U.S. consumers who use remittance services offered by banks and credit unions. Furthermore, banks and credit unions are not required to report on destination countries for remittance flows. 
Specifically: Beginning in 2014, Federal banking regulators—FDIC, the Federal Reserve, and OCC—required banks to provide data on international remittances in regular reporting known as the Consolidated Reports of Condition and Income (Call Reports). These reports, which are required on a quarterly basis from FDIC-insured banks, generally include banks' financial information such as assets and liabilities, and are submitted through the Federal Financial Institutions Examination Council, a coordinating body. Specifically, the agencies required banks to indicate whether they offered consumers mechanisms, including international wire transfers, international automated clearinghouse transactions, or other proprietary services, to send international remittances. The Consumer Financial Protection Bureau uses the remittance data in Call Reports to better understand the effects of its rules regarding remittance transfers, including its rules on disclosure, error resolution, and cancellation rights. Additionally, according to bureau officials, they also use the data for other purposes, for example, to monitor markets and to identify banks for remittance exams and, if needed, additional supervision. The Call Reports do not require a bank to report remittances for which the bank is providing such service to business customers, including money transmitters. According to OCC officials, because the remittance regulation that the Consumer Financial Protection Bureau enforces originated in response to consumer-focused legislation, a bank is required to report only those remittances for which the bank is the direct service provider to the individual consumer. Consequently, remittances reported in the Call Reports do not include remittances for which the banks served as a correspondent bank or as a provider for a money transmitter. Furthermore, banks are not required to report remittance data by destination country. 
In 2013, the National Credit Union Administration (NCUA) began requiring credit unions to provide data on the number of remittance transactions, but not data on the dollar amount transferred, in their Call Reports to NCUA. Similarly, and consistent with its treatment of banks, the Consumer Financial Protection Bureau uses the remittance data submitted by credit unions in Call Reports, for example, to better understand the effects of its rules and for market monitoring. The credit unions are also not required to include transactions they process on behalf of business clients, such as money transmitters, and do not provide remittance data by destination country. In 2017, some states began collecting remittance data from money transmitters by state and destination country through the Money Services Business Call Report. The purpose of these reports is to enhance and standardize the information available to state financial regulators concerning the activities of their Money Services Business licensees to effectively supervise these organizations. However, money transmitters are not required to distinguish whether the remittances they transferred were sent through banking or other channels. Additionally, while these reports collect remittance data by destination country, these data are not comprehensive because, according to the Nationwide Multistate Licensing System, as of the first quarter of 2018, about half the states (24) had adopted the reports for money transmitters and, of these, 12 states had made it mandatory to report the remittances by destination country. Due to a lack of reporting on money transmitters' use of banking channels to transfer remittances, Treasury cannot assess the extent of the decline in money transmitters' use of such channels to transfer remittances to fragile countries, including the four we selected as case-study countries: Haiti, Liberia, Nepal, and Somalia. 
While Treasury has a long-standing effort to collect information on travelers transporting cash from U.S. ports of exit, this information is not designed to enable Treasury to identify cash transported for remittances or the intended final destination of the cash. For financial transfers through non-banking channels, Treasury requires persons or businesses to report the export of currency and monetary instruments at ports of exit; such exports include remittances that money transmitters send in cash. Specifically, Treasury requires persons or businesses, including money transmitters, who physically transport currency or other monetary instruments exceeding $10,000 at one time, from the United States to any place outside of the United States, to file a Report of International Transportation of Currency or Monetary Instruments (CMIR) with U.S. Customs and Border Protection at the port of departure. The CMIR collects information such as the name of the person or business on whose behalf the importation or exportation of funds was conducted, the date, the amount of currency, U.S. port or city of arrival or departure, and country of origin or destination, among other information. The forms are filled out manually by individuals carrying cash. U.S. Customs and Border Protection officers collect the forms at ports of exit, and that agency's contractors manually enter the data reported on these forms into a central database. Money transmitters and their agents who carry cash in excess of $10,000 from the United States are required to submit the CMIR to U.S. Customs and Border Protection upon departure. Thus, to some extent, CMIR data include data on remittances transferred by money transmitters in cash; however, the CMIR is not intended to capture information specific to remittances, and thus its usefulness is limited for agencies in tracking the flow of remittances through non-banking channels. 
First, the destination country reported on the CMIR may not be the final destination of the cash or other monetary instrument being transported. For example, money transmitters we interviewed told us that they use cash couriers to transfer funds to Somalia via the United Arab Emirates, where the funds may enter a clearinghouse that can transfer the funds to Somalia. While the ultimate destination of the remittances is Somalia, the CMIR may list the United Arab Emirates as the destination because it is the first destination out of the United States. Second, FinCEN officials acknowledged they do not know the extent of underreporting in general with regard to the CMIR; however, money transmitters we interviewed indicated that they have incentives to file CMIRs for their own protection in case they have to file an insurance claim. Finally, the CMIR does not ask if the currency or monetary instruments are remittances, which makes it difficult, if not impossible, to separate out the data on remittances from the overall data. Existing data do not enable Treasury to identify remittances transferred by money transmitters through banking and non-banking channels. Non-banking channels are generally less transparent than banking channels and thus more susceptible to the risk of money laundering and terrorist financing. FinCEN's mission is to safeguard the financial system from illicit use, combat money laundering, and promote national security by, among other things, receiving and maintaining financial transactions data and analyzing that data for law enforcement purposes. Additionally, federal standards for internal control state that agency managers should comprehensively identify risks and analyze them for their possible effects. 
A lack of data on remittances sent through banking and non-banking channels limits the ability of Treasury to assess the effects of money transmitter and foreign bank account closures on remittances, in particular shifts of remittances to non-banking channels for fragile countries. The risks associated with shifts of remittances to non-banking channels may vary by country and are likely greater for fragile countries such as Somalia where the United States has concerns about terrorism financing. Remittances continue to flow to fragile countries, but the loss of banking services for money transmitters, as well as a decline in foreign banking relationships, has likely resulted in shifts to non-banking channels for remittances to some of these countries. While money transmitters who have lost bank accounts may adapt by moving remittances in cash or other non-banking channels, the lack of a bank account presents operational risks for these organizations. Moreover, the flow of funds such as remittances from banking to non-banking channels decreases the transparency of these transactions. While U.S. regulators have issued guidance to banks indicating that they should not terminate accounts of money transmitters without a case-by-case assessment, several banks we contacted remain apprehensive and are reluctant to incur additional costs for low-profit customers such as money transmitters. At the same time, senders of remittances still prefer to use money transmitters to send funds, which the senders regard as a critical lifeline for family and friends in fragile countries. Although federal and state regulators have undertaken recent efforts to obtain remittance data from financial institutions such as banks and money transmitters, these efforts are designed for consumer protection and the regulatory supervision of financial institutions, rather than to track remittances sent by money transmitters using banking channels. 
As a result, the available data are not sufficient for the purposes of tracking changes in money transmitters' use of banks to transfer funds. Similarly, while Treasury has a long-standing effort to collect information on large amounts of cash physically transported by travelers at U.S. ports of exit, this information collection is not intended to track the flow of remittances through non-banking channels. Consequently, to the extent that money transmitters losing banking access switch to non-bank methods to transport remittances, Treasury may not be able to monitor these remittance flows. This, in turn, could increase the risk of terrorism financing or money laundering, especially for remittances to fragile countries where risks related to illicit use of funds are considered higher. We are making one recommendation to Treasury. The Secretary of Treasury should assess the extent to which shifts in remittance flows from banking to non-banking channels for fragile countries may affect Treasury's ability to monitor for money laundering and terrorist financing and, if necessary, should identify corrective actions. We provided a draft of this product for comment to Treasury, FDIC, the Federal Reserve, CFPB, U.S. Customs and Border Protection, Commerce, NCUA, State, and USAID. Treasury, FDIC, the Federal Reserve, CFPB, and U.S. Customs and Border Protection provided technical comments, which we have incorporated, as appropriate. We requested that Treasury provide a response to our recommendation, but Treasury declined to do so. Commerce, NCUA, State, and USAID did not provide comments on the draft of this report. 
We are sending copies of this report to the appropriate congressional committees; the Secretary of the Treasury; the Chairman of the Federal Deposit Insurance Corporation; the Chair of the Board of Governors of the Federal Reserve System; the Acting Director of the Consumer Financial Protection Bureau; the Secretaries of Commerce, Homeland Security, and State; the Administrators of the U.S. Agency for International Development and the National Credit Union Administration; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-9601 or melitot@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report examines (1) what stakeholders believe are the challenges facing money transmitters in remitting funds from the United States to selected fragile countries, (2) what actions U.S. agencies have taken to address identified challenges, and (3) U.S. efforts to assess the effects of such challenges on remittance flows from the United States to fragile countries. To address the objectives, we identified four case-study countries: Haiti, Liberia, Nepal, and Somalia. We selected these countries based on their inclusion in the Organisation for Economic Co-operation and Development's States of Fragility reports from 2013 to 2015. In addition, we limited our selection to countries that have a foreign-born population of 50,000 or more living in the United States. Finally, we considered the size of estimated total remittances from the United States relative to the recipient countries' gross domestic products (GDP). We rank-ordered the 17 countries that met these criteria and selected the top four. 
For our first objective, to understand the challenges that stakeholders believe money transmitters face in remitting funds from the United States to fragile countries, we surveyed banks and interviewed U.S. agency officials, money transmitters, banks, credit unions, and remittance senders. To obtain insights from U.S. agency officials, we interviewed and received written responses from officials of the Department of the Treasury (Treasury)—including the Office of Technical Assistance (OTA), the Financial Crimes Enforcement Network (FinCEN), the Office of Terrorism and Financial Intelligence, and the Office of the Comptroller of the Currency (OCC). To obtain insights from money transmitters, we used the World Bank's Remittance Prices Worldwide database to select U.S.-based money transmitters serving our case-study countries. The World Bank database includes a sample of money transmitters, which the World Bank reported it selected to cover the maximum remittance market share possible and survey a minimum aggregated market share of 80 percent for each country. We attempted to contact the 18 money transmitters that the World Bank identified as the major service providers for our case-study countries. We interviewed 12 of these 18 money transmitters, of which 8 provided services to only one of our case-study countries (2 money transmitters provided services to Haiti, 4 provided services to Somalia, and 2 provided services to Nepal) and 4 provided remittance services from the United States to at least three of our case-study countries. To obtain insights from individuals who remit to fragile states, we conducted six small-group interviews, and one additional interview, with individuals who remit to our selected case-study countries. From 3 to 6 individuals participated in our small group interviews. We interviewed one Haitian small group, one Liberian small group, one Nepali small group, and three Somali small groups. 
To set up these interviews, we identified community-based organizations (CBOs) and other groups that work with remittance senders to these countries and obtained contact information for these groups. We identified the CBOs through searching Internal Revenue Service (IRS) lists of tax-exempt community organizations for the names of our case-study countries or their populations. To focus our search efforts, we concentrated on the five areas in the United States with the largest populations of immigrants from each case-study country. The five areas were identified using information on immigrant populations from the U.S. Census Bureau's 2015 American Community Survey 1-year Public Use Microdata Samples. We sent emails outlining our research goals and soliciting interest in participating in interviews to 287 CBOs and related groups and obtained positive responses from 46. Of the 46 that responded positively, we were able to schedule meetings with seven CBOs covering the four case-study countries. The groups that agreed to participate in our interviews cannot be considered representative of all CBOs and remittance senders to the four selected countries, and their views and insights are not generalizable to those of all individuals that remit to these four countries. We asked the CBO points-of-contact to invite individuals with experience remitting funds to the case-study countries to participate in telephone interviews. We pre-tested our methodology by emailing contacts at the CBOs and requesting they provide feedback on the questions. We also pre-tested the questions with a group located in Virginia because the location was close to the GAO headquarters and allowed for in-person testing. In the interviews, we asked semi-structured questions about the ease or difficulty of remitting funds to the participants' home countries, the costs of remitting, and any recent changes they had noticed. 
We asked the participants to provide us with their personal experiences rather than to speak for their CBO, group, or community. We used two methods—a web-based survey of a nationally representative sample of banks and semi-structured interviews of bank officials—to examine what banks identify as challenges, if any, in offering bank accounts for money transmitters and correspondent banks serving fragile countries. In the survey, we asked banks about limitations and terminations of accounts related to BSA/AML risk, the types of customer categories being limited or terminated, and the factors influencing these decisions. We administered the survey from July 2017 to September 2017, and collected information for the 3-year time period of January 1, 2014 to December 31, 2016. Aggregate responses for the close-ended survey questions that are related to this report are included in appendix II. The survey also collected information for two additional GAO reports: one reviewing closure of bank branches along the southwest border of the United States, and another assessing the causes of bank account terminations involving money transmitters. To identify the universe of banks, we used the bank asset data from FDIC’s Statistics on Depository Institutions database. Our initial population list contained 5,922 banks downloaded from FDIC’s Statistics on Depository Institutions database as of December 31, 2016. We stratified the population into five sampling strata, and used a stratified random sample. In order to meet the sampling needs of related reviews, we used a hybrid stratification scheme. First, banks that did not operate in the Southwest border region were stratified into four asset sizes (small, medium, large, and extra-large). Next, by using FDIC’s Summary of Deposit database we identified 115 Southwest border banks as of June 30, 2016. 
Our initial sample size allocation was designed to achieve a stratum-level margin of error no greater than plus or minus 10 percentage points for an attribute level at the 95 percent level of confidence. Based on prior surveys of financial institutions, we assumed a response rate of 75 percent to determine the sample size for the asset size strata. Because there are only 17 extra-large banks in the population, we included all of them in the sample. We also included the entire population of 115 Southwest border banks as a separate certainty stratum. We reviewed the initial population list of banks to identify nontraditional banks not eligible for this survey. We treated nontraditional banks as out of scope. In addition, during the administration of our survey, we identified 27 banks that were either no longer in business or had been acquired by another bank, as well as 2 additional banks that were nontraditional banks and, therefore, not eligible for this survey. We treated these sample cases as out of scope; this adjusted our population of banks to 5,805 and reduced our sample size to 406. We obtained a weighted survey response rate of 46.5 percent. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (for example, plus or minus 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Confidence intervals are provided along with each sample estimate in the report. For survey questions that are not statistically reliable, we present only the number of responses to each survey question, and the results are not generalizable to the population of banks.
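The sample-size and confidence-interval arithmetic described above follows standard survey-statistics formulas for a proportion. The sketch below is not GAO's actual methodology code; it shows the worst-case (p = 0.5) calculation with the assumed 75 percent response rate folded in, and it ignores the finite-population correction that an actual design would apply.

```python
import math

Z95 = 1.96  # two-sided z-value for 95 percent confidence

def required_sample_size(margin, p=0.5, response_rate=1.0):
    """Selections needed per stratum so that, after nonresponse, the
    margin of error for a proportion stays within +/- `margin`."""
    n_effective = (Z95 ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n_effective / response_rate)

def ci_half_width(p, n):
    """Half-width of a 95 percent confidence interval for an
    estimated proportion p from a simple random sample of size n."""
    return Z95 * math.sqrt(p * (1 - p) / n)

# +/- 10 percentage points at 95 percent confidence, with a
# 75 percent expected response rate:
n = required_sample_size(0.10, response_rate=0.75)
```

With these inputs the formula calls for roughly 130 selections per asset-size stratum; in practice the exact allocation also reflected the finite population in each stratum and the needs of the related reviews mentioned above.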
The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question or sources of information available to respondents can introduce unwanted variability into the survey results. We took steps in developing the questionnaire, collecting the data, and analyzing the results to minimize such nonsampling error. We conducted pretests with four banks. We selected these banks to achieve variation in geographic location and asset size (small, medium, large, extra-large). The pretests of the survey were conducted to ensure that the survey questions were clear, to obtain any suggestions for clarification, and to determine whether representatives would be able to provide responses to questions with minimal burden. To supplement the results of the survey, we conducted interviews with eight extra-large banks regarding correspondent banking and money transmitter accounts and with two credit unions regarding money transmitter accounts. We selected the eight banks to interview using the following criteria: (1) the bank was in the extra-large asset size group (banks with greater than $50 billion in assets), and (2) the bank was mentioned by at least one of the money transmitters that we interviewed as terminating accounts with them or the bank was listed in an internal Treasury study on correspondent banking. Of the banks in the extra-large asset size group, 7 were mentioned in our interviews with money transmitters as having closed accounts with them. Nearly all of these banks, plus one additional bank, were also mentioned as correspondent banks in the Treasury study. In addition, we selected two credit unions to interview based on information from our interviews with money transmitters. Money transmitters identified four credit unions in our interviews; of these, we selected for interviews two that were mentioned as closing accounts with money transmitters.
We did not contact the other two credit unions that currently have money transmitter accounts. The results of the survey and the interviews provide only illustrative examples and are not generalizable to all banks or credit unions. For our second objective, we analyzed U.S. agency information and documentation about relevant projects and activities. We also interviewed officials and obtained relevant guidance documents from Treasury, including OCC, OTA, FinCEN, and Terrorism and Financial Intelligence; the Federal Deposit Insurance Corporation (FDIC); the U.S. Department of State; the U.S. Agency for International Development; the Board of Governors of the Federal Reserve System (Federal Reserve); and the National Credit Union Administration (NCUA). Additionally, we interviewed officials from the World Bank and International Monetary Fund to understand the data, methodology, and findings contained within reports by those organizations, as well as to understand the International Monetary Fund’s role in technical assistance in our case-study countries. To gather information on solutions proposed by banks and others to address challenges money transmitters face in transferring funds through banks from the United States to fragile countries, we interviewed banks and credit unions as noted above. We also reviewed reports by the World Bank, the Center for Global Development, and Oxfam to gather recommendations addressing challenges in transferring remittances to fragile countries. We interviewed officials from Treasury, FDIC, the Federal Reserve, and the U.S. Agency for International Development to gain their perspectives on these proposed solutions. For our third objective on U.S. agencies’ efforts to assess the effects of challenges facing U.S. money transmitters on remittance flows to fragile countries, we interviewed agency officials and analyzed available data on flows going through banking and non-banking channels.
For available data on flows through the banking channel, we analyzed the Consolidated Reports of Condition and Income (Call Report) data from the Federal Financial Institutions Examination Council, which started collecting these data in 2014. These remittance data are reported on a semiannual basis. We also reviewed Call Report data on remittances for credit unions, which started to be collected in 2013, as well as data collected from Money Service Businesses, which some states started collecting in 2017. For data on remittance flows through non-banking channels, we obtained and analyzed data on filings of FinCEN’s Form 105 – Report of International Transportation of Currency or Monetary Instruments. This report is required of individuals who physically transport currency or other monetary instruments exceeding $10,000 at one time from the United States to any place outside the United States, or into the United States from any place outside the United States. The paper form is collected by the Department of Homeland Security’s U.S. Customs and Border Protection at the port of entry or departure. We obtained the tabulated Form 105 data from FinCEN by arrival country and state of U.S. exit port for calendar years 2006 through 2016. We also interviewed officials and obtained written responses from FinCEN and the Federal Financial Institutions Examination Council. We compared the results of our data analysis and information from interviews with agency officials against FinCEN’s mission to safeguard the financial system from illicit use by, among other things, obtaining and analyzing financial transactions data. Additionally, we compared the results of our analysis and information obtained from agencies against the federal standards for internal control, which state that agency managers should comprehensively identify risks and analyze them for their possible effects.
We conducted this performance audit from September 2016 to March 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. From July 2017 to September 2017, we administered a web-based survey to a nationally representative sample of banks. In the survey, we asked banks about the number of account terminations for reasons related to Bank Secrecy Act anti-money laundering (BSA/AML) risk; whether banks are terminating, limiting, or not offering accounts to certain types of customer categories; and the factors influencing these decisions. We collected information for the 3-year period from January 1, 2014, to December 31, 2016. We obtained a weighted survey response rate of 46.5 percent. The survey included 44 questions, 16 of which were directly applicable to the research objectives in this report. Responses to the questions that were directly applicable to the research objectives in this report are shown below (see tables 3 through 16). When our estimates are from a generalizable sample, we express our confidence in the precision of our particular estimates as 95 percent confidence intervals. Survey results presented in this appendix are aggregated for banks of all asset sizes, unless otherwise noted. Results for some of the survey questions were not statistically reliable. In those cases we present only the number of responses to each survey question. These results are not generalizable to the population of banks. Our survey included closed- and open-ended questions. We do not provide information on responses provided to the open-ended questions. For a more detailed discussion of our survey methodology, see appendix I. 
The following open-ended question was asked only of banks that responded “Yes” to question 33: Please provide any additional comments or challenges the bank may face in managing accounts for money transmitters that remit to fragile countries such as Haiti, Liberia, Nepal, or Somalia. (Question 36) The following open-ended question was asked only of banks that responded “Yes” to question 37: Please provide any additional comments on how changes (increase or decrease) in correspondent banking services facilitating the transfer of funds to Haiti, Liberia, Nepal, or Somalia have impacted your bank’s ability to provide services to money transmitters. (Question 41) The following open-ended question was asked only of banks that responded “Yes” to using a correspondent bank to facilitate the transfer of funds to Somalia (question 38, response d): If your bank relied on a respondent bank to facilitate the transfer of funds to Somalia, in what country was the respondent bank located? (Question 39) Thomas Melito, (202) 512-9601, or melitot@gao.gov. In addition to the contact named above, Mona Sehgal (Assistant Director), Kyerion Printup (Analyst-in-Charge), Sushmita Srikanth, Madeline Messick, Ming Chen, Lilia Chaidez, Natarajan Subramanian, Carl Barden, James Dalkin, David Dayton, Martin De Alteriis, Mark Dowling, Rebecca Gambler, Tonita Gillich, Stefanie Jonkman, Christopher Keblitis, Jill Lacey, Michael Moran, Verginie Tarpinian, and Patricia Weng made key contributions to this report.
The United States is the largest source of remittances, with an estimated $67 billion sent globally in 2016, according to the World Bank. Many individuals send remittances through money transmitters, a type of business that facilitates global money transfers. Recent reports found that some money transmitters have lost access to banking services due to derisking—the practice of banks restricting services to customers to, in part, avoid perceived regulatory concerns about facilitating criminal activity. GAO was asked to review the possible effects of derisking on remittances to fragile countries. This report examines (1) what stakeholders believe are the challenges facing money transmitters in remitting funds from the United States to selected fragile countries, (2) actions U.S. agencies have taken to address identified challenges, and (3) U.S. efforts to assess the effects of such challenges on remittance flows to fragile countries. GAO selected four case-study countries—Haiti, Liberia, Nepal, and Somalia—based on factors including the large size of U.S. remittance flows to them. GAO interviewed U.S.-based money transmitters, banks, U.S. agencies, and individuals remitting to these countries and also surveyed banks. Stakeholders, including money transmitters, banks, and U.S. Department of the Treasury (Treasury) officials, reported a loss of banking access for money transmitters as a key challenge, although remittances continue to flow to selected fragile countries. All 12 of the money transmitters GAO interviewed, which served Haiti, Liberia, Nepal, and particularly Somalia, reported losing some banking relationships during the last 10 years. As a result, 9 of the 12 money transmitters reported using channels outside the banking system (hereafter referred to as non-banking channels), such as cash couriers, to move funds domestically or, in the case of Somalia, for cross-border transfer of remittances (see figure). 
Several banks reported that they had closed the accounts of money transmitters because of the high cost of due diligence actions they considered necessary to minimize the risk of fines under Bank Secrecy Act regulations. Treasury officials noted that despite some money transmitters losing bank accounts, they see no evidence that the volume of remittances is falling.

Example of a Cash-to-Cash Remittance Transfer Using a Cash Courier

U.S. agencies have taken steps that may mitigate money transmitters' loss of banking access. For example, several agencies have issued guidance to clarify expectations for providing banking services to money transmitters. In addition, Treasury is implementing projects to strengthen financial institutions in some fragile countries. However, U.S. agencies disagreed with other suggestions, such as immunity from enforcement actions for banks serving money transmitters, since those actions could adversely affect goals related to preventing money laundering and terrorism financing. Treasury cannot assess the effects of money transmitters' loss of banking access on remittance flows because existing data do not allow Treasury to identify remittances transferred through banking and non-banking channels. Remittance data that U.S. agencies collect from banks do not include transfers that banks make on behalf of money transmitters. Additionally, the information Treasury collects on transportation of cash from U.S. ports of exit does not identify remittances sent as cash. Therefore, Treasury cannot assess the extent to which money transmitters are shifting from banking to non-banking channels to transfer funds due to loss of banking access. Non-banking channels are generally less transparent than banking channels and thus more susceptible to the risk of money laundering and terrorism financing.
Treasury should assess the extent to which shifts in remittance flows to non-banking channels for fragile countries may affect Treasury's ability to monitor for financial crimes and, if necessary, should identify corrective actions. GAO requested comments from Treasury on the recommendation, but none were provided.
VA pays monthly disability compensation to veterans with service-connected disabilities according to the severity of the disability. VA’s disability compensation claims process starts when a veteran submits a claim to VA (see fig. 1). A claims processor then reviews the claim and helps the veteran gather the relevant evidence needed to evaluate the claim. Such evidence includes the veteran’s military service records, medical exams, and treatment records from VHA medical facilities and private medical service providers. If necessary to substantiate a claim, VA will also provide a medical exam for the veteran, either through a provider at a VHA medical facility or through a VBA contractor. According to VBA officials, VBA monitors a VHA facility’s capacity to conduct exams and, in instances when the facility may not have capacity to conduct a timely exam, VBA will send an exam request to one of its contractors instead. For exams assigned to a VBA contractor, VBA sends an exam request to the contractor, who then rejects or accepts the exam request. Once the contractor accepts the exam, it assigns a contracted examiner to conduct the exam and complete an exam report designed to capture essential medical information for purposes of determining entitlement to disability benefits. The contractors send the completed report to VBA, which uses the information as part of the evidence to evaluate the claim and determine whether the veteran is eligible for benefits. According to contractor officials, if they need clarification on an exam request, they might reject the request and send it back to VBA who, in turn, will revise the request before sending it back to the contractor. VA has used contracted examiners—through VBA and VHA contracts—to supplement VHA-provided exams for at least two decades.
VBA began using contractors to conduct disability compensation exams at 10 VBA regional offices in the late 1990s through a pilot program authorized under federal law. In 2014, federal law authorized VBA to expand the pilot to all its regional offices starting in fiscal year 2017. Before fiscal year 2017, VHA and VBA both administered disability exam contracts. However, since fiscal year 2017, all such contracts have been administered by VBA and none have been administered by VHA. VBA awarded 12 contracts to five contractors to begin providing exams in 2016. According to VA officials, performance under 10 of these contracts was delayed until late September 2017 due, in part, to multiple contract bid protests. During this delay, VA officials told us that the agency awarded short-term contracts to allow existing contractors to perform exams until the bid protests were resolved. VBA’s current contracts cover exams for veterans in five U.S. geographic districts, one district for overseas exams, and one district for servicemembers participating in special programs, such as the Benefits Delivery at Discharge and Integrated Disability Evaluation System programs (see fig. 2). VBA awarded two contracts in each of its five U.S. geographic districts and one contract each in districts 6 and 7, which include special programs and overseas exams, respectively. VBA also awarded two additional short-term contracts in December 2017 to help address workload issues in districts 1-5. With the addition of these two contracts, VBA has a total of 14 contracts currently in place. According to agency officials, because VBA wanted to update performance measures for its contractors, VA issued a Request for Proposals in May 2018 with plans to award new contracts in fall 2018 for its U.S. geographic districts. Until it awards the new contracts, VBA will continue to use the current contracts.
According to VBA officials, VA plans to continue using VBA contractors in the long term to conduct exams that exceed VHA’s capacity. In recent years, VBA contractors have completed an increasing number of exams, from roughly 178,000 in fiscal year 2012 to almost 600,000 in fiscal year 2017, according to VBA-provided data. VA estimates that in fiscal year 2019, contractors will complete over 1.8 million exam reports for almost 800,000 veterans. However, VBA officials noted that future projections for contracted exams might change based on the need to supplement VHA capacity to ensure timely exams. In 2016, VBA established an exam program office to manage and oversee contractors, monitor their performance, and ensure that they meet contract requirements. For example, the contracts require that contractors develop plans outlining how they will ensure examiners are adequately trained. Contractors are also required to provide VBA with monthly exam status reports, which include the number of canceled, rescheduled, and completed exams, among other things. VBA also has an office dedicated to completing quality reviews of contractors’ exam reports, which are used to assess contractor performance. The contracts require that VBA conduct quality reviews of a sample of contractors’ exam reports. According to VA documents and officials, the results of these quality reviews, and contractor timeliness scores in completing exams, are included in quarterly performance reports. The contracts require that VBA provide these performance reports to the contractors. VBA holds quarterly meetings with the contractors to discuss their quarterly performance based on these reports. VBA contracts require that contracted examiners have full, current, valid, and unrestricted licenses, and current and valid State Medical Board certifications, before conducting any exams—the same requirements that apply to VHA medical providers.
According to agency officials, VBA also requires that contracted examiners complete the same training that VHA providers must take before they can conduct any disability medical exams. The required training consists of a set of online courses developed by VHA’s Disability Medical Assessment Office, such as courses on VA’s disability claims process and one on completing exam reports. In addition, examiners who provide some specialized exams, such as posttraumatic stress disorder exams and traumatic brain injury exams, are required to take additional courses. In addition to VHA-developed training, VBA contracts require that contractors provide examiners with a basic overview of VA programs. The contracts also outline quality and timeliness performance targets that VBA uses to assess contractor performance (see table 1). VBA can use contractors’ performance in meeting these targets to determine financial incentives. VBA’s performance measures are as follows: Contractor quality: VBA calculates quality scores for each contractor based on a sample of exam reports that VBA’s quality office selects for review on a quarterly basis for each contract. According to VBA documents, the quality score represents the percentage of exam reports reviewed that had no errors as measured against specific criteria. Errors identified in quality reviews could range from incomplete information (e.g., an examiner’s medical specialty information is not listed on the exam report) to completing the wrong exam report for a given condition. Contractor timeliness: VBA calculates timeliness scores for each contractor based on the average timeliness of all exams completed in a given quarter for each contract. VBA measures timeliness as the number of calendar days between the date the contractor accepts an exam request and the date the contractor initially sends the completed exam report to VBA.
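The two performance measures reduce to simple computations over an exam-report table. The sketch below mirrors only the definitions in the text (quality = share of sampled reports with no errors; timeliness = average calendar days from exam-request acceptance to initial submission of the report); the record layout and sample values are hypothetical, not VBA's actual data.

```python
from datetime import date

def quality_score(sampled_reports):
    """Percent of sampled exam reports with no errors."""
    error_free = sum(1 for r in sampled_reports if not r["errors"])
    return 100.0 * error_free / len(sampled_reports)

def avg_timeliness_days(exams):
    """Average calendar days from the date the contractor accepted the
    exam request to the date it first sent the completed report."""
    total = sum((e["report_sent"] - e["request_accepted"]).days
                for e in exams)
    return total / len(exams)

# Illustrative inputs: 23 of 25 sampled reports error-free, and two
# exams completed in 18 and 22 days, respectively.
reports = [{"errors": []}] * 23 + [{"errors": ["wrong form"]}] * 2
exams = [{"request_accepted": date(2017, 3, 1),
          "report_sent": date(2017, 3, 19)},
         {"request_accepted": date(2017, 3, 5),
          "report_sent": date(2017, 3, 27)}]
```

Note that the timeliness measure as defined excludes any time spent correcting a report that VBA returns, so a system that overwrites the initial submission date (as described below) cannot compute it reliably.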
VBA reported that almost all contractors missed VBA’s quality target of 92 percent in the first half of calendar year 2017, and more recent data are not yet available for most districts. More specifically, VBA-determined quarterly quality scores—the percentage of disability compensation exam reports with no errors as measured against VBA criteria—for the seven contracts used by VBA in calendar year 2017 showed that contractors were frequently well below the quality target. Quarterly quality scores ranged from 62 percent to 92 percent (see fig. 3). According to VBA data, only one contractor’s quality score in one quarter met VBA’s target of 92 percent while the vast majority of contractors’ scores were classified by VBA as “unsatisfactory” performance. VBA has not yet completed all of the quality reviews used to calculate contractor quality scores, particularly for exams that were completed in the second half of 2017. VBA is hiring and training additional quality review staff to complete these reviews and help manage the workload moving forward. According to VBA officials, staff will complete the remaining quality reviews and finalize the quality scores for 2017 by December 2018. According to agency officials, VBA has not calculated contractor timeliness as it is outlined in the contracts. VBA measures timeliness as the number of days between the date the contractor accepts an exam request and the date the contractor initially sends the completed exam report to VBA. According to officials, this measure does not include any time contractors may spend correcting an exam report returned to them by VBA. Returned exam reports are few in number, VBA officials said. However, once a contractor submitted a corrected or clarified exam report, VBA officials said the exam management system did not preserve the date the exam was initially completed. At that point, the system only tracked the date VBA received the corrected or clarified report. 
As a result, the number of days in VBA’s system could include time contractors took to correct any issues identified by VBA after submitting the initial report. While VBA’s data do not allow it to reliably assess contractor performance against the targets in the contracts, VBA’s data can be used to measure timeliness in other ways. For example, we were able to use the data to calculate the entire amount of time it took to complete exams, which includes time contractors took to correct any issues identified by VBA. As such, the results of our analysis should not be interpreted as reflecting contractor compliance with timeliness targets under the contracts. However, to provide timeframes that are similar to VBA’s targets, we chose 20 days for districts 1-5 and 30 days for districts 6-7 as timeframes for our analysis. Moreover, we analyzed timeliness across all contractors rather than for individual contractors. In particular, we analyzed VBA data on 646,005 contracted exams completed from February 2017 to January 2018, which included 575,739 exams in districts 1-5 and 70,266 exams in districts 6-7. Our analysis of VBA data shows that 53 percent of exams were completed within 20 days for districts 1-5, and 56 percent were completed within 30 days for districts 6-7. However, some exams took at least twice as long to complete. For example, 12 percent of exams in districts 1-5 took more than 40 days to complete (see fig. 4). Contractor officials described a number of reasons why exams might take longer in some cases. For example, they said that scheduling delays might occur due to a veteran’s availability or severe weather, and that it can be challenging to find specialists for certain exam types in rural locations. Our analysis of timeliness focused on exams that were completed, and it did not include exams that have been requested and not yet completed by a contractor.
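The completion-time analysis described above amounts to counting exams on either side of a threshold. A minimal sketch, assuming a list of total days-to-complete values per exam (the durations below are illustrative, not the actual VBA exam data):

```python
def pct_within(days, threshold):
    """Percent of exams completed in at most `threshold` calendar days."""
    return 100.0 * sum(1 for d in days if d <= threshold) / len(days)

def pct_over(days, threshold):
    """Percent of exams that took more than `threshold` days."""
    return 100.0 - pct_within(days, threshold)

# Hypothetical per-exam durations for one district group:
district_1_to_5 = [12, 18, 20, 25, 33, 41, 55, 16]
share_on_time = pct_within(district_1_to_5, 20)  # share within 20 days
share_slow = pct_over(district_1_to_5, 40)       # share over 40 days
```

Because these durations run from exam-request acceptance to final receipt of a sufficient report, they include any correction time and so, as the text notes, cannot be read as compliance with the contractual timeliness targets.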
For example, a contractor may have accepted an exam request from VBA, but not yet scheduled an appointment with the veteran. Alternatively, a contractor may have conducted an exam with the veteran, but not yet sent the exam report to VBA. As of late June 2018, VBA-calculated data showed that 87,768 requested exams had not yet been completed, including 37,077 exams that had already exceeded VBA’s timeliness targets. Tracking these exams is important because a large volume of such exams could ultimately increase the amount of time veterans have to wait for their claims to be processed. VBA officials stated that the agency closely monitors contractors’ workloads and helps expedite requested exams that have exceeded VBA’s targets for completing exams. In addition, VBA included a performance measure in its May 2018 Request for Proposals to track the percentage of requested exams that have been with a contractor for more than seven days. Such a measure could help VBA identify whether contractors have a backlog of exams and better assess whether veterans are receiving timely exams. VBA’s contract exam program office, primarily through its Contracting Officer’s Representatives (CORs), has identified some contractor performance problems, such as delays in completing specific exams, through its oversight of contractor performance. This oversight includes day-to-day monitoring of contractor workloads and frequent contact with contractor officials. Through such contact and reviews of contractors’ daily and weekly exam status updates, the CORs work with contractor officials to identify ways to expedite disability compensation exams for veterans who have been waiting longer than VBA’s 20-day or 30-day targets. In addition, VBA contract quality staff who review samples of contractor exam reports hold teleconferences with the CORs and contractor officials to provide feedback and discuss issues arising from their reviews, such as specific types of errors.
The VBA contract exam program office also oversees and manages contractors through supplemental guidance memos, contractor site visits, and reviews of veteran customer satisfaction surveys. For example, in November 2017, VBA sent a supplemental guidance memo to all contractors to clarify guidance on conducting and documenting hearing loss exams. Further, VBA has conducted site visits to all five contractors’ headquarters or clinic sites since September 2017. Headquarters visits include reviews of contractors’ procedures, such as those for assigning exam requests, and contractors’ information systems, such as those for tracking the status of exams. VBA visits to contractor clinics focus on facility issues, such as accessibility and safety. According to VBA officials, the CORs also review reports on satisfaction surveys completed by veterans after their exam appointments to identify veterans’ concerns regarding contractors and to follow up with contractors, when needed. For example, in response to one veteran’s survey comment regarding a contracted examiner who did not show up to conduct a scheduled exam, VBA officials told us they followed up with the contractor and learned that the examiner’s car broke down. According to VBA, it reimbursed the veteran for round-trip transportation costs to the clinic. Additionally, VBA’s contract quality review staff have conducted special focused reviews to investigate concerns raised by veterans and by staff in VBA regional offices and VHA medical facilities. For example, VBA conducted a review of one contracted examiner who had high rates of diagnosing severe posttraumatic stress disorder. After reviewing this examiner’s reports, VBA found their overall quality to be poor. As a result, VBA requested that the contractor no longer use this examiner. 
In addition to identifying and addressing problems with individual exams and examiners, VBA has identified broader challenges faced by contractors in meeting VBA’s demand for exams and providing timely reports. For example, VBA identified two contractors who were not prepared to perform all of their assigned exams because they did not have enough examiners, particularly in rural locations, which led to delays and a backlog of exam requests, according to VBA officials. VBA officials described how they worked with these contractors over several months to adjust and closely monitor the volume of exams sent to the contractors to address the backlog. However, according to VBA officials, by December 2017, VBA determined that one of the contractors was not able to meet the demand for exams, and the agency stopped sending new exam requests to this contractor. According to VBA, by late June 2018, it had discontinued all work with this contractor. VA officials said that to obtain additional exam capacity to make up for the two contractors’ shortages, they awarded short-term contracts in December 2017 to two other contractors who were providing exams in other VBA districts. VBA has not completed all required quarterly quality reviews and accompanying quarterly performance reports on contractors, according to VBA officials. These reviews and reports are key components to effectively assessing contractor performance in a timely manner. Specifically, in late June 2018, VBA officials said that they had conducted almost all their quality reviews for contracted exams completed in districts 1-5 during the second half of 2017, but that they needed to finalize the quality scores. They also said that they were beginning their quality reviews for contracted exams completed in 2018. At the time of our review, VBA had released one quarterly performance report for the fourth quarter of calendar year 2017, and officials said they were drafting others. 
VBA officials attributed delays in completing quality reviews and quarterly performance reports primarily to a lack of VBA quality review staff. The quarterly performance reports provide contractors with information on their performance against VBA quality and timeliness targets. For example, prior reports included detailed breakouts of quality errors by type and suggestions for performance improvements. As officials of one contractor said, delays in receiving quarterly performance reports limit VBA’s ability to provide contractors with timely and valuable feedback they can use to improve the quality of their exams. The delay in completing the quarterly reviews and reports also has implications for VBA’s ability to allocate exam requests across contractors and administer potential financial incentives across contractors. More specifically, VBA can use performance data to help determine how to allocate exams in each district that has two contractors, as outlined in the contracts. For example, VBA can decide to allocate more exams to the contractor with higher performance results. Further, the contracts outline how VBA can use performance data to administer financial incentives linked to performance targets. For example, VA is to provide a bonus to a contractor who meets or exceeds the 92 percent quality standard for a quarter, and meets or exceeds the 20- or 30-day timeliness standard. However, because of its delays in completing quality reviews and the lack of reliable data on contractor timeliness, VA has not yet administered these incentives. VA officials told us that the agency will determine if it will administer the 2017 incentives after it completes its performance assessments of contractors. VBA officials said they are currently hiring more staff to address the lag in quality reviews and subsequent reports to contractors, as well as to provide more oversight of contractors. 
At the time of our review, VBA did not have its authorized level of 15 quality analysts and 2 senior quality reviewers, but VBA officials said that they expected to complete hiring to bring the quality reviewer staff up to 17 full-time positions by the end of fiscal year 2018. In addition, VBA officials acknowledged that they did not have enough CORs in VBA’s exam program office to oversee the 14 exam contracts (including the two short-term contracts). As of April 2018, VBA officials said the office had 3 CORs, but hiring was expected to bring the number up to 14 by the end of fiscal year 2018. VBA officials said that they determined staffing levels for VBA’s contract exam program office—including CORs and exam quality reviewers—based on an assessment of the resources needed to expand the program, among other factors. Although VBA did not provide documentation outlining how it determined its workforce needs, the agency provided us with updated organizational charts in June 2018 demonstrating increased staff levels for the exam program office.

VBA’s lack of reliable data on the status of exams, including insufficient exams—exam reports that VBA returns to contractors to be corrected or clarified—limits its ability to effectively oversee certain contract provisions. VBA officials acknowledged that they could not calculate the number of completed exams that were once marked as insufficient or how long they had remained in that status due to the data limitations of the exam management system the agency used until spring 2018. The contracts require that contractors correct insufficient exams within a certain number of days and bill VBA for these exams at half price. However, VBA’s lack of complete and reliable information on insufficient exams hinders its ability to ensure that either of these requirements is met.
VBA officials also indicated that they were unable to fully assess individual contractor timeliness against VBA’s performance targets because the exam management system did not include the date the initial exam report was submitted to VBA, which is needed to calculate timeliness as outlined in the contracts. In March 2018, VBA began implementing a new exam management system designed to collect more comprehensive and accurate information on the status of exams. VA documentation on the new system shows that it will include detailed data on insufficient exams, which, according to VBA officials, should allow VBA to track whether contractors are properly discounting their invoices for those exams. However, in June 2018, VBA stated that three of its five contractors did not have complete functionality with VBA’s new exam management system. As a result, VBA officials said the agency still did not have complete data in the new system that would allow it to track insufficient exams. Officials said they were working to address these issues. More broadly, as described in VA system documents, the new system is designed to allow VBA to track more detailed data on exam completion dates and on other points throughout the exam process, such as dates for initial requests for clarification from contractors, and dates when appointments are scheduled. However, VBA is in the early stage of this transition, and agency officials stated that unexpected technical issues have affected communication between the new exam management system and other VBA systems. While they work to resolve the issues, VBA officials said that they have been manually moving some exam requests through the system each day. Further, VBA has not documented how it plans to ensure the additional data is accurate and use it to oversee contractor performance as outlined in the contracts, particularly for insufficient exams. 
Federal internal control standards state that management should use quality information to achieve key objectives. In addition, management should formulate plans to achieve those objectives. For example, agencies should assess collected data and ensure it is accurate so that it can be used to provide quality information to evaluate performance. In the absence of a plan for how it will capture and use data in its new exam management system to assess performance, VBA risks overpaying contractors for insufficient exams and continuing to inaccurately measure contractor timeliness. Further, according to agency officials, VBA has not conducted comprehensive analyses of performance data that would allow it to identify and address higher-level trends and program-wide challenges across contractors, geographic districts, exam types, or other relevant factors. Agency officials told us they have no plans to conduct such analyses. Federal internal control standards state that management should establish and operate monitoring activities and evaluate the results of those activities. In addition, management should evaluate deficiencies both at the individual and aggregate level. While VBA officials acknowledged that higher-level analyses could improve program oversight, they explained that analyzing performance data has been challenging due to the limitations of the exam management system. Thus, VBA has prioritized addressing contractor-specific problems and resolving long-standing pending exams over in-depth analysis of the performance data. However, with the expected improvements provided by VBA’s new exam management system and increased staff to manage the program and conduct quality reviews, VBA should be better positioned to conduct analyses of performance data in the future. 
By conducting higher-level analyses across contractors, geographic districts, exam types, or other relevant factors, VBA could make a more informed assessment of the challenges contractors and examiners face and where additional workload capacity and training may be needed. In addition, better analyses would allow VBA to determine if the contract exam program is achieving its quality and timeliness goals in a cost-effective manner.

VBA has a third-party auditor who verifies that all active contracted examiners have a current, valid, and unrestricted medical license in the state where they examined a veteran. The auditor provides regular reports of its audits to VBA. Specifically, the auditor verifies the license numbers of all active contracted examiners in the states where they perform VA disability compensation exams; National Provider Identifiers; and any prior or current sanctions or restrictions resulting in a revoked or suspended license at the time of a VA exam. In addition, contractors send VBA monthly reports of examiners’ medical license, specialty, and accreditation based on the contractors’ verification of this information. Every 2 months, VBA sends the auditor a consolidated report of this information covering all five contractors. The auditor verifies examiners’ information in that report before sending a final audit report to VBA, noting if the auditor was or was not able to verify examiners’ licenses. After reviewing the report, VBA contacts the contractors to gather additional information to resolve any issues, and in cases in which licensing requirements are not met, VBA stops using the examiner and offers new exams to veterans who have been seen by the examiner. VBA and auditing firm officials noted that audit results show that almost all examiners have current and valid licenses, and contractors are required to stop using those who do not meet licensing requirements.
VBA and auditing firm officials said that issues identified in the audits are usually due to typos or differences in how information is captured across different licensing databases. However, based on an audit, VBA provided an example of an examiner with a restricted medical license who had completed exams for one contractor. In this case, VBA notified the contractor, who then stopped using the examiner and said it was taking action to prevent errors in its license verification process from occurring again. In addition, the contractor reimbursed VBA for the cost of exams conducted by the examiner and also offered new exams to veterans who had been seen by the examiner.

VBA relies on contractors to verify that their examiners complete required training, and agency and contractor officials told us that VBA does not review contractors’ self-reported training reports for accuracy or request supporting documentation, such as training certificates, from contractors. As required by the contracts, contractors must track and maintain records demonstrating each examiner has completed required training. Each of VBA’s five contractors has its own process for ensuring that required training is provided to and completed by their examiners, but generally, contractors export the courses from VA’s online training system into their own online training systems for their examiners to access. The contractors, rather than VBA, access the contractor training systems to verify that examiners have completed the required training before they are approved to conduct exams. When requested by VBA, contractors are required to send VBA reports demonstrating that their examiners have met training requirements. As stated in the latest version of the contracts, contractors must immediately stop using any examiner found to have not completed required training, notify VBA, and re-examine the involved veterans at no cost to VBA, if requested by the agency.
Although VBA currently does not verify the accuracy of training self-reported by contractors to the agency, VBA officials said that they plan to enhance monitoring through spot checks of training records and a new training system. Specifically, in fiscal year 2019, VBA officials said they plan to start conducting spot checks of some examiners’ training records for accuracy and compliance during site visits to contractor headquarters and clinics. However, VBA has not provided details or documentation on these planned checks, such as how it will determine which records to review or the steps it will take to verify the accuracy of training records. VBA officials also said they are planning to develop an online system that would allow VBA to certify that examiners have completed required training, rather than relying on contractors for this information. However, as of July 2018, VBA had yet to determine when this system would be developed and had not documented plans to do so in order to use such information for monitoring training. VBA also said it would hire staff to manage contractor training, but has yet to do so. GAO’s prior work has emphasized tracking and other control mechanisms to ensure that all employees receive appropriate training. While VBA said it would enhance its monitoring of training records, documenting and implementing a plan and processes to verify training could help ensure examiners have met training requirements. Without such a plan, VBA risks using contracted examiners who are unaware of the agency’s process for conducting exams and reporting the results, which could lead to delays for veterans as a result of poor-quality exams that need to be redone and insufficient exam reports that need to be corrected.

VBA does not collect information from contractors or examiners to help determine if required training effectively prepares examiners to conduct high-quality exams and complete exam reports.
VBA has provided additional guidance to contractors for some specialty exams. However, VBA identified these issues after some contractors requested guidance in monthly meetings, rather than through VBA efforts to proactively or regularly collect information from contractors or examiners to inform potential changes to training. VBA is considering including a component in the online training system that would collect information on the effectiveness of required training. However, VBA has not outlined additional details on collecting such information. VBA officials said that VBA did not collect such information in the past, in part, because staff were focused on program oversight. To assess progress toward achieving results and to make changes to training if needed, GAO has found that evaluation is a key component of any training program. Given that VBA officials told us that the agency plans to issue new contracts in fall 2018, the number of contracted examiners who are new to VA processes may increase. Thus, collecting and assessing regular feedback on training from contractors and examiners, such as through surveys, discussion groups, or interviews, could help VBA determine if training effectively prepares examiners to conduct exams and complete exam reports. Further, information on the effectiveness of training could supplement data on contractor performance and results from VBA’s quality reviews to help assess if additional training courses are needed across contractors or for specific exam types.

As VBA increasingly relies on contractors to perform veterans’ disability compensation exams, it is important that the agency ensures proper oversight of these contractors. VBA’s lack of accurate and up-to-date data and reports on contractor performance hampers its ability to oversee the quality and timeliness of exams provided through contractors. VBA’s new exam management system provides opportunities to improve oversight through more comprehensive and accurate data.
These improvements might be limited, however, without a plan to use the data to produce the quality information needed by VBA to monitor insufficient exams, ensure it pays contractors the correct amount for those exams, and help it accurately calculate contractor timeliness. Further, the new system provides an opportunity for VBA to conduct analyses that could identify high-level trends and challenges facing the program across contractors and districts, such as delays in completing exams in specific parts of the country or contractor performance issues related to specific exam types. Despite these capabilities, VBA has not outlined plans for using improved information in this manner. Without doing so, the agency may miss opportunities to improve the program and, ultimately, its service to veterans. VBA could better prepare contracted examiners for their role by taking actions to ensure required training has been completed and by collecting information to assess and improve training. Such actions could help improve the quality of exams and exam reports, which could mitigate the need for exam rework and, ultimately, delays in determining veterans’ benefits. With VBA planning to award new contracts and potentially more new contracted examiners coming on board, verifying that required training is completed and collecting information on the effectiveness of training are critical. As VA continues to rely on contracted examiners, it is important that the agency is well positioned to carry out effective oversight of contractors to help ensure that veterans receive high-quality and timely exams.

We are making the following four recommendations to VA regarding contracted disability compensation exams.
The Under Secretary for Benefits should develop and implement a plan for how VBA will use data from the new exam management system to oversee contractors, including how it will capture accurate data on the status of exams and use it to (1) assess contractor timeliness, (2) monitor time spent correcting inadequate and insufficient exams, and (3) verify proper exam invoicing. (Recommendation 1)

The Under Secretary for Benefits should regularly monitor and assess aggregate performance data over time to identify higher-level trends and program-wide challenges. (Recommendation 2)

The Under Secretary for Benefits should document and implement a plan and processes to verify that contracted examiners have completed required training. (Recommendation 3)

The Under Secretary for Benefits should collect information from contractors or examiners on training and use this information to assess training and make improvements as needed. (Recommendation 4)

We provided a draft of our report to the Department of Veterans Affairs (VA) for its review and comment. VA provided written comments, which are reproduced in appendix II. VA concurred with all our recommendations and described the Veterans Benefits Administration’s (VBA) plans for taking action to address them. Regarding our first recommendation, VA outlined improvements in the information collected through VBA’s new exam management system, and said that VBA is currently testing a mechanism to validate exam invoices submitted by contractors. We noted these improvements to the system in our draft report sent to the agency for comment. We maintain that it will be important for VBA to take the next step of developing and implementing a plan for how it will use information from the new system to ensure both accurate timeliness data and proper exam invoicing.
Regarding our second recommendation, VA stated that VBA will use improved data in the new exam management system to regularly monitor and assess aggregate performance data, identify error trends, and monitor contractor performance and program-wide challenges. Regarding our third and fourth recommendations, VA stated that VBA plans to develop and implement a training plan for contractors that will include a mechanism to validate that required training has been completed and to assess the effectiveness of this training through feedback from trainees, contractors, and quality review staff in VBA’s contract exam program office. VA stated that VBA will use this data to improve the implementation and content of training. VA requested that GAO combine these two recommendations into one. However, we believe they are two distinct recommendations and have kept them as such. VBA could meet the intent of each recommendation with the development and implementation of one plan that covers both training verification and assessment, as outlined in its comments.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-7215 or curdae@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in Appendix III.
To evaluate VBA monitoring of contractor performance and VBA oversight of contracted examiners’ qualifications and training, we reviewed relevant federal laws, regulations, and VA guidance on the use of contracted examiners for disability compensation exams. To identify relevant contract provisions and requirements related to contractor performance, monitoring of such performance, licensing, and training, among other areas, we reviewed selected provisions of selected versions of the 12 current VA Medical Disability Examination contracts originally awarded in 2016, of 5 short-term contracts VA awarded in early 2017, and of 2 short-term contracts VA awarded in December 2017. With regard to the 12 current contracts, we reviewed the selected provisions in the originally awarded contract from 2016 and in the most recently amended version of the contract (as provided to us by VBA officials). Based on our review of these two versions of the contract, the selected provisions appeared to remain in place, unless noted otherwise in this report. However, we did not review the various contract modifications that, according to VBA, occurred in the interim period to confirm whether the selected provisions we focused on in our review actually remained in place during the period between the original contract and the most recent amendment. With regard to the 2 short-term contracts awarded in December 2017, we reviewed the selected provisions in the original December contract. According to VBA officials, there have been no subsequent modifications to these short-term contracts. With regard to the 5 short-term contracts awarded in early 2017, we only reviewed selected provisions relating to contractor quality and timeliness performance. Thus, any statements in this report relating to other aspects of the contracts are not based on these short-term contracts. 
Further, we only reviewed such provisions in the originally awarded short-term contract, and we did not review the various contract modifications that, according to VBA, occurred subsequently, to confirm that those provisions remained in place over time. However, we found that those selected provisions were generally in place in all of the various contracts we reviewed. To answer what is known about the timeliness of VBA contracted exams, we analyzed VBA data on disability compensation exams completed by five contractors between February 2017 and January 2018. VBA’s Office of Performance Analysis and Integrity provided exam-level data that it maintains in the agency’s Enterprise Data Warehouse, including data on the exam request date, the date the contractor accepted the request, the date the contractor completed the exam, and the VBA district where the exam was conducted, among other information. These data were created from data originally collected in VBA’s Centralized Administrative Accounting Transaction System (CAATS), which is the system that VBA used to request exams from contractors until spring 2018. According to VBA officials, the status of exam requests (e.g., pending, completed, cancelled) was not always accurate in CAATS. To create more reliable data and identify the most current information on the status of exams, the Office of Performance Analysis and Integrity identified and replaced missing or incorrect data in CAATS by running checks against other VBA systems, including the Veterans Benefits Management System, which maintains veterans’ benefits claims records. We assessed the reliability of the data we received from VBA by conducting electronic testing for missing data and errors, and by interviewing VBA officials about their data collection and quality control procedures. We determined that the data were sufficiently reliable for our purposes of reporting the time it took to complete exams within districts. 
Our analysis included 646,005 contracted exams completed between February 2017 and January 2018. We selected February 2017 as our starting point because it was the first full month of data available that covered most of VBA’s current contractors. To allow for 12 full months of data, we selected January 2018 as our ending point. In addition, we limited our population to include exams that were requested on or after January 13, 2017 in districts 1-5 or on or after April 1, 2016 in districts 6-7, based on the periods of performance in the contracts for those districts. We calculated timeliness at the level of the exam request. We calculated the number of days between the date an exam request was accepted by the contractor and the date the exam report was completed by the contractor. The timeliness values we calculated may include additional time needed to request and receive contractors’ corrections or clarifications on previously submitted exam reports. In our report, we refer to these exams as “insufficient exams.” VBA officials acknowledged that due to data limitations the new exam management system is intended to resolve, VBA’s CAATS system did not retain data on the number of exams that were once marked as insufficient or how long they remained in that status. While VBA officials acknowledged that this data limitation affects the agency’s ability to assess individual contractor timeliness against VBA’s performance targets outlined in the contracts, the limitation did not prevent us from analyzing the timeliness of contracted exams overall. The overall timeliness values we calculated represent the total time taken to complete exams regardless of whether additional time was needed for corrections. To put the timeliness values we calculated in context, we calculated the percentage of exams that were completed within VBA’s timeliness targets of 20 days for districts 1-5 and 30 days for districts 6-7 for the entire 12-month period of our analysis.
We also calculated the percentage of exams that were completed within other timeframes (e.g., 21-40 days, more than 40 days). According to the contracts, contractors are not expected to complete all exams within the timeliness target, but rather should meet the timeliness target on average in a given quarter, so our analysis was different from one that VBA might conduct in order to determine contract compliance. Because VBA does not retain detailed data on exam completion dates necessary to assess contractor performance against VBA’s timeliness targets, and because we calculated timeliness across contractors, the percentages we calculated do not represent an assessment of whether contractors met VBA’s timeliness targets. GAO did not conduct a legal analysis of the various contractors’ compliance with the contract requirements. Given that the start of VBA’s timeliness measure is the date the contractor accepts the exam request (rather than the date VBA requests the exam), we calculated alternate timeliness values to account for potential delays in accepting exam requests. VBA officials stated that VBA requests contractors accept or reject exam requests within 3 days. For all exam requests that contractors took more than 3 days to accept, we calculated alternate totals that included the additional days. For example, if a contractor took 5 days to accept the exam request and completed the exam 20 days later, we calculated an alternate total of 22 days to complete the exam. We used these alternate values to calculate adjusted percentages for each category presented in Figure 4 of our report. For example, using the alternate timeliness values, about 50 percent of exams in districts 1-5 would have been completed in 20 days and 53 percent in districts 6-7 would have been completed within 30 days, rather than the respective 53 percent and 56 percent shown in Figure 4. 
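The standard and alternate timeliness calculations described above can be expressed as a short sketch (Python is used for illustration only; the function name and example dates are hypothetical, and the 3-day window reflects VBA's stated expectation that contractors accept or reject exam requests within 3 days):

```python
from datetime import date

# VBA requests that contractors accept or reject exam requests within 3 days
ACCEPTANCE_WINDOW_DAYS = 3

def exam_timeliness(requested: date, accepted: date, completed: date) -> tuple[int, int]:
    """Return (standard, alternate) timeliness in days for one exam request.

    Standard: days from contractor acceptance to exam report completion.
    Alternate: adds any acceptance delay beyond the 3-day window.
    """
    standard = (completed - accepted).days
    acceptance_delay = (accepted - requested).days
    extra = max(0, acceptance_delay - ACCEPTANCE_WINDOW_DAYS)
    return standard, standard + extra

# Example from the text: the contractor took 5 days to accept the request
# and completed the exam 20 days after acceptance (dates are hypothetical).
std, alt = exam_timeliness(date(2017, 3, 1), date(2017, 3, 6), date(2017, 3, 26))
# std = 20; alt = 22 (the 2 days beyond the acceptance window are added)
```

Under this sketch, an exam request accepted within the 3-day window yields identical standard and alternate values.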
Moreover, we found that about 82 percent of exam requests during our period of analysis were accepted within 3 days. To report more recent data on exams that were accepted but not yet completed by contractors—pending contracted exams—VBA provided aggregate data on the number of pending exams as of June 25, 2018. For example, for districts 1-5, it provided data on the number of exams that had been pending for 20 days or fewer, 21-40 days, 41-60 days, 61-100 days, and more than 100 days. We calculated percentages based on the VBA-provided totals.

Elizabeth Curda, (202) 512-7215 or curdae@gao.gov.

In addition to the contact named above, Nyree Ryder Tee (Assistant Director); Teresa Heger (Analyst-in-Charge); Alex Galuten; Justin Gordinas; and Greg Whitney made key contributions to this report. Also contributing to this report were James Bennett, Matthew T. Crosby, Teague Lyons, Sheila R. McCoy, Jessica Orr, Claudine Pauselli, Samuel Portnow, Monica Savoy, Almeta Spencer, and April Van Cleef.
In 2016, VBA awarded 12 contracts to five private firms for up to $6.8 billion lasting up to 5 years to conduct veterans' disability medical exams. Both VBA contracted medical examiners and medical providers from the Veterans Health Administration perform these exams, with a growing number of exams being completed by contractors. Starting in 2017, VBA contracted examiners conducted about half of all exams. GAO was asked to review the performance and oversight of VBA's disability medical exam contractors. This report examines (1) what is known about the quality and timeliness of VBA contracted exams; (2) the extent to which VBA monitors contractors' performance; and (3) how VBA ensures that its contractors provide qualified and well-trained examiners. GAO analyzed the most recent reliable data available on the quality and timeliness of exams (January 2017 to February 2018), reviewed VBA and selected contract documents and relevant federal laws and regulations, and interviewed agency officials, exam contractors, an audit firm that checks examiners' licenses, and selected veterans service organizations. The Veterans Benefits Administration (VBA) has limited information on whether contractors who conduct disability compensation medical exams are meeting the agency's quality and timeliness targets. VBA contracted examiners have completed a growing number of exams in recent years (see figure). VBA uses completed exam reports to help determine if a veteran should receive disability benefits. VBA reported that the vast majority of contractors' quality scores fell well below VBA's target—92 percent of exam reports with no errors—for the first half of 2017. Since then, VBA has not completed all its quality reviews, but has hired more staff to do them. VBA officials acknowledged that VBA also does not have accurate information on contractor timeliness. 
VBA officials said the exam management system used until spring 2018 did not always retain the initial exam report completion date, which is used to calculate timeliness. In spring 2018, VBA implemented a new system designed to capture this information. VBA monitoring has addressed some problems with contractors, such as reassigning exams from contractors that did not have enough examiners to those that did. However, the issues GAO identified with VBA's quality and timeliness information limit VBA's ability to effectively oversee contractors. For example, VBA officials said they were unable to track the timeliness of exam reports sent back to contractors for corrections, which is needed to determine if VBA should reduce payment to a contractor. The new system implemented in spring 2018 tracks more detailed data on exam timeliness. However, VBA has not documented how it will ensure the data are accurate or how it will use the data to track the timeliness and billing of corrected exam reports. VBA also has no plans to use the new system to analyze performance data to identify trends or other program-wide issues. Without such plans, VBA may miss opportunities to improve contractor oversight and the program overall. A third-party auditor verifies that contracted examiners have valid medical licenses, but VBA does not verify if examiners have completed training nor does it collect information to assess training effectiveness in preparing examiners. While VBA plans to improve monitoring of training, it has not documented plans for tracking or collecting information to assess training. These actions could help ensure that VBA contractors provide veterans with high-quality exams and help VBA determine if additional training is needed. 
GAO recommends VBA (1) develop a plan for using its new data system to monitor contractors' quality and timeliness performance, (2) analyze overall program performance, (3) verify that contracted examiners complete required training, and (4) collect information to assess the effectiveness of that training. The Department of Veterans Affairs agreed with GAO's recommendations.
As part of the annual budget formulation process for each fiscal year, DOD establishes, for each of nine foreign currencies, a foreign currency budget rate (units of foreign currency per one United States (U.S.) Dollar) to use when developing O&M and MILPERS funding requirements for overseas expenditures. Foreign currency budget rates for a particular fiscal year are established approximately 18 months prior to the fiscal year when overseas obligations will be incurred and disbursements made. For example, in June 2015, OUSD(C) issued guidance to, in part, instruct the services on the foreign currency rates to use in building their fiscal year 2017 budgets. In February 2016, as part of the President’s budget, DOD submitted its proposed fiscal year 2017 budget to Congress, and it began incurring obligations against subsequently appropriated amounts on October 1, 2016. DOD has used various methodologies for establishing the foreign currency budget rates. In 2005, we reviewed DOD’s methodology for developing its foreign currency budget rates and reported that DOD’s approach for estimating its foreign currency requirements for the fiscal year 2006 budget was a reasonable approach for forecasting foreign currency rates that could produce a more realistic estimate than its historical approach. In its fiscal year 2006 through 2016 budget requests, DOD used a centered weighted average model that combined both a 5-year average of exchange rates and an average of the most recently observed 12 months of exchange rates. For its fiscal year 2017 request, DOD adjusted its methodology to establish the foreign currency budget rates. Specifically, DOD established its foreign currency rates by calculating a 6-month average of Wall Street Journal rates published every Monday from May 25, 2015, to November 16, 2015.
According to an OUSD(C) official, the 6-month average more closely represented foreign currency exchange rates experienced by the department during budget formulation, and it accounted for the strength of the U.S. Dollar, which had increased as compared with its historical 5-year average. DOD’s analysis found that the use of the 5-year historical average would have resulted in substantial gains when compared with gains expected from application of the 6-month average. More specifically, DOD projected gains of about $1 billion using the 5-year average of rates. During the fiscal year for which a budget is developed, DOD incurs obligations for its overseas O&M and MILPERS activities. Those obligations are recorded using the foreign currency budget rates. DOD uses various methods for selecting foreign currency rates to liquidate those obligations through disbursements, which may differ from the budget rates. DOD’s preferred payment method for foreign currency transactions is the Department of Treasury’s (Treasury) comprehensive international payment and collection system—the International Treasury Services (ITS.gov) system—which serves federal agencies making payments in nearly 200 countries. ITS.gov offers a number of rates, including advanced rates available up to 5 days in advance of disbursement, and the spot rate. The spot rate is the price for foreign currencies for delivery in 2 business days. While advanced rates, like spot rates, are based on the current market rate, advanced rates at the time they are selected are generally higher than the spot rate, with the 5-day advanced rate being the highest, because the rates are locked in ahead of the actual value date. While the spot rate can be more cost-effective, it requires immediate transaction processing, which may not be feasible for all disbursements.
Differences between obligations incurred at the foreign currency budget rates and the amounts that DOD actually disburses drive gains or losses in the appropriated amounts DOD has available for its planned overseas expenditures. For example, if DOD budgeted for the U.K. Pound at a rate of .6289 (that is, 1 U.S. Dollar buys .6289 U.K. Pounds) as it did in fiscal year 2016, and the rate experienced at the time of disbursement was .6845, then DOD would have requested more funds than were actually needed for transactions involving the U.K. Pound. That would have resulted in a gain from the transaction—meaning that DOD would need less funding than was budgeted for the transaction. Conversely, a current rate that is lower than what was budgeted will result in a loss—and DOD would require more funds than were budgeted for the transaction. Within each of the services’ O&M and MILPERS appropriations accounts, amounts are available for overseas activities. Amounts obligated for overseas activities, along with associated foreign currency gains and losses, are managed by the services as part of the overall management of their O&M and MILPERS appropriations accounts. Service components use foreign currency fluctuation accounts within their O&M and MILPERS appropriations to manage realized gains and losses in direct programs due to fluctuations in foreign exchange rates. The service-level foreign currency fluctuation accounts are maintained at various budgetary levels within the service components. In fiscal year 1979, Congress appropriated $500 million to establish the FCFD account for purposes of maintaining the budgeted level of operations in the MILPERS and O&M appropriation accounts by mitigating substantial gains or losses to those appropriations caused by foreign currency rate fluctuations. FCFD appropriations are different from the O&M and MILPERS appropriations in two ways.
First, FCFD account amounts are no-year amounts, meaning that they are available until expended, while in general, O&M and MILPERS appropriations are 1-year amounts and expire at the end of the fiscal year for which they were appropriated. Expired O&M and MILPERS amounts remain available only for limited purposes for 5 additional fiscal years. At the end of the 5-year expired period, any remaining O&M or MILPERS amounts, obligated or unobligated, are canceled and returned to Treasury. Second, FCFD account amounts may be used only to pay obligations incurred because of fluctuations in currency exchange rates of foreign countries, while O&M amounts are available for diverse expenses necessary for the operation and maintenance of the services and MILPERS amounts are available for service personnel-related expenses, such as pay, permanent changes of station travel, and expenses of temporary duty travel, among other purposes. Amounts from the FCFD account may be transferred to service-level foreign currency fluctuation accounts within O&M and MILPERS appropriation accounts to offset losses in buying power due to unfavorable differences between the budget rate and the foreign currency exchange rate prevailing at the time of disbursement. The FCFD account may be replenished in several ways. Amounts transferred from the FCFD to O&M and MILPERS appropriations may be returned when not needed to liquidate obligations because of subsequent favorable foreign currency rates in relation to the budget rate, or because other amounts have become available to cover obligations. A transfer back to the FCFD of unneeded amounts must be made before the end of the second fiscal year of expiration following the fiscal year of availability of the O&M or MILPERS appropriation to which the funds were originally transferred. Amounts may also be transferred to the FCFD account even if they did not originate there. 
Specifically, DOD may transfer to the FCFD account any unobligated O&M and MILPERS amounts unrelated to foreign currency exchange fluctuations so long as the transfers are made not later than the end of the second fiscal year of expiration of the appropriation. While multiple transfers of these unobligated amounts may be made during a fiscal year, any such transfer is limited so that the amount in the FCFD account does not exceed the statutory maximum of $970 million at the time of transfer. When the FCFD account balance is at the maximum balance, the services normally retain in their service-level O&M and MILPERS foreign currency fluctuation accounts any gains resulting from favorable foreign currency rates. Finally, any amounts transferred, whether from the FCFD account to an O&M or MILPERS account, or from an O&M or MILPERS account to the FCFD, are merged with the account and assume the characteristics of that account, including the period of availability of the funds contained in the account. Visibility of service-level foreign currency fluctuation account and FCFD transactions is maintained through the services’ accounting systems and execution reports. DOD uses the following reports to track its foreign currency funds:
Foreign Currency Fluctuations, Defense Report (O&M): provides data on O&M foreign currency gains and losses for each service, by currency, including data on projected gains or losses for any remaining obligations that have not yet been liquidated and disbursed at the time of the report.
Foreign Currency Fluctuation, Defense Report (MILPERS): provides data on MILPERS foreign currency gains and losses for each service, by currency, including data on projected gains or losses for any remaining obligations that have not yet been disbursed at the time of the report.
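To make the gain-or-loss arithmetic concrete, the fiscal year 2016 U.K. Pound example described earlier can be sketched as follows. This is an illustrative calculation only, not DOD's accounting logic; the rates are the ones quoted in the text, but the 1-million-pound obligation is a hypothetical amount.

```python
# Illustrative sketch of the budget-rate arithmetic described in the text;
# this is not DOD's accounting system. Rates are expressed as units of
# foreign currency per 1 U.S. Dollar, as in the report.

def dollars_needed(foreign_amount, rate):
    """Dollars required to obtain `foreign_amount` units at `rate` units per dollar."""
    return foreign_amount / rate

def gain_or_loss(foreign_amount, budget_rate, disbursement_rate):
    """Positive result = gain (fewer dollars needed than budgeted); negative = loss."""
    return (dollars_needed(foreign_amount, budget_rate)
            - dollars_needed(foreign_amount, disbursement_rate))

# Fiscal year 2016 U.K. Pound example from the text: budgeted at .6289,
# disbursed at .6845. The 1,000,000-pound obligation is hypothetical.
gain = gain_or_loss(1_000_000, budget_rate=0.6289, disbursement_rate=0.6845)
print(f"Gain on a 1,000,000-pound obligation: ${gain:,.0f}")
```

Because the dollar buys more pounds at disbursement than at budgeting, fewer dollars are needed than were requested, producing a gain; reversing the two rates produces a loss, mirroring the report's description.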
In 2013 we analyzed and reported on carryover balances in federal accounts, which amounted to $2.2 trillion in fiscal year 2012, and we found that greater examination of carryover balances by an agency provides opportunities for enhanced oversight of their management of federal funds and may help identify opportunities for potential budgetary savings. Carryover balances are composed of both obligated and unobligated amounts. Only accounts with multi-year or no-year amounts, such as the FCFD, may carry over amounts that remain legally available for new obligations from one fiscal year to the next. DOD’s carryover balances would include FCFD account balances carried from one year to the next. DOD’s FCFD account is composed of unobligated carryover amounts that accumulate when unneeded for transfer to O&M and MILPERS accounts to cover foreign currency fluctuations. FCFD unobligated carryover balances include any expired, unobligated balances from the military services’ O&M and MILPERS accounts, which can include any gains due to favorable foreign currency fluctuations that are not used to cover other losses and that are transferred into the FCFD. DOD revised its foreign currency budget rates in fiscal years 2014 through 2016, which resulted in budget rates in these years that were more closely aligned with rates published by Treasury. Furthermore, the revised budget rates in fiscal years 2014 through 2016 decreased DOD’s projected O&M and MILPERS funding needs. The revised budget rates also decreased potential gains and losses in the amount of funds that DOD had available for its planned overseas expenditures. DOD revised its foreign currency budget rates in fiscal year 2014 and continued to do so in fiscal years 2015 and 2016 before making adjustments to its methodology in fiscal year 2017. 
According to an OUSD(C) official, the methodology developed in 2017 resulted in budget rates that were more closely aligned with market rates than in previous years, making revision of the 2017 budget rates unnecessary. DOD’s revisions to its foreign currency budget rates in fiscal years 2014 through 2016 resulted in rates that more closely aligned with those published by Treasury. Further, they decreased the expected gains that would have otherwise resulted from a substantial increase in the strength of the U.S. Dollar, in fiscal years 2014 through 2016, relative to other foreign currencies from the time the budget rates were set as compared with the rates available once the fiscal year began. Prior to fiscal year 2014, DOD did not revise its foreign currency budget rates. DOD officials did not provide an explanation for why the budget rates for fiscal years 2009 through 2013 were not revised. DOD developed, in November 2015, a set of standard operating procedures that describe the methodology it used for formulating budget rates for the nine foreign currencies included in its budget submission. These procedures also state that DOD is required to update the budget rates once an appropriation is enacted for the fiscal year. For example, if Congress reduces DOD’s appropriations due to favorable foreign currency rates, such as the $1.5 billion reduction in DOD’s total fiscal year 2016 appropriations, OUSD(C) then revises the budget rates to absorb the reduced funding levels. OUSD(C) officials stated that other factors are also considered when determining whether to revise the foreign currency budget rates, and that the department communicates the revised budget rates to the DOD components and Congress. For example, OUSD(C) assesses the value of each of the nine foreign currencies used to develop the budget request relative to the strength of the U.S. Dollar during the fiscal year. 
An OUSD(C) official also noted that the effects that the rate changes would have across these foreign currencies are also considered prior to submitting recommended rate revisions to the OUSD(C) leadership for approval. The official stated that one currency may be experiencing a loss, while another is experiencing a gain, which can affect whether to revise the rates and what those revisions should be. Additionally, the OUSD(C) official stated that “significant” projected gains or losses could drive a revision to the foreign currency budget rates, and that an informal $10 million threshold for projected gains and losses is used to determine when the foreign currency budget rates are revised. According to OUSD(C) officials, DOD components and Congress were notified when the budget rates were revised during fiscal years 2014 through 2016, including an explanation for why the rates were revised. OUSD(C) also includes the budget rates for each of the nine foreign currencies on its website and identifies any instances in which the budget rates were revised with the effective date of any rate revisions. Our analysis of DOD’s use of revised budget rates during fiscal years 2014 through 2016 found that the revised budget rates for those years were more closely aligned with rates published by Treasury. More specifically, for the nine foreign currencies included in DOD’s budget, our analysis comparing DOD’s initial and revised budget rates for fiscal years 2009 through 2017 with average Treasury rates for these years found that DOD’s budget rates differed from Treasury rates by less than 10 percent in about 64 percent of the total 162 occurrences we examined. 
While we are unaware of any criteria that suggest how closely DOD’s foreign currency budget rates should align with market rates, we used 10 percent as a basis for our analysis because Treasury’s guidance states that amendments to its published exchange rates are required if rates differ from current rates by 10 percent or more. We further examined these occurrences to determine what the differences were between the DOD and Treasury rates before and after DOD began revising its budget rates in fiscal year 2014. Of the 162 occurrences we reviewed, there were 90 occurrences included in our comparison for fiscal years 2009 through 2013, and 72 occurrences were included in our comparison for fiscal years 2014 through 2017. Our analysis shows the following: For fiscal years 2014 through 2017, DOD’s budget rates for its nine foreign currencies differed from Treasury rates by less than 10 percent in about 71 percent of the occurrences. This increased from about 59 percent of the occurrences for the period of fiscal years 2009 through 2013, before DOD began revising its rates after the fiscal year began. For fiscal years 2014 through 2017, DOD’s budget rates differed from Treasury’s rates by 10 percent or more after DOD began revising its rates in fiscal year 2014 in about 29 percent of the occurrences, which is a decrease from about 41 percent of the occurrences prior to fiscal year 2014. Figure 2 below shows the number of occurrences in which DOD’s initial and revised rates differed from Treasury rates by less than 10 percent, and the occurrences in which DOD’s rates differed from Treasury rates by 10 percent or more. Occurrences in which the rates differed by less than 10 percent are the most closely aligned with Treasury rates.
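The 10 percent alignment check used in this comparison can be expressed as a small classification rule. The sketch below is illustrative; the sample (DOD rate, Treasury rate) pairs are hypothetical and are not drawn from the actual 162 occurrences examined.

```python
# Hedged sketch of the 10 percent alignment check described in the text;
# the sample rate pairs below are hypothetical, not actual DOD or Treasury rates.

def differs_by_10_percent(dod_rate, treasury_rate):
    """True if the budget rate is 10 percent or more above or below the Treasury rate."""
    return abs(dod_rate - treasury_rate) / treasury_rate >= 0.10

# Hypothetical occurrences: (DOD budget rate, Treasury rate), both in
# units of foreign currency per U.S. Dollar.
occurrences = [(0.6289, 0.6845), (0.95, 0.88), (108.0, 121.0)]
aligned = sum(1 for dod, treas in occurrences if not differs_by_10_percent(dod, treas))
print(f"{aligned} of {len(occurrences)} occurrences within 10 percent of the Treasury rate")
```

Counting the share of occurrences falling under the threshold, as above, is how the 71 percent and 29 percent figures in the analysis can be read: each currency-year pair is classified and then tallied.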
According to DOD officials, the differences between DOD’s foreign currency budget rates and Treasury rates are driven primarily by market volatility (that is, the differences in the foreign currency rates from when DOD formulates its budget rates, prior to the fiscal year, and the foreign currency rates determined by Treasury when obligated amounts are liquidated through disbursements during the fiscal year). According to the OUSD(C) official responsible for formulating and revising the foreign currency budget rates, the delay that occurs between the time when a budget rate is set (approximately 18 months prior to the beginning of a particular fiscal year) and the actual fiscal year is a major factor for why the budget rate may be revised. According to the official, the market rates experienced during fiscal years 2014 through 2016 were substantially different from those expected when the budget rates for those years were developed. Therefore, DOD revised its budget rates during these years to more closely align with market rates experienced. Specifically, this official stated that DOD revised its budget rates during fiscal years 2014 through 2016 to decrease the expected gains that would have otherwise resulted during these fiscal years from a substantial increase in the strength of the U.S. Dollar relative to other foreign currencies from the time the budget rates were set as compared with more favorable rates available once the fiscal year began. In order to more closely align its budget rates with market rates, DOD introduced a new methodology to establish the foreign currency budget rates for fiscal year 2017 because DOD anticipated approximately $1 billion in projected gains if it used the prior methodology. As a result of this change in the methodology, according to the OUSD(C) official, DOD did not experience substantial gains or losses in fiscal year 2017. Therefore, DOD did not revise its foreign currency budget rates during fiscal year 2017. 
However, as previously stated, the official did not provide an explanation as to why the budget rates for fiscal years 2009 through 2013 were not revised. DOD’s use of revised foreign currency budget rates decreased DOD’s projected O&M and MILPERS funding needs and any potential gains and losses that would have occurred due to foreign currency fluctuations during fiscal years 2014 through 2016. Because DOD uses its budget rates to establish its projected annual O&M and MILPERS funding requirements for planned overseas expenditures, any revisions to the budget rates affect DOD’s estimate of its funding needs. For example, our analysis shows that as a result of revising its budget rates during fiscal years 2014 through 2016, DOD’s projected funding needs for the period of fiscal years 2009 through 2017 decreased from about $60.2 billion to about $57.5 billion—a decrease of about $2.7 billion. To further show the effect that changing foreign currency rates could have on DOD’s projected funding for planned overseas expenditures for fiscal years 2009 through 2017, we also compared DOD’s projected O&M and MILPERS funding needs, based on its initial and revised foreign currency budget rates, against projected funding needs based on the use of foreign currency rates published by Treasury during the fiscal year. Our analysis shows that DOD’s projected O&M and MILPERS foreign currency funding needs using Treasury rates would have been about $58.4 billion, or about $885 million more than the $57.5 billion that DOD had projected using its initial and revised budget rates. DOD also uses foreign currency budget rates to calculate gains or losses attributable to foreign currency fluctuations. Specifically, DOD determines gains and losses due to foreign currency fluctuations by comparing the budget rate (that is, initial or revised budget rate) used to incur obligations against a more current market rate at the time it liquidates its obligations through disbursements. 
Therefore, revisions to the budget rates not only change DOD’s projected O&M and MILPERS funding requirements for the fiscal year in which the revisions occur, but also change the baseline from which the potential gains or losses would result when DOD liquidates its overseas O&M and MILPERS obligations through disbursements. For example, in fiscal year 2016, Congress reduced DOD’s total appropriations by $1.5 billion. As a result of this reduction and favorable foreign currency rates, DOD revised its fiscal year 2016 budget rates in February 2016 and applied the revised foreign currency budget rates in its calculations of gains and losses due to foreign currency fluctuations since the beginning of the fiscal year to absorb the reduced funding level. In applying the revised budget rates, a $30 million gain DOD had previously projected became a projected loss of about $186.2 million. The use of revised budget rates also affects the movement of funds from the FCFD account. For example, if the use of the revised budget rate creates a loss and DOD is unable to cover the increased costs to its O&M or MILPERS appropriations, funds from the FCFD account may be used to cover its planned overseas expenditures. DOD has taken some steps to reduce costs in selecting foreign currency rates to liquidate its obligations through disbursements. However, DOD organizations are not always selecting the most cost-effective rates to convert U.S. Dollars, and DOD has not determined whether opportunities exist to achieve additional efficiencies when making disbursements. DOD liquidates its obligations through disbursements for overseas expenditures using Treasury’s ITS.gov system, which provides DOD organizations with a choice of foreign currency rates to apply when making disbursements in a foreign currency. The foreign currency rate chosen determines how many U.S. Dollars must be paid for the transaction. 
Treasury officials explained that customers may choose either the spot rate or an advanced rate. The spot rate is the price for foreign currencies for delivery in 2 business days. Treasury officials explained that advanced rates are exchange rates that are “locked in” and guaranteed by the bank processing the disbursement 5, 4, or 3 days in advance of payment processing, which is known as the “value date” of a disbursement. Normally, the cost of the rate increases the further from the date of disbursement that it is locked in. While DOD often uses a 5-day advanced rate to make its disbursements, the other rate options available, such as a 3-day advanced and a spot rate, can be more cost-effective. We analyzed data provided by Treasury from its ITS.gov system and found that for disbursements made during the period of June and July 2017, the 5-day advanced rate was more costly than the 3-day advanced rate. In instances where the spot rate was available, we found that it was also more cost-effective than either the 3-day or 5-day advanced rates. For example, for those transactions processed through ITS.gov on June 13, 2017, DOD would have paid 1 U.S. Dollar for .881 European Euros if using the 5-day advanced rate; .883 European Euros if using the 3-day advanced rate; and .889 European Euros if using the spot rate. In the case of the Army, an Army Financial Management Command official provided us information indicating that the service has estimated potential cost savings that would result from more consistently selecting 3-day advanced rates through the ITS.gov system to make overseas disbursements, rather than the 5-day advanced rate. More specifically, the Army estimated between $8 million and $10 million in annual savings by transitioning from a 5-day to a 3-day advanced rate when selecting foreign currency rates. According to officials, the Army has transitioned all paying locations to the 3-day advanced rate.
The Army estimates that these locations have produced $6.04 million in savings through February 2018. Although the Army indicated that it also planned to analyze whether use of the spot rate was feasible, it had not yet done so at the time of our review. Data provided to us by Treasury from its ITS.gov system indicate that in June and July of 2017, the Air Force used the 5-day advanced rate exclusively for its disbursements, while the Navy and Marine Corps relied on both the 5-day and the 3-day advanced rates. Our analysis of these data shows the Air Force would have achieved total savings for those 2 months of about $258,000 if it had made its disbursements using the 3-day versus the 5-day advanced rate. The savings resulting from each transaction varied based on the amount of the transaction. For example, on June 13, 2017, the Air Force disbursed a payment exceeding $3.7 million and would have saved more than $9,000 for that transaction if the 3-day advanced rate had been used. For the same single transaction, if the spot rate had been used instead of the 5-day advanced rate, the Air Force would have saved more than $31,000. The savings associated with the Navy’s and Marine Corps’ disbursements for the same 2-month period showed the potential for less dramatic savings of less than $100 because the Navy and Marine Corps used the 3-day advanced rate as opposed to the 5-day advanced rate for most of their disbursements. Where information on the spot rate was available, its use, as opposed to either the 5-day or 3-day advanced rate, would have resulted in additional savings opportunities for those 2 months. While these examples are illustrative of cost savings opportunities in June and July 2017, Treasury data show that in fiscal year 2016, DOD disbursed more than $11.8 billion through ITS.gov and, as of July 2017, had disbursed more than $9.6 billion through ITS.gov.
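Using the June 13, 2017 euro rates quoted above (.881, .883, and .889 euros per dollar for the 5-day advanced, 3-day advanced, and spot rates), the cost difference between rate choices can be sketched as follows. The 3.3-million-euro disbursement is a hypothetical round figure chosen only to approximate the scale of the $3.7 million Air Force payment; it is not the actual transaction amount.

```python
# Illustrative comparison of ITS.gov rate choices using the June 13, 2017
# euro rates quoted in the text. The disbursement amount is hypothetical.

RATES = {"5-day advanced": 0.881, "3-day advanced": 0.883, "spot": 0.889}

def usd_cost(euro_amount, rate):
    """Dollars needed to deliver `euro_amount` euros at `rate` euros per dollar."""
    return euro_amount / rate

euros = 3_300_000  # hypothetical, roughly the scale of the $3.7 million payment
costs = {name: usd_cost(euros, rate) for name, rate in RATES.items()}
savings_3day = costs["5-day advanced"] - costs["3-day advanced"]
savings_spot = costs["5-day advanced"] - costs["spot"]
print(f"3-day advanced saves ${savings_3day:,.0f}; spot saves ${savings_spot:,.0f}")
```

Because a higher euros-per-dollar rate means each dollar buys more euros, the spot rate (when feasible) costs the fewest dollars, which is consistent with the savings pattern the analysis describes.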
Our analysis suggests that DOD could achieve further cost savings by more consistently selecting cost-effective foreign currency rates, such as the 3-day advanced or spot rates, with which to make disbursements. In selecting foreign currency rates, DOD’s Financial Management Regulation states that disbursements should be computed to avoid gains or deficiencies (losses) due to fluctuations in rates of exchange to the greatest extent possible. If there is no rate of exchange established by agreement between the U.S. government and the foreign country, then foreign currency transactions are to be conducted at the prevailing rate. The prevailing rate of exchange is the most favorable rate legally available for acquisition of foreign currency for official disbursement and other exchange transactions. Additionally, GAO’s Standards for Internal Control in the Federal Government calls for management to periodically review policies, procedures, and related control activities for continued relevance and effectiveness in achieving the entity’s objectives or addressing related risks. DOD disbursement organizations have flexibility in selecting foreign currency rates to use when making disbursements using ITS.gov. There is no DOD-wide requirement for the services to review the rates used to make disbursements and, except for the Army, the services have not conducted such a review. This step is necessary to determine whether there are opportunities for savings by more consistently selecting cost-effective foreign currency rates. We discussed disbursement processes with DOD and Air Force, Navy, and Marine Corps financial management officials, including the factors considered when selecting foreign currency rates.
In addition, a Defense Finance and Accounting Service official noted that currencies can have criteria specifying when a payment is made and provided us the ITS.gov user’s guide, which addresses “special currency requirements,” such as those that would drive advanced payment for a currency. For example, the user’s guide indicates that payment for transactions involving the Afghanistan Afghani must be made 2 days in advance of the value date, and cannot be made on a Friday. However, information that is contained in the ITS.gov user’s guide and that we received from a Treasury official indicates that none of the nine foreign currencies for which DOD budgets place restrictions on when payment must be made; therefore, this consideration should not drive the use of a specific rate at disbursement. Marine Corps financial management officials told us that the foreign currency rate selected at disbursement is at the discretion of the disbursing officer based on operational requirements, with the understanding that the most favorable rate for the government is the preference, while balancing mission requirements and the time necessary to process the transaction. These officials acknowledged that the 3-day advanced rate can be more cost-effective to the government but indicated that there are occasions when the 5-day advanced rate should be used because it provides more time to process the payments from deployed locations operating in different time zones or with limited communication capabilities. However, we found that OUSD(C) officials and financial management officials with the headquarters of the Air Force, Navy, and Marine Corps were not involved in disbursement, were unaware of what rates were being used at disbursement, and had not reviewed the rationale for selecting one rate over another. For example, Air Force and Navy headquarters officials we spoke with were unable to provide insight as to what drives the decision to use one rate over another.
One Navy financial management official told us that he was unaware of any Navy policy that directs a specific rate to be used when disbursing funds, and suggested that the absence of such a policy provides the flexibility for officials to determine which approach is best. Headquarters, Marine Corps officials also stated that they did not monitor foreign currency rates used for disbursements or the reasons why one rate was selected over another. Based on our inquiry, officials indicated that they would analyze the foreign currency rates used for disbursements in 2017 and whether opportunities existed to achieve savings by using other rates available through ITS.gov. A Marine Corps official subsequently provided us with information that showed that two of three disbursing offices that currently utilize ITS.gov for disbursements use the 3-day advanced rate exclusively and one uses the 5-day advanced rate. The official noted that a technical issue within ITS.gov has restricted the disbursing office currently using the 5-day advanced rate from choosing any other rate, but that the service was further assessing options to correct the issue. In our conversations with an official in OUSD(C) about why the other services had not reviewed the foreign currency rates used for disbursements to determine what was being paid through ITS.gov and whether there was an opportunity for savings, the official commented that OUSD(C) had not directed the services to conduct any reviews in this area. This official was unaware that different foreign currency rates were used to make disbursements, and assumed that the military services all make disbursements in the same way. However, as discussed above, the services are using different rates resulting in inconsistency across the department. The official further indicated that DOD could perform a review to determine the cost differences of using one disbursement rate over another. 
Absent a review of the rates the services are using in making disbursements and whether cost savings could be achieved by more consistently selecting the most cost-effective foreign currency rates available for use at disbursement, DOD is at risk of paying more to convert U.S. Dollars for overseas expenditures than would otherwise be required. In fiscal years 2009 through 2016, DOD used the FCFD account to cover losses that the services experienced due to foreign currency fluctuations in 6 of the 8 years we reviewed. However, DOD does not effectively manage the FCFD account balance based on projected gains or losses. Transfers of expired unobligated balances from MILPERS and O&M accounts into the FCFD account have been made to replenish the account balance to the statutory limit of $970 million, without consideration of projected losses due to foreign currency fluctuations. Furthermore, DOD's financial reporting on foreign currency fluctuations for fiscal years 2009 through 2016 contains incomplete and inaccurate information. In fiscal years 2009 through 2016, DOD transferred approximately $1.92 billion out of the FCFD account to cover losses that the services experienced due to foreign currency fluctuations in 6 of the 8 years we reviewed. For these years, DOD transferred funds from the FCFD account to the services' MILPERS and O&M accounts during the fiscal year in which the funds were obligated for overseas expenses. The transfer amounts were based on both losses realized from actual disbursements and projected losses for any remaining obligations to be liquidated. The projected losses were calculated based on the current foreign currency market rates as of the time of the calculation. Based on the service-level data we reviewed, all of the services reported that they experienced losses in at least 5 of the fiscal years we reviewed.
For example, the Army reported that it experienced losses in its MILPERS account for 5 of 8 years, while the Marine Corps reported that it experienced losses in its O&M and MILPERS accounts in each of the 8 years. In addition to the transfers to cover losses within the services’ MILPERS and O&M accounts, in fiscal year 2013 DOD transferred an additional $969 million to the Defense Working Capital Fund to offset fuel cost losses. Since fiscal year 2012, DOD has maintained the FCFD end-of-year account balance at $970 million—the maximum allowed by statute. To replenish the funds that were transferred out of the FCFD account, DOD transferred unobligated balances to the FCFD account from the services’ O&M and MILPERS accounts. While DOD can also replenish the FCFD account or absorb foreign currency losses in certain currencies by transferring to the FCFD account any gains experienced by the services, our analysis found that DOD did not transfer any gains into the FCFD account for fiscal years 2009 through 2016. Figure 3 shows the transfers into and out of the FCFD account and the end-of-year FCFD account balance for fiscal years 2009 through 2016. Our analysis also shows that DOD transferred funds to maintain the FCFD account at its maximum balance since 2012, despite experiencing fewer losses due to foreign currency fluctuations than it had experienced in fiscal years 2009 to 2011. Of the $1.92 billion transferred from the FCFD account to the services’ MILPERS and O&M accounts to cover losses, $464.5 million was transferred since fiscal year 2012, when DOD began maintaining its FCFD account at the maximum level. During that time, some of the services experienced foreign currency gains, while others experienced losses. For example, at the end of fiscal year 2013 the Navy reported a total realized and projected cumulative gain for its O&M and MILPERS accounts of about $98.6 million. 
In that same year, the Marine Corps reported a cumulative realized and projected loss for its O&M and MILPERS accounts of approximately $12.7 million. Had DOD not transferred unobligated funds back into the FCFD account, it would have retained a positive balance of approximately $505.5 million. However, DOD maintained the account balance at $970 million by transferring approximately $495.3 million in unobligated balances into the account. As part of its management of the FCFD account balance, DOD analyzes data on realized and projected losses as the basis for transferring funds from the FCFD account to the services’ MILPERS and O&M accounts to cover losses. However, DOD does not consider projected losses when making transfers of unobligated O&M and MILPERS balances into the FCFD account. Figure 4 below shows the FCFD account balance that DOD has maintained in relation to the transfers out of the account to cover losses. Specifically, according to the OUSD(C) official responsible for managing the FCFD account, DOD maintains the FCFD account balance at $970 million to maximize unobligated balances within the military services’ O&M and MILPERS accounts before they are canceled and are no longer available to DOD. In addition, this official stated that DOD prefers to maintain the maximum balance in case it is needed due to sudden, unfavorable swings in foreign currency exchange rates. Our review of the documentation used to make transfers into and out of the FCFD account corroborates that DOD maintains the FCFD account balance to maximize the retention of unobligated balances. Specifically, we found instances in which the documentation states that the transfers of unobligated balances into the FCFD account were made for the purpose of replenishing the account balance to the statutory limit. 
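The replenishment pattern described above can be contrasted with the analysis-based alternative in a few lines. This is an illustrative sketch, not DOD's accounting logic: the $970 million statutory cap is from the report, but the balance and projected-loss figures below are hypothetical.

```python
STATUTORY_CAP = 970.0  # FCFD statutory limit, in $ millions (from the report)

def transfer_in_to_cap(current_balance):
    """Current practice: transfer in unobligated balances to restore
    the statutory cap, regardless of projected losses."""
    return max(0.0, STATUTORY_CAP - current_balance)

def transfer_in_to_need(current_balance, projected_losses):
    """Analysis-based alternative: transfer in only what projected
    losses require beyond the balance already on hand."""
    return max(0.0, projected_losses - current_balance)

balance, projected = 600.0, 150.0  # hypothetical $ millions
print(transfer_in_to_cap(balance))              # 370.0 to reach the cap
print(transfer_in_to_need(balance, projected))  # 0.0 -- balance already covers need
```

Under the hypothetical figures, the cap-based rule pulls in $370 million of unobligated balances even though the existing balance already exceeds projected losses.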
For example, DOD transferred $89 million from the FCFD account to the Army for losses it had realized and projected in fiscal year 2014, and later transferred unobligated balances of the same amount back into the account. DOD’s documentation states that this transfer of unobligated balances was made for the purpose of replenishing the account to $970 million in order to finance estimated foreign currency losses resulting from the decline in value of the U.S. Dollar. However, the transfer to the Army already covered the realized losses and projected losses for any remaining disbursements. In other words, estimated foreign currency losses had already been accounted for at the time of the transfer to the Army. In addition, based on data reported by the Air Force, Marine Corps, and Navy, DOD had an estimated cumulative gain of about $30 million for fiscal year 2014 based on the other services’ gains and losses, which could have been transferred to the FCFD account to absorb any additional foreign currency losses elsewhere. However, DOD did not transfer those gains to the FCFD account. Similarly, based on data reported by these services, DOD experienced cumulative realized and projected gains of more than $200 million in fiscal year 2013 and about $92.6 million in fiscal year 2015, but it did not transfer any gains to the FCFD account because the account balance had already reached its maximum using transferred unobligated balances. Despite replenishing the account balance to the maximum amount for the purpose of covering additional losses, the FCFD transfers have not been made to fully offset losses in some years, further raising questions about the need to maintain the balance at the statutory cap of $970 million annually. Specifically, in 3 of the 6 years in which DOD transferred funds from the FCFD account to the services’ MILPERS and O&M accounts, DOD did not use the FCFD account to fully cover the losses that the Air Force, Marine Corps, and Navy experienced. 
In fiscal year 2011, for example, DOD’s transfers out of the FCFD account to these services covered about 88 percent of the reported MILPERS and O&M losses that these services had realized and projected to lose by the end of the fiscal year. In fiscal year 2012, FCFD transfers covered almost 72 percent of the MILPERS and O&M realized and projected losses reported by the Air Force, Marine Corps, and Navy, as of the end of the fiscal year. In fiscal year 2016, DOD FCFD transfers to these services covered approximately 55 percent of their reported MILPERS and O&M realized and projected losses by the end of the fiscal year. The OUSD(C) official we spoke with stated that FCFD transfers to cover losses begin with a request from the services, and the OUSD(C) office and the services then coordinate on the final transfer amount. In addition, some service officials told us that they try to cover their losses using each service’s available funding before reaching out for assistance from the FCFD account. Therefore, based on a service’s ability to cover the loss, it may not always request an FCFD transfer to cover the full amount of realized and projected losses. Further, according to an OUSD(C) official, the timing of a service’s request for an FCFD transfer may also affect any differences between the amount transferred and the actual losses experienced. Specifically, if a service requests a transfer early in the fiscal year based on realized and projected losses, actual losses experienced as of the end of the fiscal year may be greater than or less than the transfer amount due to foreign currency fluctuations. Using transfers of unobligated balances, DOD has maintained its FCFD account balance at the maximum level allowed by statute because it has not analyzed realized and projected losses to determine what size account balance is necessary to meet the intended purpose of the account. 
In our prior work, we have developed key questions for evaluating federal account balances that agencies may use to identify the amount of the balance necessary to maintain agency or program operations. Examining carryover balances can enhance oversight of agencies' management of federal funds. Specifically, we reported that understanding an agency's processes for estimating and managing carryover balances provides information to assess how effectively agencies anticipate program needs and ensure the most efficient use of resources. To estimate and manage carryover balances, agencies may consider such factors as future needs of the account, economic indicators, and historical data. If an agency does not have a robust strategy in place to manage carryover balances or is unable to adequately explain or support the reported carryover balance, then a more in-depth review is warranted. In those cases, balances may either fall too low to efficiently manage operations or rise to unnecessarily high levels, producing potential opportunities for those funds to be used more efficiently elsewhere. When asked about maintaining the balance at a level necessary to cover losses, rather than at the maximum level allowed by statute, the OUSD(C) official indicated that the OUSD(C) takes a cautious approach and prefers to have the additional flexibility allowed by the higher balance. Further, the official stated that it would be difficult for DOD to attempt to base its unobligated balance transfers and the FCFD account balance on analysis and evaluation, given the unpredictable nature and constant volatility of foreign currency rates. Our guidelines on evaluating carryover balances acknowledge that external events beyond an agency's control can dramatically affect carryover balances.
However, the challenges that are inherent in predicting foreign currency rates do not preclude DOD from conducting analysis to glean insight as to the appropriate size for the balance of the account and what potential opportunities for savings might exist. Specifically, our guidelines suggest that agencies would benefit from considering the sources and fiscal characteristics of an account with carryover balances. In this case, the FCFD account can receive funds from transfers of unobligated balances and realized foreign currency gains. In addition, DOD can make multiple transfers throughout a fiscal year and can transfer funds between the FCFD account and the services' O&M and MILPERS accounts simultaneously, if necessary. These characteristics of the FCFD account already provide the department with flexibility, indicating that DOD may be positioned to manage the FCFD balance in a more analytical manner based on any projected losses. Without analyzing any realized or projected losses to determine what balance may be needed to meet the FCFD account's intended purpose, the account balance may be kept at a higher level than is necessary. As a result, although an exact amount is unknown, DOD may be maintaining balances in the FCFD account that are hundreds of millions of dollars higher than needed to cover any losses it has experienced, and these funds may have been more efficiently used in supporting other defense activities or returned to Treasury after the account is canceled by law. DOD prepares financial reports to monitor the status of its foreign currency funds, but some of DOD's financial reporting on foreign currency fluctuations for fiscal years 2009 through 2016 is incomplete and inaccurate. DOD's Financial Management Regulation establishes reporting requirements specifically for tracking all transactions that increase or decrease the FCFD.
In accordance with that guidance, the services provide data from their accounting systems to the Defense Finance and Accounting Service to generate reports that are used as a tool with which the services and OUSD(C) can monitor how they are expending funds appropriated for overseas expenditures. For O&M appropriations, the Foreign Currency Fluctuations, Defense (O&M) report provides data on foreign currency gains and losses for each service, by currency, including data on projected gains or losses for any remaining obligations that have not yet been disbursed at the time of the report. The Foreign Currency Fluctuations, Defense Report (MILPERS) provides similar information for the MILPERS appropriation. We reviewed end-of-year Foreign Currency Fluctuations, Defense (O&M) and (MILPERS) reports for fiscal years 2009 through 2016 and found that some of the reporting for O&M was incomplete and inaccurate, which hampers the quality of information available to manage the FCFD account. For instance, we found the following: Incomplete data in the Foreign Currency Fluctuations Defense (O&M) reports: In our review of the end-of-year Foreign Currency Fluctuations Defense (O&M) and (MILPERS) reports we observed several instances of incomplete data in the O&M reports, and these affect managers’ ability to make sound decisions to manage foreign currency gains and losses. First, for the Navy, we found that the report data showed, for multiple currencies across fiscal years 2011 through 2016, values in the realized variance column, indicating that the service had experienced a gain or loss in a particular currency; however, the reports showed values of zero in other columns that are necessary for calculating the gain or loss. Second, the Air Force data for the Turkey Lira, in fiscal year 2012, showed a gain or loss without any data indicating what would have driven the gain or loss. 
Third, in one instance, Marine Corps data on obligations for fiscal year 2011 were missing from the end-of-year reports until 2014. Missing obligation data for these end-of-year reports indicate a limitation in using these reports for tracking actual gains and losses. Inaccurate data in the Army's Foreign Currency Fluctuations Defense (O&M) reports: The Army's Foreign Currency Fluctuations Defense (O&M) reports are inaccurate and cannot be used to reliably track gains or losses, and this hinders managers from making sound decisions regarding the Army's foreign currency gains and losses. The reports are inaccurate in that the Army's accounting system charges disbursements to the current fiscal year appropriation rather than to the fiscal year appropriation that incurred the obligation, as required by the Financial Management Regulation. According to officials from the Army Budget Office, the Army designed its General Fund Enterprise Business System (GFEBS) to record disbursements to the current fiscal year based on differing interpretations of a previous version of the Regulation. Because the Army is not recording its disbursements to the fiscal year appropriation as the other services are, Army data are inaccurate and cannot be used by the OUSD(C) official responsible for overseeing DOD's foreign currency program to track the Army's foreign currency transactions and maintain full visibility of DOD's overall gains and losses in a given fiscal year. Army Budget Office officials acknowledged that the Army will need to modify its system to record disbursements consistent with Financial Management Regulation guidance, but it has not developed a plan or timeline for doing so. Without accurate reporting of the Army's foreign currency transactions, DOD lacks information for tracking and helping to manage the Army's foreign currency gains and losses.
DOD's Financial Management Regulation specifies the data that must be included in the Foreign Currency Fluctuations Defense (O&M) and (MILPERS) reports and the roles and responsibilities of the services as well as the Defense Finance and Accounting Service for ensuring the quality of those data. However, we identified data issues in our analysis that indicate that quality is inconsistent. For example, officials from the Navy stated that they had observed the incomplete data for some currencies and speculated that the incompleteness was attributable to data entry errors. Similarly, according to an OUSD(C) official, the Defense Finance and Accounting Service is notified when discrepancies are found in the reports and the Defense Finance and Accounting Service officials coordinate with the services to correct the data. However, neither Navy officials nor Defense Finance and Accounting Service officials have corrected the data. Although DOD's Financial Management Regulation specifies the data that are to be included, as well as roles and responsibilities of the services and the Defense Finance and Accounting Service, it does not identify who is responsible for correcting erroneous or missing data. According to an OUSD(C) official, correcting reporting issues is an area that OUSD(C), the Defense Finance and Accounting Service, and the services can improve on, and they would benefit from guidance in the Financial Management Regulation that establishes the steps that should be taken for making such corrections. Further, GAO's Standards for Internal Control in the Federal Government and the Federal Accounting Standards Advisory Board's Handbook of Federal Accounting Standards and Other Pronouncements, as Amended, both establish the importance of using reliable and complete information for making decisions.
In addition, DOD's Financial Management Regulation establishes responsibilities for both the DOD components and the Defense Finance and Accounting Service to establish appropriate internal controls to ensure that financial reporting data are complete, accurate, and supportable, in order for managers to make sound decisions and exercise proper stewardship over these resources. Effectively managing foreign currency gains and losses as well as any projected gains or losses for any remaining obligations that have not yet been liquidated through disbursement requires complete and accurate data. OUSD(C) and service officials recognize the importance of reliable data, as well as the need to take steps to improve the quality of the foreign currency gains and losses data. Without OUSD(C) establishing guidance to ensure that the Foreign Currency Fluctuations Defense (O&M) report data that track foreign currency gains and losses are complete, DOD and Congress do not have information to make sound decisions and exercise proper stewardship over resources due to foreign currency fluctuations. Furthermore, until the Army establishes a plan and timeline for modifying its system to record foreign currency disbursements in an accurate manner, the Army and DOD will lack quality information for tracking and helping to manage the Army's and DOD's foreign currency gains and losses. Congress provides DOD with a significant amount of funding each year to purchase goods and services overseas and to pay service-members stationed abroad. DOD develops and can revise foreign currency budget rates to determine its funding needs and calculate any gains or losses that result from DOD's overseas expenditures. The Army has estimated potential cost savings that would result from more consistently selecting a more cost-effective foreign currency rate for making disbursements to liquidate its overseas O&M obligations.
However, DOD has not fully determined whether additional cost-saving opportunities exist because the services have not reviewed the rates used for foreign currency disbursements. Absent a review of the foreign currency rates the services are using at disbursement, including whether cost-saving opportunities exist by more consistently selecting cost-effective foreign currency rates, DOD risks paying more than would otherwise be required. Further, while DOD has used the FCFD account to cover losses that resulted from foreign currency fluctuations, it has not managed the FCFD account balance by basing the transfers of unobligated balances into the FCFD account on an analysis of realized and projected losses. Without basing its FCFD account balance on such analyses, DOD may be maintaining balances in the FCFD account that are hundreds of millions of dollars higher than needed to cover any losses it has experienced, and these amounts may have been more efficiently used in supporting other defense activities or ultimately returned to Treasury, once expired. Moreover, DOD has not established guidance and other procedures to ensure that complete and accurate data are included in financial reporting on foreign currency funds, and this limits the quality of information available to effectively manage the FCFD account. We are making the following four recommendations to DOD: The Under Secretary of Defense (Comptroller), in coordination with the U.S. Army, Air Force, Navy, and Marine Corps, should conduct a review of the foreign currency rates used at disbursement to determine whether cost-saving opportunities exist by more consistently selecting cost-effective rates at disbursement.
(Recommendation 1) The Under Secretary of Defense (Comptroller) should analyze realized and projected losses to determine the necessary size of the FCFD account balance and use the results of this analysis as the basis for transfers of unobligated balances to the account. (Recommendation 2) The Under Secretary of Defense (Comptroller) should revise the Financial Management Regulation to include guidance on ensuring that data are complete and accurate, including assignment of responsibility for correcting erroneous data in its Foreign Currency Fluctuations Defense (O&M) reports. (Recommendation 3) The Secretary of the Army should develop a plan with timelines for implementing changes to its General Fund Enterprise Business System to accurately record its disbursements, consistent with DOD Financial Management Regulation guidance. (Recommendation 4) We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix II, DOD concurred with our first, third, and fourth recommendations and outlined its plan to address them. DOD partially concurred with our second recommendation that the Under Secretary of Defense (Comptroller) analyze realized and projected losses to determine the necessary size of the FCFD account balance and use the results of the analysis as the basis for transfers of unobligated balances to the account. DOD also provided technical comments, which we incorporated in the report, where appropriate. In partially concurring with our second recommendation, DOD stated that projecting foreign currency gains or losses can be difficult given that foreign currency rates can be volatile due to various factors, such as trade balances, money supply, and national income, as well as arbitrary disturbances that affect foreign currency rates that cannot be predicted or forecasted, such as the departure of the United Kingdom from the European Union. 
DOD noted that because of the risk and volatility associated with foreign currency rates, the Congress established the FCFD account. We agree that forecasting foreign currency rates is challenging due to market volatility and include examples in our report of the effect of foreign currency rate fluctuations on DOD’s planned foreign currency obligations. Our report also describes the relationship between gains and losses and foreign currency fluctuations, and the movement of funds from the FCFD account to offset any losses. As our report also discusses, DOD calculates actual and projected losses due to foreign currency fluctuations and uses those projections as the basis, at least in part, for any transfers out of the FCFD account to cover losses experienced in the military services’ O&M and MILPERS appropriations. However, our report also notes that DOD does not consider its calculations of actual and future projected losses when making transfers of unobligated O&M and MILPERS balances to replenish the FCFD account. Instead, since fiscal year 2012, DOD has kept the FCFD account balance at the maximum level allowed by statute by using unobligated balances before they are canceled and are no longer available to DOD, regardless of whether the funds were needed in the account to offset any projected losses. DOD’s comments also stated that projecting gains or losses for foreign currency to determine the size of the FCFD account opens the door to greater uncertainty and risk at a time when the department is working to rebuild readiness and implement the National Defense Strategy. Our report describes the characteristics of the FCFD account that provide DOD with flexibility to manage market volatility, thereby helping to address uncertainty and reduce risk. For example, DOD can make multiple transfers of funds to the FCFD account throughout a fiscal year in response to unforeseen foreign currency fluctuations. 
The FCFD account can also receive funds from transfers of actual foreign currency gains and/or unobligated balances. As we also noted, DOD made use of its authority to transfer expired unobligated MILPERS and O&M amounts into the FCFD account in the event that actual losses exceeded the projected amounts and additional transfers were deemed necessary. We continue to believe that by analyzing actual and projected losses and basing the transfer of any unobligated balances on these losses, DOD would be better positioned to determine the size of the FCFD account balance that is necessary to meet its intended purpose. Further, such analyses would provide opportunities to more efficiently use unobligated balances for other defense activities or return the balances to Treasury. We are sending copies of this report to the Secretary of Defense, the Under Secretary of Defense (Comptroller), the Secretary of the Army, the Secretary of the Navy, the Secretary of the Air Force, the Commandant of the Marine Corps, and appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To describe the Department of Defense’s (DOD) revised foreign currency budget rates since 2009 and the relationship between the revised budget rates and DOD’s projected Operation and Maintenance (O&M) and Military Personnel (MILPERS) funding needs, we reviewed DOD’s foreign currency budget rates for the period of fiscal years 2009 through 2017, and we identified any years during which DOD revised the initial budget rates. 
We compared DOD's initial foreign currency budget rates and revised foreign currency budget rates with rates published by the U.S. Treasury Department (Treasury) for fiscal years 2009 through 2017. This period corresponded with data available to us on DOD's initial and revised rates and allowed for use of the most current data available, since DOD had not yet decided whether to revise the fiscal year 2018 budget rates while we were conducting our audit work. We chose rates published by Treasury for this comparison because Treasury has the sole authority to establish for all foreign currencies or credits the exchange rates at which such currencies are to be reported by all agencies of the government. Because Treasury rates are issued quarterly, we averaged Treasury's first and second quarter rates for each currency and compared the Treasury average with DOD's initial budget rates. Similarly, we computed an average of the third and fourth quarter Treasury rates for each currency and compared them with the DOD initial or revised budget rates, where applicable. These comparisons are meant to show the difference between DOD's budget rates and Treasury rates for the first 6 months of the fiscal year, and the difference between DOD's revised exchange rates and Treasury rates for the last 6 months of the fiscal year. Further, we analyzed the extent to which DOD's budget rates were within 10 percent of Treasury rates during these same years. We chose 10 percent as the basis for our analysis because Treasury's guidance states that amendments to the quarterly rates will be published during the quarter to reflect significant changes in the quarterly data, such as rate changes of 10 percent or more. Additionally, to understand the effect that revising the budget rates had on DOD's O&M and MILPERS funding estimates and on potential gains or losses due to foreign currency fluctuations, we used a three-step approach.
First, we identified the amount of O&M and MILPERS funds DOD requested for each currency. We converted the U.S. Dollars requested to the total amount of foreign currency needed by multiplying the U.S. Dollars requested by DOD’s initial budget rate. Second, we determined the total amount of U.S. Dollars required using the revised rates by dividing the total amount of foreign currency needed using DOD’s initial budget rate by DOD’s revised budget rate. We used this same approach to determine the total amount of U.S. Dollars required using the average Treasury rates. Third, we computed the differences in DOD’s O&M and MILPERS foreign currency funding needs by subtracting the U.S. Dollars required to meet its foreign currency needs based on the average Treasury rates from the amounts required based on DOD’s initial budget rates and DOD’s revised budget rates, respectively. We discussed further with officials from the Office of the Under Secretary of Defense, Comptroller (OUSD(C)) the factors considered in revising the rates and whether those factors are communicated within and outside of the department. To evaluate the extent to which DOD has taken steps to reduce costs in selecting foreign currency rates at which to make disbursements and determine whether opportunities exist to gain additional savings, we reviewed accounting standards and any guidelines that exist regarding disbursements and calculations of foreign currency gains and losses, such as DOD’s Financial Management Regulation 7000.14-R, which calls for the use of prevailing foreign currency rates to make disbursements. We also discussed with agency officials how those guidelines are being carried out, and whether DOD or the services have developed guidance that instructs the services in selecting rates used for disbursements in foreign currencies. 
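The rate comparison and the three-step funding calculation described above can be sketched as follows. The rates and dollar amounts are hypothetical, not Treasury or DOD figures, and the functions are only one plausible reading of the methodology (budget rates taken as units of foreign currency per U.S. Dollar).

```python
def within_10_percent(dod_rate, treasury_quarterly_rates):
    """Average the quarterly Treasury rates and check whether the DOD
    budget rate falls within 10 percent of that average."""
    avg = sum(treasury_quarterly_rates) / len(treasury_quarterly_rates)
    return abs(dod_rate - avg) / avg <= 0.10

def funding_difference(usd_requested, initial_rate, comparison_rate):
    """The three-step approach: (1) convert dollars requested into total
    foreign currency needed at the initial budget rate; (2) compute the
    dollars required to obtain that foreign currency at the comparison
    rate; (3) return the difference in U.S. Dollar funding needs."""
    foreign_needed = usd_requested * initial_rate      # step 1
    usd_required = foreign_needed / comparison_rate    # step 2
    return usd_required - usd_requested                # step 3

# Hypothetical: quarterly Treasury rates of 100 and 110 average to 105,
# so a 105 budget rate is within 10 percent.
print(within_10_percent(105.0, [100.0, 110.0]))  # True

# Hypothetical: $100M requested at an initial rate of 0.90, re-priced at
# 0.95 -- fewer dollars are needed when the dollar strengthens.
print(round(funding_difference(100.0, 0.90, 0.95), 2))  # -5.26
```

A negative difference indicates a funding need lower than the initial request (a potential gain); a positive difference indicates a shortfall (a potential loss).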
Additionally, we examined a non-generalizable selection of data for DOD disbursements made during the months of June and July 2017 from Treasury's International Treasury Service (ITS.gov) system to determine which rates DOD used during this period and what savings might be achievable from using alternate rates. We chose data from those 2 months because they were the most recent data available on disbursements at the time Treasury provided the data for our review. We also discussed with officials from OUSD(C) and the services any analysis and ongoing efforts to transition to more cost-effective rates, including savings that may result. To assess the extent to which DOD has effectively managed the Foreign Currency Fluctuations, Defense (FCFD) account to cover losses, and maintained quality information to manage these funds, we analyzed DOD data for fiscal years 2009 through 2016 on foreign currency gains and losses reported by each of the services in their Foreign Currency Fluctuations, Defense (O&M) and (MILPERS) reports; movements of funds between the FCFD account and the services' O&M and MILPERS accounts; and the end-of-year FCFD account balances. We chose this time period in order to capture years in which both gains and losses were experienced, and for which DOD had complete data on gains and losses, fund transfers, and end-of-year balances for the FCFD account. Because the Army charges disbursements to the current fiscal year appropriation instead of the fiscal year appropriation that incurred the obligation, we requested that the Army adjust its reported data on foreign currency gains and losses and provide information consistent with how the other services report them, and with DOD's Financial Management Regulation. However, the Army was unable to provide us with data that were consistent with what was provided by the other services at the time of our review.
We, therefore, were unable to use Army data for purposes of comparison with data provided by the other services. We compared the end-of-year FCFD account balances and the use of the account with guidelines established in our prior work on the importance of examining unobligated balances. Additionally, we reviewed and analyzed DOD financial reports on foreign currency gains or losses and compared the reports, including any identified discrepancies, against best practices and standards on accurate reporting and maintaining quality information, such as those in GAO’s Standards for Internal Control in the Federal Government, and the Federal Accounting Standards Advisory Board’s Handbook of Federal Accounting Standards and Other Pronouncements, as Amended. To determine the reliability of the data used in addressing these objectives, we analyzed DOD and Treasury foreign currency rates, data on DOD foreign currency disbursements, and DOD financial reporting data on foreign currency gains and losses to identify any missing or inaccurate information, and we discussed with agency officials any identified abnormalities and how the information was extracted from systems, when appropriate. We found the data to be sufficiently reliable for the purposes of our reporting objectives, with the exception of the financial reporting on financial gains and losses. Specifically, based on problems with the completeness and accuracy of DOD’s financial reporting on foreign currency gains and losses, we found that these data were not sufficiently reliable for the purpose of computing exact totals for the gains and losses DOD experienced. However, because DOD uses these data as the basis for decisions related to management of the FCFD account, we included the data in our analysis to provide insight into the scope of gains and losses experienced. 
We also spoke with OUSD(C), military service, and Treasury officials regarding the process and systems used to input the reviewed data and generate the foreign currency reports we reviewed. We conducted this performance audit from February 2017 to April 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Matt Ullengren, Assistant Director; and Tulsi Bhojwani, Justin Bolivar, Carol Bray, Amie Lesser, Kelly Liptan, Felicia Lopez, Leah Nash, Randy Neice, Jacqueline McColl, Mike Silver, Roger Stoltz, Susan Tindall, John Trubey, Elaine Vaurio, and Cheryl Weissman made key contributions to this report.
|
DOD requested about $60 billion for fiscal years 2009 through 2017 to purchase goods and services overseas and reimburse servicemembers for costs incurred while stationed abroad. DOD uses foreign currency exchange rates to budget and pay (that is, disburse amounts) for these expenses. It also manages the FCFD account to mitigate a loss in buying power that results from foreign currency rate changes. GAO was asked to examine DOD's processes to budget for and manage foreign currency fluctuations. This report (1) describes DOD's revision of its foreign currency budget rates since 2009 and the relationship between the revised rates and projected O&M and MILPERS funding needs; (2) evaluates the extent to which DOD has taken steps to reduce costs in selecting foreign currency rates to disburse funds to liquidate O&M obligations, and determined whether opportunities exist to gain additional savings; and (3) assesses the extent to which DOD has effectively managed the FCFD account balance. GAO analyzed data on foreign currency rates, DOD financial management regulations, a non-generalizable sample of foreign currency disbursement data, and FCFD account balances. The Department of Defense (DOD) revised its foreign currency exchange rates ("budget rates") during fiscal years 2014 through 2016 for each of the nine foreign currencies it uses to develop its Operation and Maintenance (O&M) and Military Personnel (MILPERS) budget requests. These revisions decreased DOD's projected O&M and MILPERS funding needs. DOD's revision of the budget rates during these years also decreased the expected gains (that is, buying power) that would have resulted from an increase in the strength of the U.S. Dollar relative to other foreign currencies. DOD did not revise its budget rates in fiscal years 2009 through 2013. For fiscal year 2017, DOD changed its methodology for producing budget rates, resulting in rates that were more closely aligned with market rates. 
According to officials, that change made it unnecessary to revise the budget rates during the fiscal year. DOD has taken some steps to reduce costs in selecting foreign currency rates used to pay (that is, disburse amounts) for goods and services, but DOD has not fully determined whether opportunities exist to achieve additional savings. The Army has estimated potential savings of up to $10 million annually by using a foreign currency rate available 3 days in advance of paying for goods or services rather than a more costly rate available up to 5 days in advance. The Army has converted to the use of a 3-day advance rate. GAO's analysis suggests that DOD could achieve cost savings if the services reviewed and consistently selected the most cost-effective foreign currency rates when paying for their goods and services. Absent such a review, DOD is at risk of paying more than necessary to conduct its transactions. DOD used the Foreign Currency Fluctuations, Defense (FCFD) account to cover losses (that is, less buying power) due to unfavorable foreign currency fluctuations in 6 of the 8 years GAO reviewed. Since 2012, DOD has maintained the FCFD account balance at the statutory limit of $970 million, largely by transferring unobligated balances from certain DOD accounts into the FCFD account before they are canceled. However, DOD has not identified the appropriate FCFD account balance needed to maintain program operations by routinely analyzing projected losses and basing any transfers into the account on those expected losses. Thus, DOD may be maintaining balances that are hundreds of millions of dollars higher than needed and that could have been used for other purposes or returned to the Treasury Department (see figure). 
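The rate-selection savings described above amount to comparing the dollar cost of the same foreign-currency obligations under alternative rates. A minimal sketch, with all figures hypothetical (rates are foreign currency units per dollar, so a higher rate means each dollar buys more and the disbursement costs fewer dollars):

```python
# Hypothetical sketch: dollar cost of the same foreign-currency disbursements
# under two exchange rates, e.g., a rate locked in 5 days in advance versus a
# rate available 3 days in advance that is closer to the prevailing market.

disbursements_fc = [5_000_000, 12_000_000, 3_500_000]  # amounts owed, in foreign currency

rate_5day = 0.88   # hypothetical rate available 5 days in advance
rate_3day = 0.90   # hypothetical, more favorable 3-day advance rate

cost_5day = sum(amount / rate_5day for amount in disbursements_fc)
cost_3day = sum(amount / rate_3day for amount in disbursements_fc)

savings = cost_5day - cost_3day   # dollars saved by selecting the 3-day rate
```

The actual savings depend on which way rates move between the two lock-in dates; the sketch only shows the mechanics of the comparison, not a guaranteed outcome.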
GAO is making four recommendations, including that DOD review opportunities to achieve cost savings by more consistently selecting the most cost-effective foreign currency rates used for the payment of goods and services, and analyze projected losses to manage the FCFD account balance. DOD generally concurred with the recommendations.
|
Scientific research on and projections of the changes taking place in the Arctic vary, but there is a general consensus that the Arctic is warming and that its sea ice is diminishing. For example, scientists at the National Snow and Ice Data Center reported that for 2018 the minimum amount of sea ice coverage in the Arctic—typically occurring in September each year—was the sixth lowest in the satellite record and 656,000 square miles below the mean for the 1981 through 2010 time frame. Further, the scientists found that the 12 lowest recordings of September ice coverage on satellite record have all occurred in the past 12 years. Figure 1 shows the sea ice coverage (i.e., extent) in the Arctic for September 2018 compared with the median ice edge for 1981 through 2010. While much of the Arctic Ocean remains ice-covered for the majority of the year, most scientific estimates predict there will be a continued decrease in sea ice coverage in the Arctic Ocean in the summer sometime in the next 20 to 40 years. According to the Navy's Arctic Roadmap for 2014 to 2030, while there may be less sea ice there in the future, the ice that remains will continue to be a challenge to those operating in the area. Most commercial ship activity in the Arctic is regional—shipping into or out of the Arctic, mainly in support of commercial activity—not trans-Arctic. However, according to the official Navy estimate from 2013, the decreasing coverage of sea ice will result in more open water, allowing increased maritime activity along three trans-Arctic routes from 2012 through 2030: the Northern Sea Route, the Northwest Passage, and the Trans-Polar Route (see fig. 2). This development could, for example, reduce by thousands of miles and by several days of travel the shipping of goods between countries in Asia and North America. Increased economic activity in the Arctic could potentially increase the need for military capabilities there to safeguard U.S. interests. 
For example, estimates of significant oil, gas, and mineral deposits in the Arctic have increased the interest in exploration opportunities in the region. These resources include an estimated 13 percent of the world’s undiscovered oil; 30 percent of the world’s undiscovered gas; and approximately $1 trillion of minerals including gold, zinc, nickel, and platinum. According to information provided by the Department of State, the vast majority of these resources are within the undisputed continental shelf of the respective coastal states. Officials from the Department of State stated that disputed claims related to the small remaining portions of the Arctic seabed may be addressed within the international framework established by the United Nations Convention on the Law of the Sea. However, as we reported in 2015, even with the changing climate and growing interest in the region, several enduring characteristics will continue to provide challenges to surface navigation in the Arctic for the foreseeable future. These include large amounts of winter ice and increased movement of ice from spring to fall. Increased movement of sea ice makes its location less predictable, a situation that increases the risk that ships can become trapped or damaged by ice impacts. In addition, the lack of infrastructure in the Arctic region affects the reliability of shipping through the area. Economic factors such as risk costs, as well as changes in the shipping market resulting from the Panama Canal expansion may also affect the amount of shipping along these routes. As figure 3 shows, even as the seasonal ice decreases over time, the Navy has projected that the Arctic will remain impassable for most commercial ships for most of the year from 2012 through 2030. These factors combined are likely to affect the pace at which commercial activity will increase. We have previously examined emerging issues and challenges for the United States in the Arctic. 
See figure 4 for a timeline of our prior reports related to Arctic issues. We also include a list of our prior work related to the Arctic at the end of this report. The Navy’s June 2018 report aligns with DOD’s assessments that the Arctic threat level remains low and that DOD has the capabilities required to execute its 2016 DOD Arctic Strategy. Specifically, the June 2018 report and the information it provides for each of the reporting elements discusses how the department can execute the 2016 DOD Arctic Strategy. The strategy contains two overarching objectives: to (1) ensure security, support safety, and promote defense cooperation and (2) prepare to respond to a wide range of challenges and contingencies to maintain stability in the region. These objectives reflect DOD’s assessment that there is a low level of military threat in the Arctic, as well as the stated commitment of the Arctic nations to work within a common framework of diplomatic engagement. In the strategy, DOD identifies the types of investments that will need to be made over time as activity in the region increases; however, DOD also discusses the importance of assessing the needs in the Arctic and of balancing potential Arctic-specific capabilities investments against other national security priorities and fiscal realities. The Arctic threat assessment briefings we received from officials at the U.S. Northern Command and the Office of Naval Intelligence also reflected the low risk for conflict in the Arctic referenced in the Navy’s June 2018 report. Below, we summarize the Navy’s response to each reporting element, and our evaluation of whether the response aligns with current assessments of Arctic threat levels and capabilities required to execute DOD’s 2016 Arctic Strategy. Reporting Element One: The Navy was required to report on the current naval capabilities of the Department of Defense in the Arctic region, with a particular emphasis on surface capabilities. 
The June 2018 report provides information on this required element, with the Navy stating that it relies on the submarine force as well as on aviation assets and surface operations when necessary to operate in the Arctic. These capabilities in the Arctic region are consistent with those identified in The United States Navy Arctic Roadmap for 2014 to 2030 to execute the 2016 DOD Arctic Strategy, and as corroborated in our discussions with U.S. Northern Command and Navy officials. In addition, the Navy discusses the significant limitations of its surface ships for Arctic operations in the June 2018 report. The limitations identified are consistent with information contained in the U.S. Navy Cold Weather Handbook for Surface Ships and with information we discussed with Naval Sea Systems Command officials who oversee modifications to the fleet and the acquisition of new ships. For example, Navy officials told us that top-side icing has detrimental effects on ships. As sea spray accumulates on a ship deck and freezes, a ship can lose some of the capabilities of its external sensors and radars and a ship’s stability in the water decreases as the ship’s center of gravity becomes top heavy. Navy and Coast Guard officials told us that while the Coast Guard regularly operates in the Arctic given its ice-breaking and maritime safety missions, among others, Navy surface ships have not been designed to maneuver and operate in icy waters. Although some of the Navy’s T-class ships have some capability to operate in light or broken first-year ice due to the inherent strength of their hulls, traditional surface combatant ships (e.g., Cruisers, Destroyers, or Frigates) are not designed to operate in icy waters. Reporting Element Two: The Navy was required to report on any gaps that exist between the current naval capabilities and the ability of the department to fully execute its updated strategy for the Arctic region. 
The June 2018 report provides information on this required element, with the Navy stating that the department can execute the 2016 DOD Arctic Strategy with current naval capabilities. The June 2018 report is similarly aligned with Navy assessments of Arctic capabilities and gaps contained in its plan, The United States Navy Arctic Roadmap for 2014 to 2030, that the Office of the Chief of Naval Operations issued in February 2014. This plan provides guidance to prepare the Navy to respond effectively to future Arctic Region contingencies, delineates the Navy's leadership role, and articulates the Navy's support to achieve national priorities in the region. At the time of our review, DOD was in the process of drafting another report—on DOD Arctic capability and resource gaps—as required by section 1054 of the National Defense Authorization Act for Fiscal Year 2018. In addition, according to Navy officials, the Navy was also drafting its Arctic Strategic Outlook, which is a follow-up to The United States Navy Arctic Roadmap for 2014 to 2030. According to DOD and Navy officials, both forthcoming reports will focus on contextualizing Arctic needs within the framework of the 2018 National Defense Strategy. Because these efforts were not complete at the time of our review, we were unable to determine whether the Navy's June 2018 report aligns with these assessments. Reporting Element Three: The Navy was required to report on any gaps in the current naval capabilities that require ice-hardening of existing vessels or the construction of new vessels to preserve freedom of navigation in the Arctic region whenever and wherever necessary. The June 2018 report provides information on this required element, with the Navy stating that there are currently no validated capability gaps that require the Navy to ice-harden existing vessels or construct new ice-capable vessels to preserve freedom of navigation in the Arctic. 
Furthermore, the Navy stated that its current assets are sufficient to execute the 2016 DOD Arctic Strategy. As noted above, freedom of navigation operations are undertaken to, among other things, promote maritime stability and to challenge excessive sovereignty claims. In addition, DOD officials stated that the United States already has options other than Navy surface ships for demonstrating the United States' freedom to operate in the Arctic, including using Coast Guard vessels, Navy submarines, or military aircraft. Reporting Elements Four and Five: The Navy was required to provide an analysis and recommendation of which Navy vessels could be ice-hardened to effectively preserve freedom of navigation in the Arctic region when and where necessary, in all seasons and weather conditions, and an analysis of any cost increases or schedule adjustments that may result from ice-hardening existing or new Navy vessels. The June 2018 report provides some information on these required elements, with the Navy stating that it is not pursuing ice-hardening or the winterization of surface ships. According to the Navy, because there is no specific capability requirement for the Navy to ice-harden ships, the report does not list or name potential ice-hardening candidates among existing vessels or provide cost or schedule estimates for ice-hardening vessels. Officials with the Naval Sea Systems Command, which develops cost and schedule estimates for ship modifications and new construction, told us that they had not conducted life-cycle cost studies for ice-hardening existing ships because there is no capability requirement for an ice-hardened ship and, therefore, no ship design on which to base such a study or estimate. 
Furthermore, the June 2018 report states that the Navy is leveraging cooperative research with international partner-nations such as Canada, Denmark, Finland, and Norway, to better understand how other Arctic nations are meeting additional requirements for Arctic operations. Navy officials from the Naval Sea Systems Command stated that ships built to operate in ice and extreme cold environments have unique features, including stronger, thicker construction of all portions of the hull that would come into contact with ice; different hull form design; redesigned propellers constructed of higher than traditional strength material; increased strength ship parts, such as rudders and seawater intakes and discharges designed to resist the formation or accumulation of ice; and more powerful heating and ventilation to accommodate sustained operations in extreme cold environments, among other things. They also noted that research completed to date has advanced the Navy’s knowledge in several of these areas including hull form and propeller design. Navy officials estimated that a new ship design might require 20 years to reach initial operational capability. They noted the process might take only 10 years if the Navy can leverage an ongoing program, such as the DDG-51 Class program. Navy officials cautioned that the combination of features that enable ice-capable ships to sustain operating in extreme cold environments could compromise other performance areas such as speed, range, and ship motion. Officials told us that this would add to the Navy’s already strained efforts to maintain existing global naval presence requirements. Although the June 2018 report did not discuss any cost and schedule adjustments that might arise from ice-hardening or new ship construction, we have previously reported that the Navy has faced challenges meeting its shipbuilding cost, schedule, and performance goals over the past decade. 
Specifically, we found that the 11 lead ships most recently delivered to the Navy cost $8 billion more to construct than initially budgeted. Navy officials stated that the Navy contractor construction yards currently lack expertise in the design for construction of winterized, ice-capable surface combatant and amphibious warfare ships. Accordingly, ice-hardening and winterization design practices could introduce cost and schedule risk, challenging the execution of a new-construction shipbuilding program for an ice-capable ship. If the Navy executes this potential program without the requisite knowledge at key points, it could be at risk of the cost and schedule growth that we have seen in recent Navy shipbuilding programs. The Navy has faced these challenges in part because the department has proceeded with construction prior to completing technology development and ship design. We have found that successful shipbuilding programs are based on sound business cases, starting with the lead ship, and on the attainment of critical levels of knowledge at key points in the process prior to making significant investments. Navy officials said that the Navy does not currently have a specific capability requirement for ice-hardening existing vessels or for the construction of new ones, and stated that the Navy or Joint Force is unlikely to produce such a requirement in the near term. Navy officials told us that the Navy will continue to use DOD's established process, the Joint Capabilities Integration and Development System (JCIDS), which governs the department's requirements process, to assess Arctic-related capability requirements in the near and long term (see fig. 5). All DOD components use the JCIDS process or variations of the process within their organizations to identify, assess, validate, and prioritize joint military requirements. 
Before starting the JCIDS process, the military services, combatant commanders, and other DOD components conduct capabilities-based assessments or other studies to assess capability requirements and associated capability gaps and the associated risks. In October 2017, the Joint Requirements Oversight Council (JROC) validated U.S. Northern Command’s initial capabilities document identifying three gaps in the ability to exercise/deploy, position, and conduct deterrence/decisive operations in ice-diminished Arctic waters. At the time of our review, the JROC had reviewed and validated the U.S. Northern Command’s Arctic initial capabilities document and designated it for further study by the Navy. The validation of an initial capabilities document by the JROC is an early part of the JCIDS process, and informs updates to capability requirement documents related to specific materiel and nonmateriel capability solutions to be pursued. A Navy official stated that the capability gaps identified in the U.S. Northern Command’s validated initial capabilities document will now compete for resources with other issues designated for study across the Navy. According to a Navy official, whenever the Navy initiates a study, this triggers the analysis of alternatives phase of the JCIDS process. Under this process, each alternative would need to be specifically evaluated for its costs and benefits. DOD officials noted that there are several analytical steps in the JCIDS process during which potential solutions for any identified gaps are analyzed. They told us that potential solutions might also include alternatives other than ice-hardening or new ship construction, such as adding capabilities to Coast Guard ships or partnering with allies to achieve common strategic goals in the Arctic. Even as the seasonal ice decreases over time, according to Navy officials, the Arctic will remain impassable for most commercial ships for most of the year. 
For these reasons, projections of increased Arctic sea activity remain uncertain. DOD, U.S. Northern Command, Navy, and Coast Guard officials told us that even as Arctic maritime activity is expected to increase, several enduring characteristics will continue to provide challenges to surface navigation in the Arctic for the foreseeable future. These challenges include large amounts of winter ice and increased movement of ice from spring to fall. As mentioned earlier, the increased movement of sea ice makes its location less predictable, a situation that is likely to increase the risk that ships can become trapped or damaged by ice impacts. Coast Guard officials noted that a challenging environment like the Arctic may result in a higher likelihood of incidents occurring. Further, search and rescue operations in response to incidents are riskier to execute there than in non-polar environments. In addition, the lack of infrastructure and logistical support in the Arctic affects maritime activities through that region. We are not making any recommendations in this report. We provided a draft of our report to DOD, the Department of Homeland Security, and the Department of State for comment. DOD, the Department of Homeland Security, and the Department of State provided technical comments, which we incorporated into this report as appropriate. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretary of Defense, the Secretary of State, and the Secretary of Homeland Security. In addition, this report will be available at no charge on our website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3489 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. 
In addition to the contact named above, Suzanne Wren (Assistant Director), Delia Zee (Analyst-in-Charge), John Beauchamp, Mae Jones, Amie Lesser, Ned Malone, and Shahrzad Nikoo made key contributions to this report. Coast Guard Acquisitions: Polar Icebreaker Program Needs to Address Risks before Committing Resources. GAO-18-600. Washington, D.C.: September 4, 2018. Navy Shipbuilding: Past Performance Provides Valuable Lessons for Future Investments. GAO-18-238SP. Washington, D.C.: June 6, 2018. Coast Guard Acquisitions: Status of Coast Guard’s Heavy Polar Icebreaker Acquisition. GAO-18-385R. Washington, D.C.: April 13, 2018. Coast Guard: Status of Polar Icebreaking Fleet Capability and Recapitalization Plan. GAO-17-698R. Washington, D.C.: September 25, 2017. High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017. Arctic Planning: DOD Expects to Play a Supporting Role to Other Federal Agencies and Has Efforts Under Way to Address Capability Needs and Update Plans. GAO-15-566. Washington, D.C.: June 19, 2015. Climate Change Adaptation: DOD Can Improve Infrastructure Planning and Processes to Better Account for Potential Impacts. GAO-14-446. Washington, D.C.: May 30, 2014. Arctic Issues: Better Direction and Management of Voluntary Recommendations Could Enhance U.S. Arctic Council Participation. GAO-14-435. Washington, D.C.: May 16, 2014. Maritime Infrastructure: Key Issues Related to Commercial Activity in the U.S. Arctic over the Next Decade. GAO-14-299. Washington, D.C.: March 19, 2014. Managing for Results: Implementation Approaches Used to Enhance Collaboration in Interagency Groups. GAO-14-220. Washington, D.C.: February 14, 2014. Managing for Results: Key Considerations for Implementing Interagency Collaborative Mechanisms. GAO-12-1022. Washington, D.C.: September 27, 2012. 
Arctic Capabilities: DOD Addressed Many Specified Reporting Elements in Its 2011 Arctic Report but Should Take Steps to Meet Near- and Long-term Needs. GAO-12-180. Washington, D.C.: January 13, 2012. Coast Guard: Efforts to Identify Arctic Requirements Are Ongoing, but More Communication about Agency Planning Efforts Would Be Beneficial. GAO-10-870. Washington, D.C.: September 15, 2010. Alaska Native Villages: Limited Progress Has Been Made on Relocating Villages Threatened by Flooding and Erosion. GAO-09-551. Washington, D.C.: June 3, 2009.
|
The Navy is responsible for providing ready forces for current operations and contingency response in the Arctic Ocean. According to data from the National Snow and Ice Data Center, the coverage of sea ice in the Arctic has diminished significantly since 1981. This could potentially increase maritime activities there, leading to a need for a greater U.S. military and homeland security presence in the region. Public Law 115-91 required the Navy to report to Congress on the Navy's capabilities in the Arctic, including any capability gaps and requirements for ice-hardened vessels. It also included a provision for GAO to review the Navy's report. This report (1) assesses the extent to which the Navy's report aligns with current assessments of Arctic threat levels and capabilities required to execute DOD's 2016 Arctic Strategy and (2) describes any current requirements for ice-hardened vessels and DOD's approach for evaluating the capabilities needed as Arctic requirements evolve. GAO reviewed the Navy's report along with DOD's assessments of Arctic threats and naval capabilities. GAO also reviewed the 2016 DOD Arctic Strategy (the most current strategy); DOD and Department of State information on the freedom of navigation program; and DOD's processes for developing capabilities and assessing Arctic capability gaps. GAO is not making any recommendations in this report. DOD provided written technical comments, which were incorporated as appropriate. The Navy's June 2018 report aligns with Department of Defense (DOD) assessments that the Arctic is at low risk for conflict and that DOD has the capabilities to execute the 2016 DOD Arctic Strategy. The June 2018 report also aligns with assessments of Arctic capabilities and gaps in the Navy's 2014 roadmap for implementing the strategy. The June 2018 report states that the Navy can execute the strategy with subsurface, aviation, and surface assets. 
The report notes the significant limitations for operating surface ships in the Arctic, but states that the Navy has the capabilities required for executing the strategy, and so has no plan to design ice-hardened surface ships. In addition, DOD officials stated that the United States has options other than Navy surface ships for demonstrating the U.S. right to operate in the Arctic, including using Coast Guard vessels, Navy submarines, or military aircraft. Navy officials said that the Navy does not have a specific requirement for ice-hardening existing vessels or constructing new ones. The Navy plans to continue to use DOD's established process, the Joint Capabilities Integration and Development System, to reassess Arctic-related requirements as conditions evolve (see fig.). In October 2017, the Joint Requirements Oversight Council validated U.S. Northern Command's initial capabilities document identifying three gaps in the ability to exercise/deploy, position, and conduct deterrence/decisive operations in ice-diminished Arctic waters. At the time of GAO's review, the Joint Staff had validated the capability gaps, which will now compete for resources with other issues designated for further study. Officials said additional study may identify alternative solutions such as adding capabilities to Coast Guard ships or partnering with allies to achieve common strategic goals in the Arctic.
|
IRS administration of the LIHTC program involves overseeing compliance on the part of allocating agencies and taxpayers and developing and publishing regulations and guidance. IRS is responsible for reviewing LIHTC information on three IRS forms that are the basis of LIHTC program reporting and then determining whether program requirements have been met. Taxpayer noncompliance with LIHTC requirements may result in IRS denying claims for the credit in the current year or recapturing—taking back—credits claimed in prior years. Published guidance may include revenue rulings and procedures, notices, and announcements. Other guidance for the program includes an Audit Technique Guide for Completing Form 8823 that includes specific instructions for allocating agencies, including when site visits and file reviews are to be performed, and guidelines for determining noncompliance in areas such as health and safety standards, rent ceilings, income limits, and tenant qualifications. State and local allocating agencies are responsible for day-to-day administration of the LIHTC program based on Section 42 of the Internal Revenue Code and Treasury regulations. More specifically, allocating agencies are responsible for the following: Awarding tax credits. Each state receives an annual allocation of LIHTCs, determined by statutory formula. Allocating agencies then competitively award the tax credits to owners of qualified rental housing projects that reserve all or a portion of their units for low-income tenants, consistent with the agencies' QAPs. Developers typically attempt to obtain funding for their projects by attracting third-party investors willing to contribute equity to the projects; the project investors then can claim the tax credits. Monitoring costs. 
Section 42 states that allocating agencies must consider the reasonableness of costs and their uses for proposed LIHTC projects, allows for agency discretion in making this determination, and also states that credits allocated to a project may not exceed the amount necessary to assure its feasibility and its viability as a low-income housing project. However, Section 42 does not define these amounts or offer guidance on how to calculate them. Monitoring compliance. After credits are awarded, Treasury regulations state that allocating agencies must conduct regular site visits to physically inspect units and review tenant files for eligibility information. The agencies also have reporting and notification requirements. For example, allocating agencies must notify IRS of any noncompliance found during inspections and ensure that owners of LIHTC properties annually certify they met certain requirements for the preceding 12-month period. Developers of awarded projects typically attempt to obtain funding for their projects by attracting third parties willing to invest in the project in exchange for the ability to claim tax credits. The developer sells an ownership interest in the project to one or more investors, or in many instances, to a fund managed by a syndicator who acts as an intermediary between the developer and investors. Investors and syndicators play several roles in the LIHTC market. For example, syndicators help initially connect investors and developers and oversee acquisition of projects. Once a project is acquired, syndicators perform ongoing monitoring and asset management to help ensure the project complies with LIHTC requirements and is financially sound. Syndicators attempt to identify potential problems and intercede if necessary, such as replacing under- or nonperforming general partners, and may use their own reserves to help resolve problems.
In exchange for these services, syndicators typically are compensated through an initial acquisition fee (usually a percentage of the gross equity raised) and an annual asset management fee. Syndicators that we surveyed for our 2017 report were nonprofit or for-profit entities, generally had multistate operations, and averaged more than 20 years of experience with the LIHTC program. The 32 syndicators we surveyed collectively had raised more than $100 billion in LIHTC equity since 1986, helping to fund more than 20,000 properties and about 1.4 million units placed-in-service through 2014. Projects for which these syndicators raised equity in 2005–2014 represented an estimated 75 percent of all LIHTC properties placed-in-service in that period. As we reported in 2016, allocating agencies implemented requirements for QAPs in varying ways and had processes in place to meet requirements for credit awards. Allocating agencies also had procedures to assess costs, but determined award amounts for projects differently, used various cost limits and benchmarks to determine reasonableness of costs, and used widely varying criteria for basis boosts. Agencies also had processes in place to monitor compliance. However, some of these practices raised concerns. In our 2016 report, we generally found that allocating agencies implemented requirements for QAPs in varying ways and had processes in place to meet requirements for awarding the tax credit. Based on our 2016 review of 58 QAPs and our nine site visits, we found the QAPs did not always contain, address, or mention preferences and selection criteria required in Section 42. Rather, some allocating agencies incorporated the information into other LIHTC program documents, or implemented the requirements in practice.
While Section 42 specifies some selection criteria (such as project location or tenant populations with special housing needs), it also more broadly states that a QAP set forth selection criteria “appropriate to local conditions.” As a result, allocating agencies have the flexibility to create their own methods and rating systems for evaluating applicants. We found that nearly all the allocating agencies that we reviewed used points or a threshold system for evaluating applicants. They used criteria such as qualifications of the development team, cost effectiveness, or leveraging of funds from other federal or state programs. According to Section 42, allocating agencies must notify the chief executive officer (or the equivalent) of the local jurisdiction in which the project is to be located. However, some agencies imposed an additional requirement of letters of support from local officials. Specifically, as of 2013, we found that of the 58 agencies in our review, 12 agencies noted that their review or approval of applications was contingent on letters of support, and another 10 agencies awarded points for letters of local support. HUD officials have cited fair housing concerns in relation to any preferences or requirements for local approval or support because of the discriminatory influence these factors could have on where affordable housing is built. In December 2016, IRS issued a revenue ruling that clarified that Section 42 neither requires nor encourages allocating agencies to reject all proposals that do not obtain the approval of the locality where the project developer proposes to place the project. Allocating agencies we visited for our 2016 report had processes in place to meet other Section 42 requirements, including awarding credit to nonprofits and long-term affordability of projects. Allocating agencies must allocate at least 10 percent of the state housing credit ceiling to projects involving qualified nonprofit organizations.
All nine allocating agencies we visited had a set-aside of at least 10 percent of credits to be awarded to projects involving nonprofits. Section 42 also requires allocating agencies to execute an extended low-income housing commitment of at least 30 years before a building can receive credits. Some agencies went beyond this minimum: one allocating agency we visited required developers to sign agreements for longer extended-use periods, while other agencies awarded points to applications whose developers elected longer periods. Allocating agencies we reviewed for our 2016 report had procedures to assess costs, but determined award amounts for projects differently and used various cost limits and benchmarks to determine reasonableness of costs. All nine allocating agencies we visited required applicants to submit detailed cost and funding estimates, an explanation of sources and uses, and expected revenues as part of their applications. These costs were then evaluated to determine a project’s eligible basis (total allowable costs associated with depreciable costs in the project), which in turn determined the qualified basis and ultimately the amount of tax credits to be awarded. Reasonableness of costs. We found that allocating agencies had different ways of determining the reasonableness of project costs. Based on our analysis of 58 QAPs and our nine site visits, agencies had established various limits against which to evaluate the reasonableness of submitted costs, such as applying limits on development costs, total credit awards, developer fees, and builder’s fees. Section 42 does not provide a definition of reasonableness of costs, giving allocating agencies discretion on how best to determine what costs are appropriate for their respective localities. Discretionary basis boosts. Allocating agencies commonly “boosted” the basis for projects, but used widely varying criteria for doing so.
Section 42 notes that an increase or “boost” of up to 130 percent in the eligible basis can be awarded by an allocating agency to a housing development in a qualified census tract or difficult development area. According to our QAP analysis, 44 of 58 plans we reviewed included criteria for awarding discretionary basis boosts, with 16 plans explicitly specifying the use of basis boosts for projects as needed for financial or economic feasibility. The discretionary boosts were applied to different types of projects and on different scales (for example, statewide or citywide). For example, we found one development that received a boost to the eligible basis for having received certain green building certifications, although the applicant did not demonstrate financial need or request the boost. The allocating agency told us that all projects with specified green building certifications received the boost automatically, as laid out in its QAP. At the time of our review, agency officials said that the agency had changed its practices to prevent automatic basis boosts from being applied and required additional checks for financial need. In another QAP we reviewed, one agency described an automatic 130 percent statewide boost for all LIHTC developments. According to the officials, the automatic statewide boost remained in effect because officials made the determination that nearly all projects would need it for financial feasibility. Section 42 requires that allocating agencies determine that “discretionary basis boosts” were necessary for buildings to be financially feasible before granting them to developers. Section 42 does not require allocating agencies to document their analysis for financial feasibility (with or without the basis boost). 
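To make the basis-boost mechanics concrete, the arithmetic can be sketched as follows. This is a simplified, hypothetical illustration rather than the full Section 42 computation: the credit rate (applicable percentage) is published monthly by Treasury, and all dollar figures and rates below are invented for the example.

```python
def annual_credit(eligible_basis, applicable_fraction, credit_rate, boost=1.0):
    """Simplified sketch of how a discretionary basis boost feeds into
    the annual LIHTC amount (claimed each year over a 10-year period).

    eligible_basis:      allowable depreciable development costs
    applicable_fraction: share of the building set aside for low-income use
    credit_rate:         applicable percentage (hypothetical value below)
    boost:               up to 1.30 when an agency grants the 130 percent boost
    """
    qualified_basis = eligible_basis * boost * applicable_fraction
    return qualified_basis * credit_rate

# Hypothetical fully low-income project with a $10 million eligible basis
# and an assumed 9 percent credit rate:
base = annual_credit(10_000_000, 1.0, 0.09)          # about 900,000 per year
boosted = annual_credit(10_000_000, 1.0, 0.09, 1.3)  # about 1,170,000 per year
```

As the sketch shows, a 130 percent boost raises the annual credit by 30 percent; because each state's allocable credits are capped, this is why blanket boosts can translate into fewer projects funded overall.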
However, legislative history for the Housing and Economic Recovery Act of 2008 included expectations that allocating agencies would set standards in their QAPs for which projects would be allocated additional credits, communicate the reasons for designating such criteria, and publicly express the basis for allocating additional credits to a project. In addition, NCSHA (a nonprofit advocating for state allocating agencies) recommends that allocating agencies set standards in their QAPs to determine eligibility for discretionary basis boosts and make the determinations publicly available. In our 2016 report we found that the allocating agencies we visited had processes for and conducted compliance monitoring of projects consistent with Section 42 and Treasury regulations. Treasury regulations require allocating agencies to conduct on-site physical inspections for at least 20 percent of the project’s low-income units and file reviews for the tenants in these units at least once every 3 years. In addition, allocating agencies must annually review owner certifications that affirm that properties continue to meet LIHTC program requirements. Allocating agencies we visited followed regulatory requirements on when to conduct physical inspections and tenant file reviews. Allocating agencies we visited generally used electronic databases to track the frequency of inspections, file reviews, and certifications, although most of these agencies documented these reviews on paper. All the allocating agencies we visited had inspection and review processes in place to monitor projects following the 15-year compliance period, as required under Section 42. Allocating agencies must execute an extended low-income housing commitment to remain affordable for a minimum of 30 years before a tax credit project can receive credits. 
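The inspection sampling floor described above reduces to a one-line computation. This is a minimal sketch of the 20-percent minimum only; actual agency protocols also govern how units are selected and scheduled, which is not modeled here.

```python
import math

def minimum_units_to_inspect(low_income_units, minimum_rate=0.20):
    """Smallest whole number of low-income units that satisfies the
    at-least-20-percent physical inspection requirement (performed at
    least once every 3 years under the Treasury regulations)."""
    return math.ceil(low_income_units * minimum_rate)

# Hypothetical property sizes:
minimum_units_to_inspect(100)  # 20 units
minimum_units_to_inspect(87)   # 18 units (17.4 rounded up)
```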
After the compliance period is over, the obligation for allocating agencies to report to IRS on compliance issues ends and investors are no longer at risk for tax credit recapture. Our prior reports found IRS conducted few reviews of allocating agencies and had not reviewed how agencies determined basis boosts. Data on noncompliance were not reliable and IRS used little of the reported program information. IRS had not directly participated in an interagency initiative to augment HUD’s databases with LIHTC property inspection data. Both our 2015 and 2016 reports concluded that opportunities existed to enhance oversight of the LIHTC program, specifically by leveraging the knowledge and experience of HUD. Few reviews of allocating agencies. In our 2015 report, we found that IRS had conducted seven audits (reviews) of allocating agencies from 1986 (inception of the program) through May 2015. In the audits, IRS found issues related to QAPs, including missing preferences and selection criteria. But in both our 2015 and 2016 reports, IRS officials stated that they did not regard a regular review of QAPs as part of their responsibilities as outlined in Section 42 and therefore did not regularly review the plans. IRS officials said that allocating agencies have primary responsibility to ensure that the plans meet Section 42 preferences and selection criteria. IRS officials noted that review of a QAP to determine if the plan incorporated the elements specified in Section 42 could occur if IRS were to audit an allocating agency. No review of agencies’ discretionary basis boosts. In our 2016 report, we found IRS had not reviewed the criteria allocating agencies used to award discretionary basis boosts. The use of basis boosts has implications for LIHTC housing production because of the risk of oversubsidizing projects, which would reduce the amount of the remaining allocable subsidies and yield fewer LIHTC projects overall within a state. 
IRS also had not provided guidance to agencies on how to determine the need for the additional basis to make projects financially feasible. IRS officials told us that Section 42 gives allocating agencies the discretion to determine if projects receive a basis boost and does not require documentation of financial feasibility. Additionally, IRS officials explained that because the overall amount of subsidies allocated to a state is limited, the inherent structure of the program discourages states from oversubsidizing projects. However, during our 2016 review, we observed a range of practices for awarding discretionary basis boosts, including a blanket basis boost that could result in fewer projects being subsidized and provide more credits than necessary for financial feasibility. We concluded that because IRS did not regularly review QAPs, many of which list criteria for discretionary basis boosts, IRS was unable to determine the extent to which agency policies could result in oversubsidizing of projects. Unreliable data. We reported in 2015 that IRS had not comprehensively captured information reported for the program in its Low-Income Housing Credit database and the existing data were not complete and reliable. IRS guidance requires the collection of data on the LIHTC program in an IRS database, which records information submitted by allocating agencies and taxpayers on three forms. The forms include Credit allocation and certification (Form 8609). The two-part form is completed by the allocating agency and the taxpayer. Agencies report the allocated amount of tax credits available over a 10-year period for each building in a project. The taxpayer reports the date on which the building was placed-in-service (suitable for occupancy). Noncompliance or building disposition (Form 8823). Allocating agencies must complete and submit this form to IRS if an on-site physical inspection of a LIHTC project finds any noncompliance. 
The form records any findings (and corrections of previous findings) based on the inspection of units and review of the low-income tenant certifications. Annual report (Form 8610). Allocating agencies use this form to report the credits they allocated during the year. IRS staff review the reports to ensure allocations do not exceed a statutorily prescribed ceiling for that year. Based on our analysis of the information in the database, we found in 2015 that the data on credit allocation and certification information were not sufficiently reliable to determine if basic requirements for the LIHTC program were being achieved. For example, we could not determine how often LIHTC projects were placed-in-service within required time frames. We concluded that without improvements to the data quality of credit allocation and certification information, it was difficult to determine if credit allocation and placed-in-service requirements had been met by allocating agencies and taxpayers, respectively. Thus, we recommended that IRS address weaknesses identified in data entry and programming controls to ensure reliable data are collected on credit allocations. At the time of our 2015 report, IRS acknowledged the need for improvements in its controls and procedures (including data entry and quality reviews). IRS officials agreed that these problems should be corrected and data quality reviews should be conducted on an ongoing basis. As of March 2017, in response to our recommendation, IRS officials said that they had explored possibilities to improve the database, which not only houses credit allocation information, but also data from noncompliance and building disposition forms. Specifically, IRS is working to move the database to a new and updated server, which will address weaknesses identified in data entry and programming controls. IRS expects to complete the data migration step by early fall of 2017. Until IRS implements its plan to improve the data, this recommendation will remain open. Limited noncompliance data, analysis, and guidance on reporting.
We found in our 2015 and 2016 reports that IRS had done little with the information it collects on noncompliance. IRS had captured little information from the Form 8823 submissions in its database and had not tracked the resolution of noncompliance issues or analyzed trends in noncompliance. As of April 2016, the database included information from about 4,200 of the nearly 214,000 Form 8823s IRS received since 2009 (less than 2 percent of forms received). For our 2015 report, officials told us the decision was made during the 2008–2009 timeframe to input information only from forms that indicated a change in building disposition, such as a foreclosure. IRS focused on forms indicating this change for reasons including the serious nature of the occurrence for the program and impacts on taxpayers’ ability to receive credit. Officials also stated it was not cost effective to input all the form information and trend analysis on all types of noncompliance was not useful for purposes of ensuring compliance with the tax code. In addition, as we reported in both 2015 and 2016, IRS had assessed little of the noncompliance information collected on the Form 8823 or routinely used it to determine trends in noncompliance. Because little information was captured in the Low-Income Housing Credit database, IRS was unable to provide us with program-wide information on the most common types of noncompliance. Furthermore, IRS had no method to determine if issues reported as uncorrected had been resolved or if properties had recurring noncompliance issues. In our 2016 report, we also found inconsistent reporting on the noncompliance forms, the reasons for which included conflicting IRS guidance, different interpretations of the guidance by allocating agencies, and lack of IRS feedback about agency submissions. 
IRS developed guidelines for allocating agencies to use when completing the Form 8823, the “fundamental purpose” of which was identified as providing standardized operational definitions for the noncompliance categories listed on the form. The IRS guide adds that it is important that noncompliance be consistently identified, categorized, and reported and notes that the benefits of consistency included enhanced program administration by IRS. Allocating agencies we visited had various practices for submitting Form 8823 to IRS, including different timing of submissions, reporting on all violations (whether minor or corrected during inspections) or not, and amounts of additional detail provided. Partly because of these different practices, the number of forms each of the nine agencies told us they sent to IRS in 2013 varied from 1 to more than 1,700. We concluded that without IRS clarification of when to send in the Form 8823, allocating agencies will continue to submit inconsistent noncompliance data to IRS, which will make it difficult for IRS to efficiently distinguish between minor violations and severe noncompliance, such as properties with health and safety issues. We recommended that IRS should clarify what to submit and when—in collaboration with the allocating agencies and Treasury—to help IRS improve the quality of the noncompliance information it receives and help ensure that any new guidance is consistent with Treasury regulations. In August 2016, IRS stated it would review the Form 8823 Audit Technique Guide to determine whether additional guidance and clarification were needed for allocating agencies to report noncompliance information on the form. If published legal guidance is required, IRS stated that it will submit a proposal for such guidance for prioritization. IRS indicated an expected implementation date by November 2017. 
In addition, in March 2017, officials stated that IRS Counsel attended an industry conference with allocating agencies at which issues related to the Form 8823 were discussed. Lack of participation in data initiative. Moreover, in our 2016 report we found IRS had not taken advantage of the important progress HUD made through the Rental Policy Working Group (working group)—which was established to better align the operation of federal rental policies across the administration—to augment its databases with LIHTC property inspection data. This data collection effort created opportunities for HUD to share inspection data with IRS that could improve the effectiveness of reviews for LIHTC noncompliance. However, the IRS Small Business/Self-Employed Division managing the LIHTC program had not been involved in the working group. We concluded that such involvement would allow IRS to leverage existing resources, augment its information on noncompliance, and better understand the prevalence of noncompliance. We recommended that staff from the division participate in the physical inspection initiative of the working group and also recommended that the IRS Commissioner evaluate how IRS could use HUD’s real estate database, including how the information might be used to reassess reporting categories on Form 8823 and reassess which categories of noncompliance information to review for audit potential. As of March 2017, IRS had implemented our recommendation to include the appropriate staff at the working group meetings. However, IRS officials stated that since HUD’s database with property inspection data was not complete as of March 2017 and contained data from 30 states, it was unclear how the database could be used. IRS officials said they would continue exploring the HUD database if the data for all LIHTC properties were included and it was possible to isolate the LIHTC property data from other rental properties in the HUD database. 
Both our 2015 and 2016 reports found that opportunities existed to enhance oversight of the LIHTC program, specifically by leveraging the knowledge and experience of HUD. We found in 2015 that while LIHTC is the largest federal program for increasing the supply of affordable rental housing, LIHTC is a peripheral program in IRS in terms of resources and mission. Oversight responsibilities for the program include monitoring allocating agencies and taxpayer compliance. However, as we have discussed previously, IRS oversight has been minimal and IRS has captured and used little program information. As we previously stated, such information could help program managers and congressional decision makers assess the program’s effectiveness. HUD, which has a housing mission, collects and analyzes information on low-income rental housing, including LIHTC-funded projects. As we reported in 2015, HUD’s role in the LIHTC program is generally limited to the collection of information on tenant characteristics (mandated by the Housing and Economic Recovery Act of 2008). However, it has voluntarily collected project-level information on the program since 1996 because of the importance of LIHTC as a source of funding for affordable housing. HUD also has sponsored studies of the LIHTC program that use these data. HUD’s LIHTC databases, the largest federal source of information on the LIHTC program, aggregate project-level data that allocating agencies voluntarily submit and information on tenant characteristics that HUD must collect. Since 2014, HUD also has published annual reports analyzing data it must collect on tenants residing in LIHTC properties. As part of this reporting, HUD compares property information in its tenant database to the information in its property database to help assess the completeness of both databases. In our 2015 report, we also discussed HUD’s experience in working with allocating agencies.
While multiple federal agencies administer housing- related programs, HUD is the lead federal agency for providing affordable rental housing. Much like LIHTC, HUD’s rental housing programs rely on state and local agencies to implement programs. HUD is responsible for overseeing these agencies, including reviewing state and local consolidated plans for the HOME Investment Partnership and Community Development Block Grant programs—large grant programs that also are used to fund LIHTC projects. HUD also has experience in directly overseeing allocating agencies in their roles as contract administrators for project-based Section 8 rental assistance. HUD has processes, procedures, and staff in place for program evaluation and oversight of state and local agencies that could be built upon and strengthened. In our 2015 report, we concluded that significant resource constraints affected IRS’s ability to oversee taxpayer compliance and precluded wide-ranging improvement to such functions, but that IRS still had an opportunity to enhance oversight of LIHTC. We also concluded that leveraging the experience and expertise of another agency with a housing mission, such as HUD, might help offset some of IRS’s limitations in relation to program oversight. HUD’s existing processes and procedures for overseeing allocating agencies could constitute a framework on which further changes and improvements in LIHTC could be effected. However, enhancing HUD’s role could involve additional staff and other resources. An estimate of potential costs and funding options for financing enhanced federal oversight of the LIHTC program would be integral to determining an appropriate funding mechanism. We asked that Congress consider designating HUD as a joint administrator of the program responsible for oversight. 
As part of the deliberation, we suggested that Congress direct HUD to estimate the costs to monitor and perform the additional oversight responsibilities, including a discussion of funding options. Treasury agreed that it would be useful for HUD to receive ongoing responsibility for, and resources to perform, research and analysis on the effectiveness of LIHTCs in increasing the availability of affordable rental housing. Treasury noted that such research and analysis are not part of IRS’s responsibilities or consistent with its expertise in interpreting and enforcing tax laws. However, Treasury stated that responsibility for interpreting and enforcing the code should remain entirely with IRS. Our report noted that if program administration were changed, IRS could retain certain key responsibilities consistent with its tax administration mission. In our 2016 report, we concluded that IRS oversight of allocating agencies continued to be minimal, particularly in reviewing QAPs and allocating agencies’ practices for awarding discretionary basis boosts. As a result, we reiterated the recommendation from our 2015 report that Congress should consider designating HUD as a joint administrator of the program responsible for oversight due to its experience and expertise as an agency with a housing mission. In response to our 2016 report, HUD stated it remains supportive of mechanisms to use its significant expertise and experience administering housing programs for enhanced effectiveness of LIHTC. HUD also stated that enhanced interagency coordination could better ensure compliance with fair housing requirements and improve alignment of LIHTC with national housing priorities. As of July 2017, Congress had not enacted legislation to give HUD an oversight role for LIHTC. Chairman Hatch, Ranking Member Wyden, and Members of the Committee, this concludes my prepared statement. I would be happy to respond to any questions that you may have at this time. 
For further information about this testimony, please contact me at 202-512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Nadine Garrick Raidbard, Assistant Director; Anar N. Jessani, Analyst in Charge; William R. Chatlos; Farrah Graham; Daniel Newman; John McGrail; Barbara Roesmann; and MaryLynn Sergent. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
The LIHTC program, established under the Tax Reform Act of 1986, is the largest source of federal assistance for developing affordable rental housing and will represent an estimated $8.5 billion in forgone revenue in 2017. LIHTC encourages private-equity investment in low-income rental housing through tax credits. The program is administered by IRS and allocating agencies, which are typically state or local housing finance agencies established to meet affordable housing needs of their jurisdictions. Responsibilities of allocating agencies (in Section 42 of the Internal Revenue Code and regulations of the Department of the Treasury) encompass awarding credits, assessing the reasonableness of project costs, and monitoring projects. In this testimony, GAO discusses (1) how allocating agencies implement federal requirements for awarding LIHTCs, assess reasonableness of property costs, and monitor properties' ongoing compliance; and (2) IRS oversight of the LIHTC program. This statement is based primarily on three reports GAO issued in July 2015 (GAO-15-330), May 2016 (GAO-16-360), and February 2017 (GAO-17-285R). GAO also updated the status of recommendations made in these reports by reviewing new or revised IRS policies, procedures, and reports and interviewing IRS officials. In its May 2016 report on the Low-Income Housing Tax Credit (LIHTC) program of the Internal Revenue Service (IRS), GAO found that state and local housing finance agencies (allocating agencies) implemented requirements for allocating credits, reviewing costs, and monitoring projects in varying ways. Moreover, some allocating agencies' day-to-day practices to administer LIHTCs also raised concerns.
For example, qualified allocation plans (developed by 58 allocating agencies) that GAO analyzed did not always mention all selection criteria and preferences that Section 42 of the Internal Revenue Code requires. In addition, allocating agencies could increase (boost) the eligible basis used to determine allocation amounts for certain buildings if needed for financial feasibility, but they were not required to document the justification for the increases. The criteria used to award boosts varied, with some allocating agencies allowing boosts for specific types of projects and one allowing boosts for all projects in its state. In its 2015 and 2016 reports, GAO found IRS oversight of the LIHTC program was minimal. Additionally, IRS collected little data on or performed limited analysis of compliance in the program. Specifically, GAO found the following: Since 1986, IRS had conducted seven audits of the 58 allocating agencies GAO reviewed. Reasons for the minimal oversight may include LIHTC being viewed as a peripheral program in IRS in terms of its mission and priorities for resources and staffing. IRS had not reviewed the criteria allocating agencies used to award discretionary basis “boosts,” which raised concerns about oversubsidizing projects (and reducing the number of projects funded). IRS guidance to allocating agencies on reporting noncompliance was conflicting. As a result, allocating agencies' reporting of property noncompliance was inconsistent. IRS had not participated in and leveraged the work of the physical inspection initiative of the Rental Policy Working Group (established to better align the operations of federal rental assistance programs) to augment its databases with physical inspection data on LIHTC properties that the Department of Housing and Urban Development (HUD) maintains. In its prior reports, GAO made a total of four recommendations to IRS. As of July 2017, IRS had implemented one recommendation to include relevant IRS staff in the working group.
IRS has not implemented the remaining three recommendations, which include improving the data quality of its LIHTC database, clarifying guidance to agencies on reporting noncompliance, and evaluating how the information HUD collects could be used to identify noncompliance issues. In addition, because of the limited oversight of LIHTC, in its 2015 report GAO suggested that Congress consider designating certain oversight responsibilities to HUD, because that agency has experience working with allocating agencies and has processes in place to oversee them. As of July 2017, Congress had not enacted legislation giving HUD an oversight role for LIHTC.
The mission of IRS, a bureau within the Department of the Treasury, is to (1) provide America’s taxpayers top quality service by helping them understand and meet their tax responsibilities and (2) enforce the law with integrity and fairness to all. In carrying out its mission, IRS annually collects over $3 trillion in taxes from millions of taxpayers, and manages the distribution of over $400 billion in refunds. To guide its future direction, the agency has two strategic goals: (1) deliver high quality and timely service to reduce taxpayer burden and encourage voluntary compliance; and (2) effectively enforce the law to ensure compliance with tax responsibilities and combat fraud. Effective management of IT is critical for agencies to achieve successful outcomes. This is particularly true for IRS, given the role of IT in enabling the agency to carry out its mission and responsibilities. For example, IRS relies on information systems to process tax returns; account for tax revenues collected; send bills for taxes owed; issue refunds; assist in the selection of tax returns for audit; and provide telecommunications services for all business activities, including the public’s toll-free access to tax information. For fiscal year 2016, IRS was pursuing 23 major and 114 non-major IT investments to carry out its mission. According to the agency, it expended approximately $2.7 billion on these investments during fiscal year 2016, including $1.9 billion, or 70 percent, for operations and maintenance activities, and approximately $800 million, or 30 percent, for development, modernization, and enhancement. 
We have previously reported on a number of the agency’s major investments, including the following investments in development, modernization, and enhancement:

The Affordable Care Act investment encompasses the planning, development, and implementation of IT systems needed to support tax administration responsibilities associated with key provisions of the Patient Protection and Affordable Care Act. IRS expended $253 million on this investment in fiscal year 2016.

Customer Account Data Engine 2 is being developed to replace the Individual Master File investment, IRS’s authoritative data source for individual tax account data. A major component of the program is a modernized database for all individual taxpayers that is intended to provide the foundation for more efficient and effective tax administration and help address financial material weaknesses for individual taxpayer accounts. Customer Account Data Engine 2 data is also expected to be made available for access by downstream systems, such as the Integrated Data Retrieval System for online transaction processing by IRS customer service representatives. IRS expended $182.6 million on this investment in fiscal year 2016.

The Return Review Program is IRS’s system of record for fraud detection. As such, it is intended to enhance the agency’s capabilities to detect, resolve, and prevent criminal and civil tax noncompliance. In addition, it is intended to allow analysis and support of complex case processing requirements for compliance and criminal investigation programs during prosecution, revenue protection, accounts management, and taxpayer communications processes. According to IRS, as of May 2017, the system has helped protect over $4.5 billion in revenue. IRS expended $100.2 million on this investment in fiscal year 2016.
We have also reported on the following investments in operations and maintenance:

Mainframes and Servers Services and Support provides for the design, development, and deployment of server, middleware, and large systems and enterprise storage infrastructures, including supporting systems software products, databases, and operating systems. This investment has been operational since 1970. IRS expended $499.4 million on this investment in fiscal year 2016.

Telecommunications Systems and Support provides for IRS’s network infrastructure services such as network equipment, video conference service, enterprise fax service, and voice service for over 85,000 employees at about 1,000 locations. According to IRS, the investment supports the delivery of services and products to employees, which translates into service to taxpayers. IRS expended $336.4 million on this investment in fiscal year 2016.

Individual Master File is the authoritative data source for individual taxpayer accounts. Using this system, accounts are updated, taxes are assessed, and refunds are generated as required during each tax filing period. Virtually all IRS information system applications and processes depend on output, directly or indirectly, from this data source. IRS expended $14.3 million on this investment in fiscal year 2016.

In fiscal year 2017, the federal government planned to spend more than $89 billion for IT that is critical to the health, economy, and security of the nation. However, we have reported that prior IT expenditures have often resulted in significant cost overruns, schedule delays, and questionable mission-related achievements. In light of these ongoing challenges, in February 2015, we added improving the management of IT acquisitions and operations to our list of high-risk areas for the federal government.
This area highlights several critical IT initiatives in need of additional congressional oversight, including (1) reviews of troubled projects; (2) efforts to increase the use of incremental development; (3) efforts to provide transparency relative to the cost, schedule, and risk levels for major IT investments; (4) reviews of agencies’ operational investments; (5) data center consolidation; and (6) efforts to streamline agencies’ portfolios of IT investments. We noted that implementation of these initiatives has been inconsistent and more work remains to demonstrate progress in achieving acquisitions and operations outcomes. Between fiscal years 2010 and 2015, we made about 800 recommendations related to this high-risk area to the Office of Management and Budget and agencies. As of September 2017, about 54 percent of these recommendations had been implemented. The Federal Information Technology Acquisition Reform provisions (commonly referred to as FITARA), enacted as a part of the Carl Levin and Howard P. ‘Buck’ McKeon National Defense Authorization Act for Fiscal Year 2015, aimed to improve federal IT acquisitions and operations and recognized the importance of the initiatives mentioned above by incorporating certain requirements into the law. For example, among other things, the act requires the Office of Management and Budget to publicly display investment performance information and review federal agencies’ IT investment portfolios. The current administration has also initiated additional efforts aimed at improving federal IT. Specifically, in March 2017, the administration established the Office of American Innovation, which has a mission to, among other things, make recommendations to the President on policies and plans aimed at improving federal government operations and services and modernizing federal IT.
Further, in May 2017, the administration established the American Technology Council, which has a goal of helping to transform and modernize federal agency IT and how the federal government uses and delivers digital services. Recently, this council worked with several agencies to develop a draft report on modernizing IT in the federal government. The council released the draft report for public comment in August 2017. In reviews that we have undertaken over the past several years, we have identified various opportunities for IRS to improve the management of its IT investments. These reviews have identified a number of weaknesses in the agency’s reporting on the performance of its modernization investments to Congress and other stakeholders. In this regard, we have pointed out that information on investments’ performance in meeting cost, schedule, and scope goals is critical to determining the agency’s progress in completing key IT investments. We have also stressed the importance of the agency addressing weaknesses in its process for prioritizing modernization activities. Accordingly, we have made a number of related recommendations, which IRS is in various stages of implementing. In our June 2012 report on IRS’s performance in meeting cost, schedule, and scope goals for selected investments, we noted that, while IRS reported on the cost and schedule of its major IT investments, the agency did not have a quantitative measure of scope—a measure that shows whether these investments delivered planned functionality. We stressed that having such a measure is a good practice as it provides information about whether an investment has delivered the functionality that was paid for. Accordingly, we recommended that the agency develop a quantitative measure of scope for its major IT investments, to have more complete information on the performance of these investments.
In response, IRS started developing a quantitative measure of scope for selected investments in December 2015 and has been working to gradually expand the measure to other investments. In April 2013, based on another review of IRS’s performance in meeting cost, schedule, and scope goals, we reported that there were weaknesses, to varying degrees, in the reliability of IRS’s investment performance information. Specifically, we found that IRS had not updated investment cost and schedule variance information with actual amounts on a timely basis (i.e., within the 60-day time frame required by the Department of the Treasury) in about 25 percent of the activities associated with the investments selected in our review. In addition, the agency had not specified how project managers should estimate the cost and schedule performance of ongoing projects. As a result of these findings, we recommended that IRS ensure that its projects consistently follow guidance for updating performance information 60 days after completion of an activity and develop and implement guidance that specifies best practices to consider when estimating ongoing projects’ progress in meeting cost and schedule goals. IRS agreed with, and subsequently addressed, the recommendation related to updating performance information on a timely basis. However, the agency partially disagreed with the recommendation to develop guidance on estimating progress in meeting cost and schedule goals for ongoing projects. In this regard, we had suggested the use of earned value management data as a best practice to determine projected cost and schedule amounts. IRS did not agree with the use of the technique, stating that it was not part of the agency’s current program management processes and that the cost and burden to use earned value management would outweigh the value added.
We disagreed with the agency’s view of earned value management because best practices have found that its value generally outweighs the cost and burden of its implementation (although we suggested it as one of several examples of practices that could be used to determine projected amounts). We also stressed that implementing our recommendation would help improve the reliability of reported cost and schedule variance information, and that IRS had flexibility in determining which best practices to use to calculate projected amounts. For those reasons, we maintained that our recommendation was warranted. However, IRS has yet to address the recommendation. We reported in April 2014 that the cost and schedule performance information that IRS reported for its major investments was for the fiscal year only. We noted that this reporting would be more meaningful if supplemented with cumulative cost and schedule performance information in order to better indicate progress toward meeting goals. In addition, we noted that the reported variances for selected investments were not always reliable because the estimated and actual cost and schedule amounts on which they depended had not been consistently updated in accordance with Department of the Treasury reporting requirements as we had previously recommended. We recommended that IRS report more comprehensive and reliable cost and schedule information for its major investments. The agency agreed with our recommendation and said it believed it had addressed the recommendation in its quarterly reports to Congress. We disagreed with IRS’s assertion, however, noting that, while the report includes cumulative costs, they are cumulative for the fiscal year, not for the investment or investment segment as we recommended, and they therefore do not account for cost variances from prior fiscal years. We therefore maintained our recommendation.
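For context, earned value management compares three quantities tracked for each activity: planned value (the budgeted cost of work scheduled to date), earned value (the budgeted cost of work actually performed), and actual cost. The following is a minimal, generic sketch of how those inputs yield projected cost and schedule amounts; the function and all dollar figures are hypothetical illustrations, not IRS data or a method prescribed in our reports:

```python
# Illustrative earned value management (EVM) projection. The figures below
# are hypothetical and are used only to show how the technique works.

def evm_projection(bac, pv, ev, ac):
    """Project cost and schedule performance from EVM data.

    bac: budget at completion (total planned cost)
    pv:  planned value (budgeted cost of work scheduled to date)
    ev:  earned value (budgeted cost of work actually performed)
    ac:  actual cost (what has been spent to date)
    """
    cpi = ev / ac    # cost performance index (< 1.0 means over cost)
    spi = ev / pv    # schedule performance index (< 1.0 means behind schedule)
    eac = bac / cpi  # estimate at completion, assuming current cost efficiency
    return {"cpi": cpi, "spi": spi, "eac": eac,
            "cost_variance": ev - ac, "schedule_variance": ev - pv}

# Hypothetical project: $10M budget; $4M of work was scheduled to date,
# $3M worth was actually performed, and $3.5M has been spent.
metrics = evm_projection(bac=10_000_000, pv=4_000_000, ev=3_000_000, ac=3_500_000)
print(f"CPI {metrics['cpi']:.2f}, SPI {metrics['spi']:.2f}, "
      f"projected total cost ${metrics['eac']:,.0f}")
```

In this sketch, a cost performance index below 1.0 inflates the estimate at completion above the original budget, which is the kind of projected cost amount that reported variances would be measured against.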
In February 2015, after assessing the status and plans of the Return Review Program and Customer Account Data Engine 2, we reported that these investments had experienced significant variances from initial cost, schedule, and scope plans; yet, IRS did not include these variances in its reports to Congress because the agency had not addressed our prior recommendations. Specifically, IRS had not addressed our recommendations to report how delivered scope compared to what was planned, to develop guidance for determining projected cost and schedule amounts, and to report cumulative cost and schedule performance information. We stressed that implementing these recommendations would improve the transparency of congressional reporting so that Congress has the appropriate information needed to make informed decisions. We made additional recommendations for the agency to improve the reliability and reporting of investment performance information and management of selected major investments. IRS agreed with the recommendations and has since addressed them. In our most recent report in June 2016, we assessed IRS’s process for determining its funding priorities for both modernization and operations. We found that the agency had developed a structured process for allocating funding to its operations activities consistent with best practices, which specify that an organization should document policies and procedures for selecting new and reselecting ongoing IT investments, and include criteria for making selection and prioritization decisions. However, IRS did not have a similarly structured process for prioritizing its modernization activities, to which the agency allocated hundreds of millions of dollars for fiscal year 2016. Agency officials stated that discussions were held to determine the modernization efforts that were of highest priority to meet IRS’s future state vision and technology roadmap.
The officials reported that staffing resources and lifecycle stage were considered, but there were no formal criteria for making final determinations. Senior IRS officials said they did not have a structured process for the selection and prioritization of business systems modernization activities because the projects were established and there were fewer competing activities than for operations support. Nevertheless, we stressed that, while there may have been fewer competing activities, a structured, albeit simpler, process that is documented and consistent with best practices would provide transparency into the agency’s needs and priorities for appropriated funds. We concluded that such a process would better assist Congress and other decision makers in carrying out their oversight responsibilities. Accordingly, we recommended that IRS develop and document its processes for prioritizing IT funding. The agency agreed with the recommendation and has taken steps to address it. Further, we found that IRS had reported complete performance information for two of the six selected investments in our review, to include a measure of progress in delivering scope, which we have been recommending since 2012. However, the agency did not always use best practices for determining the amount of work completed by its own staff, resulting in inaccurate reports of work performed. Consequently, we recommended that IRS modify its processes for determining the work performed by its staff. The agency disagreed with the recommendation, stating that the costs involved would outweigh the value provided. Specifically, IRS stated that modifying the use of the level of effort measure would equate to a certified earned value management system, which would add immense burden on IRS’s programs on various fronts and would outweigh the value it provides.
However, we did not specify the use of an earned value management system in our report and believe other methods could be used to more reliably measure work performed. In addition, we believed it was a reasonable expectation for IRS to reliably determine the actual work completed, as opposed to assuming that work is always completed as planned, since, as noted in our report, 22 to 100 percent of the work for selected projects was performed by IRS staff. Accordingly, we maintained that the recommendation was still warranted. Our work has also emphasized the importance of IRS more effectively managing its aging legacy systems. For example, in November 2013, we reported on the extent to which 10 of the agency’s large investments had undergone operational analyses—a key performance evaluation and oversight mechanism required by the Office of Management and Budget to ensure investments in operations and maintenance continue to meet agency needs. We noted that IRS’s Mainframe and Servers Services and Support had not had an operational analysis for fiscal year 2012. As a result, we recommended that the Secretary of Treasury direct appropriate officials to perform an operational analysis for the investment, including ensuring that the analysis addressed the 17 key factors identified in the Office of Management and Budget’s guidance for performing operational analyses. The department did not comment on our recommendation but subsequently implemented it. In addition, we previously reported on legacy IT systems across the federal government, noting that these systems were becoming increasingly obsolete and that many of them used outdated software languages and hardware parts that were unsupported.
As part of that work, we noted that the Department of the Treasury used assembly language code—a computer language initially used in the 1950s and typically tied to the hardware for which it was developed—and Common Business Oriented Language (COBOL)—a programming language developed in the late 1950s and early 1960s—to program its legacy systems. It is widely known that agencies need to move to more modern, maintainable languages, as appropriate and feasible. For example, the Gartner Group, a leading IT research and advisory company, has reported that organizations using COBOL should consider replacing the language and, in 2010, noted that there should be a shift in focus to using more modern languages for new products. The use of COBOL presents challenges for agencies such as IRS given that procurement and operating costs associated with this language will steadily rise, and because fewer people with the proper skill sets are available to support the language. Further, we reported that IRS’s Individual Master File was over 50 years old and, although IRS was working to modernize it, the agency did not have a time frame for completing the modernization or replacement. Thus, we recommended that the Secretary of the Treasury direct the Chief Information Officer to identify and plan to modernize and replace legacy systems, as needed, and consistent with the Office of Management and Budget’s draft guidance on IT modernization, including time frames, activities to be performed, and functions to be replaced or enhanced. The department had no comments on our recommendation. We will continue to follow up with the agency to determine the extent to which this recommendation has been addressed. In addition, we have ongoing work identifying risks associated with IRS’s legacy IT systems, and the agency’s management of these risks. In summary, IRS faces longstanding challenges in managing its IT systems.
While effective IT management has been a prevalent issue throughout the federal government, it is especially concerning at IRS given the agency’s extensive reliance on IT to carry out its mission of providing service to America’s taxpayers in meeting their tax obligations. Thus, it is important that the agency establish, document, and implement policies and procedures for prioritizing its modernization efforts, as we have recently recommended, and provide Congress with accurate information on progress in delivering such modernization efforts. In addition, we have emphasized the need for IRS to address the inherent challenges associated with aging legacy systems so that it does not continue to maintain investments that have outlived their effectiveness and are consuming resources that outweigh their benefits. Continued attention to implementing our recommendations will be vital to helping IRS ensure the effective management of its efforts to modernize its aging IT systems and ensure its multibillion-dollar investment in IT is meeting the needs of the agency. Chairman Buchanan, Ranking Member Lewis, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staffs have any questions about this testimony, please contact me at (202) 512-9286 or at pownerd@gao.gov. Individuals who made key contributions to this testimony are Sabine Paul (Assistant Director), Rebecca Eyler, and Bradley Roach (Analyst in Charge).
IRS 2013 Budget: Continuing to Improve Information on Program Costs and Results Could Aid in Resource Decision Making, GAO-12-603 (Washington, D.C.: June 8, 2012).

Information Technology: Consistently Applying Best Practices Could Help IRS Improve the Reliability of Reported Cost and Schedule Information, GAO-13-401 (Washington, D.C.: April 17, 2013).

Information Technology: Agencies Need to Strengthen Oversight of Multibillion Dollar Investments in Operations and Maintenance, GAO-14-66 (Washington, D.C.: November 6, 2013).

Information Technology: IRS Needs to Improve the Reliability and Transparency of Reported Investment Information, GAO-14-298 (Washington, D.C.: April 2, 2014).

Information Technology: Management Needs to Address Reporting of IRS Investments’ Cost, Schedule, and Scope Information, GAO-15-297 (Washington, D.C.: February 25, 2015).

Information Technology: Federal Agencies Need to Address Aging Legacy Systems, GAO-16-468 (Washington, D.C.: May 25, 2016).

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The IRS, a bureau of the Department of the Treasury, relies extensively on IT to annually collect more than $3 trillion in taxes, distribute more than $400 billion in refunds, and carry out its mission of providing service to America's taxpayers in meeting their tax obligations. For fiscal year 2016, IRS expended approximately $2.7 billion for IT investments, 70 percent of which was allocated for operational systems. GAO has long reported that the effective and efficient management of IT acquisitions and operational investments has been a challenge in the federal government. Accordingly, in February 2015, GAO introduced a new government-wide high-risk area, Improving the Management of IT Acquisitions and Operations. GAO has also reported on challenges IRS has faced in managing its IT acquisitions and operations, and identified opportunities for IRS to improve the management of these investments. In light of these challenges, GAO was asked to testify about IT management at IRS. To do so, GAO summarized its prior work regarding IRS's IT management, including the agency's management of operational, or legacy, IT systems.

GAO has issued a series of reports in recent years that have identified numerous opportunities for the Internal Revenue Service (IRS) to improve the management of its major acquisitions and operational, or legacy, information technology (IT) investments. For example:

In June 2016, GAO reported that IRS had developed a structured process for allocating funding to its operations activities, consistent with best practices; however, GAO found that IRS did not have a similarly structured process for prioritizing modernization activities, to which the agency allocated hundreds of millions of dollars for fiscal year 2016. Instead, IRS officials stated that they held discussions to determine the modernization efforts that were of highest priority to meet IRS's future state vision and technology roadmap, and considered staffing resources and lifecycle stage. However, they did not use formal criteria for making final determinations. GAO concluded that establishing a structured process for prioritizing modernization activities would better assist Congress and other decision makers in ensuring that the right priorities are funded. Accordingly, GAO recommended that IRS establish, document, and implement policies and procedures for prioritizing modernization activities. IRS agreed with the recommendation and has efforts underway to address it.

In the same report, GAO noted that IRS could improve the accuracy of reported performance information for key development investments to provide Congress and other external parties with pertinent information about the delivery of these investments. This included investments such as Customer Account Data Engine 2, which IRS is developing to replace its 50-year-old repository of individual tax account data, and the Return Review Program, IRS's system of record for fraud detection. GAO recommended that IRS take steps to improve reported investment performance information. IRS agreed with the recommendation, and has efforts underway to address it.

In a May 2016 report on legacy IT systems across the federal government, GAO noted that IRS used assembly language code to program key legacy systems. Assembly language code is a computer language initially used in the 1950s that is typically tied to the hardware for which it was developed; it has become difficult to code and maintain. One investment that used this language is IRS's Individual Master File, which serves as the authoritative data source for individual taxpayer accounts. GAO noted that, although IRS has been working to replace the Individual Master File, the bureau did not have time frames for its modernization or replacement. Therefore, GAO recommended that the Department of the Treasury identify and plan to modernize and replace this legacy system, consistent with applicable guidance from the Office of Management and Budget. The department had no comments on the recommendation.

GAO has made a number of recommendations to IRS to improve its management of IT acquisitions and operations. IRS has generally agreed with the recommendations and is in various stages of implementing them.
The Bureau’s address canvassing operation updates its address list and maps, which are the foundation of the decennial census. An accurate address list both identifies all households that are to receive a notice by mail requesting participation in the census (by Internet, phone, or mailed-in questionnaire) and serves as the control mechanism for following up with households that fail to respond to the initial request. Precise maps are critical for counting the population in the proper locations—the basis of congressional apportionment and redistricting. Our prior work has shown that developing an accurate address list is challenging—in part because people can reside in unconventional dwellings, such as converted garages, basements, and other forms of hidden housing. For example, as shown in figure 1, what appears to be a single-family house could contain an apartment, as suggested by its two doorbells. During address canvassing, the Bureau verifies that its master address list and maps are accurate to ensure the tabulation for all housing units and group quarters is correct. For the 2010 Census, the address canvassing operation mobilized almost 150,000 field workers to canvass almost every street in the United States and Puerto Rico to update the Bureau’s address list and map data—and in 2012 reported the cost at nearly $450 million. The cost of going door-to-door in 2010, along with the emerging availability of imagery data, led the Bureau to explore an approach for 2020 address canvassing that would allow for fewer boots on the ground. Traditionally, the Bureau went door-to-door to homes across the country to verify addresses. This “in-field address canvassing” is a labor-intensive and expensive operation. To achieve cost savings, in September 2014 the Bureau decided to use a reengineered approach for building its address list for the 2020 Census and not go door-to-door (or “in-field”) across the country, as it has in prior decennial censuses.
Rather, some areas (known as “blocks”) would only need a review of their address and map information using computer imagery and third-party data sources—what the Bureau calls “in-office” address canvassing procedures. According to the Bureau’s address canvassing operational plan, in-office canvassing had two phases: During the first phase, known as “Interactive Review,” Bureau employees use current aerial imagery to determine if areas have housing changes, such as new residential developments or repurposed structures, or if the areas match what is in the Bureau’s master address file. The Bureau assesses the extent to which the number of housing units in the master address file is consistent with the number of units visible in the current imagery. If the housing shown in the imagery matches what is listed in the master address file, then those areas are considered to be resolved or stable and would not be canvassed in-field. During the second phase, known as “Active Block Resolution,” employees would try to resolve coverage concerns identified during the first phase and verify every housing unit by virtually canvassing the entire area. As part of this virtual canvass, the Bureau would compare what is found in imagery to the master address file data and other data sources in an attempt to resolve any discrepancies. If Bureau employees still could not reconcile the discrepancies, such as housing unit count or street locations with what is on the address list, then they would refer these blocks to in-field address canvassing. However, in March 2017, citing budget uncertainty, the Bureau decided to discontinue the second phase of in-office review for the 2020 Census. According to the Bureau, in order to ensure that the operations implemented in the 2018 End-to-End Test were consistent with operations planned for the 2020 Census, the Bureau added the blocks originally resolved during the second phase of in-office review back into the in-field workload for the test.
The cancellation of Active Block Resolution is expected to increase the national in-field canvassing workload by 5 percentage points (from 25 percent to 30 percent of addresses). During in-field address canvassing, listers use laptop computers to compare what they see on the ground to what is on the address list and map. Listers confirm, add, delete, or move addresses to their correct map positions. At each housing unit, listers are trained to speak with a knowledgeable resident to confirm or update address data, ask about hidden housing units, confirm the housing unit’s location on the map (known as the map spot), and collect a map spot using global positioning systems (GPS). If no one is available, listers are to use house numbers and street signs to verify the address data. The data are transmitted electronically to the Bureau. The Census Bureau expects that the End-to-End Test for address canvassing will identify areas for improvement and changes that need to be made for the 2020 Census. Our prior work has shown the importance of robust testing. Rigorous testing is a critical risk mitigation strategy because it provides information on the feasibility and performance of individual census-taking activities, their potential for achieving desired results, and the extent to which they are able to function together under full operational conditions. In February 2017, we added the 2020 Census to GAO’s High-Risk List because operational and other issues are threatening the Bureau’s ability to deliver a cost-effective enumeration. We reported on concerns about the Bureau’s capacity to implement innovative census-taking methods, uncertainties surrounding critical information technology systems, and the quality of the Bureau’s cost estimates.
Underlying these issues are challenges in such essential management functions as the Bureau’s ability to collect and use real-time indicators of cost, performance, and schedule; follow leading practices for cost estimation, scheduling, risk management, and IT acquisition, development, testing, and security; and cost-effectively deal with contingencies including, for example, fiscal constraints, potential changes in design, and natural disasters. The Bureau completed in-field address canvassing as scheduled by September 29, 2017, canvassing approximately 340,400 addresses. Most of the listers we observed generally followed procedures. For example, 15 of 18 listers knocked on doors, and 16 of 18 looked for hidden housing units, which is important for establishing that address lists and maps are accurate and for identifying hard-to-count populations. Those procedures include taking such steps as: comparing the housing units they see on the ground to the housing units on the address list; knocking on all doors so they can speak with a resident to confirm the address (even if the address is visible on the mailbox or house) and to confirm that there are no other living quarters, such as a basement apartment; looking for “hidden housing units”; looking for group quarters, such as group homes or dormitories; and confirming the location of the housing unit on a map with GPS coordinates collected on the doorstep. To the extent procedures were not followed, it generally occurred when listers did not go up to the door and speak with a resident or take a map spot on the doorstep. Failure to follow procedures could adversely affect a complete count, as addresses could be missed or a group quarter could be misclassified as a residential address. After we alerted the Bureau to our observations, the Bureau agreed, moving forward, to emphasize the importance of following procedures during training for in-field address canvassing.
Address canvassing has tight time frames, so work needs to be assigned efficiently. Sometimes this means the Bureau needs to reassign work from one lister to another. During address canvassing, the Bureau discovered that reassigned census blocks sometimes would appear in both the new and the original listers’ work assignments. In some cases, this led to blocks being worked more than once, which decreased efficiency, increased costs, and could create confusion and credibility issues when two different listers visited a house. According to Bureau procedures, listers were instructed to connect to the Bureau’s Mobile Case Management (MCM) system to download work assignments (address blocks) and to transmit their completed work at the beginning and end of the work day but not during the work day. Thus, during the work day, they were unaware when unworked blocks had been reassigned to another lister. Bureau officials also told us that the Listing and Mapping Application (LiMA) software used to update the address file and maps was supposed to have the functionality to prevent blocks from being worked more than once, but this functionality was not developed because of budget cuts. For 2020, Bureau officials told us they plan to create operational procedures for reassigning work. According to Bureau officials, they plan to require supervisors to contact the original lister when work is reassigned. We have requested a copy of those procedures; however, the Bureau has not finalized them. Standards for Internal Control in the Federal Government (Standards for Internal Control) call for management to design control activities, such as policies and procedures, to achieve objectives. Finalizing these procedures should help prevent blocks from being canvassed more than once.
The Bureau conducts tests under census-like conditions, in part, to verify 2020 Census planning assumptions, such as workload, how many houses per hour a lister can verify (also known as a lister’s productivity rate), and how many people the Bureau needs to hire for an operation. Moreover, one of the objectives of the test is to validate that the operations being tested are ready at the scale needed for the 2020 Census. For the 2018 End-to-End Test, the Bureau completed in-field address canvassing on time at two sites and early at one site, despite workload increases at all three test sites and hiring shortfalls at two sites. The Bureau credits this success to better than expected productivity. As the Bureau reviews the results of address canvassing, evaluating the factors that affected workload, productivity rates, and staffing, and making adjustments to its estimates, if necessary, before the 2020 Census would help the Bureau ensure that address canvassing has the appropriate number of staff and equipment to complete the work in the required time frame. For the 2020 Census, the Bureau estimates it will have to send 30 percent of addresses to the field for listers to verify. However, at the three test sites, the workload was higher than this estimate (see table 1). At one test site, the percentage of addresses verified through in-field address canvassing was 76 percent, or 46 percentage points more than the Bureau’s expected 2020 Census in-field address canvassing workload estimate of 30 percent. Bureau officials told us that the 30 percent in-field workload estimate is a national average and is not specific to any of the three test sites. Prior to the test, officials said, the Bureau also knew that some of the West Virginia test site’s housing units were being assigned new addresses due to a local government emergency 911 address conversion and that the in-field workload would therefore be greater in West Virginia when compared to the other test sites.
We requested documentation for the Bureau’s original estimate that 30 percent of the 133.8 million expected addresses would be canvassed in-field for the 2020 Census. However, the Bureau was unable to provide us with documentation to support how it arrived at the 30 percent estimate. Instead, the Bureau provided us with a November 2017 methodology document that showed three in-field address canvassing workload scenarios whereby between 41.9 and 45.1 percent of housing units would need to go to the field for address canvassing. The three scenarios consider a range of stability in the address file as well as different workload estimates for in-field follow-up. At 30 percent, the Bureau would need to canvass about 40.2 million addresses; at 41.9 and 45.1 percent, the Bureau would need to canvass about 56 million and 60.4 million addresses, respectively. According to Bureau officials, they are continuing to assess whether changes to the Bureau’s in-office address canvassing procedures would be able to reduce the in-field address canvassing workload to 30 percent while at the same time maintaining address quality. However, Bureau officials did not provide us with documentation to show how the in-field address canvassing workload would be reduced because the proposed changes were still being reviewed internally. Workload for address canvassing directly affects cost: the greater the workload, the more people as well as laptop computers needed to carry out the operation. We found that the 30 percent workload threshold is what is reflected in the December 2017 updated 2020 Census cost estimate that was used to support the fiscal year 2019 budget request. Thus, if the 30 percent threshold is not achieved, then the in-field canvassing workload will likely increase for the 2020 Census and the Bureau would be at risk of exceeding its proposed budget for the address canvassing operation.
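The scenario figures above follow directly from the 133.8 million expected addresses and the reported percentages. The short script below is purely illustrative (it is not a Bureau tool); it uses only numbers reported in this section, and small differences from the cited figures reflect rounding:

```python
# Reproduce the in-field workload scenarios from the figures cited above.
# All inputs come from this report; the script itself is only an illustration.
TOTAL_ADDRESSES = 133.8e6  # expected 2020 Census addresses

def in_field_workload(share):
    """Addresses sent to in-field canvassing at a given workload share."""
    return TOTAL_ADDRESSES * share

for share in (0.30, 0.419, 0.451):
    millions = in_field_workload(share) / 1e6
    print(f"{share:.1%} in-field share -> about {millions:.1f} million addresses")
```

At 30 percent the computation yields roughly 40 million addresses; at 41.9 and 45.1 percent, roughly 56 million and 60 million, consistent with the scenarios in the Bureau's methodology document.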
Standards for Internal Control call for organizations to use quality information to achieve their objectives. Thus, continuing to evaluate and finalize workload estimates for in-field address canvassing with the most current information will help ensure the Bureau is well-positioned to conduct address canvassing for the 2020 Census. For example, according to Bureau officials, preliminary workload estimates will need to be delivered by January 2019 for hiring purposes, and the final in-field workload numbers for address canvassing will need to be determined by June 2019 for the start of address canvassing, which is set to begin in August 2019. Moreover, by February 2019 the Bureau’s schedule calls for it to determine how many laptops will be needed to conduct 2020 Census address canvassing. At the test sites, listers were substantially more productive than the Bureau expected. The expected production rate is defined as the number of addresses expected to be completed per hour, and it affects the cost of the address canvassing operation. This rate includes time for actions other than actually updating addresses, such as travel time. In the 2010 Census, the rates reflected different geographic areas, and the country was subdivided into three areas: urban/suburban, rural, and very rural. According to Bureau officials, for the 2020 Census the Bureau will have variable production rates based on geography, similar to the design used in the 2010 Census. The Bureau told us it has not finalized the 2020 Census address canvassing production rates. Table 2 shows the expected and actual productivity rates (addresses per hour) for the in-field address canvassing operation at all three test sites. To ensure address canvassing for the test was consistent with the 2020 Census, Bureau officials told us they included the blocks resolved during the now discontinued second phase of in-office review in the in-field workload for the test.
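Because the production rate converts workload into staff hours, it drives both hiring and laptop counts. The sketch below illustrates that arithmetic; it uses the 40.2 million address workload cited earlier, but the production rate and hours per lister are hypothetical placeholders of ours, not Bureau figures (the Bureau's actual rates in table 2 are not reproduced here):

```python
import math

# Staffing implied by workload and productivity: listers = workload / (rate * hours).
# The 40.2 million address workload comes from the report; the production rate
# and hours per lister below are hypothetical placeholders, not Bureau figures.
def listers_needed(workload, addresses_per_hour, hours_per_lister):
    """Listers required to finish the workload within the available hours."""
    return math.ceil(workload / (addresses_per_hour * hours_per_lister))

# e.g., a hypothetical 8 addresses per hour over roughly 180 working hours:
print(listers_needed(40.2e6, 8, 180))
```

The sketch makes concrete why both a higher-than-expected workload and a lower-than-expected production rate translate directly into more listers, more staff hours, and more laptops.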
The Bureau attributed the greater productivity to this discontinued second phase. Bureau officials told us that they believe listers spent less time updating those blocks because the blocks had already been resolved and any necessary changes had already been incorporated. Moreover, while benefitting from the second phase of in-office address canvassing may be one explanation for why listers were more productive, there could be other reasons as well, such as travel time and geography. Bureau officials told us that they are unable to evaluate the differences in expected versus actual productivity for blocks added to the workload as a result of the discontinued second phase because of limitations with the data. Standards for Internal Control require that organizations use quality information to achieve their objectives. Therefore, continuing to evaluate other factors from the 2018 End-to-End Test that may have increased or could potentially decrease productivity will be important for informing lister productivity rates for 2020, as productivity affects the number of listers needed to carry out the operation, the number of staff hours charged to the operation, and the number of laptops to be procured. For the 2018 End-to-End Test address canvassing operation, the Bureau hired fewer listers than it assumed it needed at two sites and hired more at the other site. In West Virginia, 60 percent of the required field staff was hired, and in Washington, 74.5 percent of the required field staff was hired. Nevertheless, the operation finished on schedule at both these sites. In contrast, in Rhode Island the Bureau hired 112 percent of the required field staff and finished early. According to Bureau officials, both the West Virginia and Washington state test sites started hiring field staff later than expected because of uncertainty surrounding whether the Bureau would have sufficient funding to open all three test sites for the 2018 End-to-End Test.
The decision to open all three sites for the address canvassing operation only came late, and Bureau officials told us that once they fell behind in hiring, they were never able to catch up because of low unemployment rates and the short duration of the operation. According to Bureau officials, their approach to hiring for the 2018 End-to-End Test was similar to that used for the 2010 and 2000 Censuses. In both censuses, the Bureau’s goal was to recruit and hire more workers than it needed because of immutable deadlines and attrition. After the 2010 Census, we reported that the Bureau had over-recruited; conversely, for the 2000 Census the Bureau had recruited in the midst of one of the tightest labor markets in three decades. Thus we recommended, and the Bureau agreed, to evaluate current economic factors that are associated with and predictive of employee interest in census work, such as national and regional unemployment levels, and to use these available data to determine the potential temporary workforce pool and adjust its recruiting approach. The Bureau implemented this recommendation and used unemployment and 2010 Census data to determine a base recruiting goal at both the Los Angeles, California, and Houston, Texas, 2016 census test sites. Specifically, the recruiting goal for Los Angeles was reduced by 30 percent. Bureau officials told us that the Bureau continues to gather staffing data from the 2018 End-to-End Test that will be important to consider looking forward to 2020. Although address canvassing generally finished on schedule even while short-staffed, Bureau officials told us they are carefully monitoring recruiting and hiring data to ensure they have sufficient staff for the test’s next census field operation, non-response follow-up, when census workers go door-to-door to follow up with housing units that have not responded. Non-response follow-up is set to begin in May 2018.
According to test data as of March 2018, the Bureau is short of its recruiting goal for this operation, which is being conducted in Providence County, Rhode Island. The Bureau’s goal is to recruit 5,300 census workers, and as of March 2018 the Bureau had recruited only 2,732 qualified applicants to fill 1,166 spots for training and deploy 1,049 census workers to conduct non-response follow-up. Bureau officials told us they believe that low unemployment is making it difficult to meet recruiting goals in Providence County, Rhode Island, but they are confident they will be able to hire sufficient staff without having to increase pay rates. Recruiting and retaining sufficient staff to carry out operations as labor-intensive as address canvassing and non-response follow-up for the 2020 Census is a huge undertaking with implications for cost and accuracy. Therefore, striking the right staffing balance for the 2020 Census is important for ensuring deadlines are met and costs are controlled. Bureau officials told us that during the test, 11 of 330 laptop computers did not properly transmit address and map data collected for 25 blocks. The lister-collected address file and map data are supposed to be electronically transmitted from the listers’ laptops to the Bureau’s data processing center in Jeffersonville, Indiana. The data are encrypted and remain on the laptop until the laptops are returned to the Bureau, where the encrypted data are deleted. Prior to learning that not all data had properly transmitted, data on seven of the laptops were deleted. Data on the remaining four laptops were still available. In Providence, Rhode Island, where the full test will take place, the Bureau recanvassed blocks where data were lost to ensure that the address and map information for non-response follow-up was correct. Recanvassing blocks increases costs and can lead to credibility problems for the Bureau when listers visit a home twice.
Going into address canvassing for the End-to-End Test, Bureau officials said they knew there was a problem with the LiMA software used to update the Bureau’s address lists and maps. Specifically, address and map updates would not always transfer when a lister transmitted completed work assignments from the laptop to headquarters. Other census surveys using LiMA had also encountered the same software problem. Moreover, listers were not aware that data had not transmitted because there was no system-generated warning. Bureau officials are working to fix the LiMA software problem but told us that, because the problem has been persistent across other census surveys that use LiMA, they are not certain it will be fixed. Bureau officials told us that prior to the start of address canvassing they created an alert report to notify Bureau staff managing the operation at headquarters if data were not properly transmitted. When transmission problems were reported, staff were supposed to remotely retrieve the data that were not transmitted. This workaround was designed to safeguard the data but, according to officials, was not used. Bureau officials told us that they do not know whether this was because responsible staff did not view the alert reports or because the alert reports were never triggered. Bureau officials told us they recognize the importance of following procedures to monitor alert reports, and they acknowledge that the loss of data on seven of the laptops may have been avoided had the procedures requiring that alert reports be triggered and monitored been followed; however, officials did not know why the procedures were not followed. For 2020, if the software problem is not resolved, officials said the Bureau plans to create two new alert reports to monitor the transmission of data.
One report would be triggered when the problem occurs, and a second report would capture a one-to-one match between data on the laptop and data transmitted to the data center so that discrepancies would be immediately obvious. While these new reports should help ensure that Bureau staff are alerted when data have not properly transmitted, the Bureau has not determined and addressed why the procedures requiring that an alert report be triggered and then reviewed by Bureau staff did not work as intended. Standards for Internal Control require that organizations safeguard data and follow policies and procedures to achieve their objectives. Thus, either fixing the LiMA software problem or, if the software problem cannot be fixed, determining and addressing why the procedures requiring that alert reports be triggered and monitored were not followed would position the Bureau to help prevent future data losses. To effectively manage address canvassing, the Bureau needs to be able to monitor the operation’s progress in near real time. Operational issues, such as listers not working assigned hours or falling behind schedule, need to be resolved quickly because of the tight time frames of the address canvassing and subsequent operations. During the address canvassing test, the Bureau encountered several challenges that hindered its efforts to efficiently monitor lister activities as well as the progress of the address canvassing operation. The Bureau provides data-driven tools for census field supervisors (CFS) to manage listers, including system alerts that identify issues that require the supervisor to follow up with a lister. For the address canvassing operation, the system could generate 14 action codes that covered a variety of operational issues, such as unusually high or low productivity (which may be a sign of fraud or failure to follow procedures), and administrative issues, such as compliance with overtime and completion of expense reports and timecards.
During the operation, over 8,250 alerts were sent to CFSs, or about 13 alerts per day per CFS. Each alert requires the CFS to take action and then record how the alert was resolved. CFSs told us and the Bureau during debriefing sessions that they believed many of the administrative alerts were erroneous, and they dismissed them. For example, during our site visit, one CFS showed us an alert that incorrectly identified that a timecard had not been completed. The CFS then showed us that the lister’s timecard had indeed been properly completed and submitted. CFSs we spoke to said that they often dismissed alerts related to expense reports and timecards and did not pay attention to them or manage them. Bureau officials reported that one CFS was fired for not using the alerts to properly manage the operation. To assist supervisors, these alerts need to be reliable and properly used. Bureau officials said that they examined alerts for errors after we told them about our observation. They reported that they did not find any errors in the alerts. They believe that CFSs may not fully understand that alerts stay active until they are marked as resolved by the CFS. For example, if a CFS gets an alert that a lister has not completed a timecard, the alert will remain active until the CFS resolves it by stating that the timecard was completed. The Bureau’s current CFS manual does not explain that, by the time a CFS sees an alert, a lister may have already taken the action needed to resolve it. Because this was a recurring situation, CFSs told us they had a difficult time managing the alerts. Standards for Internal Control call for an agency to use quality information to achieve objectives. Bureau officials acknowledge that it is a problem that some CFSs view the alerts as erroneous and told us they plan to address the importance of alerts in training.
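The alert behavior the Bureau described can be illustrated with a minimal model: an alert stays active until the CFS explicitly resolves it, even when the lister has already fixed the underlying issue, so an alert can look "erroneous" to a CFS without being so. The class names and structure below are hypothetical sketches of ours, not the Bureau's actual system:

```python
# Illustrative model of the alert lifecycle described above. An alert remains
# active until the CFS records a resolution, even if the underlying issue
# (e.g., a missing timecard) has already been fixed by the lister.
# This is a hypothetical sketch, not the Bureau's actual alert system.
class Alert:
    def __init__(self, description):
        self.description = description
        self.active = True          # stays True until a CFS resolves it
        self.issue_fixed = False    # the lister may fix the issue independently

    def lister_fixes_issue(self):
        # Fixing the issue does NOT clear the alert...
        self.issue_fixed = True

    def cfs_resolves(self, note):
        # ...only a CFS action does, by recording how it was resolved.
        self.active = False
        self.resolution_note = note

alert = Alert("timecard not completed")
alert.lister_fixes_issue()
print(alert.active)   # still True: the alert now looks stale to the CFS
alert.cfs_resolves("timecard was completed")
print(alert.active)   # False only after the CFS records the resolution
```

This timing gap, rather than a system error, is one plausible reason the timecard alert the CFS showed us appeared incorrect by the time it was reviewed.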
We spoke to Bureau officials about making the alerts more useful to CFSs, such as by differentiating between critical and noncritical alerts and by streamlining alerts, perhaps by combining some of them. Bureau officials told us they would monitor the alerts during the 2018 End-to-End Test’s non-response follow-up operation and make adjustments if appropriate. However, while the Bureau told us it will monitor alerts for the non-response follow-up operation, the Bureau does not have a plan for how it will examine and make alerts more useful. Ensuring alerts are properly followed up on is critical to the oversight and management of an operation. If CFSs view the alerts as unreliable, they are likely to miss key indicators of fraud, such as unusually high or low productivity or an unusually high or low number of miles driven. Moreover, monitoring overtime alerts and the submission of daily timecards and expense reports is also important to ensure that overtime is appropriately approved before it is worked and that listers get paid on time. Another tool the Bureau uses to monitor operations is its Unified Tracking System (UTS), a management dashboard that combines data from a variety of Census systems, bringing the data to one place where users can run or create reports. It was designed to track metrics such as the number and percentage of blocks assigned and blocks completed, as well as the actual expenditures of an operation compared to the budgeted expenditures. However, information in UTS was not always accurate during address canvassing. For example, UTS did not always report the correct number of addresses assigned and completed by site. As a result, Bureau managers reported they did not rely on UTS and instead used data from the source systems that fed into it. Bureau officials agreed that inaccurate data are a problem and that this workaround was inefficient, as users had to take extra time to go to multiple systems to get the correct data.
Bureau officials reported problems importing information from the feeder systems into UTS because of data mismatches. They said that address canvassing event codes were not processed sequentially, as they should have been, which led to inaccurate reporting. Bureau officials told us that they did not specify that the codes needed to be processed in chronological order as part of the requirements for UTS. Bureau officials said UTS passed the requisite readiness reviews and tests. However, Bureau officials also acknowledged that some of these problems could have been caught by exception testing, which was not done prior to production. To resolve this issue for 2020, Bureau officials stated they are developing new requirements for UTS to automatically consider the chronological order of event codes. The Bureau told us it is working on these UTS requirements and will provide us with documentation when they are complete. Bureau officials also said the Bureau plans to implement a process that compares field management reports with UTS reports to help ensure that the reports use the same definitions and are reporting accurate information. Standards for Internal Control call for an organization’s data to be complete and accurate and to be processed into quality information to achieve objectives. Thus, finalizing UTS requirements for the address canvassing reporting should help increase efficiency for the 2020 Census by avoiding time-consuming workarounds. The Bureau has taken significant steps to use technology to reduce census costs. These steps include using electronic systems to transmit listers’ assignments and address and map data. However, during the address canvassing test, several listers and CFSs at the three test sites experienced problems with Internet connections, primarily during training. The West Virginia site, which was more rural than the other sites, experienced the most problems with Internet connectivity.
All six West Virginia CFSs reported Internet connectivity problems during the operation. As a workaround, CFSs told us that a couple of their listers transmitted their work assignments from libraries where they could access the Internet. Bureau officials stated that the laptops in the 2018 End-to-End Test only used two broadband Internet service providers, which may have contributed to some of the Internet access issues. Bureau officials added that despite the reported Internet connectivity issues, the 2018 End-to-End Test for address canvassing finished on schedule and without any major problems. While this might be true for the test, we have previously reported that minor problems can become big challenges when the census scales up to the entire nation. Therefore, it is important that these issues get resolved before August 2019, when in-field address canvassing for the 2020 Census is set to begin. The Bureau is analyzing the cellular network coverage across all 2020 Census areas using coverage maps and other methods to determine which carrier is appropriate (including a backup carrier) for geographic areas where network coverage is limited. According to Bureau officials, they anticipate identifying the cellular carriers for each of the Bureau’s 248 area census offices by the summer of 2018. The officials said they are considering both national and regional carriers to provide service in some geographic areas because the best service provider in a certain geographic area may not be one of the national providers but a regional provider. In those cases, listers and other staff in those areas will receive devices with the regional carrier. According to Bureau officials, for the 2020 Census, the ability to access multiple carriers should provide field staff with better connectivity around the country. We also found that there was no guidance for listers and CFSs on what to do if they experienced Internet connectivity problems and were unable to access the Internet.
Bureau officials told us that staff in the field can use different methods to access the Internet, such as home wireless networks or mobile hotspots located at libraries or coffee shops, to transmit data. However, the Bureau did not provide such instructions to listers. In addition, the Bureau also does not define what constitutes a secure public Internet connection. Ensuring data are safeguarded is important because census data are confidential. Bureau officials told us that the Bureau plans to provide instructions to field staff on what to do if they are unable to access census systems and on what constitutes a secure Internet connection for the next 2018 End-to-End Test field operation, non-response follow-up. However, the Bureau has not finalized or documented these instructions. Standards for Internal Control call for management to design control activities, such as providing instructions to employees, to achieve objectives. Finalizing these instructions to field staff will help ensure listers have complete information on how to handle problems with Internet connectivity and that data are securely transmitted. Some listers had difficulty accessing the Internet to take online training for address canvassing. This is the first decennial census for which the Bureau is using online training; in previous decennials, training was instructor-led in a classroom. According to the Bureau, in addition to the Bureau-provided laptop, listers also needed a personal home computer or laptop and Internet access at home in order to complete the training. However, while the Bureau reported that listers had access to a personal computer to complete the training, we found some listers did not have access to the Internet at home and were forced to find workarounds to access the training. According to American Community Survey data from 2015, 77 percent of all households had a broadband Internet subscription.
Bureau officials told us they are aware that not all households have access to the Internet and that the Bureau’s field division is working on back-up plans for accessing online training. Specifically, Bureau officials told us that for 2020 they plan to identify areas of the country that could potentially have connectivity issues and to identify alternative locations, such as libraries or community centers, where Internet connections are available to ensure all staff have access to training. However, they have not finalized those plans to identify locations for training sites. Standards for Internal Control call for management to design control activities, such as having plans in place to achieve objectives. Finalizing these plans to identify alternative training locations will help ensure listers have a place to access training. The Bureau’s re-engineered approach for address canvassing shows promise for controlling costs and maintaining accuracy. However, the address canvassing operation in the 2018 End-to-End Test identified the need to reexamine assumptions and make some procedural and technological improvements. For example, at a time when plans for in-field address canvassing should be almost finalized, the Bureau is in the process of evaluating workload and productivity assumptions to ensure sufficient staff are hired and that enough laptop computers are procured. Moreover, Bureau officials have not finalized (1) procedures for reassigning work from one lister to another to prevent the unnecessary duplication of work assignments, (2) instructions for using the Internet when connectivity is a problem to ensure listers have access to training and the secure transmission of data to and from the laptops, and (3) plans for alternate training locations.
To ensure address and map data are not lost during transmission, Bureau officials will also need to either (1) fix the problem with the LiMA software used to update the address and map files or (2) determine and address why procedures requiring that alert reports be triggered and monitored were not followed. Finally, the Bureau has made progress in using data-driven technology to manage address canvassing operations. However, ensuring that the data used by supervisors to oversee and monitor operations are both useful and accurate will help field supervisors take appropriate action to address supervisor alerts and will help managers monitor the real-time progress of the address canvassing operation. With little time remaining, it will be important to resolve these issues. Making these improvements will better ensure that address canvassing for the actual enumeration, beginning in August 2019, fully functions as planned and achieves desired results. We are making the following seven recommendations to the Department of Commerce and the Census Bureau: The Secretary of Commerce should ensure the Director of the U.S. Census Bureau continues to evaluate and finalize workload estimates for in-field address canvassing, evaluates the factors that affected productivity rates during the 2018 End-to-End Test, and, if necessary, makes changes to workload and productivity assumptions before the 2020 Census in-field address canvassing operation to help ensure that the assumptions that affect staffing and the number of laptops to be procured are accurate. (Recommendation 1) The Secretary of Commerce should ensure the Director of the U.S. Census Bureau finalizes procedures for reassigning blocks to prevent the duplication of work. (Recommendation 2) The Secretary of Commerce should ensure the Director of the U.S. Census Bureau finalizes backup instructions for the secure transmission of data when the Bureau’s contracted mobile carriers are unavailable. 
(Recommendation 3) The Secretary of Commerce should ensure the Director of the U.S. Census Bureau finalizes plans for alternate training locations in areas where Internet access is a barrier to completing training. (Recommendation 4) The Secretary of Commerce should ensure the Director of the U.S. Census Bureau takes action to either fix the software problem that prevented the successful transmission of data or, if that cannot be fixed, determine and address why procedures requiring that alert reports be triggered and monitored were not followed. (Recommendation 5) The Secretary of Commerce should ensure the Director of the U.S. Census Bureau develops a plan to examine how to make CFS alerts more useful so that CFSs take appropriate action, including alerts a CFS determines are no longer valid because of timing differences. (Recommendation 6) The Secretary of Commerce should ensure the Director of the U.S. Census Bureau finalizes UTS requirements for address canvassing reporting to ensure that the data used by census managers who are responsible for monitoring real-time progress of address canvassing are accurate before the 2020 Census. (Recommendation 7) We provided a draft of this report to the Department of Commerce. In its written comments, reproduced in appendix I, the Department of Commerce agreed with our recommendations. The Census Bureau also provided technical comments that we incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Commerce, the Under Secretary of Economic Affairs, the Acting Director of the U.S. Census Bureau, and interested congressional committees. The report will also be available at no charge on GAO’s website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2757 or goldenkoffr@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Lisa Pearson, Assistant Director; Kate Wulff, Analyst-in-Charge; Mark Abraham; Devin Braun; Karen Cassidy; Robert Gebhart; Richard Hung; Kirsten Lauber; Krista Loose; Ty Mitchell; Kayla Robinson; Kate Sharkey; Stewart Small; Jon Ticehurst; and Timothy Wexler made key contributions to this report.
The success of the decennial census depends in large part on the Bureau's ability to locate every household in the United States. To accomplish this monumental task, the Bureau must maintain accurate address and map information for every location where a person could reside. For the 2018 End-to-End Test, census workers known as listers went door-to-door to verify and update address lists and associated maps in selected areas of three test sites—Bluefield-Beckley-Oak Hill, West Virginia; Pierce County, Washington; and Providence County, Rhode Island. GAO was asked to review in-field address canvassing during the End-to-End Test. This report determines whether key address listing activities functioned as planned during the End-to-End Test and identifies any lessons learned that could inform pending decisions for the 2020 Census. To address these objectives, GAO reviewed key documents, including test plans and training manuals, as well as workload, productivity, and hiring data. At the three test sites, GAO observed listers conducting address canvassing. The Census Bureau (Bureau) recently completed in-field address canvassing for the 2018 End-to-End Test. GAO found that field staff known as listers generally followed procedures when identifying and updating the address file; however, some address blocks were worked twice by different listers because the Bureau did not have procedures for reassigning work from one lister to another while listers worked offline. Bureau officials told GAO that they plan to develop procedures to avoid duplication, but these procedures have not been finalized. Duplicating work decreases efficiency and increases costs. GAO also found differences between actual and projected data for workload, lister productivity, and hiring. For the 2020 Census, the Bureau estimates it will have to verify 30 percent of addresses in the field. However, at the test sites, the actual workload ranged from 37 to 76 percent of addresses. 
Bureau officials told GAO the 30 percent was a nationwide average and not site specific; however, the Bureau could not provide documentation to support the 30 percent workload estimate. At all three test sites, listers were significantly more productive than expected, possibly because a design change provided better-quality address and map data in the field, according to the Bureau. Hiring, however, lagged behind Bureau goals. For example, at the West Virginia site, hiring was only at 60 percent of its goal. Bureau officials attributed the shortfall to a late start and low unemployment rates. Workload and productivity affect the cost of address canvassing. The Bureau has taken some steps to evaluate factors affecting its estimates, but continuing to do so would help the Bureau refine its assumptions to better manage the operation's cost and hiring. Listers used laptops to connect to the Internet and download assignments. They worked offline and went door-to-door to update the address file, then reconnected to the Internet to transmit their completed assignments. Bureau officials told GAO that during the test, 11 of 330 laptops did not properly transmit address and map data collected for 25 blocks. Data were deleted on 7 laptops. Because the Bureau had known there was a problem with the software used to transmit address data, it created an alert report to notify Bureau staff if data were not properly transmitted. However, Bureau officials said that either responsible staff did not follow procedures to look at the alert reports or the reports were not triggered. The Bureau is working to fix the software problem and develop new alert reports but has not yet determined and addressed why these procedures were not followed. The Bureau's data management reporting system did not always provide accurate information because of a software issue. The system was supposed to pull data from several systems to create a set of real-time cost and progress reports for managers to use. 
Because the data were not accurate, Bureau staff had to rely on multiple systems to manage address canvassing. The Bureau agreed that not only is inaccurate data problematic, but that creating workarounds is inefficient. The Bureau is developing new requirements to ensure data are accurate, but these requirements have not been finalized. GAO is making seven recommendations to the Department of Commerce and the Bureau, including to: (1) finalize procedures for reassigning work, (2) continue to evaluate workload and productivity data, (3) fix the software problem or determine and address why procedures were not followed, and (4) finalize report requirements to ensure data are accurate. The Department of Commerce agreed with GAO's recommendations, and the Bureau provided technical comments that were incorporated, as appropriate.
This section describes DOE’s tank waste treatment approach at Hanford and DOE’s quality assurance framework and requirements. Cleanup of the Hanford Site is governed by two main compliance agreements: (1) the 1989 Hanford Federal Facility Agreement and Consent Order, or Tri-Party Agreement, an agreement between DOE, the Washington State Department of Ecology, and the Environmental Protection Agency, and (2) a 2010 consent decree. The Tri-Party Agreement was signed in May 1989 and lays out a series of legally enforceable milestones for completing major activities in Hanford’s waste treatment and cleanup process. The Tri-Party Agreement has been amended a number of times to establish additional enforceable milestones for certain WTP construction and tank waste retrieval activities, among other things. Under the Tri-Party Agreement, DOE must complete waste treatment at the Hanford Site by 2047. The overall mission of the WTP is to treat and immobilize a large portion of the 54 million gallons of radioactive and chemical waste stored in 177 underground storage tanks. The WTP is the most technically complex and largest construction project within DOE’s Office of Environmental Management, occupying 65 acres of the Hanford Site. Some of DOE’s tank waste is highly radioactive material—known as high-level waste—mixed with hazardous waste. Under current law, this waste must be vitrified—a process in which the waste is immobilized in glass—prior to disposal. Low-activity waste is DOE’s term for the portion of the tank waste at Hanford with low levels of radioactivity. Low-activity waste is primarily the liquid portion of the tank waste that remains after as much radioactive material as technically and economically practical has been removed. The WTP consists of the following set of facilities that are designed to separate waste into low-activity and high-level waste streams and, once completed, treat these waste streams in separate facilities using vitrification. 
Pretreatment Facility. This facility is to receive the waste from the tanks and separate it into high-level and low-activity waste streams. Low-Activity Waste Facility. This facility is to receive the low-activity waste from the Pretreatment Facility and immobilize it by vitrification. The canisters of vitrified waste will be permanently disposed of at another facility at Hanford. High-Level Waste Facility. This facility is to receive the high-level waste from the Pretreatment Facility and immobilize it by vitrification. The canisters of vitrified waste will be stored on-site until a final repository is established. Effluent Management Facility. The Effluent Management Facility is being built to evaporate much of the secondary waste produced during low-activity waste processing and vitrification at the Low-Activity Waste Facility. Analytical Laboratory. This facility will conduct analyses as needed, such as testing samples of the vitrified waste to ensure that it meets certain criteria and regulatory requirements for disposal. Balance of Facilities. These facilities consist of the 22 support facilities that make up the plant infrastructure, such as cooling water systems and silos that hold vitrifying materials. In part because of the 2012 work stoppage at the WTP’s Pretreatment and High-Level Waste Facilities, DOE adopted a phased waste treatment strategy that year through which the department aims to begin treating some of the low-activity waste before resolving all WTP technical issues. During the first phase of this strategy, DOE plans to implement a Direct Feed Low-Activity Waste (DFLAW) approach to transfer some low-activity waste from the tanks to the WTP’s Low-Activity Waste Facility for vitrification before the Pretreatment Facility is completed. 
The approach relies on construction of a new facility—the Low-Activity Waste Pretreatment System—designed to remove highly radioactive particles from liquid tank waste before sending the waste stream to the Low-Activity Waste Facility. During later phases, DOE intends to complete the WTP’s Pretreatment and High-Level Waste Facilities. DOE also plans to construct a Tank Waste Characterization and Staging Facility under a different contract to stage, mix, sample, and characterize high-level waste from the tanks prior to delivery to the Pretreatment Facility. Figure 1 illustrates the WTP and other facilities planned for Hanford tank waste treatment. A set of federal regulations, DOE orders, and ORP procedures collectively make up DOE’s quality assurance framework, which aims to ensure that all WTP quality assurance problems can be identified and that identified problems do not recur. DOE’s quality assurance regulations require DOE contractors to establish DOE-approved quality assurance programs. The regulations specify that under an approved program, the contractor’s quality assurance program must, among other things, (1) establish and implement processes to detect and prevent quality problems; (2) identify, control, and correct items, services, and processes that do not meet established requirements; (3) procure items and services that meet established requirements and perform as specified; (4) plan and conduct independent assessments to measure item and service quality, to measure the adequacy of work performance, and to promote improvement; and (5) maintain items to prevent damage, loss, or deterioration. In addition, DOE Order 226.1B requires that DOE’s organizations and contractors implement oversight processes that ensure that relevant quality assurance problems are evaluated and corrected on a timely basis to prevent recurrence. The WTP contract requires compliance with these regulations and requirements. 
The WTP contract specifies that as the owner of the WTP project, DOE is responsible for providing quality assurance oversight of the WTP. ORP’s Quality Assurance Division provides such oversight, for example, by doing the following: Reviewing a sampling of the contractor’s documentation on the WTP’s engineering, procurement, and construction. Conducting audits and assessments to ensure that the contractor’s work complies with applicable quality assurance requirements. Assessing the effectiveness of the contractor’s Corrective Action Management Program, which involves identifying, documenting, planning, addressing, and tracking actions required to resolve or correct problems. Both the contractor’s and ORP’s quality assurance programs require that corrective actions to address significant problems with the quality of the work include a determination of the extent to which the problematic conditions exist (known as an extent-of-condition review) as well as the underlying causes of those conditions. If corrective actions do not address the conditions, ORP’s quality assurance policy allows the office to call for a suspension of work. ORP’s stop work procedure includes the process ORP is to follow when the Quality Assurance Division Director, in consultation with ORP management, determines that work needs to be suspended as a result of the occurrence or recurrence of significant quality assurance problems. ORP updated this procedure in February 2016 to describe the type of quality assurance deficiencies that should trigger consideration of work stoppage. According to the updated procedure, characteristics of a deficiency that can trigger an order to stop work include, but are not limited to, problems that will result in $25 million or more in loss of productivity, construction rework, or environmental damage, or a significant quality problem that, if left uncorrected, can result in construction delays or create adverse safety conditions. 
Until February 2016, ORP did not have precise criteria describing the conditions under which it should evaluate work for possible stoppage, according to a DOE headquarters report. ORP has taken several actions to identify and address quality assurance problems at the WTP, but not all planned actions have been completed. In 2013, ORP conducted a comprehensive audit, which resulted in several actions, including having the contractor begin implementing a Managed Improvement Plan (MIP) in 2014. The MIP is intended to ensure that the WTP could operate in compliance with DOE-approved safety and quality requirements. Implementation of the MIP was to be completed by April 2016. Although the contractor reported that the implementation was complete, some of the plan’s corrective measures have not been fully implemented, according to contractor documents we reviewed and quality assurance experts we spoke with. In addition, ORP’s effort to verify the extent to which the contractor has implemented MIP corrective measures is not scheduled to be complete until at least December 2018. ORP has taken several actions to identify and address quality assurance problems at the WTP. After the partial work stoppage in 2012, ORP conducted an audit in 2013 to evaluate the adequacy, implementation, and effectiveness of the contractor’s quality assurance program. The audit found that the contractor’s quality assurance program was generally adequate. However, it also found that the contractor’s quality assurance program was not fully effective in several areas. In response to the audit, ORP and the WTP contractor took the following actions: Developed compensatory measures. At ORP’s request, in 2013, the contractor started implementing “compensatory measures” to ensure that ongoing WTP work during a 2-year performance improvement period would meet DOE quality and safety requirements. 
For example, in September 2013, the contractor implemented a measure requiring senior management review of all condition reports and their associated levels of significance. According to ORP officials, the compensatory measures were intended to be additional, temporary internal controls to ensure that work at the WTP did not result in new or recurring quality assurance problems. Initiated the MIP. To systematically integrate compensatory measures, the contractor developed the MIP to address all quality assurance problems identified in the two Priority Level One findings and the seven Priority Level One findings associated with engineering and nuclear safety. In August 2014, the contractor started implementing the MIP. The MIP is a set of 52 corrective measures intended to establish processes, procedures, and metrics to produce an overall quality program that ensures that the WTP can safely operate in compliance with DOE-approved nuclear safety requirements, according to the contractor. The measures include the following: Actions to enhance external independent oversight. This measure calls for the contractor to conduct assessments using external subject matter experts to evaluate the ability of the contractor’s quality assurance program to identify precursors to potential problems and their causes. This measure responds to the 2013 audit in which DOE concluded that the contractor’s quality assurance program could not ensure compliance with requirements. Specifically, the audit found that the contractor’s quality assurance program was not fully effective in several areas, including, but not limited to, design, software quality, procurement, and ensuring that identified problems are corrected. Actions to ensure that procured items and services meet requirements and perform as specified. This measure is intended to ensure that the contractor’s processes and procedures to identify and ensure the quality of technical products meet requirements. 
The nuclear industry uses “commercial grade dedication” to refer to the process by which the contractor or subcontractor verifies that an item (e.g., an electric switch) or service (e.g., design of an electrical system) can meet commercial quality and safety requirements and be approved for use in a nuclear facility. It requires the contractor to perform source verification, perform inspections and tests, and assess the processes that control the quality of purchased items and services to help ensure that critical components of procured items and services are designed, fabricated, assembled, installed, and tested with appropriate documentation to support their compliance with WTP safety requirements. This measure also responds to DOE’s 2013 audit, which found that the contractor had inadequate control over the quality of purchased items and services. Actions to control and correct items and processes that do not meet requirements. This measure is intended to allow the contractor to identify and ensure that materials and equipment that have been received, and that will be received in the future, meet requirements. The contractor is to conduct comprehensive reviews of previously received material and equipment, as well as all future deliveries, to help ensure the verification, accuracy, and completeness of documentation for materials and equipment received from suppliers. This measure also responds to DOE’s 2013 audit, which found that the contractor had received components that did not comply with safety requirements. Performed targeted audits to test compensatory measures and the implementation of the MIP. To assess the effectiveness of the compensatory measures and the MIP, ORP performed targeted audits. 
For example, to assess the extent to which the contractor has addressed quality assurance program deficiencies, in early 2017 ORP’s Quality Assurance Division conducted a “vertical slice audit.” This audit reviewed engineering, procurement, and construction of a key system that will be needed for initial WTP operations. Because of the long-standing quality assurance problems at the WTP, DOE required ORP to closely monitor the contractor’s implementation of the MIP. Specifically, as a result of a DOE Office of Enforcement investigation into the contractor’s quality assurance and corrective action management programs, DOE entered into a Consent Order with the contractor in 2015. The Consent Order required the contractor to complete the actions identified in the MIP to the extent necessary to restore the quality assurance program to full effectiveness by April 30, 2016. The Consent Order does not preclude DOE from reopening the investigation or issuing an enforcement action if there is a recurrence of nuclear safety deficiencies similar to those identified in the Consent Order or if the contractor fails to complete actions required by the Consent Order in a timely and effective manner to prevent recurrence of the identified issues. The contractor has not fully implemented corrective measures for all identified quality assurance problems, according to contractor documents we reviewed. In August 2017, the contractor reported that it had finished its actions to implement the MIP. However, according to the contractor’s MIP status update accompanying the contractor’s report, 13 of the 52 corrective measures specified in the MIP had not been fully implemented. In our review of these 13 MIP corrective measures, we found that 9 were intended to exclusively or partially address weaknesses in the contractor’s quality assurance program. 
For example, the two corrective measures to ensure that WTP facilities’ computer software meets requirements were not complete, according to the MIP status update. These corrective measures included improving the software procurement process and revising the quality assurance manual. In addition, of the 39 measures that the contractor considers complete, some do not appear to be fully implemented, according to one ORP quality assurance expert we spoke with. For example, this expert disagreed with the contractor’s assessment that a corrective measure for documentation pertaining to radiographic film—which is needed for conducting quality assurance reviews of certain equipment—was fully implemented. This corrective measure calls for the contractor to review purchase orders for radiographic film and then store the radiographic film as documentation of compliance with nuclear quality standards. According to the expert, radiographic film reviews are still not consistently conducted, and radiographic film documentation is still not consistently stored. In cases where such documentation is incomplete or missing, the contractor is at times forced to re-create the documentation at considerable cost to DOE. According to ORP’s MIP oversight plan, it will take the office until at least December 2018 to verify the extent to which the contractor has implemented each of the 52 MIP corrective measures. According to DOE documents we reviewed and ORP quality assurance experts we spoke with, ORP’s actions have not ensured that all quality assurance problems have been identified at the WTP, and some previously identified problems are recurring. 
Specifically, according to DOE documents and the experts we spoke with, ORP’s oversight has not ensured that the contractor has identified all quality assurance problems in structures, systems, and components that were completed and installed before the 2012 work stoppage or identified all such problems in newer structures, systems, and components needed for initial WTP operations. In addition, according to the documents we reviewed and experts we interviewed, previously identified quality assurance problems are recurring. Recent DOE reviews have found that ORP has not ensured that all quality assurance problems have been identified at the WTP. First, a 2016 DOE Office of Enterprise Assessment report found quality assurance deficiencies that neither ORP nor the contractor had identified at the time the work was conducted. The report identified numerous construction deficiencies, procurement and supplier deficiencies, engineering errors, maintenance issues, and materials with expired shelf lives. For example, the report identified welding deficiencies on tanks designed to hold nuclear waste that were identified in a WTP facility several years after the tanks were installed. The report concluded that the contractor is aware that significant quality assurance problems likely exist in older structures, systems, and components. This report noted that much of the equipment in older structures, systems, and components was manufactured and delivered to the project from 5 to 10 years ago—and some of this equipment was supplied by vendors or manufacturers that are no longer in business—which could lead to costly rework. Second, a 2015 DOE Inspector General report found that the contractor had procured $4 billion in parts and materials through fiscal year 2014, but ORP and the contractor had not always identified problems with the quality of procured items in a timely manner. 
For example, the report found that in about 45 percent of the nearly 1,400 procurement problems reviewed, the contractor did not identify the problems until at least 2 years after the items arrived on site. The report also found that in many cases the contractor canceled its efforts to recover the costs to resolve the problems because of the length of time that had passed. The report concluded that these problems were caused by weaknesses in the contractor’s quality assurance program and that the contractor’s procedures to prevent or identify problems with procured items were not always followed effectively. The findings of these reports are consistent with the views of ORP quality assurance experts we spoke with who stated that ORP oversight has not ensured that the contractor has identified all quality assurance problems in structures, systems, and components—particularly those that were completed and installed before the 2012 work stoppage. These quality assurance experts said that because quality assurance problems have not been identified, they expect significant rework will be needed for work that was completed before 2012. Specifically, most of the ORP quality assurance experts (seven of the nine) told us that they expect rework will be needed for existing WTP facilities, such as the Pretreatment and High-Level Waste Facilities. One of these seven quality assurance experts noted that the contractor does not have a complete record of the documentation for key systems and equipment, which is required for demonstrating compliance with nuclear safety standards and eventual permitting of WTP facilities for operation. According to this expert, the extent of this shortcoming is not known, but fixing it—that is, creating a complete record of required documentation—may lead to years of delays. 
ORP Quality Assurance Division officials told us that because ORP’s focus is on ensuring that facilities needed for initial operations will be ready to operate by December 2023, they have not been directed by ORP management to focus on identifying all quality assurance problems for work completed before 2012 for facilities needed for later phases of WTP operations, such as structures, systems, and components of the Pretreatment and High-Level Waste Facilities. In addition, they stated that there may be significant changes to these facilities needed for the WTP’s later phases, making it unnecessary for them to review the extent of quality assurance problems until it is known what parts of the facilities will remain and which parts will not. However, similar problems appear to exist in WTP facilities needed for initial operations. ORP quality assurance experts that we interviewed also stated that ORP oversight has not always ensured that all quality assurance problems in facilities needed for the initial WTP operations, or DFLAW, have been identified. Five experts told us that issues such as identifying problematic items, services, and processes had not been fully resolved. Specifically, these ORP quality assurance experts told us that when quality assurance problems are identified in structures, systems, or components needed for DFLAW, ORP does not always ensure that the contractor identifies the extent to which such problems may exist in other areas affected by the same structures, systems, or components. For example, an ORP quality assurance expert cited an instance in which an ORP quality assurance team reviewed a sample of 25 procurement “packages” (out of thousands) for a DFLAW facility and identified 143 problems—significantly more problems than the team expected for such a small sample. 
Consistent with ORP quality assurance requirements, this ORP quality assurance expert recommended to ORP upper management that the contractor determine the extent to which such problems could affect other structures, systems, and components needed for DFLAW. However, according to an ORP memo, ORP upper management did not require the contractor to implement this recommendation, instead citing “extenuating circumstances” and requiring a lesser corrective action than what was recommended. Three ORP quality assurance experts told us that they believe that because problems have not been comprehensively assessed, there may be equipment and systems within DFLAW that will fail to meet their intended functions. We also found that although ORP conducted its vertical slice audit in 2017 to test its compensatory measures and the MIP to improve quality assurance, the audit report notes that it was focused on only one system within the Low-Activity Waste Facility. According to ORP officials, there are numerous structures, systems, and components in facilities needed for DFLAW that have not been audited or reviewed in a manner similar to the vertical slice audit. Both the contractor’s and ORP’s quality assurance programs require that corrective actions to address significant problems with the quality of the work include a determination of the extent to which the problematic conditions exist as well as the underlying causes of those conditions. Until ORP requires the WTP contractor to determine the full extent to which problems exist in all WTP structures, systems, and components, DOE lacks a comprehensive understanding of all potential quality assurance problems at all WTP facilities. DOE requires its program offices, such as ORP, and contractors to have oversight processes to ensure that quality assurance problems are evaluated and corrected on a timely basis to prevent recurrence. 
However, several DOE documents we reviewed show that previously identified quality assurance problems have recurred in recent years, including the following: In 2015, an ORP audit report identified recurring weaknesses in quality assurance for the contractor’s process for procuring commercial items for use in a nuclear facility. For example, ORP found that the contractor’s internal controls for this process were not consistently performed; did not consistently comply with procedural requirements; and, in many cases, did not establish reasonable assurance that procured systems, services, and components acquired from 2010 to 2014 would perform their intended safety functions. In a 2015 report on the design and operability of key systems and components for the Low-Activity Waste Facility, ORP found that the quality of computer systems software was not in full compliance with DOE requirements, leading to conditions where personnel and the environment may not be adequately protected. ORP had identified a similar problem in 2008, when it found that the contractor’s computer programs used in engineering calculations were not always verified to show that they produced correct solutions within defined limits for all parameters, as required by the contractor’s quality assurance manual. ORP had also previously identified WTP computer software quality problems in 2010 when it issued a Priority Level Two finding on software procedures and another Priority Level Two finding on software testing. In 2017, ORP’s Quality Assurance Division issued a report that examined the contractor’s quality assurance program and found problems in quality assurance areas that had been previously identified. The report noted that in 6 of 19 quality assurance program areas, the contractor’s performance was marginal—and in need of improvement—or indeterminate. 
These 6 areas included identifying, controlling, and correcting items, services, and processes that do not meet established requirements; maintaining items to prevent damage, loss, or deterioration; and procuring items and services that meet established requirements and perform as specified. ORP quality assurance experts that we spoke with also stated that previously identified quality assurance problems are recurring, including some in areas where the contractor had implemented corrective measures. These quality assurance experts told us that quality assurance problems are recurring in several key areas, including those areas identified in the documents described above: (1) procurement of items and services that do not meet established requirements or perform as specified; (2) software that does not meet established requirements; and (3) a maintenance program that does not prevent damage, loss, or deterioration of WTP structures, systems, and components. For example, see the following. Procurement of items and services that do not meet requirements or perform as specified. Four out of the five ORP quality assurance experts we interviewed who had recent experience with the procurement of items and services told us that problems with procured items and services that do not meet established requirements or perform as specified are not fully resolved. One of these ORP quality assurance experts stated that an ORP team recently reviewed a random sample of 45 of the roughly 30,000 procurements the contractor had made for the WTP and identified a number of instances where materials did not meet requirements, which resulted in one Priority Level Two finding—which represents a serious issue that indicates an adverse condition, such as a noncompliance or breakdown of a management system—and five Priority Level Three findings. The expert noted that this was many more deficiencies than the team expected for such a small sample. 
Settlement of Allegations of Contractors Knowingly Mischarging Costs at the Waste Treatment and Immobilization Plant (WTP) In November 2016, the WTP contractor and certain subcontractors agreed to pay $125 million to resolve allegations under the False Claims Act that they made false statements and claims to the Department of Energy (DOE) by charging DOE for deficient nuclear quality materials, services, and testing that were provided to the WTP at DOE’s Hanford Site. The contract required materials, testing, and services to meet certain nuclear quality standards. The Department of Justice alleged that the defendants violated the False Claims Act by charging the government the cost of complying with these standards when they failed to do so. In particular, the Department of Justice alleged that the defendants improperly billed the government for materials and services from vendors that did not meet quality control requirements, for piping and waste vessels that did not meet quality standards, and for testing from vendors that did not have compliant quality programs. As part of the settlement, the contractors admitted no wrongdoing, and the United States did not concede that its claims were not well founded. Software that does not meet requirements. ORP quality assurance experts told us that problems are recurring in certain areas where items and processes do not meet requirements, such as computer software quality assurance, despite the contractor developing two MIP corrective measures in this area. Two ORP quality assurance experts reported that problems with software quality are recurring. One ORP quality assurance expert added that the contractor often fails to develop software quality documentation that is needed to demonstrate compliance with quality requirements when permitting facilities for operation. As a result, the contractor will have to re-create this documentation at some cost. 
A maintenance program that does not prevent damage, loss, or deterioration. Each of the three ORP quality assurance experts with knowledge in this area told us that the contractor had not established a fully effective WTP maintenance program, particularly for the Pretreatment and High-Level Waste Facilities, and as a result, structures, systems, and components at these facilities have deteriorated and been damaged. Such statements are consistent with findings of the Defense Nuclear Facilities Safety Board, which reported in April 2016 that systems and components stored in an outdoor storage yard were not properly covered and showed signs of being affected by water, sand, or animals. In March 2016, ORP reported significant water intrusion into several areas of the High-Level Waste Facility. As a result, some of the facility’s structures, systems, and components had deteriorated and will require costly rework. The contractor notified DOE in April 2017 that because DOE’s focus is on completing facilities needed for initial WTP operations, it would submit a proposal to change the WTP contract to account for the increased scope, cost, and schedule of long-term maintenance, storage, and management of procured and partially installed structures, systems, and components at those facilities not needed for initial WTP operations. Consistent with its quality assurance procedures, ORP can use its authorities—such as those under the Consent Order and its quality assurance policy—to stop work if corrective measures do not prevent quality assurance problems from recurring. However, ORP has not used such authorities.
ORP senior officials told us that they did not consider it necessary to stop work because of the recurrence of problems in certain areas because they plan to evaluate the extent of the contractor’s implementation of MIP corrective measures over the next year and have allowed work to continue because they believe that the contractor’s quality assurance program is generally adequate. Without directing ORP to use its authorities to stop work in areas where quality assurance problems are recurring until it can verify that the problems are corrected and will not recur, DOE may face future rework that could increase costs and schedule delays for the WTP. A 2017 assessment from DOE headquarters and our interviews with nine ORP quality assurance experts suggest that ORP’s organizational structure does not provide the quality assurance function with sufficient independence from upper management—which includes the ORP Manager and the WTP Federal Project Director—to effectively oversee the contractor’s quality assurance program. Our prior work has found that to be independent, an oversight organization should be structurally distinct and separate from program offices responsible for achieving the program’s mission to avoid management interference or conflict between program office mission objectives and safety. At ORP, however, the Quality Assurance Division is not fully separate and independent from the upper management of the WTP project, which manages cost and schedule performance. We believe that such a structure has the potential to create a conflict of interest. Specifically, we found that ORP’s Quality Assurance Division performs assessments of the contractor’s quality assurance program, among other things, and reports its findings to ORP upper management, including the ORP Manager, who has the discretion to determine whether and to what extent to require the contractor to take action in response to findings. 
When quality assurance issues are identified, ORP upper management must balance its mission of meeting cost and schedule targets with its responsibility to ensure that nuclear safety and quality standards are met. However, these are two potentially conflicting responsibilities because meeting WTP cost and schedule targets may be threatened if serious quality assurance problems are identified. A February 2017 external assessment from DOE headquarters noted that ORP’s Quality Assurance Division’s effectiveness has been limited because, in some instances, its findings have been mischaracterized by ORP upper management, and in others, ORP upper management has not used this division effectively to evaluate the extent of potential quality assurance problems. This assessment found that ORP had not performed adequate oversight of the contractor’s MIP and that some critical quality assurance areas were not receiving the necessary scrutiny from ORP. Further, the assessment found that ORP management sometimes mischaracterized the seriousness of the Quality Assurance Division’s findings and, as a result, did not require the contractor to conduct extent-of-condition reviews for significant quality assurance problems. While this assessment stated that ORP had an effective quality assurance program, it concluded that three of the eight quality assurance areas the assessment team reviewed were not fully effective, including ORP’s ability to conduct assessments of the contractor’s quality assurance program.

A Cautionary Tale: Quality Assurance Problems Doom Commercial Nuclear Power Plant

In the commercial nuclear industry, there is a notable example of a construction project that faced significant quality assurance challenges. In the 1970s and early 1980s, Cincinnati Gas & Electric attempted to construct a commercial nuclear power plant, known as the Zimmer Plant, near Moscow, Ohio.
After 10 years of construction and more than $2 billion spent, the company abandoned its effort to construct the plant. An independent review mandated by the Nuclear Regulatory Commission in 1982 concluded that several issues impeded successful construction of the Zimmer Plant as a commercial nuclear power plant. These issues included (1) the company’s failure to elevate its commitment to quality and quality assurance to an equal status with cost and schedule, (2) the regulator’s failure to hold the company accountable for quality in design and construction, and (3) the company’s inadequate quality assurance procedures. To recoup some of the $2 billion spent in attempting to construct this commercial nuclear power plant, Cincinnati Gas & Electric later converted facilities built at the site for use in a coal-fired power plant.

ORP quality assurance experts we interviewed also stated that ORP upper management and the contractor place cost and schedule performance above identifying and resolving quality assurance issues. One quality assurance expert specified that ORP’s culture does not encourage staff to identify quality assurance problems or ineffective corrective measures. This expert said that people who discover problems are not rewarded; rather, their findings are met with resistance, which has created a culture where quality assurance staff are hesitant to identify quality assurance problems or problems with corrective measures. This expert added that quality assurance is subordinate to cost and schedule—that is, senior managers responsible for approving quality assurance findings are more concerned with whether WTP construction meets schedule milestones than identifying and resolving quality assurance issues. This expert compared the WTP to the Zimmer Power Plant—a power plant in Ohio that was designed to be a nuclear power plant but that was never licensed because of unresolved quality assurance problems and a focus on schedule over construction quality.
As stated earlier, in October 2008, we identified key elements that any nuclear safety oversight organization should have in order for it to provide effective independent oversight. For example, we found that an organization should be structurally distinct and separate from DOE program offices to avoid management interference or conflict between program office mission objectives, such as cost and schedule performance and safety. We also found that the organization should have sufficient authority to require program offices to effectively address its findings and recommendations. ORP’s Assistant Manager for Technical and Regulatory Support and ORP senior quality assurance staff told us that ORP’s organizational structure ensures that the quality assurance function is sufficiently independent of ORP management. These officials and the ORP Quality Assurance Program Description state that the Quality Assurance Division is structured to report directly to the ORP Assistant Manager for Technical and Regulatory Support and the ORP Manager. They also cited the ORP Quality Assurance Program policy, which states that the Quality Assurance Division has the authority and overall responsibility to independently audit the contractor’s quality assurance program to verify the achievement of quality. According to these officials, this organizational structure ensures independence from cost and schedule considerations and ensures objectivity in quality assurance evaluations, and they added that the ORP Manager evaluates differing opinions without any hindrances or organizational bias. 
Given that some previously identified problems are recurring at the WTP, including some in areas where the contractor had implemented corrective measures, and given the findings of the 2017 headquarters assessment and the statements of ORP’s quality assurance experts outlined above, we are concerned that ORP’s organizational structure may not entirely ensure that the Quality Assurance Division meets key elements for a nuclear safety oversight organization to provide effective independent oversight. According to ORP reports and officials, in ORP’s current organizational structure, upper level management retains discretion in how to resolve quality assurance problems. As a result, the Quality Assurance Division does not have sufficient authority to ensure that its findings are addressed and its recommendations are implemented. By revising ORP’s organizational structure so that the quality assurance function is independent of ORP upper-level management, DOE can have better assurance that compliance with nuclear safety requirements will not be subordinated to meeting cost and schedule targets. For years DOE has faced quality assurance problems at the WTP. Upon learning in 2012 that it could not verify that engineering, procurement, and construction at the WTP met nuclear safety and quality requirements, ORP directed the contractor to implement quality assurance corrective measures to ensure that problems would be identified and prevented from recurring. However, 5 years later, the contractor has not fully implemented all planned corrective measures. Moreover, in some areas where the contractor has stated that corrective measures are now in place, ORP continues to encounter quality assurance problems similar to those it encountered in the past. When and where problems have recurred, ORP has not always required the contractor to determine the extent to which the problems may affect all parts of the WTP. 
By directing ORP to require the WTP contractor, where quality assurance problems have been identified, to determine the full extent to which problems exist in all WTP structures, systems, and components, DOE will gain a comprehensive understanding of all quality assurance problems at all WTP facilities. In addition, ORP has not always used its authorities to stop work when problems are detected before they are fully corrected. Without directing ORP to use its authorities to stop work in areas where quality assurance problems are recurring until it can verify that the problems are corrected and will not recur, DOE may face future rework that could increase costs and schedule delays for the WTP. Also of concern is the potential lack of sufficient independence of ORP’s Quality Assurance Division from ORP’s upper management. This has resulted in ORP upper management not always allowing its own experts to fully examine the contractor’s work even when problems have recurred. At other times, this has resulted in the significance of identified problems—and strength of associated corrective measures—being reduced. DOE’s ability to effectively self-regulate a high-hazard nuclear facility not only depends on vigorous oversight of the contractor by the program office but also on active oversight by an independent group. The WTP is the largest and most technically complex cleanup project managed by DOE, and we recognize that meeting its cost and schedule targets places immense pressure on ORP upper management. However, meeting those targets is further threatened when quality assurance problems are downgraded. By revising ORP’s organizational structure so that the quality assurance function is independent of ORP upper management, DOE can have better assurance that compliance with nuclear safety requirements will not be subordinated to meeting cost and schedule targets. 
We are making the following three recommendations to DOE: The Secretary of Energy should direct ORP to require the WTP contractor to determine the full extent to which problems exist in all WTP structures, systems, and components. The Secretary of Energy should direct ORP to use its authorities to stop work in areas where quality assurance problems are recurring until ORP’s Quality Assurance Division can verify that the problems are corrected and will not recur. The Secretary of Energy should revise ORP’s organizational structure so that the quality assurance function is independent of ORP upper management. We provided DOE with a draft of this report for its review and comment. In its written comments, reproduced in appendix I, DOE generally agreed with the findings in the report and its recommendations. DOE agreed with our first two recommendations and described actions it has under way and planned to address them. In addition, DOE agreed with our third recommendation—to revise ORP’s organizational structure so that the quality assurance function is independent of ORP upper management—in principle. While DOE states that it believes that the current ORP quality assurance reporting relationship meets all established requirements, it also states that the report identifies instances that indicate that ORP could be strengthened to improve the effectiveness and independence of its quality assurance functions. In response to our recommendation, DOE plans to direct ORP to assess the quality assurance functional reporting lines, responsibilities, and processes to enhance the independence of the quality function from cost and schedule influences and to strengthen and clarify quality assurance reporting to the ORP Manager. This planned action is a positive first step toward implementing our recommendation. We are sending copies of this report to the appropriate congressional committees; the Secretary of Energy; and other interested parties. 
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix II. In addition to the contact named above, Nathan Anderson (Assistant Director), Mark Braza, Scott Fletcher, Ellen Fried, Richard Johnson, Paul Kazemersky, and Peter Ruedel made key contributions to this report.
DOE and its contractor are building the WTP—which consists of multiple facilities—to treat a large portion of nuclear waste at Hanford. The project has faced persistent challenges, including quality assurance problems that have delayed it by decades and more than tripled its costs, to nearly $17 billion. DOE's quality assurance framework aims to ensure that all problems are identified and do not recur. Senate Report 114-49 accompanying the National Defense Authorization Act for Fiscal Year 2016 included a provision for GAO to carry out an ongoing evaluation of the WTP. This first report examines (1) the actions DOE has taken to identify and address WTP quality assurance problems, (2) the extent to which DOE has ensured that quality assurance problems have been identified and do not recur, and (3) the extent to which DOE's organizational structure at ORP provides the Quality Assurance Division with independence to effectively oversee the contractor's quality assurance program. GAO reviewed DOE documents and obtained the insights of ORP's internal experts on WTP quality assurance efforts and outcomes. The Department of Energy (DOE) has taken several actions to identify and address quality assurance problems at the Waste Treatment and Immobilization Plant (WTP) at its Hanford site in Washington. Among the actions taken is the implementation of the Managed Improvement Plan by DOE's Office of River Protection (ORP) and the WTP contractor. The plan is intended to ensure that the WTP can operate in compliance with DOE-approved safety and quality requirements. The contractor has stated that the plan is fully implemented, but GAO found that a number of key activities may be incomplete and ORP officials will not be able to verify the extent of implementation until December 2018.
According to DOE documents that GAO reviewed and ORP quality assurance experts GAO spoke with, ORP has not ensured that all WTP quality assurance problems have been identified and some previously identified problems are recurring. For example, a 2016 DOE report found quality assurance problems, such as engineering errors and construction deficiencies, that neither ORP nor the contractor had identified when the work was conducted. ORP quality assurance experts GAO spoke with reiterated the issues identified in reports. In addition, DOE audits have found that previously identified quality assurance problems have recurred in key areas, such as the procurement of items that do not meet requirements or perform as specified. These problems were also raised by several of the ORP quality assurance experts GAO interviewed. According to these experts, such recurring problems may lead to significant rework at WTP facilities in the future if work is not stopped and the issues addressed. ORP's quality assurance framework requires the contractor to determine the extent to which quality assurance problems exist in all WTP structures, systems, and components when such problems are identified, and allows ORP to stop work at a facility if recurring issues arise. However, ORP has neither directed the contractor to make this determination nor stopped work when problems recur because it has confidence in the Managed Improvement Plan. ORP's organizational structure may not provide its Quality Assurance Division with sufficient independence from the office's upper management to oversee the contractor's quality assurance program effectively. GAO has previously found that an oversight organization should be structurally distinct and separate from program offices responsible for cost and schedule performance to avoid conflict between mission objectives and safety. However, a 2017 DOE headquarters assessment found that ORP's Quality Assurance Division's effectiveness has been limited. 
This is because in some cases ORP upper management had mischaracterized its findings, and in other instances, ORP upper management had not used this division to evaluate the extent of potential quality assurance problems. ORP quality assurance experts GAO spoke to were also concerned that ORP's organizational structure does not always ensure the independence of the division. For example, two of these experts described instances when ORP upper management had downgraded the division's findings so that the contractor could take less stringent corrective measures. By providing the Quality Assurance Division adequate independence, DOE can better ensure that compliance with nuclear safety requirements will not be subordinated to other project management goals, such as meeting cost and schedule targets. GAO recommends that DOE direct ORP to require the WTP contractor to determine the extent of problems in WTP structures, systems, and components and to stop work when problems recur, and that DOE revise ORP's organizational structure to ensure the independence of the Quality Assurance Division. DOE generally agreed with GAO's recommendations.
Human spaceflight at NASA began in the 1960s with the Mercury and Gemini programs leading up to the Apollo moon landings. After the last lunar landing, Apollo 17, in 1972, NASA shifted its attention to low earth orbit operations with human spaceflight efforts that included the Space Shuttle and International Space Station programs through the remainder of the 20th century. In the early 2000s, NASA once again turned its attention to cislunar and deep space destinations, and in 2005 initiated the Constellation program, a human exploration program that was intended to be the successor to the Space Shuttle. The Constellation program was canceled, however, in 2010 due to factors that included cost and schedule growth and funding gaps. Following Constellation, the National Aeronautics and Space Administration Authorization Act of 2010 directed NASA to develop a Space Launch System, to continue development of a crew vehicle, and to prepare infrastructure at Kennedy Space Center to enable processing and launch of the launch system. To fulfill this direction, NASA formally established the SLS program in 2011. Then, in 2012, the Orion project transitioned from its development under the Constellation program to a new development program aligned with SLS. To transition Orion from Constellation, NASA adapted the requirements from the former Orion plan to those of the newly created SLS and the associated ground systems programs. In addition, NASA and the European Space Agency agreed that the European Space Agency would provide a portion of the service module for Orion. Figure 1 provides details about the heritage of each SLS hardware element and its source as well as identifies the major portions of the Orion crew vehicle. The EGS program was established to modernize the Kennedy Space Center to prepare for integrating hardware from the three programs as well as processing and launching SLS and Orion and recovering the Orion crew capsule.
EGS is made up of nine major components, including the Vehicle Assembly Building, Mobile Launcher, Launch Control Center and software, Launch Pad 39B, Crawler-Transporter, Launch Equipment Test Facility, Spacecraft Offline Processing, Launch Vehicle Offline Processing, and Landing and Recovery. See figure 2 for pictures of the Mobile Launcher, Vehicle Assembly Building, Launch Pad 39B, and Crawler-Transporter. NASA’s Exploration Systems Development (ESD) organization is responsible for directing development of the three individual human spaceflight programs—SLS, Orion, and EGS—into a human space exploration system. The integration of these programs is key because all three systems must work together for a successful launch. The integration activities for ESD’s portfolio occur at two levels in parallel throughout the life of the programs: as individual efforts to integrate the various elements managed within the separate programs and as a joint effort to integrate the three programs into an exploration system. The three ESD programs support NASA’s long-term goal of sending humans to distant destinations, including Mars. NASA’s approach to developing and demonstrating the technologies and capabilities to support its long-term plans for a crewed mission to Mars includes three general stages of activities—Earth Reliant, Proving Ground, and Earth Independent. Earth Reliant: From 2016 to 2024, NASA’s planned exploration is focused on research aboard the International Space Station. On the International Space Station, NASA is testing technologies and advancing human health and performance research that will enable deep space, long duration missions. Proving Ground: From the mid-2020s to early-2030s, NASA plans to learn to conduct complex operations in a deep space environment that allows crews to return to Earth in a matter of days.
Primarily operating in cislunar space—the volume of space around the moon featuring multiple possible stable staging orbits for future deep space missions—NASA will advance and validate capabilities required for humans to live and work at distances much farther away from our home planet, such as on Mars. Earth Independent: From the early-2030s to the mid-2040s, planned activities will build on what NASA learns on the space station and in deep space to enable human missions to the vicinity of Mars, possibly to low-Mars orbit or one of the Martian moons, and eventually the Martian surface. The first launch of the integrated ESD systems, EM-1, is a Proving Ground mission. EM-1 is an uncrewed test flight, currently planned for no earlier than October 2019, that will fly about 70,000 kilometers beyond the moon. The second launch, Exploration Mission 2 (EM-2), which will utilize an evolved SLS variant with a more capable upper stage, is also a Proving Ground mission planned for no later than April 2023. EM-2 is expected to be a 10- to 14-day crewed flight with up to four astronauts that will orbit the moon and return to Earth to demonstrate the baseline Orion vehicle capability. NASA eventually plans to develop larger and more capable versions of the SLS to support Proving Ground and Earth Independent missions after EM-2. As noted above, in April 2017 we found that given the combined effects of ongoing technical challenges in conjunction with limited cost and schedule reserves, it was unlikely that the ESD programs would achieve the November 2018 launch readiness date. We recommended that NASA confirm whether the EM-1 launch readiness date of November 2018 was achievable, as soon as practicable but no later than as part of its fiscal year 2018 budget submission process. We also recommended that NASA propose a new, more realistic EM-1 date if warranted.
NASA agreed with both recommendations and stated that it was no longer in its best interest to pursue the November 2018 launch readiness date. Further, NASA stated that, in fall 2017, it planned to establish a new launch readiness date. Subsequently, in June 2017, NASA sent notification to Congress that EM-1’s recommended launch date would be no earlier than October 2019. The life cycle for NASA space flight projects consists of two phases—formulation, which takes a project from concept to preliminary design, and implementation, which includes building, launching, and operating the system, among other activities. NASA further divides formulation and implementation into pre-phase A through phase F. Major projects must get approval from senior NASA officials at key decision points before they can enter each new phase. The three ESD programs are completing design and fabrication efforts prior to beginning Phase D system assembly, integration and test, launch and checkout. Figure 3 depicts NASA’s life cycle for space flight projects. NASA’s approach for integrating and assessing programmatic and technical readiness, executed by ESD, differs from prior NASA human spaceflight programs. This new approach offers some cost and potential efficiency benefits. However, it also brings challenges specific to its structure. In particular, there are oversight challenges because only one of the three programs, Orion, has a cost and schedule estimate for EM-2. NASA is already contractually obligating money on SLS and EGS for EM-2, but the lack of cost and schedule baselines for these programs will make it difficult to assess progress over time. Additionally, the approach creates an environment of competing interests because it relies on dual-hatted staff to manage technical and safety aspects on behalf of ESD while also serving as independent oversight of those same areas. NASA is managing the human spaceflight effort differently than it has in the past.
Historically, NASA used a central management structure to manage human spaceflight efforts for the Space Shuttle and the Constellation programs. For example, both the Shuttle and Constellation programs were organized under a single program manager and used a contractor to support integration efforts. Additionally, the Constellation program was part of a three-level organization—the Exploration Systems Mission Directorate within NASA headquarters, the Constellation program, and then projects, including the launch vehicle, crew capsule, ground systems, and other lunar-focused projects, managed under the umbrella of Constellation. Figure 4 illustrates the three-level structure used in the Constellation program. In the Constellation program, the programmatic workforce was distributed within the program and projects. For example, systems engineering and integration organizations—those offices responsible for making separate technical designs, analyses, organizations and hardware come together to deliver a complete functioning system—were embedded within both the Constellation program and within each of the projects. NASA’s current approach is organized with ESD, rather than a contractor, as the overarching integrator for the three separate human spaceflight programs—SLS, Orion, and EGS. ESD manages both the programmatic and technical cross-program integration, and primarily relies on personnel within each program to implement its integration efforts. Exploration Systems Integration, an office within ESD, leads the integration effort from NASA headquarters. ESD officials stated that this approach is similar to that used by the Apollo program, wherein the program was also managed out of NASA headquarters. 
Within Exploration Systems Integration, the Cross-Program Systems Integration sub-office is responsible for technical integration and the Programmatic and Strategic Integration sub-office is responsible for integrating the financial, schedule, risk management, and other programmatic activities of the three programs. The three programs themselves perform the hardware and software integration activities. This two-level organizational structure is shown in figure 5. ESD is executing a series of six unique integration-focused programmatic and technical reviews at key points within NASA’s acquisition life cycle, as shown in figure 6, to assess whether NASA cost, schedule, and technical commitments are being met for the three-program enterprise. These reviews cover the life cycle of the integrated programs to EM-1, from formulation to readiness to launch. Some of these reviews are unique to ESD’s role as integration manager. For example, ESD established two checkpoints—Design to Sync in 2015 and Build to Sync in 2016. The purpose of Design to Sync was to assess the ability of the integrated preliminary design to meet system requirements, similar to a preliminary design review; the purpose of Build to Sync was to assess the maturity of the integrated design in readiness for assembly, integration, and test, similar to a critical design review (CDR). At both events, NASA assessed the designs as ready to proceed. Key participants in these integration reviews include ESD program personnel and the Cross-Program Systems Integration and Programmatic and Strategic Integration staff that are responsible for producing and managing the integration activities. ESD’s integration approach offers some benefits in terms of cost avoidance relative to NASA’s most recent human spaceflight effort, the Constellation program. NASA estimated it would need $190 million per year for the Constellation program integration budget. 
By comparison, between fiscal years 2012 and 2017, NASA requested an average of about $84 million per year for the combined integration budgets of the Orion, SLS, EGS, and ESD. This combined average of about $84 million per year represents a significant decrease from the expected integration budget of $190 million per year under the Constellation program. In addition, as figure 7 shows, NASA’s initial estimates for ESD’s required budget for integration are close to the actuals for fiscal years 2012-2017. NASA originally estimated that ESD’s budget for integration would require approximately $30 million per year. ESD’s integration budget was less than $30 million in fiscal years 2012 and 2013 and increased to about $40 million in fiscal year 2017—an average of about $30 million a year. According to NASA officials, some of the cost avoidance can be attributed to the difference in workforce size. The Constellation program’s systems engineering and integration workforce was about 800 people in 2009, the last full year of the program; whereas ESD’s total systems engineering and integration workforce in 2017 was about 500 people, including staff resident in the individual programs. ESD officials also stated that, in addition to cost avoidance, their approach provides greater efficiency. For example, ESD officials said that decision making is much more efficient in the two-level ESD organization than Constellation’s three-level organization because the chain of command required to make decisions is shorter and more direct. ESD officials also indicated that the post-Constellation elimination of redundant systems engineering and integration staff at program and project levels contributed to efficiency. Additionally, they stated that program staff are invested in both their respective programs and the integrated system because they work on behalf of the programs and on integration issues for ESD. 
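The cost-avoidance comparison above is simple arithmetic. The sketch below restates it using only the rounded figures cited in this section (dollars in millions); the cumulative figure is our extrapolation across the six fiscal years, not a number stated in the report.

```python
# Back-of-the-envelope check of the integration cost avoidance described
# above, using the rounded figures cited in this section (in millions of
# dollars per year). These are report-level approximations, not actuals.
constellation_estimate = 190  # Constellation's expected annual integration budget
combined_average = 84         # average combined Orion/SLS/EGS/ESD request, FY2012-FY2017
years = 6                     # fiscal years 2012 through 2017

annual_avoidance = constellation_estimate - combined_average
cumulative_avoidance = annual_avoidance * years
print(annual_avoidance)      # 106
print(cumulative_avoidance)  # 636
```

By this rough measure, the current structure avoided on the order of $100 million per year, or roughly $640 million across the six fiscal years, relative to the Constellation estimate.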
Finally, they said another contribution to increased efficiency was NASA’s decision to establish SLS, Orion, and EGS as separate programs, which allowed each program to proceed at its own pace. One caveat to this benefit, however, is that ESD’s leaner organization is likely to face challenges to its efficiency in the integration and test phases of the SLS, Orion, and EGS programs. We analyzed the rate at which ESD has reviewed and approved the different types of launch operations and ground processing configuration management records for integrated SLS, Orion, and EGS operations, and found that the process is proceeding more slowly than ESD anticipated. For example, as figure 8 illustrates, ESD approved 403 fewer configuration management records than originally planned in the period from March 2016 through June 2017. According to an ESD official, the lower-than-planned approval rate resulted from the time necessary to establish and implement a new review process as well as final records being slower to arrive from the programs for review than ESD anticipated. Additionally, the official stated that the records required differing review timelines because they varied in size and scope. As figure 8 shows, ESD originally expected the number of items that needed review and approval to increase and create a “bow wave” during 2017 and 2018. In spring 2017, ESD re-planned its review and approval process and flattened the bow wave. The final date for review completion is now aligned with the new planned launch readiness date of no earlier than October 2019, which added an extra year to ESD’s timeframe to complete the record reviews. While the bow wave is not as steep as it was under the original plan, ESD will continue to have a large number of records that require approval in order to support the launch readiness date. 
An ESD official stated that NASA had gained experience managing such a bow wave as it prepared for Orion’s 2014 exploration flight test launch aboard a Delta IV rocket and as part of the Constellation program’s prototype Ares launch in 2009, but acknowledged that ESD will need to be cautious that its leaner staff is not overwhelmed with documentation, which could slow down the review process. ESD is responsible for overall affordability for SLS, Orion, and EGS, while each of the programs develops and maintains an individual cost and schedule baseline. The baseline is created at the point when a program receives NASA management approval to proceed into final design and production. In their respective baselines, as shown in table 1, SLS and EGS cost and schedule are baselined to EM-1, and Orion’s are baselined to EM-2. NASA documentation indicates that Orion’s baselines are tied to EM-2 because that is the first point at which it will fulfill its purpose of carrying crew. Should NASA determine it is likely to exceed its cost estimate baseline by 15 percent or miss a milestone by 6 months or more, NASA is required to report those increases and delays—along with their impacts—to Congress. In June 2017, NASA sent notification to Congress that the schedule for EM-1 has slipped beyond the allowed 6-month threshold, but stated that cost is expected to remain within the 15 percent threshold. NASA has not established EM-2 cost baselines or expected total life-cycle costs for SLS and EGS, including costs related to the larger and more capable versions of SLS needed to implement the agency’s plans to send crewed missions to Mars. GAO’s Cost Estimating and Assessment Guide, a guidebook of cost estimating best practices developed in concert with the public and private sectors, identifies baselines as a critical means for measuring program performance over time and addresses how a baseline backed by a realistic cost estimate increases the probability of a program’s success. 
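The reporting requirement described above reduces to a simple two-part test. The sketch below illustrates why the June 2017 notification was triggered by schedule alone; the function and parameter names are illustrative, and the 10 percent cost-growth figure is a hypothetical stand-in for "within the 15 percent threshold."

```python
def must_report_to_congress(cost_growth_percent: float, schedule_slip_months: float) -> bool:
    """Return True if either threshold described above is met: cost estimate
    growth of 15 percent or more, or a milestone slip of 6 months or more.
    Names are illustrative, not drawn from the statute."""
    return cost_growth_percent >= 15 or schedule_slip_months >= 6

# EM-1 as described above: the slip from November 2018 to no earlier than
# October 2019 is about 11 months, while cost is expected to stay under the
# 15 percent threshold (10 is a hypothetical value), so the schedule test
# alone triggers the report.
print(must_report_to_congress(cost_growth_percent=10, schedule_slip_months=11))  # True
```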
In addition, prior GAO work offers insight into how baselines enhance a program’s transparency. For example, we found in 2009 that costs for the Missile Defense Agency’s (MDA) ballistic missile defense system had grown by at least $1 billion, and that lack of baselines for each block of capability hampered efforts to measure progress and limited congressional oversight of MDA’s work. MDA responded to our recommendation to establish these baselines and, in 2011, we reported that MDA had a new process for setting detailed baselines, which had resulted in a progress report to Congress more comprehensive than the one it provided in 2009. To that end, we have made recommendations in the past on the need for NASA to baseline the programs’ costs for capabilities beyond EM-1; however, a significant amount of time has passed without NASA taking steps to fully implement these recommendations. Specifically, in May 2014, we recommended that, to provide Congress with the necessary insight into program affordability, ensure its ability to effectively monitor total program costs and execution, and to facilitate investment decisions, NASA’s Administrator should direct the Human Exploration and Operations Mission Directorate to: Establish a separate cost and schedule baseline for work required to support the SLS for EM-2 and report this information to the Congress through NASA’s annual budget submission. 
If NASA decides to fly the SLS configuration used in EM-2 beyond EM-2, establish separate life cycle cost and schedule baseline estimates for those efforts, to include funding for operations and sustainment, and report this information annually to Congress via the agency’s budget submission; and Establish separate cost and schedule baselines for each additional capability that encompass all life cycle costs, to include operations and sustainment, because NASA intends to use the increased capabilities of the SLS, Orion, and ground support efforts well into the future and has chosen to estimate costs associated with achieving the capabilities. As part of the latter recommendation, we stated that, when NASA could not fully specify costs due to lack of well-defined missions or flight manifests, the agency instead should forecast a cost estimate range—including life cycle costs—with minimum and maximum boundaries and report these baselines or ranges annually to Congress via the agency’s budget submission. In its comments on our 2014 report, NASA partially concurred with these two recommendations, noting that much of what it had already done or expected to do would address them. For example, the agency stated that establishing the three programs as separate efforts with individual cost and schedule commitments met GAO’s intent as would its plans to track and report development, operations, and sustainment costs in its budget to Congress as the capabilities evolved. In our response, we stated that while NASA’s prior establishment of three separate programs lends some insight into expected costs and schedule at the broader program level, it does not meet the intent of the two recommendations because cost and schedule identified at that level is unlikely to provide the detail necessary to monitor the progress of each block against a baseline. 
Further, reporting the costs via the budget process alone will not provide information about potential costs over the long term because budget requests neither offer all the same information as life-cycle cost estimates nor serve the same purpose. Life-cycle cost estimates establish a full accounting of all program costs for planning, procurement, operations and maintenance, and disposal and provide a long-term means to measure progress over a program’s life span. In 2016, NASA requested closure of these recommendations, citing, among other factors, changes to the programs’ requirements, design, architecture, and concept of operations. However, NASA’s request did not identify any steps taken to meet the intent of these two recommendations, such as establishing cost and schedule baselines for EM-2, baselines for each increment of SLS, Orion, or ground systems capability, or documentation of life cycle cost estimates with minimum and maximum boundaries. Further, a senior level ESD official told us that NASA does not intend to establish a baseline for EM-2 because it is not required to do so. The limited scope that NASA has chosen to use as the basis for formulating the programs’ cost baselines does not provide the transparency necessary to assess long-term affordability. Plainly, progress cannot be assessed without a baseline that serves as a means to compare current costs against expected costs; consequently, it becomes difficult to assess program affordability and for Congress to make informed budgetary decisions. NASA’s lack of action in regard to our 2014 recommendations means that the agency is now contractually obligating billions of dollars in potential costs for EM-2 and beyond without a baseline against which to assess progress. 
For example: in fiscal year 2016, the SLS program awarded two contracts to Aerojet Rocketdyne: a $175 million contract for RL-10 engines to power the exploration upper stage during EM-2 and EM-3 and a $1.2 billion contract to restart the RS-25 production line required for engines for use beyond EM-4, and to produce at least 4 additional RS-25 engines; in 2017, SLS modified the existing Boeing contract upwards by $962 million for work on the exploration upper stage that SLS will use during EM-2 and future flights; and on a smaller scale, in fiscal year 2016 the EGS program obligated $4.8 million to support the exploration upper stage and EM-2. As illustrated by these contracting activities, the SLS program is obligating more funds for activities beyond EM-1 than Congress directed. Specifically, of approximately $2 billion appropriated for the SLS program, the Consolidated Appropriations Act, 2016 directed that NASA spend not less than $85 million for enhanced upper stage development for EM-2. NASA has chosen to allocate about $360 million of its fiscal year 2016 SLS appropriations towards EM-2, including enhanced upper stage development, additional performance upgrades, and payload adapters, without a baseline to measure progress and ensure transparency. The NASA Inspector General (IG) also recently reported that NASA is spending funds on EM-2 efforts without a baseline in place and expressed concerns about the need for EM-2 cost estimates. Because NASA has not implemented our recommendations, it may now be appropriate for Congress to take action to require EM-2 cost and schedule baselines for SLS and EGS, and separate cost and schedule baselines for additional capabilities developed for Orion, SLS, and EGS for missions beyond EM-2. These baselines would be important tools for Congress to make informed, long-term budgetary decisions with respect to NASA’s future exploration missions, including Mars. 
NASA’s governance model prescribes a management structure that employs checks and balances among key organizations to ensure that decisions have the benefit of different points of view and are not made in isolation. As part of this structure, NASA established the technical authority process as a system of checks and balances to provide independent oversight of programs and projects in support of safety and mission success through the selection of specific individuals with delegated levels of authority. The technical authority process has been used in other parts of the government for acquisitions, including the Department of Defense and Department of Homeland Security. ESD is organizationally connected to three technical authorities within NASA. The Office of the Chief Engineer technical authority is responsible for ensuring from an independent standpoint that the ESD engineering work meets NASA standards; the Office of Safety and Mission Assurance technical authority is responsible for ensuring from an independent standpoint that ESD products and processes satisfy NASA’s safety, reliability, and mission assurance policies; and the Office of Chief Health and Medical technical authority is responsible for ensuring from an independent standpoint that ESD programs meet NASA’s health and medical standards. These NASA technical authorities have delegated responsibility for their respective technical authority functions directly to ESD staff. According to NASA’s project management requirements, the program or project manager is ultimately responsible for the safe conduct and successful outcome of the program or project in conformance with governing requirements and those responsibilities are not diminished by the implementation of technical authority. ESD has established an organizational structure in which the technical authorities for engineering and safety and mission assurance (S&MA) are dual hatted to also serve simultaneously in programmatic positions. 
The chief engineer technical authority also serves as the Director of ESD’s Cross Program System Integration Office and the S&MA technical authority also serves as the ESD Safety and Mission Assurance Manager. In their programmatic roles for ESD, the individuals manage resources, including budget and schedule, to address engineering and safety issues. In their technical authority roles, these same individuals are to provide independent oversight of programs and projects in support of safety and mission success. Having the same individual simultaneously fill both a technical authority role and a program role creates an environment of competing interests where the technical authority may be subject to impairments in their ability to impartially and objectively assess the programs while at the same time acting on behalf of ESD in programmatic capacities. This duality makes them more subject to program pressures of cost and schedule in their technical authority roles. Figure 9 describes some of the conflicting roles and responsibilities of these officials in their two different positions. The concurrency of duties leaves the positions open to conflicting goals of safety, cost, and schedule and increases the potential for the technical authorities to become subject to cost and schedule pressures. 
For example: the dual-hatted engineering and S&MA technical authorities serve on decision-making boards both in technical authority and programmatic capacities, making them responsible for providing input on technical and safety decisions while also keeping an eye on the bottom line for ESD’s cost and schedule; and the technical authorities are positioned such that they have been the reviewers of the ESD programmatic areas they manage—in essence, “grading their own homework.” For example, at ESD’s Build to Sync review in 2016, the engineering and S&MA technical authorities evaluated the areas that they manage in their respective capacities as ESD Director of Cross Program System Integration and ESD Safety and Mission Assurance Manager. This process relied on their abilities as individuals to completely separate the two hats—using one hand to put on the ESD hat and manage technical and safety issues within programmatic cost and schedule constraints and using the other hand to take off that hat and assess the same issues with an independent eye. NASA officials identified several reasons why the dual-hat structure works for their purposes. Agency officials stated that one critical factor to successful dual-hatting is having the “right” people in those dual-hat positions—that is, personnel with the appropriate technical knowledge to do the work and the ability to act both on behalf of ESD and independent of it. Officials also indicated that technical authorities retain independence because their technical authority reporting paths and performance reviews are all within their technical authority chain of command rather than under the purview of the ESD chain of command. 
Additionally, agency officials said that dual-hat roles are a commonplace practice at NASA and cited other factors in support of the approach, including that: it would not be an efficient use of resources to have an independent technical authority with no program responsibilities because that person would be unlikely to have sufficient program knowledge to provide useful insight and could slow the program’s progress; a technical authority that does not consider cost and schedule is not helpful to the program because it is unrealistic to disregard those aspects of program management; a strong dissenting opinion process is in place and allows for issues to be raised through various levels to the Administrator level within NASA; and ESD receives additional independent oversight through three NASA internal organizations—the independent review teams that provide independent assessments of a program’s technical and programmatic status and health at key points in its life cycle; the NASA Engineering and Safety Center that conducts independent safety and mission success-related testing, analysis, and assessments of NASA’s high-risk projects; and the Aerospace Safety Advisory Panel (ASAP) that independently oversees NASA’s safety performance. These factors that NASA officials cite in support of the dual-hat approach minimize the importance of having independent oversight and place ESD at risk of fostering an environment in which there is no longer a balance between preserving safety with the demands of maintaining cost and schedule. The Columbia Accident Investigation Board (CAIB) report—the result of an in-depth assessment of the technical and organizational causes of the Columbia accident—concluded that NASA’s organization for the Shuttle program combined, among other things, all authority and responsibility for schedule, cost, safety, and technical requirements and that this was not an effective check and balance. 
The CAIB report recommended that NASA establish a technical authority to serve independently of the Space Shuttle program so that employees would not feel hampered to bring forward safety concerns or disagreements with programmatic decisions. The Board’s findings that led to this recommendation included a broken safety culture in which it was difficult for minority and dissenting opinions to percolate up through the hierarchy; dual Center and programmatic roles vested in one person that had confused lines of authority, responsibility, and accountability and made the oversight process susceptible to conflicts of interest; and oversight personnel in positions within the program, increasing the risk that these staffs’ perspectives would be hindered by too much familiarity with the programs they were overseeing. ESD officials stated that they had carefully and thoughtfully implemented the intent of the CAIB; they said they had not disregarded its finding and recommendations but instead established a technical authority in such a way that it best fit the context of ESD’s efforts. These officials did acknowledge, though, that the dual hat approach does not align with the CAIB report’s recommendation to separate programmatic and technical authority or with NASA’s governance framework. Further, over the course of our review, we spoke with various high-ranking officials outside and within NASA who expressed some reservations about ESD’s dual hat approach. For example: The former Chairman of the CAIB stated that, even though the ESD programs are still in development, he believes the technical authority should be institutionally protected against the pressures of cost and schedule and added that NASA should never be lulled into dispensing of engineering and safety independence because human spaceflight is an extremely risky enterprise. 
Both NASA’s Chief Engineer and Chief of S&MA acknowledged there is inherent conflict in the concurrent roles of the dual hats, while also expressing great confidence in the ESD staff now in the dual roles. NASA’s Chief of S&MA indicated that the dual-hat S&MA structure is working well within ESD, but he believes these dual-hatted roles may not necessarily meet the intent of the CAIB’s recommendation because the Board envisioned an independent safety organization completely outside the programs. NASA’s Chief Engineer stated that he believes technical authority should become a separate responsibility and position as ESD moves forward with integration of the three programs and into their operation as a system. As these individuals made clear, ensuring the ESD engineering and S&MA technical authorities remain independent of cost and schedule conflicts is key to human spaceflight success and safety. Along these lines, the ASAP previously conveyed concerns about NASA’s implementation of technical authority that continue to be valid today. In particular, the ASAP stated in a 2013 report that NASA’s technical authority was working at that time in large measure due to the well-qualified, strong personnel that had been assigned to the process. The panel noted, however, that should there be a conflict or weakening of the placement of strong individuals in the technical authority position, this could introduce greater risk into a program. Although a current ASAP official stated she had no concerns with ESD’s present approach to technical authority, the panel’s prior caution remains applicable, and the risk that the ASAP identified earlier could be realized if not mitigated by eliminating the potential for competing interests within the ESD engineering and S&MA positions. NASA is currently concluding an assessment of the implementation of the technical authority role to determine how well that function is working across the agency. 
According to the official responsible for leading the study, the assessment includes examining the evolution of the technical authority role over the years and whether NASA is spending the right amount of funds for those positions. NASA expects to have recommendations in 2017 on how to improve the technical authority function, but does not expect to address the dual hat construct. A principle of federal internal controls is that an agency should design control activities to achieve objectives and respond to risks, which includes segregation of key duties and responsibilities to reduce the risk of error, misuse, or fraud. By overlapping technical authority and programmatic responsibilities, NASA will continue to run the risk of creating an environment of competing interests for the ESD engineering and S&MA technical authorities. Despite the development and integration challenges associated with a new human spaceflight capability, ESD has improved its overall cross-program risk posture over the past 2 years. Nonetheless, it still faces key integration risk areas within software development and verification and validation (V&V). Both are critical to readiness for EM-1 because software acts as the “brain” that ties SLS, Orion, and EGS together in a functioning body, while V&V ensures the integrated body works as expected. The success of these efforts forms the foundation for a launch, no matter the date of EM-1. We have previously reported on individual SLS, Orion, and EGS program risks that were contributing to potential delays within each program. For example, in July 2016, we found that delays with the European Service Module—which will provide services to the Orion crew module in the form of propulsion, consumables storage, and heat rejection and power—could potentially affect the Orion program’s schedule. 
Subsequently, in April 2017, we found that those delays had worsened and were contributing to the program likely not making a November 2018 launch readiness date. All three programs continue to manage such individual program risks, which is to be expected of programs of this size and complexity. The programs may choose to retain these risks in their own risk databases or elevate them to ESD to track mitigation steps. A program would elevate a risk to ESD when decisions are needed by ESD management, such as a need for additional resources or requirement changes. Risks with the greatest potential for negative impacts are categorized as top ESD risks. In addition to these individual program risks that are elevated to ESD, ESD is also responsible for overseeing cross-program risks that affect multiple programs. An example of a cross-program risk is the potential for delayed delivery of data from SLS and Orion to affect the EGS software development schedule. ESD has made progress reducing risks over the last 2 years, from the point of the Design to Sync preliminary design review equivalent for the integrated programs to the Build to Sync critical design review equivalent. As figure 10 illustrates, ESD has reduced its combined total of ESD and cross program risks from 39 to 25 over this period, and reduced the number of high risks from about 49 percent of the total to about 36 percent of the total. The ESD risk system is dynamic, with risks coming into and dropping out of the system over time as development proceeds and risk mitigation is completed. A total of 29 of the 39 risks within the ESD risk portfolio were removed from the register and 15 risks were added to the register between November 2014, prior to Design to Sync, and March 2017, after Build to Sync. 
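The portfolio figures above are internally consistent, as a quick tally shows. Note that the high-risk head counts in the sketch are our inference from the stated percentages, not numbers given in the report.

```python
# Tally of the ESD risk-portfolio changes described above.
initial_total = 39       # combined ESD and cross-program risks, Nov 2014
removed, added = 29, 15  # risks retired from and added to the register
final_total = initial_total - removed + added
print(final_total)  # 25, matching the post-Build to Sync total in the text

# Implied high-risk counts (our inference from the stated percentages).
print(round(0.49 * initial_total))  # about 19 high risks before Design to Sync
print(round(0.36 * final_total))    # 9 high risks after Build to Sync
```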
Examples of risks removed over this time period include risks associated with late delivery of Orion and SLS ground support equipment hardware to EGS and establishing a management process to identify risks stemming from the programs being at differing points in development. Nine risks remained active in the system over the 2-year period we analyzed, and NASA experienced delays in the length of time it anticipated it would take to complete mitigation of the majority of these nine risks. Three of these nine risks that have remained active in the risk system since before Design to Sync are still classified as high risk; the remaining six are classified as medium risk. Mitigation is an action taken to eliminate or reduce the potential severity of a risk, either by reducing the probability of it occurring, by reducing the level of impact if it does occur, or both. ESD officials indicated a number of reasons why risks could take longer to mitigate. For instance, risks with long-term mitigation strategies may go for extended periods of time without score changes. In addition, ESD may conduct additional risk assessments and determine that certain risks need to be reprioritized over time and that resources should be focused towards higher risks. In addition, some risk mitigation steps are tied to hardware delivery and launch dates, and as those dates slip, the risk mitigation steps will slip as well. As illustrated in table 2, we found that six of these nine risks were related to software and V&V and represented some of the primary causes in terms of estimated completion delays. On average, the estimated completion dates for these six risks were delayed about 16 months. In addition, the two V&V risks that have remained active since before Design to Sync were still considered top ESD risks as of March 2017 when we completed this analysis. Software development is one of the top cross-program technical issues facing ESD as the programs approach EM-1. 
Software is a key enabling technology required to tie the human spaceflight systems together. Specifically, for ESD to achieve EM-1 launch readiness, software developed within each of the programs has to be able to link and communicate with software developed in other programs in order to enable a successful launch. Furthermore, software development continues after hardware development and is often used to help resolve hardware deficiencies discovered during systems integration and test. ESD has defined six critical paths—the path of longest duration through the sequence of activities that determines the program’s earliest completion date—for its programs to reach EM-1, and three are related to software development. These three software critical paths support interaction and communication between the systems the individual programs are developing—SLS to EGS software, Orion to EGS software, and the Integrated Test Laboratory (ITL) facility that supports Orion software and avionics testing as well as some SLS and EGS testing. The other critical paths are development of the Orion crew service module, SLS core stage, and the EGS Mobile Launcher. Because of software’s importance to EM-1 launch readiness, ESD is putting a new method in place to measure how well these software efforts are progressing along their respective critical paths. To that end, it is currently developing a set of “Key Progress Indicators” milestones that will include baseline and forecast dates. Officials indicated that these metrics will allow ESD to better track progress of the critical path software efforts toward EM-1 during the remainder of the system integration and test phase. ESD officials have indicated, however, that identifying and establishing appropriate indicators is taking longer than expected and proving more difficult than anticipated. 
One of the software testing critical paths, the ITL, has already experienced delays that slipped completion of planned software testing from September 2018 until March 2019, a delay of 6 months. Officials told us that this delay was primarily due to a series of late avionics and software deliveries by the European Space Agency for Orion’s European Service Module. The delay in the Orion testing in turn affects SLS and EGS software testing and integration because those activities are informed by the completion of the Orion software testing. Furthermore, some EGS and SLS software testing scheduled to be conducted within the ITL has been re-planned as a result of the Orion delays. The Orion program indicates that it has taken action to mitigate ITL issues as they arise. For example, the European Service Module avionics and software delivery delay opened a 125-day gap between completion of crew module testing and service module testing. Orion officials indicated that the program had planned to proceed directly into testing of the integrated crew module and service module software and systems, but the integrated testing cannot be conducted until the service module testing is complete. As illustrated by figure 11, to mitigate the impact of the delay, Orion officials indicated that the program filled this gap by rescheduling other activities at the ITL such as software integration testing and dry runs for the three programs. These adjustments narrowed the ITL schedule gap from 125 days to 24 days. The officials stated that they will continue to adjust the schedule to eliminate gaps. The other two software critical paths—SLS to EGS and Orion to EGS software development—are also experiencing software development issues. 
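The gap-filling described above is simple schedule arithmetic. In the sketch below, the individual activity durations are hypothetical (the report gives only the 125-day gap and the 24-day remainder, so the rescheduled activities are assumed to total 101 days):

```python
def remaining_gap_days(gap_days: int, backfilled_durations: list) -> int:
    """Idle days left in a schedule gap after backfilling rescheduled work."""
    filled = sum(backfilled_durations)
    if filled > gap_days:
        raise ValueError("backfilled work exceeds the available gap")
    return gap_days - filled

# 125-day ITL gap, filled with hypothetical rescheduled software integration
# tests and dry runs totaling 101 days, leaving the reported 24-day gap.
print(remaining_gap_days(125, [40, 35, 26]))  # → 24
```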
In July 2016, for example, we found that delays in SLS and Orion requirements development, as well as the programs' decisions to defer software content to later in development, were delaying EGS's efforts to develop ground command and control software and increasing cost and schedule. Furthermore, ESD reports show that delays and content deferral in the Orion and SLS software development continue to affect EGS software development and could delay launch readiness. For example, the EGS data throughput risk that both ESD and EGS are tracking is that the ground control system software is currently not designed to process the amount of telemetry it will receive and provide commands to SLS and ground equipment as required during launch operations. EGS officials stated that, if not addressed, the risk is that if there is an SLS or Orion failure, the ground control system software may not display the necessary data to launch operations technicians. EGS officials told us that the reason for the mismatch between the data throughput being sent to the ground control software and how much it is designed to process is that no program was constrained in identifying its data throughput. These officials stated that, in retrospect, they should have established an interface control document to manage the process. The officials also stated that the program is taking steps to mitigate this risk, including defining or constraining the data parameters and buying more hardware to increase the amount of data throughput that can be managed, but will not know if the risk is fully mitigated until additional data are received and analyzed during upcoming tests. For example, EGS officials stated that the green run test will provide additional data to help determine if the steps they are taking address this throughput risk. If the program determines the risk is not fully mitigated and additional software redesign is required, it could lead to schedule delays. 
ESD officials overseeing software development acknowledged that software development for the integrated systems is a difficult task and said they expect to continue to encounter and resolve software development issues during cross-program integration and testing. As we have found in past reviews of NASA and Department of Defense systems, software development is a key risk area during system integration and testing. For example, we found in April 2017 that software delivery delays and development problems with the U.S. Air Force's F-35 program experienced during system integration and testing were likely to extend that program's development by 12 months and increase its costs by more than $1.7 billion. Verification and validation (V&V) is acknowledged by ESD as a top cross-program integration risk that NASA must monitor as it establishes and works toward a new EM-1 launch readiness date. V&V is a culminating development activity prior to launch for determining whether integrated hardware and software will perform as expected. V&V consists of two equally important aspects: verification is the process for determining whether or not a product fulfills the requirements or specifications established for it at the start of the development phase; and validation is the assessment of a planned or delivered system's ability to meet the sponsor's operational need in the most realistic environment achievable during the course of development or at the end of development. Like software development and testing, V&V is typically complex and made even more so by the need to verify and validate how SLS, Orion, and EGS work together as an integrated system. ESD's V&V plans for the integrated system have been slow to mature. In March 2016, leading up to ESD's Build to Sync review, ESD performed an audit of V&V-related documentation for the program CDRs and ESD Build to Sync. 
The audit found that 54 of 257 auditable areas (21 percent) were not mature enough to meet NASA engineering policy guidance for that point in development. According to ESD documentation, there were several causes of this immaturity, including incomplete documentation and inconsistent requirements across the three programs. NASA officials told us that our review prompted ESD to conduct a follow-up and track the status of these areas. As of June 2017—6 months after Build to Sync was completed—53 of the 54 auditable areas were closed, which means these areas are at or have exceeded the CDR level of maturity. NASA officials indicated that the remaining auditable area, which is related to the test plan for the integrated communication network, was closed in August 2017. Nevertheless, other potential V&V issues remain. According to ESD officials, distributing responsibility for V&V across the three programs has created an increased potential for gaps in testing. If gaps are discovered during testing, or if integrated systems do not perform as planned, money and time for modifications to hardware and/or software may be necessary, as well as time for retesting. This could result in delayed launch readiness. As a result, mature V&V plans are needed to ensure there are no gaps in planned testing. ESD officials indicated that a NASA Engineering and Safety Center review of their V&V plans, requested by ESD's Chief Engineer to address concerns about V&V planning, would help define the path forward for maturing V&V plans. V&V issues add to cost and schedule risk for the program because they may take more time and money to resolve than ESD anticipates. In some cases, they may have a safety impact as well. For example, if the structural models are not sufficiently verified, it increases flight safety risks. Each of the programs bases its individual analyses on the models of the other programs. 
As a result, any deficiencies discovered in one can have cascading effects through the other systems and programs. We will continue to monitor ESD's progress in mitigating risks as NASA approaches EM-1. NASA is at the beginning of the path leading to human exploration of Mars. The first phase along that path, the integration of SLS, Orion, and EGS, is likely to set the stage for the success or failure of the rest of the endeavor. Establishing a cost and schedule baseline for NASA's second mission is an important initial step in understanding and gaining support for the costs of SLS, Orion, and EGS, not just for that one mission but for the Mars plan overall. NASA's ongoing refusal to establish this baseline is short-sighted, because EM-2 is part of a larger conversation about the affordability of a crewed mission to Mars. While later stages of the Mars mission are well in the future, getting to that point in time will require a funding commitment from the Congress and other stakeholders. Much of their willingness to make that commitment is likely to be based on the ability to assess the extent to which NASA has met prior goals within predicted cost and schedule targets. Furthermore, as ESD moves SLS, Orion, and EGS from development to integrated operations, its efforts will reach the point when human lives will be placed at risk. Space is a severe and unforgiving environment; the Columbia accident showed the disastrous consequences of mistakes. As the Columbia Accident Investigation Board report made clear, a program's management approach is an integral part of ensuring that human spaceflight is as safe and successful as possible. The report also characterized independence as key to achieving that safety and success. ESD's approach, however, tethers independent oversight to program management by having key individuals wear both hats at the same time. 
As a result, NASA is relying heavily on the personality and capability of those individuals to maintain independence rather than on an institutional process, which diminishes lessons learned from the Columbia accident. We are making the following matter for congressional consideration. Congress should consider requiring the NASA Administrator to direct the Exploration Systems Development organization within the Human Exploration and Operations Mission Directorate to establish separate cost and schedule baselines for work required to support SLS and EGS for Exploration Mission 2 and establish separate cost and schedule baselines for each additional capability that encompass all life cycle costs, to include operations and sustainment. (Matter for Consideration 1) We are making the following recommendation to the Exploration Systems Development organization. Exploration Systems Development should no longer dual-hat individuals with both programmatic and technical authority responsibilities. Specifically, the technical authority structure within Exploration Systems Development should be restructured to ensure that technical authorities for the Offices of the Chief Engineer and Safety and Mission Assurance are not fettered with programmatic responsibilities that create an environment of competing interests that may impair their independence. (Recommendation 1) NASA provided written comments on a draft of this report. These comments are reprinted in appendix II. NASA also provided technical comments, which were incorporated as appropriate. In responding to a draft of our report, NASA partially concurred with our recommendation that the Exploration Systems Development (ESD) organization should no longer dual-hat individuals with both programmatic and technical authority responsibilities. 
Specifically, we recommended that the technical authority structure within ESD should be restructured to ensure that technical authorities for the Offices of Chief Engineer and Safety and Mission Assurance are not fettered with programmatic responsibilities that create an environment of competing interests that may impair their independence. In response to this recommendation, NASA stated that it created the technical authority governance structure after the Columbia Accident Investigation Board report and that the dual-hat technical authority structure has been understood and successfully implemented within ESD. NASA recognized, however, that as the program moves from the design and development phase into the integration and test phase, it anticipates that the ESD environment will encounter more technical issues that will, by necessity, need to be quickly evaluated and resolved. NASA asserted that within this changed environment it would be beneficial for the Engineering Technical Authority role to be performed by the Human Exploration and Operations Chief Engineer (who reports to the Office of the Chief Engineer). NASA stated that over the next year or so, it would solicit detailed input from these organizations and determine how to best support the program while managing the transition to integration and test and anticipated closing this recommendation by September 30, 2018. We agree that NASA should solicit detailed input from key organizations within the agency as it transitions away from the dual-hat technical authority structure to help ensure successful implementation of a new structure. In order to implement this recommendation, however, NASA needs to assign the technical authority role to a person who does not have programmatic responsibilities to ensure they are independent of responsibilities related to cost and schedule performance. 
To fulfill this role, the person may need to reside outside of the Human Exploration and Operations Mission Directorate, and NASA should solicit input from the Office of the Chief Engineer when making this decision to ensure that there are no competing interests for the technical authority. Moreover, in its response, NASA does not address the dual-hat technical authority role for Safety and Mission Assurance. We continue to believe that similar changes for this role would be appropriate as well. Further, in response to this recommendation, NASA makes two statements that require additional context. First, NASA stated that GAO's recommendation was focused on overall Agency technical authority management. While this review involved meeting with the heads of the Office of the Chief Engineer and the Office of Safety and Mission Assurance, the scope of this review and the associated recommendation are limited to ESD. Second, NASA stated, "As you found, we agree that having the right personnel in senior leadership positions is essential for a Technical Authority to be successful regardless of how the Technical Authority is implemented." To clarify, this perspective is attributed to NASA officials in our report and does not represent GAO's position. We are sending copies of this report to NASA's Administrator and to appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
This report assesses (1) the benefits and challenges of the National Aeronautics and Space Administration's (NASA) approach for integrating and assessing the programmatic and technical readiness of Orion, SLS, and EGS; and (2) the extent to which the Exploration Systems Development (ESD) organization is managing cross-program risks that could affect launch readiness. To assess the benefits and challenges of NASA's approach for integrating and assessing the programmatic and technical readiness of its current human spaceflight programs relative to other selected programs, we reviewed and analyzed NASA policies governing program and technical integration, including cost, schedule, and risk. We obtained and analyzed ESD implementation plans to assess the role of ESD in cross-program integration of the three programs. We reviewed the 2003 Columbia Accident Investigation Board report's findings and recommendations related to culture and organizational management of human spaceflight programs, as well as the Constellation program's lessons learned report. We reviewed detailed briefings and documentation from Cross-Program Systems Integration and Programmatic and Strategic Integration teams explaining ESD's approach to programmatic and technical integration, including implementation of systems engineering and integration. We interviewed NASA officials to discuss the benefits and challenges of NASA's integration approach and their roles and responsibilities in managing and overseeing the integration process. We met with the technical authorities and other representatives from the NASA Office of the Chief Engineer, the Office of Safety and Mission Assurance, and Crew Health and Safety; addressed cost and budgeting issues with the Chief Financial Officer; and discussed and documented their roles in executing and overseeing the ESD programs. 
We also interviewed outside subject matter experts to gain their insight into ESD's implementation of NASA's program management policies on the independent technical authority structure. Additionally, we compared historical budget data from the now-cancelled Constellation program to ESD budget data and quantified systems engineering and integration budget savings through preliminary design review, the point at which the Constellation program was cancelled. In addition, we assessed the scope of NASA's funding estimates for the second exploration mission and beyond against best practices criteria outlined in GAO's cost estimating guidebook. We assessed the reliability of the budget data obtained using GAO reliability standards as appropriate. We compared the benefits and challenges of NASA's integration approach to that of other complex, large-scale government programs, including NASA's Constellation and the Department of Defense's Missile Defense Agency programs. To determine the extent to which ESD is managing cross-program risks that could affect launch readiness, we obtained and reviewed NASA and ESD risk management policies; detailed monthly and quarterly briefings; and documentation from Cross-Program Systems Integration and Programmatic and Strategic Integration teams explaining ESD's approach to identifying, tracking, and mitigating cross-program risks. We reviewed Cross-Program Systems Integration systems engineering and systems integration areas as well as Programmatic and Strategic Integration risks, cost, and schedule to determine what efforts presented the highest risk to cross-program cost and schedule. We conducted an analysis of ESD's risk dataset and the programs' detailed risk reports, which list program risks and their potential schedule impacts, including mitigation efforts to date. 
We examined risk report data from Design to Sync to Build to Sync and focused our analyses on risks with current mitigation plans to determine whether risk mitigation plans were proceeding on schedule. We did not analyze risks that were categorized under "Accept," "Candidate," "Research," "Unknown," or "Watch" because these risks were not assigned an active mitigation plan by ESD. To assess the reliability of the data, we reviewed related documentation and interviewed knowledgeable agency officials. We determined the data were sufficiently reliable for identifying risks and schedule delays associated with those risks. We examined ESD integrated testing facility schedules to determine the extent to which they can accommodate deviation in ESD's planned integrated test schedule. We also interviewed program and contractor officials on technical risks, potential impacts, and risk mitigation efforts underway and planned. We conducted this performance audit from August 2016 to October 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Molly Traci (Assistant Director), LaTonya Miller, John S. Warren Jr., Tana Davis, Laura Greifner, Roxanna T. Sun, Samuel Woo, Marie P. Ahearn, and Lorraine Ettaro made key contributions to this report.
NASA is undertaking a trio of closely related programs to continue human space exploration beyond low-Earth orbit. All three programs (SLS, Orion, and EGS) are working toward a launch readiness date of no earlier than October 2019 for the first test flight. Each program is a complex technical and programmatic endeavor. Because all three programs must work together for launch, NASA must integrate the hardware and software from the separate programs into a working system capable of meeting its goals for deep space exploration. The House Committee on Appropriations report accompanying H.R. 2578 included a provision for GAO to assess the progress of NASA's human space exploration programs. This report assesses (1) the benefits and challenges of NASA's approach for integrating these three programs and (2) the extent to which cross-program risks could affect launch readiness. GAO examined NASA policies, the results of design reviews, risk data, and other program documentation and interviewed NASA and other officials. The approach that the National Aeronautics and Space Administration (NASA) is using to integrate its three human spaceflight programs into one system ready for launch offers some benefits, but it also introduces oversight challenges. To manage and integrate the three programs—the Space Launch System (SLS) vehicle; the Orion crew capsule; and supporting ground systems (EGS)—NASA's Exploration Systems Development (ESD) organization is using a more streamlined approach than has been used with other programs, and officials GAO spoke with believe that this approach provides cost savings and greater efficiency. However, GAO found two key challenges to the approach: The approach makes it difficult to assess progress against cost and schedule baselines. SLS and EGS are baselined only to the first test flight. In May 2014, GAO recommended that NASA baseline the programs' cost and schedule beyond the first test flight. 
NASA has not implemented these recommendations nor does it plan to; hence, it is contractually obligating billions of dollars for capabilities for the second flight and beyond without establishing baselines necessary to measure program performance. The approach has dual-hatted positions, with individuals in two programmatic engineering and safety roles also performing oversight of those areas. As the image below shows, this presents an environment of competing interests. These dual roles subject the technical authorities to cost and schedule pressures that potentially impair their independence. The Columbia Accident Investigation Board found in 2003 that this type of tenuous balance between programmatic and technical pressures was a contributing factor to that Space Shuttle accident. NASA has lowered its overall cross-program risk posture over the past 2 years, but risk areas—related to software development and verification and validation, which are critical to ensuring the integrated system works as expected—remain. For example, delays and content deferral in Orion and SLS software development continue to affect ground systems software development and could delay launch readiness. GAO will continue to monitor these risks. Congress should consider directing NASA to establish baselines for SLS and EGS's missions beyond the first test flight. NASA's ESD organization should no longer dual-hat officials with programmatic and technical authority responsibilities. NASA partially concurred with GAO's recommendation and plans to address it in the next year. But NASA did not address the need for the technical authority to be independent from programmatic responsibilities for cost and schedule. GAO continues to believe that this component of the recommendation is critical.
In December 2003 Congress enacted the Century of Aviation Reauthorization Act, laying the foundation for NextGen. The intent of NextGen is to increase air transportation-system capacity, enhance airspace safety, reduce delays experienced by airlines and passengers, lower fuel consumption, and lessen adverse environmental effects from aviation, among other benefits. This effort is a multi-year, incrementally iterative transformation that will introduce new technologies and leverage existing technologies to affect every part of the national airspace system. These new technologies will use an Internet Protocol-based network to communicate. NextGen consists of components that provide digital communications between controllers and pilots, and that also use satellite-based surveillance to aid in airspace navigation. Because of these new communication methods, NextGen increases reliance on integrated information systems and distribution of information, digital communication methods, and global positioning system (GPS) technology that may put the air traffic control system at greater risk for intentional or unintentional information-system failures and breaches. We have previously reported on progress that FAA has made in implementing NextGen. For example, in 2015 we found that FAA faces cybersecurity challenges in at least three areas: (1) protecting air-traffic control information systems, (2) protecting aircraft avionics used to operate and guide aircraft, and (3) clarifying cybersecurity roles and responsibilities among multiple FAA offices. Among other recommendations, we recommended—and FAA concurred—that the agency should assess developing a cybersecurity threat model. Historically, FAA and DOD capabilities have allowed both agencies—as well as NORAD—to monitor and track military aircraft flying in the national airspace. 
For example, FAA maintains two layers of radar—primary surveillance radar and secondary surveillance radar—to track and identify aircraft flying in the national airspace system. Primary surveillance radar identifies the location of aircraft flying in the national airspace by transmitting a signal and calculating the amount of time that passes until that signal bounces off the aircraft and returns to the radar. FAA also uses secondary surveillance radar that transmits an interrogation signal to aircraft flying in the national airspace. A receiver on the aircraft receives the interrogation signal and then transmits a broadcast back to this radar with flight information. Table 1 shows the evolution and capabilities of different transponders that broadcast aircraft information to receivers. The fields identified in the table are critical for identifying and tracking aircraft. Of the different transponder modes and technology, ADS-B Out provides the most precise and comprehensive data. ADS-B Out makes it easier for third parties to identify and track aircraft, as ADS-B Out broadcasts include registration number, precise location, aircraft dimensions, and other information. This additional information reduces the need to identify aircraft using private databases and to determine aircraft location by comparing time difference of arrival among receivers. The content of these aircraft broadcasts varies depending on the type of transmitter providing the information from the aircraft. For example, earlier broadcast systems, including the Mode 3/A and Mode C systems, transmit a temporary four-digit transmit code (commonly referred to as a squawk code) assigned by air traffic control that facilitates aircraft tracking during a single flight. 
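The round-trip timing principle behind primary surveillance radar reduces to one line of arithmetic: the range is half the distance the signal travels at the speed of light. A minimal sketch (the example timing value is illustrative):

```python
SPEED_OF_LIGHT = 299_792_458  # meters per second

def radar_range_m(round_trip_seconds: float) -> float:
    """One-way range to a target from a primary radar's round-trip echo time.

    The pulse travels out to the aircraft and back, so the range is half
    the total distance covered at the speed of light.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A 1-millisecond echo delay places an aircraft roughly 150 km from the radar.
print(f"{radar_range_m(1e-3) / 1000:.1f} km")  # → 149.9 km
```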
Since FAA was the sole source of flight data for systems preceding Mode S, the agency could filter out military aircraft flight information for security reasons before providing information to the public about other aircraft flying in the national airspace. Mode S transponders provide more information than do the Mode 3/A and Mode C transponders. For example, the Mode S transponder broadcast identifies an aircraft-specific, 24-bit fixed address (commonly known as the ICAO address) assigned under International Civil Aviation Organization (ICAO) standards. An aircraft retains this fixed address based on its registration, which facilitates aircraft identification until the aircraft is reregistered and receives a new ICAO address. FAA and aviation groups have reported that with the proliferation of commercial and amateur receivers, the public can now track individual aircraft by receiving the aircraft's ICAO address, squawk code, and altitude. In addition, these entities have reported that since aviation groups and hobbyists have connected the receivers, the networked receivers can calculate and identify the latitude and longitude of the aircraft they are tracking. In addition, according to these reports, some groups maintain aircraft information databases and receiver networks that can identify aircraft by ICAO address and can locate aircraft by comparing the time difference of arrival of Mode S signals between three or more receivers. Using data derived from this work, interested parties—including adversaries (for example, foreign intelligence entities, terrorists, and criminals)—can identify military aircraft by type and registration number, and can track the aircraft while in flight through Mode S fixed address broadcasts. Using this readily available public information, we were able to track various kinds of military aircraft that were equipped with Mode S transponders. ADS-B consists of two distinct aircraft information services, ADS-B Out and ADS-B In. 
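The time-difference-of-arrival technique those receiver networks use can be illustrated with a toy solver. The sketch below is a deliberately naive 2-D grid search under assumed receiver positions (real multilateration systems use far more sophisticated estimators and account for geometry, noise, and altitude), but it shows how three receivers' relative arrival times pin down a transmitter:

```python
import math

C = 299_792_458.0  # radio propagation speed, meters per second

def toa(emitter, receiver):
    """Signal time of arrival (seconds) at a receiver from an emitter."""
    return math.dist(emitter, receiver) / C

def locate_by_tdoa(receivers, arrival_times, extent=20_000, step=100):
    """Naive grid search for the position that best explains the measured
    time differences of arrival, using the first receiver as reference."""
    measured = [t - arrival_times[0] for t in arrival_times]
    best, best_err = None, float("inf")
    for x in range(0, extent, step):
        for y in range(0, extent, step):
            cand = (float(x), float(y))
            ref = toa(cand, receivers[0])
            err = sum((toa(cand, r) - ref - m) ** 2
                      for r, m in zip(receivers, measured))
            if err < best_err:
                best, best_err = cand, err
    return best

# Three hypothetical ground receivers and a transponder somewhere in between.
receivers = [(0.0, 0.0), (15_000.0, 0.0), (0.0, 15_000.0)]
emitter = (7_300.0, 4_100.0)
times = [toa(emitter, r) for r in receivers]
print(locate_by_tdoa(receivers, times))  # recovers the emitter position
```

Because only time *differences* are used, no receiver needs to know when the aircraft transmitted, which is why passively networked hobbyist receivers suffice.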
As previously stated, ADS-B Out technology is one of the main components of FAA's NextGen effort. It is a performance-based surveillance technology using GPS-enabled satellites to produce flight information, such as an aircraft's location and velocity, and according to FAA, it is more precise than radar. These precise data provide air traffic controllers and pilots with more accurate information to keep aircraft safely separated in the national airspace. This technology combines aircraft avionics, a positioning capability, and ground infrastructure to enable accurate transmission of information from aircraft to the air traffic control system. This technology periodically transmits information without a pilot or operator involved (that is, Automatic); collects information from GPS or other suitable navigation systems (that is, Dependent); provides a method of determining 3-dimensional position and identification of aircraft, vehicles, or other assets (that is, Surveillance); and transmits the information available to anyone with the appropriate receiving equipment (that is, Broadcast). Using this readily available public information, we were able to track various kinds of military aircraft that were equipped with ADS-B transponders. ADS-B In is the technology that enables receivers to have direct access to information broadcast through ADS-B Out transponders. FAA's final rule requiring all aircraft that fly in certain categories of airspace to equip with ADS-B by January 1, 2020, applies to the ADS-B Out technology. FAA has not issued a rule or requirement for aircraft to equip with the ADS-B In technology, as of July 2017. 
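How little equipment is needed to read these broadcasts can be seen from the message layout itself: an ADS-B Out transmission rides on a 112-bit Mode S extended squitter whose first 5 bits give the downlink format and whose next 3 bytes carry the aircraft's fixed 24-bit ICAO address. A sketch using a sample frame that circulates widely in ADS-B decoding tutorials (decoding the position fields themselves requires the more involved compact position reporting algorithm, omitted here):

```python
def squitter_header(frame_hex: str):
    """Pull the downlink format and ICAO address out of a 112-bit
    Mode S extended squitter frame given as 28 hex characters."""
    raw = bytes.fromhex(frame_hex)
    if len(raw) != 14:
        raise ValueError("extended squitter frames are 112 bits (14 bytes)")
    df = raw[0] >> 3                # downlink format: 17 marks ADS-B
    icao = raw[1:4].hex().upper()   # 24-bit fixed aircraft address
    return df, icao

# Sample frame commonly used in ADS-B decoding tutorials.
df, icao = squitter_header("8D4840D6202CC371C32CE0576098")
print(df, icao)  # → 17 4840D6
```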
However, according to representatives from Airlines for America, an airline industry advocacy organization, airlines have begun to install the ADS-B In capability on commercial aircraft due to the benefits they anticipate from the capability (for example, the ability of passenger airliners to reduce separation standards to save time and reduce fuel consumption). In addition, according to Air Force officials, the Air Force plans to install ADS-B In on future KC-46 transport/tanker aircraft. This report focuses on the ADS-B Out requirement when referencing ADS-B technology unless otherwise noted. According to DOD and FAA documents and officials, FAA has identified ADS-B implementation as providing an opportunity to save costs by divesting a number of secondary-surveillance radars. According to FAA officials, as of April 2017 the agency was re-evaluating its original ADS-B backup strategy and the need for retaining additional secondary-surveillance radars. According to these officials, FAA plans to maintain all high-altitude secondary-surveillance radars and the low-altitude secondary-surveillance radars around 30 or more of the busiest airports. The FAA and DOD are to cooperate in order to regulate airspace use. Specifically, the FAA is responsible for providing air navigation services, including air traffic control across most of the United States, and is leading the overall NextGen efforts in the United States. The FAA's air traffic control system works to prevent collisions involving aircraft operating in the national airspace system, while also facilitating the flow of air traffic and supporting national security and homeland defense missions. In addition, in accordance with International Civil Aviation Organization guidelines, the FAA has categorized airspace as controlled, uncontrolled, or not used in the United States.
According to the ADS-B Out rule, after January 1, 2020, no person may operate an aircraft in certain categories of airspace defined by the rule unless otherwise authorized by air traffic control authorities. DOD conducts its missions within the national airspace system both as an aircraft operator and, as delegated by the FAA, as a provider of air traffic control and other air navigation services. DOD has the authority to certify its own aircraft, manage airspace, and provide air traffic control-related services in accordance with FAA requirements. DOD also provides guidance to FAA concerning security matters pertaining to the national airspace system. DOD is responsible for ensuring that DOD components, such as the military services, have sufficient access to airspace to meet security requirements, and that civilian and military aircraft can operate safely both domestically and abroad. DOD also releases airspace to the FAA when it does not need the space for military purposes. The FAA also works with DOD to ensure aviation safety between civil and military aircraft. The FAA designates airspace over certain parts of the United States as Special Use Airspace because the areas may contain prohibited airspace, restricted airspace, warning areas, or alert areas, in which it may be hazardous for civil aircraft to operate. Special Use Airspace allows military aircraft to operate safely in separate, clearly defined airspace in order to conduct missions in support of the National Security Strategy and the National Military Strategy. The FAA also issues safety briefings that could identify military-protected areas under temporary flight restrictions, to prevent civil pilots from flying into that airspace. These briefings also include information such as flight safety advice and information on air traffic technology, such as ADS-B.
The FAA also shares radar information with NORAD to support the defense of North America over areas such as the National Capital Region surrounding Washington, D.C. The FAA is responsible for providing airspace navigation services within the United States and has a particular entity—the FAA Office of NextGen—that directs its NextGen requirements. In 2007 the Deputy Secretary of Defense designated the Air Force as the lead service for representing DOD and for leading and coordinating efforts across DOD. To accomplish this responsibility, the Air Force established a Lead Service Office, hereinafter referred to as the DOD Lead Service Office. These and numerous other entities have a role in implementing NextGen and ADS-B, as shown in table 2 below. Since 2008, DOD and FAA have identified a variety of ADS-B-related risks that could adversely affect military security and missions. While DOD and FAA have identified some potential mitigations for these risks, the departments have not approved any solutions. Documents we reviewed and officials we met with identified a variety of operations and physical security risks that could adversely affect DOD missions. These risks arise from information broadcast by ADS-B itself, as well as from potential ADS-B vulnerabilities to electronic warfare- and cyber-attacks, and from the potential divestment of secondary-surveillance radars. Information broadcast from ADS-B transponders poses an operations security risk for military aircraft. For example, a 2015 assessment that RAND conducted on behalf of the U.S. Air Force stated that the broadcasting of detailed and unencrypted position data for fighter aircraft, in particular for a stealth aircraft such as the F-22, may present an operations security risk. The report noted that information about the F-22's precise position is classified Secret, which means that unauthorized disclosure of this information could reasonably be expected to cause serious damage to the national security.
Similarly, in 2012 MITRE issued a report on behalf of the DOD Lead Service Office that identified a number of risks to ADS-B-equipped aircraft, including the ability to track movement in and out of restricted airspaces and changes in operations. In addition to these documents, DOD officials identified a number of increased operations and physical security risks associated with aircraft equipped with ADS-B technology. In DOD's 2008 comments about FAA's draft rule requiring ADS-B Out technology, the department informed FAA that DOD aircraft conducting special flights for sensitive missions in the United States could be identified and potentially compromised due to ADS-B technology. Such sensitive missions could include low-observable surveillance, combat air patrol, counter-drug, counter-terrorism, and key personnel transport. While some military aircraft are currently equipped with Mode S transponders that provide individuals who have tracking technology the altitude of the aircraft, ADS-B poses an increased risk. Specifically, according to documents we reviewed and officials we met with, a confluence of the following three issues has led to ADS-B technology presenting more risks to DOD aircraft, personnel, equipment, and operations: Additional information. The additional information provided through ADS-B technology—including the aircraft's precise location, velocity, and airframe dimensions—increases both direct physical risks to DOD aircraft, personnel, and equipment, and long-term risks to DOD air operations. Accessibility of information. ADS-B technology also introduces risks to aircraft, personnel, equipment, and operations, because it provides information to the public that was not previously accessible. FAA filters information about DOD's flights so that the information is not available to the public via any FAA data feed.
According to FAA officials, this filtering was effective for protecting such information for Mode-S equipped DOD aircraft until the 2012 timeframe, when the capability of third-party networked receivers started to allow position determination for such aircraft. With ADS-B, aircraft location and other information is broadcast from the aircraft, where FAA cannot filter it. While individuals and groups could obtain additional information about DOD flights operating with Mode S, such as an aircraft’s fixed address, information such as geographic location and velocity was not included in broadcasts. Individuals could estimate location and velocity of DOD flights by locating the signal through privately owned receiver networks. By equipping military aircraft with ADS-B technology, individuals and groups would receive additional identifiers, location information, and airframe information through aircraft broadcasts and, as a result, could identify and track aircraft without the use of fixed address databases and with less receiver infrastructure. Historical data. ADS-B technology better enables individuals and groups to track flights in real time and use computer programs to log ADS-B transmissions over time. Therefore, individuals or groups could observe flight paths in detail, identify patterns-of-life, or counter or exploit DOD operations. While NORAD and DOD officials told us that they will benefit from information provided by ADS-B technology, NORAD, DOD, and professional organizations’ documents and officials also noted that electronic warfare- and cyber-attacks—and the potential divestment of secondary-surveillance radars as a result of reliance on ADS-B—could adversely affect current and future air operations. 
For example, a 2015 Institute of Electrical and Electronics Engineers article about ADS-B stated that ADS-B is vulnerable to an electronic-warfare attack—such as a jamming attack—whereby an adversary can effectively disable the sending and receiving of messages between an ADS-B transmitter and receiver by transmitting a higher-power signal on the ADS-B frequencies. The article notes that while jamming is a problem common to all wireless communication, the effect is severe in aviation due to the system's inherently wide-open spaces, which are impossible to control, as well as to the importance and criticality of the transmitted data. As a stand-alone method, jamming could create problems within the national airspace. Jamming can also be used to initiate a cyber-attack on aircraft or ADS-B systems. According to the 2015 Institute of Electrical and Electronics Engineers article, adversaries could use a cyber-attack to inject false ADS-B messages (that is, create "ghost" aircraft on the ground or in the air); delete ADS-B messages (that is, make an aircraft disappear from air traffic controller screens); and modify messages (that is, change the reported path of the aircraft). The article states that jamming attacks against ADS-B systems would be simple, and that ADS-B data do not include verification measures to filter out false messages, such as those used in spoofing attacks. FAA officials stated that the agency is aware of these possible attacks, and that it addresses such vulnerabilities by validating ADS-B data against primary- and secondary-surveillance radar tracks. Both FAA and DOD have identified a potential solution to address this vulnerability. However, this solution has not been tested, and as of November 2017 no testing had been scheduled.
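The jamming asymmetry the article describes comes down to a simple link-budget comparison: received power falls off with distance, so a low-power jammer close to a receiver can overpower a legitimate, higher-power transmitter far away. The sketch below applies the standard free-space path loss formula at the 1090 MHz ADS-B frequency; the transmit powers and distances are hypothetical, and real propagation also involves antenna gains, terrain, and receiver characteristics.

```python
# Illustrative link-budget sketch of why jamming works: received power falls
# with distance (free-space path loss), so a nearby jammer can overpower a
# distant, legitimate ADS-B transmitter at the receiver. Transmit powers and
# distances below are hypothetical.
import math

FREQ_HZ = 1.09e9          # ADS-B 1090 MHz downlink frequency
C = 299_792_458.0         # speed of light, m/s

def fspl_db(distance_m, freq_hz=FREQ_HZ):
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

def received_dbm(tx_watts, distance_m):
    """Received power at the given distance, ignoring antenna gains."""
    tx_dbm = 10 * math.log10(tx_watts * 1000)   # watts -> dBm
    return tx_dbm - fspl_db(distance_m)

# Legitimate transponder: ~125 W from an aircraft 100 km away.
signal = received_dbm(125, 100_000)
# Hypothetical jammer: only 10 W, but just 5 km away.
jammer = received_dbm(10, 5_000)
print(f"signal {signal:.1f} dBm, jammer {jammer:.1f} dBm, "
      f"J/S {jammer - signal:+.1f} dB")
```

Under these assumed numbers the nearby jammer arrives tens of dB stronger than the distant transponder, which is the core of the vulnerability the article describes.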
In addition to electronic warfare- and cyber-attacks, both NORAD and DOD officials expressed concerns that the air defense and military air traffic control missions would be affected if FAA were to divest secondary-surveillance radars following ADS-B implementation. According to DOD and FAA documents and officials, FAA has identified ADS-B implementation as an opportunity to save costs by divesting a number of secondary-surveillance radars. However, according to NORAD and DOD officials, in those locations where FAA divests itself of radars, the missions would be at higher risk if an aircraft operator were to turn off the aircraft's ADS-B technology, if an adversary were to conduct an electronic or cyber-attack on the ADS-B system, or if the ADS-B system were to experience a technical failure. According to NORAD command officials, the command relies on information from FAA radars to monitor air traffic in the national airspace system. If an aircraft is operating without ADS-B, if a GPS or ADS-B system fails, or if an adversary has jammed an aircraft's GPS signal or ADS-B transmissions, then the command will have to rely on primary- and secondary-surveillance radar to track the aircraft's location. FAA officials stated that FAA is chiefly responsible for air safety, while NORAD and DOD are chiefly responsible for air defense, and that they believe there will be sufficient radar coverage for DOD to conduct its missions. FAA officials stated that they will maintain sufficient backup systems to ensure air traffic safety for all flights, and will maintain radar in excess of their needs to support NORAD's missions. FAA officials stated that they will maintain all primary-surveillance radar, all high-altitude secondary-surveillance radar, and low-altitude secondary-surveillance radar near at least 30 major airports.
However, according to NORAD and DOD officials, FAA has not proposed an updated legacy primary- and secondary-surveillance radar divestment plan since 2012 for use by NORAD and DOD in assessing potential effects on the mission. NORAD is a bi-national command that requires support from U.S. federal agencies—not just DOD—and relies on FAA radar to support its mission, and it will need to ensure that sufficient air surveillance resources are in place. Although DOD, FAA, and other organizations have identified risks to military security and missions since 2008, DOD and FAA have not approved any solutions to address these risks. This is because DOD and FAA have focused on equipping military aircraft with ADS-B technology and have not focused on solving or mitigating security risks from ADS-B. The approach being taken by FAA and DOD will not address key security risks that have been identified, and delays in producing an interagency agreement have significantly reduced the time available to implement any agreed-upon solutions before January 1, 2020, when the full deployment of ADS-B Out is required. Federal internal control standards state that federal agencies should make risk-based decisions in a timely manner. Specifically, OMB Circular A-123 states that management should evaluate and document internal control issues and determine appropriate corrective actions for internal control deficiencies on a timely basis. In the case of equipping military aircraft with ADS-B technology and addressing any risks associated with it, DOD and FAA have shared responsibility. In 2008 DOD informed FAA that military aircraft would need special accommodations to the ADS-B Out rule due to national security concerns, such as sensitive missions and electronic warfare vulnerabilities. 
In 2010 FAA responded to DOD's comments on the draft ADS-B Out rule, stating that the agency would collaborate with departments or agencies, including DOD and the Department of Homeland Security, to develop memorandums of agreement to accommodate their national defense mission requirements while supporting the needs of all other national airspace system users. Since that time, DOD components have identified actions that could mitigate some of the risks. For example, DOD and others have identified such mitigations as masking DOD aircraft identifiers, maintaining the current inventory of primary-surveillance radars, allowing pilots to turn off ADS-B broadcasts, and seeking an exemption from installing ADS-B technology on select military aircraft (for example, fighter and bomber aircraft). However, as of June 2017—almost 7 years after FAA acknowledged that it would address DOD's concerns (and less than 3 years before full deployment of ADS-B Out is required)—DOD and FAA had not approved any solutions to these risks. DOD's Lead Service Office and FAA have focused on developing a memorandum of agreement that they hope will create a framework for future collaboration at the local level. However, our work and that of NORAD and other DOD components identified a number of limitations in the DOD Lead Service Office's and FAA's dependence on this draft memorandum of agreement. Specifically, the draft memorandum does not address the following: (1) The details necessary to establish solutions or mitigations between DOD and FAA for identified security risks. The draft memorandum focuses on equipage of ADS-B technology on military aircraft, cost estimates, and agency and office responsibilities. DOD acknowledges that it will equip military aircraft with ADS-B technology and operate to the greatest extent possible by the January 1, 2020, compliance date. However, the draft memorandum does not identify solutions for the identified operations and physical security risks. (2) The electronic warfare and cyber-attack concerns, and the effect on sensitive defense missions, that DOD has identified. (3) The flexibility required by NORAD to support freedom of movement within the continental United States, Alaska, and Canada airspace for military missions. The draft memorandum would place negotiating accommodations for NORAD's bi-national mission at the local level—an act that NORAD officials characterized as unfeasible because military aircraft supporting NORAD missions require uninhibited airspace access throughout the United States and Canada, as a response may be required anywhere and at any time. According to NORAD officials, the command would incur a significant burden to finalize memorandums of agreement with more than 600 air traffic control facilities and ensure commonality with all facilities in the continental United States and Alaska. Furthermore, NORAD officials stated that these missions should not be limited by local restrictions created by the ADS-B Out rule. For example, DOD aircraft flying over one state while supporting an Operation Noble Eagle mission could be stationed at a military base in another state and thus not have an agreement with local FAA controllers. (4) Potential mission risks associated with the divestment of secondary-surveillance radars. Delays in the completion of a memorandum of agreement have exacerbated uncertainty as to whether security issues will be addressed in any manner. DOD and FAA have met to discuss the existing draft memorandum of agreement since December 2016. In April 2017 officials from the DOD Lead Service Office told us that they expected DOD and FAA to finalize the memorandum of agreement by June 2017; however, in May 2017 DOD officials informed us that the estimated completion date had slipped to February 2018. A significant amount of work will likely need to be accomplished between the eventual approval of the memorandum and its timely implementation.
For example, FAA officials acknowledged that the agency would need to issue or update internal guidance once the memorandum is signed before local FAA officials could negotiate and agree to arrangements with local military commanders. Similarly, the draft memorandum, if approved, would place a significant burden on local DOD entities to negotiate agreements. For example, the Army expressed concerns that local negotiations—at 76 locations, according to Army estimates—would take from 1 to 2 years to complete after FAA and DOD have signed the memorandum of agreement. Army officials also highlighted concerns that local FAA air traffic controllers may not enter into agreements with Army units, or that local agreements will be contingent upon the density of local air traffic or the personalities of those negotiating the agreements. Additionally, assuming that actions are agreed upon among the key stakeholders—DOD, FAA, and NORAD—to resolve or mitigate the identified security risks, these stakeholders will need sufficient time to implement the actions, given the complexity of the ADS-B vulnerabilities and of the potential mitigations for operations and physical security risks, electronic warfare and cyber-attack risks, and the potential effects of secondary-surveillance radar divestment. As of June 2017, DOD and FAA had not identified any other solutions that could address the risks and concerns identified by DOD and others since 2008. Unless FAA and DOD approve one or more solutions that address all the risks associated with ADS-B technology, DOD security and military missions could face unmitigated risks. These include physical, cyber-attack, and electronic warfare security risks, as well as risks associated with divesting secondary-surveillance radars.
Furthermore, unless FAA and DOD focus on the security risks of ADS-B and approve one or more solutions in a timely fashion, they may not have time to plan for and execute any technical, programmatic, or policy actions that may be necessary before all of DOD's aircraft are required to be equipped with ADS-B technology on January 1, 2020. Of the eight tasks associated with the implementation of ADS-B Out technology in the 2007 DOD NextGen memorandum—issued by the Deputy Secretary of Defense to ensure that the NextGen vision for the future national airspace system met DOD's requirements and that DOD's resources were appropriately managed—DOD has implemented two, has partially implemented four, and has not implemented two. Specifically, we found that DOD has implemented the following two tasks: Establishing a Joint Program Office. The Deputy Secretary of Defense directed the Secretary of the Air Force to establish and provide administrative support for a DOD Joint Program Office for NextGen. According to the 2007 NextGen memorandum, the office is responsible for coordinating DOD activities related to the NextGen effort, facilitating technology transfer for those research and development activities with potential NextGen application, and advocating for DOD interests, requirements, and capabilities in NextGen. The Air Force established a Joint Program Office to provide services to the entire military aviation community on communication navigation surveillance/air traffic management issues in various capacities. Officials from the DOD Joint Program Office told us that the office has tested various avionic systems for methods of meeting ADS-B requirements. The office has also established an Internet portal for the services to order avionics, including those associated with ADS-B technology. Appointing a DOD representative to the FAA's interagency Joint Planning and Development Office.
The 2007 NextGen memorandum directed that the Secretary of the Air Force appoint a DOD representative to the Joint Planning and Development Office's board of directors responsible for assisting in the development and coordination of DOD-wide policies and decisions concerning NextGen. In March 2012 DOD's Lead Service Office appointed an Air Force officer who also manages the DOD Lead Service Office as the DOD representative to the FAA's interagency Joint Planning and Development Office. DOD partially implemented the following four tasks: Validating NextGen program requirements. The 2007 NextGen memorandum directed that the Secretary of the Air Force document and seek validation for NextGen program requirements through the Joint Capabilities Integration Development System process. The Air Force took the initial step in having its NextGen program requirements validated through DOD's Joint Capabilities Integration Development System process in October 2014. However, the focus of the assessment was on the Air Force's requirements and not those of the other military services or components. This is not fully consistent with the 2007 memorandum, which states that the Air Force—as the lead service—should integrate the needs and requirements of the DOD components into cohesive plans and policies for inclusion in NextGen joint planning and development. Establishing guidance on DOD NextGen responsibilities and objectives. The 2007 NextGen memorandum directed the Assistant Secretary of Defense for Homeland Defense and Global Security, the DOD Chief Information Officer, and the Director of Administration, in consultation with the DOD Lead Service, to submit a proposed DOD directive within 180 days specifying the department's objectives with respect to NextGen and the continuing roles and responsibilities of the Lead Service and the DOD Policy Board on Federal Aviation.
In 2013, about 5 years after the original due date for the 180-day requirement, DOD updated DOD Directive 5030.19, DOD Responsibilities on Federal Aviation. While the updated directive references the responsibilities of the DOD Policy Board on Federal Aviation and the Secretary of the Air Force, per the 2007 NextGen memorandum, the directive does not specify DOD's objectives with respect to NextGen, as required by the memorandum. Developing an initial plan defining actions, responsibilities, and milestones for DOD's NextGen efforts: The 2007 NextGen memorandum required DOD's Lead Service, in coordination with the principal members of the DOD Policy Board on Federal Aviation, to develop an initial plan defining actions, responsibilities, and milestones for DOD's participation in the NextGen efforts and FAA's Joint Planning and Development Office. This initial plan was to include an implementation plan for the NextGen Joint Program Office and was to be updated semiannually. In 2013 the Air Force, in executing its responsibilities as Lead Service, issued a DOD NextGen Implementation Plan to describe the strategy, principles, and actions for the transition of DOD aviation operations (air and ground) to the national airspace system environment defined by FAA in its NextGen Implementation Plan. We found that the 2013 plan identified responsibilities of DOD components and established indicators meant to give a sense of progress made in NextGen implementation. However, the plan did not include detailed transition planning for ADS-B and was not updated semiannually, as required.
Incorporating NextGen into the planning, budgeting, and programming process: According to the 2007 NextGen memorandum, the Secretary of the Air Force is to coordinate DOD-wide NextGen planning, budgeting, and programming guidance in conjunction with the Under Secretary of Defense for Policy and the Director of Program Analysis and Evaluation for consideration in the formulation of planning and programming guidance documents. The memorandum also directed DOD components to coordinate with the Air Force on NextGen programs they agreed to support using inter-service memorandums of understanding, and to fund procurement through the services' annual program objective memorandum processes. DOD provided evidence that the military departments used the program objective memorandum process to fund ADS-B Out. However, the DOD Lead Service Office did not provide department-wide planning, budgeting, and programming guidance for ADS-B or any other NextGen elements to DOD components. Similarly, DOD did not provide any inter-service memorandums of understanding that would document NextGen programs that the services agreed to fund. According to officials from the DOD Lead Service Office, this office is not responsible for planning, budgeting, and programming because the office is organizationally located within the Air Force Headquarters Office of the Deputy Chief of Staff for Operations. However, while the office may not be responsible for planning, budgeting, and programming within the Air Force, the office can issue—or coordinate the issuance of—such guidance, as directed by the Deputy Secretary of Defense.
DOD had not taken significant action or fully implemented the following two actions: Integrating NextGen requirements into plans and policies: The Secretary of the Air Force, in executing the service’s responsibilities as Lead Service, did not integrate the needs and requirements of DOD components related to ADS-B into cohesive plans and policies for inclusion in NextGen joint planning and development, as directed by the Deputy Secretary of Defense in 2007. According to officials from the DOD Lead Service Office, they met the intent of these tasks through the 2012 United States Air Force Next Generation Air Transportation System Keystone Document, the 2013 Department of Defense (DOD) Mid-Term NextGen Concept of Operations, and the 2013 Department of Defense (DOD) Mid-Term Next Generation (NextGen) Implementation Plan. However, the Air Force NextGen Keystone Document applies to the Air Force and not to NORAD or other DOD components. In addition, the DOD Mid-Term NextGen Concept of Operations and the DOD Mid-Term NextGen Implementation Plan do not discuss planning for ADS-B Out requirements, which are critical to NextGen. Providing periodic and recurring NextGen progress reports: The Assistant Secretary of Defense for Homeland Defense and Global Security did not provide periodic and recurring NextGen progress reports to the Deputy Secretary of Defense, as instructed in the 2007 NextGen memorandum. According to the memorandum, the Assistant Secretary was designated as the principal staff assistant for NextGen and was responsible for oversight, support, and advocacy for the lead service with respect to the interagency and Joint Planning and Development Office. 
Officials from the Office of the Deputy Assistant Secretary of Defense for Homeland Defense Integration and Defense Support of Civil Authorities acknowledged that the Office of the Assistant Secretary of Defense for Homeland Defense and Global Security had not tracked ADS-B implementation or provided progress reports to the Deputy Secretary of Defense, with the exception of advocating for ADS-B installation exemptions for aircraft that could not comply with the mandate, advocating for retention of ground-based radars, and some minimal advocacy related to compliance with the FAA ADS-B Out rule. DOD could not provide a clear explanation for those requirements that we determined had not been fully implemented. Officials from the DOD Lead Service Office provided a number of potential reasons why the memorandum's tasks might not have been fully implemented. For example, as noted earlier, officials stated that other documents captured those requirements. Further, officials told us they believe that implementation of many of the preceding tasks was accomplished through other means, although our analysis concluded that each such task was either not implemented or only partially implemented, as noted previously. These officials also noted that—although there is no expiration date on the 2007 NextGen memorandum—many DOD officials consider such memorandums to be applicable for 12 to 18 months. In addition, DOD Lead Service Office officials noted that many DOD components had not assigned a high level of priority to NextGen implementation.
As a result of DOD's not fully implementing the 2007 NextGen memorandum—including developing or revising a DOD directive that specifies DOD's objectives with respect to NextGen, issuing an implementation plan that includes detailed transition planning for ADS-B and is updated semiannually, and providing recurring progress reports to the Deputy Secretary of Defense—DOD components have lacked direction and cohesion while trying to address FAA's requirement to equip military aircraft. For example: Officials from the Air Force Life Cycle Management Center's Fighters and Bombers Directorate told us that they have not been provided any guidance. The directorate does not intend to install ADS-B technology on Air Force fighters or bombers until it receives DOD guidance. Yet the deadline to equip DOD aircraft that will fly in the national airspace remains January 1, 2020. DOD does not have a coordinated or accurate schedule for equipping military aircraft with ADS-B technology. Although DOD submitted a schedule to Congress in June 2015, officials from the DOD Lead Service Office told us that the timeframes for that plan were no longer accurate, and that the plan would be updated as part of the memorandum of agreement in February 2018. Some DOD components have installed or plan to install civilian GPS receivers on their aircraft to meet FAA's ADS-B technical requirements. According to DOD officials, DOD aircraft equipped with commercial GPS receivers will not be as protected from GPS security issues as they would have been had they used military GPS receivers. According to officials from the Office of the DOD Chief Information Officer, the office with primary responsibility for GPS receiver security policy, no one within DOD—including the DOD Lead Service Office or other DOD components—had made them aware that DOD components were installing civilian receivers on aircraft.
According to an official within the DOD Lead Service Office, neither the Office of the Assistant Secretary of Defense for Homeland Defense and Global Security nor any other element of the Office of the Under Secretary of Defense for Policy was engaged in discussions regarding the draft memorandum of agreement with the DOD Lead Service Office and FAA; as a result, the Secretary of Defense’s senior policy advisor may not be aware of provisions that may be incorporated in the agreement. For example, the draft memorandum of agreement contains a provision that could result in the department’s being financially responsible for sharing the costs of sustaining secondary-surveillance radars. According to a 2007 FAA document, it will cost FAA approximately $442 million to maintain these radars from fiscal years 2017 to 2035. If DOD components do not fully implement key tasks that would help ensure that DOD’s requirements are met in the future NextGen system and that DOD resources are appropriately managed—such as those tasks that the Deputy Secretary of Defense originally directed in 2007, or any tasks that the Secretary deems appropriate—DOD may risk less efficient and less effective implementation of NextGen requirements, increased implementation costs, or missed opportunities to address operations risks. The NextGen system has the potential to increase the efficiency and effectiveness of the nation’s expanding air traffic. As with many such procedural and technological innovations, DOD stands to benefit from NextGen’s vision. As with all such electronic and cyber systems in the information age, this benefit must be balanced with sufficient consideration of the operations and security effects for DOD. DOD and FAA have not approved any solutions that address risks resulting from ADS-B on DOD aircraft—including operations, physical, cyber, and electronic warfare security risks, as well as risks associated with divesting secondary-surveillance radars. 
Unless DOD and FAA focus their efforts on the security aspects of ADS-B on DOD aircraft and produce one or more solutions to these risks, DOD aircraft and missions will be exposed to unmitigated risks that could jeopardize safety, security, and mission success. Also, unless DOD fully implements the tasks that would facilitate consistent, long-term planning and implementation of NextGen throughout the department, DOD’s full integration into the NextGen system and the integrity and security of DOD’s forces and missions will be hindered. Given the amount of time that has transpired since DOD initially raised security concerns in 2008 and the amount of time it will take to formalize, operationalize, and train employees to implement any agreements prior to the January 1, 2020, deadline, it is critical that both DOD and FAA make this a high priority. We are making two recommendations: one to the Secretaries of Defense and Transportation, and one to the Secretary of Defense. We recommend that the Secretaries of Defense and of Transportation address ADS-B Out security concerns by approving one or more solutions that address ADS-B Out-related security risks or by incorporating mitigations for security risks into the existing draft memorandum of agreement. These approved solutions should address operations, physical, cyber-attack, and electronic warfare security risks, as well as risks associated with divesting secondary-surveillance radars. The solution or mitigations should be approved as soon as possible in order to allow sufficient time for implementation. We recommend that the Secretary of Defense direct DOD components to implement key tasks that would facilitate consistent, long-term planning and implementation of NextGen—such as those tasks that the Deputy Secretary of Defense originally directed in 2007, or any tasks that the Secretary deems appropriate based on a current assessment of the original tasks. 
We provided a draft of the report to DOD and the Department of Transportation for review and comment. Written comments from DOD on the classified draft and from the Department of Transportation on this report are reprinted in their entirety in appendixes II and III, respectively, and summarized below. DOD and the Department of Transportation also provided technical comments, which we incorporated as appropriate. The Department of Transportation concurred, and DOD partially concurred, with the first recommendation to approve one or more solutions that address ADS-B Out security risks or to incorporate mitigations for security risks into the existing draft memorandum of agreement, with these solutions addressing operations, physical, cyber-attack, and electronic warfare security risks as well as risks associated with divesting secondary-surveillance radars. In its written comments, the Department of Transportation stated that it has recently developed and is now in the process of validating military flight tracking risk mitigation solutions that are technologically viable and operationally effective. Both the Department of Transportation and DOD stated that they would approve one or more solutions to address ADS-B Out-related security risks. For example, both departments stated that, among other actions, they would complete a memorandum of agreement between FAA and DOD that would incorporate security concerns identified in the report. DOD estimated that the memorandum of agreement will be signed in February 2018. We believe the steps identified by both the Department of Transportation and DOD, if implemented as planned, would meet the intent of our recommendation. 
DOD partially concurred with the second recommendation to implement key tasks that would facilitate consistent, long-term planning and implementation of NextGen—such as those tasks that the Deputy Secretary of Defense originally directed in 2007 or any tasks that the Secretary deems appropriate based on a current assessment of the original tasks. DOD stated that the Secretary of the Air Force would identify, within the next 120 days, which relevant key tasks would facilitate the implementation of NextGen, including assessing the status of tasks directed in the Deputy Secretary of Defense memorandum, “Implementation of the Next Generation Air Transportation within the Department of Defense,” issued in 2007. DOD stated that the assessment would include a comprehensive review of modernization efforts regarding NextGen and other global initiatives, and would include suitable security and cybersecurity mitigation measures. DOD also stated that the Policy Board on Federal Aviation would track key task implementation in coordination with the Secretary of the Air Force and other appropriate DOD officials. This tracking would also include periodic updates to the Deputy Secretary of Defense. We believe these steps would meet the intent of our recommendation if implemented as planned. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of Homeland Security; the Secretary of Transportation; and the commander of NORAD. We are also sending copies to the Under Secretary of Defense for Policy; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Chairman of the Joint Chiefs of Staff; the Secretaries of the military departments; and the Administrator of FAA. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9971 or kirschbaumj@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Senate Report 114-255, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2017, included a provision that we assess issues related to the defense implications of implementation of the Federal Aviation Administration’s (FAA) Next Generation Air Transportation System (NextGen) and Automatic Dependent Surveillance-Broadcast (ADS-B), a main component of NextGen. This report assesses the extent to which (1) the Department of Defense (DOD) and FAA have identified security and operations risks and approved solutions to address these risks to military aircraft equipped with ADS-B Out technology; and (2) DOD has implemented key tasks in the 2007 Deputy Secretary of Defense memorandum on implementing NextGen. The scope of our review included all DOD and Department of Transportation offices responsible for oversight or administration of ADS-B implementation by DOD as part of the NextGen program. Our review also included Airlines for America, as it represented a significant portion of the civil aviation industry in negotiations with FAA on ADS-B implementation. Table 3 contains a list of the organizations and offices we contacted during the course of our review. To assess the extent to which DOD and FAA have identified security and operations risks and approved solutions to address these risks to military aircraft equipped with ADS-B Out technology, we reviewed policies, procedures, guidance, assessments, and other relevant documents from DOD, FAA, and NORAD that address ADS-B Out implementation, acquisition, operations, and cybersecurity risk management and mitigation, and any other issues that might be pertinent to identifying and addressing operations and security risks resulting from ADS-B Out. 
We also reviewed publicly available literature discussing potential ADS-B Out cybersecurity vulnerabilities. Specifically, we conducted a literature review of work related to vulnerabilities in ADS-B technology. To identify studies that potentially highlighted vulnerabilities that we could discuss with agency officials, we conducted key-word searches of government and private databases to identify public, private, academic, and other professional research related to ADS-B vulnerabilities. The government databases we searched included those of GAO, the Congressional Research Service, the Congressional Budget Office, and agency Inspectors General. The private databases we searched included Web of Science, ProQuest, and ProQuest Professional. To determine relevance to our review, we assessed whether article subjects were related to vulnerabilities or vulnerability mitigations for ADS-B systems. We reviewed those studies cited in our report and found their methodologies to be sufficient. To further address our objective, we interviewed officials from NORAD, DOD, the military services, and FAA on potential risks, vulnerabilities, and mitigation strategies. We did not conduct independent security and vulnerability assessments of ADS-B technology to corroborate or validate security risks identified by NORAD, DOD, FAA, and others. While military aircraft and existing radar systems may be equipped with devices (including Mode S transponders) that could also pose security risks, this report focused on risks and potential solutions associated with ADS-B Out technology that FAA mandated DOD to install on its aircraft by January 1, 2020. We also visited multiple public websites to understand the extent to which the public could track current military flights over the United States. We met with a representative from one of these websites to understand the underlying sources of information and how the information was used to compile the images. 
To understand DOD and FAA coordination, we reviewed laws, guidance, and directives related to agency cooperation for the NextGen system and implementation of ADS-B technology. This included the 2010 FAA Federal Register entry that provided guidelines and requirements for coordination between agencies and the 2007 Deputy Secretary of Defense memorandum on implementing NextGen, which states that DOD components must develop cohesive plans and policies. To assess the extent to which DOD has implemented key tasks in the 2007 Deputy Secretary of Defense memorandum on implementing NextGen, we reviewed the Deputy Secretary of Defense’s 2007 NextGen memorandum and identified 20 tasks that were directed by the Deputy Secretary for the purpose of ensuring that NextGen meets DOD requirements and that DOD’s resources are appropriately focused and managed. We focused on the 8 tasks whose accomplishment would be significant to the development of plans and policies related to the implementation of FAA’s ADS-B Out technology requirement. To evaluate the implementation status of these 8 tasks, we collected and reviewed relevant documentation and interviewed officials from DOD. Initially, two analysts separately reviewed this information to determine whether each of the 8 tasks was implemented or not implemented. A panel of four analysts then collectively reviewed both sets of analyses completed for each task and determined whether a task would be better categorized as partially implemented, rather than as implemented or not implemented. We conducted this performance audit from June 2016 to January 2018, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Tommy Baril (Assistant Director), Tracy Barnes, David Beardwood, Virginia Chanley, Benjamin Emmel, Kevin Newak, Joshua Ormond, Matthew Sakrekoff, Amanda Weldon, and Edwin Yuen made major contributions to this report. Colleen Candrl, Mark Canter, Raj Chitikila, Tracy Harris, Kirk Kiester, Amie Lesser, Nicholas Marinos, Madhav Panwar, John Shumann, James Tallon, and Cheryl Weissman also made contributions to this report. Next Generation Air Transportation System: Improved Risk Analysis Could Strengthen FAA’s Global Interoperability Efforts. GAO-15-608. Washington, D.C.: July 29, 2015. Air Traffic Control: FAA Needs a More Comprehensive Approach to Address Cybersecurity As Agency Transitions to NextGen. GAO-15-370. Washington, D.C.: April 14, 2015. National Airspace System: Improved Budgeting Could Help FAA Better Determine Future Operations and Maintenance Priorities. GAO-13-693. Washington, D.C.: August 22, 2013. NextGen Air Transportation System: FAA Has Made Some Progress in Midterm Implementation, but Ongoing Challenges Limit Expected Benefits. GAO-13-264. Washington, D.C.: April 8, 2013. Next Generation Air Transportation System: FAA Faces Implementation Challenges. GAO-12-1011T. Washington, D.C.: September 12, 2012. Next Generation Air Transportation: Collaborative Efforts with European Union Generally Mirror Effective Practices, but Near-Term Challenges Could Delay Implementation. GAO-12-48. Washington, D.C.: November 3, 2011.
DOD has until January 1, 2020, to equip its aircraft with ADS-B Out technology that would provide DOD, FAA, and private citizens the ability to track DOD flights in real time and track flight patterns over time. This technology is a component of NextGen, a broader FAA initiative that seeks to modernize the current radar-driven, ground-based air transportation system into a satellite-driven, space-based system. Senate Report 114-255 included a provision for GAO to assess the national defense implications of FAA's implementation of ADS-B. This report assesses the extent to which (1) DOD and FAA have identified operations and security risks and approved solutions to address these risks to ADS-B Out-equipped military aircraft; and (2) DOD has implemented key tasks in the 2007 memorandum on implementing NextGen. GAO analyzed risks identified by DOD and FAA related to ADS-B vulnerabilities and how they could affect current and future air defense and air traffic missions. GAO also reviewed the tasks in the 2007 NextGen memorandum and assessed whether the eight tasks specifically related to ADS-B were implemented. Since 2008, the Department of Defense (DOD) and the Department of Transportation's Federal Aviation Administration (FAA) have identified a variety of risks related to Automatic Dependent Surveillance-Broadcast (ADS-B) Out technology that could adversely affect DOD security and missions. However, they have not approved any solutions to address these risks. Compared with other tracking technology, ADS-B Out provides more information, such as an aircraft's precise location, velocity, and airframe dimensions, and better enables real-time and historical flight tracking. Individuals—including adversaries—could track military aircraft equipped with ADS-B Out technology, presenting risks to physical security and operations. This readily available public information allowed GAO to track various kinds of military aircraft. 
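To make concrete why ADS-B Out broadcasts enable the kind of tracking described above, the sketch below models the data an unencrypted broadcast carries and how any receiver in range can read it. The field names, example values, and the `receive` helper are illustrative assumptions for this sketch, not the actual DO-260B/1090 MHz message layout.

```python
from dataclasses import dataclass

# Illustrative sketch of the kind of data an ADS-B Out message carries.
# Field names and values are assumptions for illustration; the real
# format is the 1090 MHz Extended Squitter defined in RTCA DO-260B.
@dataclass
class AdsbOutBroadcast:
    icao_address: str    # unique 24-bit airframe identifier (hex)
    callsign: str
    latitude: float      # degrees
    longitude: float     # degrees
    altitude_ft: int
    velocity_kt: float
    heading_deg: float

def receive(broadcast: AdsbOutBroadcast) -> str:
    """Any ground receiver in range can decode the unencrypted broadcast."""
    return (f"{broadcast.icao_address} ({broadcast.callsign}) at "
            f"{broadcast.latitude:.4f},{broadcast.longitude:.4f} "
            f"FL{broadcast.altitude_ft // 100}")

# Hypothetical aircraft, for illustration only.
msg = AdsbOutBroadcast("AE01CE", "RCH285", 38.8512, -77.0402, 33000, 450.0, 270.0)
print(receive(msg))
```

Because the broadcast is transmitted in the clear, the same decoded record can be logged over time, which is what enables the historical flight-pattern tracking the report describes.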
ADS-B Out is also vulnerable to electronic warfare and cyber-attacks. Because FAA is planning to divest radars as part of ADS-B implementation, homeland defense could also be at risk, since the North American Aerospace Defense Command relies on information from FAA radars to monitor air traffic. DOD and FAA have drafted a memorandum of agreement that focuses on equipping aircraft with ADS-B Out but does not address specific security risks. Unless DOD and FAA focus on these risks and approve one or more solutions in a timely manner, they may not have time to plan and execute actions that may be needed before January 1, 2020—when all aircraft are required to be equipped with ADS-B Out technology. Of the eight tasks associated with the implementation of ADS-B Out technology in the 2007 DOD NextGen memorandum—issued by the Deputy Secretary of Defense to ensure that the NextGen vision for the future national airspace system met DOD's requirements and that DOD's resources were appropriately managed—DOD has implemented two, has partially implemented four, and has not implemented two. DOD has established a joint program office and identified a lead service, but it has only partially validated ADS-B Out requirements, developed a directive, issued an implementation plan, and incorporated NextGen into the planning, budgeting, and programming process. DOD has not taken significant action to integrate the needs and requirements of DOD components related to ADS-B into cohesive plans and policies for inclusion in NextGen joint planning and development, and has not provided periodic and recurring NextGen progress reports to the Deputy Secretary of Defense. As a result of DOD not fully implementing the 2007 NextGen memorandum, DOD components have lacked direction and cohesion while trying to address FAA's requirement to equip military aircraft. This is a public version of a classified report GAO issued in January 2018. 
GAO is recommending that DOD and FAA approve one or more solutions to address ADS-B-related security risks, and that DOD implement key tasks to facilitate consistent, long-term planning and implementation of NextGen. DOD and the Department of Transportation generally concurred and described planned actions to implement the recommendations.
VA has undertaken a number of initiatives to help prevent veteran suicide, including identifying suicide prevention as VA’s highest clinical priority in its strategic plan for fiscal years 2018 through 2024 (see fig. 2). VA uses CDC’s research on risk factors and prevention techniques to inform its approach to suicide prevention in the veteran community. There is no single determining cause for suicide; instead, suicide occurs in response to biological, psychological, interpersonal, environmental, and societal influences, according to the CDC. Specifically, suicide is associated with risk factors that exist at the individual level (such as a history of mental illness or substance abuse, or stressful life events, such as divorce or the death of a loved one), community level (such as barriers to health care), or societal level (such as the way suicide is portrayed in the media and stigma associated with seeking help for mental illness). According to VA, veterans may possess risk factors related to their military service, such as a service-related injury or a difficult transition to civilian life. CDC reports that protective factors—influences that help protect against the risk for suicide—include effective coping and problem-solving skills, strong and supportive relationships with friends and family, availability of health care, and connectedness to social institutions such as school and community. VA’s 2018 National Strategy for Suicide Prevention identifies four focus areas: (1) healthy and empowered veterans, families, and communities; (2) clinical and community preventative services; (3) treatment and support services; and (4) surveillance, research, and evaluation. Collectively, these four areas encompass 14 goals for preventing veteran suicide, one of which is implementing communication designed to prevent veteran suicide by changing knowledge, attitude, and behaviors. 
VHA’s suicide prevention media outreach campaign is just one of its initiatives intended to reduce veteran suicide. For example, in 2007, VHA established the Veterans Crisis Line (VCL), a national toll-free hotline that supports veterans in emotional crisis. Veterans, as well as their family and friends, can access the VCL by calling a national toll-free number—1-800-273-8255—and pressing “1” to be connected with a VCL responder, regardless of whether these veterans receive health care through VHA. VHA added the option to communicate with VCL responders via online chat in 2009, followed by text messaging in 2011. Another VHA suicide prevention initiative is the Recovery Engagement and Coordination for Health – Veterans Enhanced Treatment initiative, or REACH VET. Established in 2016, REACH VET uses predictive modeling to analyze existing data from veterans’ health records to identify veterans at increased risk for adverse outcomes, such as suicide, hospitalization, or illness. Suicide prevention officials within VHA’s Office of Mental Health and Suicide Prevention (OMHSP) are responsible for implementing the suicide prevention media outreach campaign. Since 2010, VHA has used a contractor to develop suicide prevention media outreach content and monitor its effectiveness. In September 2016, VHA awarded a new contract to the same contractor to provide both suicide prevention and mental health media outreach. Under the 2016 contract, the suicide prevention and mental health outreach campaigns remain separate and are overseen by separate suicide prevention and mental health officials, both within OMHSP. VHA officials told us that beginning in fiscal year 2019, VHA will separate the contract for suicide prevention and mental health media outreach. Specifically, VHA will utilize an existing agreement with a different contractor for suicide prevention media outreach while the existing contractor will continue to provide mental health media outreach. 
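The report does not describe REACH VET's statistical model, but the general shape of predictive risk scoring over health-record data can be sketched as below. The features, weights, bias, and selection fraction here are invented purely for illustration; they are not VHA's actual model.

```python
import math

# Hypothetical feature weights, for illustration only; VHA's actual
# REACH VET model and its inputs are not described in this report.
WEIGHTS = {"prior_mh_hospitalization": 1.2,
           "substance_use_disorder": 0.8,
           "recent_stressful_life_event": 0.5}
BIAS = -3.0

def risk_score(record):
    """Logistic risk score in (0, 1) computed from binary record flags."""
    z = BIAS + sum(w * record.get(feature, 0) for feature, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def flag_for_outreach(records, fraction=0.001):
    """Return the highest-scoring fraction of records for clinical follow-up."""
    ranked = sorted(records, key=risk_score, reverse=True)
    return ranked[:max(1, round(len(ranked) * fraction))]
```

The design mirrors how such programs are often described publicly: every record gets a score, and only a small top fraction is flagged so that clinicians can proactively reach out before a crisis.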
According to VHA, the purpose of its suicide prevention media outreach campaign is to raise awareness among veterans, their families and friends, and the general public about VHA resources that are available to veterans who may be at risk for suicide. The primary focus of the outreach campaign since 2010 has been to raise awareness of the services available through the VCL. VHA’s suicide prevention media outreach falls into two main categories: unpaid and paid. Unpaid media outreach content is typically displayed on platforms owned by VA or VHA, or is disseminated by external organizations or individuals that share VHA suicide prevention content at no cost, as discussed below (see fig. 3). Social media. VA and VHA each maintain national social media accounts on platforms such as Facebook, Twitter, and Instagram, and post content, including suicide prevention content developed by VHA’s contractor. VHA also works with other federal agencies, non-governmental organizations, and individuals that post its suicide prevention content periodically. Public service announcements (PSA). VHA’s contractor typically develops two PSAs per year, which various local and national media networks display at no cost to VHA. Website. VHA’s contractor maintains the content displayed on the VCL website (veteranscrisisline.net), including much of the content it develops for other platforms, such as PSAs and social media content. Visitors to the website can both view the content on the website and share it on their own platforms. Paid digital media. An example of paid digital media includes online keyword searches, in which VHA pays a search engine a fee for its website to appear as a top result in response to selected keywords, such as “veterans crisis line” or “veteran suicide.” Paid digital media also includes social media posts for which VHA pays a fee to display its content to a widespread audience, such as users with a military affiliation. 
Paid “out-of-home” media: “Out-of-home” refers to the locations where this type of content is typically displayed. Examples include billboards, bus and transit advertisements, and local and national radio commercials. VHA recognizes September as Suicide Prevention Month each year. During this month, VHA establishes a theme and increases its outreach activities, including a combination of both paid and unpaid media outreach. According to VHA, it typically incorporates additional outreach techniques during this month, such as enlisting the support of celebrities or hosting live chat sessions on social media platforms, including Facebook and Twitter. VHA’s suicide prevention media outreach activities declined in fiscal years 2017 and 2018 compared to earlier years of the campaign. We identified declines in social media postings, PSAs, paid media, and suicide prevention month activities, as discussed below. Social media. The amount of social media content developed by VHA’s contractor decreased in 2017 and 2018, after increasing in each of the prior four years. Specifically, VHA reported that its contractor developed 339 pieces of social media content in fiscal year 2016, compared with 159 in fiscal year 2017, and 47 during the first 10 months of fiscal year 2018 (see fig. 5). PSAs. VHA’s contractor is required to develop two suicide prevention PSAs in each fiscal year. VHA officials said that the development of the two PSAs was delayed in fiscal year 2018. Specifically, as of August 2018, VHA reported that one PSA was completed, but had not yet aired, and another PSA was in development. As a result of this delay, VHA had not aired a suicide prevention PSA on television or radio in over a year; this is the first time there has been a gap of more than a month since June 2012. Paid media. 
VHA had a total budget of $17.7 million for its suicide prevention and mental health media outreach for fiscal year 2018, of which $6.2 million was obligated for suicide prevention paid media. As of September 2018, VHA said it had spent $57,000 of its $6.2 million paid media budget. VHA officials estimated that they would spend a total of $1.5 million on suicide prevention paid media for fiscal year 2018 and indicated that the remaining funds would be de-obligated from the contract at the end of the fiscal year and not used for suicide prevention media outreach. VHA officials indicated that the reason they did not spend the remaining funds on suicide prevention paid media in fiscal year 2018 was that the approval of the paid media plan was delayed due to changes in leadership and organizational realignment of the suicide prevention program. As a result, VHA officials said they limited the paid media outreach in fiscal year 2018 to activities that were already in place, including 25 keyword search advertisements, 20 billboards, and 8 radio advertisements in selected cities across the United States. In prior fiscal years, VHA conducted a variety of digital and out-of-home suicide prevention paid media. For example, in fiscal year 2015, with a suicide prevention paid media budget of more than $4 million, VHA reported that it ran 58 advertisements on Google, Bing, and Facebook, and ran 30 billboards, 180 bus advertisements, more than 19,000 radio advertisements, 252 print advertisements, and 39 movie theatre placements in selected cities across the United States. VHA ran similar types of paid media in fiscal years 2013, 2014, and 2016 with variation in quantities based on the approved budget in each of these years. In fiscal year 2017, VHA had a budget of approximately $1.7 million to spend on paid media for both the suicide prevention and mental health outreach campaigns. 
However, VHA spent less than 10 percent of the funds (approximately $136,000) to run paid advertisements on Google and Bing for suicide prevention in fiscal year 2017; the remainder was spent on mental health outreach. Suicide Prevention Month. VHA documentation indicated that Suicide Prevention Month 2017 was a limited effort. VHA officials said that this was because they did not begin preparing early enough. In May 2018, VHA officials indicated that they were similarly behind schedule for planning Suicide Prevention Month 2018, though they told us in August 2018 that they had caught up. VHA officials told us that the decrease in suicide prevention media outreach activities was due to leadership turnover and reorganization since 2017. For example, VHA officials said the National Director for Suicide Prevention position was vacant from July 2017 through April 2018. VHA filled the role temporarily with a 6-month detail from another agency from October 2017 through March 2018 and then hired this individual as the permanent director on April 30, 2018. VHA officials who worked on the campaign told us they did not have leadership available to make decisions about the suicide prevention campaign during this time. For example, VHA officials said they did not have a kick-off meeting between VHA leadership and VHA’s contractor at the beginning of fiscal year 2018—a requirement of the contract—because there was no leadership available to participate in this meeting. The officials also reported that suicide prevention leadership was not available for weekly meetings to discuss suicide prevention outreach activities, even after the suicide prevention program obtained an acting director on detail from another agency. VHA staff said that at that time, they focused their suicide prevention media outreach efforts on areas that did not require leadership input, such as updating the VCL website. 
The absence of leadership available to provide direction and make decisions on the suicide prevention media outreach campaign is inconsistent with federal internal control standards for the control environment, which require agencies to assign responsibilities to achieve their objectives. If a key role is vacant, management needs to determine by whom and how those responsibilities will be fulfilled in order to meet its objectives. Officials who worked on the campaign told us they shifted their focus away from the suicide prevention media outreach campaign toward the mental health outreach campaign due to reorganization of the offices responsible for suicide prevention activities in 2017. Specifically, under the new organization, and in the absence of suicide prevention program leadership, the officials began reporting directly to mental health program leadership and became more focused on the mental health outreach aspects of the contract. Following the reorganization, officials who worked on the campaign did not have a clear line of reporting to the suicide prevention program. This is also inconsistent with federal internal control standards for the control environment, which require agencies to establish an organizational structure and assign responsibilities, including establishing the lines of reporting necessary for communicating information to management. VHA officials told us that one of the highest priorities for the suicide prevention program since the beginning of fiscal year 2018 was to establish a national strategy for preventing veteran suicides. The national strategy, issued in June 2018, includes suicide prevention outreach as one of the strategy’s 14 goals. The national strategy also emphasizes VHA’s plans to shift to a public health approach to suicide prevention outreach. The public health approach focuses less on raising awareness of the VCL and more on reaching veterans before the point of crisis. 
VHA officials told us they have been trying to shift to a public health approach since 2016. Some of the campaign themes and messages have reflected this shift; for example, the “Be There” campaign theme that was adopted in fiscal year 2016—and has remained the theme since—emphasizes the message that everyone has a role in helping veterans in crisis feel less alone and connecting them to resources. However, VHA officials told us in May 2018 that they were just beginning to conceptualize what the suicide prevention outreach campaign should look like moving forward. Leadership officials also said that while they were developing the national strategy, they delegated the responsibility for implementing the suicide prevention outreach campaign to other officials working on the campaign. The decline in VHA’s suicide prevention media outreach activities over the past 2 fiscal years is inconsistent with VA’s strategic goals, which identify suicide prevention as the agency’s top clinical priority for fiscal years 2018 through 2024. Further, VHA has continued to obligate millions of dollars to its suicide prevention media outreach campaign each year. Since fiscal year 2017, VHA has obligated $24.6 million to the contract for media outreach related to both suicide prevention and mental health. Because VHA did not assign key leadership responsibilities or establish clear lines of reporting, its ability to oversee the suicide prevention media outreach activities was hindered, and these outreach activities decreased. As a result, VHA may not have exposed as many people in the community, such as veterans at risk for suicide, or their families and friends, to its suicide prevention outreach content.
Additionally, without establishing responsibility and clear lines of reporting, VHA lacks assurance that it will have continuous oversight of its suicide prevention media outreach activities in the event of additional turnover and reorganization in the future, particularly as VHA begins implementing the suicide prevention media outreach campaign under its new agreement that begins in fiscal year 2019. VHA works with its contractor to create and monitor metrics to help gauge the effectiveness of its suicide prevention media outreach campaign in raising awareness among veterans and others about VHA services, such as the VCL. The metrics primarily focus on the number of individuals who were exposed to or interacted with VHA’s suicide prevention content across various forms of outreach, including social media, PSAs, and websites. According to VHA, the metrics are intended to help VHA ensure that its media outreach activities achieve intended results, such as increasing awareness and use of the resources identified on the VCL website. Examples of metrics monitored by VHA and its contractor include those related to (1) social media, such as the number of times a piece of outreach content is displayed on social media; (2) PSAs, such as the total number of markets and television stations airing a PSA; and (3) the VCL website, such as the total traffic to the website, as well as the average amount of time spent on a page and average number of pages viewed per visit. VHA’s contractor is required to monitor the metrics and report results on a monthly basis. Specifically, the contractor provides monthly monitoring reports to VHA that summarize how outreach is performing, such as the number of visits to the VCL website that were driven from paid media sources. Officials noted these reports are key sources of information for VHA on the results of its outreach. VHA officials also told us they informally discuss certain metrics during weekly meetings with VHA’s contractor. 
In addition, VHA works with its contractor to conduct a more in-depth analysis of outreach efforts during and after Suicide Prevention Month each year. VHA has not established targets for the majority of the metrics it uses to help gauge the effectiveness of its suicide prevention media outreach campaign. As a result, VHA does not have the information it needs to fully evaluate the campaign’s effectiveness in raising awareness among veterans of VHA’s suicide prevention resources, including the VCL. For example, we found that the monitoring reports from VHA’s contractor—a summary of key metrics that VHA uses to routinely monitor information regarding the campaign—generally focused on outreach “highlights” and positive results. The reports did not set expectations based on past outreach or establish targets for new outreach, and they lacked information on how outreach performed against such expectations. For example: A monitoring report from 2018 showed that during one month, there were 21,000 social media mentions of keywords specific to VA suicide prevention, such as “VCL” or “veteran suicide,” across social media platforms. These mentions earned 120 million impressions; however, there was no indication of the number of keyword mentions or impressions that VHA expected based on its media outreach activities. In addition, the report did not indicate the proportion of mentions that VHA believed were specifically driven by its outreach activities, and there also was no indication of whether these mentions were positive or negative, or what actions to take based on this information. Another monitoring report, from January 2017, showed that paid advertising drove 39 percent of overall website traffic during one month, while unpaid sources drove the remaining 61 percent.
However, there was no information indicating the amount of paid advertising that VHA conducted during this monitoring period, or whether this amount of website traffic from paid advertising met expectations. VHA’s 2016 Suicide Prevention Month summary report showed that there were 194,536 visits to the VCL website, roughly an 8 percent increase from Suicide Prevention Month 2015. However, the report did not indicate whether this increase from the prior year met expectations, or whether a different result was expected. VHA officials told us that they have not established targets for most of the suicide prevention media outreach campaign’s metrics because they lack meaningful targets to help evaluate the campaign. VHA officials said that the only target they have established is for each PSA to rank in the top 10 percent of the Nielsen ratings because this is the only meaningful target available that is accepted industry-wide. VHA officials stated that using any other targets would be arbitrary. For the remaining metrics, VHA officials told us they assess the outcomes of their campaign by comparing data from year to year and examining any changes in the outcomes over time. However, VHA could set targets that capture the number of people who viewed or interacted with its outreach content, similar to its Nielsen target set for television viewership. Doing so would help VHA evaluate whether the campaign has been effective in raising awareness of VHA’s suicide prevention resources. Further, creating targets for these additional metrics need not be arbitrary, because VHA could use information about how its metrics performed in the past to develop reasonable and meaningful targets for future performance. VHA could also adjust the targets over time to reflect changes in its metrics or approach to the campaign, such as changes to its paid media budget each year.
Federal internal control standards for monitoring require agencies to assess the quality of their performance by evaluating the results of activities. Agencies can then use these evaluations to determine the effectiveness of their programs or the need for any corrective actions. Further, VA’s June 2018 National Strategy for Preventing Veteran Suicide also emphasizes the importance of the agency evaluating the effectiveness of its outreach. The absence of established targets leaves VHA without a framework to effectively evaluate its campaign. Our prior work has shown that establishing targets allows agencies to track their progress toward specific goals. In particular, we have developed several key attributes of performance goals and measures, including, when appropriate, the development of quantifiable, numerical targets for performance goals and measures. Such targets can facilitate future evaluations of whether overall goals and objectives were achieved by allowing for comparisons between projected performance and actual results. Further, establishing targets for its outreach metrics will enable VHA officials to determine whether outreach performed as expected and raised awareness of VHA resources such as the VCL, including by identifying outreach efforts that worked particularly well and those that did not. In doing so, VHA officials will have the opportunity to make better informed decisions about their suicide prevention media outreach campaign to support VA’s overall goal of reducing veteran suicides. VA has stated that preventing veteran suicide is its top clinical priority; yet VHA’s lack of leadership attention to its suicide prevention media outreach campaign in recent years has resulted in less outreach to veterans. While VHA identifies the campaign as its primary method of raising suicide prevention awareness, it has not established an effective oversight approach to ensure outreach continuity.
This became particularly evident during a recent period of turnover and reorganization in the office responsible for the suicide prevention outreach campaign. Moving forward, VHA has an opportunity to improve its oversight to ensure that its outreach content reaches veterans and others in the community to raise awareness of VHA’s suicide prevention services, particularly as VHA begins working with a new contractor in fiscal year 2019. VHA is responsible for evaluating the effectiveness of its suicide prevention media outreach campaign in raising awareness about VHA services that are available to veterans who may be at risk for suicide. To do so, VHA collects and monitors data on campaign metrics to help gauge the campaign’s effectiveness in raising such awareness, but it has not established targets for the majority of these metrics because officials reported that there are no meaningful, industry-wide targets for them. We disagree with VHA’s assertion that other targets would not be meaningful; VHA collects data on its metrics that it can use to develop reasonable and meaningful targets for future performance. In the absence of established targets, VHA cannot evaluate the effectiveness of the campaign and make informed decisions about which activities should be continued to support VA’s overall goal of reducing veteran suicides. We are making the following two recommendations to VA: 1. The Under Secretary for Health should establish an approach for overseeing its suicide prevention media outreach efforts that includes clear delineation of roles and responsibilities for those in leadership and contract oversight roles, including during periods of staff turnover or program changes. (Recommendation 1) 2.
The Under Secretary for Health should require officials within the Office of Suicide Prevention and Mental Health to establish targets for the metrics the office uses to evaluate the effectiveness of its suicide prevention media outreach campaign. (Recommendation 2) We provided a draft of this report to VA for review and comment. In its written comments, summarized below and reprinted in Appendix I, VA concurred with our recommendations. VA described ongoing and planned actions and provided a timeline for addressing our recommendations. VA also provided technical comments, which we incorporated as appropriate. In response to our first recommendation, to establish an oversight approach that includes delineation of roles and responsibilities, VA acknowledged that organizational transitions and realignments within OMHSP contributed to unclear roles and responsibilities in 2017 and 2018. VA said that OMHSP has made organizational improvements, including hiring a permanent Director for Suicide Prevention and establishing a new organizational structure. In its comments, VA requested closure of the first recommendation based on these actions. However, to fully implement this recommendation, VA will need to provide evidence that it has established an oversight approach for the suicide prevention media outreach campaign. This would include providing information about the roles and responsibilities, as well as reporting requirements, for contract and leadership officials involved in the suicide prevention media outreach campaign under the new organizational structure and the new contract. VA will also need to demonstrate that it has a plan in place to ensure continued oversight of the suicide prevention media campaign in the event of staff turnover or program changes. 
In response to our second recommendation, to establish targets against which to evaluate suicide prevention metrics, VA said it has plans to work with communications experts to develop metrics, targets, and an evaluation strategy to improve its evaluation of its suicide prevention program efforts, including outreach. VA expects to complete these actions by April 2019. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Secretary of Veterans Affairs. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at DraperD@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. In addition to the contact named above, Marcia A. Mann (Assistant Director), Kaitlin McConnell (Analyst-in-Charge), Kaitlin Asaly, and Jane Eyre made key contributions to this report. Also contributing were Jennie Apter, Emily Bippus, Valerie Caracelli, Lisa Gardner, Jacquelyn Hamilton, Teague Lyons, Vikki Porter, and Eden Savino.
Veterans suffer a disproportionately high rate of suicide compared with the civilian population. VA has estimated that an average of 20 veterans die by suicide per day, and in 2018, VA identified suicide prevention as its highest clinical priority. VHA's suicide prevention media outreach campaign—its collective suicide prevention outreach activities—helps raise awareness among veterans and others in the community about suicide prevention resources. VHA has contracted with an outside vendor to develop suicide prevention media outreach content. GAO was asked to examine VHA's suicide prevention media outreach campaign. This report examines the extent to which VHA (1) conducts activities for its suicide prevention media outreach campaign, and (2) evaluates the effectiveness of its campaign. GAO reviewed relevant VHA documents and data on the amount, type, and cost of suicide prevention outreach activities since fiscal year 2013. GAO also reviewed VHA's contract for developing suicide prevention outreach content and interviewed VA and VHA officials. The Department of Veterans Affairs' (VA) Veterans Health Administration (VHA) conducts national suicide prevention media outreach on various platforms to raise awareness about VHA's suicide prevention resources. The primary focus of this campaign since 2010 has been to raise awareness of the Veterans Crisis Line (VCL), VHA's national hotline established in 2007 to provide support to veterans in emotional crisis. GAO found that VHA's suicide prevention media outreach activities declined in recent years due to leadership turnover and reorganization. For example, the amount of suicide prevention content developed by VHA's contractor for social media decreased in fiscal year 2017 and the first 10 months of fiscal year 2018, after increasing in each of the 4 prior years. VHA officials reported not having leadership available for a period of time to make decisions about the suicide prevention media outreach campaign.
GAO found that VHA did not assign key leadership responsibilities or establish clear lines of reporting, and as a result, its ability to oversee the outreach campaign was hindered. Consequently, VHA may not be maximizing the reach of its suicide prevention media content to veterans, especially those who are at risk. VHA evaluates the effectiveness of its suicide prevention media outreach campaign by collecting data on metrics, such as the number of people who visit the VCL website. However, VHA has not established targets for the majority of these metrics. Officials said they have not established targets because, apart from one industry-wide target they use, they lack meaningful targets for evaluating the campaign. However, VHA could use information about how its metrics performed in the past to develop reasonable and meaningful targets for future performance. Without established targets for its metrics, VHA is missing an opportunity to better evaluate the effectiveness of its suicide prevention media outreach campaign. VHA should (1) establish an approach to oversee its suicide prevention media outreach campaign that includes clear delineation of roles and responsibilities, and (2) establish targets for its metrics to improve evaluation efforts. VA concurred with GAO's recommendations and described steps it will take to implement them.
DOD acquires new weapons for its warfighters through a management process known as the defense acquisition process. This process has multiple phases, including (1) technology maturation and risk reduction, (2) engineering and manufacturing development, and (3) production and deployment. In this report we refer to these three phases as concept development, system development, and production. Programs typically complete a series of milestone reviews and other key decision points that authorize entry into a new acquisition phase. DOD Instruction 5000.02 delegates responsibility for developing and procuring weapon systems to the military departments and other defense agencies. This policy does not specify a standard organizational structure—or program structure—to manage acquisition programs, but rather states that programs are to be tailored as much as possible to the characteristics of the product being acquired and to the totality of circumstances associated with the program, including operational urgency and risk factors. In addition, DOD’s guidance for managing its workforce states that the approach should be flexible, adaptive to program changes, and responsive to new management strategies. DOD decides how many personnel and how much program funding to request for each military department through the Planning, Programming, Budgeting, and Execution (PPBE) process. DOD programming policy requires the military departments and defense agencies to develop a program objective memorandum that identifies and prioritizes requirements and total funding needs for the current budget year and 4 additional years into the future. As a part of this process, the departments also estimate the personnel requirements and program funding needed to execute their mission, including support for the commands and program executive offices (PEOs) that are responsible for managing acquisition programs.
The results of the PPBE process, including proposed funding levels for programs, are captured in the President’s annual budget request to Congress. For example, in its budget request, DOD identifies and requests the total number of civilian full-time equivalent personnel, among other things. Congress then authorizes and appropriates the funding to pay for civilian personnel for each military department. When budgeting for contracted services, DOD estimates the cost of the tasks to be performed but not the number of individuals who may perform those tasks. The military departments, commands, and PEOs then distribute approved funding (which, in part, is used to pay for civilian personnel and contractor support) to the various organizations, including the programs that are responsible for managing and supporting defense acquisitions. Each military department has a different approach to developing its budget request, and program budgets may be spread across multiple types of appropriations that are organized into various categories based on their purpose, such as research, development, test, and evaluation, or procurement. Similarly, the military departments fund their personnel through several different types of appropriations, including (1) operation and maintenance; (2) military personnel; and (3) research, development, test, and evaluation. Requests for funding are included in different documents and often presented in multiple volumes that can be hundreds of pages long. DOD’s Financial Management Regulation provides instructions for the formulation and presentation of the budget request to Congress, including general categories of costs that might be included in program-specific budgets. In addition, the regulation requires DOD components to include specific budget exhibits for certain acquisition programs to provide more insight into those programs’ funding needs.
Several interrelated factors influenced the workforce size, composition, and mix, as well as the organizational structure of the 11 major defense acquisition programs we reviewed. We found the following: Program workforce size and composition were influenced by the degree to which the program assumed responsibility for technical development and integration, as well as the program’s stage within the acquisition life cycle. Program workforce mix varied depending on the use of contractor personnel, which was based on the workload requirements and the availability of government personnel to provide the skills needed. Programs were generally structured as either stand-alone—new, high priority, complex weapon system platforms with dedicated personnel—or as part of a portfolio of related programs to share personnel across programs. The number and composition of personnel that supported the selected major defense acquisition programs varied considerably. As shown in figure 1, the total number of personnel supporting the 11 selected programs ranged from 30 to 397, and the composition of those personnel varied based on the needs of the program. While program officials cited a number of factors that influenced the selected programs’ workforce size and composition, including department priority and complexity, we identified two overarching factors—(1) the level of program responsibility for technical development and integration, and (2) the stage of the acquisition life cycle. First, we found programs that assume more responsibility for technical development and integration have more personnel—primarily those that perform engineering as well as test and evaluation functions. 
The two largest of the selected programs we reviewed, the Navy’s Next-Generation Jammer Mid-Band (NGJ Mid-Band) and Columbia Class Ballistic Missile Submarine (Columbia), assumed significant responsibility for system development and integration, activities a prime contractor often undertook for the other programs we reviewed. For example, NGJ Mid-Band officials explained that the program is responsible for overseeing software integration and other efforts directly. In this case, in addition to personnel assigned to the program office, the Navy relies on personnel from other organizations such as the Naval Air Warfare Center Aircraft Division instead of a prime contractor to develop the software needed to operate the system, conduct system testing, and manage integration into the platform. Similarly, the Columbia program maintains responsibility for many aspects of development and integration of the submarine, including most hull, mechanical, and electrical components. As a result, about two-thirds of the 309 personnel supporting the program are performing engineering and technical tasks. In contrast, two programs with fewer personnel, the Air Force’s B-2 Defensive Management System Modernization program (DMS-M) and the Navy’s John Lewis Class Fleet Replenishment Oiler (T-AO), assigned significant responsibility for development and integration to their respective prime contractors. The Defensive Management System Modernization program reported to us that it has a total of 11 engineering and technical personnel, and T-AO reported that it has 35 engineering and technical personnel. Second, we found that program workforce size and composition changed in response to the amount and nature of the work programs perform at different stages of the acquisition life cycle.
For example, officials from our selected programs stated they generally planned to increase in size as they progressed from concept development to system development and also planned to concurrently increase the proportion of engineering and technical personnel. Program officials stated that as the program progresses into the logistics support stage, the number of personnel supporting the program generally decreases as programs release some personnel to other assignments while retaining enough personnel to manage the logistics support stage. Figure 2 shows how the size and composition of the Army’s Joint Air-to-Ground Missile (JAGM) program changed from concept development into production. A program’s total development and procurement cost was not necessarily related to the number of personnel supporting the program for the 11 programs we reviewed. All 11 selected programs are classified as major defense acquisition programs and ranged in total acquisition cost from $1.5 billion to $103.2 billion. Our analysis, shown in table 1 below, indicates that total cost did not significantly influence the number of personnel supporting these programs. All 11 selected programs used contractors to help meet workload requirements, but the level of contractor support varied from approximately 5 percent to 72 percent of total program personnel, as shown in figure 3. Program officials told us that while they generally try to use civilian or military personnel to meet workload requirements, they use contractor support when the number of government personnel allocated to the program is not sufficient to meet their needs, when the technical skills needed are not available or are limited within the government, or to fulfill short-term tasks that are too brief to justify hiring government personnel. Program officials stated the extent to which their programs use contractor support often depends on the number of civilians allocated to the program by the command or PEO.
In the case of the three selected programs with the fewest personnel, the officials stated that the number of personnel authorizations allocated to the program by their respective command or PEO did not meet their estimated workload requirements. For example, the B-2 Defensive Management System Modernization program estimated it needed 82 personnel in fiscal year 2018, but was only allocated 13 personnel. As a result, program officials stated that they used program funds to pay for contractor support personnel to partially offset the government civilian staffing shortfalls. Officials at the Air Force Life Cycle Management Center, the organization that allocated personnel to the B-2 program office, told us that civilian personnel are allocated based on the risk associated with each program. Program officials told us that contractor support personnel are used to augment civilian and military personnel by providing skills or technical expertise that are limited or not available in the government. We found that over two-thirds of the contractors that supported the 11 selected programs we reviewed were performing engineering and technical functions. For example, the John Lewis Class Fleet Replenishment Oiler (T-AO) is a commercially-derived ship design. As such, program officials stated that the required engineering expertise resides in the commercial sector, which resulted in contracted engineers comprising about 77 percent of the program’s total engineering personnel. Program officials also stated that it is more effective to use contractor support personnel to perform tasks that are relatively short in duration than to go through the lengthy process of hiring government personnel. Contracting for support allows the program to grow and shrink to meet personnel requirements as they change. 
For example, Joint Air-to-Ground Missile program officials stated they contracted for support to execute tasks that are not recurring, such as developing the required documents to get approval to start production. Among the 11 programs we reviewed, the Air Force’s Military Global Positioning System User Equipment (MGUE) program has a unique workforce mix. Twenty-four percent of MGUE’s program personnel were military, and MGUE was the only one of the 11 selected programs that had federally funded research and development center (FFRDC) personnel. Program officials stated that the challenge of obtaining civilian personnel with the required technical skills in the high cost-of-living area around Los Angeles, California, required the program to rely more heavily on military personnel and contractors to support the program. Program officials stated this is in part because it is easier to assign military personnel to high cost-of-living areas than it is to hire civilian personnel. In addition, programs in the Air Force’s Space and Missile Systems Center often rely on FFRDC personnel from the Aerospace Corporation, which is located in the Los Angeles area and provides technical expertise that is specific to space systems. Program officials from the other 10 programs we reviewed reported that they did not have FFRDC personnel. While differences existed in the organizational structure of the 11 programs we reviewed, we identified factors that affected which of the two common approaches the military departments used to leverage available personnel with the necessary skills: New, high-priority, complex weapon system platforms that require a significant amount of development and integration, such as the Navy’s Columbia and the Army’s Armored Multi-Purpose Vehicle, are structured as distinct standalone program offices with dedicated program personnel. Nine of the 11 selected programs were managed in a portfolio-based program structure, which included multiple related acquisition programs.
For these portfolio-based programs, personnel were shared across the related programs to help meet fluctuating workload requirements and maximize personnel resources. Figure 4 compares the structure of a standalone program to the structure of a portfolio-based program with multiple acquisition programs managed under it. The figure also illustrates how the Air Force’s MGUE program was situated within the Air Force’s Global Positioning Systems portfolio of programs. In both types of organizational structures illustrated above, the PEO and the program office have personnel that oversee and support the programs. These personnel may be dedicated to one program or may split time between multiple portfolio-based programs. For example, the Air Force PEO for Space has more than 5,000 military, civilian, and contractor personnel and is responsible for managing 41 programs, the responsibility for which is distributed among multiple program offices. One of these program offices, the Global Positioning Systems program office, has 628 personnel. This program office is responsible for overseeing and supplementing the staff of several programs, including the Military Global Positioning System User Equipment Program, which has about 70 personnel. According to PEO and program officials, acquisition programs may be managed within portfolios for several different reasons: Programs are part of the same weapon system platform. The B-2 Defensive Management System Modernization program and the F-15 Eagle Passive Active Warning Survivability System program are examples of upgrades to existing systems on mature aircraft and are managed within a portfolio of programs within the B-2 and F-15 system program offices, respectively. Programs have interrelated technologies. The Air Force’s MGUE program is managed within the GPS program office, which also manages other GPS satellite and ground system programs. Programs have related acquisition strategies. 
The Navy’s John Lewis Class Fleet Replenishment Oiler (T-AO) program is managed within a portfolio of commercially designed and developed ships. This program is managed within a program office that oversees approximately 85 types of commercially derived auxiliary ships, boats, service craft, and special mission ships. Regardless of how the acquisition program is structured, other DOD organizations also provide personnel to support a program’s workload requirements. There are various specialized DOD organizations that support programs and provide specific acquisition functions or skill sets, such as contracting, cost estimating, and engineering. For the 11 selected programs we reviewed, these organizations supported multiple programs and were either structured (1) within the PEO that was responsible for the programs we reviewed or (2) external to the PEO. These external support organizations include contracting commands, warfare centers, and engineering organizations that are intended to provide the program specialized technical expertise from across the military department. Program officials stated that these organizations may share personnel with a program on a full or part-time basis, and the shared personnel may or may not be co-located with the program. Figure 5 is a notional representation of the way that programs are supported by different organizations. The major defense acquisition programs we reviewed used different approaches to organizing and leveraging support organizations. For example: The Navy programs we reviewed relied on naval warfare centers to provide the engineering expertise necessary to design, build, maintain, and repair the Navy’s aircraft, ships, and submarines. For example, the Navy’s NGJ Mid-Band relies heavily on warfare centers, including the Naval Air Warfare Center Weapons Division and the Naval Air Warfare Center Aircraft Division, to support the program. 
We found that about 60 percent of the total number of personnel supporting the program office were from these organizations. The Army programs we reviewed relied on support organizations such as the Army Contracting Command for contracting functions, the Aviation and Missile Research Development and Engineering Center for engineering expertise, and others to provide life cycle management support. The Air Force programs we reviewed relied on support organizations established within their command. For example, Air Force’s Life Cycle Management Center has organizations dedicated to supporting all of its programs. These organizations provide support, such as contracting and cost estimating expertise, to programs managed under the Air Force’s Life Cycle Management Center. Personnel within these organizations are not staffed to one particular program, but share their time among many of the programs the Center is responsible for managing. The personnel costs for each major defense acquisition program we reviewed are included in different parts of the President’s annual budget request, including budget justification documents, but are not always clearly identifiable due to different approaches used to report such costs. The DOD Financial Management Regulation gives the military departments flexibility in how they submit program personnel costs. For example, it suggests the use of “typical” personnel cost categories for research, development, test, and evaluation programs to include in their individual program budget exhibits, but it also allows the departments to use the personnel cost categories they deem to be the most appropriate when formulating the budget request. 
In reviewing DOD’s budget requests for fiscal years 2018 and 2019 associated with the 11 selected programs, we found that personnel costs are budgeted for in two main ways: centrally by the military department, or by an individual program, depending on whether the requests are for military, civilian, or contractor support services. Personnel costs that are program-funded are included in individual program budget justification requests, whereas personnel costs that are centrally funded by the military departments are aggregated into one or more line items in the military department’s specific appropriation request. Table 2 shows how each military department funds military and civilian personnel and contractor support services for major defense acquisition programs. Each military department centrally budgets for military personnel through its respective Military Personnel appropriation requests, which aggregate personnel funding. These requests include funding for pay, travel, and other personnel-related costs. As these costs are combined and not associated with a specific program, we could not determine the costs of the military personnel supporting the 11 selected programs by reviewing DOD’s budget justification documentation. In contrast, support contractor costs were included in each program’s individual budget request. The military departments also centrally budget for some civilian personnel, but there are differences between the departments regarding which appropriations categories they use to request these funds. Regardless of the appropriation, we found that the budget requests do not identify civilian personnel costs by specific program; therefore, we could not determine the costs of the centrally funded civilian personnel supporting the 11 programs we selected. 
For example, in fiscal year 2019, the Air Force requested funding for the civilian personnel supporting its acquisition programs in development through the Research, Development, Test, and Evaluation appropriation. It grouped the costs into eight categories that represent various missions such as Cyber, Network, and Business Systems; Global Battle Management; and Nuclear Systems. The Air Force budget request indicates the total amount of funds requested, but does not identify the estimated number of personnel that these funds will support. Figure 6 illustrates how the Air Force requested funds for its civilian acquisition workforce in fiscal year 2019. The Navy and Army request funds for civilian personnel primarily through their respective operation and maintenance appropriations. These appropriations are used to fund a wide range of costs necessary to manage, operate, and maintain worldwide facilities and military operations. These operation and maintenance budgets are divided into numerous categories related to various missions, functions, or activities. For example, the Navy’s Operation and Maintenance budget requests funding for civilian personnel in several categories, such as “Ship Operational Support and Training” and “Administration.” The Army Operation and Maintenance budget requests funding for civilian acquisition personnel in one combined category labeled as “Other Service Support.” Apart from the portions of the budget described above, certain DOD programs have specific budget exhibits that identify their funding requirements. In reviewing the exhibits for the 11 selected programs, we found that individual program requests include personnel costs that are not funded centrally, such as contractor support services costs, but these costs are generally not specifically identified as personnel costs. 
For example, according to program officials, the Air Force’s B-2 Defensive Management Modernization program requested funds in its exhibit accompanying the fiscal year 2019 Research, Development, Test, and Evaluation budget request labeled “PMA,” which stands for Program Management Administration. According to program officials, PMA includes costs for contractor support services, government travel, and other costs but does not include civilian personnel costs (see figure 7). In reviewing and discussing the budget exhibits for the 11 selected programs with program officials, we found that personnel costs, including civilian, contractor, and FFRDC, were generally spread across multiple budget request lines that were associated with various tasks but were not specifically identified as personnel costs. These include lines such as Development Test & Evaluation. For example, the Navy’s Joint Precision Approach and Landing System’s fiscal year 2019 Research, Development, Test, and Evaluation budget exhibit included personnel costs across seven lines that represented various efforts including ship integration, test and evaluation, systems engineering, and program management support, as shown in figure 8. Of the 11 programs’ fiscal year 2019 budgets we reviewed, one identified personnel costs on a single line, and the remaining 10 programs included personnel costs in two or more budget lines. We provided a draft of this report to DOD for comment. DOD provided technical comments that we incorporated into this report as appropriate. We are sending copies of this report to the appropriate congressional committees; the Acting Secretary of Defense and the Secretaries of the Army, Navy, and Air Force, as well as the Under Secretary of Defense for Personnel and Readiness. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or dinapolit@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Justin Jaynes (Assistant Director); Bradley Terry (Analyst-in-Charge); Matthew T. Crosby; Stephanie Gustafson; Heather B. Miller; Karen Richey; Miranda Riemer; Robin Wilson; and Chris Zakroff made significant contributions to this review.
|
In 2018, DOD estimated that its 82 major defense acquisition programs would cost over $1.69 trillion in total to acquire. DOD relies on program offices—composed of civilian, military and contractor support personnel—to manage and oversee these technically complex programs. GAO was asked to review factors affecting DOD's personnel needs for its acquisition programs and how DOD budgets for the costs associated with these personnel. This report describes (1) factors affecting the workforce size, composition, and mix of contractor and government personnel, as well as organizational structure for selected programs; and (2) how personnel costs associated with those selected programs are included in DOD's budget justification documents. GAO reviewed DOD acquisition, workforce, and financial management policies and regulations and identified a non-generalizable sample of 11 major defense acquisition programs, including programs from each military department that were recently approved to enter into system development. GAO requested information from each of these programs to identify the number and type of personnel supporting the program, reviewed program documentation, and interviewed program officials. GAO also reviewed DOD's budget justification documents for fiscal years 2018 and 2019. The workforce size, composition, and mix, as well as the organizational structure of the 11 Department of Defense (DOD) major defense acquisition programs GAO reviewed were influenced by several interrelated factors. These factors include the government's role in developing and integrating key technologies, the availability of government personnel to provide the skills needed, and whether the program was managed as part of a portfolio of related programs or as a stand-alone program. For example, programs that assumed more responsibility for developing and integrating key technologies generally had a larger workforce, which was primarily composed of engineering and technical personnel. 
Program officials GAO met with stated that they generally prefer to use government personnel, but use contractor support when the number of government personnel allocated to the program is not sufficient to meet their needs, the technical skills are not available or are limited within the government, or to fulfill short-term tasks that are too brief to justify hiring government personnel. GAO also found that DOD structured the 11 programs to allow them to leverage available personnel with the necessary skills. Two programs were structured as standalone programs because they were new, high priority, and complex. The other nine programs were managed as a part of a portfolio of related programs. For example, the Air Force's F-15 program office manages a number of programs that add capabilities to the existing system. DOD's Financial Management Regulation, which governs the formulation and presentation of DOD's budget request, gives DOD flexibility in how it submits program personnel costs. Consequently, the personnel costs for the 11 programs GAO reviewed were not separately and distinctly identified from other costs. For example, costs for civilian and military personnel are often centrally funded through appropriations categories that support many DOD activities and do not provide information on specific program personnel costs. GAO also found that costs for contractor support are often combined with other costs in individual program budget exhibits.
|
Our October 2017 report found that CMS provides guidance to Medicare Part D plan sponsors on how the plan sponsors should monitor opioid overutilization problems among Part D beneficiaries. The agency includes this guidance in its annual letters to plan sponsors, known as call letters; it also provided a supplemental memo to plan sponsors in 2012. Among other things, these guidance documents instructed plan sponsors to implement a retrospective drug utilization review (DUR) system to monitor beneficiary utilization starting in 2013. As part of the DUR systems, CMS requires plan sponsors to have methods to identify beneficiaries who are potentially overusing specific drugs or groups of drugs, including opioids. Also in 2013, CMS created the Overutilization Monitoring System (OMS), which outlines criteria to identify beneficiaries with high-risk use of opioids and to oversee sponsors’ compliance with CMS’s opioid overutilization policy. Plan sponsors may use the OMS criteria for their DUR systems, but they have some flexibility to develop their own targeting criteria within CMS guidance. At the time of our review, the OMS considered beneficiaries to be at a high risk of opioid overuse when they met all three of the following criteria: (1) received a total daily MED greater than 120 mg for 90 consecutive days, (2) received opioid prescriptions from four or more providers in the previous 12 months, and (3) received opioids from four or more pharmacies in the previous 12 months. The criteria excluded beneficiaries with a cancer diagnosis and those in hospice care, for whom higher doses of opioids may be appropriate. Through the OMS, CMS generates quarterly reports that list beneficiaries who meet all of the criteria and who are identified as high-risk, and then distributes the reports to the plan sponsors. 
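The three-part OMS screen described above is, in effect, a conjunctive rule with exclusions, and can be sketched in a few lines of code. This is an illustrative model only; the data structure and field names are our own assumptions, not CMS's implementation.

```python
from dataclasses import dataclass

@dataclass
class BeneficiaryClaims:
    # Hypothetical summary of a beneficiary's Part D opioid claims history
    days_over_120_med: int   # longest consecutive run of days with total daily MED > 120 mg
    prescribers_12mo: int    # distinct opioid prescribers in the previous 12 months
    pharmacies_12mo: int     # distinct dispensing pharmacies in the previous 12 months
    cancer_diagnosis: bool   # excluded population: higher doses may be appropriate
    hospice_care: bool       # excluded population: higher doses may be appropriate

def meets_oms_criteria(b: BeneficiaryClaims) -> bool:
    """Pre-2018 OMS high-risk rule: all three criteria must be met,
    with exclusions for cancer and hospice beneficiaries."""
    if b.cancer_diagnosis or b.hospice_care:
        return False
    return (b.days_over_120_med >= 90
            and b.prescribers_12mo >= 4
            and b.pharmacies_12mo >= 4)

# A high-dose beneficiary using a single provider and pharmacy is not
# flagged, even though the dose alone may pose a risk of harm.
single_source = BeneficiaryClaims(120, 1, 1, False, False)
multi_source = BeneficiaryClaims(120, 4, 4, False, False)
```

Because all three criteria must hold, beneficiaries who receive high doses from a single provider and a single pharmacy fall outside the OMS quarterly reports.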
Plan sponsors are expected to review the list of identified beneficiaries, determine appropriate action, and then respond to CMS with information on their actions within 30 days. According to CMS officials, the agency also expects that plan sponsors will share any information with CMS on beneficiaries that they identify through their own DUR systems. We found that some actions plan sponsors may take include the following: Case management. Case management may include an attempt to improve coordination issues, and often involves provider outreach, whereby the plan sponsor will contact the providers associated with the beneficiary to let them know that the beneficiary is receiving high levels of opioids and may be at risk of harm. Beneficiary-specific point-of-sale (POS) edits. Beneficiary-specific POS edits are restrictions that limit these beneficiaries to certain opioids and amounts. Pharmacists receive a message when a beneficiary attempts to fill a prescription that exceeds the limit in place for that beneficiary. Formulary-level POS edits. These edits alert providers who may not have been aware that their patients are receiving high levels of opioids from other doctors. Referrals for investigation. According to the six plan sponsors we interviewed, the referrals can be made to CMS’s National Benefit Integrity Medicare Drug Integrity Contractor (NBI MEDIC), which is responsible for identifying and investigating potential Part D fraud, waste, and abuse, or to the plan sponsor’s own internal investigative unit, if they have one. After investigating a particular case, they may refer the case to the HHS-OIG or a law enforcement agency, according to CMS, NBI MEDIC, and one plan sponsor. 
Based on CMS’s use of the OMS and the actions taken by plan sponsors, CMS reported a 61 percent decrease from calendar years 2011 through 2016 in the number of beneficiaries meeting the OMS criteria of high risk—from 29,404 to 11,594 beneficiaries—which agency officials consider an indication of success toward its goal of decreasing opioid use disorder. In addition, we found that CMS relies on separate patient safety measures developed and maintained by the Pharmacy Quality Alliance to assess how well Part D plan sponsors are monitoring beneficiaries and taking appropriate actions. In 2016, CMS started tracking plan sponsors’ performance on three patient safety measures that are directly related to opioids. The three measures are similar to the OMS criteria in that they identify beneficiaries with high dosages of opioids (120 mg MED), beneficiaries that use opioids from multiple providers and pharmacies, and beneficiaries that do both. However, one difference between these approaches is that the patient safety measures separately identify beneficiaries who fulfill each criterion individually. Our October 2017 report also found that while CMS tracks the total number of beneficiaries who meet all three OMS criteria as part of its opioid overutilization oversight across the Part D program, it does not have comparable information on most beneficiaries who receive high doses of opioids—regardless of the number of providers and pharmacies used—and who therefore may be at risk for harm, according to CDC guidelines. These guidelines note that long-term use of high doses of opioids—those above a MED of 90 mg per day—are associated with significant risk of harm and should be avoided if possible. Based on the CDC guidelines, outreach to Part D plan sponsors, and CMS analyses of Part D data, CMS has revised its current OMS criteria to include more at-risk beneficiaries beginning in 2018. 
The new OMS criteria define a high user as having an average daily MED greater than 90 mg for any duration, and who receives opioids from four or more providers and four or more pharmacies, or from six or more providers regardless of the number of pharmacies, for the prior 6 months. Based on 2015 data, CMS found that 33,223 beneficiaries would have met these revised criteria. While the revised criteria will help identify beneficiaries who CMS determined are at the highest risk of opioid misuse and therefore may need case management by plan sponsors, OMS will not provide information on the total number of Part D beneficiaries who may also be at risk of harm. In developing the revised criteria, CMS conducted a one-time analysis that estimated there were 727,016 beneficiaries with an average MED of 90 mg or more, for any length of time during a 6 month measurement period in 2015, regardless of the number of providers or pharmacies used. These beneficiaries may be at risk of harm from opioids, according to CDC guidelines, and therefore tracking the total number of these beneficiaries over time could help CMS to determine whether it is making progress toward meeting the goals specified in its Opioid Misuse Strategy to reduce the risk of opioid use disorders, overdoses, inappropriate prescribing, and drug diversion. However, CMS officials told us that the agency does not keep track of the total number of these beneficiaries, and does not have plans to do so as part of OMS. (See fig. 1.) We also found that in 2016, CMS began to gather information from its patient safety measures on the number of beneficiaries who use more than 120 mg MED of opioids for 90 days or longer, regardless of the number of providers and pharmacies. The patient safety measures identified 285,119 such beneficiaries—counted as member-years—in 2016. 
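The revised rule described above combines a dose threshold with provider and pharmacy counts. A minimal sketch, under our own naming assumptions (this is not CMS's implementation), illustrates both the revised OMS screen and the broader dose-only population discussed above:

```python
def meets_revised_oms_criteria(avg_daily_med_mg: float,
                               prescribers_6mo: int,
                               pharmacies_6mo: int) -> bool:
    """2018 OMS rule as described: average daily MED greater than 90 mg for
    any duration over the prior 6 months, combined with either (a) 4+
    prescribers and 4+ pharmacies, or (b) 6+ prescribers regardless of the
    number of pharmacies. Illustrative sketch; parameter names are ours."""
    high_dose = avg_daily_med_mg > 90
    multiple_sources = ((prescribers_6mo >= 4 and pharmacies_6mo >= 4)
                        or prescribers_6mo >= 6)
    return high_dose and multiple_sources

def high_dose_only(avg_daily_med_mg: float) -> bool:
    """The broader population in CMS's one-time analysis: average MED of
    90 mg or more, regardless of the number of providers or pharmacies."""
    return avg_daily_med_mg >= 90
```

Applied to CMS's 2015 data, the first screen would have captured an estimated 33,223 beneficiaries, while the dose-only screen captured an estimated 727,016; that difference is the population OMS does not track.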
However, the patient safety measure information does not include all at-risk beneficiaries, because the threshold is more lenient than indicated in CDC guidelines and CMS’s new OMS criteria. Because neither the OMS criteria nor the patient safety measures include all beneficiaries potentially at risk of harm from high opioid doses, we recommended that CMS gather information over time on the total number of beneficiaries who receive high opioid morphine equivalent doses, regardless of the number of pharmacies or providers, as part of assessing progress over time in reaching the agency’s goals related to reducing opioid use. HHS concurred with our recommendation. Our October 2017 report found that CMS oversees providers who prescribe opioids to Medicare Part D beneficiaries through its contractor, NBI MEDIC, and the Part D plan sponsors. NBI MEDIC’s data analyses to identify outlier providers. CMS requires NBI MEDIC to identify providers who prescribe high amounts of Schedule II drugs, which include but are not limited to opioids. Using prescription drug data, NBI MEDIC conducts a peer comparison of providers’ prescribing practices to identify outlier providers—the highest prescribers of Schedule II drugs. NBI MEDIC reports the results to CMS. NBI MEDIC’s other projects. NBI MEDIC gathers and analyzes data on Medicare Part C and Part D, including projects using the Predictive Learning Analytics Tracking Outcome (PLATO) system. According to NBI MEDIC officials, these PLATO projects seek to identify potential fraud by examining data on provider behaviors. NBI MEDIC’s investigations to identify fraud, waste, and abuse. NBI MEDIC officials conduct investigations to assist CMS in identifying cases of potential fraud, waste, and abuse among providers for Medicare Part C and Part D. 
The investigations are prompted by complaints from plan sponsors; suspected fraud, waste, or abuse reported to NBI MEDIC’s call center; NBI MEDIC’s analysis of outlier providers; or one of its other data analysis projects. NBI MEDIC’s referrals. After identifying providers engaged in potentially fraudulent overprescribing, NBI MEDIC officials said they may refer cases to law enforcement agencies or the HHS-OIG for further investigation and potential prosecution. Plan sponsors’ monitoring of providers. CMS requires all plan sponsors to adopt and implement an effective compliance program, which must include measures to prevent, detect, and correct Part C or Part D program noncompliance, as well as fraud, waste, and abuse. CMS’s guidance focuses broadly on prescription drugs, and does not specifically address opioids. Our report concluded that although these efforts provide valuable information, CMS does not have all the information necessary to adequately oversee opioid prescribing. CMS’s oversight actions focus broadly on Schedule II drugs rather than specifically on opioids. For example, NBI MEDIC’s analyses to identify outlier providers do not indicate the extent to which they may be overprescribing opioids specifically. According to CMS officials, they direct NBI MEDIC to focus on Schedule II drugs, because these drugs have a high potential for abuse, whether they are opioids or other drugs. However, without specifically identifying opioids in these analyses—or an alternate source of data—CMS lacks data on providers who prescribe high amounts of opioids, and therefore cannot assess progress toward meeting its goals related to reducing opioid use, which would be consistent with federal internal control standards. Federal internal control standards require agencies to conduct monitoring activities and to use quality information to achieve objectives and address risks. 
As a result, we recommended that CMS require NBI MEDIC to gather separate data on providers who prescribe high amounts of opioids. This would allow CMS to better identify those providers who are inappropriately and potentially fraudulently overprescribing opioids. HHS agreed, and noted that it intends to work with NBI MEDIC to identify trends in outlier prescribers of opioids. Our report also found that CMS lacks key information necessary for oversight of opioid prescribing, because it does not require plan sponsors to report to NBI MEDIC or CMS cases of fraud, waste, and abuse; cases of overprescribing; or any actions taken against providers. Plan sponsors collect information on cases of fraud, waste, and abuse, and can choose to report this information to NBI MEDIC or CMS. While CMS receives information from plan sponsors who voluntarily report their actions, it does not know the full extent to which plan sponsors have identified providers who prescribe high amounts of opioids, or the full extent to which sponsors have taken action to reduce overprescribing. We concluded that without this information, it is difficult for CMS to assess progress in this area, which would be consistent with federal internal control standards. In our report, we recommended that CMS require plan sponsors to report on investigations and other actions taken related to providers who prescribe high amounts of opioids. HHS did not concur with this recommendation. HHS noted that plan sponsors have the responsibility to detect and prevent fraud, waste, and abuse, and that CMS reviews cases when it conducts audits. HHS also stated that it seeks to balance requirements on plan sponsors when considering new regulatory requirements. However, without complete reporting—such as reporting from all plan sponsors on the actions they take to reduce overprescribing—we believe that CMS is missing key information that could help assess progress in this area. 
Due to the importance of this information for achieving the agency’s goals, we continue to believe that CMS should require plan sponsors to report on the actions they take to reduce overprescribing. - - - - - In conclusion, a large number of Medicare Part D beneficiaries use potentially harmful levels of prescription opioids, and reducing the inappropriate prescribing of these drugs is a key part of CMS’s strategy to decrease the risk of opioid use disorder, overdoses, and deaths. Despite working to identify and decrease egregious opioid use behavior—such as doctor shopping—among Medicare Part D beneficiaries, CMS lacks the necessary information to effectively determine the full number of beneficiaries at risk of harm, as well as other information that could help CMS assess whether its efforts to reduce opioid overprescribing are effective. It is important that health care providers help patients to receive appropriate pain treatment, including opioids, based on the consideration of benefits and risks. Access to information on the risks that Medicare patients face from inappropriate or poorly monitored prescriptions, as well as information on providers who may be inappropriately prescribing opioids, could help CMS as it works to improve care. Chairman Jenkins, Ranking Member Lewis, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff members have any questions concerning this testimony, please contact me at (202) 512-7114 or CurdaE@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions to this testimony include Will Simerl (Assistant Director), Carolyn Feis Korman (Analyst-in-Charge), Amy Andresen, Drew Long, Samantha Pawlak, Vikki Porter, and Emily Wilson. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Misuse of prescription opioids can lead to overdose and death. In 2016, over 14 million Medicare Part D beneficiaries received opioid prescriptions, and spending for opioids was almost $4.1 billion. GAO and others have reported on inappropriate activities and risks associated with these prescriptions. This statement is based on GAO's October 2017 report (GAO-18-15) and discusses (1) CMS oversight of beneficiaries who receive opioid prescriptions under Part D, and (2) CMS oversight of providers who prescribe opioids to Medicare Part D beneficiaries. For the October 2017 report, GAO reviewed CMS opioid utilization and prescriber data, CMS guidance for plan sponsors, and CMS's strategy to prevent opioid misuse. GAO also interviewed CMS officials, the six largest Part D plan sponsors, and 12 national associations selected to represent insurance plans, pharmacy benefit managers, physicians, patients, and regulatory and law enforcement authorities. The Centers for Medicare & Medicaid Services (CMS), within the Department of Health and Human Services (HHS), provides guidance on the monitoring of Medicare beneficiaries who receive opioid prescriptions to plan sponsors—private organizations that implement the Medicare drug benefit, Part D—but lacks information on most beneficiaries at risk of harm from opioid use. CMS provides guidance to plan sponsors on how they should monitor opioid overutilization among Medicare Part D beneficiaries, and requires them to implement drug utilization review systems that use criteria similar to CMS's. CMS's criteria focused on beneficiaries who do all the following: (1) receive prescriptions of high doses of opioids, (2) receive prescriptions from four or more providers, and (3) fill prescriptions at four or more pharmacies. According to CMS, this approach focused actions on beneficiaries the agency determined to have the highest risk of harm. 
CMS's criteria, including recent revisions, do not provide sufficient information about the larger population of potentially at-risk beneficiaries. CMS estimates that while 33,223 beneficiaries would have met the revised criteria in 2015, 727,016 would have received high doses of opioids regardless of the number of providers or pharmacies. In 2016, CMS began to collect information on some of these beneficiaries using a higher dosage threshold for opioid use. This approach misses some who could be at risk of harm, based on Centers for Disease Control and Prevention guidelines. As a result, CMS is limited in its ability to assess progress toward meeting the broader goals of its Opioid Misuse Strategy for the Medicare and Medicaid programs, which includes activities to reduce the risk of harm to beneficiaries from opioid use. CMS oversees the prescribing of drugs at high risk of abuse through a variety of projects, but does not analyze data specifically on opioids. According to CMS officials, CMS and plan sponsors identify providers who prescribe large amounts of drugs with a high risk of abuse, and those suspected of fraud or abuse may be referred to law enforcement. However, GAO found that CMS does not identify providers who may be inappropriately prescribing large amounts of opioids separately from other drugs, and does not require plan sponsors to report actions they take when they identify such providers. As a result, CMS is lacking information that it could use to assess how opioid prescribing patterns are changing over time, and whether its efforts to reduce harm are effective. In the October 2017 report, GAO made three recommendations that CMS (1) gather information on the full number of at-risk beneficiaries receiving high doses of opioids, (2) identify providers who prescribe high amounts of opioids, and (3) require plan sponsors to report to CMS on actions related to providers who inappropriately prescribe opioids. 
HHS concurred with the first two recommendations, but not with the third. GAO continues to believe the recommendation is valid, as discussed in the report and in this statement.
An amphibious operation is a military operation launched from the sea by an amphibious force, embarked in ships or craft, with the primary purpose of introducing a landing force ashore to accomplish an assigned mission. An amphibious force is comprised of (1) an amphibious task force and (2) a landing force, together with other forces that are trained, organized, and equipped for amphibious operations. The amphibious task force is a group of Navy amphibious ships, most frequently deployed as an Amphibious Ready Group (ARG). The landing force is a Marine Air-Ground Task Force—which includes certain elements, such as command, aviation, ground, and logistics—embarked aboard the Navy amphibious ships. A Marine Expeditionary Unit (MEU) is the most commonly deployed Marine Air-Ground Task Force. Together, this amphibious force is referred to as an ARG-MEU. The Navy's amphibious ships are part of its surface force. An ARG consists of a minimum of three amphibious ships, typically an amphibious assault ship, an amphibious transport dock ship, and an amphibious dock landing ship. Figure 1 shows the current number of amphibious ships by class and a description of their capabilities. The primary function of amphibious ships is to transport Marines and their equipment and supplies. The ARG includes an amphibious squadron that is comprised of a squadron staff, tactical air control squadron detachment, and fleet surgical team. This task organization also includes a naval support element that is comprised of a helicopter squadron for search and rescue and antisurface warfare, two landing craft detachments for cargo lift, and a beachmaster unit detachment to control beach traffic. An MEU consists of around 2,000 Marines, their aircraft, their landing craft, their combat equipment, and about 15 days' worth of supplies. 
The MEU includes a standing command element; a ground element consisting of a battalion landing team; an aviation element consisting of a composite aviation squadron of multiple types of aircraft; and a logistics element consisting of a combat logistics battalion. Figure 2 provides an overview of the components of a standard ARG-MEU. An amphibious force can be scaled to include a larger amphibious task force, such as an Expeditionary Strike Group, and a larger landing force, such as a Marine Expeditionary Brigade or Marine Expeditionary Force (MEF) for larger operations. A Marine Expeditionary Brigade is comprised of 3,000 to 20,000 personnel and is organized to respond to a full range of crises, such as forcible entry and humanitarian assistance. A MEF is the largest standing Marine Air-Ground Task Force and the principal Marine Corps warfighting organization. Each MEF consists of 20,000 to 90,000 Marines. MEFs are used in major theater war and other missions across the range of military operations. There are three standing MEFs—I MEF at Camp Pendleton, California; II MEF at Camp Lejeune, North Carolina; and III MEF in Okinawa, Japan. Navy ships train to a list of mission-essential tasks that are assigned based on the ship’s required operational capabilities and projected operational environments. Most surface combatants, including cruisers, destroyers, and all amphibious ships, have mission-essential tasks related to amphibious operations. The Navy uses a phased approach to training, known as the Fleet Response Training Plan. The training plan for amphibious ships is broken up into five phases: maintenance, basic, advanced, integrated, and sustainment. The maintenance phase is focused on the completion of ship maintenance, with a secondary focus on individual and team training. The basic phase focuses on development of core capabilities and skills through the completion of basic-level inspections, assessments, and training requirements, among other things. 
This phase can include certification in areas such as mobility, communications, amphibious well-deck operations, aviation operations, and warfare training. The basic phase of training requires limited Marine Corps involvement—mainly to certify amphibious ships for well-deck and flight-deck operations. The advanced phase focuses on advanced tactical training, including amphibious planning. The integrated phase is where individual units and staffs are aggregated into an Amphibious Ready Group (ARG) and train with an embarked MEU or other combat units. The sustainment phase includes training to sustain core skills and provides an additional opportunity for training with Marine Corps units, when possible. Marine Corps units train to accomplish a set of mission-essential tasks for the designed capabilities of the unit. For example, the mission-essential tasks for a Marine Corps infantry battalion include amphibious operations, offensive operations, defensive operations, and stability operations. Many Marine Corps units within the command, aviation, ground, and logistics elements have an amphibious-related mission-essential task. The Marine Corps uses a building-block approach to accomplish training, progressing from individual through collective training. For example, an assault amphibian vehicle battalion will progress through foundational, individual, and basic amphibious training—such as waterborne movement and ship familiarization—to advanced amphibious training, such as live training involving ship-to-shore movement conducted under realistic conditions. Marine Corps unit commanders use Training and Readiness manuals to help develop their training plans. Training and Readiness manuals describe the training events, frequency of training required to sustain skills, and the conditions and standards that a unit must accomplish to be certified in a mission-essential task. 
To be certified in the mission-essential task of amphibious operations, Marine Corps units must train to a standard that may require the use of amphibious ships. For example, ground units with amphibious-related mission-essential tasks will not be certified until live training involving sea-based operations and ship-to-shore movement has been conducted under realistic conditions. Similarly, for aviation squadrons, training for amphibious operations (called sea-based aviation operations) will not be certified until live training involving sea-based operations has been conducted under realistic conditions, including aviation operations from an amphibious platform. Similar types of units, such as all infantry battalions, may train on the same mission-essential tasks. However, unit commanders are ultimately responsible for their units' training, and a variety of factors can lead commanders to adopt different approaches to training, such as the units' assigned missions or deployment locations. Marine Corps units that are scheduled to deploy as part of an ARG-MEU will follow a standardized 6-month predeployment training program that gradually builds collective skill sets over three phases, as depicted in figure 3. The Marine Corps' use of virtual training devices has increased over time. Virtual training devices were first incorporated into training for the aviation community, which has used simulators for more than half a century. The Marine Corps' ground units did not begin using simulators and simulations until later. Specifically, until the 1980s, training in the ground community was primarily live training. Further advances in technology resulted in the acquisition of simulators and simulations with additional capabilities designed to help individual Marines and units acquire and refine skills through more concentrated and repetitive training. 
For example, the Marine Corps began using devices that allowed individual Marines to conduct training in basic and advanced marksmanship and weapons employment tactics. More recently, during operations in Iraq and Afghanistan, the Marine Corps introduced a number of new virtual training devices to prepare Marines for conditions on the ground and for emerging threats. For example, to provide initial and sustainment driver training, the Marine Corps began using simulators that can be reconfigured to replicate a variety of vehicles. In addition, in response to an increase in vehicle rollovers, the Marine Corps began using egress trainers to train Marines to safely evacuate their vehicles. The Marine Corps has also developed virtual training devices that can be used to train Marines in collective training, such as amphibious operations. For example, the Marine Air-Ground Task Force Tactical Warfare Simulation is a constructive simulation that provides training on planning and tactical decision making for the Marine Corps' command element. See figure 4 for a description of examples of Marine Corps devices that can be used for individual through collective training. Navy and Marine Corps units that are deploying as part of an ARG-MEU completed their required training for amphibious operations, but several factors have limited the ability of Marine Corps units to conduct training for other amphibious operations–related priorities. The Navy and Marine Corps have taken steps to identify and address amphibious training shortfalls, but their efforts to mitigate these shortfalls have not prioritized available training resources, systematically evaluated potential training resource alternatives to accomplish the services' amphibious operations training priorities, or monitored progress toward achieving the priorities. 
Navy and Marine Corps units deploying as part of ARG-MEUs have completed required training for amphibious operations, but the Marine Corps has been unable to consistently accomplish training for other service amphibious operations priorities. We found that Navy amphibious ships have completed training for amphibious operations. Specifically, based on our review of deployment certification messages from 2014 through 2016, we found that each deploying Navy ARG completed training for the amphibious operations mission in accordance with training standards. Similarly, we found that each MEU completed all of its mission-essential tasks that are required during the predeployment training program. These mission-essential tasks cover areas such as amphibious raid, amphibious assault, and noncombatant evacuation operations, among other operations. However, while the Marine Corps has completed amphibious operations training for the MEU, based on our review of unit-level readiness data from fiscal years 2014 through 2016, we found that the service has been unable to fully accomplish training for its other amphibious operations priorities, which include home-station unit training to support contingency requirements, service-level exercises, and experimentation and concept development for amphibious operations. Specific details of these shortfalls were omitted because the information is classified. Additionally, Marine Corps officials cited shortfalls in their ability to conduct service-level exercises that train individuals and units on amphibious operations–related skills, as well as provide opportunities to conduct experimentation and concept development for amphibious operations. In particular, officials responsible for planning and executing these exercises told us that one of the biggest challenges is aligning enough training resources, such as amphibious ships, to accurately replicate a large-scale amphibious operation. 
For example, officials from III MEF told us that the large-scale amphibious exercise Ssang Yong is planned to be conducted every other year, but that the exercise requires the availability and alignment of two ARG-MEUs in order to have enough forces to conduct the exercise. These officials stated that this alignment may only happen every 3 years, instead of every other year, as planned. In addition, officials from I MEF and II MEF told us that their large-scale amphibious exercises are intended to be Marine Expeditionary Brigade–level training exercises; however, these exercises are typically only able to include enough amphibious ships to support a MEU, while the other forces must be simulated. Despite these limitations, Navy and Marine Corps officials have identified these service-level exercises as a critical training venue to support training for the Marine Expeditionary Brigade command element and to rebuild the capability to command and control forces participating in amphibious operations. Based on our analysis of interviews with 23 Marine Corps units, we found that all 23 units cited the lack of available amphibious ships as the primary factor limiting training for home-station units. The Navy's fleet of amphibious ships has declined by half in the last 25 years, from 62 in 1990 to 31 today, with current shipbuilding plans calling for four additional amphibious ships to be added by fiscal year 2024, increasing the total number of amphibious ships to 35 (see fig. 5). Navy and Marine Corps officials noted a number of issues that can affect the amount of training time that is available with the current amphibious fleet. In particular, the current fleet of ships is in a continuous cycle of maintenance, ARG-MEU predeployment training, and sustainment periods, leaving little additional time for training with home-station units and participation in service-level exercises. 
Navy officials told us that the Optimized Fleet Response Plan may provide additional training opportunities for Marine Corps units during the amphibious ships' sustainment periods. Given the availability of the current inventory of amphibious ships, Marine Corps requests to the Navy for amphibious ships and other craft have been difficult to fulfill. For example, data from I MEF showed that the Navy was unable to fulfill 293 of the 314 (93 percent) I MEF requests for Navy ship support for training in fiscal year 2016. Similarly, data from II MEF showed that in fiscal year 2016 the Navy was unable to fulfill 19 of 40 requests for ship services. We identified issues with the completeness of these request data. Specifically, we found that the data may not fully capture the Marine Corps' demand for amphibious ships. As a result, this information may overstate the ability of the Navy to fulfill these requests. We discuss these data-reliability issues further below. Marine Corps officials from the 23 units we interviewed also cited other factors that limit opportunities for amphibious operations training, such as the following: Access to range space: Seventeen of 23 Marine Corps units we interviewed identified access to range space as a factor that can limit their ability to conduct amphibious operations training. Unit officials told us that priority for training resources, including range access, is given to units that will be part of a MEU deployment, leaving little range time available for other units. In addition, unit officials told us that the amount of range space available can affect the scope and realism of the training that they are able to conduct. Training for amphibious operations can require a large amount of range space, because the operational area extends from the offshore waters onto the landing beach and further inland. 
A complete range capability requires maneuver space, tactical approaches, and air routes that allow for maneuverability and evasive actions. However, officials from II MEF told us that the size of the landing beach near Camp Lejeune, North Carolina, makes conducting beach-clearing operations infeasible. Adequate ranges have been identified as a challenge across DOD. For example, according to DOD's 2016 Report to Congress on Sustainable Ranges, some Marine Corps installations lack fully developed maneuver corridors, training areas, and airspace to adequately support ground and air maneuver inland from landing beaches. Maintenance delays, bad weather, and transit time: Ten of 23 Marine Corps units told us that changes to an amphibious ship's schedule resulting from maintenance overruns or bad weather can also reduce the time available for a ship to be used for training. In addition, the transit time a ship needs to reach Marine Corps units can further reduce the time available for training. This is a particular challenge for II MEF units stationed in North Carolina and South Carolina that train with amphibious ships stationed in Virginia and Florida. According to II MEF officials, transit time to Marine Corps units can take up to 18 hours in good weather, using up almost a full day of available training time for transit. High pace of deployments: Five of 23 Marine Corps units told us that the high pace of deployments and the need to prepare for upcoming deployments limited their opportunity to conduct training for amphibious operations. For example, II MEF officials told us that an infantry battalion that is scheduled to deploy as part of a Special Purpose Marine Air-Ground Task Force to Africa generally does not embark on an amphibious ship or have amphibious operations as part of its assigned missions. As a result, the unit will likely not conduct amphibious operations during its predeployment training. 
The Navy and Marine Corps have taken some steps to mitigate the training shortfall for their amphibious operations priorities, but these efforts are incomplete because they have not prioritized available training resources, systematically evaluated potential training resource alternatives to accomplish the services' amphibious operations training priorities, or monitored progress toward achieving the priorities. The Navy and Marine Corps are in the process of identifying (1) the amount of amphibious operations capabilities and capacity that are needed to achieve the services' wartime requirements, and (2) the training resources and funding required to meet the amphibious operations–related training priorities. First, in December 2016, the Navy conducted a force structure assessment that established a need for a fleet of 38 amphibious ships. Based on the assessment, the Chief of Naval Operations and the Commandant of the Marine Corps determined that increasing the Navy's amphibious fleet from a 31-ship to a 38-ship amphibious fleet would allow the Marine Corps to meet its wartime needs of having enough combined capacity to transport two Marine Expeditionary Brigades. Specifically, a 38-ship fleet would provide 17 amphibious ships for each Marine Expeditionary Brigade, plus four additional ships to account for ships that are unavailable due to maintenance. According to Navy and Marine Corps officials, an increase in the number of amphibious ships should create additional opportunities for the Navy and Marine Corps to accomplish amphibious operations training. Second, the Marine Corps has also recognized a need to improve the capacity and experience of its forces to conduct amphibious operations and is taking steps to identify the training resources and funding required to meet its amphibious operations–related training priorities. To accomplish this task, in 2016 the Marine Corps initiated the Amphibious Operations Training Requirements review. 
As a part of this review, the Marine Corps has comprehensively determined which units require amphibious operations training and is in the process of refining the training and readiness manuals for each type of Marine Corps unit to include an amphibious-related mission-essential task as appropriate, and better emphasizing the types of conditions and standards for amphibious training in the manuals. According to officials, as of May 2017, Marine Corps Forces Command has reviewed the mission-essential tasks for 60 unit types and found that 31 unit types already had a mission-essential task for amphibious operations, while another 5 unit types required that an amphibious-related mission-essential task be added. The review further found that the other 24 unit types do not require a mission-essential task for amphibious operations. In addition, the Marine Corps Training and Education Command noted in its review that certain training standards within the training manuals are being refined in order to distinguish between levels of training accomplished. For example, for ground-based units, such as infantry battalions, an additional training standard was added for all amphibious-related mission-essential tasks specifying that a unit will not be considered both trained and certified unless live training using amphibious ships has been conducted under realistic conditions. The Amphibious Operations Training Requirements review is also intended to accomplish other actions to better define the services' amphibious operations training priorities, but these actions were incomplete at the time of our review. Specifically, the review will also establish an objective for the number of Marine Corps forces that must be trained and ready to conduct amphibious operations at a given point in time, and the amount of funding for ship steaming days that is required to provide training for the services' amphibious operations priorities. 
According to officials responsible for the Amphibious Operations Training Requirements review, an outcome of the review is expected to be a combined Navy and Marine Corps directive, signed by the Chief of Naval Operations and the Commandant of the Marine Corps, that should provide guidance to better define a naval objective for amphibious readiness and required ship steaming days. Marine Corps officials estimated that the directive will be issued in the summer of 2017. With these two efforts, the Navy and Marine Corps have been proactive in identifying the underlying problems with training for amphibious operations, and their ongoing efforts indicate that addressing this training shortfall is a key priority for the two services. In particular, the proposed Navy and Marine Corps directive that will result from the Amphibious Operations Training Requirements review should help establish a naval objective for amphibious readiness with the corresponding units that need to be trained and ready in amphibious operations, as well as a basis for estimating the required amount of training resources, such as ship steaming days, to meet amphibious operations training priorities. When completed, this directive will be an important first step to clearly identify the total resources needed for amphibious operations training. However, the Navy's and Marine Corps' current approach for amphibious operations training does not incorporate strategic training and leading risk-management practices. Specifically, we found the following: The Marine Corps does not prioritize all available training resources: Based on our prior work on strategic training, we found that agencies need to align their training processes and available resources to support outcomes related to the agency's missions and goals, and that those resources should be prioritized so that the most-important training needs are addressed first. 
For certain units that are scheduled to deploy as part of an ARG-MEU, the Navy and Marine Corps have a formal training program that specifies the timing and resource needs across all phases of the training, including the number of days embarked on amphibious ships that the Navy and Marine Corps need to complete their training events. Officials stated that available training resources, including access to amphibious ships for training, are prioritized for these units. However, for other Marine Corps units not scheduled for a MEU deployment, officials described an ad hoc process to allocate any remaining availabilities of amphibious ship training time among home-station units. Specifically, officials stated that the current process identifies units that are available for training when an amphibious ship becomes available, rather than aligning the next highest-priority units with available training resources. For example, officials at Headquarters Marine Corps told us that the Navy will identify training opportunities with amphibious ships at quarterly scheduling conferences. The Marine Corps will fill these training opportunities with units that are available to accomplish training during that period, but not based on a process that identifies its highest-priority home-station units for training. Similarly, a senior officer with First Marine Division told us that he would prioritize home-station units that have gone the longest without conducting amphibious-related training, which may not be the units with the highest priority for amphibious operations training. The Navy and Marine Corps have recognized the need for reinstituting a recurring training program for home-station units, but efforts to implement such a program had not been started at the time of our review. 
According to Navy officials, the Navy and Marine Corps had a recurring training program in the past to provide home-station units with amphibious operations training, called the Type Commander Amphibious Training series, or TCAT, but this program was phased out 15 years ago with the implementation of the Fleet Response Training Plan, which is more focused on ARG-MEU training. Navy and Marine Corps officials told us that reinstituting a similar training program would allow the services to better prioritize training resources and align units to achieve the services' proposed naval objective for amphibious readiness. Without establishing a process to prioritize available training resources for home-station units, the Navy and Marine Corps cannot be certain that scarce training opportunities are being aligned with their highest-priority needs. The Navy and Marine Corps do not systematically evaluate a full range of training resource alternatives to achieve amphibious operations training priorities: Our prior work on risk management has found that evaluating and selecting alternatives are critical steps for addressing operational capability gaps. Based on our interviews with officials across the Marine Expeditionary Forces and review of documentation, we identified a number of alternatives that could help mitigate the risk to the services' amphibious capability due to limited training opportunities. These alternatives include utilizing additional training opportunities during an amphibious ship's basic phase of training; using alternative platforms for training, such as Marine Prepositioning Force ships or the amphibious ships of allies; utilizing smaller Navy craft or pier-side ships to meet training requirements; and leveraging developmental and operational test events. However, the Navy and Marine Corps have not developed a systematic approach to explore and incorporate selected training resource alternatives into home-station training plans. 
Specifically, officials told us that the combined Navy and Marine Corps directive that is expected to be completed later this year will better define a naval objective for amphibious readiness and the required training resources to achieve it, and will provide guidance to the two services to better identify training resource alternatives for home-station training. Based on our review of briefing materials on the Amphibious Operations Training Requirements review, however, we found that the services have discussed using some training resource alternatives to mitigate amphibious operations training shortfalls, such as pier-side ships to minimize the required number of ship steaming days, but the services have not systematically evaluated potential alternatives. Marine Corps officials told us that fully evaluating resource alternatives, particularly the use of simulated training and pier-side ships, could allow for more amphibious training without the need for additional steaming days. Fully exploring alternatives, such as utilizing alternative platforms and pier-side ships, and incorporating a broader range of training resource alternatives into training will be important as the Navy and Marine Corps try to achieve their training priorities and could help bridge the time gap until more amphibious ships are introduced into the fleet. The Navy and Marine Corps have not developed a process or set of metrics to monitor progress toward achieving their amphibious operations training priorities and mitigating existing shortfalls: Our prior work on risk management has found that monitoring the progress made and results achieved are other critical steps for addressing operational capability gaps. Marine Corps officials told us that the service uses the readiness reporting system (Defense Readiness Reporting System—Marine Corps) to measure the capabilities and capacity of its units to perform amphibious operations. 
While this reporting system allows the Marine Corps to assess the current readiness of units to perform the amphibious operations mission-essential task—an important measure—the system does not provide other information. For example, it does not allow officials to assess the status of service-wide progress in achieving the service's amphibious operations priorities or monitor efforts by the Marine Expeditionary Forces in establishing comprehensive amphibious operations training programs. Marine Corps officials told us that they may need to capture and track additional information, such as the number of amphibious training events scheduled and completed. However, as noted above, we found that the Marine Corps does not capture complete data that could be used for these assessments, such as demand for training time with amphibious ships. For example, officials from I MEF told us they do not capture the full demand for training time with Navy ships because unit commanders will not always submit a request that they believe is unlikely to be filled. In addition, these officials stated that their requests are prescreened before being submitted to the Navy to ensure that the requests align with known periods of available ship time. As a result, requests for amphibious ships and craft are supply-driven, instead of demand-driven, which could affect the services' ability to monitor progress in accomplishing unit training because an underlying metric is incomplete. Establishing a process to monitor progress in achieving amphibious operations training priorities will better enable the Navy and Marine Corps to ensure that their efforts are accomplishing the intended results and help assess the extent to which the services have mitigated any amphibious operations training shortfalls. 
The Navy and Marine Corps have taken some steps to improve coordination between the two services, but the services have not fully incorporated leading collaboration practices that would help drive efforts to improve naval integration for amphibious operations. Our prior work on interagency collaboration has found that certain practices can help enhance and sustain collaboration among federal agencies. These key practices include (1) defining and articulating a common outcome; (2) establishing mutually reinforcing or joint strategies; (3) identifying and addressing needs by leveraging resources; (4) agreeing on roles and responsibilities; (5) establishing compatible policies, procedures, systems, and other means to operate across agency boundaries; (6) developing mechanisms to monitor, evaluate, and report on results; and (7) reinforcing agency accountability for collaborative efforts through plans and reports, among others. Common outcomes and joint strategy: The Navy and Marine Corps have issued strategic documents that discuss the importance of improving naval integration, but the services have not developed a joint strategy that defines and articulates common outcomes to achieve naval integration. We have found that collaborative efforts require agency staff working across agency lines to define and articulate the common outcome or purpose they are seeking to achieve that is consistent with their respective agency goals and mission. In addition, collaborating agencies need to develop strategies that work in concert with those of their partners. These strategies can help in aligning the partner agencies’ activities, processes, and resources to accomplish common outcomes. Further, joint strategies can benefit from establishing specific objectives, related actions, and subtasks with measurable outcomes, target audiences, and agency leads. 
Based on our review of Navy and Marine Corps strategic-level documents, both services identify the importance of improving naval integration, but these documents do not define and articulate outcomes that are common among the services or identify actions and time frames to achieve common outcomes that would be included in a joint strategy. Instead, the documents describe naval integration in varying ways, including as a means to improve the capabilities of naval forces to perform essential functions, such as sea control and maritime security; exercise command and control for large-scale operations, including amphibious operations; and establish concepts to conduct naval operations in contested environments, among other areas. For example, strategic documents developed by the Navy only broadly discuss naval integration. In March 2015, the Department of the Navy issued an updated version of A Cooperative Strategy for 21st Century Seapower. This document discusses building the future naval force, including the need to organize and equip the Marine Expeditionary Brigade to exercise command and control of joint and multinational task forces for larger operations and enable the MEF for larger operations. In January 2016, the Department of the Navy published A Design for Maintaining Maritime Superiority, stating the need to deepen operational relationships with other services to include current and future planning, concept and capability development, and assessment. Marine Corps strategic documents provide a more-detailed and expansive list of areas for improved integration with the Navy, but do not provide guidance on how to achieve those areas. For example, in March 2014, the Marine Corps issued Expeditionary Force 21, which describes the need to increase naval integration, including operational integration between the Marine Expeditionary Brigade and the Navy’s Expeditionary Strike Group. 
Further, in September 2016 the Marine Corps issued a Marine Corps Operating Concept that establishes five tasks needed for the Marine Corps to build its future force, including integrating the naval force to fight at and from the sea. According to Navy and Marine Corps officials, naval integration is a broad term, has different meanings across various service organizations, and is not commonly understood. For example, officials told us that the services have identified the need to develop more-precise language around the term naval integration and articulate common outcomes to create a more-integrated approach to develop naval capabilities. Another senior Marine Corps training official told us that clear guidance is needed on how to define outcomes for naval integration for Navy and Marine Corps command-level staff. In particular, the official stated that without guidance it is unclear how an integrated staff should be composed—whether as two separate Navy and Marine Corps command staffs that should work together, or as one staff composed of both Navy and Marine Corps personnel. The continuing lack of common outcomes and a joint strategy could limit the Navy and Marine Corps’ ability to achieve their goals for naval integration. Further, joint strategies for improving naval integration could help ensure that the services’ efforts are aligned to maximize available training opportunities and resources. Compatible policies, procedures, and systems: The Navy and Marine Corps have established several mechanisms to better coordinate their respective capabilities for amphibious operations training, but have not fully established compatible policies, procedures, and systems to foster and build naval integration. We have found that agencies need to address the compatibility of standards, policies, procedures, and data systems that will be used in the collaborative effort. 
These policies can be used to provide clarity about roles and responsibilities, including how the collaborative effort will be led. The Marine Corps has established a working group that provides a forum for collaboration for amphibious operations. Specifically, Marine Corps Forces Command established a Maritime Working Group to develop and manage a continuing Navy–Marine Corps quarterly collaborative process that is composed of officials from the services’ headquarters, components, and operating forces. According to its mission statement, the Maritime Working Group is intended to align naval amphibious exercise planning to inform force development, war games, experimentation, and coalition participation in order to advance concepts; influence doctrine; inform naval exercise design and sourcing; inform capabilities development; and increase naval warfighting readiness. Based on our observation of the Maritime Working Group in September 2016, we found that the forum covered a broad range of topics including exercise prioritization, experimentation, and planning for future Navy exercises. Following the meeting, a summary of the topics discussed was provided to all participants as well as follow-on actions to be completed. However, we found that the Navy and Marine Corps have not fully established compatible policies and procedures, such as common training tasks and standards and agreed-upon roles and responsibilities, to ensure their efforts to achieve improved naval integration are consistent and sustained. For example, on the West Coast, the Navy and Marine Corps organizations 3rd Fleet and I MEF have issued guidance that formalizes policies that assign 1st Marine Expeditionary Brigade and Expeditionary Strike Group 3 the responsibility to conduct joint training. 
This guidance addresses the importance of Navy and Marine Corps interoperability by formalizing procedures, assigning responsibility, and providing general policy regarding training certification standards for these units. Officials from Fleet Forces Command noted that there is no similar guidance for the East Coast–based units, the 2nd Marine Expeditionary Brigade and Expeditionary Strike Group 2. According to a Navy inspection report, Fleet Forces Command officials stated that they did not institute a deployment certification program for Expeditionary Strike Group 2 because of changing priorities at the command. As a result, the services lack clarity on the roles and responsibilities for these organizations—another key collaboration practice—that is needed to ensure these improvements are prioritized to further and sustain the collaborative effort. Both the Navy and Marine Corps have also identified areas where more-compatible training is needed to improve the skills and abilities of naval forces to perform certain missions. For example, Marine Corps training guidance from III MEF identifies a number of areas where Marine Corps units could improve collective naval capabilities by expanding training with the Navy, including areas such as joint maneuver, seizure and defense of forward naval bases, and facilitating maritime maneuver, among others. The Marine Corps Operating Concept also identifies other areas where integration with the Navy should be enhanced, including for intelligence, surveillance, and reconnaissance; operating in a distributed or disaggregated environment; and employment of fifth-generation aviation, such as the F-35. However, the services have been limited in their efforts to improve naval integration in these areas because they have not established compatible training tasks and standards that would institutionalize Navy and Marine Corps unit-level training requirements. 
Marine Corps officials told us that without compatible training tasks and standards, there is no mechanism to force continued integration between the services outside of forces deploying as part of an ARG-MEU to help develop integrated naval capabilities. We also found that some of the Navy and Marine Corps’ systems for managing and conducting integrated training are incompatible, leading to inefficiencies in the process to manage training events involving Navy and Marine Corps units. For example, the Marine Corps has developed a system called Playbook to help align Navy and Marine Corps resources for training exercises that have been scheduled through the Force Synchronization process. At the time of our review, the Marine Corps was in the process of inputting data for all of its scheduled training exercises, including experiments and war games, into the system in order to align training resources and capabilities to its highest-priority exercises and help build a training and exercise plan through 2020. However, the Navy uses several other data systems to track and capture its training resource requirements, and these systems are incompatible with Playbook. The lack of an interface requires the Marine Corps to manually input and reconcile Navy information into its system. This can cause inefficiencies in arranging training. For example, officials from III MEF told us that adjustments to the Navy’s maintenance schedule for amphibious ships are not always communicated in advance, which can create a misalignment in the availability of amphibious ships and Marine Corps units to conduct training exercises. The Marine Corps has identified the need to define the Navy’s use of Playbook and explore a potential interface with Navy systems, but, as of May 2017, officials said that any evaluation, including potential cost-benefit analyses for addressing the interoperability issues, had not yet taken place. 
With incompatible systems for scheduling training, the services remain at risk of missing opportunities to maximize amphibious operations training. Leverage resources to maximize training opportunities: The Navy and Marine Corps have identified certain opportunities where the two services can better leverage resources to conduct additional amphibious operations training together, but these opportunities have not been fully maximized. We have found that collaborating agencies should look for opportunities to address needs by leveraging each other’s resources, thus obtaining additional benefits that would not be available if they were working separately. Marine Corps Forces Command and Fleet Forces Command, as well as Marine Corps Forces Pacific and Pacific Fleet, have each established a Campaign Plan for Amphibious Operations Training. The purpose of these plans is to align resources for larger, service-level exercises for amphibious operations over a 5-year period. The goal of these exercises is to develop operational proficiency for a Marine Expeditionary Brigade–level contingency or crisis, but the specific focus of the exercise can change from year to year. For example, in 2017 the Bold Alligator exercise will focus on joint forcible entry operations and anti-access/area denial, whereas in prior years the focus has been on other operational areas, such as crisis response. We found that the Navy and Marine Corps also use mechanisms, such as scheduling conferences, to coordinate and prioritize requests for ship services for these exercises, as well as for other training events. The services are looking to better leverage available training resources for amphibious operations, but enhancing their collaborative efforts could help them take greater advantage of potential training opportunities. 
For example, Navy officials have stated that the Surface Warfare Advanced Tactical Training initiative could provide an additional training opportunity for Marine Corps units to train with Navy ships. This initiative is intended to provide amphibious ships with a period of training focused on advanced tactical training, such as defense of the amphibious task force and multiunit ship-to-shore movement, among other objectives. According to a Navy official responsible for the development of this initiative, its primary focus is on advanced tactical training for Navy personnel, but greater integration with the Marine Corps may be needed to accomplish certain training objectives, such as air defense. Further, it would provide an opportunity for the Marine Corps to achieve additional amphibious operations training. However, according to this official, the Marine Corps did not provide input into how its capabilities could be fully incorporated into the Navy’s advanced tactics training or identify potential opportunities to maximize amphibious operations training for both services. Further, Marine Corps officials told us that there are opportunities to use transit time during Navy community-relations events, such as port visits, to conduct amphibious training for home-station units, but these events are not always identified with enough lead time to take full advantage of the training opportunity. According to officials at II MEF, Marine Corps units typically need at least 6 months of advance notice to align their forces and equipment for the potential training opportunity. Further, Marine Corps officials told us that the Navy does not always have a fully trained staff with the amphibious ship during these events, which can limit the comprehensiveness of the training that Marine Corps units are able to accomplish. 
These officials also stated that the flight deck or well deck may not be certified for use at the time of these community-relations events, further limiting their utility for Marine Corps training. Despite these limitations, Marine Corps officials have told us that these events can still provide training benefits, such as ship familiarization for Marines, but that these opportunities still require advance notice. By improving coordination over their training resources, the services will be better positioned to take full advantage of these scarce training opportunities. Mechanisms to monitor results and reinforce accountability: The Navy and Marine Corps have processes to evaluate and report on the results of specific training exercises, but have not developed mechanisms to monitor, evaluate, and report on results or jointly reinforced accountability for their naval integration efforts through agency plans and reports. We have found that agencies need to monitor and evaluate their efforts to enable them to identify areas for improvement and help decision makers obtain feedback for improving operational effectiveness. Further, agency plans and reports can reinforce accountability by aligning goals and strategies with the collaborative effort. For large-scale exercises, such as Bold Alligator, the Marine Corps conducts reviews that identify actions that should be sustained moving forward, as well as areas that should be improved in future exercises, including issues related to naval integration. However, the services have not established other processes or mechanisms to monitor, evaluate, and report on results that are needed to measure progress in achieving service-level goals for naval integration and to align efforts to maximize training opportunities for amphibious operations. 
For example, the Marine Corps does not have a process to monitor and report on results for the critical tasks identified in its Marine Corps Operating Concept, including those tasks related to naval integration, such as integrating command structures, developing concepts for littoral operations in a contested environment, and conducting expeditionary advanced base operations. Monitoring progress against these tasks, as well as common outcomes, once defined, should help the Navy and Marine Corps track progress toward achieving improved naval integration. While the Navy and Marine Corps have taken some steps to improve naval integration in recent years, these efforts are still in the early stages. In particular, Navy and Marine Corps officials stated that the services have not yet defined or articulated common outcomes needed to achieve naval integration because they have not determined who would be responsible for this effort or when to begin its development. Defining and articulating common outcomes for naval integration would allow the services to more effectively incorporate other leading collaboration practices aimed at those common outcomes, to the extent deemed appropriate, such as developing a joint strategy, establishing compatible policies, leveraging resources, and monitoring results. The Marine Corps has taken some steps to better integrate virtual training devices into its operational training. However, the Marine Corps’ process to manage the development and use of its virtual training devices in operational training plans has gaps. The Marine Corps has taken some steps to integrate virtual training devices into operational training and has other efforts under way. 
In 2013, we reported that the Marine Corps did not have information on the performance and cost of virtual training that would assist the service in assessing and comparing the benefits of virtual training as it sought to optimize the mix of live and virtual training to meet requirements and prioritize training investments. We also found that the Marine Corps had not developed overall metrics or indicators to measure how the use of virtual training devices had contributed to improving the effectiveness of training, or identified a methodology to identify the costs associated with using virtual training. We recommended that the Marine Corps develop outcome-oriented performance metrics for assessing the effect of virtual training on improving performance or proficiency and develop a methodology to identify the costs of virtual training in order to compare the costs of using live and virtual training. Further, in 2015 the Commandant of the Marine Corps issued guidance that stated the service will focus on better leveraging virtual training technology and that all types of Marine Corps forces should make extensive use of virtual training where appropriate. In response to our recommendations and the Commandant’s guidance, in 2015 the Marine Corps Training and Education Command created a Simulation Assessment Working Group with stakeholders from across the Marine Corps to identify training events that could be supported by virtual training devices and incorporate those devices into Training and Readiness manuals. The working group found that over 7,000 of the 12,000 training events reviewed could use a virtual training device to either fully or partially meet the training standard of that event. The group also identified 135 events that may only be performed using the virtual training device or must be performed with the device as a prerequisite to live training. 
Based on the results of the working group, Training and Education Command updated the corresponding unit-specific Training and Readiness manuals to identify where a training event could be completed using a virtual training device. While this action represents some progress toward better incorporating virtual training devices into operational training, our recommendations remain open because the Marine Corps’ efforts to develop specific outcome-oriented performance metrics to assess virtual training or a methodology to make more-informed comparisons between the costs of live and virtual training are not yet complete. According to a senior Training and Education Command official, the Marine Corps is working to update its training information management system to better capture this information. In 2015, the Marine Corps also issued a Concept of Operations (CONOPS) for the United States Marine Corps Live, Virtual, and Constructive – Training Environment (LVC-TE) (hereafter referred to as Concept of Operations) that is intended to describe the live, virtual, and constructive training environment based on operational requirements in sufficient detail to continue the development of this training capability. According to the Concept of Operations, the goal in implementing the live, virtual, and constructive training environment is to expand training opportunities, reduce training costs, improve safety, and maintain high levels of proficiency and readiness. The Concept of Operations estimates that the live, virtual, and constructive training environment will be implemented in 2022. Lastly, the Marine Corps has an ongoing effort to better inform users of the availability of virtual training devices that support ground-based units. Specifically, the Marine Corps Training and Education Command is developing a Ground Training Simulations Implementation Plan that is intended to provide a framework for the use of current and future virtual training devices for ground units. 
The Ground Training Simulations Implementation Plan is modeled after the processes used by the Marine Corps’ aviation community to integrate simulators into aviation training. The Marine Corps estimates that the plan will be finalized in the summer of 2017. According to a Training and Education Command official involved in the plan’s development, the plan will help address a challenge the Marine Corps has faced in educating commanders on the availability and capabilities of virtual training devices. This challenge is consistent with information we gathered during our visits to selected Marine Corps installations. Officials at the two Battle Simulation Centers we visited, for example, told us that unit commanders do not always know what virtual training devices are available and how they can be used to meet training requirements. The Marine Corps’ process to manage the development and use of virtual training devices in operational training plans has gaps due to a lack of guidance. Specifically, the Marine Corps does not (1) include consideration of critical factors for integrating virtual training devices into operational training in its front-end planning to support the acquisition of its virtual training devices, (2) consistently consider expected and actual usage data for virtual training devices to support its investment decisions, or (3) consistently evaluate the effectiveness of its virtual training devices for operational training. The Marine Corps’ process for conducting front-end planning and analysis to support the acquisition of its virtual training devices does not include consideration of critical factors for integrating virtual training devices into operational training, such as the specific training tasks the device is intended to address, how the device would be used to meet proficiency goals, or available time for units to train with the device. 
DOD’s Strategic Plan for the Next Generation of Training for the Department of Defense states that the right mix of live, virtual, and constructive training capabilities will depend on training tasks and objectives, required proficiency, and available training time, among other factors. In addition, we have previously found that part of the front-end analysis process for training and development programs should include a determination of the skills and competencies in need of training and how training will build proficiency for those skills and competencies. Based on our analysis of the Marine Corps’ front-end planning documents (called system development documents) for the six virtual training devices included in our review, we found that documentation for five of the six devices did not include specific training tasks. In addition, the documentation for two devices specified that specific training tasks would be identified during the verification and validation phase, which is a type of analysis that typically takes place after the device has already been acquired, according to a senior Training and Education Command official. While the documentation for all of the devices included a high-level discussion of relevant mission areas, documentation for five out of six devices did not identify specific training tasks, such as specific training events in a unit’s Training and Readiness manual, that the device was intended to address. For example, documentation for the Combined Arms Command and Control Training Upgrade System includes a high-level discussion of mission areas that the device supports, such as force application, command and control, and battlespace awareness. It also states that the device is to support training events, but it does not specify what those events are. In addition, none of the system development documents we reviewed identified proficiency goals or considered available training time for the units to use the device. 
According to officials at Training and Education Command, many virtual training devices in the Marine Corps’ inventory were developed based on urgent needs to meet capability gaps identified by warfighters and were not based on training requirements. Of the six devices included in our review, three of the devices were acquired to meet urgent warfighter needs—the Family of Egress Trainers—Modular Amphibious Egress Trainer, the Operator Driver Simulator, and the Supporting Arms Virtual Trainer. However, the system development documents we reviewed for those three devices were completed after the devices had been fielded to meet the urgent needs, but still did not identify specific training tasks or proficiency goals, or consider available training time for the units to use the device. Moreover, the system development documents for two of the remaining three devices we reviewed did not contain this information. While the Marine Corps did not identify and assess these factors in the front-end planning process, the Marine Corps has begun taking steps to identify these factors through efforts such as the Simulation Assessment Working Group. However, these efforts are occurring after the devices have already been acquired and fielded, leading to decisions that have potential cost implications. For example, in its analysis, the Simulation Assessment Working Group did not fully consider alternative devices that could be used to achieve specific training tasks because its methodology was to identify the one virtual training device that was considered the “best in breed” simulator for conducting each training event rather than considering all devices that could be used for the event, including those that might be more cost-effective. Officials at II MEF told us that this methodology did not include an evaluation of the device’s cost compared to other devices that could achieve similar training outcomes. 
For example, these officials told us that the Supporting Arms Virtual Trainer was identified as a “best in breed” device for a number of training events, including calls for fire and close air support. However, these officials stated that the Deployable Virtual Training Environment device is a lower-cost alternative that could achieve similar outcomes for many of the training events that do not require the level of realism provided by the Supporting Arms Virtual Trainer. Based on information provided by Training and Education Command, the acquisition cost for the Supporting Arms Virtual Trainer is about $4.5 million per system while the acquisition cost for the Deployable Virtual Training Environment laptop is around $3,700 (see fig. 6). The Marine Corps’ front-end planning process to support the acquisition of virtual training devices has gaps because the service does not have specific policies to ensure the process considers key factors. Specifically, Navy and Marine Corps acquisition policies we reviewed do not require that front-end planning consider specific training tasks the device is intended to address, how the device would be used to meet proficiency goals, or available time for units to train with the device. Training and Education Command officials acknowledged the gaps in the Marine Corps’ process and stated that the front-end process for future device acquisitions would identify specific training tasks that a device will address. However, without guidance that specifically addresses these factors, the Marine Corps does not have a reasonable basis to ensure that it is acquiring the right number and type of virtual training devices to meet its operational training needs. The Marine Corps does not consistently consider expected and actual usage data for virtual training devices to support its investment decisions. 
Our prior work has found that agencies should establish measures that they can use in assessing training programs, such as expected training hours, which reflect the usage rates of the training program. However, the Marine Corps did not establish expected usage rates in its system development documents for five of the six virtual training devices included in our review, and a senior Training and Education Command official said it also has not established expected usage rates since acquiring the devices. For example, the system development document for the Supporting Arms Virtual Trainer stated that the usage of the device could replace up to 33 percent of the live-fire missions required to retain annual currency, but the document does not specify that units are expected to use the device to replace that high a percentage of the live-fire missions. As a result, the Marine Corps does not have a baseline against which to assess actual usage of the device. Only the system development document for the Marine Air-Ground Task Force Tactical Warfare Simulation included usage targets, stating that usage is expected to be extensive and estimating that the device will be used for 700 hours per system per year. However, the system development documents for the other four devices we reviewed did not include any information on expected usage rates. Additionally, the Marine Corps has not consistently collected actual usage data for its virtual training devices, which could be used to inform continued investments in existing virtual training devices. During our review, a senior Marine Corps Training and Education Command official told us that Training and Education Command collects data for about two-thirds of the Marine Corps’ total inventory of virtual training devices, but usage data are not available for certain devices. 
More specifically, the Marine Corps provided usage data for three of the six devices that were included in our review, but it was unable to provide usage data for certain systems, such as the Marine Air-Ground Task Force Tactical Warfare Simulation and the Combined Arms Command and Control Training Upgrade System. This official stated that contractors collect data on these devices, but there is no Marine Corps system to collect data on the number of Marines or hours trained. Specifically, contractors submit spreadsheets on a monthly basis showing the number of Marines who have used the device, but these data are not included in any formal reports and there is no standard database for collecting or evaluating them. The Marine Corps has not considered actual usage data in its decision making for additional investments in certain virtual training devices, despite low usage rates for a number of those devices. For example, according to available contractor data, actual usage for the Operator Driver Simulator was significantly lower than the current available hours. Based on data provided by Training and Education Command, the Operator Driver Simulator was used for approximately 7,600 hours in fiscal year 2015 and 5,600 hours in fiscal year 2016, but was available for use for approximately 192,000 hours. However, based on the results of the Simulation Assessment Working Group, Training and Education Command estimated that to accomplish all training events linked to the Operator Driver Simulator would require about 570,000 available training hours. As a result, the Simulation Assessment Working Group recommended various investment options for the Operator Driver Simulator that ranged from $56 million to $121 million, despite the current low utilization and excess capacity. 
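The utilization gap implied by these figures can be sketched with a short calculation. All numbers are the approximate figures reported above; treating the roughly 192,000 available hours as a per-fiscal-year capacity is an assumption made here for illustration, since the report does not specify the period that figure covers.

```python
# Illustrative utilization calculation for the Operator Driver Simulator,
# using approximate figures from the report. Assumption: the ~192,000
# available hours are treated as capacity for each fiscal year shown.
hours_used = {"FY2015": 7_600, "FY2016": 5_600}  # approximate hours actually used
hours_available = 192_000                         # approximate hours available for use
hours_required = 570_000                          # estimated hours to cover all linked events

for year, used in hours_used.items():
    print(f"{year}: {used / hours_available:.1%} of available hours used")

# Even at full use, the current capacity covers only part of the estimate.
print(f"Capacity vs. estimated requirement: {hours_available / hours_required:.0%}")
```

Under that assumption, the simulator was used for roughly 3 to 4 percent of its available hours, while available capacity itself would cover only about a third of the working group's estimated requirement, which underscores the tension between the low observed utilization and the recommended investment options.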
Officials from Training and Education Command told us that they anticipate an increase in user demand for the Operator Driver Simulator based on guidance from the Commandant of the Marine Corps to make driver certification more rigorous. However, officials from Marine Corps Systems Command stated that current Operator Driver Simulators have deficiencies in supporting driver training and, therefore, Marines choose to drive live vehicles instead. The Marine Corps has not considered expected and actual usage of its virtual training devices to support investment decisions due to a lack of guidance on establishing and collecting usage data. Marine Corps training guidance for ground units states that virtual training devices shall be used, as applicable, when constraints limit the use of realistic training conditions, but it does not identify the extent to which virtual training devices are expected to be used. Without guidance on setting usage-rate expectations and assessing actual usage, the Marine Corps risks sustained investment in virtual training devices that do not meet operational training needs. We also found that the Marine Corps was not consistently evaluating the effectiveness of its virtual training devices to accomplish operational training. Our prior work has shown that agencies need to develop processes that systematically plan for and evaluate the effectiveness of their training and development efforts. These evaluations should include data measures, both quantitative and qualitative, to assess training results in areas such as increased user proficiency. Further, evaluations of training effectiveness should be used to make decisions on whether resources should be reallocated or redirected. The Marine Corps uses the verification and validation report process as its primary assessment of a virtual training device after it has been fielded, according to the senior Training and Education Command official with whom we spoke. 
However, based on our review of postfielding analyses for the virtual training devices included in our review, we found that the Marine Corps does not have a consistent process for selecting the devices for which to complete these analyses or for determining how the analyses should be conducted. More specifically, we were provided with verification and validation reports for only three of the six devices in our review—the Supporting Arms Virtual Trainer, the Family of Egress Trainers—Modular Amphibious Egress Trainer, and the Operator Driver Simulator—as well as plans to complete these reports for two other devices. According to a senior Training and Education Command official, Training and Education Command considers certain factors to prioritize the completion of verification and validation reports, such as planned investments for major upgrades on a device. The official also stated that Training and Education Command prioritized completing reports for these virtual training devices to specifically align with recommendations made by the Simulation Assessment Working Group. However, the Simulation Assessment Working Group does not take place on a recurring basis, and therefore the recommendations from the group do not establish a process for prioritizing future verification and validation reports. Officials from Marine Corps Systems Command told us that program managers are now trying to perform verification and validation reports for future acquisitions prior to full acceptance of the training systems, but that this step is not mandatory. Additionally, there is not a consistent process to include training effectiveness evaluations within the verification and validation report itself.
The verification and validation process is not required to include an evaluation of effectiveness based on current guidance, but as noted in the verification and validation report for the Family of Egress Trainers—Modular Amphibious Egress Trainer, such an evaluation is essential to determine whether the capabilities of a virtual training device satisfy requirements to improve training performance and combat readiness. In two instances, the verification and validation reports for the Operator Driver Simulator and Family of Egress Trainers—Modular Amphibious Egress Trainer both included evaluations of the effectiveness of the devices in improving user proficiency, which concluded that the devices enabled Marines to successfully pass related training courses. In another instance, the Marine Corps did not conduct a training effectiveness analysis as part of the verification and validation process. Specifically, for the Supporting Arms Virtual Trainer, Marine Corps Systems Command attempted to conduct a training effectiveness evaluation, but training activity data for a statistically significant sampling of the target training audience were unavailable, which suggests the need for improved data on device usage. We further found that the training effectiveness evaluations that the Marine Corps did complete differed in how they were conducted, which can affect the quality of the information the evaluations provide. For example, the training effectiveness evaluation for the Operator Driver Simulator was conducted to determine whether the device effectively trained Marines to perform tasks required for one specific training and readiness event. The methodology included collecting training activity data from 1 fiscal year in one location and for one of the Operator Driver Simulator vehicle variants. The report noted that conducting a more-complete evaluation, along with additional data collection, would better identify opportunities to improve and enhance training.
In contrast, the training effectiveness evaluation for the Family of Egress Trainers—Modular Amphibious Egress Trainer also collected training activity data, but collected data from multiple training sites and for all training courses conducted during the 1-year period used for the evaluation. According to officials from Marine Corps Systems Command, the effectiveness evaluation methods may vary based on the type of training being executed and how well the training requirements are defined. These officials stated that when the device’s training requirements have been more thoroughly defined, the effectiveness evaluation can be more targeted. The Navy and Marine Corps acquisition policy and guidance documents we reviewed do not establish a process to consistently evaluate the training effectiveness of virtual training devices, including identifying the devices to be evaluated and determining what data should be collected and assessed. According to a senior Training and Education Command official, evaluating effectiveness is not a required part of the verification and validation process and is an area that needs to be addressed. The Marine Corps’ Concept of Operations also identified a lack of guidance for conducting effectiveness analyses. Specifically, the Concept of Operations identifies a lack of policy guiding live, virtual, and constructive training capabilities and benefits. It also identifies a training gap on the linkages between live, virtual, and constructive training, as well as a policy gap around the lack of guidance on analysis of virtual training devices after they have been fielded. Without guidance establishing a well-defined process to consistently evaluate the effectiveness of virtual training devices for training—including the selection of devices, guidelines on conducting the analysis, and the data that should be collected and assessed—the Marine Corps risks investing in devices whose value to operational training is undetermined.
The Navy and Marine Corps have identified the need to rebuild the capability to conduct amphibious operations and to reinvigorate naval integration between the services toward that end. However, the Navy and Marine Corps have not completed efforts needed to mitigate their training shortfalls for amphibious operations. Specifically, the services have not developed an approach to prioritize available training resources, systematically evaluate training resource alternatives to achieve amphibious operations priorities, and monitor progress toward achieving them. Without such an approach, the services are not well positioned to mitigate existing amphibious operations training shortfalls and begin to rebuild their amphibious capability as the services await the arrival of additional amphibious ships into the fleet. In addition, while the Navy and Marine Corps have taken a number of positive steps to improve coordination between the two services, they need to define and articulate common outcomes for naval integration. This first critical step will enable them to fully incorporate other leading collaboration practices aimed at a common purpose, such as developing a joint strategy; more fully establishing compatible policies, procedures, and systems; better leveraging resources; and establishing mechanisms to monitor results that are needed to achieve service-level goals for naval integration and to align efforts to maximize training opportunities for amphibious operations. Further, the Marine Corps’ process to integrate virtual training devices into operational training has gaps. Developing guidance for the development and use of virtual training devices would help close these gaps, which is critical as virtual training will become increasingly important to the development of the capability of Marines, including the capability for conducting amphibious operations, among other mission areas.
To better mitigate amphibious operations training shortfalls, we recommend the Secretary of Defense direct the Secretary of the Navy, in coordination with the Chief of Naval Operations and Commandant of the Marine Corps, to develop an approach, such as building upon the Amphibious Operations Training Requirements review, to prioritize available training resources, systematically evaluate training resource alternatives to achieve amphibious operations priorities, and monitor progress toward achieving them. To achieve desired goals and align efforts to maximize training opportunities for amphibious operations, we recommend the Secretary of Defense direct the Secretary of the Navy, in coordination with the Chief of Naval Operations and Commandant of the Marine Corps, to clarify the organizations responsible and time frames to define and articulate common outcomes for naval integration, and use those outcomes to develop a joint strategy; more fully establish compatible policies, procedures, and systems; better leverage training resources; and establish mechanisms to monitor results. To more effectively and efficiently integrate virtual training devices into operational training, we recommend that the Secretary of Defense direct the Commandant of the Marine Corps to develop guidance for the development and use of virtual training devices that includes developing requirements for virtual training devices that consider and document training tasks and objectives, required proficiency, and available training time; setting target usage rates and collecting usage data; and conducting effectiveness analysis of virtual training devices that defines a consistent process for performing the analysis, including the selection of the devices to be evaluated, guidelines on conducting the analysis, and the data that should be collected and assessed. We provided a draft of the classified report to DOD for review and comment.
The department’s comments on the classified report are reprinted in appendix II. In its comments, DOD concurred with all three recommendations. DOD stated that it will review the status of actions the Navy and Marine Corps plan to take in response to all three recommendations within the next 12 months. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Office of the Under Secretary of Defense for Personnel and Readiness, the Secretary of the Navy, and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The objectives of this report are to determine the extent to which (1) the Navy and Marine Corps have completed training for amphibious operations priorities and taken steps to mitigate any training shortfalls, (2) the Navy’s and Marine Corps’ efforts to improve naval integration for amphibious operations incorporate leading collaborative practices, and (3) the Marine Corps has integrated selected virtual training devices into its operational training. This report is a public version of a classified report that we issued in August 2017. DOD deemed some of the information in our August report to be classified; such information must be protected from loss, compromise, or inadvertent disclosure. Therefore, this report omits classified information on select Marine Corps units’ ability to complete training for amphibious operations. Although the information provided in this report is more limited, the report addresses the same objectives as the classified report and uses the same methodology.
We focused our review on Navy and Marine Corps organizations and units that have a role in the development and execution of training requirements for amphibious operations. For the Navy, we focused on the training requirements and accomplished training for amphibious ships. For the Marine Corps, we focused on selected active-component units that have identified training requirements for amphibious operations, including Marine Expeditionary Units (MEU) and other units with a mission-essential task for amphibious operations. We selected a nongeneralizable sample of 23 Marine Corps units in order to interview geographically dispersed units under each Marine Expeditionary Force, as well as units across all elements of the Marine Air-Ground Task Force (i.e., command, ground combat, aviation combat, and logistics combat forces). See below for the list of 23 Marine Corps units. We focused on the Marine Corps’ integration of virtual training devices into operational training because the Navy does not have virtual training devices that simulate amphibious operations, including ship-to-shore movement, according to Navy officials. In addition, we focused on Marine Corps virtual training devices that are used to support the command and ground elements of the Marine Air-Ground Task Force. We selected a nongeneralizable sample of six virtual training devices based on the target training audience, applicability to amphibious operations training, location, and type of training events (individual or collective training) for which the devices are used. The devices included in our review are the Combined Arms Command and Control Training Upgrade System, Marine Air-Ground Task Force Tactical Warfare Simulation, Supporting Arms Virtual Trainer, Amphibious Assault Vehicle Turret Trainer, Family of Egress Trainers—Modular Amphibious Egress Trainer, and Operator Driver Simulator.
To determine the extent to which the Navy and Marine Corps have completed training for amphibious operations priorities and taken steps to mitigate any training shortfalls, we analyzed deployment certification reports for all Amphibious Ready Group (ARG)—Marine Expeditionary Unit (MEU) deployments over the most recent 3-year period. We also analyzed unit-level readiness data for all Marine Corps’ infantry battalions, assault amphibian vehicle battalions, Osprey tilt-rotor aircraft squadrons, and Marine Expeditionary Brigades over the most recent 3-year period—from fiscal years 2014 through 2016—and compared those data against unit-level training requirements for amphibious operations. We analyzed 3 years of training data because training requirements for Marine Corps units are reviewed and updated on a 3-year cycle. We performed data-reliability procedures on the unit-level readiness data by comparing the data against related documentation and surveying knowledgeable officials on controls over reporting systems and determined that the data presented in our findings were sufficiently reliable for the purposes of this report. We interviewed Navy and Marine Corps officials to discuss any factors that limited their ability to conduct training for amphibious operations. We assessed the reliability of data on amphibious ship requests by speaking with knowledgeable officials and determined the data were sufficiently reliable for the purposes of presenting the number of actual requests submitted and fulfilled. In addition, we reviewed processes and initiatives established by the Navy and Marine Corps to identify and assess training shortfalls for amphibious operations, including the Marine Corps’ Amphibious Operations Training Requirements review, and evaluated these processes and initiatives against our prior work on strategic training and risk management.
To determine the extent to which the Navy’s and Marine Corps’ efforts to improve naval integration for amphibious operations incorporate leading collaboration practices, we reviewed Navy and Marine Corps documents, including A Cooperative Strategy for 21st Century Seapower and the Marine Corps Operating Concept, that discuss the goal of improving naval integration. We also reviewed mechanisms that have been established to coordinate training, including campaign plans for amphibious operations; observed a working group focused on amphibious operations; and interviewed officials with both services to discuss efforts to improve naval integration. We assessed the extent to which the Navy’s and Marine Corps’ efforts toward improving naval integration have followed leading practices for collaboration that we have identified in our prior work. Specifically, we have identified eight practices described in our prior work that can help enhance and sustain collaboration. We selected the seven of those eight practices most relevant to issues we identified in our prior work on collaboration to assess the status of Navy and Marine Corps collaborative efforts to improve naval integration. Based on our analysis, we selected the following seven practices: define and articulate a common outcome; establish mutually reinforcing or joint strategies; identify and address needs by leveraging resources; agree on roles and responsibilities; establish compatible policies, procedures, and other means to operate across agency boundaries; develop mechanisms to monitor, evaluate, and report on results; and reinforce agency accountability for collaborative efforts through agency plans and reports. To determine the extent to which the Marine Corps has integrated selected virtual training devices into its operational training, we collected information on the development, usage, and evaluation of virtual training devices, and their integration into operational training plans.
We reviewed documentation on actions the Marine Corps has taken to integrate its virtual training devices into operational training, including documentation on the Simulation Assessment Working Groups and the Ground Training Systems Plan. We reviewed DOD and Marine Corps acquisition policies and interviewed Marine Corps officials responsible for the acquisition and oversight of virtual training devices at Training and Education Command and Marine Corps Systems Command and officials responsible for management of the virtual training devices at the Battle Simulation Centers at Camp Lejeune, North Carolina, and Camp Pendleton, California. We reviewed acquisition documents for each of the selected devices, including Capability Production Documents and Capability Development Documents, and assessed the extent to which these documents included key information as identified in leading practices for managing strategic training and DOD’s Strategic Plan for the Next Generation of Training for the Department of Defense. We also reviewed documentation on the Marine Corps process to include expected and actual usage data for virtual training devices to support investment decisions. Further, we reviewed analyses conducted after the selected devices had been fielded through Verification and Validation Reports and evaluated the extent to which these documents assessed the effectiveness of the virtual training devices for improving user proficiency. The performance audit upon which this report is based was conducted from May 2016 to August 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
We subsequently worked with DOD from August 2017 to September 2017 to prepare this unclassified version of the original classified report for public release. This public version was also prepared in accordance with these standards. In addition to the contact name above, Matthew Ullengren, Assistant Director; Russell Bryan; William Carpluk; Ron La Due Lake; Joanne Landesman; Kelly Liptan; Shahrzad Nikoo; and Roxanna Sun made key contributions to this report.
|
The Navy and Marine Corps have identified a need to improve their ability to conduct amphibious operations—an operation launched from the sea by an amphibious force. Senate and House reports accompanying bills for the National Defense Authorization Act for Fiscal Year 2017 included provisions for GAO to review Navy and Marine Corps training. This report examines the extent to which (1) the Navy and Marine Corps have completed training for amphibious operations priorities and taken steps to mitigate any training shortfalls, (2) these services' efforts to improve naval integration for amphibious operations incorporate leading collaboration practices, and (3) the Marine Corps has integrated selected virtual training devices into operational training. GAO analyzed training initiatives; interviewed a nongeneralizable sample of officials from 23 units that were selected based on their training plans; analyzed training completion data; and selected a nongeneralizable sample of six virtual training devices to review based on factors such as target audience. This is a public version of a classified report GAO issued in August 2017. Information that DOD deemed classified has been omitted. Navy and Marine Corps units that are deploying as part of an Amphibious Ready Group and Marine Expeditionary Unit (ARG-MEU) completed their required training for amphibious operations, but other Marine Corps units have been limited in their ability to conduct training for other amphibious operations–related priorities. GAO found that several factors, including the decline in the Navy's fleet of amphibious ships from 62 in 1990 to 31 today, limited the ability of Marine Corps units to conduct training for other priorities, such as recurring training for home-station units (see figure). As a result, training completion for amphibious operations was low for some but not all Marine Corps units from fiscal years 2014 through 2016.
The services have taken steps to address amphibious training shortfalls, such as more comprehensively determining units that require training. However, these efforts are incomplete because the services do not have an approach to prioritize available training resources, evaluate training resource alternatives, and monitor progress towards achieving priorities. Thus, the services are not well positioned to mitigate any training shortfalls. The Navy and Marine Corps have taken some steps to improve coordination between the two services, but have not fully incorporated leading collaboration practices to improve integration of the two services—naval integration—for amphibious operations. For example, the Navy and Marine Corps have not defined and articulated common outcomes for naval integration that would help them align efforts to maximize training opportunities for amphibious operations. The Marine Corps has taken steps to better integrate virtual training devices into operational training, but gaps remain in its process to develop and use them. GAO found that for selected virtual training devices, the Marine Corps did not conduct front-end analysis that considered key factors, such as the specific training tasks that a device would accomplish; consider device usage data to support its investment decisions; or evaluate the effectiveness of existing virtual training devices because of weaknesses in the service's guidance. As a result, the Marine Corps risks investing in devices that are not cost-effective and whose value to operational training is undetermined. GAO recommends that the Navy and Marine Corps develop an approach for amphibious operations training and define and articulate common outcomes for naval integration; and that the Marine Corps develop guidance for the development and use of its virtual training devices. The Department of Defense concurred.
|
The mission of IRS’s HCO includes providing “human capital strategies and tools for recruiting, hiring, developing, retaining, and transitioning a highly-skilled and high-performing workforce to support IRS mission accomplishments,” and developing and implementing “technology-enabled systems and processes to improve human capital planning and management and empower employees to achieve their potential.” HCO is headed by the Human Capital Officer who reports to the Deputy Commissioner for Operations Support and is to “provide executive leadership and direction in all matters relating to the Service’s employees, overseeing the design, development, and delivery of comprehensive, agency-wide human capital management and development programs that contribute to the Service’s vision and mission.” Worklife Benefits and Performance (WBP) and Employment, Talent and Security (ETS) are two subdivisions within HCO responsible for supporting many of IRS’s strategic human capital management activities. Among WBP’s responsibilities are: agency-wide strategic workforce planning; workforce planning consultation and support; OPM/Treasury/IRS workforce planning pilots, projects, and initiatives; IRS workforce data reporting; analyzing workforce projections; and attrition analysis. ETS is responsible for providing policies, products, and services that support business efforts to identify, recruit, hire, and advance a workforce with the competencies necessary to achieve current and future organizational performance goals. In particular, ETS “partners with business units to develop strategic hiring plans that drive the hiring decision by planning, executing and evaluating the type of position to be filled based on agency-wide workforce, attrition and workload needs.” Strategic human capital management, which includes workforce planning activities, is a persistent challenge across the federal government.
We designated strategic human capital management across the government as a high-risk issue in 2001 because of the federal government’s long-standing lack of a consistent strategic approach to human capital management. In February 2011, we narrowed the focus of this high-risk issue to the need for agencies to close skills gaps in mission-critical occupations. Agencies can have skills gaps for different reasons: they may have an insufficient number of employees or their employees may not have the appropriate skills or abilities to accomplish mission-critical work. Moreover, current budget and long-term fiscal pressures, the changing nature of federal work, and a potential wave of employee retirements that could produce gaps in leadership and institutional knowledge threaten to aggravate the problems created by existing skills gaps. Mission-critical skills gaps both within federal agencies and across the federal workforce continue to pose a high risk to the nation because they can impede the government from cost-effectively serving the public and achieving results. IRS’s budget declined by about $2.1 billion (15.7 percent) from fiscal years 2011 through 2018 (see figure 1). The President’s fiscal year 2019 budget request was $11.135 billion. This amount is less than the fiscal year 2000 level for IRS, after adjusting for inflation. IRS requested an additional $397 million to cover implementation expenses for the Tax Cuts and Jobs Act over the next 2 years and received $320 million for implementation pending submission of a spending plan, which IRS provided in June 2018. We previously reported IRS would direct the majority of the money toward technological updates. The Tax Cuts and Jobs Act made a number of significant changes to the tax law affecting both individuals and corporations.
For example, for individual taxpayers, for tax years 2018 through 2025, tax rates were lowered for nearly all income levels, personal exemptions were eliminated while the standard deduction was increased, and certain credits, such as the child tax credit, were expanded. To implement the changes, IRS must (1) interpret the law; (2) create or revise hundreds of tax forms, publications, and instructions; (3) publish guidance and additional materials; (4) reprogram return processing systems; and (5) hire additional staff and train its workforce to help taxpayers understand the law. IRS’s HCO estimated that the agency would need to hire and train new staff to fill approximately 1,100 positions requiring a variety of competencies, and provide additional training on tax law changes for current employees. HCO will be responsible for recruiting and hiring new employees with the needed skills. IRS has scaled back strategic workforce planning activities in recent years. Prior to 2011, IRS staff within its HCO or other dedicated program office conducted and coordinated agency-wide strategic workforce planning efforts. IRS officials told us that resource constraints and fewer staff with strategic workforce planning skills due to attrition since 2011 required HCO to largely abandon strategic workforce planning activities. Instead, HCO generally focused its efforts on completing HR transactions, such as retirements and benefits processing, meeting legal compliance activities, and facilitating hiring of seasonal employees. Since 2011, key human capital activities—such as developing an inventory of skills, identifying skills gaps, and attrition forecasting—became increasingly fragmented and shifted to the individual business divisions and program offices. IRS officials cited management familiarity with programmatic needs, challenges, processes, and culture as a benefit of workforce planning autonomy at business divisions and program offices.
However, the officials told us these activities were often performed only to the extent those divisions had the time, resources, and top management interest. As a result, the quality of key human capital activities was uneven across the agency, if performed at all. In addition, HCO officials told us the lack of an agency-wide strategy and HCO authority to manage and coordinate strategic workforce planning efforts put the agency at greater risk for unnecessary duplication of effort in HR activities; development of redundant and generally noninteroperable systems used to maintain human capital information; and failure to effectively identify and retain personnel with critical skills and experience across the agency. IRS’s Information Technology (IT) is an example of an individual program office that has taken steps to address skills needs. IT developed a skills and competency inventory of its workforce. IRS officials told us maintaining and updating the inventory has been particularly helpful to informing IT hiring and training decisions, given the rapid nature of change in the technology industry and competition for top talent from the private sector. In June 2018, we found IRS had not fully implemented any of the key IT workforce planning practices we have previously identified. We recommended that IRS fully implement IT workforce planning practices, including (1) setting the strategic direction for workforce planning; (2) analyzing the workforce to identify skills gaps; (3) developing strategies and implementing activities to address skills gaps; and (4) monitoring and reporting on progress in addressing skills gaps. IRS agreed with our recommendation, but stated that its efforts to address these issues had been limited solely because of the diversion of IT resources to implementation of the Tax Cuts and Jobs Act.
We concluded that until the agency fully implemented these practices, it would continue to face challenges in assessing and addressing the gaps in knowledge and skills that are critical to the success of its key IT investments. A number of indicators led IRS to determine that continuing to make short-term, largely nonstrategic human capital decisions was unsustainable, according to IRS officials. For example, IRS has relatively high rates of employees eligible to retire. Nearly half of IRS’s Senior Executive Service (SES) is eligible to retire (see figure 2). Retirement eligibility rates among both SES and non-SES employees are not only greater than the rates at other federal agencies, but are also trending higher according to our analysis of OPM data. We have previously reported that the high rate of federal employees eligible for retirement creates both an opportunity and a challenge for agencies. If accompanied by appropriate strategic and workforce planning, it may create an opportunity for agencies to align their workforce with needed skills and leadership levels to meet their existing and evolving mission requirements. However, it means agencies will need succession planning efforts as well as effective sources and methods for recruiting and retaining candidates to avoid the loss of technical expertise in mission-critical skills. IRS is trying to mitigate the loss of institutional memory and meet its current obligations by re-employing recently retired employees (also known as re-employed annuitants). However, according to HCO officials, as of October 2018, the agency is struggling to bring recently retired employees back in part because many had taken other employment. HCO is focusing on other activities, such as contract staffing services, to meet workload demands. As we discuss later in this report, IRS is taking a number of actions to address staffing shortages, but the effectiveness of those efforts is not yet known.
IRS’s FEVS results also indicate the agency is at risk of losing employees with critical skills. For example, IRS’s results for the Global Satisfaction Index—a measure generated by OPM that combines employees’ responses about satisfaction with their job, pay, the organization, and their willingness to recommend their organization as a good place to work—fell below the government-wide average in 2013. Though its scores have improved since 2015, IRS continued to lag behind the government-wide average as of 2017, the most recent year of data available at the time of this study (see figure 3). Relatedly, our analysis of fiscal year 2016 IRS exit survey results found 32 percent of separating employees indicated poor office morale strongly influenced their decision to leave. In 2016, IRS determined the agency needed to develop a strategic workforce plan and conduct related workforce planning activities to help mitigate the risks associated with fragmented human capital activities discussed above, according to HCO officials. IRS provided HCO authority to be the central coordinating body to lead that effort, hereafter referred to as the workforce planning initiative. In March 2018, IRS issued an update to its Internal Revenue Manual describing HCO’s responsibilities.
For example, IRS provided HCO authority to: conduct strategic workforce planning annually that is aligned with Treasury’s mission, goals, and objectives; perform data analysis of the current and future workforce, identify gaps, and submit solutions that will enable the organization to meet its mission, goals, and objectives; ensure the existence and integration of human capital planning functions into the workforce planning process, including skills assessments, competency models, recruitment planning, training and development, and retention and succession planning; provide guidance and direction for IRS-wide workforce planning; ensure the implementation of an agency-wide skills assessment and competency model framework; and communicate commitment for a consistent, repeatable, and systematic workforce planning process to enable improved and integrated management of human capital initiatives. The Internal Revenue Manual also describes IRS’s workforce planning process, which includes a five-phase strategic workforce planning model that is intended to align with OPM’s workforce planning model (see figure 4). Implementing the strategic workforce planning model and conducting related initiative activities could help the agency ensure its human capital programs align with its mission, goals, and objectives through analysis, planning, investment, and measurement, as required by federal regulation. Furthermore, we determined elements of the initiative addressed key principles we have previously identified for effective workforce planning. For example, the model includes steps to analyze the workforce to determine the critical skills and competencies the agency needs to achieve current and future programmatic results, and to monitor and evaluate the agency’s progress toward its human capital goals. As a result, the initiative could position IRS to systematically identify the workforce needed for the future, develop strategies for identifying and closing skills gaps, and shape its workforce.
However, IRS’s implementation of its workforce planning initiative has been delayed. Phase 1 (Enterprise Strategy and Planning) of the workforce planning initiative was underway as of the first quarter of fiscal year 2018, and IRS was scheduled to complete this phase by the second quarter of fiscal year 2018. IRS reports show the agency originally anticipated completing all five phases by June 2018. According to IRS officials, however, IRS now anticipates that Phase 1 activities will resume after the opening of the 2020 tax filing season and, as of November 2018, could not estimate a completion date for any of the five phases. The workforce planning initiative has been delayed for three primary reasons, according to IRS documents and officials:

1. Redirection of resources to Tax Cuts and Jobs Act implementation. IRS granted extensions at the request of business divisions and commissioner-level organizations that needed to redirect resources to support the implementation of the Tax Cuts and Jobs Act. We reported that, to implement the 119 provisions of the Tax Cuts and Jobs Act, IRS would need to (1) interpret the law; (2) create or revise nearly 500 tax forms, publications, and instructions; (3) publish guidance and additional materials; (4) reprogram 140 interrelated return processing systems; (5) hire additional staff and train its workforce to help taxpayers understand the law and how it applies to them; and (6) conduct extensive taxpayer outreach. In addition to redirecting staff, IRS has used overtime and compensatory hours to complete necessary activities in time for the 2019 filing season.

2. Lack of workforce planning skills. As part of a Treasury pilot, IRS conducted a self-assessment of key competencies within HCO as well as within business division-based HR offices. The assessment found competency around workforce planning was among the lowest ranked skills within HCO.
According to HCO officials, IRS lacks the training and resources needed to help its human capital staff develop competency in workforce planning. HCO officials told us they plan to leverage IRS’s Workforce Planning Council to develop strategic workforce planning skills. HCO officials told us the council has training designed to help HR staff understand how to gather data, use technology, and perform other activities that contribute to IRS’s strategic workforce planning efforts. In addition to a lack of strategic workforce planning skills, a number of key HCO personnel with strategic workforce planning expertise have recently separated from IRS, according to HCO officials.

3. Information system deployment delay. Treasury is developing the Integrated Talent Management system (ITM). Treasury intends ITM to provide the agency with greater visibility of its total workforce, and to help its bureaus, including IRS, with workforce planning activities such as succession planning and competency management. Treasury officials told us that, as of November 2018, ITM is still in development and its deployment has been delayed for a number of reasons, including the need for Treasury to complete system implementation plans and user guides, and to address system administration issues at the bureaus. IRS HCO officials told us they opted to wait on ITM rather than move forward with a number of Phase 2 (Workforce Analysis) activities. IRS HCO officials said they needed this, or a similar software tool, to ensure reliable data capture, make analysis more efficient, and help managers conduct routine updates of workforce planning efforts rather than static, one-time data calls. HCO also opted to wait for ITM to avoid potentially redundant reprogramming of existing systems. However, HCO officials noted that even when ITM is eventually deployed, IRS will need to train business divisions on its use, further lengthening the time needed before conducting Phase 2 activities.
Treasury officials told us that ITM would complement rather than replace existing systems and processes. Our analysis of Treasury documents and interviews with Treasury and IRS HCO officials found it was unclear when an ITM module related to talent management and strategic workforce planning will be deployed and available for IRS’s use, what functions it will include, and how IRS’s existing systems and processes would be affected. As a result, IRS lacks the information needed to make staffing and technology decisions related to the workforce planning initiative, putting the initiative at risk of further delay. Treasury is required to conduct data-driven reviews via HRStat. HRStat is a strategic human capital performance evaluation process that identifies, measures, and analyzes human capital data to assess the impact of an agency’s human capital management on organizational results, with the intent to improve human capital outcomes. HRStat is also a proven leadership strategy that can help agency officials monitor their progress toward addressing important human capital efforts, such as closing skills gaps. Treasury uses HRStat to monitor the progress of its bureaus in meeting their human capital goals, including IRS’s implementation of the workforce planning initiative. In preparation for the data-driven reviews, each bureau, including IRS, submits HRStat information to Treasury. Treasury and bureau officials discuss the results and make related strategic decisions during bi-monthly Human Capital Advisory Council meetings. Our review of IRS HRStat reports, however, found additional information is needed to more fully reflect the status of the workforce planning initiative and related challenges.
For example:

In the January, March, May, and July 2018 HRStat submissions, IRS 1) reported a status of green (on schedule) for “Increased efforts for development of long-term IRS workforce staffing plan,” and 2) indicated under Key Issues/Challenges that completing the initiative was dependent on ITM deployment.

In the July 2018 HRStat submission, IRS moved several milestones to future fiscal years and identified ITM delays as a significant risk to the workforce planning initiative schedule.

In the September 2018 HRStat submission, IRS reported the status of the workforce planning initiative was no longer on schedule. The report identified ITM delays as the cause, but did not include other reasons for the delay, specifically the redirection of resources to Tax Cuts and Jobs Act implementation and a lack of strategic workforce planning skills within HCO.

Federal strategic human capital standards state agencies are to communicate in an open and transparent manner to facilitate cross-agency collaboration to achieve mission objectives. In addition, agency leaders should hold managers accountable for knowing the progress being made in achieving goals and, if progress is insufficient, for understanding why and having a plan for improvement. More complete HRStat information could help IRS and Treasury take fuller advantage of a key opportunity to discuss and address workforce planning initiative delays at Human Capital Advisory Council meetings. IRS full-time equivalents (FTE) have declined each year since 2011, and the declines have been uneven across different mission areas (see figure 5). From fiscal years 2011 through 2017, IRS FTEs declined from 95,501 to an estimated 77,685, an 18.7 percent reduction. Our analysis of the President’s Budget data produced by OMB found the reductions have been most significant within IRS Enforcement, where staffing declined by 27 percent (fiscal years 2011 through 2017).
In comparison, staff supporting Taxpayer Service activities declined by 8.2 percent, while staff within Operations Support declined by 12.7 percent (fiscal years 2011 through 2017). IRS estimated FTEs would continue to decline across the three areas in fiscal year 2018. IRS attributed staffing declines primarily to a policy decision to strictly limit hiring. According to IRS, declining budgets over multiple years necessitated decisions about how to reduce and control labor and labor-related costs, which accounted for around 74 percent of its budget allocations in fiscal year 2017. One way IRS sought to control costs was its decision to implement the Exception Hiring Process beginning in fiscal year 2011. The process effectively froze replacement of employees lost to attrition in most program areas, placed limits on external (nonseasonal) hiring, added approval steps for new hires, and placed priority on acquiring information technology and cybersecurity staff, according to IRS officials. The Exception Hiring Process remains in place but, as we discuss later, has evolved over time because IRS has received supplemental funding and other priority areas have emerged. IRS also limited overtime and training as a means of controlling costs. The availability of staff was a key factor in decisions to scale back a number of program activities, most notably in enforcement, according to IRS officials. IRS officials told us that, unlike other areas where the agency is legally required to perform certain functions, the agency has flexibility to curtail many enforcement activities when attrition rates increase. Auditing tax returns, for example, is a critical part of IRS’s strategy to ensure tax compliance and address the tax gap, or the difference between taxes owed and those paid on time. Our analysis of IRS data shows the number of individual returns audited declined each year from fiscal years 2011 through 2017, a 40 percent decline overall (see figure 6).
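The staffing reductions above are reported as percentage declines from fiscal year 2011 levels. As a minimal illustration (not part of the original report; the `pct_decline` helper is ours, and the FTE counts are the ones cited in this section), the 18.7 percent figure follows directly from the start and end counts:

```python
def pct_decline(start: float, end: float) -> float:
    """Percentage decline from a starting count to an ending count."""
    return (start - end) / start * 100

# Total IRS FTE counts cited in this report: 95,501 (FY2011) and 77,685 (FY2017)
print(round(pct_decline(95_501, 77_685), 1))  # 18.7 percent reduction
```

The same calculation applied to the other figures in this section reproduces the cited 8.2, 12.7, 27, and 40 percent declines.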
Reduced audit rates were not limited to individual returns. IRS data show that audit rates of large corporations with assets of $10 million or greater declined from 17.7 percent in fiscal year 2011 to 7.9 percent in fiscal year 2017. We have previously reported on other areas in which staffing declines affected IRS operations, including fewer nonfiler investigations and private letter rulings, the elimination of a bankruptcy program, and increases in the time needed to close innocent spouse appeals. In addition, we have made recommendations to IRS to better target its limited enforcement resources so it can, for example, 1) maximize revenue yield of the income tax, and 2) more effectively audit large partnerships. IRS agreed with the recommendations and took some action to close them. As of October and July 2018, respectively, those recommendations had not been fully addressed. As previously discussed, IRS is in the initial stages of implementing a strategic workforce planning model, which could provide IRS with the information needed to understand what critical skills and competencies are needed to meet its mission. However, according to IRS officials, the agency has not used such a framework in recent years, making it difficult to determine where skills gaps exist. Nonetheless, our analysis of Treasury documents, Enterprise Human Resources Integration data, and interviews with agency officials found IRS currently has skills gaps in key occupations. In fiscal year 2017, Treasury conducted a department-wide analysis of mission critical occupations (MCO) at risk of skills gaps. Treasury analyzed four factors to determine and rank MCOs at highest risk for skills gaps: 1) 2-year retention rate, 2) quit rate, 3) retirement rate, and 4) applicant quality. Analysis of these factors can help build the predictive capacity of agencies to identify mission critical skills gaps as they emerge.
The following are the MCOs relevant to IRS that Treasury determined to be at medium or moderate risk for skills gaps, in order of risk: human resources specialist and tax law specialist. In light of staff attrition since 2011, particularly within enforcement occupations, we selected tax examiners and revenue officers to demonstrate how IRS has implemented strategies, policies, and processes for identifying and addressing skills gaps, and to identify instances where those efforts have affected IRS’s ability to identify and close critical skills gaps. Tax examiners are responsible for responding to taxpayers’ inquiries regarding preparation of a variety of tax returns, related schedules, and other documentation; resolving account inquiries; advising taxpayers of enforcement actions; and managing sensitive case problems designated as requiring special case handling. In addition, tax examiners analyze and resolve tax-processing problems; adjust taxpayer accounts; prepare and issue manual refunds; and compute tax, penalty, and interest. IRS documents note that the level of supervision, complexity, contacts, and the scope of assigned workload varies for tax examiners across performance levels. At the entry level, tax examiners are responsible for receiving and initiating contacts with taxpayers to gather information, resolve issues, and gain compliance with laws and regulations while dealing with taxpayers who may be evasive in sensitive situations. At the intermediate level, tax examiners are responsible for handling a wide variety of the most difficult or sensitive tax-processing problems. Their work products affect the taxpayer’s filing status and tax liability for current, prior, and future reporting requirements.
At the senior—or expert—level, tax examiners serve as work leaders over employees engaged in accomplishing tax examining work, and perform a full range of examination duties that include adjusting tax, penalty, and interest on taxpayers’ accounts and closing cases. Our analysis of OPM data found that, from fiscal years 2011 through 2017, the agency lost 18 percent of its total tax examiner workforce (see figure 7). Additionally, the number of tax examiners at the intermediate level declined by 34 percent during that same period. IRS officials told us replacing tax examiners is particularly difficult not only because of the general hiring restrictions affecting the entire IRS, but also because of the significant amount of specialized expertise that must be developed to perform in a specific area of tax law. According to IRS officials, in 2018 and in response to declining tax examiner staffing, IRS doubled the dollar amount thresholds tax examiners use to select refunds for additional audit. IRS officials told us this means thousands of refunds that would have received additional scrutiny due to errors or anomalies are no longer considered for follow-up review by tax examiners, and the government is potentially missing significant opportunities to collect revenue and enforce tax laws. Three of the four business divisions within IRS identified skills gaps among their tax examiners. Large Business and International (LB&I). According to LB&I officials, long-term vulnerability among their tax examiners is a major concern, in part because LB&I has been unable to replenish its tax examiner workforce given external hiring constraints and internal promotion concerns (i.e., internal promotions can leave staffing gaps at the lower ranks, putting those ranks at risk for skills gaps). According to LB&I officials, having fewer tax examiners—specifically, fewer tax examiners in key geographic locations—is affecting its mission.
For example, LB&I reviews tax returns of foreign nationals and overseas taxpayers, which are predominantly paper-based returns and have to be processed manually. LB&I officials told us manual paper return processing is time intensive and, with fewer tax examiners, puts IRS at greater risk of having to pay interest to taxpayers for withholding refunds due to processing delays.

Small Business/Self-Employed (SB/SE). According to SB/SE officials, gaps among tax examiners are evident and, as a result, SB/SE has reduced work plans and increased the use of overtime. Within SB/SE’s Campus Exam/Automated Underreporter program, officials identified staffing gaps that they attributed to the general inability to hire behind attrition. According to SB/SE officials, as manager and lead vacancies arise, tax examiners are often detailed to fill the positions, which reduces the number of tax examiners available to perform the work.

Wage and Investment (W&I). According to W&I officials, they have identified tax examiner skills gaps within their Accounts Management, Submission Processing, and Return Integrity and Compliance Services programs. To address identified skills gaps within W&I, officials said they conduct annual Strategic Hiring Summits, bringing together stakeholders and business partners to jointly address filing season staffing needs, staffing barriers and gaps, and hiring lessons learned from prior filing seasons. According to W&I officials, these efforts continue to improve the targeting of W&I’s hiring and the timeliness of its onboarding. Other strategies that W&I plans to implement are to bring in tax examiners earlier and provide them with the full spectrum of training upfront rather than spreading the training out over months or years. Additionally, officials said tax examiners are going to be cross-trained on multiple types of inventory to increase their skills and to address inventory backlogs.
Revenue officers are IRS civil enforcement employees who are trained to conduct face-to-face contact with business and individual taxpayers who have not resolved their tax obligations in response to prior correspondence or contact. The role of revenue officers involves explaining to taxpayers why they are not in compliance, advising them of their financial obligations, and, when necessary, taking appropriate enforcement action. According to IRS, the goal is voluntary taxpayer compliance through payment arrangements or compromises. However, for taxpayers who remain noncompliant, revenue officers are trained to take civil enforcement actions, such as filing a notice of lien to protect the government’s interest, up to and including seizing personal and business property. According to IRS officials, it takes 4 to 5 years to train a new hire to become an experienced senior or expert revenue officer. The senior and expert levels are of particular importance to IRS’s enforcement efforts. An internal IRS study completed in June 2018 found that 84 percent of all successful fraud referrals came from revenue officers at the senior/expert skill level. Senior revenue officers also serve as classroom instructors and perform on-the-job training of intermediate and entry-level staff. According to IRS officials, this additional responsibility directly affects senior revenue officers’ ability to work fraud cases. Our analysis of OPM data shows that the total number of revenue officers at IRS declined by nearly 40 percent from fiscal years 2011 through 2017, and the number of entry-level revenue officers declined by 86 percent during that same period (see figure 8). IRS officials told us the declines were due to a combination of attrition, limited hiring, and promotions. IRS decided to scale back nonfiler investigations in light of declining staffing, according to IRS officials.
We reported that in tax year 2010 IRS started 3.5 million individual nonfiler cases and 4.3 million business nonfiler cases. In tax year 2014, nonfiler cases dropped to 2 million for individuals and 1.8 million for businesses, reductions of 43 percent and 58 percent, respectively. More recently, in fiscal year 2018, IRS data show nonfiler investigations declined to 0.8 million for individuals and 0.4 million for businesses. Since we designated addressing agencies’ mission critical occupation skills gaps as a high-risk area in 2011, OPM and agencies have launched a number of initiatives to close skills gaps. For example, in 2011, OPM and the Chief Human Capital Officers Council established an interagency working group to identify mission critical occupations (MCO) at high risk for skills gaps. The working group, also known as the Federal Agency Skills Team (FAST), identified skills gaps in six government-wide occupations, such as cybersecurity, human resources (HR) specialists, and acquisition. The FAST also identified agency-specific MCOs at high risk for skills gaps, which included IRS revenue agents. Subsequently, Treasury was designated leader of a FAST subteam to develop a plan for closing skills gaps among revenue agents. Treasury convened a group of revenue agents from each of IRS’s business divisions, IRS human resources specialists with workforce planning expertise, and members of IRS’s training group. Table 1 shows the process the subteam used to identify and address the causes of revenue agent skills gaps. The FAST brainstormed potential causes for skills gaps among revenue agents (see figure 9). According to FAST documents, this process helped the team understand the range of contributing factors that led to lower than acceptable 2-year retention rates and a high quit rate among revenue agents.
Now that the FAST has identified the potential causes behind the two indicators, Treasury officials told us, IRS is responsible for developing and implementing strategies to close skills gaps among its revenue agents and for reporting on its progress. According to IRS documents, as of July 2018, the agency established communications with revenue agents to increase awareness about detail and developmental opportunities that are posted on IRS’s Service-wide Detail Opportunities web page, and is developing a plan for more effectively including revenue agents in management training. Related IRS performance measures show that posted detail opportunities for revenue agents increased from 24 in fiscal year 2016 to 69 in fiscal year 2018. For a limited number of mission critical occupations, HCO provides support to business divisions and program offices that need help addressing workforce capacity concerns. For example, HCO conducts competency assessments when a business division or program office is seeking to identify the top candidates for hire or promotion. Determining critical competencies can help agencies respond effectively to demographic, technological, and other forces that are challenging them to change the activities they perform, the goals they must achieve, how they do business, and even who does the government’s business. HCO also conducts skills assessments when a division or program office needs to determine the skill level of its existing employees for the purposes of training, hiring, retention, or staffing decisions. Agencies can use both competency and skills assessments to help identify and address skills gaps. For competency assessments, HCO officials told us they develop annual work plans that prioritize assessment scheduling for certain occupations based on factors including available funding; the availability of business division or program office staff to assist HCO with subject matter expertise; and the age of the competency model or assessment.
For example, in 2017, HCO supported a competency assessment for special agents within its Criminal Investigations (CI) division. CI special agents are forensic accountants searching for evidence of criminal conduct. HCO officials told us competency assessments for special agents are a priority due to the rapidly evolving sophistication of schemes to defraud the government and the increasing use of automated financial records. IRS used information resulting from the competency assessment to revamp the special agent hiring process. According to HCO officials, results from the competency assessment have helped IRS reduce the cost and time needed to assess applicants while improving the overall candidate pool. Skills assessments supported by HCO have been used in some limited cases to help IRS identify and address skills gaps among certain MCOs. According to HCO officials, they provide skills assessments upon request by a business division or program office, assuming personnel and funding resources are available. IRS business divisions or program offices cover costs associated with large-scale assessments where contractor support is needed to supplement HCO’s staff. Skills assessments among occupations with smaller populations usually do not incur costs to the divisions. HCO has supported requested skills assessments of information technology specialists, revenue agents, and human resources specialists in recent years. IRS documents show these assessments were used in part to identify and address skills gaps within these occupations. Unlike for competency assessments, however, IRS does not create a work plan or otherwise prioritize skills assessments to address the occupations most in need. As discussed above, Treasury has identified MCOs at moderate to high risk for skills gaps, yet skills assessments have not addressed all the occupations identified as highest risk.
Leading practices in strategic workforce management state that agencies should determine the critical skills and competencies their workforces need to achieve current and future agency goals and missions, and identify gaps, including those that training and development strategies can help address. A work plan for addressing skills gaps could help IRS remediate gaps on a timely basis. Without a plan, IRS risks having to continue scaling back mission-critical activities as it has done in recent years. As previously discussed, Treasury found IRS is at risk of skills gaps among its mission critical occupations, including its HR specialists. In light of related agency-wide hiring limits, IRS offered early retirement incentives for eligible hiring specialists and did not backfill other specialists when they left the agency. HCO has lost more than half of its hiring specialists since 2011. According to HCO, the hiring skills of the remaining specialists atrophied as those specialists were redirected to other priority HR areas. Many of HCO’s hiring and other HR responsibilities, however, have remained constant or increased. For example, in fiscal year 2017, IRS hired around 6,700 seasonal employees to assist with the filing season, and HCO expects that number to increase in future fiscal years. HCO officials told us the pace of internal hiring (i.e., promotions) has remained constant over the past several years. IRS has recently prioritized hiring to address information technology and cybersecurity needs, as well as implementation of the Tax Cuts and Jobs Act. As a result of the combination of fewer hiring specialists and new hiring requirements, HCO officials said its capacity to hire and carry out other important human capital and HR functions is highly strained.
In 2018, HCO identified improving hiring capacity as its top priority and is exploring a variety of options, including:

HCO surge contracting: Contractors will be used in locations across the employment offices to assist with hiring and personnel security.

Administrative Resource Center (ARC) services: ARC is part of Treasury and provides administrative services, including HR support, for various federal agencies. HCO engaged ARC in May 2018 to assist with developing hiring qualifications.

OPM shared services: IRS is exploring use of OPM shared services for help in the hiring process.

Business-based HR teams: Teams within the divisions have been given authority to post internal merit promotion supervisory vacancy announcements, which will reduce HCO’s workload for this function. HCO will retain responsibility for building positions, setting pay, and processing personnel actions, and will provide a dedicated point of contact for questions and quality review.

Federal Executive Board team: A group of interagency agreement detailees supported by Wage and Investment (W&I) to work W&I vacancy announcement backlogs. IRS officials told us that, as of November 2018, this option had not been successful.

HCO interagency detail opportunities: Employees detailed from other federal agencies into HR positions throughout HCO using interagency agreements.

HCO officials told us they are generally monitoring the status of these activities, but cited competing priorities as a reason they have not determined how each activity will be evaluated in achieving increased hiring capacity and associated outcomes. Periodic measurement of an agency’s progress toward human capital goals, and of the extent to which human capital activities contributed to achieving programmatic goals, provides information for effective oversight by identifying performance shortfalls and appropriate corrective actions.
Without a means for gauging the relative success of its capacity-building activities, IRS risks spending its limited HCO resources on activities that may not help the agency meet its desired hiring outcomes. IRS established a risk register as part of efforts to identify, prioritize, and mitigate risks to IRS’s implementation of the Tax Cuts and Jobs Act, including a number of risks related to its ability to hire. A risk register is used to identify the source of risks, assign owners to manage the treatment of those risks, and track the success of risk mitigation strategies over time. Risk registers or other comprehensive risk reports are an essential element of a successful enterprise risk management program. The risk register shows that a lack of strategic workforce planning in recent years is contributing to a number of risks IRS has faced in implementing the Tax Cuts and Jobs Act. For example:

Large Business and International (LB&I) is having difficulty hiring senior advisors needed to develop training and compliance strategies. The risk register indicates mitigation efforts in this area, such as extending detail opportunities, have failed and there are potential major impacts to the program. According to LB&I officials, staffing declines in related skills prior to the Tax Cuts and Jobs Act have exacerbated difficulties in this area.

Business units have been unable to identify critical hiring needs for the Tax Cuts and Jobs Act. As of October 2018, HCO is coordinating with business units to help determine hiring needs so that it can prioritize agency hiring efforts. In a related risk, IRS determined the lack of personnel and resources within W&I may hinder its ability to identify hiring needs for the fiscal year 2019 filing season.
According to IRS, “the filing season may be impacted by significant resource constraints largely due to onboarding concerns, resulting in lost revenue, increased cost, and significant reputational impact to the IRS.” As of October 2018, IRS stated it has completed necessary hiring plans and determined this risk has minimal to no impact on IRS’s ability to carry out the upcoming filing season. Table 2 shows additional examples of risks related to hiring identified by IRS, steps the agency is taking to mitigate those risks, and the status as of October 2018. In September 2018, the Treasury Inspector General for Tax Administration (TIGTA) reviewed IRS’s information technology readiness for implementing the Tax Cuts and Jobs Act. TIGTA reported IRS used standard position descriptions for hiring efforts and had not defined specific knowledge, skills, abilities, and other requirements necessary for positions it expects to hire for Tax Cuts and Jobs Act implementation, or for back-filling existing positions due to personnel performing related activities. We did not review position descriptions for the purposes of this report. However, as previously discussed, without information about what skills and skills gaps exist across the agency, IRS lacks important information needed to inform hiring and training resource decisions. It can take a year or longer from the time when a supervisor notifies his or her division of a staffing need to the time the employee is on board, according to IRS documents and our interviews. HCO officials attributed much of this time to gathering required information and approvals associated with IRS’s “Exception Hiring Process.” In fiscal year 2011, IRS instituted the process in part to help the agency prioritize hiring decisions in a highly constrained budget environment.
The Exception Hiring Process added approval layers to IRS’s regular hiring requirements, including direct approval from the Deputy Commissioner for Operations Support, the Deputy Commissioner for Services and Enforcement, or the Chief of Staff for direct reports to the Commissioner. Also as part of this process, the Chief Financial Officer performs a cost assessment to determine the affordability of any requested new hire, and HCO determines if multiple hiring requests can be consolidated into a smaller number of positions. Our review of IRS budget operating guidance and interviews found Exception Hiring Process requirements have changed over time. Initially in 2011, every new hire was subject to the Exception Hiring Process. Since 2011, hiring requirements have eased in some circumstances. For example, in 2014, business division directors were given authority to approve internal hires (i.e., promotions) within their own business division. More recently, new hires in cybersecurity, information technology, or those needed to implement the Tax Cuts and Jobs Act were not subject to the same requirements as hiring requests in other occupations. According to HCO officials, easing hiring requirements in certain circumstances was necessary to help the agency bring on critical hires more quickly. However, based on their interactions with managers in the business divisions, HCO officials said the evolving and nonuniform Exception Hiring Process requirements have been confusing to managers requesting new hires. Business divisions and program offices often submitted hiring requests without required information or approvals. This has resulted in hiring delays, according to HCO officials. HCO officials told us that issuing clearer guidance to business managers would help ensure business divisions submit hiring requests that are complete, which would reduce the risk of hiring delays.
In light of declining resources and increasing requirements, IRS is taking the initial steps to reinstate a strategic approach to workforce planning that the agency scaled back in recent years. IRS has recently provided its HCO with authority to lead and coordinate agency-wide strategic workforce planning efforts. However, full implementation of an IRS initiative to conduct agency-wide strategic workforce planning has been put on hold as other activities have taken priority, and a key workforce planning system being developed by Treasury has been delayed. As a result, these efforts remain fragmented, and IRS lacks an inventory of its current workforce, has not developed the competency and staffing requirements or conducted the agency-wide activities associated with analyzing the workforce to identify skills gaps, and has not developed strategies to address skills gaps. Additionally, IRS could improve reporting of its progress in addressing skills gaps. This critical information would help provide assurance that its fragmented human capital activities are well managed and that resources are being effectively allocated. High attrition among IRS employees, particularly in complex enforcement occupations, and lower-than-average employee satisfaction rates put IRS at continued risk of skills gaps. These skills gaps have already been a significant contributor to IRS’s decisions to scale back important enforcement activities that are critical to promoting voluntary compliance and closing the tax gap. However, IRS has not targeted its limited resources to addressing issues among the mission critical occupations most at risk of skills gaps. Instead, activities such as skills gap assessments are conducted only to the extent that business divisions and program offices make resources available and management is aware of, and inclined to seek assistance from, IRS’s HCO.
Reporting on the results of efforts to close skills gaps and developing a work plan or other mechanism for prioritizing assessments would better position IRS to address key gaps. Additionally, the results of an interagency working group effort intended to address skill gaps among IRS revenue agents and other occupations with skills gaps across the government may hold important lessons for addressing skills gaps among mission critical occupations at IRS. Each of these issues is exacerbated by limited capacity within HCO, which has redirected its resources to implementing the Tax Cuts and Jobs Act and meeting other routine transactional human resource requirements. HCO is leveraging a range of activities intended to help the agency meet immediate hiring needs. Measuring the extent to which each of these activities is effective would help HCO target resources to the most effective activities as it seeks to improve its capacity for hiring employees in hard to fill positions in the future. In addition, issuing clear guidance on hiring request requirements would better position IRS to avoid hiring delays for mission critical occupations. We are making seven recommendations, six to IRS and one to Treasury. Specifically:

The Commissioner of the IRS should fully implement the workforce planning initiative, including taking the following actions: (1) conducting enterprise strategy and planning, (2) conducting workforce analysis, (3) creating a workforce plan, (4) implementing the workforce plan, and (5) monitoring and evaluating the results. (Recommendation 1)

The Secretary of the Treasury should issue clarifying guidance to IRS about the Integrated Talent Management system, including when the workforce planning and talent management modules will be deployed and available for IRS’s use, the functions it will include, and how IRS’s existing systems and processes within business divisions and program offices will be affected.
(Recommendation 2)

The Commissioner of IRS should ensure the Human Capital Officer improves reporting for its workforce planning initiative in its bi-monthly HRStat information submissions to Treasury. The submissions should include the original implementation schedule, changes to the original schedule, delays in implementation and each of their causes, and IRS’s strategy to address the causes of those delays. (Recommendation 3)

The Commissioner of IRS should ensure the Human Capital Officer and Deputy Commissioner for Services and Enforcement report the results of efforts to close skills gaps among revenue agents, including lessons learned, that may help inform strategies for conducting skills gap assessment efforts for other mission critical occupations. (Recommendation 4)

The Commissioner of IRS should ensure the Human Capital Officer and Deputy Commissioner for Services and Enforcement collaborate to develop a work plan or other mechanism that prioritizes and schedules skills assessments for mission critical occupations at highest risk of skills gaps, such as those identified by Treasury or where key activities have been scaled back, for the purposes of developing a strategy to close the gaps. (Recommendation 5)

The Commissioner of IRS should direct the Human Capital Officer to measure the extent to which each of its activities for improving hiring capacity is effective in producing desired hiring capacity outcomes, including strategies used to mitigate hiring risks associated with Tax Cuts and Jobs Act implementation hiring. (Recommendation 6)

The Commissioner of IRS should direct the Human Capital Officer and Chief Financial Officer to issue clarifying guidance on the current Exception Hiring Process, including clarifying areas where hiring limitations that were used in previous years are no longer applicable.
(Recommendation 7)

We provided a draft of this report to the Commissioner of the Internal Revenue Service, the Secretary of the Treasury, and the Acting Director of the Office of Personnel Management for review and comment. In a letter from IRS’s Deputy Commissioner for Operations Support, reproduced in appendix II, IRS agreed with our six recommendations directed to it. The letter states there is room for improvement in implementing its strategic workforce plan and the associated workforce planning initiative, and IRS will provide a detailed corrective action plan in its 180-day response to Congress. IRS also provided technical comments, which we incorporated as appropriate. For Treasury, the Acting Director, Human Capital Strategic Management, of the Office of the Deputy Assistant Secretary for Human Resources and Chief Human Capital Officer, emailed comments stating Treasury agreed with the one recommendation directed to it. In the comments, Treasury wrote, “the [Deputy Assistant Secretary for Human Resources and Chief Human Capital Officer] will continue to provide guidance, policy and direction on how the ITM is used to meet Workforce Planning objectives.” Treasury provided technical comments on the recommendation directed to it, and we revised the recommendation as appropriate to recognize that bureaus, not Treasury, implement the ITM. OPM did not have comments. We are sending copies of this report to interested congressional committees, the Commissioner of IRS, the Secretary of the Treasury, and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or McTigueJ@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
You asked us to review the Internal Revenue Service’s (IRS) enterprise-wide strategic workforce planning efforts. In this report, we assess (1) how IRS defines its workforce needs and develops strategies for shaping its workforce; (2) the extent to which IRS identified the critical skills and competencies it will require to meet its goals, and describe its strategy to address skills gaps in its workforce; and (3) the extent to which IRS’s Human Capital Office has the capacity to hire employees in hard to fill positions. For our first objective, to determine how IRS defines its workforce needs, we conducted a review of IRS’s implementation of its strategic workforce planning process. We compared IRS’s strategic workforce planning guidance, policies, and procedures, as well as the Department of the Treasury’s (Treasury) guidance and policies, to (1) Office of Personnel Management (OPM) regulations and guidance on strategic workforce planning, (2) our reports on key principles for effective strategic workforce planning, and (3) standards for internal controls. To describe how IRS’s workforce planning process aligns with standards, we reviewed IRS’s documentation of its programs, policies, and practices for recruiting, developing, and retaining the staff needed to achieve program goals. We compared that information with requirements articulated in OPM regulations and best practices we have identified. To include prior actions and concerns previously identified as related to IRS’s strategic human capital planning, we reviewed our prior relevant reports and those from the Treasury Inspector General for Tax Administration. We also used several databases to examine IRS’s workforce trends. To analyze trends in IRS’s full-time equivalent employment, we used the Office of Management and Budget’s (OMB) budget database, MAX Information System (MAX), for fiscal years 2011 through 2017.
To analyze employee engagement and employee global satisfaction at IRS, we analyzed IRS results from OPM’s fiscal years 2011 through 2017 Federal Employee Viewpoint Survey (FEVS). To determine retirement eligibility of SES and non-SES IRS staff, we analyzed data in OPM’s Enterprise Human Resources Integration (EHRI) database. To assess the reliability of EHRI, OMB Max, and FEVS data, we reviewed our past data reliability assessments and conducted electronic testing to evaluate the accuracy and completeness of the data used in our analyses. For EHRI and FEVS, we also interviewed knowledgeable agency officials. We determined the data used from these three systems to be sufficiently reliable for our purposes. We supplemented our review of documentation by interviewing relevant IRS, Treasury, and OPM officials. We interviewed IRS officials from the Human Capital Office including the Human Capital Officer, Large Business & International (LB&I), Small Business Self Employed (SB/SE), Tax Exempt and Government Entities (TE/GE), and Wage & Investment (W&I) business divisions to understand how IRS assesses its workforce needs and develops strategies for shaping its workforce. We interviewed OPM officials about regulatory requirements and their perspective on strategic human capital planning requirements, as well as their experience working with Treasury and IRS. We met with Treasury and Taxpayer Advocate Service officials to understand their role and responsibilities for coordinating with and providing oversight of IRS activities. We reviewed IRS’s practices and related documentation for monitoring and evaluating progress toward human capital goals, including Treasury’s HRStat reports. 
For objective 2, to assess the extent IRS identified and described critical skills required to meet its goals, in addition to activities performed to address objective 1, we selected a nongeneralizable sample of occupations identified by IRS as mission critical to illustrate how IRS has implemented strategies, policies, and processes for identifying and addressing skills gaps, and to identify critical instances where those efforts have affected IRS’s ability to identify and close critical skills gaps. Because IRS’s workforce planning efforts are generally conducted by mission critical occupations (MCO), we selected MCOs as our unit of analysis. We excluded MCOs with characteristics that made them unlikely to yield new or useful information for the purposes of our report. MCOs were excluded from our analysis if they (1) were under review as part of our recent or ongoing work, (2) had small numbers of staff (less than 100), or (3) were assessed by Treasury to be at low risk for skills gaps. The Treasury assessment ranked MCOs in order of risk for skills gaps based on 2-year retention rate and applicant quality. Based on these criteria, we selected revenue officers and tax examiners as occupational case illustrations representing tax enforcement activities. These two occupations, in tandem with discussion of Treasury’s efforts to close skills gaps among revenue agents, while not generalizable, provided illustrative examples for this objective. We analyzed IRS’s audit rate of individual and corporate returns to show a change in the number of audits for fiscal years 2011 through 2017 based on data reported by IRS in its annual Data Book. To obtain information to illustrate the current state of the selected MCOs located within the four business divisions, we sent the business divisions a semistructured set of written questions coupled with a request to provide corroborating documents to support their responses.
We asked each business division for information about related MCOs, including: hiring data and retirement eligibility rates for MCOs; skills, competency, or staffing gaps identified among its MCOs; and any resource tradeoff decisions made as a result of skills gaps. To supplement the information we gathered from responses to our written questions, we also reviewed IRS and Treasury documents for addressing skills gaps for revenue agents that were conducted after we identified mission critical skills gaps as a government-wide high-risk issue in 2011. For objective 3, to assess the extent IRS’s Human Capital Office has the capacity to hire employees in hard to fill positions, we reviewed documentation related to IRS hiring requirements, including the Internal Revenue Manual and policy explaining the Exception Hiring Process. We interviewed division directors from each of IRS’s major business divisions (W&I, LB&I, TE/GE, and SB/SE) to understand their hiring experience and impressions of time-to-hire and candidate quality results related to the Exception Hiring Process. We interviewed senior officials responsible for IRS’s hiring function. We reviewed documentation related to systems used to process and onboard new hires. We conducted this performance audit from August 2017 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Tom Gilbert (Assistant Director), Shea Bader (Analyst-in-Charge), Crystal Bernard, Jacqueline Chapin, James Andrew Howard, Meredith Moles, Steven Putansu, and Robert Robinson made major contributions to this report.
Devin Braun, Regina Morrison, Erin Saunders-Rath, and Sarah Wilson provided key assistance.
|
IRS faces a number of challenges that pose risks to meeting its mission if not managed effectively. Key to addressing IRS's challenges is its workforce. Cultivating a well-equipped, diverse, flexible, and engaged workforce requires strategic human capital management. GAO was asked to review IRS's enterprise-wide strategic workforce planning efforts. GAO assessed (1) how IRS defines its workforce needs and develops strategies for shaping its workforce; (2) the extent to which IRS identified the critical skills and competencies it will require to meet its goals, and its strategy to address skills gaps in its workforce; and (3) the extent to which IRS's Human Capital Office has the capacity to hire employees in hard to fill positions. GAO analyzed trends in staffing across IRS and in selected mission critical occupations; compared IRS strategic workforce management processes, practices, and activities with federal regulations and leading practices; analyzed IRS documents and interviewed agency officials. The Internal Revenue Service (IRS) has scaled back strategic workforce planning activities in recent years. IRS officials told GAO that resource constraints and fewer staff with strategic workforce planning skills due to attrition required IRS to largely abandon strategic workforce planning activities. However, a number of indicators, such as increasing rates of retirement eligible employees and declining employee satisfaction, led IRS to determine that continuing to make short-term, largely nonstrategic human capital decisions was unsustainable. One way IRS sought to address these issues was to develop a strategic workforce plan and associated workforce planning initiative. Initiative implementation, however, is behind schedule and on hold. 
IRS attributed the delay to a combination of: 1) personnel resources redirected to implement Public Law 115-97—commonly referred to as the Tax Cuts and Jobs Act, 2) lack of workforce planning skills within its Human Capital Office, and 3) delayed deployment of a new workforce planning system at the Department of the Treasury (Treasury). As a result, IRS lacks information about what mission critical skills it has on board, where skills gaps exist, and what skills will be needed in the future. IRS staffing has declined each year since 2011, and declines have been uneven across different mission areas. GAO found the reductions have been most significant among those who performed enforcement activities, where staffing declined by around 27 percent (fiscal years 2011 through 2017). IRS attributed staffing declines primarily to a policy decision to strictly limit hiring. Agency officials told GAO that declining staffing was a key contributor in decisions to scale back activities in a number of program and operational areas, particularly in enforcement, where the number of individual returns audited from fiscal years 2011 through 2017 declined by nearly 40 percent. IRS has skills gaps in mission critical occupations, and the agency's efforts to address these skills gaps do not target the occupations in greatest need, such as tax examiners and revenue officers. However, the results of an interagency working group effort that began in 2011, and was intended to address skill gaps among IRS revenue agents and other occupations with skills gaps across the government, may hold important lessons for addressing skills gaps in other mission critical occupations at IRS. IRS's Human Capital Office has limited staffing capacity to hire employees in hard to fill positions, which holds risks for the agency's ability to implement the Tax Cuts and Jobs Act.
IRS is undertaking a variety of activities to improve its hiring capacity, but has not determined how each activity will be evaluated and will contribute to increased hiring capacity or associated outcomes. In addition, changes in the agency's hiring processes have been confusing to managers and contributed to hiring delays. Clear guidance on hiring request requirements would better position IRS to avoid the risk of hiring delays for mission critical occupations. GAO is making six recommendations to IRS, including implementing its delayed workforce planning initiative, evaluating actions to improve the agency's hiring capacity, and addressing changes in its processes that have contributed to hiring delays. IRS agreed with GAO's recommendations. GAO also recommends Treasury clarify guidance to IRS on a forthcoming workforce planning system. Treasury agreed with the recommendation.
|
The Forest Service’s mission includes sustaining the nation’s forests and grasslands; managing the productivity of those lands for the benefit of citizens; conserving open space; enhancing outdoor recreation opportunities; and conducting research and development in the biological, physical, and social sciences. The agency carries out its responsibilities in three main program areas: (1) managing public lands, known collectively as the National Forest System, through nine regional offices, 154 national forests, 20 national grasslands, and over 600 ranger districts; (2) conducting research through its network of seven research stations, multiple associated research laboratories, and 81 experimental forests and ranges; and (3) working with state and local governments, forest industries, and private landowners and forest users in the management, protection, and development of forest land in nonfederal ownership, largely through its nine regional offices. According to the Forest Service, it employs a workforce of over 30,000 employees across the country. However, this number grows by thousands in the summer months, when the agency hires seasonal employees to conduct fieldwork, respond to wildland fires, and meet the visiting public’s needs. The Office of the Chief of the Forest Service is located in Washington, D.C., with 27 offices reporting directly to the Office of the Chief, as illustrated in figure 1. The nine national forest regions, each led by a regional forester, oversee the national forests and grasslands located in their respective regions. Each national forest or grassland is headed by a supervisor, the seven research stations are each led by a station director, and a state and private forestry area is headed by an area director. The Forest Service collectively refers to its forest regions, research stations, and area as RSAs. 
The RSAs are organized differently according to their operations, and comparable operations within the RSAs, such as collections from reimbursable agreements, may be processed differently in the various regions and stations, resulting in highly decentralized operations. In addition, the offices of the Chief Financial Officer (CFO); Deputy Chief of Business Operations (includes the budget office); and eight other offices located in the Washington, D.C., headquarters also report directly to the Office of the Chief of the Forest Service. The Forest Service receives appropriations for its various programs and for specific purposes to meet its mission goals. Prior to fiscal year 2017, the Forest Service’s budgetary resources consisted primarily of no-year funds. Its budget office in Washington, D.C., initiates apportionment requests and monitors the receipt of Department of the Treasury (Treasury) warrants. Upon receipt of the warrant, the apportionment is recorded in the financial system and then the budget office develops an allocation summary detailing the allocation of its budget authority by fund, programs within the funds, and distribution of funds at the regional, station, and area levels. The Forest Service may also transfer funds from other appropriations to the appropriations account that funds its fire suppression activities when available funds appropriated for fire suppression and the Federal Land Assistance, Management, and Enhancement (FLAME) fund will be exhausted within 30 days. The Forest Service’s administrative policies, practices, and procedures are issued in its Directive System, which provides a unified system for issuing, storing, and retrieving internal direction that governs Forest Service programs and activities. The Directive System consists of the Forest Service’s manuals and handbooks. 
The manuals contain management objectives, policies, and responsibilities and provide general direction to Forest Service line officers and staff directors for planning and executing their assigned programs and activities. The handbooks provide detailed direction to employees and are the principal source of specialized guidance and instruction for carrying out directions issued in the manuals. Line officers at the national and RSA levels have authority to issue directives in the manuals and handbooks under their respective jurisdictions. The Forest Service’s policy states that the Directive System is the only place where Forest Service policy and procedures are issued. In addition to the Directive System, Forest Service staff have also developed standard operating procedures (SOP) and desk guides to supplement guidance provided in directives. However, the SOPs and desk guides are not part of the Forest Service Directive System and therefore are not official policy and procedures. While the Forest Service had documented processes for allotting its budgetary resources, it did not have an adequate process and related control activities for reasonably assuring that (1) amounts designated in appropriations acts for specific purposes are used as designated and (2) unobligated no-year appropriation balances from prior years were reviewed for their continuing need. In addition, the Forest Service did not have a properly designed and documented system for administrative control of funds. Finally, the Forest Service had not properly designed control activities for fund transfers for fire suppression activities under its Wildland Fire Management program. 
While the Forest Service had documented processes for allotting its budgetary resources, it did not have an adequate process and related control activities to reasonably assure that amounts designated in appropriations acts for specific purposes are used as designated—as required by the purpose statute, which states that “appropriations shall be applied only to the objects for which the appropriations were made except as otherwise provided by law.” We reviewed Forest Service documents about its budget authority processes, which included control objectives, related control activities, and processes over the allotment of its budgetary resources. We found that these documents, including manuals and handbooks, did not include an adequate process and related control activities for assuring that appropriated amounts are used for the purposes designated. For example, such a process would include the Forest Service allotting appropriated funds for specific programs or objects as provided in the applicable appropriation act, by either using specific budget line items already defined in the Forest Service’s financial system or creating new budget line items, as needed. Standards for Internal Control in the Federal Government states that management should define objectives clearly to enable the identification of risks and design appropriate control activities to achieve objectives and respond to the risks identified. As a result of the Forest Service not having an adequate process and related control activities for assuring that appropriated amounts are used for the purposes designated, the Forest Service did not properly allocate certain funds for specific purposes detailed in the appropriations acts for fiscal years 2015 and 2016. For example, in fiscal year 2015, the Forest Service did not set aside in its financial system the $65 million specified in the fiscal year 2015 appropriations act for acquiring aircraft for the next-generation airtanker fleet.
According to Forest Service documents, as of January 6, 2016, $35 million of the designated funds was used for other purposes. In February 2017, we issued a legal opinion, related to the Forest Service’s use of the $65 million, which concluded that the Forest Service had failed to comply with the purpose statute. According to USDA’s Office of General Counsel, “this lack of any separate apportionment or account for the next-generation airtanker fleet was due to the fact that it was a new item, not included in the agency’s budget request, and added late in the appropriations process.” Similarly, in fiscal year 2016, the Forest Service did not create new budget line items to reserve in its financial system $75 million for the Forest Inventory and Analysis Program specified in the fiscal year 2016 appropriations act. Rather than creating a new budget line item for the program specified in the appropriations act, the funds were combined with an existing budget line item, making it difficult to track related budget amounts and actual expenditures. The lack of an adequate process and related control activities to reasonably assure that appropriated amounts are used for the purpose designated also increases the risk that the Forest Service may violate the Antideficiency Act. The Forest Service did not have a process and related control activities to reasonably assure that unobligated, no-year funds from prior years were reviewed for continuing need. We reviewed the Forest Service’s budget authority process document and related manuals and handbooks, which documented control objectives and procedures over its budgetary resources and the guidance for administrative control of funds. We found that these documents did not include a process for reviewing the Forest Service’s unobligated, no-year funds from prior years and related control activities to reasonably assure that such funds were reviewed for continuing need. 
Such reviews, if performed, may identify unneeded funds that could be reallocated to other programs needing additional budgetary resources, if consistent with the purposes designated in appropriations acts. The USDA Budget Manual states as a department policy that “agencies of the Department have a responsibility to review their programs continually and recommend, when appropriate, deferrals or rescissions.” The USDA Budget Manual further states the following: “Agency officials should remain alert to this responsibility since the establishment of reserves is an important phase of budgetary administration. If it becomes evident during the fiscal year that any amount of funds available will not be needed to carry out foreseeable program requirements, it is in the interest of good management to recommend appropriate actions, thereby maintaining a realistic relationship between apportionments, allotments, and obligations.” However, the Forest Service did not develop a directive addressing the control objectives, related risks, and control activities for implementing this USDA policy. Until fiscal year 2017, Forest Service budgetary resources consisted primarily of no-year funds. At the beginning of each fiscal year, unobligated balances of no-year funds are carried forward and reapportioned to become part of budget authority available for obligation in the new fiscal year. Unobligated balances can increase during the fiscal year due to deobligation of prior years’ unliquidated obligations that the Forest Service determines it no longer needs. These resources are immediately available to the Forest Service to the extent authorized by law without further legislation or action from the Office of Management and Budget (OMB) unless the apportionment states otherwise.
According to Forest Service officials, unobligated funds reported in the Forest Service’s September 30, 2016, Statement of Budgetary Resources included $351 million in discretionary unobligated no-year funds, appropriated as far back as fiscal year 1999. The Forest Service did not identify and define a process and control objectives related to its review of unobligated no-year funds from prior years for continuing need. As a result, the Forest Service did not have reasonable assurance that prior no-year unobligated balances were properly managed and considered in its annual budget requests. This increased the risk that the Forest Service may make budget requests in excess of its needs. Additionally, the Forest Service could miss opportunities to use its prior year unobligated no-year funds in a more timely and effective manner, for example, by using these funds for other Forest Service program needs, if consistent with the purposes designated in appropriations acts. During our work, we brought this issue to management’s attention, and in response, Forest Service officials stated that the Forest Service is planning to develop a quarterly process to review available balances and, as needed, redirect funds to agency priorities. However, as of July 2017, the Forest Service had not yet developed this review process. Further, Congress rescinded about $18 million of the Forest Service’s prior year unobligated balances, required it to report unobligated balances quarterly within 30 days after the close of each quarter, and appropriated multi-year funds instead of no-year funds to the Forest Service for fiscal year 2017. The Forest Service issued guidance related to administrative control of funds in manuals and handbooks, which USDA did not review and approve prior to their issuance.
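A continuing-need review of the kind described above amounts to flagging every unobligated no-year balance carried forward from a prior fiscal year. A minimal sketch of such a quarterly review follows; the balances are illustrative assumptions, not Forest Service figures:

```python
# Hypothetical sketch of a quarterly continuing-need review like the one
# Forest Service officials said they planned to develop: flag unobligated
# no-year balances appropriated in prior fiscal years. All balances here
# are illustrative.

unobligated_by_fy = {
    1999: 2_000_000,    # illustrative balance from an old appropriation
    2014: 40_000_000,
    2016: 120_000_000,  # current-year funds, not yet subject to the review
}

def balances_needing_review(balances, current_fy):
    """Return prior-year unobligated balances to review for continuing need."""
    return {fy: amount for fy, amount in balances.items() if fy < current_fy}

prior = balances_needing_review(unobligated_by_fy, current_fy=2016)
print(f"${sum(prior.values()):,} in prior-year funds flagged for review")
```

Each flagged balance would then receive a documented determination of continuing need, with unneeded amounts recommended for deferral, rescission, or reallocation consistent with the appropriations acts.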
Based on our review of these documents, we found that the processes and related control activities over the administrative control of funds were dispersed in numerous manuals and handbooks, which may hamper a clear understanding of the overall system. Further, the system lacked key elements that would allow it to serve as an adequate system of administrative control of funds. For example, in its manuals and handbooks the Forest Service did not identify, by title or office, those officials with the authority and responsibility for obligating the service’s appropriated funds, such as funds for contracts, travel, and training. As a result, the responsibility for obligating funds was not clearly described and properly assigned in Forest Service policy as required by the USDA Budget Manual and OMB Circular No. A-11. OMB Circular No. A-11 states that the Antideficiency Act requires that the agency head prescribe, by regulation, a system of administrative control of funds, and OMB provided a checklist in appendix H to the circular that agencies can use for drafting their fund control regulations. This requirement is consistent with those in the USDA Budget Manual, which prescribes budgetary administration through a system of administrative controls for its component agencies, including the Forest Service. The USDA Budget Manual states that to the extent necessary for effective administration, (1) the heads of USDA component agencies may delegate to subordinate officials responsibilities in connection with the administrative distribution of funds within apportionments and allotments and the monitoring, control, and reporting of the occurrence of obligations and expenditures under apportionments and allotments and (2) the chain of such responsibility shall be clearly defined. 
In addition, USDA requires its component agencies to promulgate and maintain administrative control of funds regulation and to send such regulation to USDA’s Office of Program and Budget Analysis for review and approval prior to issuance. Because the Forest Service has not developed and issued a comprehensive system for administrative control of funds that considers all aspects of the budget execution processes, it cannot reasonably assure that (1) programs will achieve their intended results; (2) the use of resources is consistent with the agency’s mission; (3) programs and resources are protected from waste, fraud, and mismanagement; and (4) laws and regulations are followed. We also found that the Forest Service had not reviewed and updated most of its administrative control of funds guidance in the manuals and handbooks for over 5 years. The USDA Budget Manual requires each component to periodically review its funds control system for overall effectiveness and to assure that it is consistent with its agency programs and organizational structures. Further, Forest Service policy also requires routine review, every 5 years, of policies and procedures in its Directive System. According to Forest Service officials, when directives are up for review and update, a staff member from the Office of Regulatory and Management Services (ORMS) sends an e-mail reminder to notify responsible personnel that updates to applicable directives are needed. However, we found that the Forest Service does not have adequate controls in place to monitor the reviews and any updates of the manuals and handbooks in its Directive System to reasonably assure that their efforts resulted in timely updates. As a result, the Forest Service is at risk that guidance for its system for administrative control of funds may lose relevance as processes change over time and control activities may become inadequate.
The Forest Service did not have properly designed control activities over its process for fund transfers related to wildland fire suppression activities. The Forest Service receives appropriations for necessary expenses for (1) fire suppression activities on National Forest System lands, (2) emergency fire suppression on or adjacent to such lands or other lands under fire protection agreement, (3) hazardous fuels management on or adjacent to such lands, and (4) state and volunteer fire assistance. Transfer of funds from other Forest Service programs to its fire suppression activities occurs when the Forest Service has exhausted all available funds appropriated for the purpose of fire suppression and the FLAME fund. A key aspect of this process is assessing the FLAME forecast, which the Forest Service uses to predict the costs of fighting wildland fires for a given season, and developing a strategy to identify specific programs and the amounts that may be transferred to pay for fire suppression activities when needed. The process for reviewing the FLAME forecast and strategizing the fund transfers was documented in the Basic Budget Desk Guide created by staff in the Forest Service’s Strategic Planning and Budget Analysis Office. However, the desk guide did not contain evidence of review by responsible officials. As a result, the Forest Service lacked reasonable assurance that the desk guide was complete and appropriate for its use. The Basic Budget Desk Guide included a listing of actions to be performed by the analyst for reviewing the FLAME forecast report and developing a strategy for fund transfer from other programs. However, the desk guide did not specify the factors to be considered when developing the strategy. For example, it did not call for documentation addressing the rationale for the strategy or an assessment of the risk that the fund transfers could pose to the programs from which the funds would be transferred.
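A transfer strategy of the kind the desk guide envisions, identifying specific programs and amounts, could be sketched as follows. The program names, balances, and the simple proportional rule are illustrative assumptions, not the Forest Service's actual method:

```python
# Hypothetical sketch of a documented fund transfer strategy: given a
# forecast suppression funding shortfall, identify candidate programs
# and the amounts to transfer from each. Names, balances, and the
# proportional allocation rule are all illustrative.

def plan_transfers(shortfall, available):
    """Allocate the shortfall across programs in proportion to their balances."""
    total = sum(available.values())
    if shortfall > total:
        raise ValueError("forecast shortfall exceeds transferable balances")
    plan = {program: shortfall * balance // total
            for program, balance in available.items()}
    # assign any rounding remainder to the program with the largest balance
    plan[max(available, key=available.get)] += shortfall - sum(plan.values())
    return plan

plan = plan_transfers(
    shortfall=100_000_000,
    available={"capital_improvement": 150_000_000, "research": 50_000_000},
)
print(plan)
```

A documented version of this step would also record the rationale for the chosen amounts and an assessment of the effect on each source program, which is the documentation the desk guide lacked.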
The desk guide also did not describe the review and approval of the strategy by responsible officials before the fund transfer request is sent to the Chief of the Forest Service. According to Standards for Internal Control in the Federal Government, management should design control activities to achieve objectives and respond to risks, and such control activities should be designed at the appropriate levels in the organizational structure. Further, management may design a variety of transaction control activities for operational processes, which may include verifications, authorizations and approvals, and supervisory control activities. The lack of properly designed control activities for supervisory review of the desk guide and strategy to identify the amounts for fund transfers does not provide the Forest Service reasonable assurance that the objectives of the fund transfers—including mitigating the risk of a shortfall of funding for other critical Forest Service program activities, such as payroll or other day-to-day operating costs—will be efficiently and effectively achieved. The Forest Service enters into various reimbursable agreements with agencies within USDA, other federal agencies, state and local government agencies, and nongovernment entities to carry out its mission for public benefit. The reimbursable agreements may be for the Forest Service to provide goods and services to a third party or to receive goods and services from a third party, or may be a partnership agreement with a third party for a common goal. According to Forest Service officials, the two distinct types of Forest Service reimbursable agreements are (1) fire incident cooperative agreements and (2) reimbursable and advanced collection agreements (RACA). The Forest Service did not have documented processes and related control activities for its fire incident cooperative agreements to reasonably assure the effectiveness and efficiency of its related fire incident operations.
In addition, processes and related control activities applicable to RACAs were not adequately described in applicable manuals and handbooks in the Directive System, to reasonably assure that control activities could be performed consistently and effectively. Further, certain RACA processes in the Directive System had not been timely reviewed by management and did not reflect current processes. Moreover, as previously discussed, SOPs and desk guides developed in field offices related to RACA processes were not in the Forest Service’s Directive System. Finally, the Forest Service lacked control activities segregating incompatible duties performed by line officers and program managers in creating reimbursable agreements and the final disposition of related receivables. The Forest Service did not have documented processes and related control activities for its fire incident cooperative agreements to reasonably assure the effectiveness and efficiency of its related fire incident operations and reliable reporting internally and externally. As part of the service’s mission objective to suppress wildland fires, Forest Service officials stated that they enter into 5-year agreements referred to as master cooperative agreements with federal, state, and other entities. These agreements document the framework for commitment and support efficient and effective coordination and cooperation among the parties in suppressing fires, when they occur. The master cooperative agreements do not require specific funding commitments as amounts are not yet known. These agreements vary from region to region because of the differing laws and regulations pertaining to the participating states and other entities. These variations can also result in different billing and collection processes between regions. 
When a fire occurs, supplemental agreements, which are based on the framework established in the applicable master cooperative agreements, are signed by relevant parties for each fire incident. These agreements establish the share of fire suppression costs incurred by the Forest Service and amounts related to entities that benefitted from those fire suppression efforts. These supplemental agreements require commitment and obligation of funds. As indicated in figure 2, the Forest Service’s obligations for fire suppression activities ranged from $412 million to $1.4 billion over the 10-year period from fiscal years 2007 through 2016. In response to our request for documentation of processes and related control activities over its fire incident cooperative agreements, Forest Service officials stated that processes and related control activities over reimbursable agreements were applicable to both fire incident cooperative agreements and RACAs. However, based on our review of the Forest Service’s processes and related control activities over its reimbursable agreements, we found that the unique features of fire incident cooperative agreements (as compared to features of RACAs) were not addressed in the processes and related controls for reimbursable agreements. For example, there was no process and related control activities over the negotiation and review of (1) a fire incident master cooperative agreement, which is developed before a fire occurs, and (2) supplemental agreements, which are signed by all relevant parties after the start of a fire incident. These supplemental agreements detail, among other things, the terms for (1) fire department resource use, (2) financial arrangements, and (3) specific cost-sharing agreements. Another unique feature of fire incident cooperative agreements, which was not covered in process documents for its reimbursable agreements, was the preparation of the Cost Settlement Package. 
The preparation of this package does not start until after the fire has ended and the Forest Service has received and paid all bills. According to Forest Service officials, a fire incident is deemed to have ended when there are no more resources (firefighters and equipment) on the ground putting out the fire. However, this definition was not documented in the Forest Service’s manuals and handbooks in the Directive System. Based on our review of documentation that the Forest Service provided for four fire incidents, we found that for these incidents the Cost Settlement Packages and the billings took several months to years to complete after the fire incident. According to Forest Service officials, delays in preparing the Cost Settlement Package in many cases were due to parties involved in suppressing the fires taking a long time to submit their invoices to the Forest Service for payment. Because the preparation of Cost Settlement Packages was not included in the process documents, the Forest Service did not have a defined time frame for when, in relation to the end of the fire, the Cost Settlement Package must be completed. For example, in one case we reviewed, the bill for a cost settlement was sent 9 months after the fire occurred, and in another case, settlement occurred approximately 2 years after the fire occurred. For both fire incidents, based on the reports we reviewed, the fires were contained within a week or two, but the Forest Service does not have a policy for documenting the date when the fire incident is deemed to have ended. Because of the complexity of the process for negotiating and determining the reimbursable amounts from all the costs that the Forest Service pays for a fire incident, the reimbursable amounts may take time to negotiate, and subsequent billing to and collection from parties may take much longer. 
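Given collection timelines that can stretch to years, a simple aging check can show how receivables drift toward the point at which a financial system's aging process would deem them uncollectible. A minimal sketch follows, with an assumed 2-year write-off threshold and illustrative records; the report does not specify the actual threshold:

```python
from datetime import date

# Hypothetical sketch: flagging receivables that have aged past the
# point at which a financial system's aging process would deem them
# uncollectible and a bad debt. The 2-year threshold and the records
# below are illustrative assumptions.

AGING_LIMIT_DAYS = 730  # assumed write-off threshold

receivables = [
    {"id": "R-1", "billed": date(2014, 6, 1), "amount": 250_000},
    {"id": "R-2", "billed": date(2016, 3, 1), "amount": 90_000},
]

def aged_receivables(records, as_of, limit_days=AGING_LIMIT_DAYS):
    """Return receivables older than the write-off threshold."""
    return [r for r in records if (as_of - r["billed"]).days > limit_days]

flagged = aged_receivables(receivables, as_of=date(2016, 9, 30))
print([r["id"] for r in flagged])  # only the older receivable is flagged
```

Receivables flagged by such a check are the ones that, as described below, the Forest Service ended up tracking in a spreadsheet outside its financial system.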
Forest Service officials stated that some receivables are tracked in a spreadsheet outside its financial system because they would not be collected before the system’s aging process for receivables deemed them uncollectible and a bad debt. We found that the Forest Service did not have a documented process and related control activities to reasonably assure that its Budget Office was informed of these older receivables being tracked in a spreadsheet and the related progress of collection activities that local program managers and line officers perform, which could affect the reliability of the reported reimbursable receivable amounts. According to Standards for Internal Control in the Federal Government, management should internally communicate the necessary quality information to achieve the entity’s objectives. Without proper communication, important information, such as amounts that the Forest Service will receive from fire incident cost settlement negotiations, may not be considered in the Forest Service’s strategy for the effective and efficient management of fund transfers for fire suppression activities. Processes and related control activities applicable to RACAs were not adequately described in Forest Service manuals and handbooks in its Directive System. RACAs, which may be for research or other nonemergency purposes, are billed and collected based on previously agreed upon billing and collection terms. In accordance with the Forest Service’s Directive System, policies related to business processes, such as RACAs, are documented in its manuals while procedures for performing specialized activities are documented in its handbooks. We found that the manuals and handbooks in the Directive System did not adequately describe the processes and related control activities over the RACA processes to enable efficient and effective performance of the work by appropriate and responsible personnel.
The manuals and handbooks related to RACAs state that a manager is to review the documentation to ensure that the funding supports the objective of the agreement, the agreement is the correct instrument for funding the project, all relevant terms and conditions have been included in the agreement, the entity’s financial strength and capability are acceptable, and all applicable regulations and OMB circulars have been addressed. However, there was no discussion in the manuals and handbooks about when the manager needs to perform the reviews and how these reviews were to be documented. Further, in response to our inquiry regarding procedures performed to assess whether an entity’s financial strength and capability are acceptable before a RACA is signed, Forest Service officials stated that there is currently no formal process for determining financial capability for RACAs. For reimbursable agreements, the Forest Service’s process documented in its handbook consisted of completing a creditworthiness checklist. However, the handbook did not describe procedures for (1) completing the checklist and (2) documenting responsible personnel’s review and approval of an entity’s acceptable financial capability. Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives and respond to risks. Management’s design of internal control establishes and communicates the who, what, when, where, and why of internal control execution to personnel. Documentation also provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel. Further, the standards also explain that management should clearly document internal control in a manner that allows the documentation to be readily available and properly managed and maintained.
In addition, the manuals and handbooks applicable to the RACAs have not been timely reviewed by management, and had not been updated to reflect current processes. For example, the document that serves as direction for Forest Service personnel on how to enter into RACAs referred to an outdated financial system that was replaced in fiscal year 2013. Further, the manuals and handbooks for the RACA processes had no indication that they had been reviewed within the past 5 years. Forest Service policy requires routine review, every 5 years, of policies and procedures in its Directive System. According to Forest Service officials, a staff member from ORMS sends an e-mail to officials responsible for updating these policies and procedures. However, appropriate control activities have not been designed to reasonably assure that updates were made, reviewed, approved, and issued as needed for continued relevance and effectiveness. Without adequate descriptions of processes and related control activities in its manuals and handbooks over RACAs, the Forest Service is at risk that processes and related control activities may not be properly, consistently, and timely performed. Further, because it lacks a process and related controls for monitoring and reviewing the updates of the guidance and various process documents in the Directive System, the Forest Service is at risk that its policies and procedures may not provide appropriate agency-wide direction in achieving control objectives, particularly when financial systems change and old processes may no longer be applicable. SOPs and desk guides related to RACA processes were not in the Directive System and are not considered official Forest Service policy and procedures. Forest Service field staff responsible for various processes generally developed SOPs and desk guides to document day-to-day procedures for employees in carrying out RACA processes to supplement the manuals and handbooks. 
However, the SOPs and desk guides did not reference the applicable manuals and handbooks they supplemented. Further, the SOPs and desk guides did not provide descriptions of (1) review procedures for authorization, completeness, and validity of RACAs and related receivables; (2) detailed review procedures to be performed and by whom; (3) timing of review procedures; and (4) how to document the completion of the review procedures. Finally, SOPs and desk guides did not have evidence that responsible officials reviewed and approved them to authorize their use. These SOPs and desk guides are only available in the field office where these were developed, and if similar SOPs and desk guides were developed in other field offices, control activities and how they are performed could vary. We also noted that these SOPs and desk guides were not timely updated to reflect processes and systems currently in use. For example, there were many instances where the SOPs and desk guides referred to systems that the Forest Service no longer used. Standards for Internal Control in the Federal Government states that management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity’s objectives. Effective documentation assists in management’s design of internal control by establishing and communicating the who, what, when, where, and why of internal control execution to personnel. Documentation also provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel and to achieve the entity’s objectives. Management assigns responsibility and delegates authority to key roles throughout the entity. As a result of the issues discussed above, the Forest Service is at risk that control activities may not be properly and consistently performed and its related control objectives may not be achieved efficiently and effectively. 
In addition, the Forest Service is at risk that knowledge for performing the control activities may be limited to a few personnel or lost altogether in the event of employee turnover. The Forest Service lacked control activities over the segregation of incompatible duties performed by line officers and program managers for reimbursable agreements and any adjustments affecting the final disposition of related receivables. Field offices manage the majority of Forest Service projects, including authorizing the agreements and monitoring related collection. The Forest Service line officer for fire incident cooperative agreements and program managers for RACA at the RSA, unit, or field levels initiate and develop the terms of the agreements and are also responsible for any subsequent negotiation of the agreements. In the process of negotiating and settling costs, the line officer or program manager has the authority to cancel or change related receivables that they deemed uncollectible. For example, in a fire incident, the line officer at the region or field level is involved in both developing a Cost Share Agreement and after the fire incident has ended, negotiating the Cost Settlement Package with parties involved in the agreement to determine the final settlement amount that the Forest Service will be reimbursed for expenses paid in suppressing the fire incident. Therefore, the line officer is responsible for initiating the Cost Share Agreement, modifying the Cost Settlement Package, and changing or canceling the related receivable, which represent conflicting duties. We also found that the Forest Service did not have any mitigating controls, such as independent approval of any adjustments affecting the final disposition of receivables, to mitigate the risk of these incompatible duties. Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives and respond to risks. 
Segregation of duties contributes to the design, implementation, and operating effectiveness of control activities. To achieve segregation of key functions, management can divide responsibilities among different people to reduce the risk of error, misuse, or fraud. This may include separating the responsibilities for authorizing or approving transactions, processing and recording them, and reviewing the transactions so that no one individual controls all key aspects of a transaction or event. Forest Service officials stated they did not consider segregating the conflicting duties related to reimbursable agreements because these line officers and program managers were most familiar with the terms of the agreement and the activities performed. However, a lack of adequate segregation of conflicting duties or proper monitoring and review of conflicting duties for receivables from reimbursable agreements could result in receivables not being collected, and an increased risk of fraud. The Forest Service’s processes and related control activities over review of unliquidated obligations were not properly designed to reasonably assure optimum utilization of funds and were inconsistent with USDA and Forest Service policy. Further, Forest Service manuals and handbooks related to the review of unliquidated obligations did not clearly describe control activities and were not timely reviewed by management. The Forest Service reported unliquidated obligations of approximately $2.6 billion and $2.5 billion in its financial statements as of September 30, 2015, and 2016, respectively. In fiscal year 2016, the Forest Service deobligated about $319 million of its unliquidated obligations from prior years. The Forest Service’s procedures related to the review of unliquidated obligations were not properly designed and were inconsistent with USDA and Forest Service policy. 
In accordance with USDA Departmental Regulation (Regulation 2230-001) and related Forest Service policy, the Forest Service identifies and reviews unliquidated obligations that have been inactive for at least 12 months to determine whether delivery or performance of goods or services is still expected to occur. Once a determination has been made that an unliquidated obligation can be deobligated, program or procurement personnel are to notify finance personnel, in writing, within 5 days of the determination to process the deobligation. Within 15 days of receipt of the written notification, the unliquidated obligations are to be adjusted in the financial management system. The Forest Service CFO is then to be notified in writing that the deobligation was processed. Within 1 month of the close of each quarter, the Forest Service CFO is to submit to USDA’s Associate CFO for Financial Operations a certification stating that the Forest Service has performed reviews of its unliquidated obligations and taken appropriate actions, such as promptly deobligating an unliquidated obligation that is no longer needed. However, the Forest Service’s quarterly certifications are inconsistent with USDA and Forest Service policy because the months included in each quarterly review do not line up with the months outlined in policy. For example, as shown in table 1, based on policy the certification due on October 31 covers the months July through September. However, in practice, the certification that the Forest Service prepared for October 31 covers May through July. As a result, the review and certification for August and September would be delayed an entire quarter. According to Forest Service officials, it takes considerable time to produce accurate unliquidated obligations reports from USDA’s financial system and then distribute them to field offices.
Therefore, there is not sufficient time for the field offices to review and deobligate amounts not needed from the unliquidated obligations balances to meet USDA’s certification timing and requirements. However, the Forest Service has not developed other processes and control activities that could help meet USDA and Forest Service policy and reasonably assure that unliquidated obligations are reviewed timely and appropriate actions are taken. As a result, there is an increased risk that the Forest Service may not achieve its control objectives of optimum utilization of funds and timely adjustments of obligated balances. The Forest Service’s process and related control activities over its review of unliquidated obligations and resulting certifications were not adequately described in manuals and handbooks in its Directive System. Further, the manuals and handbooks were not timely reviewed and updated to reflect processes and systems currently in use. In accordance with the Forest Service’s Directive System, policies are documented in its manuals while procedures for performing specialized activities are documented in its handbooks. However, we found that the Forest Service’s processes and related control activities for reviewing unliquidated obligations were not adequately described and documented in such manuals and handbooks. Although parts of the applicable section of the handbook referred to procedures, there were no detailed descriptions of the processes, and only references to objectives of the procedures for reviewing unliquidated obligations were listed. 
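The two-month lag between the certification schedule in policy and the one in practice, described above from table 1, can be expressed as simple month arithmetic. The sketch below is illustrative; the October 31 example comes from the report, and the modeling of the lag is ours:

```python
# Sketch of the quarter misalignment described above: each certification
# is due one month after a quarter closes and, per policy, should cover
# that quarter's three months, but the Forest Service's reviews ran two
# months behind.

MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def covered_quarter(due_month, lag_months=0):
    """Three-month period ending (1 + lag_months) months before the due month."""
    end = due_month - 1 - lag_months  # month numbers run 1-12
    return [MONTHS[(end - offset - 1) % 12] for offset in (2, 1, 0)]

per_policy = covered_quarter(10)                 # certification due October 31
in_practice = covered_quarter(10, lag_months=2)  # observed two-month lag
print(per_policy, in_practice)
```

August and September fall outside the period covered in practice, which is why their review and certification slip a full quarter.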
For example, in identifying unliquidated obligations for review, the narrative description of the procedures in the handbook states that the responsible obligating official must review each selected unliquidated obligation to determine whether (1) delivery or performance of goods or services has occurred or is expected to occur and (2) accounting corrections to the obligation data in the accounting system are necessary. The handbook also refers to an unliquidated obligations report listing the unliquidated obligations that must be reviewed. The narrative does not provide any detailed procedures that obligating officials or responsible personnel need to perform, how to perform those procedures, and how those control activities are to be documented. The guidance in the handbook was supplemented by two desk guides. However, the desk guides are outside the Forest Service’s Directive System and, as previously noted, the Directive System is the only place where the Forest Service’s policy and procedures are issued. In addition, these desk guides did not reference the applicable guidance in the Directive System that they were supplementing. Further, the process and related control activities for adjusting unliquidated obligations within 15 days of receipt of written notification, as stated in USDA’s policy, were not described in either the handbooks or the desk guides. Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives and respond to risks, as part of an effective internal control system. Management’s design of internal control establishes and communicates the who, what, when, where, and why of internal control execution to personnel. Documentation also provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel.
Further, the standards explain that management should clearly document internal control, keep the documentation readily available, and properly manage and maintain it. In addition, manuals and handbooks for processes related to review and certification of unliquidated obligations had no evidence that they had been reviewed within the past 5 years for ongoing relevance and effectiveness. According to a Forest Service manual, all service-wide directives, except interim directives, shall be reviewed at least once every 5 years. The Forest Service does not have an effective process in place to monitor the reviews and any updates of the manuals and handbooks in its Directive System. As previously discussed, while ORMS sends an e-mail requesting that the applicable officials review and update the guidance in the manuals and handbooks, there is no follow-up process to help ensure that documents were reviewed and updated as needed. Because the Forest Service’s process and related control activities over its review and certification of unliquidated obligations were not adequately described in its manuals and handbooks, the Forest Service is at risk that its control activities may not reasonably assure that its control objectives are achieved: (1) optimum utilization of funds and (2) efficient and effective deobligation of unliquidated obligations that are no longer needed so that the funds can be made available for other program needs. Adequate processes and related control activities over the Forest Service’s budgetary resources are critical in reasonably assuring that these resources are timely and effectively available for its mission operations, including fire suppression.
However, we identified deficiencies in the Forest Service’s processes and related controls over allotments, unobligated no-year funds from prior years, administrative control of funds, fund transfers, reimbursable agreements, and available funds from deobligation of unliquidated obligations. Deficiencies ranged from a lack of processes to control activities that were not properly designed, resulting in an increased risk that Forest Service funds may not be effectively and efficiently monitored and used. In addition, the Forest Service’s manuals and handbooks, which provide the directives for the areas we reviewed, had not been reviewed by management in accordance with the Forest Service’s 5-year review policy. Further, Forest Service staff prepared SOPs and desk guides that documented control activities, but they were not issued as official policy and had not been reviewed and approved by responsible officials. As a result, the Forest Service is at increased risk that the control activities may not be consistently performed across the agency and that the guidance in the SOPs and desk guides may not comply with agency policy in the Directive System. To improve internal controls over the Forest Service’s budget execution processes, we are making the following 11 recommendations: The Chief of the Forest Service should (1) revise its process and (2) design, document, and implement related control activities to reasonably assure that amounts designated in appropriations acts for specific purposes are properly used for the purposes specifically designated. (Recommendation 1) The Chief of the Forest Service should (1) develop a process and (2) design, document, and implement related control activities to reasonably assure that unobligated no-year funds from prior years are reviewed for continuing need. 
(Recommendation 2) The Chief of the Forest Service should (1) design, document, and implement a comprehensive system for administrative control of funds and (2) submit it for review and approval by USDA before issuance, as required by the USDA Budget Manual. (Recommendation 3) The Chief of the Forest Service should design, document, and implement control activities over the preparation and approval of a fire suppression fund transfers strategy, to specify all appropriate factors to be considered in developing and documenting the strategy, and incorporate these control activities into the Directive System. (Recommendation 4) The Chief of the Forest Service should design, document, and implement processes and related control activities for its fire incident cooperative agreements to reasonably assure efficient and effective operations and timely and reliable reporting of reimbursable receivables related to fire incident cooperative agreements, and incorporate them in the Directive System. (Recommendation 5) The Chief of the Forest Service should update the RACA manuals and handbooks to adequately describe the processes and related control activities applicable to RACAs to reasonably assure that staff will know (1) how and when to perform processes and control activities and (2) how to document their performance. (Recommendation 6) The Chief of the Forest Service should design, document, and implement segregation of duties or mitigating control activities over reimbursable agreements and any adjustments affecting the final disposition of related receivables. (Recommendation 7) The Chief of the Forest Service should modify, document, and implement control activities consistent with USDA and Forest Service policy to reasonably assure that unliquidated obligations are reviewed timely and appropriate actions are taken. 
(Recommendation 8) The Chief of the Forest Service should adequately describe the processes and related control activities for unliquidated obligations review and certification processes in manuals and handbooks within the Directive System. (Recommendation 9) The Chief of the Forest Service should develop, document, and implement a process and related control activities to reasonably assure that manuals and handbooks for allotments, reimbursable agreements, and review of unliquidated obligations are reviewed and updated every 5 years, consistent with Forest Service policy. (Recommendation 10) The Chief of the Forest Service should develop, document, and implement a process and related control activities to reasonably assure that SOPs and desk guides (1) clearly refer to guidance in the Directive System for allotments, reimbursable agreements, and review of unliquidated obligations and (2) are reviewed and approved by responsible officials prior to use. (Recommendation 11) We provided a draft of this report to USDA for comment. In its comments, reproduced in appendix III, the Forest Service stated that it generally agreed with the report and that it has made significant progress to address the report’s findings. Specifically, the Forest Service stated that its financial policies concerning budget execution have been revised to address our concerns with allotments, unliquidated obligations, commitments, and administrative control of funds as prescribed by OMB Circular No. A-11. Further, the Forest Service stated that it has undertaken an in-depth review of its unliquidated obligations and modified the certification process to comply with the USDA requirement. We are sending copies of this report to the appropriate congressional committees and to the Secretary of Agriculture and the Chief of the Forest Service. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff members have any questions about this report, please contact me at (202) 512-9869 or khana@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix IV. Our objectives were to determine the extent to which the Forest Service properly designed control activities over (1) allotments of budgetary resources, its system for administrative control of funds, and any fund transfers between Forest Service appropriations; (2) reimbursables and related collections; and (3) unliquidated obligations. We reviewed the Forest Service’s process documents and control activities, policies and procedures from its manuals and handbooks in its Directive System, and other guidance in the form of standard operating procedures (SOP) and desk guides to obtain an understanding of internal controls at the Forest Service related to our three objectives. We reviewed the control activities that the Forest Service identified to determine whether the activities would achieve the control objectives that the service identified and whether the activities were consistent with Standards for Internal Control in the Federal Government. We also reviewed recent relevant GAO and U.S. Department of Agriculture (USDA) Office of Inspector General reports to obtain background information related to the Forest Service’s budget execution processes. We evaluated the design of the Forest Service’s control activities based on data for fiscal year 2016. To address our first objective, we reviewed Forest Service process documents related to allotments and budget authority to obtain an understanding of control activities over the allotments of budgetary resources, its system for administrative control of funds, and any related fund transfers between Forest Service appropriations.
The process documents included a list of control objectives and related control activities that the Forest Service had used to assess its internal controls. We also reviewed the related guidance in appendix H to Office of Management and Budget Circular No. A-11, Preparation, Submission, and Execution of the Budget for Administrative Control of Funds, to identify requirements that agencies must meet to ascertain whether their controls over funds management are properly designed. We interviewed key officials from the Forest Service’s Strategic Planning, Budget and Accountability Office to gain an understanding of the Forest Service’s processes for allotments of budgetary resources, its system for administrative control of funds, and fund transfers between Forest Service appropriations for wildland fire suppression activities, including how each risk assessment was performed and plans to mitigate the risks. We reviewed and analyzed the processes documented in the manuals and handbooks collectively referred to as directives to determine whether the processes and control activities were designed to achieve the Forest Service’s stated objectives. Specifically, we examined the Forest Service’s control activities to determine whether these sufficiently communicated the procedures to be performed and the documentation to be prepared. We also reviewed the USDA Budget Manual to determine whether Forest Service guidance was consistent with USDA’s requirements for all of its component agencies, specifically requirements related to the administrative control of funds. To address our second objective, we reviewed the Forest Service’s policies, procedures, and other documentation and interviewed agency officials to develop an understanding of its processes related to reimbursable agreements and related collection activities.
We first identified, through interviews with Forest Service officials, the different kinds of reimbursable agreements that the Forest Service enters into with other USDA components, other federal agencies, state and local government agencies, and nongovernment entities to carry out its mission for the benefit of the public. Two distinct types of reimbursable agreements include (1) fire incident cooperative agreements and (2) reimbursable and advanced collection agreements. We reviewed Forest Service process documents and templates related to these two types of reimbursable agreements to obtain an understanding of control activities over reimbursable processes. We reviewed the list of control objectives and related control activities that the Forest Service identified to determine whether the control activities were designed to achieve the applicable control objectives. To address our third objective, we reviewed the Forest Service’s policies, procedures, and other documentation related to unliquidated obligations and interviewed agency officials to develop an understanding of the Forest Service’s review and certification processes for unliquidated obligations balances. We reviewed the Forest Service’s control activities related to its process for reviewing unliquidated obligations to obtain an understanding of that process and to determine whether the control activities were designed to achieve the applicable control objectives. Based on the results of our evaluation of the Forest Service’s design of internal control activities over the budget execution processes, we did not evaluate whether the control activities were implemented or operating as designed.
While our audit objectives focused on certain control activities related to (1) allotments of budgetary resources, the Forest Service’s system for administrative control of funds, and related fund transfers; (2) reimbursables and related collections for reimbursable agreements; and (3) unliquidated obligations, we did not evaluate all control activities and other components of internal control. If we had done so, additional deficiencies might have been identified that could impair the effectiveness of the control activities evaluated as part of this audit. We conducted this performance audit from August 2016 to January 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Standards for Internal Control in the Federal Government provides the overall framework for establishing and maintaining internal control. Internal control represents an agency’s plans, methods, policies, and procedures used to fulfill its mission, strategic plan, goals, and objectives. Internal control is a process effected by an entity’s oversight body, management, and other personnel to provide reasonable assurance that the objectives of the entity will be achieved. When properly designed, implemented, and operating effectively, it provides reasonable assurance that the following objectives are achieved: (1) effectiveness and efficiency of operations, (2) reliability of internal and external reporting, and (3) compliance with applicable laws and regulations. Internal control is not one event, but a series of actions that occur throughout an entity’s operations.
The five components of internal control are as follows:

- Control Environment - The foundation for an internal control system that provides the discipline and structure to help an entity achieve its objectives.
- Risk Assessment - Assesses the risks facing the entity as it seeks to achieve its objectives and provides the basis for developing appropriate risk responses.
- Control Activities - The actions management establishes through policies and procedures to achieve objectives and respond to risks in the internal control system, which includes the entity’s information system.
- Information and Communication - The quality information management and personnel communicate and use to support the internal control system.
- Monitoring - Activities management establishes and operates to assess the quality of performance over time and promptly resolve the findings of audits and other reviews.

An effective internal control system has each of the five components of internal control effectively designed, implemented, and operating with the components operating together in an integrated manner. In this audit, we assessed the design of control activities at the Forest Service related to its (1) allotments of budgetary resources and any related fund transfers between Forest Service appropriations, (2) reimbursables and related collections, and (3) review of unliquidated obligations. In addition to the contact named above, the following individuals made key contributions to this report: Roger Stoltz (Assistant Director), Meafelia P. Gusukuma (Auditor-in-Charge), Tulsi Bhojwani, Cory Mazer, Sabrina Rivera, and Randy Voorhees.
|
The Forest Service, an agency within USDA, performs a variety of tasks as steward of 193 million acres of public forests and grasslands. Its budget execution process for carrying out its mission includes (1) allotments, which are authorizations by an agency to incur obligations within a specified amount, and (2) unliquidated obligations, which represent budgetary resources that have been committed but not yet paid. Deobligation refers to an agency's cancellation or downward adjustment of previously incurred obligations, which may make funds available for reobligation. GAO was asked to review the Forest Service's internal controls over its budget execution processes. This report examines the extent to which the Forest Service properly designed control activities over (1) allotments of budgetary resources, its system for administrative control of funds, and any fund transfers between Forest Service appropriations; (2) reimbursables and related collections; and (3) review and certification of unliquidated obligations. GAO reviewed the Forest Service's policies, procedures, and other documentation and interviewed agency officials. In fiscal years 2015 and 2016, the Forest Service received discretionary no-year appropriations of $5.1 billion and $5.7 billion, respectively. It is critical for the Forest Service to manage its budgetary resources efficiently and effectively. While the Forest Service had processes over certain of its budget execution activities, GAO found the following internal control deficiencies: Budgetary resources. The purpose statute requires that amounts designated in appropriations acts for specific purposes are used as designated. The Forest Service did not have an adequate process and related control activities to reasonably assure that amounts were used as designated.
In fiscal year 2017, GAO issued a legal opinion that the Forest Service had failed to comply with the purpose statute with regard to a $65 million line-item appropriation specifically provided for the purpose of acquiring aircraft for the next-generation airtanker fleet. Further, the Forest Service lacked a process and related control activities to reasonably assure that unobligated no-year appropriation balances from prior years were reviewed for their continuing need; did not have a properly designed system for administrative control of funds, which keeps obligations and expenditures from exceeding limits authorized by law; and had not properly designed control activities for fund transfers to its Wildland Fire Management program. These deficiencies increase the risk that the Forest Service may make budget requests in excess of its needs. Reimbursable agreements. To carry out its mission, the Forest Service enters into reimbursable agreements with agencies within the U.S. Department of Agriculture (USDA), other federal agencies, state and local government agencies, and nongovernment entities. The Forest Service (1) did not have adequately described processes and related control activities in manuals and handbooks for its reimbursable agreement processes and (2) lacked control activities related to segregating incompatible duties performed by line officers and program managers. For example, line officers may be responsible for initiating cost sharing agreements, modifying cost settlement packages, and changing or canceling the related receivable, which represent incompatible duties. As a result, programs and resources may not be protected from waste, fraud, and mismanagement. Unliquidated obligations.
The Forest Service's processes and control activities over the review and certification of unliquidated obligations were not properly designed to reasonably assure the best use of funds and that unliquidated obligations would be efficiently and effectively deobligated and made available for other program needs. Further, the current process, as designed, was inconsistent with USDA and Forest Service policy. In addition, the Forest Service's manuals and handbooks, which provide directives for the areas that GAO reviewed, had not been reviewed by management in accordance with the Forest Service's 5-year review policy. Further, standard operating procedures and desk guides prepared by staff to supplement the manuals and handbooks were not issued as directives and therefore were not considered official policy. This increases the risk that control activities may not be consistently performed across the agency. GAO is making 11 recommendations to improve processes and related internal control activities over the management of the Forest Service's budgetary resources, reimbursable receivables and collections, and its process for reviewing unliquidated obligations. The Forest Service generally agreed with the report and stated that it has made significant progress to address the report findings.
|
In fiscal year 2017, approximately 24,000 CBP officers performed a variety of functions at over 300 air, land, and sea POEs, including inspecting travelers and cargo containers, among other activities. According to CBP, increases in passenger and cargo volumes are outpacing CBP’s staffing resources, resulting in increased passenger wait times and cargo backups, among other things. For example, in fiscal year 2017, CBP identified a need for an additional 2,516 CBP officers across all POEs. Further, as of 2017, CBP estimated that it needed approximately $5 billion to meet infrastructure and technology requirements at about 167 land POEs. To help identify and mitigate resource challenges, CBP developed its Resource Optimization Strategy, an integrated, long-term plan to improve operations at all POEs. The Strategy consists of three components:

- Business transformation: utilize new technology, such as Automated Passport Control kiosks, or new processes, such as trusted traveler programs, to increase CBP operational efficiencies;
- Workload Staffing Model: utilize modeling techniques to help ensure that existing staffing resources are appropriately aligned with threat environments while maximizing cost efficiencies; and
- Alternative funding strategies: utilize public-private partnership agreements, such as RSP and DAP, to supplement regular appropriated resources.

The RSP enables partnerships between CBP and private sector or government entities, allowing CBP to provide new or additional services upon the request of partners. These services can include customs, immigration, or agricultural processing; border security and support at any facility where CBP provides, or will provide, services; and may cover costs such as salaries, benefits, overtime expenses, administration, and transportation costs.
According to authorizing legislation, RSP agreements are subject to certain limitations, including that they may not unduly and permanently impact existing services funded by an appropriations act or fee collection. According to AFP officials, the purpose of the RSP is to provide new or additional CBP services at POEs that the component would otherwise not have been able to provide. From 2013 to 2017, the number of RSP agreements has increased as new authorizing legislation has expanded participant eligibility and made the program permanent. Table 1 below outlines the evolution of RSP through its different legislative authorities. The DAP permits CBP and GSA to accept donations from private and public sector entities, such as private or municipally-owned seaports or land border crossings. Donations may include real property, personal property, money, and non-personal services, such as design and construction services. Donated resources may include improvements to existing facilities, new facilities, equipment and technology, and operations and maintenance costs, among other things. In terms of the types of locations that may accept donations, donations may be used for activities related to land acquisition, design, construction, repair, alteration, operations, and maintenance, including installation or deployment of furniture, fixtures, equipment or technology, at an existing CBP-owned land POE; a new or existing space at a CBP air or sea POE; or a new or existing GSA-owned land POE. CBP and GSA may not accept donations at a leased land POE, nor is CBP able to accept a donation at or for a new land POE if the combined fair market value of the POE and donation exceeds $50 million. Additionally, CBP may not use monetary donations accepted under the DAP to pay salaries of CBP employees performing inspection services. Finally, CBP may not accept donations on foreign soil. 
Table 2 below depicts the evolution of DAP authorizing legislation since the program’s inception in 2014. Figures 1 and 2 depict the location and number of RSP and DAP agreements in place through fiscal year 2017. CBP has developed detailed guidance on the RSP application process, including application timeframes, requirements, and evaluation criteria, and this guidance is on CBP’s website. According to this guidance, in 2017, CBP expanded the RSP application submission period. Whereas in prior years applications were accepted during a single one-month window, prospective partners may now submit applications throughout the year. Under this new process, CBP evaluates submissions three times per year—beginning in March, July, and November. According to CBP, the submission period was expanded in part because new legislative authorities removed previous restrictions on the number of RSP agreements CBP can enter into each year. The overarching RSP application process—from application submission through CBP evaluation and applicant notification—is depicted in figure 3. According to CBP’s procedures for accepting and reviewing applications, potential partners first submit a letter of application that includes a variety of logistical information concerning the stakeholders, services to be requested, location of services to be requested, available facilities, and funding. For example, in submitting a letter of application, an applicant is to estimate how many hours of services it may request per month and identify the applicant’s available budget for the first fiscal year of the partnership, among other things. According to the application guidance, prospective applicants are encouraged to work with local CBP officials at individual POEs to develop letters of application. After submission, CBP officials at the affected POEs, including affected CBP Field Offices, review applications and communicate their findings and recommendations to the AFP office. 
In addition, the CBP Office of Chief Counsel reviews the applications for legal sufficiency and may suggest that CBP request additional information from applicants. Next, CBP convenes an expert panel consisting of two senior CBP officials who are not part of the AFP office to consider POE and legal comments on the applications, among other information provided by AFP officials. The panel deliberates and scores each proposal based on seven criteria, and all proposals that achieve a certain minimum score are accepted. The seven evaluation criteria used to weigh the merits of potential new partnership agreements are listed in table 3. The scoring scale ranges from -5 to 5, and the 7 criteria are weighted based on potential impact. For example, impact to CBP operations is weighted more heavily than other agency support. In September 2017, we observed an RSP application review panel. Among other things, we observed senior CBP officials, who were independent from the AFP office, score 31 RSP applications that impacted 46 CBP Field Office locations. The panel members based their deliberations on set criteria and reached consensus on which applications to approve. Finally, Congress and approved partners are notified of the selections. Where CBP denies a proposal for an agreement, it is to provide the reason for denial unless such reason is law enforcement sensitive or withholding the reason for denial is in the national security interests of the United States. Once CBP approves an application, CBP and its prospective new partners follow documented procedures to formalize the agreements and prepare all involved stakeholders, including new partners and local CBP officials, for Reimbursable Services Agreement implementation. The process to establish new RSP partnerships at specific POEs is depicted in figure 4 below. 
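The panel's weighted scoring described above (raw scores of -5 to 5 on seven criteria, weighted by potential impact, with a minimum total required for acceptance) can be sketched as follows. This is an illustrative sketch only: the weights, the passing threshold, and any criterion names beyond the two examples the report mentions are assumptions, not CBP's actual values.

```python
# Illustrative weighted scoring for RSP applications.
# Per the report, impact to CBP operations is weighted more heavily than
# other agency support; the specific weights and minimum passing score
# below are hypothetical.

CRITERIA_WEIGHTS = {
    "impact_to_cbp_operations": 3.0,  # weighted more heavily
    "other_agency_support": 1.0,
    # ...the remaining five criteria would appear here with their weights
}

MIN_PASSING_SCORE = 5.0  # hypothetical acceptance threshold


def weighted_score(scores):
    """Sum of weight * raw score; raw scores must be on the -5 to 5 scale."""
    for criterion, raw in scores.items():
        if not -5 <= raw <= 5:
            raise ValueError(f"{criterion}: score {raw} outside -5..5 scale")
    return sum(CRITERIA_WEIGHTS[c] * raw for c, raw in scores.items())


def is_accepted(scores):
    """All proposals that achieve the minimum total score are accepted."""
    return weighted_score(scores) >= MIN_PASSING_SCORE


application = {"impact_to_cbp_operations": 3, "other_agency_support": -1}
print(weighted_score(application))  # 3*3.0 + (-1)*1.0 = 8.0
print(is_accepted(application))     # True
```

In this sketch, accepting every proposal above a fixed threshold (rather than ranking proposals against each other) mirrors the report's statement that all proposals achieving a certain minimum score are accepted.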
After CBP notifies the applicant of its selection, officials from the AFP office schedule a site visit to meet with local CBP officials at the POEs and the new partners. According to CBP program requirements, the purpose of the site visit is to discuss workload and services, and to verify that the POE facilities and equipment meet CBP’s required specifications. AFP officials also provide program training to CBP Field Office and POE officials, as well as to new partners on the processes to request and fulfill RSP service requests, among other things. We attended an AFP office visit to CBP’s Baltimore Field Office in October 2017 and observed AFP officials sharing best practices with local CBP officials and new RSP partners. According to CBP’s procedures, before any RSP services can be provided, CBP and the prospective partners must sign a legally binding Reimbursable Services Agreement. Among other things, the Reimbursable Services Agreement establishes that the partner will reimburse CBP for the costs of services provided under the RSP authorizing legislation, including the officer overtime rates, benefits, and a 15 percent administrative fee. Further, the partner agrees to reimburse CBP for these services within 15 days of billing through a Department of the Treasury system. Finally, local CBP Field Office and partner officials negotiate a local MOU that outlines the services, schedules, and other conditions for the POE location(s) covered by the Reimbursable Services Agreement. Similar to the RSP application process, CBP, in conjunction with GSA, utilizes criteria and documented processes to evaluate DAP proposals and implement the program. More specifically, in alignment with the most recent DAP authorizing legislation, CBP and GSA developed the Section 482 Donation Acceptance Authority Proposal Evaluation Procedures & Criteria Framework (Framework) for receiving, evaluating, approving, planning, developing, and formally accepting donations under the program. 
The initial steps of the Framework, which encompass the DAP application process, are depicted in figure 5. In prior years, CBP accepted large-scale proposals, defined by CBP as $5 million or more, during one application and evaluation cycle per year. Beginning in fiscal year 2017, CBP accepts large-scale proposals on a rolling basis, using a streamlined process for expedited review. CBP also accepts small-scale proposals, defined by CBP as less than $5 million, on a rolling basis. According to AFP officials, CBP undertakes considerable effort to provide early education about the program to potential partners who plan to apply for a DAP agreement, including discussing CBP’s operational needs at the POEs. The Framework notes that this outreach helps prospective donors gauge their willingness and ability to work cooperatively with CBP and GSA on potential POE improvements and also helps applicants enhance the viability of their submissions. After a DAP proposal is submitted and checked for completeness, CBP and GSA subject matter experts evaluate the proposal against seven operational and six technical criteria (see table 4 below). The evaluators reach consensus on proposed recommendations and submit their evaluation results to CBP and GSA senior leadership for consideration. Leadership reviews the recommendations and other pertinent information and determines whether or not to select proposals. In accordance with legislative requirements, CBP must notify DAP applicants of the determination to approve or deny a proposal not later than 180 days after receiving the completed proposal. Figure 6 depicts all three phases of the DAP Framework from selecting a proposal to signing a formal Donations Acceptance Agreement. Phase 2 of the Framework begins shortly after CBP notifies new partners of DAP selections. CBP officials then initiate a series of biweekly calls with GSA officials, if applicable, and the partner. 
AFP officials provide partners with documentation in the form of a high-level roadmap which contains a sequence of activities and deliverables CBP expects from the partners, and all stakeholders convene to track progress against planned activities and milestones. CBP, GSA, and the partner also meet to discuss the technical implementation of the donation. AFP and GSA officials conduct a site visit to meet with new partners; obtain a visual understanding of how CBP, GSA, and the partner will implement the donation; and help the partner begin the planning and development phase. CBP, GSA, and the partner negotiate an MOU on roles and responsibilities and terms and conditions of the donation. CBP then provides the partner with its technical standards and other operational requirements, such as space and staffing needs, under a non-disclosure agreement. The partner then begins to plan and develop its conceptual proposal into an executable project in close coordination with CBP and GSA. By the end of Phase 2, CBP, GSA, as applicable, and the partner confirm that all pre-construction development activities are complete, that no outstanding critical risks exist, and that the appropriate agencies are prepared to request future funding, as applicable. Finally, stakeholders move to Phase 3 of the Framework to formalize the terms and conditions under which either CBP, GSA, or both, may accept the proposed donation. After CBP, GSA, and the partner agree to the provisions of the project plan, they sign the legally binding Donations Acceptance Agreement, and stakeholders proceed to project execution. CBP has documented standard operating procedures, roadmaps, and other formally documented policies and procedures to administer the RSP and DAP. In addition, as mentioned above, AFP officials conduct site visits to the POEs with new RSP and DAP agreements, and provide formal training for CBP personnel at Field Offices and POEs.
The general process for administering RSP—from requesting and fulfilling services to billing and collecting payments—is dictated by standard operating procedures, as shown in figure 7. In general, RSP partners submit a formal request for services by completing an electronic form and calendar access via CBP’s Service Request Portal. Once the partner submits the request, the portal sends an electronic copy of the request to the partner’s email and the port’s RSP email inbox. CBP supervisors at the POE access the Service Request Portal to review, edit, approve, deny, or cancel requests. The system tracks and requires CBP officials to comment on any requests that CBP edits, denies, or cancels, and sends an email notification of CBP’s decision to the partner. If CBP approves the request, the Service Request Portal creates a line item with information about the request, such as codes for the location and partner, as well as the hours CBP officers will work. Next, CBP officers enter line item information—information on accounting codes for the location and partner and the actual hours CBP officers worked to fulfill the request—into CBP’s overtime management system. At the end of every shift, CBP supervisors review and approve the amount of overtime and other data entered into the overtime management system. In addition, data from this system are checked for accuracy and certified weekly by both CBP POE and AFP officials. After the overtime and request information is checked, payroll data generated from the overtime management system, including salary and benefits information for each officer who worked RSP overtime, uploads to CBP’s financial accounting system at the end of each pay period, or every 14 days. CBP bills its partners for two full pay periods, and the partner has 15 days to make a full payment through the partner’s account with the Department of the Treasury.
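As a worked illustration of the billing arithmetic above, the sketch below computes a hypothetical partner bill for one two-pay-period (28-day) cycle. The 15 percent administrative fee comes from the Reimbursable Services Agreement terms described earlier in this report; the hourly rates, benefit amounts, hours, and function name are invented for illustration and do not reflect actual CBP figures or systems.

```python
# Hypothetical sketch of an RSP bill for one 28-day (two-pay-period) cycle.
# The 15 percent administrative fee follows the agreement terms described
# in the report; all rates and hours below are illustrative only.

ADMIN_FEE_RATE = 0.15  # administrative fee added to reimbursable costs


def rsp_invoice(line_items):
    """Compute a partner's bill for one billing cycle.

    line_items: list of (overtime_hours, hourly_overtime_rate, benefits_rate)
    tuples, one per officer who worked RSP overtime for this partner.
    """
    base = sum(hours * (rate + benefits)
               for hours, rate, benefits in line_items)
    return round(base * (1 + ADMIN_FEE_RATE), 2)


# Example: two officers' overtime across the cycle (hypothetical figures).
items = [(20, 60.00, 18.00), (12, 55.00, 16.50)]
print(rsp_invoice(items))  # 2,418.00 in overtime costs plus the 15% fee
```

Under these invented figures, the base reimbursable cost is $2,418.00 and the billed total with the administrative fee is $2,780.70.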
After the partner makes the payment through the Department of the Treasury collection system, CBP National Finance Center officials reimburse the CBP annual Operations & Support account initially used to pay its officers for all of the RSP overtime worked during that pay cycle by moving the expenses to the RSP officer payroll fund. Although the general request and billing processes for RSP services are the same across all POEs regardless of location or mode—air, land, or sea—CBP and its partners have flexibility to tailor RSP implementation based on local conditions or needs. Some of this implementation variation is documented in locally negotiated MOUs. For example, CBP’s partner at Miami International Airport in Florida relies on CBP to schedule RSP overtime daily based on CBP expertise. CBP officials at the airport developed their own software templates to plan, track, and manage CBP officers for RSP overtime for a given amount of available overtime funding. At the Pharr land POE in Texas, CBP staff at the POE submit recommended RSP overtime request proposals to the partner based on local conditions, including staffing, and the partner decides whether to submit a formal request to CBP. In these instances, RSP partners and CBP Field Office and POE officials expressed satisfaction with their more customized administration processes. CBP and its partners also noted some challenges to implementing RSP and DAP agreements, but partners generally agreed that the program benefits outweighed the challenges. For example, some DAP partners we met with mentioned that navigating GSA requirements was difficult and sometimes caused delays. GSA officials we met with noted that they are educating partners on GSA building standards and the GSA approvals process for donations, among other things, to help partners manage their timelines and expectations.
GSA officials noted that they are working with CBP and partner officials to manage and learn from these early implementation challenges. CBP, GSA, and DAP partners also acknowledged a lack of clarity about which entity or entities are responsible for the long-term operations and maintenance costs of DAP infrastructure projects, although CBP has taken steps to address this issue. GSA pricing procedures dictate that once a POE receives an improvement, it charges the customer (CBP) for the additional operating costs, such as utilities. CBP officials acknowledged that the long-term sustainability of donations, specifically the costs of operations, maintenance, and technology for infrastructure-based donations, needs to be addressed, and officials reported taking initial steps. For example, once CBP and its partner complete the planning of a project and GSA has calculated the project’s estimated operating expenses, the AFP office begins working with the CBP Office of Facilities & Asset Management to budget for such costs with the goal of reaching a mutually acceptable partnership for donations that will have long-term sustainability. CBP officials noted that the agency cannot commit to funding that is not guaranteed for the future. To mitigate budget uncertainty, CBP now includes language in its MOU and Donations Acceptance Agreement templates stating that upon project completion, the partner will be responsible for all costs and expenses related to the operations and maintenance of the donation until the federal government has the available funding and resources to cover such costs. According to AFP officials, CBP also makes efforts to educate its DAP partners on the budgeting process and the timeframes associated with project completion. CBP officials noted that the majority of projects are in the early stages of development, and it will be years before the projects are complete.
Furthermore, GSA officials stated that the actual operating and maintenance costs associated with DAP projects will not be known until about 1 year after the projects are completed. As noted previously, as CBP’s authorities to enter into new RSP agreements expanded in 2017 to an unlimited number of agreements, both annually and in total, for all types of POEs, the number of applications that CBP has selected has also increased. For example, in fiscal year 2013, CBP received 16 applications from interested stakeholders and selected five of these applications for partnerships, while in fiscal year 2017 cycle 2, CBP received 31 applications from interested stakeholders and tentatively selected 30 for partnerships. From fiscal year 2013 through fiscal year 2017 cycle 2, CBP has tentatively selected over 100 partners for RSP agreements. This figure includes RSP agreements under the authorities provided in Section 481 that allow CBP to enter into agreements with small airports to pay for additional CBP officers above the number of officers assigned at the time the agreement was reached. Figure 8 details this information for each application cycle. As mentioned above, once CBP selects an application for a new reimbursable services partnership, CBP and its partner sign a legally binding Reimbursable Services Agreement. From fiscal years 2013 through 2017 cycle 2, CBP selected 114 applications and entered into 69 Reimbursable Services Agreements with partners. As mentioned previously, local CBP officials also work with the partner to negotiate the terms of an MOU, which outlines how the partnership will work at the POE. As of November 2017, CBP and its partners were implementing 54 MOUs from partnerships that they entered into from fiscal years 2013 through 2017. Of those 54 MOUs, 10 cover agreements at land POEs, 22 cover agreements at sea POEs, and 23 cover agreements at air POEs.
According to AFP officials, during the process of negotiating the MOUs with its partners, CBP and the partner often agree to include a variety of services that the partner can request, so that if a need arises, there is a record that CBP has agreed to provide those services under the MOU. CBP and its partners also negotiate a variety of other terms for the agreements in the MOUs, including the types of requests for services the partner can make, expectations for how often CBP and its partners communicate, and how to amend the MOU, among other terms. Table 5 provides details about the existing 54 MOUs. As noted in the above table, MOUs detail a variety of services that CBP officers can provide at the POEs, and the types of services vary by POE type. For example, most MOUs across land, air, and sea POEs allow partners to request services for freight or cargo processing, while a majority of the MOUs at air POEs allow CBP to provide services for traveler processing and to address unanticipated irregular operations or diversions. In addition, all MOUs allow partners to submit ad hoc requests for services in advance. Most of these MOUs also allow partners to make urgent requests for immediate services. In examining the MOUs, we found that 44 of the 54 MOUs, or 81 percent, indicate that CBP and its partner meet at least quarterly to discuss how the partnership is going. Further, CBP and some of its partners meet more often. For example, CBP and its partners agreed to meet monthly in accordance with 23 MOUs, while CBP and its partners agreed to meet weekly according to 3 MOUs. All partners we interviewed that have utilized their RSP agreements reported that maintaining strong communication between CBP and the partner is important to implementing the RSP agreements at the POEs. Appendix I has additional information about each of the 54 current MOUs.
Table 6 provides the amount that partners reimbursed CBP for overtime services and the total number of overtime hours that CBP officers worked for each fiscal year from 2014 through 2017, and table 7 provides the total number of travelers and vehicles that CBP officers inspected while fulfilling RSP partner requests for services over the same period. Similar to the RSP, the number of DAP partnerships more than doubled in fiscal year 2017. In fiscal years 2015 and 2016, CBP selected seven DAP proposals. In fiscal year 2017, CBP selected nine DAP proposals. Combined, these 16 DAP projects affect 13 POEs. The donations that partners will provide CBP and GSA, as applicable, include a variety of POE improvements such as the installation of new inspection booths and equipment, removal of traffic medians, and new cold inspection facilities, as well as smaller items such as a high-capacity perforating machine, which reduces document processing time and allows CBP officers to focus on more critical operational duties, among other donations. According to CBP, these 16 donation proposals combined are intended to support over $150 million in infrastructure improvements at U.S. POEs. CBP also expects a variety of benefits from these donations, including support for local and regional trade industries and tourism, reductions in border wait times, and increased border security and officer safety, among others. Table 8 provides information on the scope and status of DAP projects that CBP and GSA have selected since CBP established the DAP in fiscal year 2015. As noted in the table above, CBP has fully accepted six donations, including the donation of a high-capacity perforating machine to facilitate the processing of titles and other documents at the Freeport Sea POE in fiscal year 2016, the removal of traffic medians at the Ysleta Land POE, and recurring luggage donations in fiscal year 2017.
Figure 9 is a photo of the high-capacity perforating machine that CBP accepted at the Port of Freeport Sea POE from its partner Red Hook Terminals in 2016. As mentioned above, once CBP selects an application for a new donation partnership, CBP, GSA, if applicable, and partner officials negotiate the terms of an MOU, which outlines intentions of the partnerships for projects that require coordinated planning and development. CBP currently has MOUs for 9 of its 16 DAP projects. The MOUs contain a variety of project-specific information, including the scope of the project, a list of documents that CBP and GSA may request to determine whether the project is ready for execution, and details on donor warranty and continuing financial responsibility after CBP and GSA accept the donation. As mentioned previously, CBP classifies donations under the DAP into two categories: small-scale donations, which are reviewed on an expedited basis, and large-scale donations. For example, the Salvation Army’s recurring donation of six to nine pieces of luggage per year to support Office of Field Operations canine training activities is a small-scale donation. Large-scale donations are donations with an estimated value of $5 million or more and are moderate to significant in size, scope, and complexity. For example, the City of Laredo’s donation for construction of four additional commercial vehicle lanes and booths, roadways and infrastructure, and exit booths and related technologies is a large-scale donation. Given that partner requests for RSP services are predominantly for the purposes of CBP officer overtime, CBP primarily monitors the RSP through audits. Specifically, CBP conducts regular audits using information from its Service Request Portal, its overtime management system, and its internal accounting system to ensure partners appropriately reimburse CBP for the overtime services officers provide under the RSP.
Figure 10 describes how and when CBP uses these tools to conduct audits as part of the RSP request, fulfillment, and billing processes. As noted previously, CBP officers who work RSP overtime enter information from the Service Request Portal, such as the partner code and POE code, into CBP’s overtime management system for the actual hours that the officer worked to complete the request. At the end of every shift, CBP supervisors review and approve the information entered into the overtime management system, which contains the information needed for CBP to bill its RSP partner for the services that it performed, such as the number of hours each CBP officer worked to fulfill RSP requests and the salary and benefits information for those officers. POE supervisors then update the Service Request Portal records so that they reflect what CBP officers actually worked. On Mondays, AFP officials and CBP POE supervisors conduct concurrent audits of weekly overtime management system reports and reconcile these data with the information from the Service Request Portal to ensure that CBP will bill the partner appropriately. At the end of two pay period cycles, or every 28 days, officials at CBP’s National Finance Center review the payroll and benefits information that was uploaded from the overtime management system into CBP’s financial management system to confirm that it matches the appropriate partner code. This ensures that the correct partner is billed for the reimbursable services that CBP provided. Generally, CBP and partner officials we met with did not have any problems with the billing and payment process, and CBP officials noted that any discrepancies in the billing information between the Service Request Portal, the overtime management system, or the financial accounting system, such as the partner code or the number of hours that CBP officers worked, are usually identified and corrected during the weekly audits. 
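The weekly audit described above amounts to reconciling, partner by partner, the hours recorded in the Service Request Portal against the hours logged in the overtime management system. A minimal sketch of that reconciliation logic follows; the partner codes, record layout, and function name are hypothetical, since the report does not describe the actual data formats of either CBP system.

```python
# Hypothetical sketch of the weekly RSP audit reconciliation: compare
# overtime hours per partner in the Service Request Portal against the
# hours logged in the overtime management system, and flag mismatches.
from collections import defaultdict


def reconcile(portal_records, overtime_records):
    """Return partner codes whose logged hours differ from portal hours.

    Each record is a (partner_code, hours) tuple; these shapes are
    illustrative, not CBP's actual schemas.
    """
    portal = defaultdict(float)
    logged = defaultdict(float)
    for partner, hours in portal_records:
        portal[partner] += hours
    for partner, hours in overtime_records:
        logged[partner] += hours
    return sorted(p for p in set(portal) | set(logged)
                  if portal[p] != logged[p])


portal = [("MIA-01", 8.0), ("PHR-02", 4.0), ("MIA-01", 6.0)]
logged = [("MIA-01", 14.0), ("PHR-02", 5.0)]  # PHR-02 off by one hour
print(reconcile(portal, logged))  # ['PHR-02']
```

Any partner code the sketch flags would correspond to a discrepancy of the kind CBP officials said is usually identified and corrected during the weekly audits.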
Further, in October 2017, we received a demonstration of how partners and CBP manage requests for services in the Service Request Portal, how CBP officers and supervisors at the POEs enter and review overtime information, and how CBP runs reports in its financial accounting system during the audit process. In addition, we conducted a test of the data from the overtime management system and the billing information from the financial accounting system for a selection of partners across eight pay periods from fiscal years 2014 through 2017 to determine if CBP billed its partners appropriately. Specifically, for each of the eight selected pay periods, we randomly selected one RSP partner from the universe of partners who used RSP services during the period. We then compared the number of RSP overtime hours logged in CBP’s overtime management system for the selected partners and pay periods with the number of hours on the corresponding partner bills. In all eight cases, the amount of RSP overtime hours logged by CBP officials matched the overtime hours billed to the partners. Our observations, review of applicable documentation, and testing provided reasonable assurance that CBP is being appropriately reimbursed by partners for the services that it provided under the RSP. To evaluate the benefits of RSP services, the AFP office develops metrics reports on the services that CBP performed while fulfilling RSP requests throughout the billing cycle that it provides its partners. These metrics reports include data, such as the number of overtime hours CBP officers worked, the number of travelers CBP processed, the number of containers CBP inspected, and the average wait times CBP recorded during RSP overtime services, among other data. According to AFP officials, this information about the impact of reimbursable services helps partners make informed decisions when assessing their future requests. 
The AFP office works with partners to ensure that the information CBP provides in these reports is useful and will provide additional data upon the partners’ request, as applicable. CBP also conducts annual RSP partner satisfaction surveys to obtain feedback and evaluate overall satisfaction with program implementation. In 2015 and 2016, RSP partners expressed high levels of satisfaction with the level of services CBP provided, the request and fulfillment process, the billing and payment process, the monthly and annual metrics reports that CBP provides its partners, and the program’s ability to meet partner goals. Additionally, partners generally responded that the program allowed them to achieve their goals, which primarily focused on reducing wait times and increasing their own customer satisfaction levels. CBP has guidance that it follows to monitor and evaluate the implementation of DAP projects, and CBP and its partners use tools such as implementation roadmaps and other policy documents, such as standard operating procedures, to administer and monitor the progress of DAP projects at the POEs. For example, CBP develops project roadmaps for all donation projects in close collaboration with its partner, GSA (as applicable), and other entities involved in the project, and shares them with project participants. The roadmap identifies a variety of project milestones and tasks, such as drafting the MOU and completing the technical requirements package, among other things. The roadmap also tracks the number of days that CBP expects will be required to complete each task, which helps CBP to ensure that all stakeholders meet project milestones. CBP also monitors overall DAP implementation by collecting quantitative data on the efficiency of DAP processes to inform program and process improvements.
For example, from 2015 to 2016, CBP consolidated certain elements of its application evaluation process to reduce the number of days it takes to evaluate and approve applications from an average of 144 days to 75 days for large-scale donations. Similarly, from 2015 to 2016, CBP determined that it could gain efficiencies by establishing a separate application evaluation and approval process for small-scale donation applications to better accommodate small-scale donations, and delegated approval and acceptance authority to the Office of Field Operations Executive Assistant Commissioner. This new process expedited the proposal evaluation timeline for small-scale donations from approximately 27 days to 14 days. In addition, GSA implemented a similar delegation authority for approval and acceptance of small-scale donations in fiscal year 2017, which decreased GSA’s application evaluation process from approximately 57 days to 25 days from fiscal year 2016 to 2017. In addition to monitoring the implementation of the overall program and the progress of specific DAP projects, CBP works with its partners to evaluate the benefits of each project. Specifically, during the planning and development phase of a donation, AFP officials coordinate with local CBP officials and DAP partners to develop a plan for identifying, measuring, and reporting on the local benefits to be derived from accepted donations upon project completion. CBP has completed its evaluation of the benefits of one completed small-scale project: CBP estimated that the donated perforating machine at the Freeport Sea POE will save CBP 166 officer hours and approximately $7,450 in salary and maintenance costs per year. For large-scale projects, CBP is working with its partners to develop these evaluation plans, but it is too early for CBP to evaluate the benefits given that most of these projects are in the early planning and development phases.
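Expressed as percentages, the timeline gains cited above work out to reductions of roughly 48 to 56 percent. The short calculation below simply restates the report's day counts; the function name is ours.

```python
# Percent reductions in the evaluation timelines reported above
# (day counts taken directly from the report's figures).
def pct_reduction(before_days, after_days):
    """Whole-number percent reduction from before_days to after_days."""
    return round(100 * (before_days - after_days) / before_days)


print(pct_reduction(144, 75))  # CBP large-scale evaluations: 48
print(pct_reduction(27, 14))   # CBP small-scale evaluations: 48
print(pct_reduction(57, 25))   # GSA small-scale evaluations: 56
```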
CBP shares its findings on benefits with its partners to help them assess their return on investment and so that they can share that information with their own local stakeholders. CBP is taking steps to monitor the existing use and impacts of RSP and DAP and to plan for further expansion of these programs. For example, in addition to the monthly metrics reports that CBP provides its RSP partners, AFP officials told us that they monitor the fulfillment rates of formal partner requests for RSP services. The current fulfillment rate across all of CBP’s RSP agreements is over 99 percent. In addition, as noted previously, AFP officials coordinate with local CBP officials and DAP partners to develop a plan for identifying, measuring, and reporting on the local benefits to be derived from accepted donations upon project completion. Furthermore, with regard to planning for future program expansion, CBP has taken steps to plan for the additional oversight activities that it expects at the headquarters level as the RSP expands. For example, CBP is hiring new staff members and contractors for the AFP office, as well as reimbursing the Office of Finance for one staff position and embedding one staff member in the Budget Office to help complete the increased number of financial transactions and audits. In addition, the AFP office is considering the future impact of DAP projects on staffing and other resources at the affected POEs, and is working with Field Office, POE, and partner officials to identify and budget for anticipated operational needs, with assistance from CBP’s Workload Staffing Model and Planning, Program Analysis and Evaluation offices. These efforts to monitor and evaluate the impacts of the programs and plan for further expansion are positive steps that should help position CBP to manage anticipated increases in the number of agreements going forward. 
Furthermore, prior to Sections 481 and 482 authorities, in accordance with the report of the Senate Appropriations Committee accompanying the Department of Homeland Security Appropriations Act, 2013, CBP submitted semiannual reports to Congress on its Section 560 partnerships for fiscal years 2014 through 2016. CBP included information in these reports on the benefits of RSP services. For example, CBP compared baseline traveler and vehicle volume and wait times at participating POEs from previous years to the traveler and vehicle volume and wait times during time periods when CBP provided reimbursable services. Subsequently, in accordance with the Consolidated Appropriations Act, 2014, CBP developed an evaluation plan with objectives, criteria, evaluation methodologies, and data collection plans to be used to evaluate RSP and DAP performance on an annual and aggregated basis. However, the provision requiring that an evaluation plan be established for the Section 559 pilot program was repealed by the Cross-Border Trade Enhancement Act of 2016. This Act requires that CBP report to Congress annually to identify the activities undertaken and the agreements entered into under the RSP and DAP but does not require that CBP develop or report on an evaluation plan for these programs. As of November 2017, CBP had not decided whether it will use a performance evaluation plan going forward. However, in December 2017, AFP officials acknowledged that such a plan—one that examines RSP and DAP performance at the programmatic level—could benefit program management and augment evaluation activities already conducted by the AFP office. We reviewed draft versions of CBP’s fiscal year 2017 reports to Congress on new Section 481 fee agreements and new Section 482 donation agreements.
Both reports detailed how CBP responded to changes in legislative authorities for the RSP and DAP and listed its fiscal year 2017 selections for public-private partnership agreements, but did not include an evaluation plan or identify measures for tracking program performance going forward. Further, while the AFP office tracks the fulfillment rates of requests for RSP services and is working with its partners and other CBP components to monitor and plan for program expansion, CBP could benefit from a more robust assessment of possible impacts of staffing challenges on program expansion. As mentioned above, as of fiscal year 2017, CBP has an overall staffing shortage of 2,516 officers, according to CBP’s Workload Staffing Model analysis, and CBP officer hiring remains an agency-wide challenge. We identified some staffing challenges that could affect CBP’s management and implementation of its RSP and DAP programs, whose number of agreements roughly doubled from fiscal year 2016 to 2017. As of November 2017, public-private partnership agreements were in place at approximately one-third of all U.S. POEs. With the removal of the limit on the number of air agreements that CBP can enter each year, some POEs have or are anticipated by CBP to have more than one RSP agreement in place. According to AFP officials, if there are multiple RSP partnerships at the same POE, CBP will try to accommodate all partner requests. Generally, the AFP office expects the POEs to handle requests on a first-come, first-served basis. As the number of RSP partners increases across POEs, requests for services are likely to also increase, according to CBP officials.
While it is too soon for CBP to assess the extent to which fulfillment rates may change over time, if at all, with the expansion of the program, officials noted that RSP agreements do not guarantee that CBP will be able to provide all services that partners request, and that RSP services are above and beyond what CBP would normally provide. According to CBP, the recent increase in the mandated cap on officer overtime pay from $35,000 to $45,000 has allowed CBP officers to work more RSP overtime. Nevertheless, it is unclear how CBP will evaluate and address any increase in RSP agreements that may outpace the staff available to fulfill service requests. As noted previously, new authorities for the RSP also allow CBP to enter into agreements that allow partners to reimburse CBP for up to five additional officers, above the number assigned at the time the agreement was reached, at small airports. In fiscal year 2017, CBP selected four partners for this type of reimbursable services agreement. For its agreement with the Rhode Island Airport Corporation, CBP relocated three officers from the Boston-Logan International Airport, one of the busiest U.S. international airports, to T.F. Green State International Airport, which inspects fewer than 100,000 international travelers annually. AFP officials noted that, in accordance with legislation, the Port Director overseeing the port of origin for the CBP officer(s) added to small airports must determine that the movement of the officer(s) from one POE to another in fulfilling RSP agreements for additional CBP officers does not permanently affect operations at any other POE, including the POE that the officer(s) depart. However, CBP has not planned for how individual POEs or the agency more broadly would make these determinations or how CBP would evaluate any longer-term impacts on overall CBP officer staffing resulting from the movement of officers among POEs.
Office of Management and Budget guidance for making program expansion decisions indicates that agencies should evaluate cost-effectiveness in a manner that presents facts and supporting details among competing alternatives, including relative costs, benefits, and performance trade-offs. Further, in September 2016 we developed a list of leading practices for evaluation based on the American Evaluation Association’s An Evaluation Roadmap for a More Effective Government, including development of an evaluation plan or agenda, a description of methods and data sources in evaluation reports, procedures for assuring evaluation quality, and tracking the use of evaluation findings in management or reforms, among others. CBP is taking steps to monitor its RSP and DAP and plan for program expansion. However, given its staffing challenges, CBP could benefit from developing and implementing an evaluation plan for assessing overall RSP and DAP performance. Such a plan could further integrate evaluation activities into program management and could better position CBP to assess relative costs, benefits, and performance trade-offs as CBP expands its RSP and DAP, and consider the extent to which any future program changes may be needed. The amount of legitimate travel and trade entering through the nation’s POEs continues to increase each year. To date, CBP and its partners have utilized public-private partnerships to help meet an increased demand for CBP services and infrastructure improvements at POEs, and agency officials and program partners have generally concurred that the RSP and DAP have been effective in helping to bridge CBP resource gaps and improve partner operations.
However, given CBP’s officer hiring and retention challenges and its finite resources for addressing infrastructure needs at POEs, CBP’s ability to monitor and evaluate the implementation of its public-private partnership programs is essential to ensuring that CBP leaders have the information that they need to make program decisions and identify and respond to challenges as the programs expand. As CBP continues to expand its public-private partnership programs, evaluating the RSP and DAP at the program level could better position CBP leaders to assess the relative costs, benefits, and performance trade-offs of continuing to expand the programs. It could also better position CBP to identify and respond to expansion challenges, such as CBP officer staffing. The CBP Commissioner should develop and implement an evaluation plan to be used to assess the overall performance of the RSP and DAP, which could include, among other things, measurable objectives, performance criteria, evaluation methodologies, and data collection plans to inform future program decisions. (Recommendation 1) We provided a draft of this report to DHS and GSA for their review and comment. GSA indicated that it did not have any comments on the draft report via e-mail. DHS provided written comments, which are noted below and reproduced in full in appendix II, and technical comments, which we incorporated as appropriate. DHS concurred with our recommendation and described the actions it plans to take in response. Specifically, DHS stated that CBP will develop and implement a plan to assess the overall performance of the RSP and DAP to inform future program decisions. 
The plan will evaluate current partnerships, including but not limited to: service denial rate; trend analysis of the frequency and type of requests; annual stakeholder survey results; the impact of multiple stakeholders in one port location on levels of service provided; the impact of unanticipated operations and maintenance costs associated with property donations; and the staffing implications of donations of upgraded port infrastructure. If implemented effectively, these planned actions should address the intent of our recommendation.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the Administrator of the General Services Administration, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

Since 2013, U.S. Customs and Border Protection (CBP) has entered into public-private partnerships with private sector or government entities under its Reimbursable Services Program (RSP) to cover CBP’s cost of providing certain services at U.S. ports of entry (POE) upon the request of partners. As of the end of fiscal year 2017, CBP had approved 114 applications for reimbursable fee agreements. These services can include customs, immigration, agricultural processing, and border security and support at any facility where CBP provides, or will provide, services, and may cover costs such as salaries, benefits, overtime expenses, administration, and transportation costs. 
Once CBP selects an application for a new reimbursable services partnership, CBP and its partner sign a legally binding Reimbursable Services Agreement, which is a standard legal form that CBP uses for all new RSP agreements. Local CBP officials then work with the partner to negotiate the terms of a Memorandum of Understanding (MOU), which outlines how the partnership will work at the POE. In the following table, we provide select details from the 54 existing MOUs between CBP and its partners in the RSP.

In addition to the partners listed in the table above, CBP has also signed Reimbursable Services Agreements with the following partners but, as of the end of fiscal year 2017, had not completed negotiating the terms of an MOU.

Fiscal year 2016 partners:
1. City of Charlotte Aviation Department
2. Dole Fresh Fruit Company (Port of Wilmington, Delaware; Port Everglades; and Port of Freeport)
3. GT USA LLC
4. Port of Galveston
5. Presidio Port Authority Local Government Corporation
6. Red Hook Container Terminal, LLC
7. United Parcel Service Co.

In addition to the contact named above, Kirk Kiester (Assistant Director), Dominick Dale, Michele Fejfar, Eric Hauswirth, Stephanie Heiken, Susan Hsu, Elizabeth Leibinger, David Lutter, and Sasan J. “Jon” Najmi made significant contributions to this report.
International trade and travel to the United States is increasing. On a typical day in fiscal year 2016, CBP officers inspected nearly 1.1 million passengers and pedestrians and over 74,000 truck, rail, and sea containers at 328 U.S. land, sea, and air ports of entry, according to CBP. To help meet the increased demand for these types of CBP services, CBP has entered into public-private partnerships under the RSP and DAP since 2013. The RSP allows partners to reimburse CBP for providing services that exceed CBP's normal operations, such as paying overtime for CBP personnel who provide services at ports of entry outside normal business hours. The DAP enables partners to donate property or provide funding for port of entry infrastructure improvements.

The Cross-Border Trade Enhancement Act of 2016 included a provision for GAO to review the RSP and DAP. This report examines (1) how CBP approves and administers RSP and DAP agreements, (2) the status of RSP and DAP agreements, including the purposes for which CBP has used funds and donations, and (3) the extent to which CBP monitors and evaluates program implementation. GAO reviewed partnership agreements and data on program usage. GAO also interviewed CBP and partner officials at 11 ports of entry selected based on a mix of port of entry and agreement types.

Within the Department of Homeland Security, U.S. Customs and Border Protection (CBP) uses criteria and follows documented procedures to evaluate and approve public-private partnership applications and administer the Reimbursable Services Program (RSP) and Donations Acceptance Program (DAP). For example, RSP applications undergo an initial review by CBP officials at the affected ports of entry before they are scored by an expert panel of CBP officials at headquarters. The panel evaluates RSP applications against seven criteria, such as impact on CBP operations. 
Similarly, DAP proposals are evaluated by CBP officials against seven operational and six technical criteria, such as real estate implications. Further, if the proposal involves real estate controlled by the General Services Administration (GSA), CBP and GSA officials collaborate on DAP selection decisions and project implementation. To administer the RSP and DAP, CBP has documented policies and procedures, such as standard operating procedures and implementation frameworks. For example, CBP uses a standard procedure to guide the process for RSP partners to request services and to provide reimbursement. For DAP projects, CBP, GSA (if applicable), and partners follow an implementation framework that includes a project planning and design phase.

The number of public-private partnerships is increasing, and partnerships provide a variety of additional services and infrastructure improvements at ports of entry. From fiscal years 2013 through 2017, CBP selected over 100 partners for RSP agreements that could impact 112 ports of entry and other CBP-staffed locations, and the total number of RSP partnerships doubled from fiscal year 2016 to 2017. According to CBP, since partners began requesting reimbursable services in 2014, CBP has provided its partners nearly 370,000 officer overtime hours of services, which led to over $45 million in reimbursed funds. As a result, CBP inspected an additional 8 million travelers and over 1 million personal and commercial vehicles at ports of entry. Similar to the RSP, the number of DAP partnerships more than doubled from fiscal year 2016 to 2017, totaling 16 projects that impact 13 ports of entry as of November 2017. The donations include improvements, such as the installation of new inspection booths and equipment and removal of traffic medians, and are intended to support over $150 million in infrastructure improvements. 
CBP uses various processes to monitor and evaluate its partnerships, but could benefit from establishing an evaluation plan to assess overall program performance. For example, CBP conducts regular audits of RSP records to help ensure that CBP bills and collects funds from its partners accurately, and uses guidance, such as the DAP Implementation Roadmap, to identify and monitor project milestones and tasks. However, as of November 2017, CBP had not developed an evaluation plan—which could include, among other things, measurable objectives, performance criteria, and data collection plans—to assess the overall performance of the RSP and DAP, consistent with Office of Management and Budget guidance and leading practices. Given CBP's staffing challenges and anticipated growth of the RSP and DAP, an evaluation plan could better position CBP to further integrate evaluation activities into program management.

GAO recommends that CBP develop an evaluation plan to assess the overall performance of the RSP and DAP. DHS concurred with the recommendation.
VA’s mission is to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation by ensuring that they receive medical care, benefits, social support, and lasting memorials. In carrying out this mission, the department operates one of the largest health care delivery systems in America, providing health care to millions of veterans and their families at more than 1,500 facilities.

The department’s three major components—the Veterans Health Administration (VHA), the Veterans Benefits Administration (VBA), and the National Cemetery Administration (NCA)—are primarily responsible for carrying out its mission. More specifically, VHA provides health care services, including primary care and specialized care, and it performs research and development to address veterans’ needs. VBA provides a variety of benefits to veterans and their families, including disability compensation, educational opportunities, assistance with home ownership, and life insurance. Further, NCA provides burial and memorial benefits to veterans and their families. Collectively, the three components rely on approximately 340,000 employees to provide services and benefits. These employees work in VA’s Washington, D.C., headquarters, as well as at 170 medical centers, approximately 750 community-based outpatient clinics, 300 veterans centers, 56 regional offices, and more than 130 cemeteries situated throughout the nation.

The use of IT is critically important to VA’s efforts to provide benefits and services to veterans. As such, the department operates and maintains an IT infrastructure that is intended to provide the backbone necessary to meet the day-to-day operational needs of its medical centers, veteran-facing systems, benefits delivery systems, memorial services, and all other systems supporting the department’s mission. 
The infrastructure is to provide for data storage, transmission, and communications requirements necessary to ensure the delivery of reliable, available, and responsive support to all VA staff offices and administration customers, as well as veterans. According to department data as of October 2016, there were 576 active or in-development systems in VA’s inventory of IT systems. These systems are intended to be used for the determination of benefits, benefits claims processing, and access to health records, among other services. VHA is the parent organization for 319 of these systems. Of the 319 systems, 244 were considered mission-related and provide capabilities related to veterans’ health care delivery. For example, VHA’s systems provide capabilities to establish and maintain electronic health records that health care providers and other clinical staff use to view patient information in inpatient, outpatient, and long-term care settings. VistA serves an essential role in helping the department to fulfill its health care delivery mission. Specifically, VistA is an integrated medical information system for all veterans’ health information. It was developed in-house by the department’s clinicians and IT personnel and has been in operation since the early 1980s. As such, the system has long been vital to helping ensure the quality of health care received by the nation’s veterans and their dependents. VistA is comprised of more than 200 applications that assist in the delivery of health care and perform other important functions within the department, including financial management, enrollment, and registration. Some of these applications have been in operation for over 30 years and, according to VA, have become increasingly difficult and costly to maintain. 
As such, the department has expended extensive resources to modernize the system and increase its ability to allow for the viewing or exchange of patient information with the Department of Defense (DOD) and private sector health providers. In addition, as we recently reported, VHA has unaddressed needs that indicate its current health IT systems, including VistA, do not fully support the organization’s business functions. Specifically, about 39 percent of all requests related to health IT needs have remained unaddressed after more than 5 years. Electronic health records are particularly crucial for optimizing the health care provided to veterans, many of whom may have health records residing at multiple medical facilities within and outside the United States. Taking steps toward interoperability—that is, collecting, storing, retrieving, and transferring veterans’ health records electronically—is significant to improving the quality and efficiency of care. One of the goals of interoperability is to ensure that patients’ electronic health information is available from provider to provider, regardless of where it originated or resides. Since 2007, VA has been operating a centralized organization, the Office of Information and Technology (OI&T), in which most key functions intended for effective management of IT are performed. This office is led by the Assistant Secretary for Information and Technology—VA’s Chief Information Officer (CIO). The office is responsible for providing strategy and technical direction, guidance, and policy related to how IT resources are to be acquired and managed for the department, and for working closely with its business partners—such as VHA—to identify and prioritize business needs and requirements for IT systems. Among other things, OI&T has responsibility for managing the majority of VA’s IT-related functions, including the maintenance and modernization of VistA. 
As of 2016, OI&T was comprised of more than 15,000 staff, with more than half of these positions filled by contractors. For fiscal year 2018, the department’s budget request included nearly $4.1 billion for IT. The department requested approximately $359 million for new systems development or modernization efforts, approximately $2.5 billion for maintaining existing systems, and approximately $1.2 billion for payroll and administration. For example, in its fiscal year 2018 budget submission, the department requested appropriations to support five IT portfolios, including the development and operations and maintenance for programs and projects related to the:

Medical portfolio, which provides technology solutions to deliver modern, high-quality medical care capabilities to veterans ($944.2 million);

Benefit portfolio, which addresses the technology needs managed by the Veterans Benefits Administration ($296.9 million);

Memorial Affairs portfolio, which provides support for the modernization of applications and services for National Cemeteries at 133 locations nationwide ($24.5 million);

Corporate portfolio, which consists of back office operations supporting the major business lines and department management ($270.6 million); and

Enterprise IT, which provides the underlying infrastructure to enable the other portfolios to operate and includes such things as cybersecurity, data centers, cloud services, telephony, enterprise software, and data connectivity ($1.289 billion).

In 2015, we designated VA Health Care as a high-risk area for the federal government and, currently, we continue to be concerned about the department’s ability to ensure that its resources are being used cost-effectively and efficiently to improve veterans’ timely access to health care. 
In part, we identified limitations in the capacity of VA’s existing systems, including the outdated, inefficient nature of certain systems and a lack of system interoperability—that is, the ability to exchange and use electronic health information—as contributors to the department’s IT challenges related to health care. These challenges present risks to the timeliness, quality, and safety of the health care. While we recently reported that the department has begun to demonstrate leadership commitment to addressing IT challenges, more work remains. Also, in February 2015, we added Improving the Management of IT Acquisitions and Operations to our list of high-risk areas. Specifically, federal IT investments too frequently fail or incur cost overruns and schedule slippages while contributing little to mission-related outcomes. We have previously testified that the federal government has spent billions of dollars on failed IT investments, including, for example, VA’s Scheduling Replacement Project, which was terminated in September 2009 after spending an estimated $127 million over 9 years; and its Financial and Logistics Integrated Technology Enterprise program, which was intended to be delivered by 2014 at a total estimated cost of $609 million, but was terminated in October 2011 due to challenges in managing the program. This high-risk area highlighted several critical IT initiatives in need of additional congressional oversight, including (1) reviews of troubled projects; (2) efforts to increase the use of incremental development; (3) efforts to provide transparency relative to the cost, schedule, and risk levels for major IT investments; (4) reviews of agencies’ operational investments; (5) data center consolidation; and (6) efforts to streamline agencies’ portfolios of investments. We noted that agencies’ implementation of these initiatives was inconsistent and that more work remained to demonstrate progress in achieving IT acquisition and operation outcomes. 
We also recently issued an update to our high-risk report and noted that, while progress has been made in addressing the high-risk area of IT acquisitions and operations, significant work remains to be completed. For example, we noted, among other things, that additional work was needed to establish action plans for federal agencies to modernize or replace obsolete systems. Specifically, we pointed out that many federal systems use outdated software languages and hardware, which has increased spending on operations and maintenance of technology investments. VA was among a handful of departments with one or more archaic legacy systems. As discussed in our recent report on legacy systems used by federal agencies, we identified 2 of the department’s systems as being over 50 years old, and among the 10 oldest investments and/or systems that were reported by 12 selected agencies.

Personnel and Accounting Integrated Data (PAID)—This 53-year-old system automates time and attendance for employees, timekeepers, payroll, and supervisors. It is written in Common Business Oriented Language (COBOL), a programming language developed in the late 1950s and early 1960s, and runs on IBM mainframes.

Benefits Delivery Network (BDN)—This 51-year-old system tracks claims filed by veterans for benefits, eligibility, and dates of death. It is a suite of COBOL mainframe applications.

Ongoing use of antiquated systems such as PAID and BDN contributes to agencies spending a large, and increasing, proportion of their IT budgets on operations and maintenance of systems that have outlived their effectiveness and are consuming resources that outweigh their benefits. Accordingly, we have recommended that VA identify and plan to modernize or replace its legacy systems. The department concurred with our recommendation and stated that it plans to retire and replace PAID with the Human Resources Information System Shared Service Center in 2017. 
The department also stated that it has general plans to roll the capabilities of BDN into another system and to retire BDN in 2018.

Congress enacted federal IT acquisition reform legislation (commonly referred to as the Federal Information Technology Acquisition Reform Act, or FITARA) in December 2014. This legislation was intended to improve agencies’ acquisitions of IT and enable Congress to monitor agencies’ progress and hold them accountable for reducing duplication and achieving cost savings. The law applies to VA and other covered agencies. It includes specific requirements related to seven areas, including data center consolidation and optimization, agency CIO authority, and government-wide software purchasing.

Federal data center consolidation initiative (FDCCI). Agencies are required to provide the Office of Management and Budget (OMB) with a data center inventory, a strategy for consolidating and optimizing their data centers (to include planned cost savings), and quarterly updates on progress made. The law also requires OMB to develop a goal for how much is to be saved through this initiative, and provide annual reports on cost savings achieved.

Agency CIO authority enhancements. CIOs at covered agencies are required to (1) approve the IT budget requests of their respective agencies, (2) certify that IT investments are adequately implementing incremental development, as defined in capital planning guidance issued by OMB, (3) review and approve contracts for IT, and (4) approve the appointment of other agency employees with the title of CIO.

Government-wide software purchasing program. The General Services Administration is to develop a strategic sourcing initiative to enhance government-wide acquisition and management of software. 
In doing so, the law requires that, to the maximum extent practicable, the General Services Administration should allow for the purchase of a software license agreement that is available for use by all executive branch agencies as a single user. Expanding upon FITARA, the Making Electronic Government Accountable by Yielding Tangible Efficiencies Act of 2016, or the “MEGABYTE Act,” further enhanced CIOs’ management of software licenses by requiring agency CIOs to establish an agency software licensing policy and a comprehensive software license inventory to track and maintain licenses, among other requirements.

In June 2015, OMB released guidance describing how agencies are to implement FITARA. This guidance is intended to, among other things: assist agencies in aligning their IT resources with statutory requirements; establish government-wide IT management controls that will meet the law’s requirements, while providing agencies with flexibility to adapt to unique agency processes and requirements; clarify the CIO’s role and strengthen the relationship between agency CIOs and bureau CIOs; and strengthen CIO accountability for IT costs, schedules, performance, and security.

In our draft report that is currently with VA for comment, we discuss the history of VA’s efforts to modernize its health information system, VistA. Four efforts—HealtheVet, the integrated Electronic Health Record (iEHR), VistA Evolution, and the Electronic Health Record Modernization (EHRM)—reflect the varying approaches that the department has considered to achieve a modernized health care system over the course of nearly two decades. The modernization efforts are described as follows.

In 2001, VA undertook its first VistA modernization project, the HealtheVet initiative, with the goals of standardizing the department’s health care system and eliminating the approximately 130 different systems used by its field locations at that time. 
HealtheVet was scheduled to be fully implemented by 2018 at a total estimated development and deployment cost of about $11 billion. As part of the effort, the department had planned to develop or enhance specific areas of system functionality through six projects, which were to be completed between 2006 and 2012. Specifically, these projects were to provide capabilities to support VA’s Health Data Repository and Patient Financial Services System, as well as the Laboratory, Pharmacy, Imaging, and Scheduling functions. In June 2008, we reported that the department had made progress on the HealtheVet initiative, but noted issues with project planning and governance. In June 2009, the Secretary of Veterans Affairs announced that VA would stop financing failed projects and improve the management of its IT development projects. Subsequently, in August 2010, the department reported that it had terminated the HealtheVet initiative.

In February 2011, VA began its second modernization initiative, the iEHR program, in conjunction with DOD. The program was intended to replace the two separate electronic health record systems used by the two departments with a single, shared system. Moreover, because both departments would be using the same system, this approach was expected to largely sidestep the challenges that had been encountered in trying to achieve interoperability between their two separate systems. Initial plans called for the development of a single, joint system consisting of 54 clinical capabilities to be delivered in six increments between 2014 and 2017. Among the agreed-upon capabilities to be delivered were those supporting laboratory, anatomic pathology, pharmacy, and immunizations. According to VA and DOD, the single iEHR system had an estimated life cycle cost of $29 billion through the end of fiscal year 2029. 
However, in February 2013, the Secretaries of VA and DOD announced that they would not continue with their joint development of a single electronic health record system. This decision resulted from an assessment of the iEHR program that the secretaries had requested in December 2012 because of their concerns about the program facing challenges in meeting deadlines, costing too much, and taking too long to deliver capabilities. In 2013, the departments abandoned their plan to develop the integrated system and stated that they would again pursue separate modernization efforts.

In December 2013, VA initiated its VistA Evolution program as a joint effort of VHA and OI&T that was to be completed by the end of fiscal year 2018. The program was to be comprised of a collection of projects and efforts focused on improving the efficiency and quality of veterans’ health care by modernizing the department’s health information systems, increasing the department’s data exchange and interoperability with DOD and private sector health care partners, and reducing the time it takes to deploy new health information management capabilities. Further, the program was intended to result in lower costs for system upgrades, maintenance, and sustainment. According to the department’s March 2017 cost estimate, VistA Evolution was to have a life cycle cost of about $4 billion through fiscal year 2028. Since initiating VistA Evolution in December 2013, VA has completed a number of key activities that were called for in its plans. For example, the department delivered capabilities, such as the ability for health providers to have an integrated, real-time view of electronic health record data through the Joint Legacy Viewer, as well as the ability for health care providers to view sensitive DOD notes and highlight abnormal test results for patients. 
VA also initiated work to standardize VistA across the 130 VA facilities and released enhancements to its legacy scheduling, pharmacy, and immunization systems. In addition, the department released the enterprise Health Management Platform, which is a web-based user interface that assembles patient clinical data from all VistA instances and DOD. Although VistA Evolution is ongoing, VA is currently revising its plan for the program as a result of the department’s recent announcement that it is pursuing a fourth VistA modernization program (discussed below). For example, the department determined that it would no longer pursue additional development or deployment of the enterprise Health Management Platform—a major VistA Evolution component—because the new modernization program is envisioned to provide similar capabilities.

In June 2017, the VA Secretary announced a significant shift in the department’s approach to modernizing VistA. Specifically, rather than continue to use VistA, the Secretary stated that the department plans to acquire the same electronic health record system that DOD is implementing. In this regard, DOD has contracted with the Cerner Corporation to provide a new integrated electronic health record system. According to the Secretary, VA has chosen to acquire this same product because it would allow all of VA’s and DOD’s patient data to reside in one system, thus enabling seamless care between the department and DOD without the manual and electronic exchange and reconciliation of data between two separate systems. The VA Secretary added that this fourth modernization initiative is intended to minimize customization and system differences that currently exist within the department’s medical facilities, and to ensure the consistency of processes and practices within VA and DOD. 
When fully operational, the system is intended to be the single source for patients to access their medical history and for clinicians to use that history in real time at any VA or DOD medical facility, which may result in improved health care outcomes. According to VA’s Chief Technology Officer, Cerner is expected to provide integration, configuration, testing, deployment, hosting, organizational change management, training, sustainment, and licenses necessary to deploy the system in a manner that meets the department’s needs. To expedite the acquisition, in June 2017, the Secretary signed a “Determination and Findings,” which noted a public interest exception to the requirement for full and open competition, and authorized VA to issue a solicitation directly to the Cerner Corporation. According to the Secretary, VA expects to award a contract to Cerner in December 2017, and deployment of the new system is anticipated to begin 18 months after the contract has been signed. VA’s Executive Director for the Electronic Health Records Modernization System stated that the department intends to incrementally deploy the new system to its medical facilities. Each facility is expected to continue using VistA until the new system has been deployed at that location. All VA medical facilities are anticipated to have the new system implemented within 7 to 8 years after the first deployment. Figure 1 shows a timeline of the four efforts that VA has pursued to modernize VistA since 2001.

For iEHR and VistA Evolution, the two modernization initiatives for which VA could provide contract data, the department obligated approximately $1.1 billion for contracts with 138 different contractors during fiscal years 2011 through 2016. Specifically, the department obligated approximately $224 million and $880 million, respectively, for contracts associated with these efforts. Of the 138 contractors, 34 of them performed work supporting both iEHR and VistA Evolution. 
The remaining 104 contractors worked exclusively on either iEHR or VistA Evolution. Funding for the 34 contractors that worked on both iEHR and VistA Evolution totaled about $793 million of the $1.1 billion obligated for contracts on the two initiatives. Obligations for contracts awarded to the top 15 of these 34 contractors (which we designated as key contractors) accounted for about $741 million (about 67 percent) of the total obligated for contracts on the two initiatives. The remaining 123 contractors were obligated about $364 million for their contracts. The 15 key contractors were obligated about $564 million and $177 million for VistA Evolution and iEHR contracts, respectively. Table 1 identifies the key contractors and their obligated dollar totals for the two efforts. Additionally, we determined that, of the $741 million obligated to the key contractors, $411 million (about 55 percent) was obligated for contracts supporting the development of new system capabilities, $256 million (about 35 percent) was obligated for contracts supporting project management activities, and $74 million (about 10 percent) was obligated for contracts supporting operations and maintenance for iEHR and VistA Evolution. VA obligated funds to all 15 of the key contractors for system development, 13 of the key contractors for project management, and 12 of the key contractors for operations and maintenance. Figure 2 shows the amounts obligated for each of these areas. Further, based on the key contractors’ documentation, for the iEHR program, VA obligated $102 million for development, $65 million for project management, and $10 million for operations and maintenance. For the VistA Evolution Program, VA obligated $309 million for development, $191 million for project management, and $64 million for operations and maintenance. Figure 3 shows the amounts obligated for contracts on the VistA Evolution and iEHR programs for development, project management, and operations and maintenance. 
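The activity-level breakdown above is internally consistent, which can be confirmed with simple arithmetic: the three reported amounts for the 15 key contractors sum to the $741 million total, and the reported percentages follow from division. The sketch below is illustrative only; the dollar figures (in millions) are taken directly from the report.

```python
# Reported obligations (millions of dollars) for the 15 key contractors
# across iEHR and VistA Evolution combined, by contract activity.
key_contractor_obligations = {
    "development": 411,
    "project management": 256,
    "operations and maintenance": 74,
}

# The three activity amounts sum to the reported $741 million total.
total = sum(key_contractor_obligations.values())
print(f"total: ${total} million")  # prints "total: $741 million"

# Each activity's share of the total matches the report's percentages.
for activity, amount in key_contractor_obligations.items():
    share = round(100 * amount / total)
    print(f"{activity}: ${amount} million (about {share} percent)")
# development: about 55 percent
# project management: about 35 percent
# operations and maintenance: about 10 percent
```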
In addition, table 2 shows the amounts that each of the 15 key contractors was obligated for the three types of contract activities performed on iEHR and VistA Evolution. Industry best practices and IT project management principles stress the importance of sound planning for system modernization projects. These plans should identify key aspects of a project, such as the scope, responsible organizations, costs, schedules, and risks. Additionally, planning should begin early in the project’s lifecycle and be updated as the project progresses. Since the VA Secretary announced that the department would acquire the same electronic health record system as DOD, VA has begun planning for the transition from VistA Evolution to EHRM. However, the department is still early in its efforts, pending the contract award. In this regard, the department has begun developing plans that are intended to guide the new EHRM program. For example, the department has developed a preliminary description of the organizations that are to be responsible for governing the EHRM program. Further, the VA Secretary announced, in congressional testimony in November 2017, a key reporting responsibility for the program—stating that the Executive Director for the Electronic Health Records Modernization System will report directly to the department’s Deputy Secretary. In addition, the department has developed a preliminary timeline for deploying its new electronic health record system to VA’s medical facilities, and a 90-day schedule that depicts key program activities. The department also has begun documenting the EHRM program risks. Beyond the aforementioned planning activities undertaken thus far, the Executive Director stated that the department intends to complete a full suite of planning and acquisition management documents to guide the program, including a life cycle cost estimate and an integrated master schedule to establish key milestones over the life of the project. 
To this end, the Executive Director told us that VA has awarded two program management contracts, to MITRE Corporation and Booz Allen Hamilton, to support the development of these plans. According to the Executive Director, VA also has begun reviewing the VistA Evolution Roadmap, which is the key plan that the department has used to guide VistA Evolution since 2014. This review is expected to result in an updated plan that is to prioritize any remaining VistA enhancements needed to support the transition from VistA Evolution to the new system. According to the Executive Director, the department intends to complete the development of its plans for EHRM within 90 days after award of the Cerner contract, which is anticipated to occur in December 2017. Further, beyond the development of plans, VA has begun to staff an organizational structure for the modernization initiative, with the Under Secretary of Health and the Assistant Secretary for Information and Technology (VA’s Chief Information Officer) designated as executive sponsors. It has also appointed a Chief Technology Officer from OI&T, and a Chief Medical Officer from VHA, both of whom are to report to the Executive Director. VA’s efforts to develop plans for EHRM and to staff an organization to manage the program encompass key aspects of project planning that are important to ensuring effective management of the department’s latest modernization initiative. However, the department remains early in its modernization planning efforts, many of which are dependent on the system acquisition contract award, which has not yet occurred. The department’s continued dedication to completing and effectively executing the planning activities that it has identified will be essential to helping minimize program risks and guide this latest electronic health record modernization initiative to a successful outcome—one which VA, for almost two decades, has yet to achieve. 
Beyond managing its system modernization efforts, such as VistA, VA has to ensure the effective implementation of the IT acquisition requirements called for in FITARA. Pursuant to FITARA, in August 2016, the Federal CIO issued a memorandum that announced the Data Center Optimization Initiative (DCOI). According to OMB, this new initiative supersedes and builds on the results of FDCCI, and is also intended to improve the performance of federal data centers in areas such as facility utilization and power usage. Among other things, DCOI requires 24 federal departments and agencies, including VA, to develop plans and report on strategies (referred to as DCOI strategic plans) to consolidate inefficient infrastructure, optimize existing facilities, improve security posture, and achieve cost savings. Further, the memorandum establishes a set of five data center optimization metrics and performance targets intended to measure agencies’ progress in the areas of (1) server utilization and automated monitoring, (2) energy metering, (3) power usage effectiveness, (4) facility utilization, and (5) virtualization. The guidance also indicates that OMB is to maintain a public dashboard that will display consolidation-related cost savings and optimization performance information for the agencies. However, in a series of reports that we issued from July 2011 through August 2017, we noted that, while data center consolidation could potentially save the federal government billions of dollars, weaknesses existed in several areas, including agencies’ data center consolidation plans, data center optimization, and OMB’s tracking and reporting on related cost savings. Further, we previously reported that VA’s progress toward closing data centers, and realizing the associated cost savings, lagged behind that of other covered agencies. More recently, VA reported a total inventory of 415 data centers, of which 39 had been closed as of August 2017. 
While the department anticipates another 10 data centers will be closed by the end of fiscal year 2018, these closures fall short of the targets set by OMB. Specifically, even if VA meets all of its planned targets for closure, it will only close about 9 percent of its tiered data centers and about 18.7 percent of its non-tiered data centers by the end of fiscal year 2018, which is short of the respective 25 and 60 percent targets set by OMB. Further, while VA has reported $23.61 million in data center-related cost savings and avoidances for 2012 through August 2017, the department does not expect to realize further savings from the additional 10 data center closures in the next year. In addition, in August 2017 we reported that agencies needed to address challenges in optimizing their data centers in order to achieve cost savings. Specifically, we noted that, according to the 24 agencies’ data center consolidation initiative strategic plans as of April 2017, most agencies were not planning to meet OMB’s optimization targets by the end of fiscal year 2018. As of February 2017, VA reported meeting one of the five data center optimization metrics related to power usage effectiveness. Also, the department’s data center optimization strategic plan indicates that the department plans to meet three of the five metrics by the end of fiscal year 2018. Further, while OMB directed agencies to replace manual collection and reporting of metrics with automated tools no later than fiscal year 2018, VA had only implemented automated tools at 6 percent of its data centers. OMB has emphasized the need to deliver investments in smaller parts, or increments, in order to reduce risk, deliver capabilities more quickly, and facilitate the adoption of emerging technologies. In 2010, it called for agencies’ major investments to deliver functionality every 12 months and, since 2012, every 6 months. 
Subsequently, FITARA codified a requirement that agency CIOs certify that IT investments are adequately implementing incremental development, as defined in the capital planning guidance issued by OMB. Later OMB guidance on the law’s implementation—issued in June 2015—directed agency CIOs to define processes and policies for their agencies that ensure they certify that IT resources are adequately implementing incremental development. Between May 2014 and November 2017, we reported on agencies’ efforts to utilize incremental development practices for selected major investments. In November 2017, we noted that agencies reported that 62 percent of major IT software development investments were certified by the agency CIO as using adequate incremental development in fiscal year 2017, as required by FITARA. VA’s CIO certified the use of adequate incremental development for all 10 of its major IT investments. However, VA had not yet updated the department’s policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, as we recommended. The department stated that it plans to address our recommendation to establish a policy and that the policy is targeted for completion in 2017. Federal agencies engage in thousands of software licensing agreements annually. Effective management of software licenses can help organizations avoid purchasing too many licenses, which results in unused software, and avoid purchasing too few licenses, which can result in noncompliance with license terms and the imposition of additional fees. Federal agencies are responsible for managing their IT investment portfolios, including the risks from their major information system initiatives, in order to maximize the value of these investments to the agency. 
OMB developed a policy that requires agencies to conduct an annual, agency-wide IT portfolio review to, among other things, reduce commodity IT spending. Such areas of spending could include software licenses. We previously identified seven elements that a comprehensive software licensing policy should address: (1) identify clear roles, responsibilities, and central oversight authority within the department for managing enterprise software license agreements and commercial software licenses; (2) establish a comprehensive inventory (at least 80 percent of software license spending and/or enterprise licenses in the department) by identifying and collecting information about software license agreements using automated discovery and inventory tools; (3) regularly track and maintain software licenses to assist the agency in implementing decisions throughout the software license management life cycle; (4) analyze software usage and other data to make cost-effective decisions; (5) provide training relevant to software license management; (6) establish goals and objectives of the software license management program; and (7) consider the software license management life-cycle phases (i.e., requisition, reception, deployment and maintenance, retirement, and disposal phases) to implement effective decision making and incorporate existing standards, processes, and metrics. 
We previously made recommendations to VA to (1) develop an agency-wide comprehensive policy for the management of software licenses that includes guidance for using analysis to better inform investment decision making, (2) employ a centralized software license management approach that is coordinated and integrated with key personnel, (3) establish a comprehensive inventory of software licenses using automated tools, (4) track and maintain a comprehensive inventory of software licenses using automated tools and metrics, (5) analyze agency-wide software license data to identify opportunities to reduce costs and better inform investment decision making, and (6) provide software license management training to appropriate personnel. Consistent with our recommendation, in July 2015, VA issued a comprehensive software licensing policy that addressed weaknesses we previously identified. The department also issued a directive that documents VA’s software license management policy and responsibilities for central management of agency-wide software licenses, consistent with our recommendations. By implementing our recommendations, VA should be better positioned to consistently and cost-effectively manage software throughout the agency. In August 2017, the department also provided documentation showing that it had generated a comprehensive inventory of software licenses using automated tools for the majority of agency software license spending or enterprise-wide licenses. This inventory can serve to reduce redundant applications and help identify other cost saving opportunities. Further, the department implemented a solution to analyze agency-wide software license data, including usage and costs. This solution should allow VA to identify cost saving opportunities and inform future investment decisions. In addition, the department has provided information indicating that appropriate personnel receive software license management training. 
In conclusion, VA has made extensive use of numerous contractors and has obligated more than $1 billion for contracts that supported two of four VistA modernization programs that the department has initiated. VA has recently begun the fourth modernization program in which it plans to replace VistA with the same commercially available electronic health record system that is used by DOD. However, the department’s latest modernization effort is in the early stages of planning and is dependent on the system acquisition contract award in December 2017. VA’s completion and effective execution of plans will be essential to guiding this latest electronic health record modernization initiative to a successful outcome. Beyond VistA, the department continues to make progress on key FITARA-related initiatives. Although the department has made progress in the area of software licensing, additional actions in the areas of data center consolidation and optimization, as well as incremental system development can better position VA to effectively manage its IT. We plan to continue to monitor the department’s progress on these important activities. Chairman Hurd, Ranking Member Kelly, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have. If you or your staffs have any questions about this testimony, please contact David A. Powner at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony statement. GAO staff who made key contributions to this statement are Mark Bird (Assistant Director), Jacqueline Mai (Analyst in Charge), Justin Booth, Chris Businsky, Rebecca Eyler, Paris Hawkins, Valerie Hopkins, Brandon S. Pettis, Jennifer Stavros-Turner, Eric Trout, Christy Tyson, Eric Winter, and Charles Youman. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The use of IT is crucial to helping VA effectively serve the nation's veterans and, each year, the department spends billions of dollars on its information systems and assets. However, VA has faced challenges spanning a number of critical initiatives related to modernizing its major systems. To improve all major federal agencies' acquisitions and hold them accountable for reducing duplication and achieving cost savings, in December 2014 Congress enacted federal IT acquisition reform legislation (commonly referred to as the Federal Information Technology Acquisition Reform Act, or FITARA). GAO was asked to summarize its previous and ongoing work regarding VA's history of efforts to modernize VistA, including past use of contractors, and the department's recent effort to acquire a commercial electronic health record system to replace VistA. GAO was also asked to provide an update on VA's progress in key FITARA-related areas, including (1) data center consolidation and optimization, (2) incremental system development practices, and (3) software license management. VA generally agreed with the information upon which this statement is based. For nearly two decades, the Department of Veterans Affairs (VA) has undertaken multiple efforts to modernize its health information system—the Veterans Health Information Systems and Technology Architecture (known as VistA). Two of VA's most recent efforts included the Integrated Electronic Health Record (iEHR) program, a joint program with the Department of Defense (DOD) intended to replace separate systems used by VA and DOD with a single system; and the VistA Evolution program, which was to modernize VistA with additional capabilities and a better interface for all users. VA has relied extensively on assistance from contractors for these efforts. VA obligated over $1.1 billion for contracts with 138 contractors during fiscal years 2011 through 2016 for iEHR and VistA Evolution. 
Contract data showed that the 15 key contractors that worked on both programs accounted for $741 million of the funding obligated for system development, project management, and operations and maintenance to support the two programs (see figure). VA recently announced that it intends to change its VistA modernization approach and acquire the same electronic health record system that DOD is implementing. With respect to key FITARA-related areas, the department has reported progress on consolidating and optimizing its data centers, although this progress has fallen short of targets set by the Office of Management and Budget. VA has also reported $23.61 million in data center-related cost savings, yet does not expect to realize further savings from additional closures. In addition, VA's Chief Information Officer (CIO) certified the use of adequate incremental development for 10 of the department's major IT investments; however, VA has not yet updated its policy and process for CIO certification as GAO recommended. Finally, VA has issued a software licensing policy and has generated an inventory of its software licenses to inform future investment decisions. GAO has made multiple recommendations to VA aimed at improving the department's IT management. VA has generally agreed with the recommendations and begun taking responsive actions.
The U.S. government supports various types of democracy assistance activities, which USAID and State categorize under the DRG portfolio. USAID and State use their Updated Foreign Assistance Standardized Program Structure and Definitions to categorize and define DRG program areas. As updated in April 2016, this document defines the aims of DRG as “to advance freedom and dignity by assisting governments and citizens to establish, consolidate, and protect democratic institutions, processes, and values, including participatory and accountable governance, rule of law, authentic political competition, civil society, human rights, and the free flow of information.” Prior to the 2016 update, DRG program areas were (1) rule of law and human rights, (2) good governance, (3) political competition and consensus-building, and (4) civil society. Each program area features different program elements, as shown in table 1. Multiple bureaus and offices in USAID and State, as well as NED, provide funding for democracy assistance programs, as shown in table 2. USAID provides democracy assistance through contracts, grants, and cooperative agreements, while NED provides democracy assistance only through grants. INL was the only State bureau that reported providing a significant amount of democracy assistance through contracts in addition to grants and cooperative agreements, while other bureaus primarily use grants and cooperative agreements. Combined allocations for democracy assistance administered by USAID and State ranged from about $2 billion to about $3 billion per year, and NED funding ranged from about $100 million to about $170 million annually during fiscal years 2012 through 2016, as shown in figure 1. USAID’s and State’s combined allocations for democracy assistance varied by account in fiscal years 2012 through 2016. The Economic Support Fund was the largest account, ranging from 50 to 63 percent of the total in fiscal years 2012 through 2016, as shown in figure 2. 
The following laws, regulations, and policies are related to agencies’ decisions to use a contract, grant, or cooperative agreement to implement democracy assistance programming: According to the Federal Grant and Cooperative Agreement Act of 1977, one of the purposes of the act is to promote a better understanding of government expenditures and help eliminate unnecessary administrative requirements on recipients of government awards by characterizing the relationship between executive agencies and contractors, states, local governments, and other recipients in acquiring property and services and in providing government assistance. The act provides agencies with criteria to be considered when making award-type decisions, including the intended nature of the relationship between the agency and recipient, as well as whether the principal purpose of the award is to benefit the federal government or to transfer a thing of value to a recipient to carry out a public purpose of support or stimulation authorized by law. The Competition in Contracting Act of 1984 requires agencies to obtain full and open competition for contracts through the use of competitive procedures in procurements unless otherwise authorized by law. The Federal Acquisition Regulation (FAR) establishes uniform policies and procedures for all executive agencies for acquisition through contracts. For example, the FAR includes policies and procedures to promote the requirement to obtain full and open competition for contracts. It defines the circumstances under which it is permissible for agencies to limit competition for contracts, including when there is an unusual or compelling urgency or when doing so is necessary for reasons of public interest or national security. 
The Office of Management and Budget’s “Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards,” as codified in the Code of Federal Regulations (C.F.R.), establishes government-wide requirements for federal agencies administering grants and cooperative agreements with nonfederal entities. This regulation includes policies and procedures for award elements, including monitoring and reporting as well as cost sharing. USAID established agencywide guidance for making award-type decisions in its Automated Directives System (ADS), Chapter 304 (ADS 304). In addition to referencing the criteria established in the Federal Grant and Cooperative Agreement Act, this ADS guidance lists indications for when a specific award type should be used and also identifies factors that should not be primary considerations in making award-type decisions. According to USAID guidance, agreement officers and contract officers are individuals representing the U.S. government who are responsible for documenting the final determination of award-type decisions. USAID further outlines policies and procedures for administration of grants and cooperative agreements in ADS Chapter 303 (ADS 303) and for contracts in ADS Chapter 302 (ADS 302). State also established agencywide guidance for making award-type decisions in its Federal Assistance Directive. It references relevant legislation and instructs contracting and agreement officers to consult with State’s Office of the Procurement Executive if disagreements regarding award-type decisions arise. The Consolidated Appropriations Act, 2016, required USAID and State to each establish guidelines for clarifying program design and objectives for democracy programs, including the use of contracts versus grants and cooperative agreements, for programs carried out with funds appropriated by the act. For more information on USAID and State guidance related to award-type decisions, see appendix III. 
USAID officials are to make award-type decisions based on applicable laws, regulations, and policies, some of which are described above. Figure 3 provides an overview of the considerations in making this determination based on USAID guidance. ADS 304 provides the following definitions and guidance to USAID personnel as to what award type to select: A contract is a mutually binding legal instrument in which the principal purpose is the acquisition, by purchase, lease, or barter, of property or services for the direct benefit or use of the federal government, or in the case of a host country contract, the host government agency that is a principal, signatory party to the instrument. According to ADS 304, USAID personnel shall use a contract when the principal purpose of this legal relationship is the acquisition of property or services for the direct benefit of a federal government agency. A grant is a legal instrument used when the principal purpose is the transfer of money, property, services or anything of value to a recipient in order to accomplish a public purpose of support or stimulation authorized by Federal statute and when substantial involvement by USAID is not anticipated. USAID personnel are instructed to use a grant when the principal purpose of the relationship with an awardee is to transfer money, property, services, or anything of value to that awardee to carry out a public purpose of support or stimulation authorized by federal statute; and the agency does not anticipate substantial involvement between itself and the awardee during the performance of the activity. A cooperative agreement is a legal instrument used when the principal purpose is the transfer of money, property, services, or anything of value to a recipient in order to accomplish a public purpose of support or stimulation authorized by federal statute and when substantial involvement by USAID is anticipated. 
According to ADS 304, USAID personnel must use a cooperative agreement when the principal purpose of the relationship with an awardee is to transfer a thing of value to that awardee in order to carry out a public purpose; and the agency anticipates substantial involvement between itself and the awardee during the performance of the activity. The active engagement of USAID officials with awardees in certain programmatic elements of a project constitutes substantial involvement. Such activities include approval of the awardee’s implementation plan and of specified key personnel. In addition to awarding contracts, grants and cooperative agreements to private organizations (such as a for-profit business or a nongovernmental organization), USAID makes awards to federal agencies and public international organizations. Under USAID guidance, a public international organization is an international organization composed principally of countries or other related organizations designated by USAID. USAID maintains a list of public international organizations and international agricultural research centers that are considered public international organizations. These organizations include the United Nations and related organizations, such as the Food and Agriculture Organization, and international financial institutions, such as the World Bank Group. USAID officials noted that public international organizations normally receive grants. Under USAID guidance, awards to public international organizations and interagency agreements do not require the same award-type decisions as those required by ADS 304 for contracts, grants, and cooperative agreements. Awards made to public international organizations are governed by USAID guidance separate from the guidance that applies to awards to other types of organizations, and interagency agreements are governed by guidance separate from contracts, grants, and cooperative agreements. 
According to USAID’s guidance, the award-type decision should occur early in the preaward stage within the life cycle of an award. Award-type decisions impact other elements of awards because different regulations and guidance are applicable based on award type. For example, competition and oversight requirements differ for contracts compared with grants and cooperative agreements. Similarly, award-type decisions affect whether the recipient of an award is eligible to make a profit. The award life cycle contains preaward and award implementation stages, as shown in figure 4. During fiscal years 2012 through 2016, USAID obligated $5.5 billion and NED obligated $610.2 million in democracy assistance funding, and the total such funding that State obligated cannot be reliably determined. In providing democracy assistance, USAID obligated more through grants and cooperative agreements combined than contracts, but its obligations through different award types varied by fiscal year and DRG program area. NED provided democracy assistance only through grants, and its obligations remained generally constant by fiscal year but varied by DRG program area. State bureaus that were able to provide reliable data provided democracy assistance primarily through grants and cooperative agreements. INL was the only State bureau that reported providing a significant amount of democracy assistance through contracts in addition to grants and cooperative agreements, but INL was one of the three State bureaus unable to provide reliable data. USAID obligated $5.5 billion in democracy assistance funding during fiscal years 2012 through 2016, about 31 percent through contracts; about 33 percent through cooperative agreements; about 4 percent through grants, excluding grants to public international organizations (PIO); and about 32 percent through grants to PIOs. 
Of the $5.5 billion in democracy assistance, USAID obligated over $1.7 billion of all its democracy assistance through grants to PIOs. The three countries for which USAID obligated the most funds for democracy assistance projects were Afghanistan, Iraq, and South Sudan. Democracy assistance projects in Afghanistan received over $2 billion or 37 percent of USAID’s total democracy assistance obligations during fiscal years 2012 through 2016. Moreover, two grants to the World Bank for the Afghanistan Reconstruction Trust Fund totaling $1.5 billion during fiscal years 2012 through 2016 accounted for 85 percent of the total democracy assistance funds USAID obligated through grants to PIOs during that period. For both total obligations and number of awards, USAID awarded more of its democracy assistance through grants and cooperative agreements combined than through contracts, as shown in figure 5. Contracts and cooperative agreements each accounted for roughly one-third of total obligations, while grants, excluding those to PIOs, accounted for 4 percent of total obligations during fiscal years 2012 through 2016. Excluding grants to PIOs, the number of grants and obligations for grants on average were significantly less than cooperative agreements, as shown in table 3. USAID’s democracy assistance obligations through contracts, grants, and cooperative agreements have varied during fiscal years 2012 to 2016, with significant increases in USAID’s obligations through grants to the World Bank in fiscal years 2012 and 2015, as shown in figure 6. These increases were driven by two large grants to the World Bank for the Afghanistan Reconstruction Trust Fund. Specifically, the World Bank received more than $820 million in fiscal year 2012 and more than $360 million in fiscal year 2015. During fiscal years 2012 to 2016, the World Bank accounted for 93 percent of grants to PIOs. 
For more details on USAID obligations through different award types by fiscal year and DRG program area, see appendix IV. USAID’s democracy assistance obligations for good governance varied the most compared with the other three DRG program areas, rule of law and human rights, political competition and consensus-building, and civil society, as shown in figure 7. This variation was again due to two large grants to the World Bank for the Afghanistan Reconstruction Trust Fund, which were categorized under good governance. As shown in figure 8, USAID provided more democracy assistance in the area of good governance, over $2 billion more than the next largest program area. Excluding USAID obligations through grants to PIOs, USAID obligated more democracy assistance through contracts than through grants and cooperative agreements combined for the two program areas of good governance and rule of law and human rights. For the two other program areas—civil society and political competition and consensus-building—USAID obligated less through contracts. NED obligated $610.2 million in democracy assistance funding through a single award type—grants—during fiscal years 2012 through 2016. The three countries for which NED obligated the most funds for democracy assistance are in Eurasia and Asia. NED’s obligations remained generally constant in the past few fiscal years, as shown in figure 9. NED’s approved funding varied across the four DRG program areas. According to NED officials, NED does not maintain obligations data for awards by DRG program areas, as defined by USAID and State. Therefore, NED categorized its grants into DRG program areas for projects when funds were approved rather than when funds were obligated to provide a general sense of funding by DRG program area. NED approved the most funding in the area of good governance followed closely by political competition and consensus-building and then by civil society. 
NED’s approved funding in all program areas, except for civil society, increased over the years, as shown in figure 10. State reported obligating approximately $3 billion in democracy assistance funding during fiscal years 2012 through 2016 primarily through grants and cooperative agreements, but also through contracts. Seven of 10 State bureaus that were able to provide reliable data obligated $1.7 billion primarily through grants and cooperative agreements; the remaining three bureaus that were unable to provide reliable data reported obligating about $1.4 billion through all three award types. The seven State bureaus that were able to provide reliable data collectively obligated $1.7 billion for fiscal years 2012 through 2016 primarily through grants and cooperative agreements, as shown in table 4. Of these State bureaus, the Bureau of Democracy, Human Rights, and Labor obligated the most with about $1.2 billion in democracy assistance through 547 grants and 56 cooperative agreements for that period. The three regions for which the Bureau of Democracy, Human Rights, and Labor obligated the most funds for democracy assistance were the Near East, East Asia and Pacific, and the Western Hemisphere. Three State bureaus—INL, EUR, and SCA—were unable to provide reliable data on democracy assistance obligations for fiscal years 2012 through 2016. Collectively, these three bureaus reported obligating about $1.4 billion in democracy assistance during this period: INL, about $1.1 billion; EUR, about $150 million; and SCA, about $160 million. INL was the only State bureau that reported providing a significant amount of democracy assistance through contracts in addition to grants and cooperative agreements. We deemed data from these three bureaus unreliable because the data were incomplete, nonstandard, or inaccurate. 
For example, INL did not provide democracy assistance data for Colombia, Egypt, and Kenya until we identified these countries as potentially missing based on our comparison of INL data with USAID data. According to data INL subsequently provided, the democracy assistance projects in these three countries received about $49 million of the approximately $1.1 billion in democracy assistance obligated by INL in fiscal years 2012 through 2016. According to INL officials, the initial data INL provided did not include records of awards for these countries because awards were miscoded when the data were entered; for example, some awards were coded under the broad category of law enforcement rather than under specific DRG program areas. According to INL officials, this erroneous law enforcement code was used for all of Colombia’s programs and for some programs in other countries such as Egypt and Kenya. According to INL officials, for two additional countries, Tunisia and Morocco, the regional post did not always use codes associated with DRG program areas or personnel entered incorrect codes. INL also provided incomplete data for multiple data fields, including the dates for periods of performance. INL was missing the start date for 74 percent of records and the end date for almost 75 percent of records for fiscal years 2012 through 2015. A September 2014 State Office of Inspector General report on INL found, among other things, that because State’s budgeting and accounting systems are not designed to manage foreign assistance, INL staff were required to engage in time-consuming, inefficient, and parallel processes to track the bureau’s finances. According to INL officials, INL has made improvements in its data since the Inspector General report was published. However, INL was missing the start date for 69 percent of records and the end date for almost 71 percent of records for fiscal year 2016. 
According to INL, data fields such as these were incomplete because contract officers and agreement officers were not required to enter values for these data fields into State systems until October 2016. EUR and SCA also initially provided incomplete, inaccurate, or nonstandard data for multiple data fields. According to State officials, this was due to manual data entry and transfer errors. For example, dates were in various formats and recipient names were sometimes listed in the field intended for recipient categories, which did not allow for the systematic analysis of records. While EUR generally provided more complete and standard data for fiscal year 2016 compared with fiscal years 2012 through 2015, EUR still provided nonstandard codes to identify award subtype for 5.3 percent of its fiscal year 2016 records. For example, “ESF,” an abbreviation for the Economic Support Fund, was listed as the award subtype for multiple contracts. Furthermore, we identified 145 duplicate EUR records. EUR officials in Washington, D.C., noted that some of the duplicates resulted from their efforts to validate the data they had collected from staff in each country. Subsequently, these officials—who manually merged, analyzed, and validated data to correct it—identified additional duplicates beyond the 145 that we had identified. According to EUR officials, the bureau’s obligation data for democracy assistance awards were maintained in separate databases at posts, rather than in a centralized database. In validating the data they had collected, EUR officials identified duplicate records amounting to at least 5 percent of the records during fiscal years 2012 through 2016. On the basis of our independent analysis of the same dataset, we were able to confirm that about 4 percent of the EUR records were duplicate records. Data on democracy assistance awards are maintained in the countries where the awards are made. 
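Data-quality checks like the ones described above—missing-field rates for period-of-performance dates and detection of duplicate award records—can be sketched in a few lines. The field names and records below are hypothetical, not State's actual data.

```python
from collections import Counter

# Hypothetical award records with possibly missing period-of-performance dates.
records = [
    {"id": "A1", "country": "X", "start": "2012-10-01", "end": "2013-09-30"},
    {"id": "A2", "country": "X", "start": None, "end": None},
    {"id": "A3", "country": "Y", "start": "2014-01-15", "end": None},
    {"id": "A2", "country": "X", "start": None, "end": None},  # exact duplicate of A2
]

def missing_rate(recs, field):
    """Share of records, in percent, with no value in the given field."""
    missing = sum(1 for r in recs if not r[field])
    return round(100 * missing / len(recs), 1)

def duplicate_ids(recs):
    """Award IDs that appear more than once in the dataset."""
    counts = Counter(r["id"] for r in recs)
    return sorted(aid for aid, n in counts.items() if n > 1)
```

For this toy dataset, `missing_rate(records, "end")` reports 75.0 percent of records lacking an end date, and `duplicate_ids(records)` flags award `A2`—the same style of completeness and duplication findings reported for INL and EUR.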
To obtain award level data, EUR headquarters personnel had to ask staff in each country to manually compile and report award data. In addition, SCA did not initially provide data for Afghanistan and Pakistan, including award-type data. Records associated with these two countries accounted for about 92 percent of SCA’s total democracy assistance funding. We identified these countries as potentially missing based on our comparison of SCA data with USAID data. SCA subsequently provided the missing data on democracy assistance awards made in Afghanistan and Pakistan; the data resided within a separate database. SCA democracy assistance awards are allocated across three offices within SCA and EUR, and information regarding democracy assistance programs is not currently managed through a centralized database. According to SCA officials, due to the lack of a centralized database, they would need to carefully coordinate across the three offices. However, despite the coordination efforts of these offices, SCA did not include Afghanistan and Pakistan in their initial submission of data to us, and the additional data SCA subsequently submitted through EUR for Central Asia still contained nonstandard and missing values. A June 2017 State Office of Inspector General report determined that State cannot obtain timely and accurate data necessary to provide central oversight of foreign assistance activities and meet statutory and regulatory reporting requirements. For example, the report said that State cannot readily analyze its foreign assistance by country or programmatic sector. Similarly, we found that State cannot readily analyze its foreign assistance agencywide by country or for its DRG portfolio since INL, EUR, and SCA did not provide reliable DRG award data, including incomplete or duplicative data associated with certain countries. 
According to the report, this lack of data hinders State’s leadership from strategically managing foreign assistance resources, identifying whether programs are achieving their objectives, and determining how well bureaus and offices implement foreign assistance programs. In September 2014, State began the Foreign Assistance Data Review to better understand and document issues with its agencywide data and multiple budget, financial, and program management systems, but State does not plan to complete its Foreign Assistance Data Review until fiscal year 2021. The Consolidated Appropriations Act, 2016, requires State to report on its use of the various award types, and the Office of Management and Budget’s Bulletin No. 12-01 requires State to report quarterly on its foreign assistance activities. Given these reporting requirements, State would not be able to provide accurate and complete data on democracy assistance unless INL, EUR, and SCA took immediate steps to address their data deficiencies. Federal internal control standards call for agencies to use quality information from reliable sources to achieve intended objectives and to effectively monitor activities. Without reliable democracy assistance data from all relevant bureaus, State cannot effectively monitor its democracy assistance programming and report reliable data externally. USAID generally did not document award-type decisions in a complete and timely manner for the awards in our sample. Specifically, USAID provided complete and timely documentation of the award-type decision for 5 of the 41 awards we reviewed. For the remaining 36 awards, the documentation was either incomplete, not timely, or both. According to ADS 304, contract and agreement officers must determine whether to use a contract, grant, or cooperative agreement, including a rationale based on criteria outlined in the Federal Grant and Cooperative Agreement Act. 
Consistent with the requirements of ADS 304, USAID personnel documented the rationale for using a contract, grant, or cooperative agreement for 27 of the 41 awards we reviewed. As table 5 shows, the number of awards in our sample with complete and incomplete documentation of the award-type decision varies by award type. ADS 304 requires contract and agreement officers to document the selection of an award type, including the rationale for the award-type decisions based on the requirements of the Federal Grant and Cooperative Agreement Act. USAID provided documentation of the award-type decision for 31 of the awards in our sample but lacked such documentation for 10 awards. However, for 4 of the 31 awards with documentation of the award-type decision, the documentation was not complete because it did not include a rationale for choosing between grants, cooperative agreements, and contracts on the basis of criteria in the Federal Grant and Cooperative Agreement Act, as required by USAID guidance. The documentation of the award-type decision for these 4 awards, which were all contracts, outlined the rationale for selecting a particular type of contract, information that is required by the FAR. However, the documentation for these 4 awards did not address the decision to use a contract rather than a grant or cooperative agreement, including a rationale based on the requirements outlined in the Federal Grant and Cooperative Agreement Act, as required by ADS 304. For example, documentation for one contract provided a rationale for selecting a firm-fixed-price contract based on the level of risk, which is in accordance with requirements of the FAR. However, the documentation did not indicate the rationale for deciding to use a contract rather than a grant or cooperative agreement as required by ADS 304.
Without documentation of the rationale for award-type decisions as required under USAID guidance, USAID cannot demonstrate that award-type decisions are made based on the requirements outlined in the Federal Grant and Cooperative Agreement Act. For the 31 awards in our sample for which USAID provided documentation of the award-type decision, 6 met the timeliness standard set by USAID guidance, and 25 did not, as shown in table 6. While 5 award-type decisions were both timely and complete, one award that met the timeliness standard lacked a required component. According to ADS 304, contract and agreement officers must document the final award-type decision before a solicitation is issued or before USAID initiates communications with a potential sole source recipient. For 25 awards, documentation of the award-type decision was not timely, either because the decision was documented after the solicitation was issued or because the documentation lacked a date or other indication of when in the process the determination was documented. In the latter cases, we could not determine whether the award-type decisions were documented prior to solicitation or before USAID initiated communications with a potential sole source recipient. Instances in which final award-type decisions were documented after the issuance of a solicitation or communication with a potential sole source recipient include the following:

- Solicitation for one of the contracts in our sample occurred in 2011, but the award-type decision was not documented until 2013.
- The award-type decision for one of the grants in our sample was documented after the grant was awarded, which occurs after the solicitation is issued.
- Solicitation for one cooperative agreement in our sample occurred in 2010, but the award-type decision was not documented until 2012.
According to USAID officials, the agency's practice prior to October 2016 was to include award-type decisions in a comprehensive document that was intended to record all key decisions made throughout the award process. This document was finalized at the end of the award process. However, USAID officials also stated that they have introduced new processes and procedures, including making updates to relevant guidance, templates, and instructions that they believe will result in more timely and complete documentation of award-type decisions. Specifically, in 2016 USAID issued an update to ADS 304 that includes examples of when to use contracts, grants, and cooperative agreements and provides additional information about the legal framework for making award-type decisions. In 2017, USAID also issued revised templates to guide the documentation of award-type decisions. According to USAID officials, in addition to clarifying the ADS 304 guidance and developing new templates, USAID is also developing specific guidance for DRG programs that it expects to release at a future date. For additional information about this DRG-specific guidance, see appendix III. USAID has taken steps to improve documentation for award-type decisions by updating its guidance and templates but has not assessed whether these updates have resulted in timely and complete documentation of award-type decisions. USAID officials stated that assessments are conducted at the sub-bureau or mission level, rather than by specific sectors, such as for DRG programs. As a result, USAID officials do not have plans to assess whether the newly updated processes and procedures have resulted in more timely documentation of DRG award-type decisions. It is important that USAID document the award-type decision before it publishes a solicitation for the award because award-type decisions impact other award elements, such as the requirements for competition and oversight and whether profit is permissible under the award.
Until USAID assesses its updated processes and procedures, it cannot know if the steps it has taken have resulted in complete and timely documentation of award-type decisions as required by USAID guidance. For the awards in our sample, contracts generally differed from grants and cooperative agreements in terms of competition, scope of work, cost sharing and profit, and oversight requirements, among other characteristics. We identified differences in three award elements—competition, cost sharing and profit, and oversight requirements—that were generally consistent with the unique requirements provided for in procurement regulations and agency guidance. We also identified differences between the award types with regard to scope of work, and found certain activities were conducted under all three award types. USAID awarded most, but not all, of the contracts in our sample using full and open competition, according to USAID data. The federal and USAID competition requirements for awarding contracts differ from those that apply to grants and cooperative agreements. In accordance with the FAR, executive agencies such as USAID are required to promote and provide for full and open competition in awarding contracts, with only limited exemptions. USAID did not require full competition for any of the grants in our sample and required it for only about one-third of the cooperative agreements, according to USAID data. For the 41 awards in our sample, table 7 shows how many of each award type used full competition, limited competition, or no competition, based on USAID data. Below are examples of the rationale USAID provided for limiting competition for selected contracts, grants, and cooperative agreements: USAID limited competition for one of the contracts in our sample because of potential impairment to a foreign aid program, and another contract was limited to local competitors.
This exemption to full and open competition is based on a unique statutory authority available to USAID and other agencies operating foreign assistance programs, which has been implemented in the USAID Supplement to the FAR. USAID also exempted one of the contracts in our sample from full and open competition using a provision in the FAR that allows for solicitation from a single source when the purchase falls below a threshold of $150,000. However, USAID officials indicated that they erroneously cited FAR 13.106-1(b), which permits sole source awards for acquisitions not exceeding the simplified acquisition threshold if only one source is reasonably available, when they should have cited FAR 13.501(a)(2)(i), which permits sole source acquisitions of commercial items (including brand-name items) for acquisitions greater than $150,000. For two of the grants in our sample, USAID limited the awards to local competition, according to USAID officials. For one cooperative agreement in our sample, competition was limited, according to USAID data, but USAID did not provide additional information on how the award competition was limited. The recipient of this award had submitted an unsolicited application, which under ADS 303, may be included in a relevant competition for an award, if USAID finds that the unsolicited application reasonably fits an existing program. USAID found that this unsolicited application was responsive to an existing solicitation and thus provided no additional justification. For more information on the rationales USAID used to exempt contracts, grants, and cooperative agreements in our sample from full and open competition, see appendix V. We found that the scope of work for contracts, grants, and cooperative agreements included similar types of activities. 
We also found that contracts more often included a greater number of activities working with the host-country government or other major national institutions, and grants and cooperative agreements more often included a greater number of activities working with civil society organizations. Seven of the 13 contracts in our sample included more activities focused on engaging with host-country governments and national institutions, while only 2 of the 13 contracts included more activities focused on engaging civil society organizations. Grants and cooperative agreements, by contrast, more often included a greater number of activities to support civil society organizations and media organizations than government or major national institutions of the country of performance. Three of the 5 grants in our sample included more objectives or activities focused on engaging civil society organizations, rather than engaging with host government or other major national institutions, while none of the grants included more objectives or activities related to the host government or other major national institutions. Cooperative agreements slightly more often included a greater number of objectives or activities to engage civil society organizations than they did to work with host government and national institutions, with 9 cooperative agreements including more objectives or activities focused on engaging civil society organizations and 7 with more objectives or activities focused on engaging host governments or other national institutions. Below are some examples of the activities and types of parties engaged with as stated in the awards in our sample: One contract in our sample provided various advisors to assist the government of a foreign country in implementing transparent policies, laws, and systems to strengthen public financial management and provide for a well-regulated financial sector, among other things. 
For more information on program objectives for selected democracy assistance awards by contract type, see appendix VI. A grant in our sample sought to increase the capacity of civil society organizations and the media to promote transparent democratic elections and political processes, among other things. Activities under the scope of work for this award included building alliances with stakeholders, conducting election-day observations, and analyzing electoral results. A cooperative agreement in our sample was intended to support a political transition through, among other things, organizational capacity development and grant-making opportunities for civil society organizations working to raise awareness about electoral events. In addition, for our award sample, we found that activities such as technical assistance, training, and local capacity building were conducted under grants, cooperative agreements, and contracts. Eight of the 13 contracts in our sample were cost-plus-fixed-fee contracts, under which the contractor is reimbursed its costs in implementing the program in addition to a fee (profit) that is fixed at the outset. For these 8 contracts, the estimated percentage of profit ranged from about 1 to 6 percent of the estimated contract cost. According to the FAR, under cost-plus-fixed-fee contracts, the fee cannot exceed 10 percent of the contract’s estimated cost excluding fee. The average estimated fixed fee percentage for these contracts was about 5 percent of the estimated contract cost. While USAID contracts may be structured to provide for contractor profit in accordance with the FAR, USAID guidance does not allow profit under grants and cooperative agreements. For the grants and cooperative agreements in our sample, the awards did not specifically provide any fee (profit), and the awardees often agreed to contribute to the cost of the program through cost sharing. 
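The fee arithmetic described above—a fixed fee expressed as a share of the contract's estimated cost, subject to the 10 percent ceiling the FAR sets for cost-plus-fixed-fee contracts—can be sketched as follows. The dollar amounts are illustrative, not figures from the report's sample.

```python
def fee_percentage(fixed_fee, estimated_cost_excluding_fee):
    """Fixed fee as a percent of the contract's estimated cost excluding the fee."""
    return round(100 * fixed_fee / estimated_cost_excluding_fee, 1)

def fee_within_far_limit(fixed_fee, estimated_cost_excluding_fee, limit_pct=10.0):
    """True if the fee does not exceed the ceiling (10 percent here,
    per the FAR limit described for these cost-plus-fixed-fee contracts)."""
    return fee_percentage(fixed_fee, estimated_cost_excluding_fee) <= limit_pct

# Illustrative: a $2.0 million fee on a $40 million estimated cost is 5 percent,
# comfortably within the 10 percent ceiling.
```

A usage check such as `fee_within_far_limit(2_000_000, 40_000_000)` returns True, while a $5 million fee on the same cost base (12.5 percent) would fail the check.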
In addition, USAID guidance identifies cost sharing—whereby an awardee contributes to the total cost of an agreement—as an important element of the USAID-awardee relationship for grants and cooperative agreements. According to this guidance, although there is no general requirement for the awardees of grants and cooperative agreements to share in providing the costs of programs, cost sharing can be a mechanism to help awardees build their organizational capacity. For the awards in our sample, USAID included provisions for cost sharing in 3 of the 5 grants, and the awardees agreed to contribute about 11 percent, 13 percent, and 74 percent of the respective total award funding, including the cost share amount. USAID also included cost sharing provisions in 10 of the 23 cooperative agreements, with the awardee contribution ranging from less than 1 percent to 36 percent of the total award funding, including the cost share amount. All of the grants and cooperative agreements that included cost sharing provisions were awarded to nonprofit organizations, according to USAID data. Some of these awardees agreed to contribute to cost sharing by covering in-kind costs, such as donated time from volunteer legal specialists, and others agreed to contribute cash to cover some of the direct costs of implementing programs, such as personnel and benefits. According to USAID officials, cost sharing is rarely used under USAID contracts because under a cost sharing contract the contractor agrees to absorb a portion of its costs in expectation of substantial compensating benefits, such as certain research and development efforts, and these circumstances rarely occur under USAID’s programming. USAID did not include cost sharing provisions in any of the 13 contracts in our sample. For additional information about profit in our sample, see table 8. Table 9 provides additional information about cost sharing under the awards in our sample. 
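The cost-sharing percentages above treat the awardee's contribution as a share of total award funding including the cost share amount. A minimal sketch of that arithmetic, using hypothetical dollar figures:

```python
def cost_share_percentage(awardee_share, agency_funding):
    """Awardee contribution as a percent of total award funding,
    where the total includes the cost share amount (as in the report's figures)."""
    total = agency_funding + awardee_share
    return round(100 * awardee_share / total, 1)

# Illustrative: a $1 million awardee contribution on top of $9 million in agency
# funding is 10 percent of the $10 million total.
```

Note the denominator choice matters: computing the share against agency funding alone would overstate the percentage (the $1 million contribution would read as 11.1 percent rather than 10 percent).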
Below are some examples of profit and cost sharing arrangements included in contracts, grants, and cooperative agreements in our sample: A contract in our sample sought to, among other things, improve the access of vulnerable and disadvantaged populations to the country’s legal system by engaging in activities such as working to build the capacity of government and civil society organizations to be more responsive to the needs of these populations. Under this award, the contractor was to receive approximately $1.7 million in profit, which was 4 percent of the estimated value of the award. The awardee for a grant in our sample agreed to provide $2.1 million of the program costs, about 74 percent of the total cost of the program, which sought to develop public opinion survey research capacity in the host country, among other things. USAID’s grant to this awardee funded additional support for the program, which the awardee was already executing prior to USAID assistance. A cooperative agreement in our sample included a requirement for the awardee to contribute about 9 percent of program expenditures, or about $3 million, for a program that sought to improve access to health services, as well as strengthen health delivery systems and health governance. We found that USAID oversight requirements differed for contracts compared with grants and cooperative agreements for the awards in our sample. This is because contracts (1) at times required more frequent reporting and (2) more often required evaluations of the contractor’s performance. Reporting requirements: We found that while most awards in our sample required quarterly financial and performance reporting, some contracts required these reports to be submitted monthly. USAID required quarterly financial and performance reporting for the majority of grants and cooperative agreements in our sample. 
None of the grants or cooperative agreements in our sample included requirements for financial reporting more frequently than quarterly, and no grants and only one cooperative agreement included a more frequent performance reporting requirement. According to Title 2 of the Code of Federal Regulations (C.F.R.), Section 200.327, under grants and cooperative agreements, financial reports must be collected by agencies with the frequency required by the award, but no less frequently than annually and no more frequently than quarterly, except in unusual circumstances, such as where more frequent reporting is necessary for effective monitoring of the award. USAID officials confirmed that there would have to be a reason to justify quarterly or more frequent reporting requirements for grants or cooperative agreements. For example, considerations related to risk could result in the need for more frequent reporting for grants and cooperative agreements. Table 10 shows the financial and performance reporting requirements for the contracts, grants, and cooperative agreements in our sample. Evaluations of performance: For the majority of contracts in our sample, USAID included provisions for evaluation of the contractor’s performance at the conclusion of performance. According to the FAR, evaluations of a contractor’s performance shall be prepared at the time the work under the contract is completed, and, for contracts longer than 1 year, interim evaluations should be prepared at least annually. USAID officials indicated that there is no similar government-wide or USAID requirement for grants and cooperative agreements. None of the grants and only a few of the cooperative agreements in our sample included such evaluation provisions. However, USAID officials noted that, in accordance with USAID policy, the past performance of a potential awardee is considered in conducting risk assessments for grants and cooperative agreements. 
Table 11 shows the number of USAID contracts, grants, and cooperative agreements in our sample that included provisions for evaluation of the contractor or awardee’s performance. For most of the contracts in our sample, award documentation indicated that the contractor’s performance would be assessed on a variety of factors such as quality of service, cost control, timeliness of performance, and effectiveness of key personnel. These evaluations form the basis of the contractor’s performance record for the contract. Only one contract in our sample had no requirement for a performance evaluation of the contractor, and that award was for the rental of a hotel ballroom and related services for an event. For three of the cooperative agreements in our sample that included provisions for the evaluation of the awardee’s performance, the award documentation indicated that USAID officials were to ensure prudent management of the award and to make the achievement of program objectives easier by, among other things, evaluating the awardee and its performance. One cooperative agreement included a provision that USAID will fund or conduct an external midterm evaluation during the second year of the project. For the one remaining cooperative agreement with an evaluation provision, documentation indicated that the evaluation would be used to inform a decision about a potential follow-on award. Democracy assistance has been a key component of U.S. foreign assistance, supporting activities related to rule of law and human rights, good governance, political competition and consensus-building, and civil society. USAID and State together have allocated about $2 billion annually for democracy assistance in fiscal years 2012 through 2016. USAID’s information systems enable it to track and report the amount of democracy assistance funding through contracts, grants, and cooperative agreements. However, State lacks the ability to provide comparable agencywide data. 
The quality of democracy assistance award data provided by 10 State bureaus and offices varied, and three of these bureaus were unable to provide reliable data. Of the State bureaus, INL is the only bureau that regularly makes use of contracts, and it provided unreliable data. Without reliable data from INL, State cannot accurately report on its use of the various award types. In addition, since EUR’s and SCA’s award data are maintained across embassies, offices, and the two bureaus, opportunities for data errors may increase when regional data needs to be compiled. Without reliable data from all relevant bureaus, State cannot be sure that it is fully and accurately reporting on democracy assistance awards, which limits, among other things, congressional oversight of democracy assistance funding. While USAID requirements for complete and timely documentation of award-type decisions have existed since at least 2011, for our sample of 41 USAID awards for which an award-type decision was required, only 5, or about 12 percent, had both complete and timely documentation of the award-type decision. USAID recently introduced processes and procedures to improve the documentation of these decisions. However, until USAID assesses its updated processes and procedures, it cannot know if the changes resulted in award-type decisions being documented in a complete and timely manner, as required by its guidance, or if additional steps are needed. We are making three recommendations, two to State and one to USAID. The Secretary of State should direct the Bureau of International Narcotics and Law Enforcement Affairs to identify and address factors that affect the reliability of its democracy assistance data, such as miscoded or missing data. (Recommendation 1) The Secretary of State should direct the Director of the Office of U.S. 
Foreign Assistance Resources to implement a process to improve the reliability, accessibility, and standardization of democracy assistance data across the geographic regions of the Bureaus of European and Eurasian Affairs and South and Central Asian Affairs, such as utilizing a centralized database for award data. (Recommendation 2) The USAID Administrator should direct the Office of Acquisition and Assistance to assess whether current processes and procedures as outlined in revised guidance result in complete and timely documentation of award-type decisions for democracy assistance. (Recommendation 3) We provided a draft of this report to State, USAID, and NED for review and comment. State, USAID, and NED provided technical comments on the draft, which we incorporated as appropriate. State and USAID also provided written comments in letters that are reproduced in appendices VII and VIII, respectively. In their written comments, both State and USAID concurred with our recommendations. State also requested that the report provide more information about its commitment and efforts to improve accountability of foreign assistance under its Foreign Assistance Data Review process. We have added more details about these efforts, including a discussion of State’s recent report to Congress on the outcomes of Phases One and Two of its four-phase review, which is expected to be completed in fiscal year 2021. State’s letter also described other efforts to improve the quality and accessibility of data at the bureau level and at posts. In its written comments, USAID stated that it will take steps to assess documentation of award-type decisions and planned to complete this assessment by September 30, 2018. USAID also underscored certain details regarding required documentation of award-type decisions for some awards in our sample of 41 USAID democracy assistance awards. 
USAID noted that three contracts in our sample consisted of task orders, which do not require award-type decision documentation separate from their base awards under USAID guidance, according to agency officials. The draft report included these details, and we added more information to the report to further clarify them. We are sending copies of this report to the appropriate congressional committees, the Secretary of State, the Administrator of the U.S. Agency for International Development, and the President of the National Endowment for Democracy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX. This report (1) examines funding that the U.S. Agency for International Development (USAID), National Endowment for Democracy (NED), and U.S. Department of State (State) obligated for democracy assistance through contracts, grants, and cooperative agreements; (2) evaluates USAID documentation of award-type decisions; and (3) compares USAID contracts with grants and cooperative agreements across selected award elements. To examine funds obligated by USAID, NED, and State for democracy assistance by award types, we obtained data on awards that USAID, NED, and State administered during fiscal years 2012 through 2016 under the Democracy, Human Rights, and Governance (DRG) portfolio. The data we obtained included awards to public international organizations (PIO). However, awards to PIOs are governed by USAID guidance separate from the guidance that applies to awards to other types of organizations. The data we obtained also included interagency agreements. 
However, interagency agreements are governed by separate USAID guidance that does not require the same award-type decision as when agencies obligate funds to entities through contracts, grants, and cooperative agreements. We analyzed the award data for fiscal years 2012 through 2016 but did not include fiscal year 2011 data in our analysis because State did not consistently track obligations data at the award level prior to fiscal year 2012, according to State officials. We assessed the reliability of these data by reviewing related documentation; interviewing knowledgeable officials; and conducting electronic or manual data testing for missing, nonstandard, or duplicative data; among other things. We determined that data provided by USAID, NED, and State, except for data from State’s Bureau of International Narcotics and Law Enforcement Affairs (INL), Bureau of European and Eurasian Affairs (EUR), and Bureau of South and Central Asian Affairs (SCA), were sufficiently reliable for the purposes of our report. For the USAID, NED, and State data that were sufficiently reliable, we analyzed the amount of funding by award type, among other variables. We assessed State’s data reliability challenges against federal internal control standards. To evaluate USAID’s award-type decisions, we reviewed relevant regulations and agency policies, and we interviewed knowledgeable agency officials about these policies. State and NED were not included in our sample because most State bureaus did not regularly use all three types of awards and NED only provides assistance through grants. In addition, three State bureaus were unable to provide reliable data from which to select a sample. We selected a roughly proportional, random, nongeneralizable sample of 41 awards—13 contracts, 5 grants, and 23 cooperative agreements. These awards were selected based on characteristics, such as award type, DRG program area, and place of performance. 
The sample focused on the 14 countries for which USAID obligated the most democracy funding. Democracy assistance projects in these 14 countries received over 70 percent of USAID’s democracy assistance funding. The sample was also limited to contracts, grants, and cooperative agreements that were awarded by USAID in fiscal years 2012 through 2015 because fiscal year 2015 was the most recent fiscal year for which data were available at the time of our sample selection. We excluded (1) grants made to PIOs because these awards are governed by USAID guidance separate from the guidance that applies to awards to other types of organizations; (2) interagency agreements because engaging other federal agencies through interagency agreements does not require the same award-type decision under USAID guidance as when agencies obligate funds to entities through contracts, grants, and cooperative agreements; and (3) awards that fell below the simplified acquisition threshold, which is $150,000, because there are different acquisition procedures allowable for awards that fall below the threshold. For the selected awards, we obtained and analyzed preaward documentation relevant to the award-type decision and evaluated this documentation against the relevant regulations and agency guidance. To ensure accuracy, we cross-checked information from the documentation for the selected awards with USAID’s award data. In collaboration with subject-matter experts, we selected four award elements—competition, cost sharing and profit, scope of work, and oversight requirements—for a comparison of contracts with grants and cooperative agreements. To compare USAID contracts with grants and cooperative agreements across selected award elements, we obtained and conducted a review of documentation associated with the same sample of 41 USAID awards. Additionally, we obtained information about award recipients from a public database maintained at SAM.gov. 
Using information collected from the documentation, we analyzed the selected awards’ competition, cost sharing and profit, scope of work, and oversight activities. Subsequently, we reviewed the documentation and applicable legal frameworks, including federal regulations and guidance pertaining to the award elements we selected, to compare differences between award types. We also interviewed relevant agency officials as well as the leading industry organizations that represent implementers of foreign assistance programs to better understand the use of various award types. We conducted this performance audit from July 2016 to December 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Our nongeneralizable sample of U.S. Agency for International Development (USAID) awards was limited to fiscal years 2012 through 2015 and to the 14 countries for which USAID obligated the most democracy funding, which accounted for over 70 percent of USAID’s democracy assistance funding. Total USAID democracy assistance funding for projects in Afghanistan was greater than for any other country, amounting to almost 39 percent of USAID’s total democracy assistance obligations during fiscal years 2012 through 2015. The total USAID democracy assistance funding for projects in Afghanistan included obligations to public international organizations (PIOs) of more than $827 million in fiscal year 2012, more than $55 million in fiscal year 2013, more than $48 million in fiscal year 2014, and more than $369 million in fiscal year 2015. 
USAID’s use of award types for democracy assistance varied across these 14 countries during fiscal years 2012 through 2015, as shown in figure 11. The Consolidated Appropriations Act, 2016, states that not later than 90 days after enactment of the act, the U.S. Department of State (State) and U.S. Agency for International Development (USAID), following consultation with democracy program implementing partners, shall each establish guidelines for clarifying program design and objectives for democracy programs, including the use of contracts versus grants and cooperative agreements in the conduct of democracy programs carried out with funds appropriated by the act. The joint explanatory statement accompanying the act further elaborated that the act requires the development of guidelines for the use of contracts versus grants and cooperative agreements for the unique objectives of democracy programs, and that the guidelines should assist contracting and agreement officers in selecting the most appropriate mechanism for democracy programs, among other things. In 2016, USAID released its revised agencywide guidance, Automated Directives System (ADS) Chapter 304 (ADS 304), on how to make award-type decisions between contracts, grants, and cooperative agreements. According to USAID officials, USAID expects to release guidance further clarifying ADS 304 at a future date. USAID intends to issue the guidance after it completes final consultations with implementing partners, the Congress, and other stakeholders. The forthcoming guidance includes scenarios and examples to further clarify existing government-wide and agencywide guidance. According to USAID officials, in drafting its guidance to further clarify ADS 304, USAID pursued multiple rounds of review within USAID, and with implementing partners, the Congress, and other stakeholders. 
According to State, it met the requirement to establish additional guidelines for democracy assistance through State’s release of a Program Design and Performance Management Toolkit in fall 2016 and State’s updating of its Federal Assistance Directive in May 2017. The aim of the Program Design and Performance Management Toolkit was to clarify program design and objectives for foreign assistance programs broadly. The Federal Assistance Directive combined both policies and procedures from the Federal Assistance Policy Directive and the Procedural Desk Guide into one document and clarified appropriate mechanisms for all programs. Although applicable to democracy programs, neither of these actions was specific to democracy programs. According to State, the Bureau of Democracy, Human Rights, and Labor; the Bureau of International Narcotics and Law Enforcement; and other relevant State bureaus that work closely with democracy assistance implementing partners consult regularly with and provide guidance to implementing partners on the use of the guidelines. U.S. Agency for International Development (USAID) democracy assistance obligations through different award types varied by fiscal year and DRG program area, as shown in tables 12, 13, and 14. Regulations, law, and policy enable the U.S. Agency for International Development (USAID) to limit competition in awarding contracts, grants, and cooperative agreements under certain circumstances. One source of USAID’s authority to limit competition for contracts is the Competition in Contracting Act of 1984, as implemented in the Federal Acquisition Regulation (FAR), which outlines policies and procedures for acquisition by all federal agencies, including policies and procedures pertaining to exemptions from competition. In addition, for contracts awarded under USAID programs, the FAR, among other regulations and legislation, contains specific provisions on exemptions from competition. 
For grants and cooperative agreements, USAID’s Automated Directives System Chapter 303 outlines circumstances under which competition can be limited. In accordance with applicable policies, procedures, and guidance, USAID can use some exemptions from competition only for contracts and others only for grants and cooperative agreements. For example, USAID can limit competition for contracts for the sake of public interest or when circumstances are such that competition would compromise U.S. national security; however, according to USAID officials, they rarely have cause to use these grounds for limiting competition. USAID guidance outlines some unique exemptions to competition for grants and cooperative agreements. For example, USAID can exempt follow-on awards, which are the same or substantively similar to recently completed awards, if the awardee will be the same, or can exempt awards from competition in certain instances when USAID has received an unsolicited application. For the awards in our sample, USAID limited competition for only 3 of the 13 contracts, based on USAID data. However, for one of these contracts, USAID officials indicated that they erroneously cited FAR 13.106-1(b), which permits sole source awards for acquisitions not exceeding the simplified acquisition threshold if only one source is reasonably available, when they should have cited FAR 13.501(a)(2)(i), which permits sole source acquisitions of commercial items (including brand-name items) for acquisitions greater than $150,000. For the exemptions from competition that USAID used for these awards, see table 15. USAID exempted from full competition all five of the grants and 15 of the 23 cooperative agreements in our sample. Table 16 outlines exemptions from competition that USAID may use for grants and cooperative agreements in our sample. For the contracts in our sample, we found that program objectives varied by type of contract. 
For example, the firm-fixed-price award and three of the four indefinite quantity awards in our sample procured goods and services with specific deliverables that were directly for the U.S. Agency for International Development’s (USAID) benefit. Nearly all of the cost-plus-fixed-fee contracts sought to achieve improvements in the public sector of the country of performance through activities such as supporting developments in public policy or strengthening national institutions. For examples of differences in program objectives by contract type in our sample, see table 17. In addition to the contact named above, Mona Sehgal (Assistant Director), Justine Lazaro (Analyst-in-Charge), Lindsey Cross, Christopher Hayes, Carl Barden, Karen Cassidy, David Dayton, Timothy DiNapoli, Justin Fisher, Alexandra Jeszeck, Heather Latta, Madeline Messick, Natarajan Subramanian, Alex Welsh, and Bill Woods made key contributions to this report.
Supporting efforts to promote democracy has been a foreign policy priority for the U.S. government. In recent years, USAID and State have allocated about $2 billion per year toward democracy assistance overseas. Congress required USAID and State to each establish guidelines for and report on the use of contracts, grants, and cooperative agreements for certain democracy programs. GAO was asked to review U.S. democracy assistance. This report (1) examines funding USAID, NED, and State obligated for democracy assistance primarily through contracts, grants, and cooperative agreements and (2) evaluates documentation of USAID award-type decisions, among other objectives. GAO analyzed USAID, NED, and State democracy assistance award data for fiscal years 2012–2016. GAO also reviewed relevant regulations and agency policies and analyzed documentation for a nongeneralizable sample of USAID awards selected based on factors such as award type, program area, and country. In fiscal years 2012–2016, the U.S. Agency for International Development (USAID) obligated $5.5 billion and the National Endowment for Democracy (NED) obligated $610.2 million in democracy assistance funding. The total funding the Department of State (State) obligated for democracy assistance could not be reliably determined. One-third of all USAID obligations were provided through public international organizations (PIOs), which under USAID guidance are composed principally of countries or other organizations designated by USAID; 94 percent of PIO obligations were provided to the World Bank for democracy assistance projects in Afghanistan. The remaining two-thirds of USAID obligations were provided through contracts, grants (excluding PIOs), and cooperative agreements. Of the 10 State bureaus providing democracy assistance, 3 were unable to provide reliable funding data for fiscal years 2012–2016. Data from these bureaus were incomplete, nonstandard, or inaccurate. 
Federal internal control standards call for agencies to use quality information from reliable sources to achieve intended objectives and to monitor activities. Without such data, State cannot effectively monitor its democracy assistance programming and report reliable data externally. For the awards GAO sampled, USAID generally did not document decisions about whether to award a contract, grant, or cooperative agreement (known as award-type decisions) in a complete and timely manner. According to applicable USAID guidance, agency officials were required to (1) document the final award-type decision with their written determination, including a rationale based on the requirements of the Federal Grant and Cooperative Agreement Act, and (2) complete this documentation before award solicitation occurs or, for noncompetitive awards, before USAID initiated communications with a potential sole-source awardee. However, USAID provided both complete and timely documentation of the award-type decision for only 5 of the 41 awards GAO sampled. For the remaining 36 awards, the documentation was incomplete, was not timely (or its timeliness could not be determined), or both (see table). While USAID has taken steps to improve documentation for award-type decisions by updating its guidance and templates, it has not assessed whether these updates have resulted in complete and timely documentation. It is important that USAID document these decisions in advance of solicitation because the selection of an award type may affect requirements for administering the award, including competition and oversight requirements and whether or not profit is permissible. State should improve the reliability and completeness of its democracy assistance funding data, and USAID should assess whether steps taken are resulting in complete and timely documentation of democracy assistance award-type decisions. State and USAID concurred with GAO's recommendations and described actions planned or under way to address them.
Since September 2012, CMS has subjected selected items and services to prior authorization and pre-claim reviews—a process similar to prior authorization where review takes place after services have begun—through four fixed-length demonstrations and a permanent program. The prior authorization demonstrations are for certain power mobility devices, repetitive scheduled non-emergency ambulance services, non-emergency hyperbaric oxygen therapy, and home health services, while the permanent program is for certain durable medical equipment, prosthetics, orthotics, and supplies (DMEPOS) items. By using prior authorization, CMS generally seeks to reduce expenditures, unnecessary utilization, and improper payments, although specific objectives for the programs vary based on the statutory authority CMS used to initiate each. Power mobility devices demonstration: In September 2012, CMS implemented prior authorization for certain scooters and power wheelchairs, items the agency has identified with historically high levels of fraud and improper payments, for Medicare beneficiaries in seven states: California, Florida, Illinois, Michigan, New York, North Carolina, and Texas. The demonstration, established under Section 402(a) of the Social Security Amendments of 1967, is intended to develop or demonstrate improved methods for the investigation and prosecution of fraud in providing care or services under Medicare. In October 2014, CMS expanded the demonstration to 12 additional states: Arizona, Georgia, Indiana, Kentucky, Louisiana, Maryland, Missouri, New Jersey, Ohio, Pennsylvania, Tennessee, and Washington. CMS also extended the program, which was originally scheduled to end in 2015, until August 2018. CMS officials reported that since the prior authorization programs’ implementation, the agency made more than 100 referrals to its contractors that investigate fraud. 
However, due to the length of time fraud investigations typically take, results from these referrals are not yet available. Repetitive scheduled non-emergency ambulance services demonstration: In December 2014, CMS implemented prior authorization for repetitive scheduled non-emergency ambulance services in selected states. CMS extended the program, which was originally scheduled to end in 2017, through November 2018. Non-emergency hyperbaric oxygen therapy demonstration: In March 2015, CMS implemented prior authorization for non-emergency hyperbaric oxygen therapy in three states the agency has identified with high utilization and improper payment rates, based on the therapy facility’s location: Illinois, Michigan, and New Jersey. Medicare covers hyperbaric oxygen therapy for certain conditions, such as diabetic wounds of the lower extremities, after there have been 30 days of no measurable signs of healing during standard wound care treatment. According to CMS, previous experience indicates that hyperbaric oxygen therapy has a high potential for improper payments and raises concerns about beneficiaries receiving medically unnecessary care. The demonstration, established under Section 1115A of the Social Security Act, is intended to reduce expenditures while preserving or enhancing quality of care. The demonstration ended in February 2018. Home health services demonstration: In August 2016, CMS implemented prior authorization for home health services in Illinois. The demonstration, established under Section 402(a) of the Social Security Amendments of 1967, is intended to develop or demonstrate improved methods for the investigation and prosecution of fraud in providing care or services under Medicare. The demonstration was originally scheduled to incorporate other states the agency has identified with high rates of fraud and abuse: Florida, Massachusetts, Michigan, and Texas. However, as of April 2017, CMS paused the demonstration while it considered making improvements. As of February 2018, the demonstration has not resumed. 
Permanent DMEPOS program: In December 2015, CMS established a permanent prior authorization program for certain DMEPOS items under Section 1834(a)(15) of the Social Security Act. This program aims to reduce unnecessary utilization for certain DMEPOS items. To select the items that would be subject to prior authorization, CMS compiled a Master List of items that 1) appear on the DMEPOS Fee Schedule list, 2) have an average purchase fee of $1,000 or greater (adjusted annually for inflation) or an average rental fee schedule of $100 or greater (adjusted annually for inflation), and 3) meet one of these two criteria: the item was identified in a GAO or HHS Office of Inspector General report that is national in scope and published in 2007 or later as having a high rate of fraud or unnecessary utilization, or the item is listed in the 2011 or later published Comprehensive Error Rate Testing program’s annual report. CMS may choose specific items from this Master List to include on the required prior authorization list, and there is no set end date for requiring prior authorization for those items. CMS may suspend prior authorization for those items at any time. (See app. I for the items on the Master List.) In March 2017, CMS began requiring prior authorization for two types of group 3 power wheelchairs from the Master List for beneficiaries with a permanent address in selected states (Illinois, Missouri, New York, and West Virginia) and expanded the program nationwide in July 2017. As of February 2018, CMS has not identified any other items from the Master List for prior authorization. See figure 1 for each prior authorization program’s implementation and end dates. MACs that administer the prior authorization programs review prior authorization requests for items and services, along with supporting documentation, to determine whether all applicable Medicare coverage and payment rules have been met. 
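Read as a screen, the Master List criteria above combine a fee-schedule test, a price-threshold test, and an oversight-history test. The following is a rough illustrative sketch only, not CMS's actual implementation; the field names are hypothetical, and the $1,000 purchase and $100 rental thresholds are shown without the annual inflation adjustments described above:

```python
# Hypothetical sketch of the Master List screening criteria described above.
# Field names are illustrative, not CMS's actual data schema.

def on_master_list(item):
    """Return True if a DMEPOS item meets all three Master List criteria."""
    # 1) Item must appear on the DMEPOS Fee Schedule list.
    if not item["on_fee_schedule"]:
        return False
    # 2) Average purchase fee of $1,000 or greater, or average rental fee
    #    of $100 or greater (inflation adjustments omitted in this sketch).
    if not (item["avg_purchase_fee"] >= 1000 or item["avg_rental_fee"] >= 100):
        return False
    # 3) Identified in a GAO or HHS OIG report (2007 or later) as having a
    #    high rate of fraud or unnecessary utilization, or listed in a 2011
    #    or later Comprehensive Error Rate Testing (CERT) annual report.
    return item["flagged_by_gao_oig"] or item["in_cert_report"]

items = [
    {"on_fee_schedule": True, "avg_purchase_fee": 1500, "avg_rental_fee": 0,
     "flagged_by_gao_oig": True, "in_cert_report": False},   # meets all criteria
    {"on_fee_schedule": True, "avg_purchase_fee": 500, "avg_rental_fee": 50,
     "flagged_by_gao_oig": True, "in_cert_report": True},    # fails fee test
]
print([on_master_list(i) for i in items])  # [True, False]
```

Meeting the criteria only places an item on the Master List; as the text notes, CMS separately chooses which Master List items to actually subject to prior authorization.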
CMS expects requests for prior authorization to include all documentation necessary to show that coverage requirements have been met, for example, face-to-face examination documentation or the detailed product description. The referring physician—or the physician who conducts the face-to-face examination of the beneficiary and orders the item or service—provides this documentation to a provider or supplier who subsequently furnishes the item or service. According to multiple MACs’ officials, the provider or supplier who furnishes the item or service typically submits the prior authorization request. CMS has specified that MACs review initial prior authorization requests and make a determination within 10 business days. MACs make one of the following decisions: Provisionally affirm (approve) – Documentation submitted meets Medicare’s coverage and payment rules. A prior authorization provisional affirmation is a preliminary finding that a future claim submitted to Medicare for the item or service meets Medicare’s coverage and payment requirements and will likely be paid. Non-affirm (deny) – Documentation submitted does not meet Medicare rules or the item or service is not medically necessary. However, a non-affirmed request may be revised and resubmitted for review an unlimited number of times prior to the submission of the claim for payment. CMS has specified that MACs make a determination on a resubmission within 20 business days. For the demonstrations, claims that are submitted without a prior authorization provisional affirmation are subject to prepayment review, which is medical review before the claim is paid. In addition, for the home health services and power mobility devices demonstrations, claims submitted without a prior authorization provisional affirmation that are determined payable during the medical review will be subject to a 25 percent reduction in payment. 
For the permanent program, claims that are submitted without a prior authorization provisional affirmation are denied. (See fig. 2 for the prior authorization process.) As of March 31, 2017, MACs had processed over 337,000 prior authorization requests—about 264,000 initial requests and about 73,000 resubmissions, as shown in table 1. MACs’ provisional affirmation rates for both initial and resubmitted prior authorization requests rose in each demonstration between their implementation and March 2017, often by 10 percentage points or more. For example, the provisional affirmation rate for initial submissions for repetitive scheduled non-emergency ambulance services rose from 28 percent in the first 6 months after implementation (December 2014 through May 2015) to 66 percent in the most recent 6 months for which data are available (October 2016 through March 2017). Some MAC officials attributed this rise in part to provider and supplier education, which improved documentation being submitted. According to our analysis, expenditures decreased for items and services subject to prior authorization in four demonstrations. For example, expenditure decreases in initial demonstration states from implementation through March 2017 ranged from 17 percent to 74 percent. Figure 3 shows the average monthly expenditures per state from 6 months prior to the start of each demonstration through March 2017 for each of three groups of states: states that were part of the initial demonstration, states that were part of the demonstration expansion, and non-demonstration states. (See app. II for monthly expenditures for items and services covered under each demonstration from their implementation through March 2017.) Our analysis also shows potential savings for items and services subject to prior authorization, based on the difference between actual expenditures and estimates of what expenditures would have been in the absence of the demonstrations. 
For each demonstration, we estimated what expenditures would have been had the demonstration not been implemented by assuming that expenditures would have remained at their average for the 6 months prior to the demonstration starting in each state. We then compared actual expenditures to these estimated expenditures and found that potential savings could be as high as about $1.1 to $1.9 billion. Estimated potential savings in states that were part of the demonstrations since either their initial implementation or expansion may be as high as $1.1 billion. For items and services subject to prior authorization in these states, estimated expenditures in the absence of the demonstrations would have been over $2.1 billion, while actual expenditures were about $1.0 billion. Estimated potential savings may be as high as about $1.9 billion if, for the power mobility device demonstration, we estimate savings in both demonstration states and non-demonstration states since implementation. With this method, estimated savings since the power mobility device demonstration’s implementation change from over $600 million in demonstration states since each state’s implementation to about $1.4 billion in all states since the demonstration began in September 2012, a nearly $800 million increase. This increase is due to including non-demonstration states in the analysis and changing the assumptions for expanded demonstration states in the analysis. CMS officials have reported that certain power mobility device expenditures have declined significantly in both demonstration states and non-demonstration states in part because they think that larger nationwide suppliers improved their compliance with CMS policies in all states based on their experiences with prior authorization. CMS did not make a similar statement for the other demonstrations, and in December 2017, CMS officials said that the agency has not analyzed expenditures in non-demonstration states for the other demonstrations. 
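The potential-savings estimate described above reduces to simple arithmetic: project the pre-demonstration monthly average forward over the demonstration period, then subtract actual spending. A minimal sketch of that calculation, using made-up monthly figures rather than GAO's expenditure data:

```python
def estimate_savings(pre_period, actual_period):
    """Estimate savings as projected baseline expenditures minus actual expenditures.

    pre_period: monthly expenditures for the 6 months before the demonstration.
    actual_period: monthly expenditures after implementation.
    The counterfactual assumes spending would have stayed at the pre-period average.
    """
    baseline = sum(pre_period) / len(pre_period)   # pre-demonstration monthly average
    projected = baseline * len(actual_period)      # expenditures had nothing changed
    return projected - sum(actual_period)

# Illustrative numbers only ($ millions): spending falls after implementation.
pre = [10, 11, 9, 10, 10, 10]        # pre-demonstration average = 10 per month
post = [8, 6, 5, 4, 4, 4, 4, 4]      # 8 months of actual expenditures
print(estimate_savings(pre, post))   # 80 projected - 39 actual = 41.0
```

As the text notes, this baseline-projection approach attributes the entire expenditure decline to the demonstration, which is why GAO frames the results as potential savings that other concurrent program integrity efforts may also have contributed to.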
See table 2 for estimated potential savings for prior authorization demonstrations from implementation through March 2017. According to our analysis, more than half of the reduction in monthly expenditures took place within the first 6 months of each demonstration. We calculated the average monthly expenditures for the 6 months prior to the start of each demonstration, the monthly expenditures in the 6th month after implementation, and the monthly expenditures in March 2017 for initial demonstration states in the power mobility device, repetitive scheduled non-emergency ambulance services, and non-emergency hyperbaric oxygen therapy demonstrations. We compared these expenditures and found that 58, 99, and 91 percent of the reduction in monthly expenditures during this time took place during the first 6 months of each demonstration, respectively. CMS had other program integrity efforts underway before implementing prior authorization, and these efforts may have also contributed to the reduction in expenditures for items and services subject to prior authorization in these demonstrations. CMS officials said that it is likely that prior authorization played a large role in the expenditure reduction for those select items and services. However, CMS officials also reported that it is difficult to separate the effects of prior authorization from other program integrity efforts, and the agency has not developed a methodology for determining the independent effect of prior authorization on expenditures. We found that some of these other program integrity efforts have addressed provider screening and enrollment and certain durable medical equipment, and these may have contributed to the reductions in Medicare expenditures. Provider screening and enrollment: CMS has taken steps to keep potentially fraudulent providers and suppliers from billing Medicare. 
For example, in September 2011, CMS began revalidating providers' and suppliers' enrollment in Medicare to ensure that they continue to be eligible to bill Medicare. Revalidation involves confirming that the provider or supplier continues to meet Medicare program requirements, including ensuring that a provider or supplier does not employ or contract with individuals who have been excluded from participation in federal health care programs. We previously reported that screening all providers and suppliers—not just the ones subject to prior authorization—resulted in over 23,000 new applications being denied or rejected and over 703,000 existing enrollment records being deactivated or revoked from March 2011 through December 2015. We also reported that CMS estimated the revised process avoided $2.4 billion in total Medicare payments to ineligible providers and suppliers from March 2011 to May 2015, some of which may have been payments for items and services subject to prior authorization. In July 2013, CMS implemented moratoria on enrollment of new providers for home health services and for repetitive, scheduled non-emergency ambulance transport in select counties. As of January 2018, CMS had extended the home health services moratoria statewide to Florida, Illinois, Michigan, and Texas and the repetitive, scheduled non-emergency ambulance transport moratoria statewide to Pennsylvania and New Jersey. During a moratorium, no new applications to enroll as a billing provider of the affected service types are reviewed or approved. In October 2017, CMS officials said that home health and non-emergency ambulance services' expenditures may have been affected by provider enrollment moratoria. Certain durable medical equipment pricing, payments, and education and outreach: CMS has taken steps to change how certain durable medical equipment is paid for and to provide ongoing durable medical equipment education and outreach. 
For example, in January 2011, CMS implemented a DMEPOS competitive bidding program required by the Medicare Prescription Drug, Improvement, and Modernization Act of 2003. Under the program, only competitively selected contract suppliers can furnish certain durable medical equipment items at competitively determined prices to Medicare beneficiaries in designated areas. CMS began the program in 9 of the largest metropolitan areas, and in July 2013 expanded to an additional 100 large metropolitan areas. In January 2016, CMS implemented competitive bidding program-based adjusted prices for non-designated areas for durable medical equipment items that were previously, or are currently, included in the competitive bidding program. According to CMS, the program—which generally results in lower competitively bid prices—is reducing expenditures for approximately half of the beneficiaries receiving power mobility devices nationwide. We have previously reported that prices decreased for power mobility devices in the competitive bidding program; some of these devices are also subject to prior authorization. In January 2011, CMS eliminated the lump sum purchase option for standard power wheelchairs. This change reduced expenditures for power wheelchairs used on a short-term basis because payments for short-term rentals are lower than for the purchase of these items. Durable medical equipment MACs and CMS provide continuous DMEPOS education and outreach. According to CMS, the education and outreach may have contributed to reducing expenditures for power mobility devices by helping providers and suppliers to understand how to bill correctly and to submit fewer claims that do not meet Medicare coverage and payment requirements. Many of the officials we interviewed representing provider, supplier, and beneficiary groups, as well as CMS and MACs, reported benefits to prior authorization. 
Officials from some of these groups said that prior authorization is an effective tool to reduce unnecessary utilization and improper payments. Some officials who reported benefits said that prior authorization helps educate providers and suppliers about allowable items and services under Medicare and improves providers' and suppliers' documentation. Some officials also said that providers and suppliers appreciate the assurance of knowing that Medicare is likely to pay for these items and services. Officials from three provider and supplier groups said that by getting provisional prior authorization, their claims will likely not be denied, and they can thus avoid the appeals process, for which there are significant delays. In addition, officials from two provider and supplier groups believe that prior authorization may deter fraudulent suppliers from participating in Medicare. Because of these benefits, these provider and supplier group officials recommended that CMS expand its use of prior authorization. In addition, CMS has improved the prior authorization programs by responding to some of the providers' and suppliers' initial concerns. For example, for the power mobility device demonstration, CMS and MAC officials that process DMEPOS claims reported that providers and suppliers were initially confused about whether beneficiaries with representative payees—persons or organizations authorized to accept payment on a beneficiary's behalf—were exempt from the prior authorization program. To address this issue, CMS revised and clarified its guidance related to representative payees. In addition, for the non-emergency hyperbaric oxygen therapy demonstration, officials from CMS and a MAC administering the demonstration said that providers and suppliers raised concerns that a Medicare-covered condition (compromised skin grafts) included in the demonstration required immediate care and therefore should not be subject to prior authorization. 
In response, CMS removed the condition from the list of conditions subject to prior authorization. Some provider and supplier group officials we interviewed reported that obtaining the documentation necessary to submit a prior authorization request can be difficult. For example, some of these officials told us that providers and suppliers may spend 3 to 7 weeks obtaining necessary documentation from referring physicians and other relevant parties before submitting a prior authorization request. While CMS's documentation requirements did not change under prior authorization, officials from a provider and supplier group we spoke with said that prior authorization exacerbates existing documentation challenges because they must obtain all required documentation before providing items and services to beneficiaries. As we noted in a previous report, two durable medical equipment MACs said that referring physicians may lack financial incentives to submit proper documentation since they are unaffected if a durable medical equipment or home health claim is denied due to insufficient documentation, while the provider or supplier submitting the claim loses the payment. Furthermore, according to some provider and supplier group representatives, CMS's documentation requirements can be difficult to meet. Representatives from one provider and supplier group said that there is a high standard of proof for the information needed to support medical necessity requirements. For example, documentation in the medical record is required to show whether the referring physician considered other options. Also, representatives from another provider and supplier group said that, unlike private insurers, CMS has more requirements that providers and suppliers consider administrative. For instance, MACs deny prior authorization requests for missing physician signatures. 
In addition, representatives from a provider and supplier group said it may be necessary to collect documentation from multiple providers that treated the beneficiary in order to meet CMS's medical necessity requirements. However, officials from one private insurer said that their medical necessity requirements for certain items and services may necessitate receiving documentation from several providers as well, although this does not occur often. CMS officials acknowledged that the agency's requirements may be more difficult to meet than those of private health insurers. However, this scrutiny may be beneficial because, unlike private insurers, Medicare must pay for health care delivered by any eligible physician willing to accept Medicare payment and follow Medicare requirements. We found that CMS and the MACs have taken some steps to assist providers and suppliers in obtaining documentation from referring physicians. For example, CMS has created e-clinical templates for home health services and power mobility devices that can be incorporated into progress notes to help ensure physicians meet medical necessity requirements. CMS and the MACs have also created documentation checklists, prior authorization coversheets, and other tools to assist providers and suppliers in verifying that they have obtained the documentation necessary to meet CMS's documentation requirements. Agency officials have stated that they are working on additional changes to reduce provider and supplier burden, for example, developing e-clinical templates for additional items and services. Furthermore, representatives from each of the MACs said that they call providers and suppliers that receive certain prior authorization non-affirmations to ensure suppliers and providers understand what information is required to obtain a provisional affirmation. 
Some MAC representatives said that having a phone conversation with suppliers allows them to resolve non-affirmations more expediently and reduces the number of resubmissions. Representatives from one MAC estimated that when they call providers and suppliers, they are able to resolve 50 to 80 percent of the issues that led to the non-affirmations. Several MAC representatives also said calling helps providers and suppliers gain a better understanding of CMS's documentation requirements, which will increase their likelihood of having future requests provisionally affirmed. In addition, CMS officials said that the agency encourages MACs to call referring physicians directly, when necessary, to remedy curable errors or obtain additional documentation needed to affirm a request because non-affirmation may be resolved faster without providers and suppliers serving as intermediaries. Providers and suppliers reported concerns about whether accessories deemed essential to group 3 power wheelchairs are subject to prior authorization and can be provisionally affirmed under the permanent DMEPOS program. According to CMS, the permanent DMEPOS program requires prior authorization for power wheelchair bases, but not for their accessories. CMS officials said this is because accessories do not meet the criteria for inclusion on the Master List. However, according to CMS, the MACs must review these accessories when they make prior authorization determinations because their decision to provisionally affirm a wheelchair base is based in part on their view of the medical necessity of the accessories. Therefore, if an essential accessory does not meet medical necessity requirements, a MAC will deny a prior authorization request for a power wheelchair base. In other words, in practice these accessories are subject to prior authorization, even though they are not technically included in the permanent DMEPOS program and therefore cannot be provisionally affirmed. 
As a result, providers and suppliers lack assurance about whether Medicare is likely to pay for these accessories. In December 2017, CMS officials stated that there have been preliminary discussions regarding the feasibility and effect of subjecting accessories essential to the group 3 power wheelchairs in the permanent DMEPOS program to prior authorization. However, CMS officials did not provide a timeframe for reaching a decision about whether they would do so. Federal internal control standards state that agencies should design control activities that enable an agency to achieve its objectives and should respond to any risks related to achieving those objectives. By not including essential accessories in prior authorization so they can be provisionally affirmed as appropriate, CMS may hinder its ability to achieve one of the stated benefits of the prior authorization program—to allow providers and suppliers to know prior to providing the items whether Medicare will likely pay for them. We found that CMS monitoring includes reviewing MAC reports of the results of prior authorization requests, examining MAC timeliness and accuracy, and contracting for independent evaluations of the prior authorization demonstrations. CMS officials told us that they review weekly, monthly, and annual MAC reports that include information such as numbers of requests received, completed, approved, denied, and resubmitted. According to CMS officials, they monitor MAC timeliness through these reports and separately have a contractor review MAC accuracy in processing requests. According to these officials, they have not identified any issues with MAC timeliness, as the MACs currently meet the standards for processing initial requests within 10 business days and resubmissions within 20 business days. 
In addition, CMS officials said that a sample of MACs' prior authorization decisions is reviewed each month for accuracy for each of the prior authorization demonstrations, and the reviews have not identified any issues with these decisions. CMS officials said that they meet with provider and supplier groups regularly to solicit feedback, to identify issues that need to be addressed, and to determine whether there are any problems, such as reduced beneficiary access to care. According to CMS officials, they have not identified any negative impact on beneficiary access to care as a result of implementing prior authorization. CMS has contracted for independent evaluations of the power mobility device, repetitive scheduled non-emergency ambulance services, and non-emergency hyperbaric oxygen demonstrations. In December 2017, CMS officials told us that evaluations will be completed and results available after the demonstrations end. In December 2017, officials told us that they also plan to contract for an evaluation of the permanent program after more time has passed. Most prior authorization programs are scheduled to end in 2018, with all the demonstrations concluding and only the limited permanent program remaining. The non-emergency hyperbaric oxygen demonstration ended in February 2018, the power mobility device demonstration ends in August 2018, and the repetitive scheduled non-emergency ambulance services demonstration ends in November 2018. The home health services demonstration has been on pause since April 2017 with no plans to resume as of February 2018, although CMS stated that it is considering improvements to the demonstration. The permanent program, which currently consists of two group 3 power wheelchairs, is the only prior authorization program that will remain. 
According to CMS officials, these wheelchairs are very low volume, and the HHS Office of the Inspector General reported that these wheelchairs represent just a small percentage of all durable medical equipment claims. CMS has not made plans for continuing expiring or paused prior authorization programs or expanding prior authorization. However, officials told us that they would like to see prior authorization for additional items. For example, CMS officials said that they have considered prior authorization for items such as hospital beds and oxygen concentrators, because these have high utilization or improper payment rates. In addition, in December 2017, CMS officials said that the agency is evaluating whether it has met the requirements for nationwide expansion of the repetitive scheduled non-emergency ambulance services demonstration established by the Medicare Access and CHIP Reauthorization Act of 2015. However, CMS officials also said that they have not yet determined the next steps for the use of prior authorization. Federal internal control standards state that agencies should identify, analyze, and respond to risks related to achieving objectives. By not taking steps, based on results from the evaluations, to continue prior authorization, CMS risks missed opportunities for achieving its stated goals of reducing costs and realizing program savings by reducing unnecessary utilization and improper payments. Since September 2012, CMS has begun using prior authorization to ensure that Medicare coverage and payment rules have been met before the agency pays for selected items and services. During this time, expenditures for items and services subject to prior authorization have been reduced. We estimate potential savings may be as high as about $1.1 to $1.9 billion, although other CMS program integrity efforts may have contributed to these reductions. 
Many stakeholders, including providers, suppliers, and MAC officials, support prior authorization, citing benefits such as reduced unnecessary utilization. However, providers and suppliers report concerns about whether accessories deemed essential to group 3 power wheelchairs are subject to prior authorization and can be provisionally affirmed. By not including essential accessories in prior authorization, CMS may hinder its ability to achieve one of the stated benefits of the prior authorization program—to allow providers and suppliers to know prior to providing the items whether Medicare will likely pay for them. All four prior authorization demonstrations are either paused or will end in 2018, and CMS does not have plans to extend these programs or expand the use of prior authorization to additional items and services with high rates of unnecessary utilization or improper payments. By not taking steps, based on results from the evaluations, to continue prior authorization, CMS risks missed opportunities for achieving its stated goals of reducing costs and realizing program savings by reducing unnecessary utilization and improper payments. We are making the following two recommendations to CMS. The Administrator of CMS should subject accessories essential to the group 3 power wheelchairs in the permanent DMEPOS program to prior authorization. (Recommendation 1) The Administrator of CMS should take steps, based on results from evaluations, to continue prior authorization. These steps could include: resuming the paused home health services demonstration; extending current demonstrations; or identifying new opportunities for expanding prior authorization to additional items and services with high unnecessary utilization and high improper payment rates. (Recommendation 2) We provided a draft of this report to HHS for comment, and its comments are reprinted in appendix III. HHS also provided technical comments, which we incorporated as appropriate. 
HHS neither agreed nor disagreed with the recommendations but said it would continue to evaluate prior authorization programs and take our findings and recommendations into consideration in developing plans or determining appropriate next steps. In addition, in response to our recommendation to take steps to continue prior authorization, HHS noted that the President’s fiscal year 2019 budget for HHS included a legislative proposal to extend its statutory authority to permanently require prior authorization for specified Medicare fee-for-service items and services to all Medicare fee-for-service items and services. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact A. Nicole Clowers at (202) 512-7114 or clowersa@gao.gov or Kathleen M. King at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. In December 2015, the Centers for Medicare & Medicaid Services (CMS) established a permanent prior authorization program for certain durable medical equipment, prosthetics, orthotics, and supplies (DMEPOS). 
To select the items subject to prior authorization, CMS compiled a Master List of items that 1) appear on the DMEPOS Fee Schedule list, 2) have an average purchase fee of $1,000 or greater (adjusted annually for inflation) or an average rental fee schedule of $100 or greater (adjusted annually for inflation), and 3) meet one of these two criteria: the item was identified in a GAO or Department of Health and Human Services Office of Inspector General report that is national in scope and published in 2007 or later as having a high rate of fraud or unnecessary utilization, or the item is listed in the 2011 or later published Comprehensive Error Rate Testing program's annual report. The information presented in this appendix was reprinted from information in a December 2015 final rule. We did not edit it in any way, such as to spell out abbreviations. (See table 3 for the Master List.) Tables 4 through 7 present monthly expenditures for items and services subject to prior authorization in initial demonstration states, expansion demonstration states, and non-demonstration states from 6 months prior to each demonstration's implementation through March 2017, the most recent month for which reliable data is available. In addition to the contact named above, Martin T. Gahart (Assistant Director), Lori Achman (Assistant Director), Peter Mangano (Analyst-in-Charge), Sylvia Diaz Jones, and Mandy Pusey made key contributions to this report. Also contributing were Sam Amrhein, Muriel Brown, Eric Wedum, and Jennifer Whitworth.
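The three Master List criteria above amount to a simple conjunctive filter. The sketch below is illustrative only: the item fields and the example item are hypothetical, and the dollar thresholds are the unadjusted base amounts that CMS adjusts annually for inflation.

```python
# Sketch of the Master List selection criteria described above.
# Field names and the example item are hypothetical.

def qualifies_for_master_list(item):
    on_fee_schedule = item["on_dmepos_fee_schedule"]
    meets_fee_threshold = (item["avg_purchase_fee"] >= 1000
                           or item["avg_rental_fee"] >= 100)
    flagged_by_oversight = (item["in_gao_or_oig_report_2007_or_later"]
                            or item["in_cert_report_2011_or_later"])
    return on_fee_schedule and meets_fee_threshold and flagged_by_oversight

wheelchair = {
    "on_dmepos_fee_schedule": True,
    "avg_purchase_fee": 4500,   # hypothetical purchase fee, in dollars
    "avg_rental_fee": 0,
    "in_gao_or_oig_report_2007_or_later": True,
    "in_cert_report_2011_or_later": False,
}
print(qualifies_for_master_list(wheelchair))  # prints True
```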
|
CMS required prior authorization as a demonstration in 2012 for certain power mobility devices, such as power wheelchairs, in seven states. Under the prior authorization process, MACs review prior authorization requests and make determinations to approve or deny them based on Medicare coverage and payment rules. Approved requests will be paid as long as all other Medicare payment requirements are met. GAO was asked to examine CMS's prior authorization programs. GAO examined 1) the changes in expenditures and the potential savings for items and services subject to prior authorization demonstrations, 2) reported benefits and challenges of prior authorization, and 3) CMS's monitoring of the programs and plans for future prior authorization. To do this, GAO examined prior authorization program data, CMS documentation, and federal internal control standards. GAO also interviewed CMS and MAC officials, as well as selected provider, supplier, and beneficiary groups. Prior authorization is a payment approach used by private insurers that generally requires health care providers and suppliers to first demonstrate compliance with coverage and payment rules before certain items or services are provided to patients, rather than after the items or services have been provided. This approach may be used to reduce expenditures, unnecessary utilization, and improper payments. The Centers for Medicare & Medicaid Services (CMS) has begun using prior authorization in Medicare through a series of fixed-length demonstrations designed to measure their effectiveness, and one permanent program. According to GAO's analyses, expenditures decreased for items and services subject to a demonstration. GAO's analyses of actual expenditures and estimated expenditures in the absence of the demonstrations found that estimated savings from all demonstrations through March 2017 could be as high as about $1.1 to $1.9 billion. 
While CMS officials said that prior authorization likely played a large role in reducing expenditures, it is difficult to separate the effects of prior authorization from other program integrity efforts. For example, CMS implemented a durable medical equipment competitive bidding program in January 2011, and according to the agency, it resulted in lower expenditures. Many provider, supplier, and beneficiary group officials GAO spoke with reported benefits of prior authorization, such as reducing unnecessary utilization. However, provider and supplier group officials reported that providers and suppliers experienced some challenges. These include difficulty obtaining the necessary documentation from referring physicians to submit a prior authorization request, although CMS has created templates and other tools to address this concern. In addition, providers and suppliers reported concerns about whether accessories deemed essential to the power wheelchairs under the permanent durable medical equipment, prosthetics, orthotics, and supplies (DMEPOS) program are subject to prior authorization. In practice, Medicare Administrative Contractors (MAC) that administer prior authorization programs review these accessories when making prior authorization determinations, even though they are not technically included in the program and therefore cannot be provisionally affirmed. As a result, providers and suppliers lack assurance about whether Medicare is likely to pay for these accessories. This is contrary to a CMS-stated benefit of prior authorization—to provide assurance about whether Medicare is likely to pay for an item or service—and to federal internal control standards, which call for agencies to design control activities that enable an agency to achieve its objectives. CMS monitors prior authorization through various MAC reports. 
CMS also reviews MAC accuracy and timeliness in processing prior authorization requests and has contracted for independent evaluations of the demonstrations. Currently, prior authorization demonstrations are scheduled to end in 2018. Despite its interest in using prior authorization for additional items, CMS has not made plans to continue its efforts. Federal internal control standards state that agencies should identify, analyze, and respond to risks related to achieving objectives. CMS risks missed opportunities for achieving its stated goals of reducing costs and realizing program savings by reducing unnecessary utilization and improper payments. GAO recommends that CMS (1) subject accessories essential to the power wheelchairs in the permanent DMEPOS program to prior authorization and (2) take steps, based on results from evaluations, to continue prior authorization. The Department of Health and Human Services neither agreed nor disagreed with GAO's recommendations but said it would continue to evaluate prior authorization programs and take GAO's findings and recommendations into consideration in developing plans or determining appropriate next steps.
|
The Aviation and Transportation Security Act designated TSA as the primary federal agency responsible for securing all modes of transportation. In fiscal year 2005, Congress appropriated funds for surface transportation security, and the accompanying conference report directed that some of those funds go to rail compliance inspectors, the predecessors to today's surface transportation security inspectors—referred to as surface inspectors. Public and private transportation entities have the principal responsibility to carry out safety and security measures for their services. As such, TSA coordinates with public and private transportation entities to identify vulnerabilities, share intelligence information, and work to mitigate security risks to the system. See table 1 for examples of the entities TSA works with to secure the various surface transportation modes. In fiscal year 2005, $10 million of TSA's surface transportation security appropriation was to hire and deploy up to 100 rail compliance inspectors. TSA assigned inspectors to oversee security and provide oversight and assistance to railroads, and subsequently, other surface transportation modes, including mass transit and passenger rail, freight rail, highway, and pipeline sectors. TSA has since increased the number of surface inspectors, and since 2013 has maintained more than 200 Full Time Equivalent (FTE) positions. See table 2 for additional details on the number of TSA surface inspector FTEs from fiscal years 2013 through 2017. In August 2007, the 9/11 Commission Act was signed into law and required TSA to issue security regulations for freight and passenger rail, among other requirements. TSA also issued regulations governing surface transportation security on its own initiative. As of July 2017, TSA has issued the following regulations related to surface transportation: Rail Inspections: Issued in November 2008, 49 C.F.R. 
part 1580 requires certain freight railroad carriers and passenger rail operations (passenger railroad carriers and rail transit systems) to designate a rail security coordinator, notify the Transportation Security Operations Center regarding any significant security concerns, and, if applicable, ensure a secure chain of custody of rail cars containing certain hazardous materials, and be able to provide location and shipping information for certain rail cars, among other things. The hazardous materials subject to this regulation include certain explosives, toxic inhalation hazardous materials (TIH), and radioactive materials. See appendix II for additional details. Maritime Inspections: TSA also partners with the U.S. Coast Guard (USCG) in securing maritime ports, facilities and vessels. TSA's responsibilities include enrolling Transportation Worker Identification Credential (TWIC) applicants, conducting background checks to assess the individual's security threat, and issuing TWICs. In addition, TSA is authorized to conduct inspections of persons using TWIC to access the secured area of a regulated maritime facility. Surface inspectors work under the direct command authority of the Federal Security Director (FSD) in the field. As of fiscal year 2017, TSA used a staffing model to allocate surface inspector staff to 49 different field offices, separated into seven geographic regions around the country. According to TSA, all but one of the surface field office locations are at or near major airports. Figure 1 depicts surface field office locations by region. Surface inspector policies and procedures and operational oversight are managed separately. Program Guidance: Within TSA's Office of Security Operations, the Surface Compliance Branch plans surface transportation security activities and programs, and develops an annual work plan that lays out the minimum required activities to be completed for surface inspectors in the field. 
The Office of Security Policy and Industry Engagement (OSPIE) collects and analyzes data on certain surface inspector activities such as the Baseline Assessment for Security Enhancement (BASE) program, TIH attendance rates, and freight rail compliance rates; coordinates with industry stakeholders; and develops strategic plans, among other things. Operational oversight: The Assistant Federal Security Director for Inspections (AFSD-I) in each field office manages surface inspectors on a day-to-day basis, oversees the scheduling of surface inspector work plan activities, and reviews inspectors' documentation of activities in PARIS, TSA's system of record. FSDs are ultimately responsible for ensuring that surface inspectors complete their annual work plan requirements. In 2010, TSA created the Regional Security Inspector (RSI) position in an effort to improve oversight of surface inspectors in the field and standardize inspections across field offices. One RSI is assigned to each of the seven geographic regions and serves as a liaison between TSA headquarters staff and surface inspectors in the field. Each RSI is also assigned to be the lead liaison between TSA and the Class I railroads within their assigned geographic region. See figure 2 for surface inspectors' command structure as of 2017. TSA documents state that it employs a risk-based approach for securing transportation modes and identifies managing risk as one of its strategic goals to help identify and plan security priorities and activities. According to TSA officials, TSA uses the National Infrastructure Protection Plan (NIPP) risk management framework and the DHS Risk Management Fundamentals as its primary risk guidance. In June 2006, DHS issued the NIPP which established a six-step risk management framework to establish national priorities, goals and requirements. Most recently updated in 2013, the NIPP defines risk as a function of three elements: threat, vulnerability and consequence. 
Threat is an indication of the likelihood that a specific type of attack will be initiated against a specific target or class of targets. Vulnerability is the probability that a particular attempted attack will succeed against a particular target or class of targets. Consequence is the effect of a successful attack. TSA uses the TSSRA, a biennial risk assessment that considers the three elements of risk to measure the risk of various terrorist attack scenarios, evaluate transportation modes, and identify surface security priorities. Surface inspectors conduct a variety of activities to implement TSA's surface transportation security mission, including (1) regulatory inspections for freight and passenger rail systems, (2) regulatory TWIC inspections, and (3) non-regulatory security assessments and training in which surface transportation entities participate on a voluntary basis. Surface inspector activities are, in part, determined by an annual surface work plan that lays out the minimum required number of surface inspector activities to be completed by each field office. Specifically, the work plan requirements are designed to take up about one-third of inspectors' available working hours, with the expectation that the other two-thirds of inspectors' time will be used for related activities, such as documentation and follow-up, or other tasks as determined by local AFSD-Is and FSDs in the field. To develop the annual surface work plan, officials from the Office of Security Operations' Surface Compliance Branch and OSPIE meet with each of the RSIs once a year to determine the requirements for each office. According to TSA officials, they rely on the previous year's requirements as well as data on surface inspectors' past activities as logged in PARIS as a starting point to develop the requirements, and adjust the work plan based on their professional judgment of the unique environment in each field office's area of responsibility.
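The three risk elements above are often illustrated quantitatively by treating risk as the product of threat, vulnerability, and consequence. The sketch below is purely illustrative and is not TSA's or the TSSRA's actual methodology; the modes, probabilities, and consequence values are hypothetical.

```python
# Illustrative only: a common quantitative reading of the NIPP risk elements
# treats risk as threat x vulnerability x consequence (R = T * V * C).
# The values below are hypothetical, not TSSRA data.

def risk_score(threat, vulnerability, consequence):
    """Likelihood an attack is attempted x probability it succeeds x
    consequence of a successful attack."""
    return threat * vulnerability * consequence

# Hypothetical attack scenarios for two unnamed transportation modes.
scenarios = {
    "mode_a": risk_score(threat=0.30, vulnerability=0.50, consequence=200),
    "mode_b": risk_score(threat=0.10, vulnerability=0.40, consequence=100),
}

# Each mode's share of total risk, analogous to the TSSRA's
# "percent of domestic total risk" comparisons.
total = sum(scenarios.values())
shares = {mode: score / total for mode, score in scenarios.items()}
```

Under these made-up numbers, mode_a accounts for roughly 88 percent of the total and mode_b for roughly 12 percent, which is the kind of cross-modal comparison the TSSRA reports.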
TSA officials stated that they consider variables such as the compliance rates for inspections, the amount of TIH materials being shipped through an area, and any other relevant risk-related information when they develop the work plan. Surface inspectors conduct inspections to enforce several freight and passenger rail security requirements. Table 3 provides descriptions of these inspections and appendix II provides a complete listing of TSA's regulatory activities. TSA also tracks the rate at which the inspected entities comply with the regulations discussed in table 3. According to TSA data, on average, overall compliance rates for inspections have remained relatively high, and the compliance rates have generally improved over the years as entities have become more familiar with the processes and expectations of each type of inspection. Surface inspectors work with the USCG to conduct inspections of TWIC card holders attempting to access the secured area of maritime facilities regulated by the Maritime Transportation Security Act of 2002 (MTSA). TSA first issued the TWIC regulation in 2007 in cooperation with the USCG, and according to TSA officials, began nationwide implementation of TSA inspection of TWICs at maritime facilities in fiscal year 2017. Surface inspectors scan cards using a TWIC card reader to verify that the card presented is valid and belongs to the card holder. TSA may pursue civil enforcement and can refer violators for criminal proceedings through the USCG. TSA officials stated they set the total minimum required TWIC inspections at 1,315 combined across all surface inspector field offices for fiscal year 2017 as a starting point, and would modify the requirements in subsequent years, as discussed below. According to TSA, it is too soon to determine compliance rates for TWIC inspections.
Surface inspectors perform a variety of non-regulatory surface-related activities, such as various types of assessments, which require surface entities' voluntary participation. Table 4 provides a list of key non-regulatory activities surface inspectors perform. For a full list of activities surface inspectors perform, see appendix II. Since 2006, TSA has made adjustments to the BASE program to expand its use to more surface modes and address implementation challenges. To conduct a BASE review, surface inspectors use a standardized checklist to evaluate and score an entity's security policies and procedures for areas such as employee security training, cybersecurity, and facility access control, among other items. According to TSA officials, the results of the BASE reviews are intended to help track an entity's progress in implementing specific security measures over time and improve the overall security posture among surface transportation entities, as well as inform transportation security grant funding. Surface inspectors also use entities' BASE review scores to help inform Exercise Information System (EXIS) training programs inspectors facilitate for transportation entities. Initially, the BASE program was designed to assess large mass transit entities in major metropolitan areas that transported 60,000 riders or more daily. TSA officials stated in 2017 that TSA has completed initial and follow-up BASE reviews for the top 100 mass transit agencies in the country, which comprise approximately 80 percent of the ridership in the United States. In 2012, TSA expanded the BASE reviews to the highway mode to include trucking, motor coach, and school bus operators.
Additionally, TSA has taken steps to address challenges related to the implementation of the BASE reviews, including an initial lack of training and guidance for surface inspectors in conducting and evaluating the BASE reviews and difficulty applying the BASE template for smaller mass transit entities and highway entities. For example, surface inspectors we interviewed at six field offices indicated that they received limited to no training to conduct the initial BASE reviews. Office of Security Operations officials acknowledged that the BASE program initially lacked scoring guidance to allow surface inspectors to make objective evaluations. Additionally, two industry entities we spoke with stated that some BASE questions, as initially developed, seemed inappropriate or irrelevant given the scope of their operations, and that their scores reflected areas that they were not able to modify based on their limited size and resources. Further, in 2010, the DHS Office of Inspector General reported that TSA needed to provide increased training and guidance for inspectors to ensure that BASE assessments gather effective, objective data. In response, officials from TSA's Surface Compliance Branch stated that they established a BASE Advisory Panel and held a series of training workshops throughout the country on how to conduct BASE assessments. Specifically, in fiscal year 2014, TSA established a panel composed of mass transit experts to adjust the BASE tool by modifying topics and removing outdated questions in an effort to improve the quality and applicability of the assessments for the industry stakeholders. TSA has also modified the BASE template over time to include areas such as cybersecurity and active shooter training, among others.
TSA reported that it held a series of 16 workshops in 2015 around the country where headquarters officials met with inspectors to train them on how to conduct BASE assessments and correctly apply scoring guidance to help ensure inspectors applied the BASE criteria consistently. Moreover, in fiscal year 2016, TSA developed a targeted BASE that focuses only on an entity's areas of concern as identified by surface inspectors in a previous BASE review. Further, TSA is piloting a modified BASE template in fiscal year 2017 that eliminates questions that may not apply for smaller mass transit and highway entities. According to Surface Compliance Branch and OSPIE officials, these changes have led to more consistent and more reliable results in the BASE scores. We believe that TSA's efforts to improve training and guidance, as well as its establishment of the BASE Advisory Panel, will help address the agency's previous concerns related to the implementation of the BASE review. According to TSA headquarters and field officials, in addition to surface inspection activities, surface inspectors are tasked, to varying degrees, with aviation activities. However, TSA officials told us that they are unable to identify the total time surface inspectors spend on aviation activities because of data limitations. For example, surface inspectors may perform aviation activities on a regular basis as a "duty agent," or on an as-needed basis as determined by their local manager—their AFSD-I. TSA guidance directs surface inspectors to report the time they spend on all activities into TSA's PARIS database. TSA officials responsible for managing PARIS told us that it has two independent modules – aviation and surface – and that surface inspectors enter aviation-related activities in both the aviation and surface modules.
Specifically, TSA guidance directs surface inspectors to document their time serving as "duty agent" in the surface module of PARIS, but to document time spent on aviation inspections, incidents, or investigations – including those that take place during an inspector's time serving as the duty agent – in the aviation module of PARIS. See table 5 for examples of the types of aviation activities surface inspectors record in each separate PARIS module. TSA officials told us that it is not possible to identify the time surface inspectors document in the aviation module of PARIS because there is no efficient, reliable way to distinguish surface inspectors from aviation or cargo inspectors in the data. Since TSA cannot reliably identify activities surface inspectors have entered into the aviation module of PARIS, TSA is only aware of the portion of time surface inspectors spent on aviation activities that was logged in the surface module. As a result, TSA does not have complete information on how surface inspector resources are being used or the extent to which surface inspectors are being used to perform aviation activities. According to some surface inspectors we spoke to, the resources devoted to aviation activities can be substantial. Surface inspectors we interviewed at 16 of the 17 TSA field offices contacted stated that they perform aviation duties. One inspector stated she had received calls to respond to 12 different aviation incidents in one shift as duty inspector, and other inspectors stated that each incident report could subsequently take between 2 and 12 hours to complete. Surface inspectors from another office located near a major airport told us they have to work overtime to complete aviation incident reports and still meet their required surface activities. Further, we met with surface inspectors stationed at four different major airports who each estimated spending 20 percent, 25 percent, 30 percent, and 50 percent of their total working hours on aviation tasks, respectively.
Standards for Internal Control in the Federal Government states that agencies should use complete information to make informed decisions and evaluate the agency’s performance in achieving key objectives. As stated previously, one of TSA’s key objectives is to employ a risk-based approach to all operations to identify, manage, and mitigate risk. Standards for Internal Control in the Federal Government also states that agencies should clearly document all activities in a manner that allows the documentation to be readily available for examination. Without having access to complete information on all inspector activities, including aviation activities, TSA cannot monitor how frequently surface inspectors are being used to support aviation. In addition, by not using complete information on how much time surface inspectors spend working in support of aviation, TSA is limited in its ability to make informed future decisions on annual resource needs for surface inspectors, which will be especially important as TSA takes steps to expand its inspection activities with the promulgation of new surface security regulations. By addressing the limitations in the aviation module of PARIS, TSA would be able to more reliably access complete information on all inspector activities. Also, it would have the information it needs to make fully informed decisions about surface inspector resources and activities, and to evaluate surface inspectors’ performance in achieving key surface security objectives. Since there is no way to identify surface inspectors in the aviation module of PARIS at the aggregate level, we were unable to conduct our own analysis of all surface inspector activities. However, we were able to analyze data on how surface inspectors reported spending their time in the surface module of PARIS, including time spent on aviation activities as documented in this particular module. 
Our analysis showed that from fiscal years 2013 to 2017, surface inspectors reported spending approximately 80 percent of their time on non-regulatory activities, while spending approximately 20 percent on regulatory inspections. Figure 3 shows a breakdown of the time surface inspectors recorded spending in the surface module of PARIS for fiscal year 2016, the most recent complete year of data available. See appendix III for similar breakdowns for each fiscal year from 2013 to 2017. In fiscal year 2017, TSA's Surface Compliance Branch implemented an updated staffing model to redistribute 222 surface-funded positions across its 49 surface field offices based on the factors described in table 6 below. TSA considered four of these factors – High Threat Urban Area (HTUA)/Urban Area Security Initiative (UASI), Mass Transit, TWIC, and TIH – to be related to risk. For example, TSA derived its list of HTUAs based on risk assessments conducted under the UASI program. We have previously reported that the UASI methodology for determining risk scores and distributing grant funds is reasonable, and that UASI grant allocations are strongly associated with a city's current relative risk score. Additionally, according to TSA, inspectors focus on entities within surface transportation modes or shipments of certain hazardous materials the agency determines could pose the greatest security vulnerability and which could potentially be more likely to be targeted by terrorists. The DHS Risk Lexicon 2010 and the 2013 NIPP risk management framework, which are TSA's primary risk guidance, define risk-informed decision-making as the determination of a course of action predicated on the assessment of risk, the expected impact of that course of action on that risk, as well as other relevant factors. The DHS Risk Lexicon 2010 further states that risk-informed decision-making may also take into account multiple sources of information not included specifically in the assessment of risk.
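The report does not disclose the mechanics of TSA's staffing model, but a proportional allocation over a weighted composite of the factors in table 6 is one plausible shape for such a model. The sketch below is an assumption for illustration: the factor weights, office scores, and largest-remainder rounding are invented, with only the factor names taken from the report.

```python
# Hypothetical sketch of a risk-informed staffing allocation. TSA's actual
# model and weights are not described in this report; everything numeric
# here is an illustrative assumption.

def allocate(positions, office_scores):
    """Distribute a fixed number of positions in proportion to each office's
    composite score, using largest-remainder rounding so the total is exact."""
    total = sum(office_scores.values())
    raw = {o: positions * s / total for o, s in office_scores.items()}
    alloc = {o: int(r) for o, r in raw.items()}
    leftover = positions - sum(alloc.values())
    # Hand remaining positions to offices with the largest fractional parts.
    for office in sorted(raw, key=lambda o: raw[o] - alloc[o], reverse=True)[:leftover]:
        alloc[office] += 1
    return alloc

# A composite score could be a weighted sum of the report's factors
# (HTUA/UASI, mass transit, TWIC, TIH, plus workload information).
weights = {"htua": 0.3, "mass_transit": 0.2, "twic": 0.15,
           "tih": 0.15, "entities": 0.1, "workload": 0.1}
offices = {
    "office_1": {"htua": 1.0, "mass_transit": 0.8, "twic": 0.5,
                 "tih": 0.2, "entities": 0.6, "workload": 0.7},
    "office_2": {"htua": 0.0, "mass_transit": 0.1, "twic": 0.9,
                 "tih": 0.8, "entities": 0.3, "workload": 0.4},
}
scores = {o: sum(weights[f] * v for f, v in fs.items()) for o, fs in offices.items()}
staff = allocate(10, scores)  # e.g., 10 positions across two offices
```

Largest-remainder rounding is used here only so that the integer allocations always sum to the fixed number of funded positions; any similar apportionment rule would serve.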
Because TSA considered multiple risk factors in addition to other information, such as the number of regulated entities in an area and the number of required activities, in its staffing model, we determined that TSA used a risk-informed model to allocate surface inspector staff to its 49 offices. TSA surface inspectors perform a wide range of regulatory and non-regulatory activities to fulfill the agency's objective of employing risk-based security, but we found that between fiscal years 2013 and 2017 surface inspector activities did not align with the risks TSA identified for surface transportation. To inform its security strategy, TSA assesses risk within and across the aviation, freight rail, passenger rail/mass transit, highway, and pipeline modes approximately every 2 years using the TSSRA. According to the TSSRA's cross-modal risk assessments between fiscal years 2013 and 2017, one particular surface mode consistently posed the highest risk, and another consistently posed the lowest risk out of all surface transportation modes. For example, in fiscal year 2016, TSA found that the lowest risk mode posed approximately 6 percent of domestic total risk while the highest risk mode posed 27 percent of domestic total risk. However, our analysis of data from the surface module of PARIS showed that inspectors reported spending between 35 and 45 percent of their time on the lowest risk mode between fiscal year 2013 and fiscal year 2016 – the most time spent on any surface mode. Of the time reported in the surface module of PARIS in fiscal year 2016, surface inspectors reported spending 38 percent of their time on the lowest risk transportation mode while they reported spending approximately 16 percent of their time on the highest risk surface mode according to the TSSRA. See figure 4 for a comparison between the percent of time inspectors recorded spending on each mode and the percent of risk identified in the TSSRA.
We found that TSA did not use the results of risk assessments that measure threat, vulnerability, and consequence, like the TSSRA, when it developed surface inspector work plans, or when it monitored activities inspectors conducted, including those in addition to the minimum work plan requirements. While TSA officials told us that they considered the results of the TSSRA, TSA officials could not provide evidence that they incorporated the results of the TSSRA or other risk assessments when developing the work plan and monitoring inspector activities, as required by DHS risk management guidance. For example, TSA officials could not provide documentation of how and why they selected certain work plan activities to address lower risk modes, or how they monitored the extent to which implemented activities aligned with or addressed risks. We found that TSA did not incorporate the results of the TSSRA or other risk assessments when it monitored how surface inspector activities were implemented beyond the minimum requirements laid out in the work plan. Specifically, we found that between fiscal years 2013 and 2017, inspectors spent about half their working hours fulfilling work plan requirements. Surface Compliance Branch officials told us that they reviewed PARIS data on all surface inspector activities, as reported in the surface module of PARIS, annually to inform staffing decisions and conducted detailed analysis of surface inspector time starting in fiscal year 2015. However, this analysis did not evaluate the extent to which surface inspector time beyond the work plan requirements corresponded to surface transportation risks as identified by the TSSRA or other risk assessments. Further, TSA officials told us that they did not think surface inspector time should be compared to risks identified in cross-modal risk assessments like the TSSRA because required regulatory inspections are unpredictable and can take a significant amount of time. 
However, as previously discussed, we found that, of the time reported in the surface module of PARIS, inspectors reported spending approximately 20 percent of their time on regulatory inspections, with the remaining 80 percent spent on non-regulatory activities. More than half of the industry representatives we spoke to (9 of 15) identified benefits from inspectors' activities in surface transportation modes other than freight rail. For example, two of the three representatives of MTSA-regulated companies we spoke to said that TSA's TWIC inspections had significant benefits for the security of their facilities, and stated that they wanted more TWIC inspections and civil enforcement activities from inspectors because these activities discourage misuse of TWICs at their facilities. Representatives from two maritime companies, one highway company, and three public transportation systems told us that they wanted TSA surface inspectors to do more. Additionally, a representative for one national industry organization stated that his organization was concerned that TSA is mainly focused on freight rail when the principal threat resides in the passenger and mass transit modes, and suggested that TSA deploy inspection resources from the freight rail mode to support more non-regulatory initiatives in the passenger rail/mass transit mode. According to TSA, the agency employs a risk-based approach – which the DHS Risk Lexicon defines as using the assessment of risk as the primary decision driver – to all operations to identify, manage, and mitigate risk in all TSA lines of business. One TSA risk strategy document specifically emphasizes the importance of linking the TSSRA, among other risk assessments, to the identification of risk-reduction activities as part of a risk-based approach to security.
Moreover, the NIPP risk management framework and the DHS Risk Management Fundamentals Doctrine, which TSA officials told us are TSA’s primary risk management guidance documents, also state that entities should systematically prioritize and implement activities and resources to mitigate and manage risks identified in risk assessments. These documents also state that monitoring implemented decisions and comparing observed and expected effects to influence subsequent risk management decisions are key steps in the homeland security risk management process. The DHS Risk Management Fundamentals Doctrine further states that agencies should document the development and selection of alternative risk management actions, including assumptions and risk strategies such as the decision to not take action and accept risk, in order to provide decision-makers with a clear picture of the benefits of each action. It also explains that the risk management process allows organizations to clearly explain the rationale behind resource decisions. TSA did not use the results of risk assessments – such as the TSSRA – or other risk information when it developed its surface inspector work plan requirements. Instead, TSA prioritized the lowest-risk surface transportation mode, reducing the amount of surface security resources available to address identified risks in other, higher-risk surface transportation modes. As a result, TSA’s limited surface transportation security resources were not used in a risk-based way. By incorporating the results of its risk assessments when it plans and monitors surface inspector activities, including those not required by the work plan, TSA would be better able to ensure that its limited surface transportation security resources are being used to effectively and efficiently address the highest risks to surface transportation, especially as risks evolve. 
Incorporating risk assessment results in planning and monitoring surface inspector activities will also allow TSA to ensure that its surface inspectors are making progress toward achieving TSA's objective of risk-based security. Additionally, by documenting its risk mitigation decisions and strategies, TSA would be able to more clearly explain the rationale for its resource decisions, including when TSA decides to accept risk or prioritize lower-risk activities for any reason. In fiscal year 2012, TSA began developing the Risk Mitigation Activities for Surface Transportation (RMAST) program in support of TSA's risk-based security initiative. According to TSA's fiscal year 2017 work plan, the RMAST program incorporates specific risk reduction measures and focuses time and resources on high-risk locations through (1) public observation, (2) site security observations, and (3) stakeholder engagement activities. Though TSA field officials told us that inspectors have conducted these activities in some form in the past, TSA began piloting this particular program in fiscal year 2014 and made RMAST a work plan requirement for each office starting in fiscal year 2017. In addition to TSA demonstrating its commitment to the RMAST program by adding it as a required work plan activity, we found that inspectors reported spending an increasing amount of time conducting RMASTs since fiscal year 2014, and that RMASTs now comprise a larger percentage of inspector time (see table 7). Although surface inspectors reported spending an increasing amount of time on RMAST activities, we found that TSA has not identified or prioritized the high-risk entities and locations on which the RMAST program is intended to focus time and resources.
For example, the fiscal year 2017 surface inspector work plan states that the required number of RMASTs each office should conduct was developed based on the presence of applicable stakeholders in each office’s area, but we found that TSA did not identify any such stakeholders in its work plan. Specifically, while the work plan guidance directed surface inspectors to conduct RMASTs with entities that fit “listed” criteria, this list consisted of all surface modes of transportation for which TSA has authority and did not include any criteria surface inspectors could use to identify the highest-risk and most critical locations, such as by type, characteristics, or location of high-risk entities. TSA officials told us that they have not identified high-risk entities for RMAST because there are too many potential entities and stated that there is no way to provide a full list of all entities in each office’s area. However, the intent of the RMAST program is to focus time and resources on high-risk entities and locations, which precludes the need to provide a complete list of all surface transportation entities in each area. Further, TSA officials told us that TSA has not provided any guidance to the field beyond the work plan on how to identify appropriate entities for RMASTs, but that they rely on surface field offices to identify the highest-risk entities in their own areas. Officials from three field offices told us that inspectors try to conduct RMASTs based on threat information or previous BASE scores, but inspectors in one of those offices said that the intelligence information they receive from TSA is insufficient to help them identify threats and conduct outreach for RMASTs. As previously discussed, the NIPP risk management framework and the DHS Risk Management Fundamentals Doctrine both state that entities should identify and assess risks and prioritize resources to mitigate those risks. 
If TSA identified and prioritized the types of high-risk entities and locations it intends the RMAST program to reach, surface inspectors would have information that would enable them to implement these activities in a more risk-based manner. While TSA has identified broad objectives for the RMAST program, it has not defined these objectives – and associated program activities – in a measurable and clear way. Specifically, in its description of RMAST in the fiscal year 2017 work plan implementation guidance, TSA stated that the RMAST program will be risk-based, intelligence-driven, and mitigate current threats and vulnerabilities, but did not provide further information that would allow TSA to measure progress toward achieving these objectives. Similarly, in its budget justifications for fiscal years 2014, 2015, and 2016 TSA stated that RMAST is intended to improve security and reduce the need for stakeholders to stretch limited resources to harden security at their most critical and high-risk locations, but TSA did not describe how it would measure whether security had improved, or if stakeholders’ resource needs were reduced. While our review of the fiscal year 2017 work plan guidance showed that TSA identified general categories of activities – public observation, site security observation, and stakeholder engagement – TSA did not identify what specific activities within each of these categories constitute an RMAST, or describe how those activities would help TSA achieve its objectives for the RMAST program. Some inspectors told us that the purpose of RMAST was unclear, that they had not been given the tools to perform RMAST in an effective and efficient way, or that the observation component of RMAST was not a valuable activity. 
TSA has not defined the RMAST program's objectives and associated activities in a measurable and clear way because, according to TSA officials, TSA has not identified an approach for determining the effectiveness of activities conducted under the program. Standards for Internal Control in the Federal Government states that management should establish proper controls – including the establishment and review of clearly defined objectives and performance measures – so that program objectives and processes are understood at all levels and progress toward achieving objectives can be assessed. By defining the program's objectives and associated activities in a measurable and clear way, TSA would be better positioned to measure progress toward achieving the program's goal of mitigating current threats and vulnerabilities, and surface inspectors may better understand how to effectively carry out the program. TSA has employed surface inspectors for a variety of regulatory and non-regulatory activities intended to mitigate risks to surface transportation and enhance the security of the United States' surface transportation systems and networks. Working with surface transportation entities, which have the primary responsibility for securing their respective operations, TSA surface inspectors enforce security regulations for the freight and passenger rail modes, but spend the majority of their time conducting non-regulatory activities such as security assessments, exercises, and observations. While TSA uses information on some surface inspector activities to monitor and make decisions on these activities, limitations in the PARIS data system prevent TSA from readily accessing complete information on how much time inspectors spend working in support of aviation.
Without addressing these limitations, TSA is limited in its ability to make informed future decisions on annual resource needs for surface inspectors, which will be especially important as TSA takes steps to expand its inspection activities with the promulgation of new surface security regulations. Given that TSA spends only about 3 percent of its budget on surface activities, it is crucial that the agency have complete information on how resources are being used in order to best allocate these limited federal surface transportation security resources. According to TSA, the agency implements risk-based security – security activities that are driven primarily by the assessment of risk – to deliver the most effective security in the most efficient manner. While TSA has implemented a risk-informed process to allocate surface inspectors to its field offices, it has not taken steps to ensure that surface inspector activities align more closely with the risks TSA has identified in its risk assessments. As a result, TSA could continue to prioritize its limited resources to lower-risk surface modes, leaving fewer resources available for higher-risk modes. By using the results of risk assessments like the TSSRA when it plans and monitors surface inspector activities, TSA would be better able to ensure that limited surface transportation security resources are used to effectively and efficiently address the highest surface transportation security risks. Additionally, by documenting its risk mitigation decisions and strategies, TSA would be able to more clearly explain the rationale for its resource decisions, including when TSA decides to accept risk or prioritize lower-risk activities for any reason. Furthermore, by identifying and prioritizing the highest-risk entities and locations for its new RMAST program, surface inspectors would have information that would enable them to implement risk mitigation activities in a more risk-based way.
In addition, by clearly defining the program’s goals and activities, TSA would be better able to measure whether RMAST activities are achieving the program’s goal of increasing surface transportation security. We are making the following four recommendations to TSA: The Administrator of TSA should address limitations in TSA’s data system, such as by adding a data element that identifies individuals as surface inspectors, to facilitate ready access to information on all surface inspector activities. (Recommendation 1) The Administrator of TSA should ensure that surface inspector activities align more closely with higher-risk modes by incorporating the results of surface transportation risk assessments, such as the TSSRA, when it plans and monitors surface inspector activities, and that TSA documents its rationale for decisions to prioritize activities in lower-risk modes over higher-risk ones, as applicable. (Recommendation 2) The Administrator of TSA should identify and prioritize high-risk entities and locations for TSA’s Risk Mitigation Activities for Surface Transportation (RMASTs). (Recommendation 3) The Administrator of TSA should define clear and measurable objectives for the RMAST program. (Recommendation 4) We provided a draft of this report to DHS for their review and comment. DHS provided written comments, which are noted below and reproduced in full in appendix IV, and technical comments, which we incorporated as appropriate. DHS concurred with all four recommendations in the report and described actions underway or planned to address them. With regard to the first recommendation that TSA address limitations in its data system to facilitate ready access to information on all surface inspector activities, DHS concurred and stated TSA’s Compliance Division will maintain a staffing tool that identifies the modal assignments of transportation security inspectors that can be used to more effectively analyze all surface inspector activities. 
If fully implemented, such that data on all activities surface inspectors perform are readily accessible, this system should address the intent of the recommendation. With regard to the second recommendation that TSA align surface inspector activities more closely with higher-risk modes by incorporating the results of surface transportation risk assessments, such as the TSSRA, when it plans inspector activities, and document its rationale for decisions to prioritize activities in lower-risk modes, TSA concurred and stated that relevant risk information would be more clearly incorporated into the Surface Work Plan development process. Further, TSA plans to explain, in its program guidance documentation, its decisions and rationale when planned surface inspector activities deviate from the TSSRA. TSA estimates it will complete this process by January 31, 2018. If TSA is able to fully incorporate risk assessment results, such as the TSSRA, into its decisions for assigning surface inspector tasks across surface transportation modes, and document its rationale if planned inspector activities do not align with risk assessment results, TSA’s planned actions would address the intent of the recommendation. With regard to the third recommendation to identify and prioritize high-risk entities and locations for TSA’s Risk Mitigation Activities for Surface Transportation (RMAST), TSA concurred and stated that the Surface Compliance Branch will prioritize entities for RMAST activities within the Surface Work Plan or other applicable program guidance documents using results from the TSSRA and high-threat urban area designations. TSA estimates this process will be completed by January 31, 2018, and, if fully implemented, this process should address the intent of the recommendation. 
With regard to the fourth recommendation that TSA define clear and measurable objectives for the RMAST program, TSA concurred and stated the Surface Compliance Branch has clarified in program guidance documents how to apply and measure certain security outcomes resulting from RMAST activities to security vulnerabilities identified from a previous BASE assessment or other security assessment program. Documentation corroborating these actions was not provided to GAO before the issuance of this report. However, if TSA is able to clearly state the purpose and objectives of RMAST activities, and track the extent to which these objectives have been met, this additional program guidance should address the intent of the recommendation. We are sending copies of this report to interested congressional committees, the Secretary of Homeland Security, and the Administrator of the Transportation Security Administration. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7141 or groverj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Our objectives were to examine (1) how Transportation Security Administration (TSA) surface inspectors implement the agency’s surface transportation security mission, and (2) the extent to which TSA has used a risk-based approach to prioritize and implement surface inspector activities. This report is a public version of a prior sensitive report that we issued in October 2017. TSA deemed some of the information in the prior report sensitive security information, which must be protected from public disclosure. Therefore, this report omits sensitive information regarding the specific risks facing particular surface transportation modes as determined by TSA. 
However, the report addresses the same questions as the sensitive report and the overall methodology used for both reports is the same. To obtain background information and answer both questions, we (1) reviewed background documents, including TSA strategic documents and previous GAO and Department of Homeland Security (DHS) Inspector General reports, (2) analyzed TSA data on surface inspector activities, and (3) conducted non-generalizable interviews with surface inspectors, their supervisors, and industry stakeholders. To understand TSA’s roles and responsibilities for surface security, as well as its mission, we examined statutes and regulations, including the Aviation and Transportation Security Act, the Implementing Recommendations of the 9/11 Commission Act of 2007, and TSA surface security and related regulations. We also reviewed DHS and TSA strategic documents including TSA’s National Strategy for Transportation Security 2016, the DHS National Infrastructure Protection Plan (NIPP) 2013, and the fiscal years 2016 to 2018 strategic plans for TSA’s Office of Security Operations and the Office of Security Policy and Industry Engagement. Additionally, we reviewed previous GAO and DHS Office of Inspector General reports on TSA’s surface security efforts and surface inspector programs. To evaluate how surface inspectors implemented TSA’s surface security mission and the extent to which this implementation was based on risk, we analyzed data from the surface module of the Performance and Results Information System (PARIS) on the activities of surface inspectors from fiscal year 2013 through March 24, 2017, the most recent data available. Based on TSA documents, regulations, and interviews with TSA data and program officials, we categorized surface inspector activities according to regulatory and non-regulatory activities and by mode, and calculated the total time surface inspectors reported spending for each category. 
We analyzed data from fiscal years 2013 through 2017 to ensure that we could compare several years of data and analyze data obtained after reorganizations of the surface inspector command structure in fiscal year 2010 and offices in mid-fiscal year 2013. We did not review data from the aviation module of PARIS because, as discussed below, it was not feasible to identify the data surface inspectors entered into this module, and, based on our interviews with TSA data officials and our review of related documentation, we determined that all other surface inspector activities were documented in the surface module of PARIS. To determine the reliability of data from the surface module of PARIS, we (1) reviewed related documentation such as data dictionaries, schema, PARIS reliability assessments from previous GAO audits, TSA analyses of PARIS data, and data entry guidance, (2) interviewed TSA officials responsible for entering, reviewing, or using PARIS data, including headquarters officials, field office supervisors, and surface inspectors, (3) electronically and manually tested the data for completeness, obvious errors such as duplicates, and consistency with secondary sources, and (4) conducted internal logic tests on certain time-related fields in the data. Through these steps, we identified some inconsistencies in the data, including incomplete data on surface inspectors’ aviation activities and non-specific data elements for inspection activities in fiscal year 2013, among others. However, we determined that for our purposes – to describe how surface inspectors reported spending their time at the summary level – these inconsistencies did not affect the reliability of the PARIS surface module data and these data were reliable with some limitations. 
Specifically, based on interviews with TSA data officials and our review of TSA data entry guidance, we determined that the data in the surface module of PARIS did not represent the complete activities conducted by surface inspectors because they enter some aviation activities separately in the aviation module of PARIS. Further, we determined that it was not feasible to distinguish aviation activities documented by surface inspectors in the aviation module from aviation activities documented by cargo or aviation inspectors in this module at the aggregate level. However, based on our testing, review of related documentation, and interviews with TSA data officials, we determined that the data surface inspectors entered into the surface module of PARIS, including data on some aviation activities, were reliable for our purposes. As a result, we reported data on surface inspectors’ aviation activities as documented in the surface module of PARIS, with the limitation that these data represent the minimum aviation activities surface inspectors actually conducted. Additionally, through our analysis of PARIS data on regulatory inspections surface inspectors conducted in fiscal year 2013 and interviews with TSA data officials, we found that 25 percent of the total inspections in fiscal year 2013 (1,990 of 8,083) were documented under data elements that did not specify the type of inspection conducted. According to TSA officials, there are no additional data elements that would allow us to identify the specific type of inspection surface inspectors conducted for these 1,990 inspections. As a result, we determined that this portion of the fiscal year 2013 data was not reliable for our purposes of identifying the number of specific inspection types surface inspectors conducted. However, we found that the remaining 75 percent of inspection data for fiscal year 2013 (6,093 of 8,083 inspections) was reliable for our purposes. 
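As a rough check on the proportions above, the screening of fiscal year 2013 inspection records can be sketched as follows. This is an illustrative sketch only: the record layout and field names are hypothetical and are not PARIS’s actual schema.

```python
# Illustrative sketch only: field names and labels are hypothetical, not PARIS's.
# Records documented under a non-specific data element cannot be counted by
# inspection type, so only the remainder is treated as usable for that purpose.

def usable_share(records, nonspecific_label="NON_SPECIFIC"):
    total = len(records)
    nonspecific = sum(1 for r in records if r["inspection_type"] == nonspecific_label)
    return (total - nonspecific) / total

# Fiscal year 2013: 1,990 of 8,083 inspections lacked a specific inspection type.
fy2013 = ([{"inspection_type": "NON_SPECIFIC"}] * 1990
          + [{"inspection_type": "SPECIFIC"}] * (8083 - 1990))
print(round(usable_share(fy2013) * 100))  # -> 75
```

The 1,990 non-specific records correspond to about 25 percent of the 8,083 total, leaving about 75 percent usable for counting specific inspection types.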
As a result, the inspection counts and compliance rates we reported for fiscal year 2013 represent partial-year data. To obtain the perspectives of a wide sample of TSA officials on both surface inspector activities and TSA’s use of risk, we conducted semi-structured interviews with surface inspectors and/or their supervisors in 17 of 49 field offices. We also interviewed the 6 Regional Security Inspectors (RSIs), who cover all seven TSA regions. We interviewed inspectors and supervisors from at least 2 offices in each region and selected the offices based on a variety of factors including geographic dispersion, staff level, surface transportation environment, and whether the office was co-located with a major airport. We visited 6 offices in person and conducted the remainder of our interviews remotely. We selected the offices we traveled to based on the location of GAO staff, the availability of industry representatives in the area, and the opportunity to observe surface inspector assessments, tabletop exercises, and other activities. The results of our interviews are not generalizable, but provide insight into how surface inspectors and their supervisors implement TSA surface programs and the challenges they may face, if any. To gain insight into the experience surface transportation industry stakeholders have had with TSA surface inspectors, we interviewed 15 industry stakeholders in four surface modes: 3 freight rail stakeholders, 3 maritime stakeholders, 3 highway stakeholders, and 6 passenger rail/mass transit stakeholders. We selected industry stakeholders based on their involvement and familiarity with TSA surface inspectors, the surface mode in which they operate, their ridership, and TSA recommendation. Three of these stakeholders consisted of national trade associations representing the highway, freight rail, and mass transit modes of transportation. 
As with our interviews with TSA surface inspectors and supervisors, our interviews with industry stakeholders are not generalizable but provided us with valuable information on the transportation industry’s interaction with TSA surface inspectors. To further address our first objective and describe how TSA surface inspectors implemented the agency’s surface transportation security mission, we examined TSA strategic and program documents including surface inspector work plans and implementation guidance from fiscal years 2013 to 2017, the TSA Inspector Compliance Manual, and TSA surface security regulations, and reviewed public testimony by TSA leadership. To understand how TSA has implemented the Baseline Assessment for Security Enhancement (BASE) program in particular, we reviewed TSA program documents and guidance for the BASE program, including the BASE workbook, and observed a BASE review of a mass transit entity. We also observed a regional Intermodal – Security Training Exercise Program (I-STEP) exercise and an Exercise Information System (EXIS) exercise, and interviewed TSA officials in headquarters, and inspectors and supervisors in the field. We used the results of our analysis of PARIS surface module data, specifically the number of each type of regulatory inspection TSA inspectors conducted from fiscal years 2013 to 2017, and PARIS data on the violations found during those inspections, to calculate regulatory compliance rates. We also used the results of our analysis of PARIS surface module data to describe how surface inspectors reported spending their time. As previously stated, we found the PARIS surface module data to be reliable for this purpose, with the limitation that TSA data on the time surface inspectors reported spending on aviation activities were incomplete because we could not identify surface inspector activities entered into the aviation module of PARIS. 
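The compliance-rate calculation mentioned above can be illustrated with a short sketch. The report does not state GAO’s exact formula, so this assumes a compliance rate equal to the share of inspections with no recorded violations; the function name and the figures in the example are hypothetical.

```python
# Hypothetical sketch: GAO's exact compliance-rate formula is not stated here.
# A compliance rate is assumed to be the share of inspections in which no
# violations were recorded.

def compliance_rate(violation_counts):
    """violation_counts: one entry per inspection (number of violations found)."""
    compliant = sum(1 for v in violation_counts if v == 0)
    return compliant / len(violation_counts)

# Hypothetical mode-level data: 180 clean inspections, 20 with violations.
freight_rail = [0] * 180 + [1] * 20
print(f"{compliance_rate(freight_rail):.0%}")  # prints "90%"
```

Under this assumption, a mode’s compliance rate rises as the share of inspections with zero recorded violations rises, which matches the pairing of inspection counts and violation data described above.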
To evaluate the effects of this limitation, we compared the results of our data analysis, our reviews of PARIS documentation, and our interviews with TSA officials to Standards for Internal Control in the Federal Government. To further address our second objective, the extent to which TSA has used a risk-based approach to prioritize and implement surface inspector activities, we analyzed TSA’s risk guidance as contained in the NIPP risk management guidance, the DHS 2010 Risk Lexicon, and the DHS Risk Management Fundamentals to understand how TSA should assess and use risk information. To understand the risks TSA has identified for surface transportation modes during the time period we examined, we analyzed TSA’s cross-modal risk assessments in three Transportation Security Sector Risk Assessments (TSSRA) published between May 2013 and July 2016. We reviewed TSA’s fiscal year 2017 surface inspector staffing model and supporting documents and data, and interviewed TSA officials responsible for developing and executing staffing. We compared that process to TSA risk guidance to evaluate the extent to which TSA considered risk when it staffed TSA surface inspectors for fiscal year 2017. We assessed only the fiscal year 2017 staffing model because TSA’s previous staffing model was last used in fiscal year 2011, which is outside our scope. To determine the extent to which TSA prioritized surface inspector activities based on risk when it planned these activities, we identified, compiled, and analyzed activity requirements from surface inspector work plans and associated implementation guidance from fiscal years 2013 to 2017. 
We (1) compared them to each other to identify changes in planned surface inspector activities over time and (2) compared them to results from the TSSRA, as well as other risk information including unattended rates for Toxic Inhalation Hazard (TIH) rail cars and the presence of Maritime Transportation Security Act of 2002-regulated facilities in each office’s area. We also interviewed TSA officials in headquarters and the field who were responsible for developing the surface inspector work plan about the process and information they considered during work plan development, and compared this information to TSA risk guidance. To determine the extent to which TSA’s implementation of surface inspector activities aligned with risk, we compared the results of our analysis of PARIS surface module data on the time surface inspectors spent in each surface mode to the results of the TSSRA cross-modal risk assessments from fiscal years 2013 to 2017. As previously discussed, we determined the data to be reliable for our purposes. We also compared the results of our analysis of PARIS surface module data to our analysis of work plan requirements to identify the amount of time surface inspectors reported spending on work plan activities. In addition, we identified the types of information TSA used in its fiscal year 2015 analysis of surface inspector time and activities to determine what TSA considered when it monitored how surface inspector activities were implemented. Additionally, we used the results of our analysis of PARIS surface module data to determine the percent of total time surface inspectors reported spending on Risk Mitigation Activities for Surface Transportation (RMAST) between fiscal years 2013 and 2017. To understand TSA’s objectives for the RMAST program, we analyzed program descriptions in TSA congressional budget justifications and TSA’s fiscal year 2017 work plan and work plan implementation guidance. 
We also conducted interviews with TSA officials in headquarters, and inspectors and supervisors in the field, and observed an RMAST activity to understand how TSA has implemented the program. We compared the results of our analysis and interviews to TSA’s risk guidance and Standards for Internal Control in the Federal Government to evaluate the extent to which the program was risk-based and to which TSA had established measurable goals for the program. The performance audit upon which this report is based was conducted from April 2016 to October 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We subsequently worked with TSA from September 2017 to December 2017 to prepare this nonsensitive version of the original report for public release. This public version was also prepared in accordance with these standards.

Appendix II: Surface Inspector Activities

2005: High-visibility activities, such as patrols, passenger and baggage screening, and canine activities to introduce unpredictability, increase security, and deter potential terrorist actions on multiple modes of transportation. Managed by the U.S. Federal Air Marshal Service and conducted by TSA personnel, which may include surface inspectors.

2006: A voluntary review in which surface inspectors evaluate the security programs of transportation entities, offer technical assistance, and share best practices. TSA uses BASE to, among other things, determine priorities for allocating mass transit and passenger rail security grants, such as those provided through the Transportation Security Grant Program. 
2006: Local field assessments of critical infrastructure, stations, and other facilities for mass transit, passenger rail, and commuter rail and bus systems. Station profiles provide detailed station-related intelligence, such as the locations of exits, telephones, CCTV, electrical power, and station managers.

2007: Inspectors verify that Toxic Inhalation Hazard (TIH) rail cars at rail yards within high-threat urban areas that transport TIH on a regular and recurring basis are being attended by railroad personnel. Inspectors also conduct “wildcard” RRS, during which they observe locations that do not normally handle TIH on a regular and recurring basis to determine if TIH cars are present, and if they are being attended by railroad personnel.

2008: Detailed assessments that focus on the vulnerabilities of high-population areas where TIH materials are moved by rail in significant quantities, and that provide site-specific mitigation strategies and lessons learned.

2008: I-STEP, which is managed through the Office of Security Policy and Industry Engagement, consists of contractor-facilitated exercises designed to help multimodal surface transportation entities closely examine their security programs and operational efforts. TSA facilitates I-STEP exercises across all surface transportation modes to help operators, law enforcement, first responders, and related entities test and evaluate their security plans, including prevention and preparedness capabilities, ability to respond to threats, and interagency coordination. TSA updates I-STEP scenarios as new threats emerge, helping industry partners prepare to implement the most appropriate countermeasures.

2014: Quality assurance assessments of Transportation Worker Identification Credential (TWIC) enrollment centers to, according to TSA officials, review contractor performance. 
2015: EXIS consists of exercises facilitated by surface inspectors that utilize software developed by TSA for stakeholder use, generally focus on one entity, and are intended to build on the findings of a previously completed BASE assessment.

2017: A program intended to focus time and resources on high-risk and critical assets, facilities, and other infrastructure through the following activities: (1) public observation to identify suspicious activities, security vulnerabilities, and/or suspicious behaviors that could be indicative of pre-operational planning related to terrorism; (2) site security observation to determine if the physical security measures and operational deterrence components are in place to effectively mitigate risk; and (3) stakeholder engagement, including TSA’s public security awareness programs and improvised explosive device (IED) and intelligence briefings.

In this table, passenger rail and rail transit systems consist of: (a) each passenger railroad carrier, including each carrier operating light rail or heavy rail transit service on track that is part of the general railroad system of transportation, each carrier operating or providing intercity passenger train service or commuter or other short-haul railroad passenger service in a metropolitan or suburban area (as described by 49 U.S.C. 
§ 20102), and each public authority operating passenger train service; (b) each passenger railroad carrier hosting an operation described in paragraph (a) of this section; (c) each tourist, scenic, historic, and excursion rail operator, whether operating on or off the general railroad system of transportation; (d) each operator of private cars, including business/office cars and circus trains, on or connected to the general railroad system of transportation; and (e) each operator of a rail transit system that is not operating on track that is part of the general railroad system of transportation, including heavy rail transit, light rail transit, automated guideway, cable car, inclined plane, funicular, and monorail systems. 49 C.F.R. § 1580.200. Jennifer Grover, (202) 512-7141 or groverj@gao.gov. In addition to the contact named above, Christopher E. Ferencik, Assistant Director; Brendan Kretzschmar, Analyst in Charge; Nanette Barton; and Katherine Blair made key contributions to this report. Also contributing to the report were Charles Bausell, Katherine Davis, Eric Erdman, Anthony Fernandez, Eric D. Hauswirth, Paul Hobart, Tracey King, Christopher Lee, Mara McMillen, Amanda Miller, Claudia Rodriguez, Christine San, McKenna Storey, Natalie Swabb, Michelle Vaughn, Adam Vogt, and Johanna Wong.
The global terrorist threat to surface transportation – freight and passenger rail, mass transit, highway, maritime and pipeline systems – has increased in recent years, as demonstrated by the 2017 London vehicle attacks and a 2016 thwarted attack on mass transit in the New York area. TSA is the primary federal agency responsible for securing surface transportation in the United States. GAO was asked to review TSA surface inspector activities. This report addresses (1) how TSA surface inspectors implement the agency's surface transportation security mission, and (2) the extent to which TSA has used a risk-based approach to prioritize and implement surface inspector activities. GAO analyzed TSA data on surface inspector activities from fiscal year 2013 through March 24, 2017, reviewed TSA program and risk documents and guidance, and observed surface inspectors conducting multiple activities. GAO also interviewed TSA officials in 17 of 49 surface field offices and 15 industry stakeholders. Transportation Security Administration (TSA) surface transportation security inspectors—known as surface inspectors—conduct a variety of activities to implement the agency's surface security mission, including: Regulatory Inspections: Surface inspectors enforce freight rail, passenger rail, and maritime security regulations. GAO found that, according to TSA data, surface inspectors reported spending approximately 20 percent of their time on these activities from fiscal years 2013 to 2017. Non-regulatory assessments and assistance: Surface inspectors conduct voluntary assessments and provide training to surface transportation entities, among other things. GAO found that, according to TSA data, inspectors reported spending approximately 80 percent of their time on these activities. In addition to mission-related activities, surface inspectors can assist with aviation-related activities. 
However, GAO found that TSA has incomplete information on the total time surface inspectors spend on these activities because of limitations in TSA's data system. Addressing these limitations would provide TSA with complete information when making decisions about inspector activities. GAO also found that TSA prioritized inspector activities in the surface transportation mode with the lowest risk because TSA did not incorporate risk assessment results when planning and monitoring activities. Specifically, in fiscal year 2016, the last full year for which data on inspectors' activities in the surface modes were available, surface inspectors reported spending more than twice as much time on the lowest-risk surface transportation mode, according to TSA risk assessments, as on the highest-risk surface transportation mode. Incorporating risk assessment results when prioritizing inspector activities would help TSA ensure that its surface security resources address the highest risks. In fiscal year 2017, TSA fully implemented a new risk mitigation program—Risk Mitigation Activities for Surface Transportation (RMAST)—intended to focus time and resources on high-risk surface transportation entities and locations. However, GAO found that TSA has not identified or prioritized these high-risk entities and locations, or defined the RMAST program's objectives and associated activities in a measurable and clear way. According to TSA officials, they have not done so because there are too many potential entities to list them all for prioritization and TSA has not identified an approach for determining the effectiveness of activities under the program. However, prioritizing high-risk entities, such as by type, characteristics, or location, does not require a complete list of entities. 
By identifying and prioritizing high-risk entities and locations for RMAST, and clearly defining the program's activities and objectives, TSA would be better able to implement RMAST activities in a risk-based manner and measure their effectiveness. This is a public version of a sensitive report that GAO issued in October 2017. Information that TSA deemed sensitive has been omitted. GAO recommends that TSA (1) address limitations in its data system to collect complete information, (2) ensure inspector activities more closely align with the results of risk assessments, (3) identify and prioritize entities and locations for its risk mitigation program, and (4) define measurable and clear objectives for the program. TSA concurred with these recommendations.
The EFMP provides support to families with special needs at their current and proposed locations. Servicemembers relocate frequently, generally moving every 3 years if in the Army, Marine Corps, and Navy, and every 4 years if in the Air Force. In fiscal year 2016, the Military Services relocated approximately 39,000 servicemembers enrolled in the EFMP to installations in the continental United States (CONUS). To implement DOD’s policy on support for families with special needs, DOD requires each Service to establish its own EFMP for active duty servicemembers. EFMPs are to have three components—identification and enrollment, assignment coordination, and family support. Identification and enrollment: Medical and educational personnel at each installation are responsible for identifying eligible family members with special medical or educational needs to enroll in the EFMP. Once a family member has been identified by a qualified medical provider as having a qualifying condition, active duty servicemembers are required to enroll in their Service’s EFMP. Servicemembers are also required to self-identify when they learn a family member has a qualifying condition. Assignment coordination: Before finalizing a servicemember’s assignment to a new location, DOD requires each Military Service to consider any family member’s special needs during this process, including the availability of required medical and special educational services at a new location. Family support: DOD requires each Military Service’s EFMP to include a family support component through which it helps families with special needs identify and gain access to programs and services at their current, as well as proposed, locations. Servicemembers assigned to a joint base would receive family support from the Service that is responsible for leading that installation. For example, an Airman assigned to a joint base where the Army is the lead would receive family support from the Army installation’s EFMP. 
As required by the NDAA for Fiscal Year 2010, DOD established the Office of Community Support for Military Families with Special Needs (Office of Special Needs or OSN) to develop, implement, and oversee a policy to support these families. Among other things, this policy must (1) address assignment coordination and family support services for families with special needs; (2) incorporate requirements for resources and staffing to ensure appropriate numbers of case managers are available to develop and maintain services plans that support these families; and (3) include requirements regarding the development and continuous updating of a services plan for each military family with special needs. OSN is also responsible for collaborating with the Services to standardize EFMP components as appropriate and for monitoring the Services’ EFMPs. According to DOD officials, OSN has been delegated the responsibility of implementing DOD’s policy for families with special needs by the Under Secretary of Defense for Personnel and Readiness through the Assistant Secretary for Manpower and Reserve Affairs. Currently, OSN is administered under the direction of the Deputy Assistant Secretary of Defense for Military Community and Family Policy through the Office of Military Family Readiness Policy. In addition, each Military Service has designated a program manager for its EFMP who is also responsible for working with OSN to implement its EFMP (see fig. 1). DOD’s guidance for the EFMP (1) identifies procedures for assignment coordination and family support services; (2) designates the Assistant Secretary of Defense for Manpower and Reserve Affairs as being responsible for monitoring overall EFMP effectiveness; (3) assigns the OSN oversight responsibility for the EFMP, including data review and monitoring; and (4) directs each Service to develop guidance for overseeing compliance with DOD requirements for its EFMP. 
Table 1 provides an overview of the procedures each Service must establish for the assignment coordination and family support components of the EFMP. As a part of its guidance for monitoring military family readiness programs, DOD also requires each Military Service to certify or accredit its family readiness services, including family support services provided through the EFMP. In addition, DOD states that each Service must balance the need for overarching consistency across EFMPs with the need for each Service to provide family support that is consistent with its specific mission. To accomplish this, each Service is required to work jointly with DOD to develop a performance strategy, which is a plan that assesses the elements of cost, quality, effectiveness, utilization, accessibility, and customer satisfaction for family readiness services. In addition, each Military Service is required to evaluate its family readiness services using performance goals that are linked to valid and reliable measures such as customer satisfaction and cost. DOD also requires each Service to use the results of these evaluations to inform its assessments of the effectiveness of its family readiness services for families with special needs. According to DOD officials, each Military Service provides family support services in accordance with DOD guidance as well as Service-specific guidance. However, we found wide variation in each Service’s requirements for family support personnel as well as in the practices and expectations of EFMP staff. As a result, the type, amount, and frequency of assistance enrolled families receive varies from Service to Service and when a servicemember from one Service is assigned to a joint base led by another Service (see table 2). 
For example, in terms of a minimum level of contact for families with special needs enrolled in the EFMP, the Services vary in the frequency with which they require family support providers to contact families with special needs: The Marine Corps specifies a frequency (quarterly) with which families with special needs should be contacted by their family support providers. The Air Force has each installation obtain a roster of families with special needs enrolled in the EFMP on a monthly basis, but it does not require family support providers to, for example, use this information to regularly contact these families. The Navy assigns one of three service levels to each family member enrolled in the EFMP. These service levels are based on the needs of each family with special needs; family support providers are responsible for assigning a “service level” that directs the frequency with which the family must be contacted. The Army has no requirements for how often families with special needs should be contacted. The Services also vary as to whether they offer legal assistance to families with special needs as follows: The Marine Corps employs two attorneys who can represent families with special needs who fail to receive special education services from local school districts, as specified in their children’s individualized education programs (IEP). They can also advise EFMP-enrolled families on their rights and options if a family believes their child needs special education services from a local school district (e.g., an IEP). The Air Force, Army, and Navy choose not to employ special education attorneys. Officials with whom we spoke said families with special needs in these Services can receive other types of assistance that may help them resolve special education legal issues. 
For example, Air Force officials said servicemembers and their families can receive support from attorneys who provide general legal assistance on an installation, Army officials said installation EFMP managers can refer families with special needs to other organizations that provide legal support, and Navy officials said families can find support through working with their installation’s School Liaison Officers. The NDAA for Fiscal Year 2010 requires DOD’s policy to include requirements regarding the development and continuous updating of a services plan (SP) for each family with special needs, and DOD has specifically required these plans as part of the provision of family support services. These plans describe the necessary services and support for a family with special needs and document and track progress toward meeting related goals. According to DOD guidance, these plans should also document the support provided to the family, including case notes. In addition, the DOD reference guide for family support providers emphasizes that timely, up-to-date documentation is especially important each time a family relocates, as military families regularly do. Therefore, SPs are an important part of providing family support during the relocation process and provide a record for the gaining installation. Requiring timely and up-to-date documentation is consistent with federal internal control standards, which state that agencies should periodically review policies, procedures, and related control activities for continued relevance and effectiveness in achieving their objectives. SPs follow families with special needs each time they relocate, and without timely and up-to-date documentation, DOD cannot ensure that all families continue to receive required medical and/or special educational services once they relocate to another installation. 
For every Service, the number of SPs was small relative to the number of servicemembers (known as sponsors) or the number of family members enrolled in the EFMP (see table 3). The Services and OSN provided a range of reasons as to why the Services do not develop and maintain a SP for each family with special needs. For example, Air Force officials said their family support providers consider the needs of each family with special needs before determining whether a SP will help them receive the required services. In addition, Army and Marine Corps officials said they may not develop these plans if families do not request them. Further, according to a Navy official, some families lack the required SPs because installations may not have the staff needed to develop them—even though DOD requires the Services to maintain sufficient staff and certify their EFMPs. OSN officials with whom we spoke also said that the Services may not have developed many SPs during fiscal year 2016 because DOD had not yet approved a standardized form that could be used to meet this requirement. Finally, OSN officials also said that each family with special needs enrolled in the EFMP may not need a SP because their condition does not require this type of family support. To meet requirements of the NDAA for Fiscal Year 2010, in April 2017, DOD issued guidance to the Services that directed them to “[p]rogram, budget, and allocate sufficient funds and other resources, including staffing,” to meet DOD’s policy objectives for the EFMP. According to OSN officials, DOD relies on each Service to determine what level of funds and resources is sufficient and what constitutes an appropriate number of family support personnel. To determine family support providers and related personnel staffing levels, the Service officials with whom we spoke said they consider a number of factors, including the number of families with special needs enrolled in the EFMP at any given installation (see app. 
II for more information about the EFMP data by installation). See table 4 for a summary of EFMP family support providers and other key personnel at CONUS installations. As required by DOD, all of the Services employ family support providers to assist families with special needs. In addition, some Services employ additional personnel to support implementation of the EFMP (see sidebar). For example, the Air Force employs family support coordinators to administer its EFMP, and no other personnel are dedicated to assisting these coordinators or enrolled families. The Army employs “system navigators” who provide individualized support to families with special needs at selected installations through its EFMP, as well as other personnel to administer the EFMP. The Marine Corps employs case workers at most of its CONUS installations to administer individualized support to families with special needs; in addition, it employs program managers, administrative assistants, and training and education outreach specialists. The Navy contracts regional case liaisons and case liaisons at selected CONUS installations to administer individualized support to families with special needs. In addition, the Navy employs collateral duty case liaisons who assist with the delivery of family support services at all other CONUS installations. Senior OSN officials said they rely on each Service to determine the extent to which its EFMP complies with DOD’s policy for families with special needs because they consider OSN to be a policy-making organization that is not primarily responsible for assessing compliance. In addition, these officials said the Services need flexibility to implement DOD’s policy for families with special needs because they each have unique needs and the number of enrolled families in the EFMP is constantly changing. However, DOD has not developed a standard for determining the sufficiency of funding and resources each Service allocates for family support. 
Air Force officials at one of the installations we visited said the Air Force identified the lack of staff and funding to provide individualized support to most families with special needs as an issue. In addition, officials from the Army and Navy said they have not received any guidance from OSN officials about their Service-specific guidance, including requirements for resources and services plans. Further, the Services may not know the extent to which their Service-specific guidance complies with DOD’s policy for families with special needs. The NDAA for Fiscal Year 2010 requires DOD to identify and report annually to the congressional defense committees on gaps in services for military families with special needs and to develop plans to address these gaps. However, DOD’s most recent reports to the congressional defense committees did not address the relatively few SPs being created for families with special needs, or whether the Services are providing sufficient resources to ensure an appropriate number of family support providers. Federal internal control standards require that agencies establish control activities, such as developing clear policies, in order to accomplish agency objectives such as those of the Services’ EFMPs. Without fully identifying and addressing potential gaps in family support across these programs, some families with special needs may not get the assistance they require, particularly when they relocate. Each Service monitors EFMP assignment coordination and family support using a variety of mechanisms, such as regularly produced internal data reports. However, DOD has not yet established common performance measures to track the Services’ progress in implementing its standard procedures over time or developed a process to evaluate the overall effectiveness of each Service’s assignment coordination and family support procedures. 
DOD requires each Service to monitor implementation of its EFMP, including its procedures for assignment coordination and family support. To comply with this requirement, each Service has developed guidance that establishes monitoring protocols and assigns oversight responsibilities. Officials from each Service told us they use internal data reports from each installation to monitor assignment coordination and family support. To monitor assignment coordination, officials from each Service told us their headquarters reviews proposed assignment locations for families with special needs enrolled in the EFMP. These officials said monitoring proposed assignment locations helps ensure that enrolled families will be able to access required services at their new installations. In addition, Army officials said each Army unit commander is responsible for tracking the number of families with special needs that have expired enrollment paperwork because it affects assignment coordination worldwide. Several years ago, the Army determined that 25 percent of soldiers (over 13,000) enrolled in the EFMP had expired enrollment paperwork, complicating the task of considering each enrolled family’s special medical or educational needs as part of proposed relocations. In response, in August 2011, the Army revised its policies and procedures for updating enrollment paperwork, which would help ensure a family member’s special needs are considered during the assignment coordination process. To monitor family support provided by installations worldwide, officials from each Military Service told us they use a variety of mechanisms (see table 5). The Marine Corps pays particular attention to customer satisfaction. Marine Corps officials told us that every three years Marine Corps headquarters administers a survey of family members enrolled in the EFMP. 
We previously reported that organizations may be able to increase customer satisfaction by better understanding customer needs and organizing services around those needs. This survey is one of the primary ways Marine Corps headquarters measures customer satisfaction with family support services at installations worldwide. Marine Corps officials also said this survey helps ensure its EFMP is based on the current needs of families with special needs. To improve its oversight of the EFMP and implement its policy for families with special needs, DOD, through OSN, has several efforts under way to standardize the Services’ procedures for assignment coordination and family support. However, DOD has not developed common performance measures to monitor its progress toward these efforts and has not developed a process for assessing the Services’ related monitoring activities. Federal internal control standards emphasize the importance of assessing performance over time and evaluating the results of monitoring activities. To help improve family member satisfaction by addressing gaps in support that may exist between Services, OSN has begun to standardize procedures for assignment coordination and family support. To date, OSN’s efforts have focused on ensuring each Service’s EFMP considers the needs of family members during the assignment process and helps family members identify and gain access to community resources. According to OSN’s April 2017 Report to Congress, the long-term goal of these efforts is to help ensure that all families with special needs enrolled in the EFMP receive the same level of service regardless of their Military Service affiliation or geographic location. In addition, OSN officials told us its standardized procedures will also help DOD perform required oversight by improving its access to Service-level data and its ability to validate each Service’s monitoring activities. 
To date, efforts to standardize assignment coordination and family support have included developing new family member travel screening forms, which will be the official documents used during the assignment coordination process, and completing a DOD-wide customer satisfaction survey on EFMP family support (see table 6). Despite these efforts, however, DOD is unable to measure its progress in standardizing assignment coordination and family support procedures for families with special needs, or to assess the Services’ performance of these processes, because it has not yet developed common metrics for doing so. Federal internal control standards emphasize the importance of agencies assessing performance over time. We have also reported on the importance of federal agencies engaging in large projects using performance metrics to determine how well they are achieving their goals and to identify any areas for improvement. By using performance metrics, decision makers can obtain feedback for improving both policy and operational effectiveness. Additionally, by tracking and developing a baseline for all measures, agencies can better evaluate progress made and whether or not goals are being achieved—thus providing valuable information for oversight by identifying areas of program risk and causes of risks or deficiencies to decision makers. Through our body of work on leading performance management practices, we have identified several attributes of effective performance metrics relevant to OSN’s work (see table 7). OSN officials said each Service is currently responsible for assessing the performance of its own EFMP, including the development of Service-specific goals and performance measures. OSN officials said that they recognize the need to continually measure the department’s progress overall in implementing its policy for families with special needs, and are considering ways to do so. 
They also said they have encountered challenges to developing common performance measures. In addition, OSN officials said its efforts to reach consensus among the Services about performance measures for the overall EFMP are still ongoing because each Service wants to maintain its own measures, and DOD has not required them to reach a consensus. Absent common performance measures, DOD is unlikely to fully determine whether its long-standing efforts to improve support for families with special needs are being implemented as intended. DOD requires each Service to monitor its own family readiness programs, including procedures for assignment coordination and family support through the EFMP, but lacks a systematic process to evaluate the results of these monitoring activities. To monitor family readiness services, as required by DOD, each Service must accredit or certify its family support services, including the EFMP, using standards developed by a national accrediting body not less than once every 4 years. In addition, personnel from each Service’s headquarters are required to periodically visit installations as a part of their monitoring activities for assignment coordination, among other things. The Services initially had the Council on Accreditation accredit family support services provided through their installations’ EFMPs using national standards developed for military and family readiness programs, according to the officials with whom we spoke. However, by 2016, each Service was certifying installations’ family support services using standards that meet those of a national accrediting body, Service-specific standards, and best practices. According to officials from each Service with whom we spoke, this occurred due to changes in the funding levels allocated to this activity. Table 8 provides an overview of the certification process currently being used by each Service. 
OSN officials said they do not have an ongoing process to systematically review the results of the Services’ activities, including the certification of EFMPs, because they choose to rely on the Services to develop their own monitoring activities and ensure they provide the desired outcomes. In doing so, DOD allows each Service to develop its own processes for certifying installations’ family support services, including the selection of standards. In addition, OSN officials told us that efforts to standardize certification of EFMPs are ongoing because the Military Services have not been able to reach consensus on a set of standards that can be used across DOD for installations’ family support services. Further, OSN has not established a process to assess the results of the Services’ processes for certifying installations’ family support services. Federal internal control standards state that management should evaluate the results of monitoring efforts—such as those the Services are conducting on their own—to help ensure they meet their strategic goals. The lack of such a process hampers OSN’s ability to monitor the Services’ EFMPs and determine the adequacy of such programs as required by the NDAA for Fiscal Year 2010. OSN’s job of developing a policy for families with special needs that will work across DOD’s four Services is challenging given the size, complexity, and mission of the U.S. military. It has had to consider, among other things, the Services’ mission requirements, resource constraints, and the myriad demands on servicemembers and their families during their frequent relocations. Anything that further complicates a relocation—such as not receiving the required family support services for family members with special needs—potentially affects readiness or, at a minimum, makes an already stressful situation worse. 
Because DOD provides little direction on how the Services should provide family support or what the scope of family support services should be, servicemembers may get more—or less—from the EFMP each time they relocate, including when a servicemember from one Service is assigned to a joint base led by another Service. By largely deferring to the Services to design, implement, and monitor their EFMPs’ performance, DOD cannot, as required by the NDAA for Fiscal Year 2010, fully determine the adequacy of the Services’ EFMPs in serving families with special needs, including any gaps in services these families receive, because it has not built a systematic process to do so. Instead, it relies on the Services to self-monitor and address, within each Service, the results of monitoring activities. However, because servicemembers relocate frequently and often depend on the EFMP of a Service other than their own, a view of EFMP performance across all of the Services is essential to ensuring, for example, that relocating servicemembers get consistent EFMP service delivery no matter where they are stationed. Evaluating and developing program improvements based on the results of the Services’ monitoring would help DOD ensure the Services’ EFMPs achieve the desired outcomes and improve its ability to assess the overall effectiveness of the program. We are making the following three recommendations to DOD: We recommend the Secretary of Defense direct the Office of Special Needs (OSN) to assess the extent to which each Service is (1) providing sufficient resources for an appropriate number of family support providers, and (2) developing services plans for each family with special needs, and to include these results as part of OSN’s analysis of any gaps in services for military families with special needs in each annual report issued by the Department to the congressional defense committees. 
(Recommendation 1) We recommend that the Secretary of Defense direct the Office of Special Needs (OSN) to develop common performance metrics for assignment coordination and family support, in accordance with leading practices for performance measurement. (Recommendation 2) We recommend that the Secretary of Defense implement a systematic process for evaluating the results of monitoring activities conducted by each Service’s EFMP. (Recommendation 3) We provided a draft of this report to the Department of Defense (DOD) for comment. DOD provided written comments, which are reproduced in appendix IV. DOD also provided technical comments, which we incorporated as appropriate. DOD agreed with all three of our recommendations. In its written comments, DOD stated that additional performance metrics need to be developed for assignment coordination and that it is in the process of measuring families’ satisfaction with family support provided through the EFMP. DOD also stated that it is developing plans for evaluating the results of each Service’s monitoring activities for the EFMP. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Defense and Education, and other interested parties. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0580 or nowickij@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. The National Defense Authorization Act (NDAA) for Fiscal Year 2017 includes a provision for GAO to assess the effectiveness of the Department of Defense’s (DOD) Exceptional Family Member Programs (EFMP). 
This report focuses on the assignment coordination and family support components of the EFMP for dependents with special needs and examines: (1) the extent to which each Service has provided family support as required by DOD, and (2) the extent to which the Services monitor and DOD evaluates assignment coordination and family support. To address these objectives, we used a variety of data collection methods. Key methods are described in greater detail below. For both objectives, we reviewed relevant federal laws, regulations, and DOD guidance and documentation that pertain to the EFMP, including the following: The NDAA for Fiscal Year 2010, which established the Office of Special Needs and defined program requirements for assisting families with special needs, including assignment coordination and family support. DOD’s guidance for administering the EFMP. We assessed how DOD implements the requirements in the NDAA for Fiscal Year 2010; how each Service implements assignment coordination and family support; and how the Services and DOD monitor assignment coordination and family support using performance measures. Specifically, we reviewed DOD Instruction 1315.19 - Exceptional Family Member Program; Service-specific guidance and related documents from the Air Force, Army, Marine Corps, and Navy; and DOD Instruction 1342.22 - Military Family Readiness. Standards for internal control in the federal government related to the documentation of responsibilities through policies, performance measures, and evaluating the results of monitoring activities. We compared each Service’s procedures for monitoring assignment coordination and family support to these standards. To determine the extent of the Services’ EFMP family support, we obtained and analyzed fiscal year 2016 EFMP data (the most recent available) for each Service. We reviewed DOD policy to identify data variables that each Service maintains related to its EFMP. 
We used these data to summarize key characteristics of each Service’s EFMP. The selected variables provided Service-wide and installation-specific EFMP information on: the number of continental United States (CONUS) and outside the continental United States (OCONUS) installations; the number of servicemembers (sponsors) enrolled in the EFMP; the number of family members with special needs enrolled in the EFMP; the number of EFMP family support personnel; and the number of services plans created for families with special needs enrolled in the EFMP. We determined that the selected data variables from each Service are sufficiently reliable for the purposes of providing summary results about family support for fiscal year 2016. To learn more about how the Services implement their EFMPs, we visited seven installations in five states. We selected the seven installations based on their location in states with the largest number of military-connected students in school year 2012-2013 (the most recent available and reliable data) or in states with the largest percentage of students enrolled in U.S. DOD schools as of May 2017, as well as their status as a joint base. At each installation, we interviewed installation officials, EFMP managers, selected family support personnel, and family members and caregivers enrolled in the program. In states we visited that had the largest number of military-connected students, the EFMP personnel we interviewed collectively served 66 percent of students who attend local public schools and 42 percent of the students attending U.S. DOD schools. To obtain illustrative examples about how the EFMP serves families with special needs, we conducted seven group interviews with EFMP-enrolled family members and caregivers (one at each of the seven installations we visited). 
Using a prepared script, we asked participants to describe how they were identified and enrolled in the EFMP, how they were assigned to new installations, and the types of family support services they received. We also asked about how these services aligned with their family member’s EFMP-eligible condition, the benefits and challenges they experienced, as well as their overall satisfaction. A total of 38 self-selected volunteers participated in the seven group discussions. While the participants in these groups included a variety of family members and caregivers, the number of participants and groups was very small relative to the total number of family members enrolled in the EFMP. Their comments are not intended to represent all EFMP-enrolled family members or caregivers. Other EFMP-enrolled family members and caregivers may have had other experiences with the program during the same period. Finally, for both objectives, we conducted interviews with a variety of DOD, Service-level, and nonfederal officials. We spoke with DOD officials from the Office of the Assistant Secretary of Defense–Offices of Manpower and Reserve Affairs, Military Community and Family Policy, Military Family Readiness Policy, and Special Needs. We also spoke with EFMP Managers from Air Force, Army, Marine Corps, and Navy headquarters. We also met with officials from selected national military family advocacy organizations, including the National Military Family Association, the Military Family Advisory Network, and the Military Officers Association of America, to discuss the EFMP. We conducted this performance audit from February 2017 to May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Each Service has an Exceptional Family Member Program (EFMP) that provides support to military families with special needs. The tables below present the following information on selected EFMP and family support categories for each Service’s program at continental United States (CONUS) and outside the continental United States (OCONUS) installations in fiscal year 2016: City, state or country; Number of exceptional family members; Number of family support providers (by Full-Time Equivalent); Number of family support provider vacancies; Number of services plans; Number of indirect contacts; and Number of direct contacts. The information below is listed alphabetically by Service. We held small group discussions with Exceptional Family Member Program (EFMP) participants at the seven military installations we visited. Family members and caregivers who attended each session reported they had children or spouses with EFMP-eligible conditions. The discussion group participants were self-selected, and their comments are not intended to represent all EFMP-enrolled family members or caregivers in fiscal year 2016. In addition, other EFMP-enrolled family members and caregivers may have had different experiences with the program during the same period. There were a total of 38 participants representing all the Services. The following issues were discussed by one or more participants during the small group discussions at the installations we visited. The issues that emerged relate to the current and future overall effectiveness of the EFMP. Overall Satisfaction with EFMP (Discussed by 30 of 38 participants): Measure of participants’ approval of the family support services offered and experience with the EFMP. Many participants expressed overall satisfaction with the EFMP. 
Several participants expressed dissatisfaction with the EFMP. A participant expressed dissatisfaction with the lack of consistency in the provision of family support services (i.e., special education advocacy) across installations. School Liaison Officers (Discussed by 20 of 38 participants): Serve as the primary point of contact for school-related matters as well as assist military families with school issues. Several participants noted that they received no response to their request for assistance from their School Liaison Officer or they only received general information. Several participants said School Liaison Officers were not helpful. Some participants found School Liaison Officers were helpful. Some participants were unaware of School Liaison Officers being available at their installation and the service(s) they provide. A few participants said School Liaison Officers did not follow up on requests for information. A participant noted there seems to be a disconnect between family support services provided through the EFMP and services provided by School Liaison Officers. Family Support Personnel (Discussed by 12 of 38 participants): Provide information and referral to military families with special needs. Some participants at one installation noted that the EFMP was understaffed. Some participants at one installation noted high turnover of family support personnel. Some participants noted family support personnel did not provide support for their family with special needs. Stigma (Discussed by 12 of 38 participants): A perception that participating in the EFMP may limit a soldier’s assignment opportunities and/or compromise career advancement. Several participants believe there is still stigma associated with participating in the EFMP. Some participants said participating in the EFMP has not affected career advancement. 
Assignment Coordination (Discussed by 10 of 38 participants): The assignment of military personnel in a manner consistent with the needs of the armed forces that considers locations where care and support for family members with special needs are available. Some participants found the assignment coordination process challenging. Some participants described limitations with the assignment coordination process. A few participants noted there is a lack of information among families with special needs regarding how to express the need for stabilization and/or continuity of care. A few participants cited the challenges of assignment coordination as contributing to their decision to retire. One participant commented that the opinion of a medical professional was not reflected in the assignment coordination process. Special Education Services (Discussed by 10 of 38 participants): The provision of staff capable of assisting families with special needs with special education and disability law advice and/or assistance and attendance at individualized education program (IEP) meetings where appropriate. Several participants who had a family support provider assist them with preparing for or attending a school-based meeting, including IEP meetings, spoke positively of their experience(s). Some participants at one installation agreed that assistance from family support providers during meetings with school officials regarding special education services is helpful. A few participants who were unable to get assistance with special education services from the EFMP sought the services of private attorneys at their own expense. Family Support Services (Discussed by 9 of 38 participants): The non-clinical case management delivery of information and referral for families with special needs, including the development and maintenance of a services plan. Some participants found that family support providers were helpful. 
Some participants could not identify needed resources or were unaware of the resources or services available to them. One participant noted that the family support provider had minimal contact. One participant said navigating the system can be challenging. Surveys (Discussed by 8 of 38 participants): The process of collecting data from a respondent using a structured instrument and survey method to ensure the accurate collection of data. Several participants noted that they rarely or never had the opportunity to evaluate the family support services provided through the EFMP. One participant noted that comment cards used by each Service are not effective for evaluating the EFMP. Warm hand-off (Discussed by 6 of 38 participants): Assistance to identify needed supports or services and facilitating the initial contact or meeting with the next program. Many participants at one installation agreed that the warm hand-off process worked well for them. Several participants said they found the warm hand-off process helpful when moving from one installation to the next. Outreach (Discussed by 5 of 38 participants): Developing partnerships with military and civilian agencies and offices (local, state, and national), improving program awareness, providing information updates to families, and hosting and participating in EFMP family events. Some participants found it difficult to obtain information regarding the types of family support services that are available. A few participants noted that communications regarding the EFMP were not targeted to address their needs. A few participants noted communications regarding the EFMP are untimely (e.g., newsletters not issued periodically). Joint Base Family Support Services (Discussed by 1 of 38 participants): Family support services provided by a Joint Base's lead Service when that Service differs from that of the servicemember enrolled in the EFMP. 
One participant said that using family support services on joint bases may pose a challenge as each Service has different rules and procedures and as a result provides different types of family support services. In addition to the contact named above, Bill MacBlane (Assistant Director), Brian Egger (Analyst-in-Charge), Patricia Donahue, Holly Dye, Robin Marion, James Rebbe, Shelia Thorpe, and Walter Vance made significant contributions to this report. Also contributing to this report were Lucas Alvarez, Bonnie Anderson, Connor Kincaid, Brian Lepore, Daniel Meyer, and Mimi Nguyen.
|
Military families with special medical and educational needs face unique challenges because of their frequent moves. To help assist these families, DOD provides services plans, which document the support a family member requires. The National Defense Authorization Act for Fiscal Year 2017 included a provision for GAO to review the Services' EFMPs, including DOD's oversight of these programs. This report examines the extent to which (1) each Service provides family support and (2) the Services monitor and DOD evaluates assignment coordination and family support. GAO analyzed DOD and Service-specific EFMP guidance and documents; analyzed fiscal year 2016 EFMP data (the most recent available); visited seven military installations, selected for their large numbers of military-connected students; and interviewed officials responsible for implementing each Service's EFMP, as well as officials in OSN who administer DOD's EFMP policy. The support provided to families with special needs through the Department of Defense's (DOD) Exceptional Family Member Program (EFMP) varies widely across the Military Services. Federal law requires DOD's Office of Special Needs (OSN) to develop a uniform policy that includes requirements for (1) developing and updating a services plan for each family with special needs and (2) resources, such as staffing, to ensure an appropriate number of family support providers. OSN has developed such a policy, but DOD relies on each Service to determine its compliance with the policy. However, Army and Navy officials said they have not received feedback from OSN about the extent to which their Service-specific guidance complies. Federal internal control standards call for developing clear policies to achieve agency goals. In addition, DOD's most recent annual reports to Congress do not indicate the extent to which each Service provides services plans or allocates sufficient resources for family support providers. 
According to GAO's analysis, the Military Services have developed relatively few services plans, and there is wide variation in the number of family support providers employed, which raises questions about potential gaps in services for families with special needs (see table). Each Service uses various mechanisms to monitor how servicemembers are assigned to installations (assignment coordination) and obtain family support, but DOD has not established common performance measures to assess these activities. DOD has taken steps to better support families with special needs, according to the DOD officials GAO interviewed. For example, DOD established a working group to identify gaps in services. However, OSN officials said that DOD lacks common performance measures for assignment coordination and family support because the Services have not reached consensus on what those measures should be. In addition, OSN does not have a process to systematically evaluate the results of the Services' monitoring activities. Federal internal control standards call for assessing performance over time and evaluating the results of monitoring activities. Without establishing common performance measures and assessing monitoring activities, DOD will be unable to fully determine the effect of its efforts to better support families with special needs and the adequacy of the Services' EFMPs as required by federal law. GAO makes a total of three recommendations to DOD. DOD should assess and report to Congress on the extent to which each Service provides sufficient family support personnel and services plans, develop common performance measures for assignment coordination and family support, and evaluate the results of the Services' monitoring activities. DOD agreed with these recommendations; it plans to develop performance measures for assignment coordination and to develop plans for evaluating the Services' monitoring activities.
|
ISO 55000 defines asset management as “the coordinated activity of an organization to realize value from assets.” This approach includes, for example: developing an understanding of how each of an organization’s assets contributes to its success; managing and investing in those assets in such a way as to maximize that success; and fostering a culture of effective decision making through leadership support, policy development, and staff training. While ISO defines an asset as any item, thing, or entity that has potential or actual value to an organization, in this report we focus on real property assets. Asset management can help federal agencies optimize limited funding and make decisions to better target their policy goals and objectives. See fig. 1 for an example of an asset management framework. Asset management as a distinct concept developed in the 1980s, and since that time, organizations around the world have published a number of standards and leading practices. These include: Publicly Available Specification (PAS) 55: The British Standards Institution published this standard in its final form in 2008. This standard focuses on the management of physical assets such as real property and describes leading asset management practices in areas such as life cycle planning, risk management, cost avoidance, and collaborative decision-making. Additionally, the standard provides a checklist for organizations to assess the maturity of their asset management framework. Some public services, utilities, and oil and gas sectors in the United Kingdom and other countries have adopted this standard. The British Standards Institution formally withdrew this standard in 2015 after the publication of ISO 55000, but it remains in use as a reference for many organizations. 
ISO 55000: This standard, published in 2014, is a series of three documents, collectively referred to as “ISO 55000.” It is based on the earlier PAS 55 standard but with stated applicability to all types of assets as opposed to just the physical assets covered by PAS 55. Committees with members from more than 30 countries identified common asset management practices and developed this international consensus standard that, according to ISO, applies to the broadest possible range of assets, organizations, and cultures. Some public and private sector organizations from around the world including utilities, infrastructure management firms, cities, federal agencies, and others have adopted the standard for their real property assets. See appendix III for a summary of the key elements of the ISO 55000 standards. International Infrastructure Management Manual: Initially published in 2000, this manual became one of the first sets of internationally accepted asset management leading practices. The Institute of Public Works Engineering Australasia published the most recent edition in 2015. The current manual complements the ISO 55000 standards and includes case studies of how organizations in different sectors have approached asset management. It provides detailed information on how to create and implement an effective asset management framework, such as how to incorporate estimates of future demand for services. Various organizations, particularly in sectors that manage physical assets, have adopted the manual as a reference. In the United States, within the federal government’s executive branch, OMB and GSA are responsible for providing leadership in managing federal real property—one of the government’s major assets. OMB is tasked with overseeing how federal agencies devise, implement, manage, and evaluate programs and policies. 
OMB has provided direction to federal agencies by issuing various government-wide policies, guidance, and memorandums related to asset management. For example: OMB’s 2017 Capital Programming Guide outlines a capital programming process, including how agencies should effectively and collectively manage a portfolio of capital assets and requirements for agencies’ strategic asset management plans; OMB’s Circular A-123 directs agencies to conduct enterprise risk management assessments to identify significant risks to agency goals and operations; OMB’s Memorandum 18-21 expands the responsibilities of federal agencies’ senior real property officers in leading and directing the agency’s real property program. GSA’s Office of Government-wide Policy is generally responsible for identifying, evaluating, and promoting best practices to improve the efficiency of real property management processes. This office has provided guidance for federal agencies and published performance measures. In 2004, the President issued Executive Order 13327 directing Chief Financial Officers Act (CFO Act) agencies to designate a senior real property officer responsible for establishing an asset management planning process and developing a plan to carry out this process. Among other things, this plan was to describe the agency’s process for: identifying and categorizing all real property managed by the agency, prioritizing actions needed to improve the operational and financial management of the agency’s real property inventory, using life-cycle cost estimations for those actions, and identifying asset management goals and measuring progress towards those goals. The order also required agencies to manage their real property assets in a manner that supports the agency’s asset management plan, goals, and strategic objectives. 
In addition, Executive Order 13327 tasked GSA with providing policy oversight and guidance to inform federal agencies’ real property management efforts and required that OMB review agencies’ efforts in implementing their asset management plans and completing the other requirements specified in the executive order. The executive order also established the Federal Real Property Council (FRPC)—chaired by OMB and composed of senior management officials from CFO Act agencies—and called for the FRPC to develop guidance, collect best practices, and help federal agencies improve the management of real property assets. In response to this executive order, in 2004 the FRPC developed guidance describing guiding principles that agencies’ asset management practices should align with, requirements for what agencies should include in their asset management plans, and a template for agencies to follow when compiling these plans. Specifically, the guidance stated that each agency’s real property asset management plan should link the asset management framework to the agency’s strategic goals and objectives, describe a process for periodically evaluating assets, and describe a process for continuously monitoring the agency’s framework. More recent federal asset management initiatives have focused on efficiently managing and reducing federal agencies’ real property holdings. For example, in 2012 OMB directed the 24 CFO Act agencies to maintain their civilian real-estate inventory at or below their then-current levels, a policy known as Freeze the Footprint. In 2015, OMB issued its National Strategy for the Efficient Use of Real Property and its accompanying Reduce the Footprint policy requiring the CFO Act agencies to set annual targets for reducing their portfolio of domestic office and warehouse space. 
Subsequently, the Federal Assets Sale and Transfer Act of 2016 established the Public Buildings Reform Board to identify opportunities for the federal government to reduce its inventory of civilian real property and reduce its costs. The act also requires the head of each executive agency to provide annually to GSA information describing the nature, use, and extent of the agency’s real property assets. In addition, the Federal Property Management Reform Act of 2016 codified the Federal Real Property Council to, among other things, ensure efficient and effective real-property management while reducing costs to the federal government. The act requires executive branch agencies to annually submit to the Federal Real Property Council a report on all excess and underutilized real property in their inventory. Based on our review of the ISO 55000 standards, asset management literature, and interviews with experts, we identified six key characteristics of an effective asset management framework: (1) establishing formal policies and plans, (2) maximizing an asset portfolio’s value, (3) maintaining leadership support, (4) using quality data, (5) promoting a collaborative organizational culture, and (6) evaluating and improving asset management practices (see fig. 2). See appendix II for a more detailed explanation of how we identified these key characteristics. Each of the six federal agencies we reviewed had a real property asset management framework that included some of these key characteristics. However, agencies varied in how they performed activities in these areas. In addition, the scope and maturity level of the agencies’ asset management frameworks varied. For example, while some agencies’ asset management policies applied to large portions of their portfolios, other agencies’ policies applied to only certain portions of their portfolios. In addition, two agencies—the Corps and Coast Guard—told us they were using the ISO 55000 standards. 
For example, according to Corps officials, the Corps is in the process of incorporating elements of the ISO 55000 standards into its framework. Coast Guard officials told us they were using the ISO 55000 standards as a benchmark to compare against their existing framework. According to OMB and GSA officials, some of the differences in agencies’ asset management frameworks can be attributed to differences such as agency mission needs and the types of assets that each manages. For example, the real property asset portfolios of the six agencies we reviewed differed substantially in the types, numbers, and total replacement values of the assets. See table 1 for more information on the agencies’ asset portfolios and fig. 3 for examples of agency assets and their primary uses. Below we discuss the six key characteristics of an effective asset management framework and how the six selected agencies performed asset management activities in these areas. Formal policies and plans can help agencies utilize their assets to support their missions and strategic objectives. According to literature we reviewed, developing a formal asset management plan can help agencies take a more strategic approach in their asset management decision making and identify key roles and responsibilities, the resources required to implement their plans, and potential implementation obstacles along with strategies for overcoming those obstacles. In addition, several experts we interviewed stated that having an asset management plan that describes the overarching goals of the organization and how the organization’s assets relate to those goals is an important element of an asset management framework. Each of the six agencies we reviewed had some documentation such as asset management plans, investment strategies, or technical orders that lay out how the agency conducts asset management activities. 
This documentation covered important areas such as collecting data, prioritizing assets, and making investment decisions, as well as the roles and responsibilities of key officials. For example: In 2014, the Corps published a Program Management Plan for Civil Works Asset Management that laid out a vision, tenets, and objectives for asset management along with the roles and responsibilities of key officials. Corps officials told us that this document functions as a strategic asset management plan for the Corps’ Civil Works asset portfolio, and the plan contains foundational principles such as how the Corps will assess risk and measure the performance of its framework. Since 2006, the Coast Guard Civil Engineering program has been developing a series of manuals, process guides, and technical orders that provide detailed procedures to support implementation of an overarching asset management model. Coast Guard officials told us this model will cover all of the Coast Guard’s real property assets and reflect the agency’s mission and objectives. In addition, each of the six agencies we reviewed had developed a formal asset management plan in response to Executive Order 13327 from 2004. One agency had a plan that officials said reflected their current practices. Officials from the remaining five agencies told us that the practices contained within their original asset management plans had been superseded by later policy documents. For example: NASA officials told us the agency’s 2008 Real Property Asset Management Plan no longer reflects NASA’s overarching asset management framework. Officials said that NASA instead uses a series of policy documents, procedural requirements, and annual data calls to set out its framework. 
Park Service officials told us the agency’s 2009 Asset Management Plan is still in place, though some of the practices in that document have been superseded by more recent policy documents including the Capital Investment Strategy. Further, five of the agencies linked their asset management goals and objectives to their agency mission and strategic objectives in their asset management plans. For example, GSA’s 2012 plan states that it supports GSA’s overall mission and goals, as well as the mission of the Public Buildings Service, by organizing real property decision making and supporting the Public Buildings Service’s objectives for owned assets. Prioritizing investments can help agencies better target resources toward assets that will provide the greatest value to the agency in meeting its missions and strategic objectives. Each of the six agencies we reviewed has documentation describing a process for prioritizing asset investments. For example, each agency has documentation describing a scoring process for prioritizing projects based on specific criteria, such as the risks an asset poses to agency operations, asset condition, project cost, and project impact. Some agency officials told us that scoring projects in this manner provides an objective foundation for decision making that can lead to more consistent investment decisions and improved transparency. In addition, each of the six agencies has implemented, or is in the process of implementing, a centralized decision-making process for prioritizing high value projects and delegating approval for lower cost projects to local or regional offices. The agencies vary, however, in the types of projects for which they use centralized decision-making and the degree to which they use the project scores. For example: NASA field centers are authorized to independently prioritize and approve certain projects with total costs under $1 million. 
For larger projects, however, NASA field centers develop project scores based on a mission dependency index measuring the relative risk an asset poses to NASA’s missions. To prioritize and approve these larger projects, NASA headquarters staff consider projects submitted by centers using the mission dependency scores, asset conditions, and other factors such as flooding risk, and make funding decisions using NASA’s available budget. GSA categorizes each of its assets into tiers based on the asset’s financial performance and capital investment needs. Additionally, since 2017 GSA has been using an Asset Repositioning Tool, which uses more detailed data analysis to rank assets within each tier. GSA uses these designations when prioritizing asset investments. For projects with projected costs below the prospectus level (approximately $3.1 million in fiscal year 2018), GSA regions use each asset’s tier and core designation to allocate funds across the region’s asset portfolio. For larger projects, the GSA Administrator and GSA’s Public Buildings Service Commissioner and Deputy Commissioner are responsible for determining the priority level of projects. The Corps is in the process of implementing a procedure that would base funding decisions for maintenance and repair projects on a portfolio-wide comparison of scores, with the goal of approving the projects that will reduce the greatest amount of risk. This differs from the Corps’ previous system of allocating projects’ funding to local divisions and districts based on historical amounts and staff judgment. To prioritize projects, the Corps calculates a score for each project based on an assessment of the asset’s condition and the risk the asset poses to operations. For example, the Corps measures risk for a lock and dam component such as a gate (see fig. 5) based on the potential economic impact of failure to users (e.g., shipping companies that use the waterway). 
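The agencies' scoring methods differ, but the underlying idea they share—weighting an asset's condition deficiency by the consequence of its failure—can be sketched in a few lines. This is a minimal illustration only: the facility condition index (cost of current repair needs divided by replacement value) is a standard industry metric, while the multiplicative weighting, the consequence scale, and all figures below are hypothetical and do not reflect any agency's actual methodology.

```python
# Illustrative sketch of risk-informed project prioritization.
# All weights, scales, and example figures are hypothetical.

def facility_condition_index(repair_needs: float, replacement_value: float) -> float:
    """Standard industry metric: cost of current repair needs divided
    by the asset's replacement value (0.0 means like-new condition)."""
    return repair_needs / replacement_value

def project_score(repair_needs: float, replacement_value: float,
                  consequence_of_failure: float) -> float:
    """Weight condition deficiency by the consequence (e.g., economic
    impact to users) of the asset failing; higher score = higher priority."""
    return facility_condition_index(repair_needs, replacement_value) * consequence_of_failure

# Hypothetical portfolio: worst condition combined with highest impact ranks first.
projects = {
    "lock_gate": project_score(2.0e6, 10.0e6, consequence_of_failure=9),
    "warehouse": project_score(1.5e6, 3.0e6, consequence_of_failure=2),
    "office":    project_score(0.5e6, 20.0e6, consequence_of_failure=1),
}
ranked = sorted(projects, key=projects.get, reverse=True)
```

A scheme like this makes funding decisions comparable across a whole portfolio rather than dependent on historical allocations alone, which is the shift the Corps describes above.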
The Corps plans to implement this process by 2020, and Corps officials told us they expect to complete the rollout on schedule. Officials from these agencies told us that more centralized decision-making processes can provide improved standardization and clarity in the prioritization process, particularly for high value projects, and can help ensure that mission-critical projects receive funding. As an example, Coast Guard officials cited a project involving a permanent repair to a failed steam heating pipe at the Coast Guard Yard near Baltimore. They said that this failure left several key buildings, including the Coast Guard’s primary ship-painting facility, with intermittent service and an inability to complete certain critical tasks. According to officials, the Coast Guard’s centralized decision-making process scored this project as a high priority because of the importance of the facilities involved, the impact of the failure, and the fragility of the temporary pipe that runs on the surface among other equipment (see fig. 4). Leadership buy-in is important for organizational initiatives, and experts told us that management support is vital to implementing an asset management framework. However, officials from two of the six agencies told us that they have received varying levels of leadership support for asset management. For example: Corps officials told us that it can be a challenge to make senior leadership understand the value that improved asset management practices can provide to the agency, a challenge that they said can affect the level of support the program receives. Forest Service officials told us that they have faced challenges obtaining the resources they need to develop their asset management program. In addition, in 2015 the Coast Guard received a report it had commissioned to examine the level of alignment between its asset management framework and the ISO 55000 standards. 
This report concluded, among other things, that the Coast Guard has faced challenges with strategic leadership related to asset management, including in balancing budgetary support for long-term initiatives—like developing an asset management framework—against short-term infrastructure investment needs and in communicating asset management policies. Using quality information when making decisions about assets can help agencies ensure that they get the most value from their assets. Experts we spoke with cited data elements such as inventory information (e.g., asset age and location); condition information (e.g., how well the asset is performing); replacement value; and level of service (e.g., how the asset helps the agency meet its missions and strategic objectives) as important for maximizing an asset’s value. Each of the six agencies collected inventory and condition data on its assets and used these data to make decisions about those assets. For example: The Forest Service requires its units, such as national forests and grasslands, to inventory and verify 100 percent of their asset data over a 5-year cycle. It has developed a standardized process for units to collect specific types of data for this inventory, such as condition data and deferred maintenance. According to Forest Service officials, the data tracked in the system informs several investment decisions, such as decisions on decommissioning of assets. GSA developed the Building Assessment Tool Survey to assess the overall condition of its assets and what investments they need. GSA uses the data collected from the survey, conducted every 2 years, to calculate a Facility Condition Index, which is the cost of the asset’s current repair needs divided by its replacement value. The Corps’ 2017 policy for operational condition assessments lays out a methodology for assessing condition based on visible attributes and asset performance, such as the degree to which water is leaking around a lock gate (see fig. 
5 for an example of what Corps officials described as a minor water leak). Under this policy, Corps officials assign a letter grade to the performance of each individual component within a Corps asset. Corps officials told us that there are key differences between this system and the maintenance management system they used previously. For example, officials said the Corps is now able to more easily compare the condition of its assets across the portfolio and grade the condition of more types of asset components, a process that Corps officials said gives them a more complete understanding of how their assets are performing. Some agencies told us that they faced challenges related to collecting and maintaining asset data. For example: The Park Service uses data on the condition of its assets to calculate a facility condition index. Park Service officials told us that when they developed their asset management program in the early 2000s they had to change many of their existing data collection processes and train their staff to manage the new data. NASA field centers are required to assess assets and enter key asset data into NASA’s database, but according to NASA Headquarters officials, they have faced challenges collecting data from some Centers. For example, NASA Centers are required to review and revalidate the mission dependency scores for each of their assets every 3 years, but Headquarters officials told us not all Centers have entered such scores on all assets. Aligning staff activities toward effective asset management and communicating information across traditional agency boundaries can ensure that agencies make effective decisions about their assets. 
Officials from three of the agencies we reviewed told us that having staff embrace asset management is a key to successful implementation. For example: Park Service officials told us they implemented an organizational change-management process and provided additional training to staff in key asset management areas such as data collection. They also said that they tried to prevent asset management requirements from overwhelming the other tasks staff perform by, for example, considering staff time constraints when developing their data collection processes. Officials told us that they continue to streamline these processes to reduce field staff workload. The Corps’ Program Management Plan includes chapters on communications strategies and organizational change management to promote an asset management culture. While these agency officials told us that obtaining leadership and staff buy-in is important for asset management implementation to be effective, officials from three of our six selected federal agencies cited managing organizational culture changes as an implementation challenge. For example, Corps officials told us that, prior to developing their framework, the different functional areas in the Civil Works Program were each responsible for their own assets and were not sharing asset information across areas. As a result, the Corps struggled with getting staff to work together and coordinate on asset management activities. To help mitigate this issue, Corps officials told us they have assigned dedicated asset management staff to each regional district to facilitate communication at the local level between staff in different functional areas, and developed a community of practice to discuss maintenance issues including asset management. Continuously evaluating the performance of an agency’s asset management framework and implementing needed changes can optimize the value the agency’s assets provide. 
According to literature we reviewed, an asset management plan should be evaluated and continuously improved over time to ensure it still reflects the organization's goals. Officials from each of the six agencies told us that they collect data to measure the performance of their asset management policies, and two agencies have continuous evaluation processes laid out in their asset management plans. For example: GSA's asset management plan describes the data GSA uses to track the performance of its framework, including information on operating costs, asset condition, asset utilization, operating income, and energy. The Corps evaluates its program by conducting maturity assessments. According to the Corps' 2014 Program Management Plan, these assessments measure the maturity level of its asset management program to review and identify gaps in achieving the asset management system's vision and objectives while efficiently using resources. Corps officials told us they self-assessed their own operations at the low end of the maturity scale, and they are using the results of the assessment to inform revisions to their Program Management Plan. In addition, officials from five of the six agencies told us they are in the process of developing or implementing major changes to their asset management policies, including developing new policies for collecting data, measuring asset criticality, and prioritizing investments. For example: The Coast Guard has been developing its asset management model since 2006 and, as previously mentioned, is in the process of developing manuals, process guides, and technical orders to support this model. NASA officials told us that they are in the midst of developing new policies and guidance for asset management based on a recently completed business process assessment.
Officials said that the new process under development would involve more centralized planning and management across NASA, instead of the more center-based asset management program they currently use, along with improved data collection practices. The Park Service is undertaking a program focused on improving the operation and maintenance of its real property portfolio. Officials told us that there are two major pieces to this effort: one to improve the efficiency of their data collection process by streamlining and consolidating systems to reduce the data collection and management burden on staff, and another to expand the Park Service's investment strategies to reflect the agency's top priorities and strengthen the role of the Developmental Advisory Board to ensure consistent application of investment goals. According to our interviews with asset management experts and practitioners whom we selected, organizations can face challenges implementing an asset management framework. The two most frequently mentioned challenges were managing organizational culture changes and addressing capacity challenges, such as a lack of skills and knowledge of management practices. Almost all the experts and over half of the practitioners we interviewed stated that managing the organizational culture changes that result from implementing a new asset management framework is a challenge. For example, several experts and practitioners stated that an effective framework requires enterprise-wide policies to manage assets and that changing the organizational culture from one in which departments or divisions are used to working independently to one that promotes interdepartmental coordination and information sharing can be challenging. Specifically, one expert representing a U.S. municipality told us that a key implementation challenge it faced was in setting up policies to promote more information sharing across the organization.
This expert stated that previously the organization's data systems were not set up to share information across departments, leading to data silos that hindered coordination across the agency. Similarly, another expert stated that asset management is by nature a multidisciplinary practice that crosses through many functional silos typically present in large organizations. These silos are necessary to allow for the required level of specialization, but if they do not communicate, inefficiencies and errors in asset management result. He stated that in these organizations, a key challenge in implementing an asset management framework is getting officials in these different departments to agree upon and transition to a common set of goals and direction for the framework. Several experts and practitioners stated that obtaining the leadership and staff buy-in that is critical for asset management implementation to be effective can be a challenge. For example, one expert representing an organization that had recently implemented a new asset management framework stated that it faced resistance from some of its staff. These employees had been working for the organization for a long time, had not been updating their skills over time, and were resistant to having to learn a new process. In addition, it was difficult to convince staff previously invested in the old decision-making process to adjust to a new process. A study examining asset management practices of public agencies in New Zealand found that obtaining buy-in and support from leadership and staff was critical. According to this study, for asset management to be successful, it has to become part of the organization's culture, and for that to happen, leadership needs to "buy into" the process, the reason why it is important, and the value of its outputs.
Over half of the experts and all of the practitioners we interviewed cited capacity challenges to implementing an effective asset management framework, such as a lack of skills, knowledge of management practices, asset data, and resources. Some experts and practitioners stated that implementing an effective framework might require skills and competencies that the organization may not currently have. For example, one expert stated that organizations might not have the in-house expertise needed to implement a risk management approach. Similarly, a practitioner representing an asset management firm that provides consulting services to municipalities noted that lack of in-house expertise could lead to the organization's over-reliance on consultants; such over-reliance, in turn, can result in the organization's not following through with the new asset management practices once the consultants finish their work. Several experts and practitioners also stated that some organizations struggle with collecting and managing the data needed to conduct asset management. For example, one expert stated that an important first step to implementing an asset management framework is to develop comprehensive records of the organization's assets. However, according to this expert, it is difficult to actually collect and use good information about assets to deliver robust planning. The age of assets can compound this challenge because with older assets, sometimes the original plans and specifications have been lost. Several experts and practitioners also mentioned lack of sufficient resources as an implementation challenge. Specifically, one expert noted that obtaining funding to support asset management activities is a challenge. This expert stated that it is more difficult to secure funding for improving components of an asset management framework, such as improving data collection processes, than it is to secure funding for tangible investments in new assets.
As we previously discussed, some of the experts that we interviewed stated that evaluating and continually improving asset management practices is an important characteristic of an effective asset management framework. Experts and practitioners we interviewed identified potential strategies for addressing and overcoming implementation challenges, including strategies for managing culture change and capacity challenges such as lack of skills and resources. See table 2 for the strategies experts and practitioners identified. We have previously reported on practices and implementation steps that can help agencies manage organizational change and transform their cultures to meet current and emerging needs, maximize performance, and ensure accountability. Several of these practices—such as involving employees in the transformation effort, ensuring top leadership drives the transformation effort, and establishing a communication strategy—could address some of the potential change-management challenges that agencies might face when implementing an asset management framework. For example, in our prior work on organizational change, we have noted that a successful transformation must involve employees and their representatives from the beginning to increase employees' understanding and acceptance of organizational goals and objectives, help establish new networks and break down existing organizational silos, and gain their ownership for the changes that are occurring in the organization. Some of the experts we interviewed who had implemented ISO 55000 stated that they involved employees in the transformation effort. For example, one expert representing an organization with recent success in implementing ISO 55000 stated that the managers at that person's organization involved staff in the implementation process, which helped foster ownership of the new asset management program.
Asset management experts and practitioners we interviewed cited a number of potential benefits to adopting an asset management framework that aligns with the six characteristics we identified, including: (1) improved data and information about assets, (2) better-informed decisions, and (3) financial benefits. About half of the experts and practitioners we interviewed stated that implementing an asset management framework that aligns with the six characteristics we identified and discussed previously can result in an organization's collecting more detailed and higher-quality information about assets. For example: One expert representing a U.S. municipality that had recently implemented a new asset management framework stated that it now collects and tracks more detailed asset data, including information about the condition and performance of its assets. According to this expert, this more detailed information provides asset managers with a better understanding of how much asset repairs actually cost in the long term, how long repairs take, and which assets are most critical to repair or replace. Additionally, they are in the process of integrating these data into the organization's capital-improvement project modeling, a step that in turn has allowed the asset managers to make better investment decisions. This expert also noted that collecting detailed data about the municipality's assets has enabled the asset managers to provide more information to the public and to decision-makers. Another expert we interviewed representing an organization that had recently adopted a new asset management framework stated that its data have improved as a result. According to this expert, prior to implementing the program, the organization had a good inventory of its assets, but it was missing dynamic information about condition and performance.
The managers made several changes to address this situation, including investing in information technology systems and infrastructure to collect and track condition data in real time. As a result, the organization is now able to track trends in asset performance failures and anticipates that, over time, it will be able to predict future failures using this information. Most of the experts and all of the practitioners who responded to this question stated that another benefit of implementing an asset management framework is that it can help organizations make better-informed asset management decisions. For example, some of these experts and practitioners stated that having a framework that includes improving interdepartmental coordination, collecting more detailed data, and having a strategic approach to asset management helps organizations make better-informed decisions about how to maintain and invest in their assets. In addition, about one-half of the experts stated that such a framework can also help organizations better understand the risks the organization faces and make informed decisions about the organization's assets. For example: One expert stated that a benefit to implementing an asset management framework that incorporates interdepartmental coordination is that everyone within the organization is working to achieve the same goals in both the short term and long term, which results in better decisions and better customer service. This expert worked with a foreign network operator to implement an asset management system that would support the company's goals for increasing its electric grid capacity. He found that for different assets, the company had adopted different asset strategies to deal with future demand growth, approaches that resulted in misaligned asset strategies. The differences in the individual asset strategies were identified and realigned.
If these differences had not been recognized, this lack of coordination could have resulted in inefficient decision-making and the loss of time and money. Another expert representing a U.S. municipality stated that by implementing an asset management framework, the municipality's program managers are now able to make better-informed asset management decisions and present information and proposals to the city council and budget committee. In addition, this detailed information has allowed managers to better assess the condition of their assets across the portfolio and to compare it to industry standards in the respective asset classes. Over half of the experts and a third of the practitioners we interviewed stated that effective asset management practices can result in financial benefits to the organization, such as cost avoidance and better management of financial resources. For example: One expert stated that asset management can lead to a greater understanding of budget needs and better long-term capital and lifecycle investment planning. In addition, this expert stated overall that asset management improves clarity in terms of where funds are spent. This enhanced insight can then inform asset management decision-making to produce future cost savings. A practitioner representing a local municipality in Canada stated that since implementing an asset management framework, the municipality is now making better-informed decisions about maintenance and has identified and eliminated unneeded maintenance activities, steps that have resulted in cost savings. For example, by analyzing condition data, the municipality identified an optimal point in time for addressing maintenance issues on its roads and achieved a fivefold-to-tenfold cost reduction over previous repairs.
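The Canadian municipality's road example reflects a standard pavement-deterioration insight: intervening while an asset is still in fair condition costs a fraction of rebuilding it after failure. The sketch below illustrates the general approach under invented assumptions; the deterioration curve, intervention thresholds, and unit costs are hypothetical, not the municipality's actual figures.

```python
# Simplified condition-based maintenance timing analysis. The deterioration
# model, intervention thresholds, and unit costs below are hypothetical
# illustrations of the approach, not the municipality's actual data.

def pavement_condition(age_years: float) -> float:
    """Toy deterioration curve: condition 100 (new) declining toward 0."""
    return max(0.0, 100.0 - 0.35 * age_years ** 2)

# Hypothetical unit costs per lane-km, matched to condition thresholds.
INTERVENTIONS = [
    (70, "preventive seal",  40_000),   # condition >= 70: cheap surface treatment
    (40, "rehabilitation",  160_000),   # condition >= 40: mill and overlay
    (0,  "reconstruction",  320_000),   # otherwise: full rebuild
]

def cheapest_fix(condition: float):
    """Return the (name, cost) of the intervention a road in this condition needs."""
    for threshold, name, cost in INTERVENTIONS:
        if condition >= threshold:
            return name, cost
    return INTERVENTIONS[-1][1], INTERVENTIONS[-1][2]

for age in (5, 12, 18):
    c = pavement_condition(age)
    name, cost = cheapest_fix(c)
    print(f"age {age}y: condition {c:.0f} -> {name} at ${cost:,}")

# Acting in the preventive window instead of waiting for reconstruction:
ratio = 320_000 / 40_000
print(f"cost ratio of late vs. early intervention: {ratio:.0f}x")
```

With these invented costs the early-versus-late ratio comes out eightfold, in the same spirit as the fivefold-to-tenfold reduction the practitioner described.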
Experts and practitioners we interviewed most often cited the ISO 55000 standards as a useful resource that provided a solid foundation for an asset management framework and could inform federal agencies' asset management efforts. Specifically, these experts and practitioners stated that the standards are flexible and adaptable to different types of organizations regardless of size or organizational mission, applicable to different types of assets, and internationally accepted and credible. About half of the experts we interviewed had used the standards, and some of these experts shared examples of how their organization's asset management approach improved by implementing ISO 55000. See, for example, the experience of Pacific Gas and Electric below. Pacific Gas and Electric's (PG&E) experience with the International Organization for Standardization (ISO) 55001 standard: In 2014 and 2017, PG&E, a public utility company in California, attained Publicly Available Specification (PAS) 55 and ISO 55001 certification and recertification for its natural gas operations. Its physical assets include gas transmission and distribution pipelines, pressure regulator stations, gas storage facilities, and meters. According to PG&E, a key benefit from implementing the standards is that PG&E has developed a consistent strategy for managing its natural gas operations assets. This, according to PG&E, has enabled the utility to develop a framework for program managers from different parts of the organization, such as finance, operations, engineering, and planning, to collaborate more effectively and work together towards one strategic goal rather than competing with one another for funding. According to PG&E, this new structure allows the program managers to prioritize investment decisions across their asset portfolio to align with corporate objectives.
Officials from five of the six agencies we interviewed stated that they were familiar with the ISO 55000 standards, and officials from the Corps stated that they use selected practices from ISO 55000. Corps officials stated that using the standard has provided several benefits to their organization. For example, they stated that using the standard has informed their budget process and has helped them make better-informed decisions about critical reinvestment. In addition, it has allowed them to develop a consistent approach to managing all of their physical assets across different lines of business. However, officials from four agencies raised some concerns about using these standards. These included concerns about the upfront costs and resources needed to implement the standards, as well as about the standards' applicability to the federal government, given the size, scope, and uniqueness of agencies' assets and the diverse missions of each agency. For example, officials from one selected agency stated that in their view, the standards are better suited for private organizations because federal agencies have federal requirements they need to meet, such as those for disposition of real property, which may affect their asset management decision making. We have previously reported on challenges federal agencies face with disposing of assets in part due to legal requirements agencies must follow. Several experts and officials from one practitioner organization we interviewed stated that they thought that federal agencies across the government could implement the ISO 55000 standard. The experts stated that key benefits of implementing the standard would be that it would result in a more consistent asset management approach and help federal agencies better manage resources. For example, one expert stated that a key benefit of implementing the standard would be to drive federal agencies to be better stewards of their resources by better utilizing mission assets.
In addition, some experts and practitioners also stated that federal agencies do not need to implement the full standard or seek certification to achieve results; agencies can decide which practices in the standard are most relevant to their organization and implement those practices. The ISO technical committee that produced the ISO 55000 standards is drafting a new standard on asset management in the public sector. According to ISO, this standard, expected to be published in December 2019, will provide guidance to any public entity at the federal, state, or local level, including more detailed information on how to implement an asset management framework. While OMB has issued government-wide requirements and guidance to federal agencies related to asset management, this guidance does not present a comprehensive approach to asset management because it does not fully align with standards and key characteristics, nor does it provide a clearinghouse of information on best practices for federal real property management to agencies as required by Executive Order 13327. As mentioned earlier, OMB has issued various government-wide policies, guidance, and memorandums related to federal asset management. For example, in response to Executive Order 13327 in 2004, the FRPC—chaired by OMB—developed guiding principles for agencies' asset management practices and for developing a real property asset management plan. Specifically, the guidance stated that each real property asset management plan should, among other things: link the agency's asset management framework to the agency's strategic goals and objectives, describe a process for periodically evaluating assets, and describe a process for continuously monitoring the agency's framework.
In addition, OMB's Circular A-11 describes requirements for the agency capital planning process, such as prioritizing assets to support agency priorities and objectives, while OMB's Circular A-123 describes risk management requirements for agencies, and OMB's Memorandum 18-21 describes requirements for an agency's senior real property officers, such as coordinating real property planning and budget formulation. Further, the Federal Assets Sale and Transfer Act and the Federal Property Management Reform Act—both of 2016—collectively contain provisions related to asset management, including establishing procedures for agencies to follow when disposing of real property assets and requiring agencies to submit data on leases to the FRPC. Taken as a whole, the OMB guidance lacks many of the elements called for by the ISO 55000 standards and the key characteristics we identified. For example, the guidance:

- covers several different areas of asset management but does not direct agencies to develop a comprehensive approach to asset management that incorporates strategic planning, capital planning, and operations, as recommended by the ISO 55000 standards and the key characteristics we identified;

- directs agencies to continuously monitor their asset management frameworks and identify performance measures but does not direct agencies to use the results to improve their asset management frameworks in areas such as overall governance, decision making, and data collection, as called for in ISO 55000 standards and the key characteristics we identified;

- directs agencies to have a senior official in charge of coordinating the real property management activities of the various parts of the organization but does not direct agencies to demonstrate leadership commitment to asset management or to define asset management roles and responsibilities for each element of the agency, as called for in ISO 55000 standards and the key characteristics we identified;
- directs agencies to ensure that their real property management practices enhance their decision making, but does not direct agencies to actively promote a culture of information sharing or ensure that the agencies' decisions are made on an enterprise-wide basis, as called for in ISO 55000 standards and the key characteristics we identified; and

- directs agencies to identify asset management goals and enhance decision making, but does not direct agencies to establish the scope of their asset management frameworks by, for example, determining how the agency should group or organize the management of its different types of assets, as called for in ISO 55000 standards.

Moreover, OMB staff told us that while the executive order's requirements for federal agencies to develop an asset management plan and related processes remain in effect, OMB's real property management focus has shifted to the National Strategy for the Efficient Use of Real Property and its accompanying Reduce the Footprint initiatives issued in 2015. These initiatives emphasize efficiently managing and using space, rather than overall asset management. OMB staff said that they view asset management as a tactical activity, separate from broader strategic and capital planning efforts, where agencies make operational-level policies to support their real property portfolio. However, this approach to asset management differs from ISO's definition of asset management, which encompasses both the capital-planning and asset management levels of OMB's policy model. Under the Reduce the Footprint initiative, federal agencies are required to submit annual Real Property Efficiency plans that specify their overall strategic and tactical approach to managing real property, provide a rationale for and justify their optimum portfolio, and direct the identification and execution of real property disposals, efficiency improvements, and cost-savings measures.
As a result, according to OMB staff, they no longer require agencies to develop a comprehensive asset management plan. We recognize that reducing and more efficiently managing government-owned and leased space are important goals. However, effective asset management is a more comprehensive objective that seeks to best leverage assets to meet agencies' missions and strategic objectives. For example, some agencies have high-value real property assets that are not building space, such as those at the Corps and the Park Service. See table 2 for examples of these types of assets at the six selected agencies in our review. For example, the Corps has over 700 dams—the age and criticality of which require the Corps to conduct regular maintenance and, in some cases, major repairs to assure continued safe operation. In 2015, the Corps estimated the cost of fixing all of its dams that need repair at $24 billion. Similarly, in 2016, we reported that the Park Service's deferred maintenance for its assets averaged about $11.3 billion from fiscal year 2009 through fiscal year 2015 and that in each of those years, deferred maintenance for paved roads made up the largest share of the agency's deferred maintenance—about 44 percent. Assets classified as paved roads in the Park Service's database include bridges, tunnels, paved parking areas, and paved roadways. For these and other agencies with similar portfolios, the agencies' Real Property Efficiency plans are not relevant to managing the bulk of their assets, and guidance primarily focused on buildings and office space is of limited use. In addition, without specific information to help all federal agencies evaluate their current practices and develop more comprehensive asset management approaches, federal agencies may not have the knowledge needed to maximize the value of their limited resources.
In addition, while Executive Order 13327 requires the FRPC to provide a clearinghouse of information on best practices for federal real property management, this information is currently lacking from existing guidance or other available sources. GSA officials and OMB staff stated they do not currently have plans to compile this information. Because of this, existing guidance falls short of what an effective asset management framework might include. GSA officials told us that while certain agencies have shared information on asset management at meetings of the FRPC, the council does not take minutes or make this information readily available to agencies outside of the meetings. Given OMB's shift in focus, OMB staff said that they did not plan to update their guidance. However, Standards for Internal Control in the Federal Government state that communicating information, such as leading practices, is vital for agencies to achieve their objectives. Further, government-wide information in some cases is not available, such as information on practices federal agencies have successfully used to conduct asset management. There is merit to having key information on successful agency practices readily accessible for federal agencies to use. For example, officials from three of the six agencies we spoke with said information on best practices for asset management would be helpful to them in developing their agencies' asset management frameworks. Such information could include practices that are described in ISO 55000 and that federal agencies have successfully used to improve asset management. For example, one agency official stated that it would be useful to have a compilation of asset management practices that federal agencies use to determine if any of those practices might be applicable to an agency.
Similarly, an official from another agency stated that the agency is currently evaluating opportunities to improve its asset management program and that the agency would be interested in learning more about asset management processes across the federal government in order to inform the agency's asset management efforts. Without information such as these officials described, federal agencies lack access to practices geared to them on how to develop an asset management plan and other asset management practices. Federal agencies collectively hold billions of dollars in real property assets—ranging from buildings, warehouses, and roads to structures including beacons, locks, and dams—and are charged with managing these assets. The effective management of all of an agency's real property assets plays an important role in its ability to execute its mission now and into the future. However, because existing federal asset management guidance does not fully reflect standards and the key characteristics, such as directing agencies to develop a comprehensive approach to asset management that incorporates strategic planning, capital planning, and operations, federal agencies may not have the knowledge needed to maximize the value of their limited resources. In addition, because there is no central clearinghouse of information to support agencies' asset management efforts, as required by Executive Order 13327, agencies may not know how best to implement asset management activities, including using quality data to inform decisions and prioritize investments. A reliable central source of information on current effective asset management practices could support agencies in making progress in their asset management efforts, helping them more efficiently fulfill their missions and avoid unnecessarily expending resources.
Further, sharing experiences across the government could assist agencies’ efforts to adopt, assess, and tailor an asset management approach appropriate to their needs and to support efforts to more strategically manage their real property portfolios. We are making the following recommendation to OMB: The Director of OMB should take steps to improve existing information on federal asset management to reflect leading practices such as those described in ISO 55000 and the key characteristics we identified and make it readily available to federal agencies. These steps could include updating asset management guidance and developing a clearinghouse of information on asset management practices and successful agency experiences. (Recommendation 1) We provided a draft of this report for review to the Office of Management and Budget, the General Services Administration, the National Aeronautics and Space Administration, and the Departments of Agriculture, Defense, Homeland Security, and the Interior. The Forest Service within the Department of Agriculture agreed with our findings and noted that GAO's key characteristics for effective asset management will help the Forest Service manage their assets and resources effectively. Further, the Forest Service stated that asset management leading practices are critical in measuring efficiencies and meeting strategic goals for its diverse and large portfolio. The Forest Service’s written comments are reproduced in appendix IV. The Departments of Homeland Security and the Interior, and the General Services Administration provided technical comments, which we incorporated as appropriate. The Office of Management and Budget, the Department of Defense, and the National Aeronautics and Space Administration had no comments on the draft report. 
We are sending copies of this report to the appropriate congressional committees; the Secretaries of the Departments of Agriculture, Defense, Homeland Security, and the Interior; the Administrators of the General Services Administration and National Aeronautics and Space Administration; and the Director of the Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or rectanusl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. As of 2016, public entities in Canada owned about $800 billion worth of infrastructure assets, including roads, bridges, buildings, waste and storm water facilities, and public transportation assets. Municipalities owned the majority of these assets, around 60 percent, with provincial and federal entities owning around 38 percent and 2 percent, respectively. The federal government of Canada owns or leases approximately 20,000 properties containing about 37,000 buildings with about 300 million square feet of floor space. In the fiscal year that ended in 2016, the federal government spent around $7.5 billion on managing its real property portfolio, of which about 80 percent went to operating expenditures and about 20 percent went to capital investments such as acquisitions and renovations. This portfolio is managed and controlled by 64 federal agencies, departments, and "Crown corporations," with primary uses including post offices, military facilities, government offices, employee housing, and navigation facilities such as lights.
The Treasury Board of Canada, supported by the Treasury Board Secretariat, provides policy direction to agencies and departments for their real property assets along with approving certain larger projects, acquisitions, and disposals. The Treasury Board of Canada Secretariat is currently conducting a portfolio-wide review of the federal government’s real property management in order to develop a road map for the most efficient and effective model for federal real property asset management. Treasury Board Secretariat officials told us that they have preliminarily found that the federal government does not have a government-wide asset management strategy and faces challenges related to the availability of current and consistent asset condition data. Municipalities own and manage most of Canada’s public infrastructure, and in recent years, municipal governments have been leaders in developing and implementing asset management frameworks. By the early 2000s, several large cities, including Hamilton, Calgary, and Edmonton, began developing frameworks to reduce costs and improve the management of certain types of municipal assets, such as those related to water distribution and treatment. More recently, the federal government and several provincial governments have promoted asset management for municipalities in a variety of ways, including by awarding grants and attaching requirements to infrastructure funding. Some of these programs have focused on small municipalities, which make up the large majority of Canada’s municipalities but may face particular challenges in obtaining the resources to develop and implement an asset management framework. The federal government provides infrastructure funding to municipalities through several programs, including the Federal Gas Tax Fund.
This fund provides around $1.5 billion in funding to municipalities each year for projects such as water treatment, roads and bridges, broadband connectivity, airports, and public transit, and does not require yearly reauthorization. Each of Canada’s municipalities receives funding through this program by formula, and funds are routed through the provinces, which can attach their own requirements. In the 2014 set of agreements between the federal government and the provinces, provinces were required to institute asset management requirements for municipalities to receive gas tax funds, and each of the provinces developed separate requirements for municipalities under its jurisdiction. These requirements took several forms. For example, Ontario required each municipality to develop an asset management plan by the end of 2016 while Nova Scotia has withheld a small portion of its total provincial gas tax allocation to use toward developing a province-wide asset management framework for municipalities to use. The federal government also provides funding to municipalities for asset management. Through the Municipal Asset Management Program, administered by the Federation of Canadian Municipalities (FCM), Infrastructure Canada made available $38 million over 5 years for Canadian municipalities and partnering not-for-profit organizations to improve municipal asset management practices. The maximum grant amount for municipalities is $38,000. Eligible activities under this program include assessing asset condition, collecting data on asset costs, implementing asset management policies, training staff, and purchasing software. 
FCM officials told us that, as of March 2018, they had received 253 grant applications and that, of the grants they had disbursed so far, around 25 percent of grantees used the funds for data projects, 15 percent to develop asset management plans, 2 percent for staff training, 4 percent for asset management system operations, and 60 percent for some combination of these purposes. Canadian provinces have also taken several actions to improve asset management practices at the municipal level by establishing requirements for municipalities in their jurisdiction or by providing funding programs. For example, in 2017, Ontario issued an asset management planning regulation, which requires municipalities to develop a strategic asset management policy by July 1, 2019, and then develop progressively more detailed asset management planning documents in later years. In addition to this regulation, in 2014, Ontario also introduced a funding program for small and rural municipalities to provide long-term, formula- and application-based funding for these municipalities to develop and repair their infrastructure. Under the program, municipalities are required to have an asset management plan as a condition of receiving funding. In addition, municipalities can use formula-based program funds for certain asset management activities, including purchasing software, staff training, or direct staff activity related to asset management. In 2016, Ontario announced plans to increase the funding available per year from about $75 million to about $150 million in 2019. Much of the federal government’s real property is managed by a federal department known as Public Services and Procurement Canada (PSPC), whose nationwide portfolio includes around 350 owned buildings and an additional 1,200 building leases. PSPC uses a portfolio-wide asset management framework, which begins with developing national portfolio strategies and plans every 5 years.
Staff in each of PSPC’s five regional offices then use these plans to develop regional and community-based portfolio strategies and plans, which then inform annual management plans for each PSPC asset. To determine how to best allocate funds across its portfolio of assets, PSPC places each of its assets into one of four tiers based on three major criteria: (1) the asset’s strategic importance to PSPC’s portfolio as measured by criteria such as the asset’s location and design, (2) the asset’s operating and functional performance such as cost per unit area, and (3) the asset’s condition based on a metric called the Liability Condition Index, which measures the risk an asset poses to continuing operations and occupant safety. Using this method, PSPC designates its highest tier assets as those that have excellent financial performance, that have non-financial attributes that support PSPC’s objectives, and that are not expected to need major capital investments in the next 5 years. The lowest tier assets have poor performance and are in need of either major investments or disposal in the next 5 to 10 years. PSPC officials told us that they are in the midst of making major changes to their asset management framework, including by moving to a component-based system of accounting where they will treat each asset as 12 components, including 11 for the building such as roofs or heating and air conditioning systems, and 1 for tenant equipment. Additionally, PSPC plans to move to more modern enterprise systems to eliminate paper records and improve the quality of the data they use to make budgeting decisions. Officials said that they consider the ISO 55000 requirements when evaluating their asset management framework, but they also use other best practices from the private sector that they said better suit their needs by providing more detailed information on how to develop and implement the various elements of an asset management framework. 
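The tiering process described above can be sketched in code. This is an illustrative sketch only: the field names, thresholds, and scoring scales below are hypothetical assumptions, not PSPC's actual criteria or weights, which the report does not detail.

```python
# Hypothetical sketch of assigning an asset to one of four tiers from the
# three criteria the report describes: strategic importance, operating/
# functional performance, and condition (the Liability Condition Index).
# All thresholds, field names, and scales here are assumed for illustration.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    strategic_score: float           # 0-100: location, design, fit with portfolio objectives
    performance_score: float         # 0-100: e.g., normalized cost per unit area
    condition_index: float           # 0-100: higher = greater risk to operations and safety
    major_investment_due_years: int  # years until a major capital investment is expected

def assign_tier(a: Asset) -> int:
    """Return tier 1 (best) through 4 (worst) under assumed cutoffs."""
    # Tier 1: strong on all criteria and no major investment expected within 5 years
    if (a.strategic_score >= 75 and a.performance_score >= 75
            and a.condition_index <= 25 and a.major_investment_due_years > 5):
        return 1
    # Tier 4: poor performance and major investment or disposal needed within 5-10 years
    if a.performance_score < 40 and a.major_investment_due_years <= 10:
        return 4
    # Middle tiers: split on the average "goodness" of the three scored criteria
    avg = (a.strategic_score + a.performance_score + (100 - a.condition_index)) / 3
    return 2 if avg >= 60 else 3

hq = Asset("headquarters", 90, 85, 15, 8)
depot = Asset("aging depot", 30, 25, 80, 3)
print(assign_tier(hq), assign_tier(depot))
```

The key design idea the report conveys is that tier placement drives where capital funds flow: top-tier assets need little investment, while bottom-tier assets are candidates for major investment or disposal.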
Over the past 20 years, several Canadian municipalities have developed detailed asset management frameworks to improve management efficiency and cost-effectiveness as well as to obtain improved levels of service from municipal infrastructure. In the late 1990s, the City of Hamilton, Ontario, began developing an asset management framework for its core municipal infrastructure assets, and in 2001, the city established an office dedicated to asset management within its public works department, which produced its most recent municipal asset management plan for public works in 2014. This plan sets a strategic vision and goals for the asset management program, which are designed to align with the city’s overall strategic plan, capital and operating budgets, master plan, and other business documents, and describes how the city’s asset management activities will support the objectives laid out in those documents. Additionally, the asset management plan provides an overview of the current state of Hamilton’s infrastructure assets in four categories: drinking water supply, wastewater management, storm water management, and roads and bridges. The plan states the total value of the assets in each category, describes the condition of those assets, and provides an indicator of recent trends in their condition. The plan also defines the levels of service Hamilton aims to provide in each of the four main asset categories and sets goals for each category such as safety, reliability, regulatory compliance, and customer service. Next, the plan defines an asset management strategy for the city, which includes taking an inventory of assets, measuring asset condition, assessing risk, measuring the performance of the asset management framework, making coordinated citywide decisions, and planning for capital investments. Finally, the document contains a plan for managing each of the four main asset categories over their entire life cycles.
Hamilton officials stressed the importance of collecting and using quality data when deciding where and when to allocate resources. They told us that the data they have collected under their asset management framework have allowed them to make better-informed investment decisions, and have provided them with the information necessary to make business cases for investment and to better defend their decisions when they solicit funding from the City Council. For example, officials described how the city assesses the condition of its road network and uses the results to prioritize investment in its assets. To assess the condition of each road, the city uses a 100-point scale where, for example, above 60 indicates the road is only in need of preventative maintenance and 20 or less indicates the road is in need of total reconstruction. Officials said that a total reconstruction could cost ten times as much as a minor rehabilitation and that the window of time between when a road needs only a minor rehabilitation and a full reconstruction is only around 10 years. Because of this, Hamilton officials said that it is important to conduct rehabilitation on roads and other infrastructure assets before they deteriorate to the point where they either fail or are in need of a full reconstruction. For example, Hamilton undertook a major re-lining project for a storm sewer that was in danger of complete collapse, as shown in fig. 6. Officials told us this project would preserve storm sewer service at significantly lower cost than waiting for the structure to fail or completely rebuilding it, either of which would have been cost prohibitive. Additionally, Hamilton officials noted that they do not need all of their assets to be at a 100 rating and that their asset management framework directs them to allow some assets to deteriorate to a certain extent while rehabilitating others by making investment decisions on a system-wide service basis, as opposed to an individual project basis.
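The condition-scale logic Hamilton officials describe can be illustrated with a minimal sketch. The 100-point scale, the "above 60" and "20 or less" thresholds, and the roughly 10x cost ratio come from the report; the road names, scores, budget units, and the simple worst-first funding rule are hypothetical assumptions added for illustration.

```python
# Illustrative sketch of Hamilton's road-condition triage: scores above 60
# need only preventative maintenance, 20 or less means total reconstruction,
# and reconstruction costs ~10x a minor rehabilitation (per the report).
# The example network and budget below are hypothetical.
def treatment(score: float) -> str:
    if score > 60:
        return "preventative maintenance"
    if score > 20:
        return "rehabilitation"       # the ~10-year window to act cheaply
    return "total reconstruction"

def prioritize(roads: dict[str, float], budget_units: int,
               rehab_cost: int = 1, reconstruction_cost: int = 10) -> list[str]:
    """Fund the worst-scoring roads first until the budget runs out."""
    funded = []
    for name, score in sorted(roads.items(), key=lambda kv: kv[1]):
        if treatment(score) == "preventative maintenance":
            continue  # routine upkeep, not a capital priority here
        cost = reconstruction_cost if score <= 20 else rehab_cost
        if budget_units >= cost:
            budget_units -= cost
            funded.append(name)
    return funded

network = {"Main St": 72, "King St": 45, "Bay St": 18, "Elm St": 25}
print(prioritize(network, budget_units=11))
```

Running this shows the officials' point: one road that slipped below the rehabilitation window consumes ten budget units on its own, crowding out cheap rehabilitations elsewhere, which is why the city tries to intervene before assets fall out of the window.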
The City of Calgary, Alberta, began developing its asset management framework in the early 2000s, first focusing on Calgary’s municipal water-management assets because they are expensive to maintain and are only funded from water utility customer bills, as opposed to tax revenue. City officials told us that the primary impetus for initially exploring asset management was to be able to maintain levels of service as the city rapidly expanded in both population and physical size; this expansion forced Calgary to make major investments in the water system. Since that time, Calgary has expanded its asset management framework to include nearly all of its assets, including its software, bridges, public recreation facilities, and even its trees. Between 2008 and 2010, the city took steps to align its asset management to its business processes, steps that culminated with the development of the city’s first citywide asset management policy in 2010. Calgary officials told us that between 2004 and 2008 they worked to align their initial asset management framework with the British Standards Institution Publicly Available Specification 55 (PAS 55). After this experience, officials from Calgary participated in the development of the ISO 55000 standards and provided the Standards Committee information about tactics for asset management such as policy development and business strategy. When the ISO 55000 standards were officially published in 2014, the city began working on aligning their asset management framework with the new standards, a process that led to a new framework including a strategic asset management plan, which city officials published in 2016.
Calgary officials said that aligning their asset management framework with the ISO 55000 standards has given them support from the city’s top management and has improved their relationship with the various bodies that audit the city’s operations because it gives them a common language to use when describing management processes. Calgary officials told us that the ISO 55000 standards are credible, internationally recognized best practices and that in practice they are a good guide for developing an asset management framework. However, Calgary is not planning on certifying its operations to the ISO 55000 standard because officials told us that they are not required to be certified; certification is expensive and needs to be repeated; and they are unsure of what additional value certification to the standards would provide. The City of Ottawa, Ontario, began developing its asset management framework in 2001. Since that time, the city’s asset management framework has gone through several versions, the most recent of which it developed beginning in 2012 based on PAS 55. Ottawa officials told us that implementing their asset management framework has allowed them to collect better information about their assets and improve their long-term financial-infrastructure-planning process. While Ottawa officials developed and implemented an asset management framework, they have a number of ongoing initiatives to further develop some areas of the framework. For example, officials said that they consider determining the levels of service to be provided by each asset class the most difficult aspect of asset management, especially for those assets that do not necessarily provide a measurable service. Ottawa officials are working on ways to better measure the services each of their assets provides and the levels of risk that each asset poses to these service levels.
Officials said that accurately measuring service and risk levels is critical for their financial planning and will allow them to improve how they prioritize funding and ensure that funds are spent on priority assets. See fig. 7 for an example of an asset officials said was intended to improve levels of service for Ottawa’s pedestrian multi-use pathways. Another ongoing initiative is an updated report card for the condition of the city’s assets, which officials said they use to transparently communicate to stakeholders the current state of their infrastructure. This report discusses: (1) key characteristics of an effective asset management framework, and how selected federal agencies’ frameworks reflect these characteristics; (2) views of selected asset management experts and practitioners on challenges and benefits to implementing an asset management framework; and (3) whether government-wide asset management guidance and information reflect standards and key characteristics of an effective asset management framework. To obtain information for all three objectives, we reviewed relevant literature, including academic and industry literature on asset management, publications describing asset management leading practices, and the ISO 55000 and related standards. We selected the ISO 55000 standards because they are international consensus standards on asset management practices. We also reviewed laws governing federal real-property asset management, Office of Management and Budget’s (OMB) guidance and prior GAO reports describing agencies’ real-property management and efforts to more efficiently manage their real property portfolios. In addition, to address all three objectives, we collected information from and interviewed a judgmental sample of 22 experts to obtain their perspectives on various asset management issues. To identify possible experts to interview, we first worked to identify relevant literature published in the topic area. 
Specifically, we searched in October 2017 for scholarly and industry trade articles and other publications that examined effective asset management practices. We limited our search to studies and articles published from January 2014 through January 2017. From this search, we screened and identified studies and articles for relevance to our report and selected those that discussed asset management practices and the ISO 55000 standards. In addition, we conducted preliminary interviews with selected asset management practitioners, who included representatives from public and private organizations knowledgeable about asset management practices, to learn about key asset management issues and obtain recommendations about experts in this field. Through these methods, we identified a total of 82 possible candidates to interview. To ensure a diversity of perspectives, we used the following criteria to assess and select a sample from this group: type and depth of an expert’s experience, affiliations with asset management trade associations, experience with government asset management practices, relevance of published work to our topic, and recommendations from other entities. We selected a total of 22 experts representing academia, private industries, foreign private and public entities, and entities that have implemented ISO 55000. See table 3 for a list of experts whom we interviewed. Their views on asset management practices are not generalizable to those of all experts; however, we were able to secure the participation of a diverse, highly qualified group of experts and believe their views provide a balanced and informed perspective on the topics discussed. We interviewed the selected 22 experts between January 2018 and February 2018 and used a semi-structured interview format with open-ended questions for those interviews.
We identified the topics that each of the experts would be able to respond to, based on the individual’s area of expertise and each responded to questions in the semi-structured interview guide in the areas in which they had specific knowledge. During these interviews, we asked for experts’ views on key characteristics of an effective asset management system, opportunities for improving federal agencies’ asset management approaches, experiences with using ISO 55000, and their views on the applicability of ISO 55000 to the federal government. After conducting these semi-structured interviews, we conducted a content analysis of the interview data. To conduct this analysis, we organized the responses by interview question, and then one GAO analyst reviewed all of the interview responses to questions and identified recurring themes. Using the identified themes, the analyst then developed categories for coding the interview responses and independently coded the responses for each question. To ensure the accuracy of our content analysis, a second GAO analyst reviewed the first analyst’s coding of the interview responses, and then the two analysts reconciled any discrepancies. To identify key characteristics of an effective asset management framework and how selected federal agencies’ frameworks reflect these characteristics, we obtained and analyzed the ISO 55000 standards, which include leading practices, and asset management literature, and we analyzed information collected from our interviews with experts. We synthesized information from these sources to identify six commonly mentioned characteristics. We then selected six bureau-level and independent agencies as case studies and compared these agencies’ asset management frameworks to the six key characteristics that we identified. Because the agencies are not required to follow the key characteristics we identified, we did not evaluate the extent to which agencies’ efforts met these characteristics. 
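The two-analyst coding step described above amounts to a simple comparison and reconciliation workflow. The sketch below is a hypothetical illustration of that step; the response IDs, theme labels, and agreement metric are invented for the example and are not GAO's actual coding scheme.

```python
# Hypothetical sketch of the two-analyst review step: one analyst codes
# each interview response into a theme, a second analyst codes the same
# responses independently, and any disagreements are flagged so the two
# analysts can reconcile them. Data and category names are invented.
def find_discrepancies(coder_a: dict[str, str],
                       coder_b: dict[str, str]) -> dict[str, tuple[str, str]]:
    """Return the responses where the two analysts assigned different codes."""
    return {resp: (coder_a[resp], coder_b[resp])
            for resp in coder_a
            if coder_a[resp] != coder_b[resp]}

analyst_1 = {"R1": "leadership support", "R2": "data quality", "R3": "culture"}
analyst_2 = {"R1": "leadership support", "R2": "resources", "R3": "culture"}

disagreements = find_discrepancies(analyst_1, analyst_2)
print(disagreements)  # only the responses needing reconciliation
agreement_rate = 1 - len(disagreements) / len(analyst_1)
print(f"agreement: {agreement_rate:.0%}")
```

In practice the reconciliation itself is a discussion between the analysts rather than an automated step; the comparison simply narrows that discussion to the responses where the coding diverged.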
Instead, we provide this information as illustrative examples of how the agencies’ asset management practices reflect these characteristics. We used a variety of criteria to select these agencies, such as: whether the agency was among the agencies that had the largest real property portfolio; replacement value and total square footage of the portfolio; extent to which the bureau or independent agency had a notable asset management program as described by recommendations from practitioners we interviewed; and whether the agency was implementing the ISO 55000 standards. In order to ensure that we had a diversity of experiences and expertise from across the federal government, we limited our selection to independent agencies and one bureau-level entity from each cabinet department. Based on these factors, we selected: (1) U.S. Coast Guard (Coast Guard); (2) U.S. Army Corps of Engineers (Corps); (3) General Services Administration (GSA); (4) National Aeronautics and Space Administration (NASA); (5) National Park Service (Park Service); and (6) United States Forest Service (Forest Service). While our case-study agencies are not generalizable to all Chief Financial Officers Act (CFO) agencies, they provide a range of examples of agencies’ experiences with implementing asset management practices. We reviewed documents and interviewed officials from each of the six selected agencies to learn about the agency’s practices, its experiences with the ISO 55000 standards, and challenges it has faced in conducting asset management. In addition, we analyzed fiscal year 2017 Federal Real Property Profile (FRPP) data, as managed by GSA, to obtain information about each agency’s portfolio, such as the number of real property assets and total asset-replacement value, and to obtain examples of the types of buildings and structures owned by the six selected agencies.
The Corps and Coast Guard noted small differences between our analysis of the FRPP data and the data from their reporting systems. For example, the Corps reported having 139,744 real property assets as of August 2018 with an estimated asset replacement value of $273.4 billion as of September 2017. In addition, the Coast Guard reported 44,226 real property assets with an estimated asset replacement value of $17.6 billion as of September 2017. To ensure consistency, and because these differences were small, we relied on FRPP data rather than data from these agencies’ reporting systems. We conducted a data reliability assessment of the FRPP data by reviewing documentation, interviewing GSA officials, and verifying data with officials from our selected agencies, and concluded the data were reliable for the purposes of our reporting objectives. We also visited four locations from our case study agencies to discuss and view examples of how our selected case-study agencies are conducting asset management. Specifically, we visited the Park Service’s Santa Monica Mountains National Recreation Area in California; the Coast Guard’s Baltimore Shipyard in Curtis Bay, MD; the Corps’ Washington Aqueduct in Washington, D.C.; and the Corps’ Brandon Road Lock and Dam in Joliet, IL. We selected these locations based on several factors, including geographic and agency diversity, costs to travel to the locations, recommendations from officials at our case study agencies, and the extent to which the location provided illustrative examples of how federal agencies are managing their assets. To determine the 32 experts’ and practitioners’ views on challenges and benefits to implementing an asset management framework, we analyzed information collected from our interviews with the 22 experts previously mentioned. We also reviewed documents from and interviewed asset management practitioners from 10 additional organizations familiar with asset management practices and the ISO 55000 standards.
The 10 organizations included representatives from private industry, one federal agency, and local municipalities in Canada. We selected these additional 10 organizations by reviewing published materials related to asset management and referrals from our preliminary interviews. We interviewed the 32 experts and practitioners about their views on challenges and benefits to conducting asset management, ISO 55000, and illustrative examples of practices in other countries. The information gathered from our interviews with experts and practitioners is not generalizable but is useful in illustrating a range of views on asset management issues. See table 4 for a list of organizations we interviewed. To assess whether government-wide guidance and information on asset management reflect standards and key characteristics of an effective asset management framework, we reviewed current federal guidance and evaluated the extent to which this guidance incorporates practices described in the ISO 55000 standards and the six key characteristics of an effective asset management framework that we identified. Specifically, we reviewed the Federal Real Property Council’s (FRPC’s) 2004 Guidance for Improved Asset Management; OMB’s National Strategy for the Efficient Use of Real Property 2015-2020: Reducing the Federal Portfolio through Improved Space Utilization, Consolidation, and Disposal; and OMB’s Implementation of OMB Memorandum M-12-12 Section 3: Reduce the Footprint, Management Procedures Memorandum No. 2015-01. We also reviewed other OMB guidance, such as OMB’s 2017 Capital Programming Guide, OMB’s Circular A-123, OMB’s Memorandum 18-21, and other guidance. In addition, we reviewed asset management requirements in the Federal Real Property Management Act of 2016 and in the Federal Assets Sale and Transfer Act of 2016. We interviewed OMB and GSA officials about their role in supporting federal agencies’ asset management efforts.
In addition, we obtained information from our interviews with the 32 asset management experts and practitioners about practices that could be applicable to the federal government and opportunities to improve federal agencies’ asset management approaches. Lastly, we obtained documents and, as previously discussed, interviewed representatives from private organizations, federal agencies, and local municipalities in Canada—a country with over 20 years of experience in conducting asset management—to learn about their asset management practices, including their use of the ISO 55000 standard. We also conducted a site visit to Canada to learn more about their practices and to view examples of assets in local municipalities. See appendix I for more information on Canada’s asset management practices. We conducted this performance audit from August 2017 to November 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Amelia Shachoy, Assistant Director; Maria Mercado, Analyst-in-Charge; Sarah Arnett; Melissa Bodeau; Leia Dickerson; Alex Fedell; Geoffrey Hamilton; Terence Lam; Malika Rice; Kelly Rubin; and Tasha Straszewski made key contributions to this report.
The federal government is the largest real property owner in the United States and spends billions of dollars to operate and maintain these assets, which include buildings, roads, bridges, and utility systems. Federal agencies are responsible for developing asset management policies, processes, and plans. In 2014, the ISO 55000 asset management standards were issued. GAO was asked to examine federal agencies' real property asset management practices and the applicability of ISO 55000. This report discusses: (1) key characteristics of an effective asset management framework and how selected federal agencies' frameworks reflect these characteristics, and (2) whether government-wide asset management guidance and information reflect standards and key characteristics of an effective asset management framework, among other objectives. To conduct this work, GAO reviewed the ISO 55000 standards, relevant studies and literature, and interviewed 22 experts and 10 practitioners. GAO selected six federal agencies as case studies, including agencies with the largest real property portfolio and some agencies that were using the ISO 55000 standards. GAO reviewed documentation and interviewed officials from these six agencies, GSA, and OMB. GAO identified six key characteristics of an effective asset management framework (see table 1) that can help federal agencies manage their assets and resources effectively. GAO identified these key characteristics through reviews of the International Organization for Standardization (ISO) 55000 standards—an international consensus standard on asset management—studies and articles on asset management practices, and interviews with experts. GAO reviewed the asset management practices of six federal agencies: the U.S. Coast Guard (Coast Guard); U.S. Army Corps of Engineers (Corps); General Services Administration (GSA); National Park Service (Park Service); National Aeronautics and Space Administration (NASA); and U.S. 
Forest Service (Forest Service). Each of the six federal-agency frameworks GAO reviewed included some of the key characteristics.
Source for table 1: GAO analysis of ISO 55000 standards, asset management literature, and comments from experts. | GAO-19-57
While the Office of Management and Budget (OMB) has issued guidance to inform federal agencies' real property management efforts, the existing guidance does not reflect an effective asset management framework because it does not fully align with ISO 55000 standards and the key characteristics. For example, this guidance does not direct agencies to develop a comprehensive approach to asset management that incorporates strategic planning, capital planning, and operations, nor does it address maintaining leadership support, promoting a collaborative organizational culture, or evaluating and improving asset management practices. In addition, the guidance does not reflect information on successful agency asset management practices, information that officials from three of the six agencies GAO spoke with said would be helpful to them. OMB staff said that they did not plan to update existing government-wide guidance because OMB's real property management focus has shifted to the Reduce the Footprint initiative, which emphasizes efficiently managing and using buildings and warehouse space, rather than all assets. Without a more comprehensive approach, as described above, federal agencies may not have the knowledge needed to maximize the value of their limited resources. OMB should take steps to improve information on asset management to reflect leading practices. OMB had no comments on this recommendation.
Chemical attacks have emerged as a prominent homeland security risk because of recent attacks abroad using chemical agents and the interest of ISIS in conducting and inspiring chemical attacks against the West. DHS’s OHA officials have stated that nationwide preparedness for a chemical attack is critical to prevent, protect against, mitigate, respond to, and recover from such an attack because it could occur abruptly, with many victims falling ill quickly, and with a window of opportunity of a few hours to respond effectively. Also, recent incidents in Malaysia and the United Kingdom demonstrate that chemical agents can be used to target individuals and can contaminate other individuals near the attack area. Chemicals that have been used in attacks include chlorine, sarin, and ricin, all of which can have deadly or debilitating consequences for individuals exposed to them; see figure 1. Various laws guide DHS’s efforts to defend the nation from chemical threats and attacks. For example, under the Homeland Security Act of 2002, as amended, the Secretary of Homeland Security, through the Under Secretary for Science and Technology, has various responsibilities, to include conducting national research and developing, testing, evaluating, and procuring technology and systems for preventing the importation of chemical and other weapons and material; and detecting, preventing, protecting against, and responding to terrorist attacks. Under former Section 550 of the DHS Appropriations Act, 2007, DHS established the CFATS program to, among other things, identify chemical facilities and assess the security risk posed by each, categorize the facilities into risk-based tiers, and inspect the high-risk facilities to ensure compliance with regulatory requirements. DHS’s responsibilities with regard to chemical defense are also guided by various presidential directives promulgated following the September 11, 2001, terror attacks against the United States; see table 1. 
In 2010, Public Law 111-139 included a provision for us to identify and report annually on programs, agencies, offices, and initiatives—either within departments or government-wide—with duplicative goals and activities. In our annual reports to Congress from 2011 through 2018 in fulfillment of this provision, we described areas in which we found evidence of duplication, overlap, and fragmentation among federal programs, including those managed by DHS. To supplement these reports, we developed a guide to identify options to reduce or better manage the negative effects of duplication, overlap, and fragmentation, and evaluate the potential trade-offs and unintended consequences of these options. In this report, we use the following definitions: Duplication occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries. Overlap occurs when multiple programs have similar goals, engage in similar activities or strategies to achieve those goals, or target similar beneficiaries. Overlap may result from statutory or other limitations beyond the agency’s control. Fragmentation occurs when more than one agency (or more than one organization within an agency) is involved in the same broad area of national interest and opportunities exist to improve service delivery. DHS manages several programs and activities designed to prevent and protect against domestic chemical attacks. Prior to December 2017, for example, three DHS components—OHA, S&T, and NPPD—had specific programs and activities focused on chemical defense. In December 2017, DHS created the CWMD Office, which, as discussed later in this report, consolidated the majority of OHA and some other DHS programs and activities intended to counter weapons of mass destruction such as chemical weapons. Other DHS components—such as CBP, the Coast Guard, and TSA—have chemical defense programs and activities as part of their broader missions. 
These components address potential chemical attacks as part of an all-hazards approach to address a wide range of threats and hazards. Appendix I discusses in greater detail DHS’s programs and activities that focus on chemical defense, and appendix II discusses DHS components that have chemical defense responsibilities as part of an all-hazards approach. Table 2 identifies the chemical defense responsibilities of each DHS component, and whether that component has a specific chemical defense program or an all-hazards approach to chemical defense. Figure 2 shows that fiscal year 2017 funding levels for three of the programs that focus on chemical defense totaled $77.3 million. Specifically, about $1.3 million in appropriated funds was available for OHA for its Chemical Defense Program activities and S&T had access to about $6.4 million in appropriated funds for its Chemical Security Analysis Center activities. The CFATS program had access to about $69.6 million in appropriated funds—or 90 percent of the $77.3 million for the three programs—to regulate high-risk facilities that produce, store, or use certain chemicals. OHA officials stated that their efforts regarding weapons of mass destruction over the last few years had focused mostly on biological threats rather than chemical threats. For example, $77.2 million in fiscal year 2017 appropriated funds supported OHA’s BioWatch Program to provide detection and early warning of the intentional release of selected aerosolized biological agents in more than 30 jurisdictions nationwide. By contrast, as stated above, OHA and S&T had access to about $7.7 million in fiscal year 2017 appropriated funds for chemical defense efforts. We could not determine the level of funding for components that treated chemical defense as part of their missions under an all-hazards approach because those components do not have chemical defense funding that can be isolated from funding for their other responsibilities. 
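The funding breakdown above can be checked with simple arithmetic; the following is a minimal illustrative sketch (dollar figures taken from the report text):

```python
# Fiscal year 2017 appropriated funds available for DHS's three programs
# that focus on chemical defense, in millions of dollars (from the report).
funding = {
    "OHA Chemical Defense Program": 1.3,
    "S&T Chemical Security Analysis Center": 6.4,
    "CFATS program": 69.6,
}

total = sum(funding.values())
cfats_share = funding["CFATS program"] / total

print(f"Total: ${total:.1f} million")     # -> Total: $77.3 million
print(f"CFATS share: {cfats_share:.0%}")  # -> CFATS share: 90%
```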
For example, among other things, CBP identifies and interdicts hazardous chemicals at and between ports of entry as part of its overall mission to protect the United States from threats entering the country. DHS’s chemical defense programs and activities have been fragmented and not well coordinated, but DHS recently created the CWMD Office to, among other things, promote better integration and coordination among these programs and activities. While it is too early to tell the extent to which this new office will enhance this integration and coordination, developing a chemical defense strategy and related implementation plan would further assist DHS’s efforts. DHS’s chemical defense programs and activities have been fragmented and not well coordinated across the department. As listed in table 2 above, we identified nine separate DHS organizational units that have roles and responsibilities that involve conducting some chemical defense programs and activities, either as a direct mission activity or as part of their broader missions under an all-hazards approach. We also found examples of components conducting similar but separate chemical defense activities without DHS-wide direction and coordination. OHA and S&T—two components with specific chemical defense programs—both conducted similar but separate projects to assist local jurisdictions with preparedness. Specifically, from fiscal years 2009 to 2017, OHA’s Chemical Defense Program conducted chemical demonstration projects in five jurisdictions—Baltimore, Maryland; Boise, Idaho; Houston, Texas; New Orleans, Louisiana; and Nassau County, New York—to assist the jurisdictions in enhancing their preparedness for a large-scale chemical terrorist attack. According to OHA officials, they worked with local officials in one jurisdiction to install and test chemical detectors without having department-wide direction on these detectors’ requirements. 
Also, according to S&T officials, the Chemical and Biological Defense Division worked with three jurisdictions in New York and New Jersey to help them purchase and install chemical detectors for their transit systems beginning in 2016, again without having department-wide direction on chemical detector requirements. The Secret Service, CBP, and the Coast Guard—three components with chemical defense activities that are part of their all-hazards approach—also conducted separate acquisitions of chemical detection or identification equipment, according to officials from those components. For example, according to Secret Service officials, the agency has purchased chemical detectors that agents use for personal protection of protectees and for assessing the safety of designated fixed sites and temporary venues. Also, according to CBP officials, CBP has purchased chemical detectors for identifying chemical agents at ports of entry nationwide. Finally, according to Coast Guard officials, the agency has purchased chemical detectors for use in maritime locations subject to Coast Guard jurisdiction. Officials from OHA, S&T, and the CWMD Office acknowledged that chemical defense activities had been fragmented and not well coordinated. They stated that this fragmentation occurred because DHS had no department-wide leadership and direction for chemical defense activities. We recognize that equipment, such as chemical detectors, may be designed to meet the specific needs of components when they carry out their missions under different operating conditions, such as an enclosed space by CBP or on open waterways by the Coast Guard. Nevertheless, when fragmented programs and activities that are within the same department and are responsible for the same or similar functions are executed without a mechanism to coordinate them, the department may miss opportunities to leverage resources and share information that leads to greater effectiveness. 
As discussed earlier, DHS has taken action to consolidate some chemical defense programs and activities. Specifically, in December 2017, DHS consolidated some of its chemical, biological, radiological, and nuclear defense programs and activities under the CWMD Office. The CWMD Office consolidated the Domestic Nuclear Detection Office; the majority of OHA; selected elements of the Science and Technology Directorate, such as elements involved in chemical, biological, and integrated terrorism risk assessments and material threat assessments; and certain personnel from the DHS Office of Strategy, Policy, and Plans and the Office of Operations Coordination with expertise on chemical, biological, radiological, and nuclear issues. According to officials from the CWMD Office, the fiscal year 2018 funding for the office is $457 million. Of this funding, OHA contributed about $121.6 million and the Domestic Nuclear Detection Office contributed about $335.4 million. Figure 3 shows the initial organizational structure of the CWMD Office as of June 2018. As of July 2018, according to the Assistant Secretary of CWMD, his office, supported by DHS leadership, is working to develop and implement its initial structure, plans, processes, and procedures. To guide the initial consolidation, officials representing the CWMD Office said they plan to use the key practices for successful transformations and reorganizations identified in our past work. For example, they noted that they intend to establish integrated strategic goals, consistent with one of these key practices—establish a coherent mission and integrated strategic goals to guide the transformation. 
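The CWMD Office funding figures above likewise add up; a small illustrative check (amounts in millions of dollars, taken from the report text):

```python
# Fiscal year 2018 contributions to the CWMD Office's funding, in millions
# of dollars (figures from the report).
contributions = {
    "Office of Health Affairs": 121.6,
    "Domestic Nuclear Detection Office": 335.4,
}

total = sum(contributions.values())  # $457 million
for office, amount in contributions.items():
    print(f"{office}: {amount / total:.0%} of ${total:.0f} million")
```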
These officials stated that the goals include those intended to enhance the nation’s ability to prevent attacks using weapons of mass destruction, including toxic chemical agents; support operational components in closing capability gaps; and invest in and develop innovative technologies to meet technical requirements and improve operations. They noted that the latter might include networked chemical detectors that could be used by various components to help them carry out their mission responsibilities in the future. However, the officials stated that all of the new office’s efforts were in the initial planning stages and none had been finalized. They further stated that the initial setup of the CWMD Office covering the efforts to consolidate OHA and the Domestic Nuclear Detection Office may not be completed until the end of fiscal year 2018. It is still too early to determine the extent to which the creation of the CWMD Office will help address the fragmentation and lack of coordination on chemical defense efforts that we have identified. Our prior work on key steps for assisting mergers and transformations shows that transformation can take years to complete. One factor that could complicate this transformation is that the consolidation of chemical defense programs and activities is limited to certain components within DHS, such as OHA, and not others, such as some parts of S&T and NPPD. Officials from the CWMD Office stated that they intend to address this issue by coordinating the office’s chemical security efforts with other DHS components that are not covered by the consolidation, such as those S&T functions that are responsible for developing chemical detector requirements. 
These officials also stated that they intend to address fragmentation by coordinating with and supporting DHS components that have chemical defense responsibilities as part of their missions under an all-hazards approach, such as the Federal Protective Service, CBP, TSA, the Coast Guard, and the Secret Service. Furthermore, the officials stated that they plan to coordinate DHS’s chemical defense efforts with other government agencies having chemical programs and activities at the federal and local levels. In October 2011, the Secretary of Homeland Security designated FEMA to coordinate the development of a strategy and implementation plan to enhance federal, state, local, tribal and territorial government agencies’ ability to respond to and recover from a catastrophic chemical attack. In November 2012, DHS issued a chemical response and recovery strategy that examined core capabilities and identified areas where improvements were needed. The strategy identified a need for, among other things, (1) a common set of catastrophic chemical attack planning assumptions, (2) a formally established DHS oversight body responsible for chemical incident response and recovery, (3) a more rapid way to identify the wide range of chemical agents and contaminants that comprise chemical threats, and (4) reserve capacity for mass casualty medical care. The strategy also identified the principal actions needed to fill these gaps. For example, with regard to identifying the range of chemical agents and contaminants that comprise chemical threats, the strategy focused on the capacity to screen, search for, and detect chemical hazards (and noted that this area was cross-cutting with prevention and protection). 
The strategy stated that, among other things, the Centers for Disease Control and Prevention, the Department of Agriculture and Food and Drug Administration, the Department of Defense, the Environmental Protection Agency, and DHS components, including the Coast Guard, provide screening, search, and detection capabilities. However, the strategy noted that “DHS does not have the requirement to test, verify, and validate commercial-off-the-shelf (COTS) chemical detection equipment purchased and fielded by its various constituent agencies and components, nor by the first responder community.” According to a November 2012 memorandum transmitting the response and recovery strategy to DHS employees, the distribution of the strategy was only to be used for internal discussion purposes and was not to be distributed outside of DHS because it had not been vetted by other federal agencies and state, local, tribal, and territorial partners. The memorandum and the strategy further stated that DHS was developing a companion strategy focused on improving the national capacity to prevent, protect against, and mitigate catastrophic chemical threats and attacks and noted that once this document was complete, DHS would engage with its partners to solicit comments and feedback. The strategy also stated that DHS intended to develop a separate implementation plan that would define potential solutions for any gaps identified, program any needed budget initiatives, and discuss programs to enhance DHS’s core capabilities and close any gaps. DHS officials representing OHA and S&T told us that DHS had intended to move forward with the companion strategy and the accompanying implementation plan but the strategy and plan were never completed because of changes in leadership and other competing priorities within DHS. 
At the time of our discussion and prior to the establishment of the CWMD Office, OHA officials also noted that DHS did not have a singular entity or office responsible for chemical preparedness. An official representing S&T also said that the consolidation of some chemical, biological, radiological, and nuclear efforts may help bring order to chemical defense efforts because DHS did not have an entity in charge of these efforts or a strategy for guiding them. Now that DHS has established the CWMD Office as the focal point for chemical, biological, radiological, and nuclear programs and activities, DHS has an opportunity to develop a chemical defense strategy and related implementation plan to better integrate and coordinate the department’s programs and activities to prevent, protect against, mitigate, respond to, and recover from a chemical attack. The Government Performance and Results Act of 1993 (GPRA), as updated by the GPRA Modernization Act of 2010 (GPRAMA), includes principles for agencies to focus on the performance and results of programs by putting elements of a strategy and plan in place such as (1) establishing measurable goals and related measures, (2) developing strategies and plans for achieving results, and (3) identifying the resources that will be required to achieve the goals. Although GPRAMA applies to the department or agency level, in our prior work we have reported that these provisions can serve as leading practices for strategic planning at lower levels within federal agencies, such as planning for individual divisions, programs, or initiatives. Our past work has also shown that a strategy is a starting point and basic underpinning to better manage federal programs and activities such as DHS’s chemical defense efforts. A strategy can serve as a basis for guiding operations and can help policy makers, including congressional decision makers and agency officials, make decisions about programs and activities. 
It can also be useful in providing accountability and guiding resource and policy decisions, particularly in relation to issues that are national in scope and cross agency jurisdictions, such as chemical defense. When multiple agencies are working to address aspects of the same problem, there is a risk that duplication, overlap, and fragmentation among programs can waste scarce funds, confuse and frustrate program customers, and limit overall program effectiveness. A strategy and implementation plan for DHS’s chemical defense programs and activities would help mitigate these risks. Specifically, a strategy and implementation plan would help DHS further define its chemical defense capability, including opportunities to leverage resources and capabilities and provide a roadmap for addressing any identified gaps. By defining DHS’s chemical defense capability, a strategy and implementation plan may also better position the CWMD Office and other components to work collaboratively and strategically with other organizations, including other federal agencies and state, local, tribal, and territorial jurisdictions. Officials from the CWMD Office agreed that the establishment of the new office was intended to provide leadership to and help guide, support, integrate, and coordinate DHS’s chemical defense efforts and that a strategy and implementation plan could help DHS better integrate and coordinate its fragmented chemical defense programs and activities. Recent chemical attacks abroad and the threat of ISIS to use chemical weapons against the West have sparked concerns about the potential for chemical attacks occurring in the United States. DHS components have developed and implemented a number of separate chemical defense programs and activities that, according to DHS officials, have been fragmented and not well coordinated within the department. 
In December 2017, DHS consolidated some of its programs and activities related to weapons of mass destruction, including those related to chemical defense, by establishing the new CWMD Office. It is too early to tell whether and to what extent this office will help address fragmentation and the lack of coordination across all DHS’s weapons of mass destruction efforts, including chemical efforts. However, as part of its consolidation, the CWMD Office would benefit from developing a strategy and implementation plan to guide, support, integrate, and coordinate DHS’s programs and activities to prevent, protect against, mitigate, respond to, and recover from a chemical attack. A strategy and implementation plan would also help the CWMD Office guide DHS’s efforts to address fragmentation and coordination issues and would be consistent with the office’s aim to establish a coherent mission and integrated strategic goals. The Assistant Secretary for Countering Weapons of Mass Destruction should develop a strategy and implementation plan to help the Department of Homeland Security, among other things, guide, support, integrate and coordinate its chemical defense programs and activities; leverage resources and capabilities; and provide a roadmap for addressing any identified gaps. (Recommendation 1) We provided a draft of this report to DHS for review and comment. DHS provided comments, which are reproduced in full in appendix III and technical comments, which we incorporated as appropriate. DHS concurred with our recommendation and noted that the Assistant Secretary for CWMD will coordinate with the DHS Under Secretary for Strategy, Policy, and Plans and other stakeholders to develop a strategy and implementation plan that will better integrate and direct DHS chemical defense programs and activities. DHS estimated that it will complete this effort by September 2019. These actions, if fully implemented, should address the intent of this recommendation. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Homeland Security, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (404) 679-1875 or CurrieC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. At the time our review began, the Department of Homeland Security (DHS) had three headquarters components with programs and activities focused on chemical defense. These were the Office of Health Affairs’ (OHA) Chemical Defense Program; the Science and Technology Directorate’s (S&T) Chemical and Biological Defense Division and Chemical Security Analysis Center (CSAC); and the National Protection and Programs Directorate’s (NPPD) Chemical Facility Anti-Terrorism Standards (CFATS) program and Sector Outreach and Programs Division. Each component had dedicated funding to manage the particular chemical defense program or activity (with the exception of the Sector Outreach and Programs Division because this division funds DHS activities related to all critical infrastructure sectors, including the chemical sector). On December 7, 2017, DHS established the Countering Weapons of Mass Destruction (CWMD) Office, which incorporated most of OHA and selected elements of S&T, together with other DHS programs and activities related to countering chemical, biological, radiological, and nuclear threats. 
According to DHS, the CWMD Office was created to, among other things, elevate and streamline DHS’s efforts to prevent terrorists and other national security threat actors from using harmful agents, such as chemical agents, to harm Americans and U.S. interests. OHA, which was subsumed by the CWMD Office in December 2017, was responsible for enhancing federal, state, and local risk awareness and planning and response mechanisms in the event of a chemical incident through the Chemical Defense Program. This program provided medical and technical expertise to OHA leadership and chemical defense stakeholders including DHS leadership, DHS components, the intelligence community, federal interagency partners, and professional and academic preparedness organizations. The program’s efforts focused on optimizing local preparedness and response to chemical incidents that exceed the local communities’ capacity and capability to act during the first critical hours by providing guidance and tools for first responders and supporting chemical exercises for preparedness. DHS’s Chief Medical Officer was responsible for managing OHA. The Chemical Defense Program expended about $8.3 million between fiscal years 2009 and 2017 in chemical demonstration projects and follow-on funding to assist five jurisdictions in their chemical preparedness: Baltimore, Maryland; Boise, Idaho; Houston, Texas; New Orleans, Louisiana; and Nassau County, New York. For example, in Baltimore, OHA assisted the Maryland Transit Administration with the selection and installation of chemical detection equipment to integrate new technology into community emergency response and planning. In the other four locales, OHA assisted these partners in conducting multiple scenarios specific to each city based on high-risk factors identified by the Chemical Terrorism Risk Assessment (CTRA), which is a risk assessment produced by CSAC every 2 years. 
These included indoor and outdoor scenarios in which persons were “exposed” to either an inhalant or a substance on their skin. Figure 4 summarizes the scenarios conducted in each city and some of the lessons learned. According to OHA summary documentation, a key finding from this work was that timely decisions and actions save lives and manage resources in response to a chemical incident. Since the completion of the five-city project, OHA has been working to, among other things, continue to develop a lessons learned document based on the project, as well as a related concept of operations, that state and local jurisdictions could use to respond to chemical incidents. As of December 7, 2017, OHA was consolidated into the CWMD Office and its functions transferred to the new office, according to officials from the CWMD Office. The Chief Medical Officer is no longer responsible for managing OHA but serves as an advisor to the Assistant Secretary for Countering Weapons of Mass Destruction and as the principal advisor to the Secretary and the Administrator of FEMA on medical and public health issues related to natural disasters, acts of terrorism, and other man-made disasters, among other things. S&T’s Homeland Security Advanced Research Projects Agency includes the Chemical and Biological Defense Division, which supports state and local jurisdictions by, for example, helping them model potential chemical attacks. The Chemical and Biological Defense Division worked with the City of New York to develop chemical detection modeling by simulating a chemical attack. As a result of the simulation, New York City officials wanted to implement mechanisms to prevent the potential consequences of a chemical attack in a large city. S&T’s Office of National Laboratories includes the CSAC, which identifies and characterizes the chemical threat against the nation through analysis and scientific assessment. 
CSAC is responsible for producing, among other things, the CTRA, a comprehensive evaluation of the risks associated with domestic toxic chemical releases produced every 2 years. CSAC officials chair the Interagency Chemical Risk Assessment Working Group that meets to develop the CTRA, identify chemical hazards, and produce a list of priority chemicals. This working group is comprised of DHS components, federal partners, and private industry officials that share industry information to ensure accurate and timely threat and risk information is included in the CTRA. To complement the CTRA, CSAC developed a standalone CTRA desktop tool that DHS components can use to conduct risk-based modeling of a potential chemical attack and provide results to DHS components, such as the U.S. Secret Service, for advance planning of large-scale events. In addition, CSAC conducts tailored risk assessments addressing emerging threats such as fentanyl, a synthetic opioid that has caused numerous deaths across the United States. CSAC sends these assessments, along with other intelligence and threat information, to relevant DHS components, federal agencies, state and local partners, and private entities so this information can be used in planning and decision making. Officials from eight DHS components we spoke with said they use CSAC information in their work and that CSAC products are useful. CSAC conducted two exercises, known as Jack Rabbit I and II, to experimentally characterize the effects of a large-scale chemical release and to understand the reason for the differences seen between real-world events and modeling predictions. These exercises were intended to strengthen industry standards in chemical transportation, as well as response and recovery plans. Outputs and data from these exercises have been used to write first responder guidelines for these types of events and are being taught in nationwide fire and hazmat courses. 
The fiscal year 2018 President’s Budget request did not ask for an appropriation to fund CSAC. However, the Consolidated Appropriations Act, 2018, did provide funding for CSAC. Furthermore, in May 2018, the Secretary delegated responsibility for conducting the non-research and development functions related to the Chemical Terrorism Risk Assessment to the CWMD Office. The CFATS program uses a multitiered risk assessment process to determine a facility’s risk profile by requiring facilities in possession of specific quantities of designated chemicals of interest to complete an online questionnaire. CFATS program officials said they also use CSAC data as part of the process for making decisions about which facilities should be covered by CFATS, and their level of risk. If CFATS officials make a determination that a facility is high-risk, the facility must submit a vulnerability assessment and a site security plan or an alternative security program for DHS approval that includes security measures to meet risk-based performance standards. We previously reported on various aspects of the CFATS program and identified challenges that DHS was experiencing in implementing and managing the program. We made a number of recommendations to strengthen the program to include, among other things, that DHS verify that certain data reported by facilities is accurate, enhance its risk assessment approach to incorporate all elements of risk, conduct a peer review of the program to validate and verify DHS’s risk assessment approach, and document processes and procedures for managing compliance with site security plans. DHS agreed with all of these recommendations and has either fully implemented them or taken action to address them. 
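The CFATS screening flow described above can be sketched as a simple decision sequence. The sketch below is purely illustrative: the field names and steps paraphrase the report's description, not the actual CFATS methodology or data model.

```python
from dataclasses import dataclass

# Illustrative only: attribute names and the ordering of steps are
# assumptions based on the report's narrative, not CFATS internals.
@dataclass
class Facility:
    name: str
    holds_chemicals_of_interest: bool  # at or above screening quantities
    submitted_questionnaire: bool      # completed the online questionnaire
    assessed_high_risk: bool           # outcome of DHS's risk assessment

def next_cfats_step(f: Facility) -> str:
    """Return the next step in this simplified screening flow."""
    if not f.holds_chemicals_of_interest:
        return "not covered by CFATS"
    if not f.submitted_questionnaire:
        return "complete online questionnaire"
    if f.assessed_high_risk:
        return "submit vulnerability assessment and site security plan"
    return "no further CFATS requirements"
```

For example, a facility that holds chemicals of interest, has filed its questionnaire, and is assessed as high-risk would next owe DHS a vulnerability assessment and site security plan (or an alternative security program).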
The Sector Outreach and Programs Division works to enhance the security and resilience of chemical facilities that may or may not be considered high-risk under the CFATS program and plays a nonregulatory role as the sector-specific agency for the chemical sector. The Sector Outreach and Programs Division works with the chemical sector through the Chemical Sector Coordinating Council, the Chemical Government Coordinating Council, and others in a public-private partnership to share information on facility security and resilience. In addition, the division and the coordinating councils help enhance the security and resilience of chemical facilities that may or may not be considered high-risk under the CFATS program. The division and councils are to collaborate with federal agencies, chemical facilities, and state, local, tribal, and territorial entities to, among other things, assess risks and share information on chemical threats and chemical facility security and resilience. Further, the Protective Security Coordination Division in the Office of Infrastructure Protection works with facility owners and operators to conduct voluntary assessments at facilities. Department of Homeland Security (DHS) components conduct various prevention and protection activities related to chemical defense. These activities are managed by individual components as part of their overall mission under an all-hazards approach. U.S. Coast Guard - The Coast Guard uses fixed and portable chemical detectors to identify and interdict hazardous chemicals as part of its maritime prevention and protection activities. It also responds to hazardous material and chemical releases in U.S. waterways. The Coast Guard also staffs the 24-hour National Response Center, which is the national point of contact for reporting all oil and hazardous materials releases into the water, including chemicals that are discharged into the environment. 
The National Response Center also takes maritime reports of suspicious activity and security breaches at facilities regulated by the Maritime Transportation Security Act of 2002. Under this act, the Coast Guard regulates security at certain chemical facilities and other facilities possessing hazardous materials. U.S. Customs and Border Protection (CBP) - CBP interdicts hazardous chemicals at U.S. borders and ports of entry as part of its overall mission to protect the United States from threats entering the country. Among other things, CBP has deployed chemical detectors to ports of entry nationwide that were intended for narcotics detection, but can also be used by CBP officers to presumptively identify a limited number of chemicals. Also, CBP’s National Targeting Center helps to screen and identify high-risk packages that may contain hazardous materials at ports of entry. In addition, CBP’s Laboratories and Scientific Services Directorate manages seven nationally accredited field laboratories, where staff detect, analyze, and identify hazardous substances, including those that could be weapons of mass destruction. When CBP officers send suspected chemical weapons, narcotics, and other hazardous materials to the labs, the labs use various confirmatory analysis technologies, such as infrared spectroscopy and mass spectrometry, to positively identify them. Also, the Directorate has a 24-hour Teleforensic Center for on-call scientific support for CBP officers who have questions on suspected chemical agents. Federal Emergency Management Agency (FEMA) - FEMA provides preparedness grants to state and local governments for any type of all-hazards preparedness activity, including chemical preparedness. 
According to FEMA data, in fiscal year 2016, states used about $3.5 million, local municipalities used about $48.5 million, and tribal and territorial municipalities used about $80,000 in preparedness grant funding for chemical defense, including prevention and protection activities, as well as mitigation, response, and recovery efforts related to a chemical attack. Office of Intelligence and Analysis (I&A) - I&A gathers intelligence information on all homeland security threats, including chemical threats. Such threat information is compiled and disseminated to relevant DHS components and federal agencies. For example, I&A works with CSAC to provide intelligence information for the CTRA and writes the threat portion of that assessment. I&A also receives information from CSAC on high-risk gaps in intelligence to help better inform chemical defense intelligence reporting. Also, the Under Secretary of I&A serves as the Vice-Chair of the Counterterrorism Advisory Board. This board is responsible for coordinating, facilitating, and sharing information regarding DHS’s activities related to mitigating current, emerging, perceived, or possible terrorist threats, including chemical threats; and providing timely and accurate advice and recommendations to the Secretary and Deputy Secretary of Homeland Security on counterterrorism issues. NPPD’s Federal Protective Service (FPS) - FPS secures federally owned and leased space in various facilities across the country. Federal facilities are assigned a facility security level determination ranging from a Level 1 (low risk) to a Level 5 (high risk). As part of its responsibility, FPS is to conduct Facility Security Assessments of the buildings and properties it protects that cover all types of hazards, including a chemical release, in accordance with Interagency Security Committee standards and guidelines.
FPS is to conduct these assessments at least once every 5 years for Level 1 and 2 facilities, and at least once every 3 years for Level 3, 4, and 5 facilities. FPS conducts the assessments using a Modified Infrastructure Survey Tool. Transportation Security Administration (TSA) - TSA efforts to address the threat of chemical terrorism have been focused on the commercial transportation of bulk quantities of hazardous materials and testing related to the release of commercially transported chemicals that could be used as weapons of mass destruction. TSA’s activities with respect to hazardous materials transportation aim to reduce the vulnerability of shipments of certain hazardous materials through the voluntary implementation of operational practices by motor carriers and railroads, and ensure a secure transfer of custody of hazardous materials to and from rail cars at chemical facilities. Also, in May 2003, TSA began requiring that all commercial motor vehicle operators licensed to transport hazardous materials, including toxic chemicals, successfully complete a comprehensive background check conducted by TSA. According to TSA documents, approximately 1.5 million of the nation’s estimated 6 million commercial drivers have successfully completed the vetting process. Additionally, TSA recently partnered with five mass transit and passenger rail venues, together with other DHS components such as DHS’s Science and Technology Directorate and the U.S. Secret Service, to test chemical detection technologies for such venues. In addition, TSA is responsible for the Transportation Sector Security Risk Assessment, which examines the potential threat, vulnerabilities, and consequences of a terrorist attack involving the nation’s transportation systems.
This assessment’s risk calculations for several hundred specific risk scenarios, including chemical weapons attacks, are based on the elements of threat, vulnerability, and consequence, using a combination of subject matter expert judgments and modeling results. U.S. Secret Service - The Secret Service is responsible for protecting its protectees and designated fixed sites and temporary venues from all threats and hazards, including chemical threats. For example, the Secret Service conducts security assessments of sites, which may involve chemical detection, and coordinates with other agencies for preparedness or response to threats and hazard incidents. In addition, the Secret Service has a Hazardous Agent Mitigation Medical Emergency Response team dedicated to responding to numerous hazards, including chemical threats and incidents. In addition to the contact named above, John Mortin (Assistant Director), Juan Tapia-Videla (Analyst-in-Charge), Michelle Fejfar, Ashley Grant, Imoni Hampton, Eric Hauswirth, Tom Lombardi, Sasan J. “Jon” Najmi, Claire Peachey, and Kay Vyas made key contributions to this report.
|
Recent chemical attacks abroad and the threat of using chemical weapons against the West by the Islamic State of Iraq and Syria (ISIS) have raised concerns about the potential for chemical attacks occurring in the United States. DHS's chemical defense responsibilities include, among others, managing and coordinating federal efforts to prevent and protect against domestic chemical attacks. GAO was asked to examine DHS's chemical defense programs and activities. This report examines (1) DHS programs and activities to prevent and protect against domestic chemical attacks and (2) the extent to which DHS has integrated and coordinated all of its chemical defense programs and activities. GAO reviewed documentation and interviewed officials from relevant DHS offices and components and reviewed DHS strategy and planning documents and federal laws and directives related to chemical defense. The Department of Homeland Security (DHS) manages several programs and activities designed to prevent and protect against domestic attacks using chemical agents (see figure). Some DHS components have programs that focus on chemical defense, such as the Science and Technology Directorate's (S&T) chemical hazard characterization. Others have chemical defense responsibilities as part of their broader missions, such as U.S. Customs and Border Protection (CBP), which interdicts chemical agents at the border. DHS recently consolidated some chemical defense programs and activities into a new Countering Weapons of Mass Destruction (CWMD) Office. However, GAO found and DHS officials acknowledged that DHS has not fully integrated and coordinated its chemical defense programs and activities. Several components—including CBP, U.S. Coast Guard, the Office of Health Affairs, and S&T—have conducted similar activities, such as acquiring chemical detectors or assisting local jurisdictions with preparedness, separately, without DHS-wide direction and coordination. 
As components carry out chemical defense activities to meet mission needs, there is a risk that DHS may miss an opportunity to leverage resources and share information that could lead to greater effectiveness in addressing chemical threats. It is too early to tell the extent to which the new CWMD Office will enhance the integration of DHS's chemical defense programs and activities. Given the breadth of DHS's chemical defense responsibilities, a strategy and implementation plan would help the CWMD Office (1) mitigate the risk of fragmentation among DHS programs and activities, and (2) establish goals and identify resources to achieve these goals, consistent with the Government Performance and Results Modernization Act of 2010. This would also be consistent with a 2012 DHS effort, since abandoned, to develop a strategy and implementation plan for all chemical defense activities, from prevention to recovery. DHS officials stated the 2012 effort was not completed because of leadership changes and competing priorities. GAO recommends that the Assistant Secretary for the CWMD Office develop a strategy and implementation plan to help DHS guide, support, integrate, and coordinate chemical defense programs and activities. DHS concurred with the recommendation and identified actions to address it.
|
The total number of SFSP meals served nationwide during the summer— one indicator of program participation—increased from 113 million meals in fiscal year 2007 to 149 million meals in fiscal year 2016, or by 32 percent. Although almost half of the total increase in meals served in the summer months was due to increases in lunches, when comparing across each of the meal types, supper and breakfast had the largest percentage increases over the 10-year period, 50 and 48 percent, respectively (see table 1). The increase in SFSP meals over this time period was generally consistent with increases in the number of meals served in the National School Lunch Program (NSLP), the largest child nutrition assistance program, during this period. Although states reported the actual number of SFSP meals served to FNS for reimbursement purposes, they estimated the number of children participating in SFSP, and these participation estimates have been calculated inconsistently, impairing FNS’s ability to inform program implementation and facilitate strategic planning and outreach to areas with low participation. Specifically, state agencies calculated a statewide estimate of children’s participation in the SFSP, referred to as average daily attendance (ADA), using sponsor-reported information on the number of meals served and days of operation in July of each year. However, according to our review of states’ survey responses and FNS documents, states’ methods for calculating ADA have differed from state to state and from year to year. For example, although FNS directed states to include the number of meals served in each site’s primary meal service—which may or may not be lunch—some states calculated ADA using only meals served at lunch. In addition, five states reported in our survey that the method they used to calculate ADA in fiscal year 2016 differed from the one they used previously. 
While FNS clarified its instructions in May 2017 to help improve the consistency of states’ ADA calculations moving forward, ADA, even if consistently calculated, remained an unreliable estimate of children’s daily participation in SFSP for at least two reasons. First, ADA did not account for existing variation in the number of days that each site serves meals to children. Specifically, because FNS’s instructions indicated that sites’ ADAs were to be combined to provide a statewide ADA estimate, differences in the number of days of meal service at each site were disregarded. As a result, ADA did not reflect the average number of children served SFSP meals daily throughout the month. Second, ADA was an unreliable estimate of children’s participation in SFSP because it did not account for state variation in the month with the greatest number of SFSP meals served. According to FNS officials, the agency instructed states to calculate ADA for July because officials identified this as the month with the largest number of meals served nationwide. However, according to our analysis of nationwide FNS data, in summer 2016, 26 states served more SFSP meals in June or August than in July. Although FNS had taken some steps to identify other data that states collect on the SFSP, at the time of our May 2018 report, FNS had not yet used this information to help improve its estimate of children’s participation in the program. In 2015, FNS published a Request for Information, asking whether states or sponsors collected any SFSP data that were not reported to FNS, and received responses from 15 states. The responses suggested some states collected additional data, such as site-level data, that may allow for an improved estimate of children’s SFSP participation, potentially addressing the issues identified in our analysis. 
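The first distortion described above, summing per-site ADAs regardless of how many days each site operates, can be illustrated with a small numerical sketch (the site figures below are hypothetical, not FNS data):

```python
# Illustrative sketch: why summing per-site ADAs misstates average daily
# participation when sites operate different numbers of days.
# Hypothetical site data: (meals served in the month, days of operation).
sites = [(3000, 30),   # site open all 30 days of the month
         (500, 5)]     # site open only 5 days

# Per-site ADA as instructed (meals / days of operation), summed statewide.
statewide_ada = sum(meals / days for meals, days in sites)

# Average number of children actually served daily across the month:
# total meals divided by days in the month (assume a 30-day month).
days_in_month = 30
avg_daily_served = sum(meals for meals, _ in sites) / days_in_month

print(statewide_ada)     # 200.0 -> the 5-day site counts at full weight
print(avg_daily_served)  # ~116.7 -> average children served per day
```

Because both sites average 100 meals per operating day, the summed ADA reports 200 children served daily, even though on most days of the month only one site is open; the estimate overstates typical daily participation by roughly 70 percent in this hypothetical.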
FNS also followed up with several of these states in 2016 and 2017 to explore the feasibility of collecting additional data and improving estimates of children’s SFSP participation. FNS stated in a May 2017 memo to states that it is critical that the agency’s means of estimating children’s participation in the SFSP is as accurate as possible because it helps inform program implementation at the national level and facilitates strategic planning and outreach to areas with low participation. Yet, at the time of our report, FNS had not taken further action to improve the estimate. In our May 2018 report, we concluded that FNS’s limited understanding of children’s participation in the SFSP impaired its ability to both inform program implementation and facilitate strategic planning and outreach to areas with low participation. To improve FNS’s estimate of children’s participation in the SFSP, we recommended that FNS focus on addressing, at a minimum, data reliability issues caused by variations in the number of operating days of meal sites and in the months in which states see the greatest number of meals served. FNS generally agreed with this recommendation. Other federal and nonfederal programs that operate solely in the summer, as well as those operating year-round, helped feed low-income children in the summer months. For example, in 2016, FNS data indicated about 26 million meals were served through the NSLP’s Seamless Summer Option, a separate federal program that streamlines administrative requirements for school meal providers serving summer meals. Some children also received summer meals through nonfederal programs operated by entities such as faith-based organizations and foodbanks, though the reach of these efforts was limited, according to our state survey and interviews with providers and national organizations at the time of our report. 
For example, of the 27 states that reported in our survey awareness of the geographic coverage of these nonfederal programs, 11 states indicated that they operated in some portions of the state—the most common state response. States and SFSP providers reported challenges with issues related to meal site availability, children’s participation, and program administration, though federal, state, and local entities had taken steps to improve these areas. For example, a lack of available transportation, low population density, and limited meal sites posed challenges for SFSP implementation in rural areas, according to states we surveyed, selected national organizations, and state and local officials in the three states we visited. In response, state and local entities took steps, such as transporting meals to children by bus, to address these issues—efforts that FNS supported through information sharing and grants. States and SFSP providers also reported challenges with meal site safety, and FNS’s efforts to address this area were limited. Seventeen states reported in our survey that ensuring summer meal sites are in safe locations was moderately to very challenging. Some states and sponsors took steps to help address this issue, and FNS also used its available authorities to grant some states and sponsors flexibility with respect to the requirement that children consume summer meals on site, such as when safety at the site is a concern. However, our review of FNS documentation showed FNS had not clearly communicated to all states and sponsors the circumstances it considers when deciding whether to grant this flexibility. These circumstances—described in letters the agency sent to requesting states—generally included verification that violent criminal activity occurred both within a 6-block radius of the meal site and within the 72 hours prior to the meal service.
Although FNS officials explained that they reviewed state and sponsor requests for flexibility due to safety concerns on a case-by-case basis, they also acknowledged that the set of circumstances they used to approve state and sponsor requests for flexibility, which we identified in their letters to states, had been used repeatedly. Further, states and sponsors reported challenges obtaining the specific data needed for approval of a site for this type of flexibility, including inconsistent availability of timely data, which hampered some providers’ efforts to ensure safe delivery of meals. We concluded that unless FNS shared information with all states and sponsors on the circumstances it considered when deciding whether to grant flexibility with respect to the requirement that children consume summer meals on site, states and sponsors would likely continue to be challenged to use this flexibility, hindering its usefulness in ensuring safe summer meal delivery to children. We therefore recommended that FNS communicate to all SFSP stakeholders the circumstances it considers in approving requests for flexibility with respect to the requirement that children consume SFSP meals on-site in areas that have experienced crime and violence, taking into account the feasibility of accessing data needed for approval, to ensure safe delivery of meals to children. FNS generally agreed with this recommendation. We also found that while FNS had issued reports to Congress evaluating some of its demonstration projects, as required under its statutory authorities, the agency had not issued any such reports to Congress specifically on the use of flexibilities with respect to the on-site requirement in areas where safety was a concern. As previously discussed, the agency is required to annually submit certain reports to Congress regarding the use of waivers and evaluations of projects carried out under its demonstration authority. 
FNS officials told us that they had not evaluated or reported on these flexibilities, in part, because they had limited information on their outcomes. We concluded that without understanding the impact of its use of these flexibilities, neither FNS nor Congress knew whether these flexibilities were helping provide meals to children—the goal of the program. Accordingly, we recommended that FNS evaluate and annually report to Congress, as required by statute, on its use of waivers and demonstration projects to grant states and sponsors flexibility with respect to the requirement that children consume SFSP meals on-site in areas experiencing crime or violence, to improve understanding of the use and impact of granting these flexibilities on meeting program goals. FNS generally agreed with this recommendation. Although FNS had established program and policy simplifications to help lessen the administrative burden on sponsors participating in multiple child nutrition programs, challenges in this area persisted, indicating that information had not reached all relevant state agencies. According to officials we spoke with from a national organization involved in summer meals, management of each child nutrition program and the processes related to applications, funding, and oversight were fragmented in many states. For example, in one of the states we visited, a sponsor that provided school meals during the school year told us they had to fill out 60 additional pages of paperwork to provide summer meals, which they described as a significant burden. FNS officials told us that some of the duplicative requirements might have been a function of differences in statute, and although FNS provided guidance to states on simplified procedures for sponsors participating in more than one child nutrition program, some states might have chosen not to implement them.
We concluded that without further efforts from FNS to disseminate information on current options for streamlining administrative requirements across multiple child nutrition programs, overlapping and duplicative administrative requirements may limit children’s access to meals by discouraging sponsor participation in child nutrition programs. We recommended that FNS disseminate information about the existing streamlining options, and FNS generally agreed with this recommendation. Chairman Rokita, Ranking Member Polis, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time. If you or your staff have any questions about this testimony, please contact Kathryn A. Larin at (202) 512-7215 or larink@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Rachel Frisk, Melissa Jaynes, and Claudine Pauselli. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
This testimony summarizes information contained in GAO's May 2018 report entitled Summer Meals: Actions Needed to Improve Participation Estimates and Address Program Challenges, GAO-18-369. It addresses (1) what is known about SFSP participation, (2) other programs that help feed low-income children over the summer, and (3) challenges in providing summer meals to children and the extent to which USDA provides assistance to address these challenges. For its May 2018 report, GAO reviewed relevant federal laws, regulations, and guidance; analyzed USDA's SFSP data for fiscal years 2007 through 2016; and surveyed state agencies responsible for administering the SFSP in 50 states and the District of Columbia. GAO also visited a nongeneralizable group of 3 states and 30 meal sites, selected based on Census data on child poverty rates and urban and rural locations, and analyzed meal site data from these 3 states. In addition, GAO interviewed USDA, state, and national organization officials, as well as SFSP providers, including sponsors and site operators. Nationwide, the total number of meals served to children in low-income areas through the Summer Food Service Program (SFSP) increased from 113 million to 149 million (about 32 percent) from fiscal year 2007 through 2016, according to GAO's May 2018 report. GAO noted that the U.S. Department of Agriculture (USDA) directed states to use the number of meals served, along with other data, to estimate the number of children participating in the SFSP. However, GAO found that participation estimates had been calculated inconsistently from state to state and year to year. In 2017, USDA took steps to improve the consistency of participation estimates, noting they are critical for informing program implementation and strategic planning. However, GAO determined that the method USDA directed states to use would continue to provide unreliable estimates of participation, hindering USDA's ability to use them for these purposes.
Other federal and nonfederal programs helped feed low-income children over the summer to some extent, according to states GAO surveyed and SFSP providers and others GAO interviewed for its May 2018 report. For example, GAO found that in July 2016, about 26 million meals were served through a separate federal program that allowed school meal providers to serve summer meals, according to USDA data. Some children also received summer meals through nonfederal programs operated by faith-based organizations and foodbanks, though GAO's state survey and interviews with SFSP meal providers and national organizations indicated the reach of such efforts was limited. In GAO's May 2018 report, states and SFSP meal providers reported challenges with issues related to meal sites, participation, and program administration, though USDA, state, and local officials had taken some steps to address these issues. Seventeen states in GAO's survey and several providers in the states GAO visited reported a challenge with ensuring meal sites were in safe locations. To address this issue, USDA granted some states and providers flexibility from the requirement that children consume meals on-site. However, GAO found that USDA had not broadly communicated the circumstances it considered when granting this flexibility or reported to Congress on the use of flexibilities with respect to the on-site requirement in areas where safety was a concern, as required. As a result, neither USDA nor Congress knew whether these flexibilities were helping provide meals to children and meeting program goals. Further, officials from national and regional organizations GAO interviewed, as well as providers GAO visited, reported challenges related to the administrative burden associated with participating in multiple child nutrition programs.
Although USDA had established program and policy simplifications to help lessen related burdens, the persistence of challenges in this area suggested that information had not reached all relevant state agencies, potentially limiting children's access to meals by discouraging provider participation. In its May 2018 report, GAO made four recommendations, including that USDA improve estimates of children's participation in SFSP, communicate the circumstances it considers when granting flexibilities to ensure safe meal delivery, evaluate and annually report to Congress on its use of waivers and demonstration projects when granting these flexibilities, and disseminate information about existing flexibilities available to streamline administrative requirements for providers participating in multiple child nutrition programs. USDA generally agreed with GAO's recommendations.
|
Qualified health plans sold through the exchanges must meet certain minimum requirements, including those related to benefits coverage. Beyond these requirements, many elements of plans can vary, including their cost and availability. Those who opt to enroll in a plan generally pay for their health care in two ways: (1) a premium to purchase the insurance, and (2) cost-sharing for the particular health services they receive (for example, deductibles, coinsurance, and co-payments). Qualified health plans are offered at one of four metal tiers that reflect the out-of-pocket costs that may be incurred by a consumer. These tiers correspond to the plan’s actuarial value—a measure of the relative generosity of a plan’s benefits that is expressed as a percentage of the covered medical expenses expected to be paid, on average, by the issuer for a standard population and set of allowed charges for in-network providers. In general, as actuarial value increases, consumer cost-sharing decreases. The actuarial values of the metal tiers are: bronze (60 percent), silver (70 percent), gold (80 percent), and platinum (90 percent). If an issuer sells a qualified health plan on an exchange, it must offer at least one plan at the silver level and one plan at the gold level; issuers are not required to offer bronze or platinum plans. Individuals purchasing coverage through the exchanges may be eligible, depending on their incomes, to receive financial assistance to offset the costs of their coverage. According to HHS, more than 80 percent of enrollees obtained financial assistance in the first half of 2017, which came in the form of premium tax credits or cost-sharing reductions. Premium tax credits. These are designed to reduce an eligible individual’s premium costs, and can either be paid in advance on a monthly basis to an enrollee’s issuer—referred to as advance premium tax credits—or received after filing federal income taxes for the prior year.
To be eligible for premium tax credits, enrollees must generally have household incomes of at least 100, but no more than 400, percent of the federal poverty level. The amount of the premium tax credit varies based on enrollees’ income relative to the cost of premiums for their local benchmark plan—which is the second lowest cost silver plan available—but consumers do not need to be enrolled in the benchmark plan in order to be eligible for these tax credits. Cost-sharing reductions. Enrollees who qualify for premium tax credits, have household incomes between 100 and 250 percent of the federal poverty level, and enroll in a silver tier plan may also be eligible to receive cost-sharing reductions, which lower enrollees’ deductibles, coinsurance, and co-payments. To reimburse issuers for reduced cost-sharing from qualified enrollees, HHS made payments to issuers (referred to as cost-sharing reduction payments) until October 2017, when it discontinued these payments. Despite HHS’s decision to discontinue cost-sharing reduction payments, issuers are still required under PPACA to offer cost-sharing reductions to eligible enrollees. Since consumers who receive these reductions are generally enrolled in silver plans, insurance commissioners in most states instructed the issuers in their states to increase 2018 premiums for silver plans offered on the exchanges to reflect the discontinued federal payments. This has been referred to as “silver-loading” and resulted in substantial increases in exchange-based silver plan premiums for 2018. (See fig. 1.) Because the amount of an eligible enrollee’s premium tax credit is based on the premium for the enrollee’s local benchmark plan (the second lowest cost silver plan available to an enrollee), the value of this form of financial assistance also increased significantly for 2018. As we have previously reported, the number and type of plans available in the health insurance exchanges varies from year to year.
Issuers can add new plans and adjust or discontinue existing plans from year to year, as long as the plans meet certain minimum requirements—such as covering essential health benefits. Issuers can also extend or restrict the locations in which they offer plans. According to HHS, while individuals seeking 2018 coverage were able to select from an average of 25 plans across the various metal tiers, 29 percent of consumers were able to select from plans from only one issuer. HHS performs outreach to increase awareness of the open enrollment period and facilitate enrollment among healthcare.gov consumers— including those new to the exchanges as well as those returning to renew their coverage. Outreach to these different types of enrollees can vary. For example, while outreach to those new to the exchanges may focus more on the importance of having insurance, outreach to existing enrollees may focus on encouraging them to go back to the exchange to shop for the best option. All exchanges are required to carry out certain functions to assist consumers with their applications for enrollment and financial assistance, among other things. HHS requires exchanges to operate a website and toll-free call center to address the needs of consumers requesting assistance with enrollment, and to conduct outreach and educational activities to help consumers make informed decisions about their health insurance options. HHS administers the federal healthcare.gov website, which allows consumers in states using the website for enrollment to directly compare health plans based on a variety of factors, such as premiums and provider networks. HHS also operates a Marketplace Call Center to respond to consumer questions about enrollment. Consumers may apply for coverage through the call center, the website, via mail, or in person (in some areas), with assistance from navigator organizations or agents and brokers. Navigators. 
PPACA required all exchanges to establish “navigator” programs to conduct public education activities to raise awareness of the availability of coverage available through the exchanges, among other things. As part of HHS’s funding agreement with navigator organizations in states using the federally facilitated exchange, HHS requires them to maintain relationships with consumers who are uninsured or underinsured. They must also examine consumers’ eligibility for other government health programs, such as Medicaid, and provide other assistance to consumers—for example, by helping them understand how to access their coverage. Agents and Brokers. Licensed by states, agents and brokers may also provide assistance to those seeking to enroll in a health plan sold on the exchanges; however, they are generally paid by issuers. They may sell products for one issuer from which they receive a salary, or from a variety of issuers and be paid a commission for each plan they sell. About 8.7 million consumers enrolled in healthcare.gov plans during the open enrollment period for 2018 coverage, 5 percent less than the 9.2 million who enrolled for 2017. This decline continues a trend from 2016, when a peak of 9.6 million consumers enrolled in such plans. Since that peak, enrollment has decreased by 9 percent. Enrollment in plans sold by state-based exchanges that use their own enrollment website has remained relatively stable during the same time period, with just over 3.0 million enrollees each year since 2016. Overall, enrollment in federal and state exchanges has declined 7 percent from a peak of nearly 12.7 million enrollees in 2016, largely driven by the decrease in enrollment in exchanges using healthcare.gov. (See table 1.) HHS officials told us that they did not want to speculate on the specific factors that affected enrollment this year, but noted that the exchanges are designed for consumers to utilize as needed, which includes degrees of fluctuation from year to year. 
Decreased demand for exchange-based insurance could be influenced by increases in the numbers of people with other types of health coverage, such as coverage through other public programs or employer-sponsored insurance. Enrollees who were new to healthcare.gov coverage made up a smaller proportion of total enrollees in 2018 than in 2017, continuing a trend seen in prior years. The proportion of new enrollees decreased from 33 percent (3 million) in 2017 to 28 percent (2.5 million) in 2018 (see fig. 2). Some stakeholders noted the importance of enrolling new, healthy enrollees each year to maintain the long-term viability of the exchanges. However, other stakeholders noted that they had expected the number and proportion of new enrollees to decrease over time because a large majority of those who wanted coverage and were eligible for financial assistance had likely already enrolled. The increasing proportion of enrollees who return to the exchanges for their coverage could also demonstrate their need for or satisfaction with this coverage option. The demographic characteristics of enrollees remained largely constant from 2017 through 2018. For example, the proportion of enrollees with household incomes of 100 to 250 percent of the federal poverty level remained similar at 71 percent in 2017 and 70 percent in 2018. In addition, the proportion of enrollees whose households were located in rural areas was 18 percent in both years. However, the proportion of healthcare.gov enrollees aged 55 and older increased from 27 percent in 2017 to 29 percent in 2018. Appendix III provides detailed information on the characteristics of enrollees in 2017 and 2018. According to stakeholders we interviewed, plan affordability likely played a major role in 2018 exchange enrollment—both attracting and detracting from enrollment—and in enrollees’ plan selection.
In 2018, premiums across all healthcare.gov plans increased an average of 30 percent—more than expected given overall health cost trends. As a result of these premium increases, plans were less affordable in 2018 than in 2017 for the roughly 15 percent of exchange consumers who did not use advance premium tax credits. One driver of these premium increases was the elimination of federal cost-sharing reduction payments to issuers in late 2017, which resulted in larger premium increases for silver tier plans (the most popular healthcare.gov metal tier). For example, among enrollees who did not use advance premium tax credits, the average monthly premium amount paid for silver plans increased 45 percent (from $424 in 2017 to $614 in 2018). Average premiums for these enrollees also increased for bronze and gold plans, but not by as much—22 percent for bronze plans (from $374 in 2017 to $455 in 2018) and 23 percent for gold plans (from $509 in 2017 to $628 in 2018). Most stakeholders we interviewed told us the decreased affordability of plans likely resulted in lower enrollment in exchange plans for these consumers. Some stakeholders we interviewed reported personally encouraging consumers who were not eligible for premium tax credits to purchase their coverage off the exchanges, where they could often purchase the same health insurance plan for a lower price. However, despite overall premium increases, plans became more affordable for the more than 85 percent of exchange consumers who used advance premium tax credits, because the value of the premium tax credits increased significantly to compensate for the higher premiums of silver plans. For example, the average value of monthly advance premium tax credits for those enrolled in any exchange plan increased 44 percent, from $383 in 2017 to $550 in 2018—the largest increase in the program’s history.
As a result, enrollees who used advance premium tax credits faced lower net monthly premiums on average in 2018 than they had in 2017—specifically, enrollees’ average net monthly premiums across all plans decreased 16 percent, from $106 in 2017 to $89 in 2018. According to most stakeholders we interviewed, the enhanced affordability of net monthly premiums among consumers who used advance premium tax credits likely encouraged enrollment among this group. (See fig. 3.) Stakeholders we interviewed also noted that plan affordability likely played a major role in enrollees’ plan selection, including the metal tier of their coverage. This finding is consistent with our prior work, which showed that plan cost—including premiums—is a driving factor in exchange enrollees’ selection of a plan. Specifically, we found that while silver plans remained the most popular healthcare.gov metal tier, covering 65 percent of all enrollees in 2018, this proportion decreased 9 percentage points from 2017 as more enrollees selected bronze and gold plans. (See fig. 4.) Stakeholders reported that consumers using advance premium tax credits benefited from enhanced purchasing power in 2018 due to the impact of silver loading, which likely served as a driving factor in these consumers’ plan selections. Specifically, they noted that the increased availability of free bronze and low-cost gold plans (after tax credits were applied) for such consumers likely explained why many enrollees moved from silver to bronze or gold plans for 2018. While average monthly net premiums paid by these consumers decreased overall from 2017 to 2018 due to the tax credits, the changes were most pronounced for those enrolled in bronze or gold plans (which decreased 36 and 39 percent, respectively), compared to silver plans (which decreased 13 percent). Separately, the enhanced affordability of gold plans, along with the richer benefits they offer, likely led some consumers to move from silver to gold plans in 2018.
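The interplay between gross premiums, tax credits, and net premiums described above is simple arithmetic. The sketch below uses the averages quoted in this section; the implied gross premiums it derives are a back-of-the-envelope reconstruction for illustration, not figures from the report:

```python
# Average monthly figures for consumers using advance premium tax credits, as quoted above
net_2017, credit_2017 = 106, 383   # average net premium and tax credit, 2017
net_2018, credit_2018 = 89, 550    # average net premium and tax credit, 2018

# Implied average gross premium = net premium + tax credit (illustrative reconstruction)
gross_2017 = net_2017 + credit_2017
gross_2018 = net_2018 + credit_2018

# Net premiums fell even as gross premiums rose, because credits grew faster
pct_change_net = round((net_2018 - net_2017) / net_2017 * 100)
print(gross_2017, gross_2018, pct_change_net)  # 489 639 -16
```

The -16 matches the 16 percent decrease in average net premiums reported above, while the implied gross premiums rise, consistent with the overall premium increases described earlier.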
While the average monthly net premium amount paid for gold plans in 2018 ($207) remained higher than that for less generous silver plans ($88) among those using advance premium tax credits, it was nearly 40 percent lower than the average net premium for gold plans in 2017 ($340). Stakeholders also reported that consumers in some areas were able to access gold plans for a lower cost than silver plans. The proportion of enrollees in gold plans using advance premium tax credits increased from 49 percent to 74 percent—signaling that many enrollees used their higher tax credits to enroll in richer gold plan coverage. As the proportion of enrollees with silver plans declined for 2018, so too did the proportion of enrollees with cost-sharing reductions—which are generally only available to those with silver plans. Specifically, 54 percent of healthcare.gov enrollees received these subsidies in 2018, 6 percentage points lower than the 60 percent who received these subsidies in 2017. Stakeholders we interviewed reported that a variety of factors other than plan affordability also likely affected 2018 exchange enrollment, but opinions on the impact of each factor were mixed. Specifically, most stakeholders we interviewed, including all 4 navigator organizations and 3 professional trade organizations, reported that consumer confusion about PPACA and its status likely played a major role in detracting from 2018 healthcare.gov enrollment. Some of these stakeholders attributed consumers’ confusion about the exchanges to efforts to repeal and replace PPACA. In addition, many stakeholders attributed consumer confusion to the Administration’s negative statements about PPACA. Further, many stakeholders reported that, as a result of the public debate during 2017 over whether to repeal and replace PPACA, many consumers had questions about whether the law had been repealed and whether insurance coverage was still available through the exchanges.
However, other stakeholders reported that this debate likely did not affect enrollment and that consumers who needed exchange-based coverage were likely able to find the information they required to enroll. In addition, many stakeholders noted that consumer understanding and enrollment were aided by increased outreach and education events conducted by many groups, including some state and local governments, hospitals, issuers, and community groups. Many stakeholders also noted that the volume of exchange-related news increased significantly before and during the open enrollment period for 2018 coverage, in part due to the ongoing political debate about the future of the exchanges. These stakeholders agreed that this increase in reporting about the exchanges likely resulted in increased consumer awareness and enrollment, even in cases where the coverage negatively portrayed the exchanges. Many stakeholders also said that reductions in HHS outreach and advertising of the open enrollment period likely detracted from 2018 enrollment, in part because any reduction in promoting enrollment detracts from overall consumer awareness and understanding of the program and its open enrollment period. In particular, some stakeholders reported that outreach and advertising are especially important for increasing new enrollment, especially among younger and healthier consumers whose enrollment can help ensure the long-term stability of the exchanges. However, other stakeholders reported that these reductions likely had no effect on enrollment, noting that most consumers who needed exchange-based coverage were already enrolled in it and were well aware of the program, and also noting that enrollment in 2018 did not dramatically change compared with that of 2017. Stakeholders we interviewed were largely divided on the effects of other factors on 2018 healthcare.gov enrollment, including the shorter 6-week open enrollment period.
For example, about half of the stakeholders said that the shorter open enrollment period likely led fewer consumers to enroll due to lack of consumer awareness of the new deadline, as well as to challenges related to the reduced capacity of those helping consumers to enroll. However, many others said that the shorter open enrollment period likely had no effect. In particular, some of these stakeholders noted that enrollment in 2018 was similar to that for 2017 and that during prior open enrollment periods the majority of consumers had enrolled by December 15, as this was the deadline for coverage that began in January. Figure 5 displays the range of stakeholder views on factors affecting 2018 healthcare.gov enrollment, and appendix IV provides selected stakeholder views on these factors. HHS reduced its consumer outreach—including paid advertising and navigator funding—for the 2018 open enrollment period. Further, HHS allocated the navigator funding using a narrower approach and problematic data, including consumer application data that it acknowledged were unreliable and navigator organization-reported goal data that were based on an unclear description of the goal, and which HHS and navigator organizations likely interpreted differently. HHS reduced the amount it spent on paid advertising for the 2018 open enrollment period by 90 percent, spending $10 million as compared to the $100 million it spent for the 2017 open enrollment period. HHS officials reported that their 2018 advertising approach was a success, noting that they cut wasteful spending on advertising, which resulted in a more cost-effective approach. HHS officials told us that the agency elected to reduce funding for paid advertising to better align with its spending on paid advertising for the Medicare open enrollment period.
According to the officials, HHS targeted its reduced funding toward low-cost forms of paid advertising that HHS studies showed were effective in driving enrollment, and that could be targeted to specific populations, such as individuals aged 18 to 34 and individuals who had previously visited healthcare.gov. For example, for 2018, HHS spent about 40 percent of its paid advertising budget on two forms of advertising aimed at reaching these populations. Specifically, HHS spent $1.2 million on the creation of two digital advertising videos that were targeted to potential young enrollees, and $2.7 million on search advertising, in which Internet search engines displayed a link to healthcare.gov when individuals used relevant search terms. HHS followed up with individuals that visited the link to encourage them to enroll. Agency officials said they focused some of their paid advertising on individuals aged 18 to 34 because in the prior open enrollment period many individuals in this age range enrolled after December 15—the deadline for the 2018 open enrollment period. HHS officials said they did not use paid television advertising because it was too expensive and because it was not optimal for attracting young enrollees—although a 2017 HHS study found this was one of the most effective forms of paid advertising for enrolling new and returning individuals during the prior open enrollment period. See appendix V for HHS’s expenditures for paid advertising for the 2017 and 2018 open enrollment periods. HHS reduced navigator funding by 42 percent for 2018, spending $37 million compared to the $63 million it spent for 2017. According to HHS officials, the agency reduced this funding due to a shift in the Administration’s priorities. 
For the 2018 open enrollment period, HHS planned to rely more heavily on agents and brokers—another source of in-person consumer assistance who, unlike navigator organizations funded through federal grants, are generally paid by the issuers they represent. HHS took steps to highlight agents’ and brokers’ availability and to enable consumers to enroll through them. For example, for the 2018 open enrollment period, HHS made a new “Help on Demand” tool available on healthcare.gov that connected consumers directly to local agents or brokers. HHS also developed a streamlined enrollment process for those enrolling through agents and brokers. HHS also changed its approach for allocating the navigator funding to focus on a narrower measure of navigator organization performance than it had used in the past. According to HHS officials, in prior years, HHS awarded funding based on navigator organizations’ performance on a variety of tasks, such as the extent to which navigator organizations met their self-imposed goals for numbers of public outreach events and individuals assisted with applications for exchange coverage and selection of exchange plans. HHS officials said the agency previously also took state-specific factors, such as the number of uninsured individuals in a state, into account when awarding funding. HHS calculated preliminary navigator funding awards for 2018 using this approach. However, according to HHS officials, the agency later decided to change both its budget and approach for allocating navigator funding for 2018 to hold navigator organizations more accountable for the number of individuals they enrolled in exchange plans. In its new funding allocation approach, rather than taking into account navigator organization performance on a variety of tasks, HHS considered only performance in achieving one goal—the number of individuals each navigator organization planned to assist with selecting or enrolling in exchange plans for 2017 coverage.
In implementing this new approach, HHS compared the number of enrollees whose 2017 exchange coverage applications included navigator identification numbers with each navigator organization’s self-imposed goal. For navigator organizations that did not appear to meet their goals, HHS decreased their preliminary 2018 award amounts proportionately. For navigator organizations that appeared to meet or exceed their goals, HHS left their preliminary 2018 award amounts unchanged. Based on this change in approach, HHS offered 81 of its 98 navigator organizations less funding for 2018, with decreases ranging from less than 1 percent to 98 percent of 2017 funding levels. HHS offered 4 of the 98 navigator organizations increased funding and 13 the same level of funding they received for 2017 (see fig. 6). We found that the data HHS used for its revised funding approach were problematic for multiple reasons. In particular, prior to using the 2017 consumer application data as part of its 2018 funding calculations, HHS had acknowledged that these data were unreliable, in part because navigators were not consistently entering their identification numbers into applications during the 2017 open enrollment period. Specifically, HHS stated in a December 9, 2016, email to navigator organizations that the application data were unreliable and thus could not be used. Over 4 million individuals had enrolled in 2017 coverage by December 10, 2016, so it is likely that many of the applications that HHS used in its 2018 funding calculation included incomplete or inaccurate information with respect to navigator assistance. HHS provided guidance to navigator organizations in the December 2016 email on the importance of, and locations for, entering identification numbers into applications to help improve the reliability of the data. 
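The proportional adjustment described above can be expressed as a simple formula. The sketch below is a minimal illustration; the report describes the decrease only as proportionate, so the exact formula, and the award and goal figures used, are assumptions for illustration:

```python
def adjusted_award(preliminary_award, goal, measured_enrollments):
    """Scale a preliminary award by the fraction of the enrollment goal that
    appeared to be met; organizations at or above goal keep the full amount."""
    fraction_met = min(1.0, measured_enrollments / goal)
    return preliminary_award * fraction_met

# Hypothetical organization: $100,000 preliminary award, goal of 1,000 enrollments
print(adjusted_award(100_000, 1_000, 600))    # 60000.0 (appeared to meet 60% of goal)
print(adjusted_award(100_000, 1_000, 1_200))  # 100000.0 (met goal; award unchanged)
```

Under a formula of this shape, any undercount of navigator-assisted enrollments—such as applications missing navigator identification numbers—translates directly into a funding reduction, which is why the reliability of the application data matters.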
However, some data reliability issues may have remained throughout the 2018 open enrollment period, as two of the navigator organizations we interviewed reported ongoing challenges entering navigator identification numbers into applications during this period. For example, representatives from one navigator organization reported that the application field where navigators enter their identification number was at times pre-populated with an agent or broker’s identification number. Consumer application data may therefore still be unreliable for use in HHS navigator funding decisions that would be expected later this year for 2019. Moreover, the 2017 goal data that HHS used in its funding calculation were also problematic because HHS described the goal in an unclear manner when it asked navigator organizations to set their goals. As a result, HHS’s interpretation of the goal was likely different from how navigator organizations interpreted and established it. Specifically, in its award application instructions, HHS asked navigator organizations to provide a goal for the number of individuals that they “expected to be assisted with selecting/enrolling in (including re-enrollments),” but HHS did not provide guidance to navigator organizations on how it would interpret the goal. HHS officials told us that they wanted to allow navigator organizations full discretion in setting their goals, since the organizations know their communities best. In its funding calculation, HHS interpreted this goal as the number of individuals navigator organizations planned to enroll in exchange plans. However, as written in the award application instructions, the goal could be interpreted more broadly, because not all individuals whom navigators assist with the selection of exchange plans ultimately apply and enroll in coverage.
Representatives from one navigator organization we spoke with said they did interpret this goal more broadly than how it was ultimately interpreted by HHS—and thus set it as the number of consumers they planned to assist in a variety of ways, not limiting it to those they expected to assist through to the final step of enrollment in coverage. The navigator organization therefore set a higher goal than it otherwise would have, had it understood HHS’s interpretation of the goal, and ultimately received a decrease in funding for 2018. As a result, we found that two of the three inputs in HHS’s calculation of 2018 navigator organization awards were problematic (see fig. 7). HHS’s reduced funding and revised funding allocation approach resulted in a range of implications for navigator organizations. According to HHS officials, eight of the navigator organizations that were offered reduced funding for 2018—with reductions ranging from 50 to 98 percent of 2017 funding levels—declined their awards and withdrew from the program. HHS reported asking the remaining navigator organizations to focus on re-enrolling consumers who had coverage in 2017 and resided in areas where issuers reduced or eliminated plan offerings for 2018, and informing consumers about the shortened open enrollment period for 2018 coverage. Representatives of the navigator community group we interviewed reported that many navigator organizations did focus their resources on enrollment and cut back on outreach efforts, particularly in rural areas. According to self-reported navigator organization data provided by HHS, navigator organizations collectively reported conducting 68 percent fewer outreach events during the 2018 open enrollment period as compared to the 2017 period. Representatives from the navigator organizations we interviewed also reported making changes to their operations; for example, officials from one of the navigator organizations reported cutting staff and rural office locations. 
Officials from another navigator organization said that they focused their efforts on contacting prior exchange enrollees to assist them with re-enrollment, instead of finding and enrolling new consumers, and de-prioritized assistance with Medicaid enrollment. The three navigator organizations we spoke with that had funding cuts for 2018 also reported that their ability to perform the full range of navigator duties during the rest of the year would be compromised because they needed to make additional cuts in their operations—such as reducing staff and providing less targeted assistance to underserved populations—in order to reduce total costs. One of the three navigator organizations reported that it may go out of business at the end of the 2018 award year. HHS’s narrower approach to awarding funding; lack of reliable, complete data on the extent to which navigator organizations enrolled individuals in exchange plans; and lack of clear guidance to navigator organizations on how to set their goals could hamper the agency’s ability to use the program to meet its objectives. Federal internal control standards state that management should use quality information to achieve the agency’s objectives, such as by using relevant, reliable data for decision-making. Without reliable performance data and accurate goals, HHS will be unable to measure the effectiveness of the navigator program and take informed action as necessary. Further, because HHS calculated awards using problematic data, navigator organizations may have received awards that did not accurately reflect their performance in enrolling individuals in exchange plans. Additionally, HHS’s narrow focus on exchange enrollment limited its ability to make decisions based on relevant information. Moving forward, this may affect navigator organizations’ interests and abilities in providing a full range of services to their communities, including underserved populations. 
This, in turn, could affect HHS’s ability to meet its objectives, such as its objective of improving Americans’ access to health care. HHS did not set any numeric targets for total healthcare.gov enrollment for 2018, as it had in prior years. In prior years, HHS used numeric targets to monitor enrollment progress during the open enrollment period and to focus its resources on those consumers that it believed had a high potential to enroll in exchange coverage. For example, HHS established a target of enrolling a total of 13.8 million individuals during the 2017 open enrollment period and also set numeric enrollment targets for 15 regional markets that the agency identified as presenting strong opportunities for meaningful enrollment increases, partly due to having a high percentage of eligible uninsured individuals. HHS used these regional target markets to focus its outreach, travel, and collaborations with local partners. According to agency officials, during prior open enrollment periods, HHS monitored its performance with respect to its targets and revised its outreach efforts in order to better meet its goals. According to federal internal control standards, agencies should design control activities to achieve their objectives, such as by establishing and monitoring performance measures. HHS has recognized the importance of these internal controls by requiring state-based exchanges to develop performance measures and report on their progress. Without numeric targets for healthcare.gov enrollment, HHS is hampered both in performing high-level assessments of its performance and progress and in making critical decisions about how to use its resources.
HHS may also be unable to ensure that it meets its objectives—including its current objective of improving Americans’ access to health care, in part by stabilizing the market and implementing policies that increase the mix of younger and healthier consumers purchasing plans through the individual market. HHS leadership decided against setting numeric enrollment targets for the 2018 open enrollment period and instead focused on a goal of enhancing the consumer experience, according to HHS officials. Specifically, HHS measured the consumer experience based on its assessment of healthcare.gov availability and functionality and of call center availability and customer satisfaction. HHS officials told us that they selected these measures of the consumer experience because healthcare.gov and the call center represent two of the largest channels through which consumers interact with the exchange. HHS reported meeting its goal based on consumers’ improved experiences with these two channels, aspects of which had been problematic in the past. (See fig. 8.) Healthcare.gov. According to HHS officials, the healthcare.gov website achieved enhanced availability and functionality for the 2018 open enrollment period, continuing a trend in improvements over prior years. While HHS scheduled similar periods of healthcare.gov downtime for maintenance in 2017 and 2018, the website had less total downtime during the 2018 open enrollment period because the agency needed to conduct less maintenance. HHS officials attributed the increased availability in part to an operating system upgrade and comprehensive testing of the website that they conducted before the 2018 open enrollment period began. In addition, unlike prior years, HHS officials said that the agency published scheduled maintenance information for 2018 to reduce scheduling conflicts for consumers and groups providing enrollment assistance.
HHS also reported enhancing the functionality of the website for the 2018 open enrollment period, including by adding new tools, such as a “help on demand” feature that links consumers with a local agent or broker willing to assist them, as well as updated content that included more plain language. Many stakeholders we interviewed told us that healthcare.gov functioned well during the open enrollment period and was more available than it had been in prior years. Call Center Assistance. According to HHS officials, the call center reduced wait times and improved customer satisfaction scores in 2018, continuing a trend in improvements over prior years. HHS officials reported average wait times of 5 minutes, 38 seconds for the 2018 open enrollment period—almost four minutes shorter than the average wait time experienced during a comparable timeframe of the 2017 open enrollment period. HHS officials attributed this reduction in wait times to improvements in efficiency, including scripts that used fewer words and generated fewer follow-up questions. In addition, there was a modest reduction in call center volume during similar timeframes of the 2017 and 2018 open enrollment periods. Many stakeholders we interviewed reported that call center assistance was more readily available this year than it had been in prior years. HHS officials also reported an average call center customer satisfaction score of 90 percent in 2018 compared to 85 percent in 2017, based on surveys conducted at the end of customer calls. Although HHS officials reported that the agency met its goal of enhancing specific aspects of the consumer experience for the 2018 open enrollment period, HHS narrowly defined its goal and excluded certain aspects of the consumer experience that it had identified as key as recently as 2017.
More specifically, in 2017, HHS reported that successful outreach and education events and the availability of in-person consumer assistance, such as that provided by navigators to help consumers understand plan options, were key aspects of the consumer experience. However, HHS did not include these key items when measuring progress toward its 2018 goal of enhancing the consumer experience. Federal internal control standards state that agencies should identify risks that affect their defined objectives and use quality information to achieve these objectives, including by identifying the information required to achieve the objectives and address related risks. Because HHS excluded key aspects of the consumer experience from its evaluation of its performance, its assessment of the consumer experience may be incomplete. For example, as noted above, some stakeholders we interviewed told us that consumer confusion likely detracted from enrollment for 2018, and some linked this outcome to HHS’s reduced role in promoting exchange enrollment, including navigator support, which may have resulted in less in-person consumer assistance through navigators. HHS’s assessment of the consumer experience, which focused only on consumers who used the website or reached out to the call center during open enrollment, did not account for the experiences of those who interacted with the health insurance exchanges through other channels, such as through navigators or agents and brokers. Some experts have raised questions about the long-term stability of the exchanges absent sufficient enrollment, including among young and healthy consumers. To encourage exchange enrollment, HHS has traditionally conducted a broad outreach and education campaign, including funding navigator organizations that provide in-person enrollment assistance.
For the 2018 open enrollment period, HHS reduced its support of navigator organizations and changed its approach for allocating navigator funding to focus on exchange enrollment alone. HHS allocated the funding based on performance data that were problematic for multiple reasons, including because some of the underlying data were unreliable. As a result, navigator organizations received funding that reflected a more limited evaluation of their performance than HHS had used in the past, and that may not have accurately reflected their performance. This raises the risk that navigator organizations will decrease the priority they place on fulfilling a range of other duties for which they are responsible, including providing assistance to traditionally underserved populations, which some navigator organizations we interviewed reported they had either decreased or planned to decrease due to reduced funding. HHS’s lack of complete and reliable data on navigator organization performance hampers the agency’s ability to make appropriately informed decisions about funding. Moreover, its focus on enrollment alone in awarding funding may affect navigator organizations’ ability to fulfill the full range of their responsibilities, which could in turn affect HHS’s ability to use the program as a way to meet its objective of enhancing Americans’ access to health care. In addition, the lack of numeric enrollment targets for HHS to evaluate its performance with respect to the open enrollment period hampers the agency’s ability to make informed decisions about its resources. HHS reported achieving a successful consumer experience for the 2018 open enrollment period based on enhancing its performance in areas that had been problematic in the past. However, the agency’s evaluation of its performance did not include aspects of the consumer experience that it identified in 2017 as key, and for which stakeholders reported problems in 2018. 
As a result, its assessment of its performance in enhancing the consumer experience was likely incomplete. Absent a more complete assessment, HHS may not have the information it needs to fully understand the consumer experience.

We are making the following three recommendations to HHS:

The Secretary of HHS should ensure that the approach and data it uses for determining navigator award amounts accurately and appropriately reflect navigator organization performance, for example, by
1. providing clear guidance to navigator organizations on performance goals and other information they must report to HHS that will affect their future awards,
2. ensuring that the fields used to capture the information are functioning properly, and
3. assessing the effect of its current approach to funding navigator organizations to ensure that it is consistent with the agency's objectives. (Recommendation 1)

The Secretary of HHS should establish numeric enrollment targets for healthcare.gov, to ensure it can monitor its performance with respect to its objectives. (Recommendation 2)

Should the agency continue to focus on enhancing the consumer experience as a goal for the program, the Secretary of HHS should assess other aspects of the consumer experience, such as those it previously identified as key, to ensure it has quality information to achieve its goal. (Recommendation 3)

We provided a draft of this report to HHS for comment. In its comments, reproduced in appendix VI, HHS concurred with two of our three recommendations. HHS also provided technical comments, which we incorporated as appropriate. HHS concurred with our recommendation that it ensure that the approach and data it uses for determining navigator awards accurately and appropriately reflect navigator organization performance.
In its comments on our draft report, HHS stated that it had notified navigator organizations that their funding would be linked to the organizations' self-identified performance goals and their ability to meet those goals. On July 10, 2018, HHS issued its 2019 funding opportunity announcement for the navigator program, which required those applying for the award to set performance goals, including for the number of consumers assisted with enrollment and re-enrollment in exchange plans, and also stated that failure to meet such goals may negatively affect a recipient's application for future funding. In its comments, HHS also noted that it is in the process of updating the healthcare.gov website so that individual applications can hold the identification numbers of multiple entities, such as navigators and agents or brokers, and will work to ensure that the awards align with agency objectives. HHS also concurred with our recommendation that the agency assess other aspects of the consumer experience, such as those it previously identified as key, to ensure it has quality information to achieve its goal. HHS noted that it had assessed the consumer experience based on the availability of the two largest channels supporting exchange operations, and also noted that it will consider focusing on other aspects of the consumer experience as needed. HHS did not concur with our recommendation that the agency establish numeric enrollment targets for healthcare.gov, to ensure that it can monitor its performance with respect to its objectives. Specifically, HHS noted that there are numerous external factors that can affect a consumer's decision to enroll in exchange coverage that are outside of the control of HHS, including the state of the economy and employment rates. HHS stated that it does not believe that enrollment targets are relevant to assess the performance of a successful open enrollment period related to the consumer experience.
Instead, it believes a more informative performance metric would be to measure whether everyone who utilized healthcare.gov, qualified for coverage, and desired to purchase coverage was able to make a plan selection. We continue to believe that the development of numeric enrollment targets is important for effective monitoring of the program and management of its resources. Without establishing numeric enrollment targets for upcoming open enrollment periods, HHS's ability to evaluate its performance and make informed decisions about how it should deploy its resources is limited. We also believe that these targets could help the agency meet its program objectives of stabilizing the market and of increasing the mix of younger and healthier consumers purchasing plans through the individual market. Furthermore, HHS has previously demonstrated the ability to develop meaningful enrollment targets using available data. For example, in prior years, HHS developed numeric enrollment targets based on a range of factors, including the number of exchange enrollees, the number of uninsured individuals, and changes in access to employer-sponsored insurance, Medicaid, and other public sources of coverage. In addition, the agency set numeric enrollment targets for regional markets that took these and other factors into account. Once these targets were established, HHS officials were able to use them to monitor progress throughout the open enrollment period and revise the agency's efforts as needed. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of HHS. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or dickenj@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.

We identified a list of factors that may have affected 2018 healthcare.gov enrollment based on a review of Department of Health and Human Services information, interviews with health policy experts, and review of recent publications by these experts related to 2018 exchange enrollment.

Factors related to the open enrollment period:
- Open enrollment conducted during a shorter 6-week open enrollment period.
- Consumer awareness of this year's open enrollment deadline.

Factors related to plan availability and plan choice:
- Plan affordability for consumers ineligible for financial assistance.
- Plan affordability for consumers eligible for financial assistance.
- Consumers' perceptions of plan affordability.
- Availability of exchange-based plan choices.
- Availability of off-exchange plan choices.
- Consumer reaction to plan choices.

Factors related to outreach and education:
- Reductions in federal funding allocated to outreach and education, and lack of television and other types of advertising.
- Top Administration and agency officials' messaging about the health insurance exchanges and open enrollment.
- National and local media reporting on the exchanges and open enrollment.
- Local outreach and education events conducted by federally funded navigator organizations.
- Outreach and education efforts and/or advertising by some states, issuers, advocacy groups, community organizations, and agents and brokers.

Factors related to enrollment assistance and tools:
- Availability of one-on-one enrollment assistance from federally funded navigator organizations.
- Availability of one-on-one enrollment assistance from agents and brokers.
- Updates to the content and function of the healthcare.gov website.
- Availability of the healthcare.gov website during the open enrollment period.
- Availability of assistance through the call center during the open enrollment period.
- Consumer understanding of the Patient Protection and Affordable Care Act and its status.
- Automatic re-enrollment occurred on the last day of the open enrollment period.

Four navigator organizations were selected to reflect a range in: (1) amount of 2018 award from the Department of Health and Human Services (HHS); (2) change in HHS award amount from 2017; (3) region; and (4) target population. Insurance departments in six states that use the federally facilitated exchanges were selected to reflect a range with respect to: (1) 2018 healthcare.gov enrollment outcomes; (2) strategies used for calculating 2018 premiums to compensate for the loss of federal cost-sharing reduction payments; (3) changes in 2018 navigator organization award amounts; and (4) the number of issuers offering 2018 exchange coverage in the state. Three issuers that offered 2018 plans on healthcare.gov exchanges were selected, two of which sold exchange plans in multiple states. Five research and consumer advocacy organizations were selected to provide a range of perspectives with respect to the law and issues related to exchange outreach and enrollment. Three professional trade associations were selected to collectively represent the perspectives of regulators, issuers, and consumer assisters. Two state-based exchanges were selected based on the length of their open enrollment periods—one had one of the shortest open enrollment periods and the other had one of the longest open enrollment periods for 2018.

Navigator organizations, among other things, carry out public education activities and help consumers enroll in a health insurance plan offered through the exchange. HHS awards financial assistance to navigator organizations that provide these services in states using the federally facilitated exchange.
An issuer is an insurance company, insurance service, or insurance organization that is required to be licensed to engage in the business of insurance in a state. State-based exchanges are able to set their own budget and strategy for promoting exchange enrollment and set the length of their open enrollment periods. We identified a list of factors that may have affected 2018 healthcare.gov enrollment based on a review of Department of Health and Human Services (HHS) information, interviews with health policy experts, and review of recent publications by these experts related to 2018 exchange enrollment. Using this list, we conducted structured interviews with officials from 23 stakeholder organizations to gather their viewpoints as to whether and how these or other factors affected 2018 health insurance exchange enrollment. Organizations interviewed were selected to reflect a wide range of perspectives and included HHS-funded navigator organizations that provide in-person consumer enrollment assistance, issuers, state insurance departments, professional trade organizations, research and advocacy organizations, and state-based exchanges. Table 2 displays a range in stakeholder views about the impact of these factors. In addition to the contact named above, Gerardine Brennan, Assistant Director; Patricia Roy, Analyst-in-Charge; Priyanka Sethi Bansal; Giao N. Nguyen; and Fatima Sharif made key contributions to this report. Also contributing were Muriel Brown, Laurie Pachter, and Emily Wilson.
Since 2014, millions of consumers have purchased health insurance from the exchanges established by the Patient Protection and Affordable Care Act. Consumers can enroll in coverage during an annual open enrollment period. HHS and others conduct outreach during this period to encourage enrollment and ensure the exchanges' long-term stability. HHS announced changes to its 2018 outreach, prompting concerns that fewer could enroll, potentially harming the exchanges' stability. GAO was asked to examine outreach and enrollment for the exchanges using healthcare.gov. This report addresses (1) 2018 open enrollment outcomes and any factors that may have affected these outcomes, (2) HHS's outreach efforts for 2018, and (3) HHS's 2018 enrollment goals. GAO reviewed HHS documents and data on 2018 open enrollment results and outreach. GAO also interviewed officials from HHS and 23 stakeholders representing a range of perspectives, including those from 4 navigator organizations, 3 issuers, and 6 insurance departments, to obtain their non-generalizable views on factors that likely affected 2018 enrollment. About 8.7 million consumers in 39 states enrolled in individual market health insurance plans offered on the exchanges through healthcare.gov during the open enrollment period for 2018 coverage. This was 5 percent less than the 9.2 million who enrolled for 2017 and continued a decline in enrollment from a peak of 9.6 million in 2016. Among the 23 stakeholders we interviewed representing a range of perspectives, most reported that plan affordability played a major role in exchange enrollment—both attracting and detracting from enrollment. In 2018, total premiums increased more than expected, and, as a result, plans may have been less affordable for consumers, which likely detracted from enrollment. 
However, most consumers receive tax credits to reduce their premiums, and stakeholders reported that plans were often more affordable for these consumers because higher premiums resulted in larger tax credits, which likely aided exchange enrollment. Stakeholders had mixed opinions on the effects that other factors, such as the reductions in federal advertising and the shortened open enrollment period, might have had on enrollment. The Department of Health and Human Services (HHS), which manages healthcare.gov enrollment, reduced consumer outreach for the 2018 open enrollment period:
- HHS spent 90 percent less on its advertising for 2018 ($10 million) compared to 2017 ($100 million). Officials told us that the agency's approach for 2018 was to focus on low-cost, high-performing forms of advertising.
- HHS reduced funding by 42 percent for navigator organizations—which provide in-person enrollment assistance for consumers—spending $37 million in 2018 compared to $63 million in 2017, due to a shift in administration priorities. HHS allocated the funding using data that it had acknowledged in December 2016 were not reliable. The lack of quality data may affect HHS's ability to effectively manage the navigator program.
Unlike in prior years, HHS did not set any numeric targets related to 2018 total healthcare.gov enrollment; officials told us that they instead focused on enhancing the consumer experience for the open enrollment period. Setting numeric targets would allow HHS to monitor and evaluate its overall performance, a key aspect of federal internal controls. Further, while HHS reported meeting its goal of enhancing the consumer experience, such as by improving healthcare.gov availability, it did not measure aspects of the consumer experience it had identified as key in 2017, such as successful outreach events.
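The year-over-year reductions above reduce to simple percent-change arithmetic. The following sketch uses the rounded figures given in this report; on these rounded inputs the navigator cut computes to about 41 percent, while GAO's reported 42 percent reflects the unrounded award amounts.

```python
def pct_change(new, old):
    """Percent change from old to new; negative values indicate a decline."""
    return (new - old) / old * 100

# Advertising: $10 million for 2018 vs. $100 million for 2017 ("90 percent less").
advertising = pct_change(10, 100)      # -90.0

# Navigator funding: $37 million for 2018 vs. $63 million for 2017
# (about -41 on these rounded inputs; reported as a 42 percent reduction).
navigators = pct_change(37, 63)

# Enrollment: 8.7 million for 2018 vs. 9.2 million for 2017 ("5 percent less").
enrollment = pct_change(8.7, 9.2)
```

The same helper reproduces the Highlights-page enrollment decline, which rounds to the 5 percent figure GAO reports.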
Absent a more complete assessment, HHS may not be able to fully assess its progress toward its goal of enhancing the consumer experience and may miss opportunities to improve other aspects of the consumer experience. GAO is making three recommendations to HHS, including that it ensure the data it uses for determining navigator organization awards are accurate, set numeric enrollment targets, and assess other aspects of the consumer experience. HHS agreed with two recommendations, but disagreed with the need to set numeric targets. GAO maintains that such action is important.
In 2016, Medicare spent about $380 billion on health care services for beneficiaries enrolled in Medicare FFS, which consists of two separate parts: Medicare Part A, which primarily covers hospital services, and Medicare Part B, which primarily covers outpatient services. The majority of the 38 million Medicare FFS beneficiaries were enrolled in both Part A and Part B, although about 5 million were enrolled in Part A only and 0.3 million were enrolled in Part B only. The general design of Medicare FFS cost-sharing has been largely unchanged since Medicare's enactment in 1965. It includes separate deductibles for Part A and Part B services, a variety of per-service copayments and coinsurance after the deductibles are met, and no cap on beneficiaries' cost-sharing responsibilities (see table 1). The current cost-sharing design leaves beneficiaries exposed to potentially catastrophic cost-sharing, and in part because of that, in 2015, 81 percent of Medicare FFS beneficiaries obtained supplemental insurance that covered some or all of their Medicare cost-sharing responsibilities, often in exchange for an additional premium (see table 2). For example, in 2015, 31 percent of Medicare FFS beneficiaries purchased a private Medigap plan, the most common types of which fully insulated them from Medicare cost-sharing responsibilities in exchange for an average annual premium of $2,400. Another 20 percent of Medicare FFS beneficiaries enrolled in Medicaid, which generally covered most of their Medicare cost-sharing responsibilities; however, these low-income beneficiaries generally paid little or no premium for this supplemental coverage. The current Medicare FFS cost-sharing design can be confusing, contribute to beneficiaries' overuse of services, and leave beneficiaries exposed to catastrophic costs. Modernizing the design could address these concerns, but would involve trade-offs.
For example, as shown in four illustrative designs that we evaluated, maintaining Medicare’s share of costs would involve a trade-off between the level of the cap and the deductible (or other cost-sharing). As noted by Medicare advocacy groups and others, the current Medicare FFS cost-sharing design, which includes multiple deductibles, can be confusing for beneficiaries. In 2014, 16 percent of Medicare FFS beneficiaries were responsible for at least one Part A deductible for an episode of inpatient care as well as the annual Part B deductible. (Medicare FFS beneficiaries may be subject to more than one Part A deductible during the year, as the Part A deductible applies to each admission to an inpatient hospital or skilled nursing facility that occurs more than 60 consecutive days after the prior admission.) The Congressional Budget Office has cited the separate deductibles as one way in which Medicare FFS cost-sharing is more complicated than private plans. In 2016, according to a survey conducted by the Kaiser Family Foundation, only 1 percent of workers with employer-sponsored insurance had a separate deductible for inpatient services. Moreover, inpatient services tend to be nondiscretionary, and one or more deductibles for those services can create a financial burden for beneficiaries, while having minimal effect on their use of inpatient services. The cost-sharing design also affects beneficiaries’ utilization of services. For example, as noted by the bipartisan Simpson-Bowles Fiscal Commission, the lack of a coherent cost-sharing system is a significant contributor to overuse and misuse of care. This is particularly true for services such as home health and clinical laboratory services, which currently have no cost-sharing under Medicare FFS and thus do not provide beneficiaries an incentive to decline care of negligible value. 
Because of these concerns, MedPAC recommended adding a cost-sharing requirement for home health services that were not preceded by hospitalization or post-acute care, noting that the current lack of cost-sharing has likely contributed to the significant rise in utilization for these services, which suggests some overuse. At the same time, the lack of an annual cost-sharing cap prevents Medicare FFS from fulfilling a key purpose of health insurance: protecting beneficiaries from catastrophic medical expenses. While most beneficiaries had cost-sharing responsibilities under $2,000 in 2014, 1 percent—over 300,000 beneficiaries—had responsibilities over $15,000, including several hundred beneficiaries with responsibilities between $100,000 and $3 million. (See fig. 1.) Given the risk of catastrophic medical expenses, a focus group of current and future Medicare beneficiaries convened by MedPAC indicated that an annual cap is the cost-sharing design feature they were most interested in seeing added to the Medicare benefit. Annual caps are a common design feature of private plans, as most are required to have an annual cap, including those participating in MA. Specifically, since 2011, CMS has required most MA plans to have an annual cap of $6,700 or less and grants them additional flexibility in their cost-sharing design if they voluntarily set their cap at or below $3,400. The mandatory and voluntary caps for certain MA plans that provide both in- and out-of-network coverage are the same ($6,700 and $3,400) for in-network services, and 1.5 times higher ($10,000 and $5,100) for combined in- and out-of-network services. In addition to these implications of the cost-sharing design itself, the American Academy of Actuaries and others have noted that the complexity and the possibility of unlimited responsibilities increase demand for supplemental insurance, which can lead to added costs for beneficiaries and the Medicare program.
It is uncommon for beneficiaries enrolled in private health insurance to have supplemental coverage. By insulating beneficiaries from some or all cost-sharing responsibilities (and not just catastrophic costs), supplemental insurance further reduces the incentives for beneficiaries to evaluate the need for discretionary care. In part because of these reduced incentives, we previously estimated that both beneficiaries' average total out-of-pocket costs and average Medicare program spending were higher for Medicare FFS beneficiaries with Medigap than those with FFS only. Modernizing Medicare FFS cost-sharing could address these concerns, but would involve design trade-offs. Specifically, as proposed by various groups, revising Medicare's cost-sharing design to include a single deductible, modified cost-sharing requirements, and an annual cost-sharing cap could address concerns with the current cost-sharing design. However, there are multiple options for revising within this broad framework, including two key design trade-offs that would affect the extent to which a modernized structure would address concerns about the current design (and possibly also raise new concerns). One trade-off centers on how to modify the existing complicated set of cost-sharing requirements for different services. While the reform proposals have generally suggested moving to a single deductible, they have varied in how to modify the subsequent per-service payments. Some proposals have emphasized the value of simplicity and suggested replacing the complex set of per-service payments above the deductible with a uniform coinsurance. A uniform coinsurance would simplify the cost-sharing design, provide beneficiaries insight into the total cost of each service, and introduce cost-sharing for certain potentially discretionary services, such as home health services.
However, as noted by the Medicare Payment Advisory Commission and Congressional Budget Office, uniform coinsurance also has drawbacks, such as a fixed percentage of an unknown bill being harder for beneficiaries to understand and predict than copayments. Other proposals have emphasized the need to set cost-sharing based on the value of services, and have suggested moving Medicare toward a value-based insurance design in which per-service cost-sharing would vary based on the clinical value of the service to an individual beneficiary. While a value-based design would specifically target cost-sharing to promote prudent use of health care services, implementing it is challenging in practice and would be more complicated for beneficiaries to understand and for CMS to administer, though CMS is testing the feasibility of value-based insurance design in MA. A second design trade-off centers on how to set the level of the deductible and the annual cap. As shown in the four illustrative cost-sharing designs we evaluated, the lower the cap, the higher the deductible (or other cost-sharing requirements) would need to be to maintain Medicare's and beneficiaries' aggregate share of costs similar to that of the current design. For example, holding utilization and enrollment constant, we found that even without any deductible, a uniform coinsurance of 18 percent (a level below the existing 20 percent coinsurance for most Part B services) would be sufficient to add a cap near $10,000 (the mandatory cap for certain MA plans that allow beneficiaries to see any provider). In contrast, it would take a deductible near $1,225 (a level similar to the existing Part A deductible for each inpatient episode) and a uniform coinsurance of 20 percent to establish a cap of $3,400 (the voluntary cap for most MA plans). (See table 3.) Different levels of the deductible and cap would address certain concerns of the current design raised by GAO and others but also could create new ones.
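The deductible/coinsurance/cap interplay described above can be expressed as a single formula for a beneficiary's annual responsibility: pay allowed spending up to the deductible, a uniform coinsurance on spending above it, and nothing beyond the cap. The sketch below is an illustrative simplification, not GAO's actuarial model, and the $8,000 and $60,000 spending levels are hypothetical examples chosen to show the trade-off.

```python
def annual_responsibility(spending, deductible, coinsurance, cap):
    """Beneficiary cost-sharing under a simplified modernized design:
    deductible first, then uniform coinsurance on spending above it,
    all subject to an annual cap. Illustrative only; GAO's analysis
    used beneficiary-level claims data and held Medicare's share constant."""
    uncapped = min(spending, deductible) + coinsurance * max(0.0, spending - deductible)
    return min(uncapped, cap)

# Two of the report's illustrative designs, applied to hypothetical beneficiaries.
# A moderate year (about $8,000 in allowed spending):
#   low-cap design:  1,225 + 0.20 * 6,775 -> about $2,580 (below the $3,400 cap)
#   high-cap design: 0.18 * 8,000         -> about $1,440
moderate_low  = annual_responsibility(8_000, deductible=1_225, coinsurance=0.20, cap=3_400)
moderate_high = annual_responsibility(8_000, deductible=0,     coinsurance=0.18, cap=10_000)

# A catastrophic year (about $60,000 in allowed spending):
#   low-cap design is capped at $3,400; high-cap design is capped at $10,000.
catastrophic_low  = annual_responsibility(60_000, deductible=1_225, coinsurance=0.20, cap=3_400)
catastrophic_high = annual_responsibility(60_000, deductible=0,     coinsurance=0.18, cap=10_000)
```

In a moderate year the low-cap design costs the beneficiary more; in a catastrophic year it costs far less, which is the trade-off between the level of the cap and the deductible that the illustrative designs demonstrate.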
For example, as our analysis of four illustrative cost-sharing designs shows, designs with relatively high caps would provide some additional protection from catastrophic costs while maintaining a deductible and coinsurance near or below the current levels for Part B services. However, per an analysis conducted by Kaiser Family Foundation and the Urban Institute, half of Medicare beneficiaries in 2016 were living on less than $26,200 in income; thus, caps of $6,700 or higher may still leave some beneficiaries vulnerable to costs that are catastrophic for them and may not significantly decrease the associated demand for supplemental insurance. In contrast, designs with relatively low caps would provide greater protection from catastrophic costs. However, as noted by the Congressional Budget Office, beneficiaries who reached the cap would have less incentive to use services prudently. In addition, the higher deductible needed to offset a lower cap while maintaining Medicare’s share of costs could present a financial barrier for some beneficiaries to obtain necessary care. The direct effect of modernizing the Medicare FFS cost-sharing design (i.e., the effect when holding utilization and enrollment constant) on beneficiaries’ cost-sharing responsibilities would depend on the specific revisions and the time horizon examined. As we noted above, modernizing the FFS cost-sharing design while maintaining Medicare’s aggregate share of costs similar to the current design requires a trade-off between the level of the deductible and cap. At the beneficiary level, this design trade-off affects beneficiaries’ annual cost-sharing and the degree to which beneficiaries would be protected from catastrophic costs. One way of viewing how the design trade-off affects beneficiaries is to compare across different designs the median annual cost-sharing responsibility with the level of the cap (see fig. 2). 
In examining the direct effect of the four illustrative modernized designs we analyzed, we found the following: During year 1, cost-sharing designs that feature relatively low deductibles and relatively high caps would result in a median annual beneficiary cost-sharing responsibility close to or below that of the current design. In contrast, designs with relatively low caps—and therefore greater beneficiary protection from catastrophic costs—would result in a median annual beneficiary cost-sharing responsibility above that of the current design. For example, during year 1 of a design with no deductible, 18 percent coinsurance, and a cap near $10,000, we found that the median annual cost-sharing responsibility would be $479, which is below that of the current design ($621), despite the addition of a cap. In contrast, during year 1 of a design with a $1,225 deductible, 20 percent coinsurance, and a cap near $3,400, the median annual cost-sharing responsibility would be $1,486, which is 2.4 times higher than that of the current design. However, in exchange for this higher median annual cost-sharing responsibility, beneficiaries would have much greater protection from catastrophic costs, as their annual cost-sharing responsibilities would be capped near $3,400. By the end of 8 years, there would still be differences in the median annual beneficiary cost-sharing responsibility across different designs, but they would become less pronounced—despite the significantly different levels of catastrophic protection. As beneficiaries age and become more likely to have catastrophic costs in at least one year, the median annual cost-sharing responsibility would increase, regardless of the cost-sharing design. However, by the end of 8 years the differences in the median annual cost-sharing responsibility across different designs would become less pronounced.
For example, the median annual cost-sharing responsibility under the design with a cap near $10,000 would increase from below that of the current design in year 1 to 1.1 times higher than the current design by the end of 8 years. In contrast, the median annual cost-sharing responsibility under the design with the cap near $3,400 would decrease from 2.4 times higher than the current design in year 1 to only 1.6 times higher by the end of 8 years. (See app. I table 4 for more details, including results on our other two illustrative designs and results over 4 years.) The same patterns held when looking at how the design trade-off affects beneficiaries in another way: the percentage of beneficiaries with cost-sharing responsibilities lower and higher than under the current design (see fig. 3). In examining the direct effect of our four illustrative designs, we found the following: During year 1, designs that feature relatively low deductibles and relatively high caps would result in a minority of beneficiaries having cost-sharing responsibilities that are at least $100 higher than under the current design. In contrast, designs with relatively high deductibles and relatively low caps would result in the majority of beneficiaries having cost-sharing responsibilities that are higher than under the current design. For example, during year 1 of a design with no deductible, 18 percent coinsurance, and a cap near $10,000, 16 percent of beneficiaries would have cost-sharing responsibilities at least $100 higher than their responsibilities under the current design. In contrast, during year 1 of a design with a $1,225 deductible, 20 percent coinsurance, and a cap near $3,400, 69 percent of beneficiaries would have cost-sharing responsibilities at least $100 higher than their responsibilities under the current design.
By the end of 8 years, there would still be differences across the designs, but they would become less pronounced—despite levels of catastrophic protection that vary significantly. Over a longer time horizon, a larger percentage of beneficiaries would reach the cap at least once, regardless of the cost-sharing design (ranging from 23 percent reaching the cap at least once over 8 years under the design with a cap near $10,000 to 66 percent under the design with a cap near $3,400). However, the subset of these beneficiaries whose annual cost-sharing responsibilities were nonetheless at least $100 higher than under the current design would also increase. Whether this increase would be augmented or offset by the changes over time in the percentage of beneficiaries who never reached the cap and had higher cost-sharing responsibilities would depend on the specific design. For example, the percentage of beneficiaries with annual cost-sharing responsibilities at least $100 higher than the current design would increase from 16 percent in year 1 to 38 percent by year 8 under the design with a cap near $10,000. In contrast, this percentage would decrease from 69 percent in year 1 to 67 percent by year 8 under the design with a cap near $3,400. (See app. I tables 5 and 6 for more details, including results on our other two illustrative designs and results over 4 years.) Modernizing the Medicare FFS cost-sharing design would affect beneficiaries' costs indirectly through beneficiaries' and supplemental insurers' behavioral responses to altered incentives, according to the studies we reviewed and the experts we spoke to.
These studies and experts identified several types of behavioral responses that would influence the net effect of a modernized design on beneficiaries’ out-of-pocket costs, including changes in beneficiaries’ demand for, and insurers’ supply of, supplemental insurance; changes in beneficiaries’ utilization of services; changes in Medicare beneficiaries’ enrollment in FFS versus MA; and interactions among these and other behavioral responses, including effects on the price of supplemental insurance. According to studies we reviewed and experts we spoke to, implementing a modernized cost-sharing design would likely trigger changes in the demand for and supply of supplemental insurance. For example, a focus group of current and future Medicare beneficiaries convened by MedPAC and a report from the American Academy of Actuaries stated that the addition of an annual cap would reduce the need of some beneficiaries to purchase supplemental insurance. While beneficiaries who drop their supplemental insurance would then need to pay all their Medicare cost-sharing responsibilities, those costs might be less than their annual premium for supplemental insurance. Additionally, according to the same MedPAC study and a Congressional Budget Office report, retiree coverage may change under a modernized design. For example, with a cap in place, there would be less difference between employer-sponsored plans and Medicare, and employers may choose to alter the supplemental insurance they offer. CMS officials told us that this would continue the trend of private employers reducing retiree health coverage. Several studies we reviewed and experts we interviewed indicated that implementing a modernized design could also trigger changes in utilization of Medicare services, the extent of which would affect beneficiaries’ out-of-pocket costs. 
For example, the RAND Health Insurance Experiment (HIE), which some experts consider to be the most comprehensive study on price and utilization, found that patients were “moderately sensitive to price.” The RAND HIE found that patients respond to increases in cost-sharing that they need to pay at least partly out-of-pocket by decreasing their use of some services. Similarly, CMS officials told us that they would expect utilization to decrease as beneficiaries’ out-of-pocket costs increased, while a study in the American Economic Review found that the addition of a copayment led to a decline in office visits. The RAND HIE study suggests that a 10 percent increase in cost-sharing would lead to a 1 to 2 percent decline in patients’ use of services. In the case of the RAND HIE study, cost- sharing affected the number of contacts people initiated with their physician, which impacted preventive care and diagnostic tests. The study found that this could potentially affect patients’ use of both effective and less effective services. According to several studies and interviews with experts, design changes could trigger other behavioral responses. For example, a study by the Kaiser Family Foundation and a report by the Congressional Budget Office both anticipated that a modernized design could change the proportion of Medicare beneficiaries who decide to enroll in FFS or MA. Similarly, officials from the American Academy of Actuaries told us that they would expect a change in demand for MA under a modernized design. Under the current Medicare design, all MA plans have an annual cap that protects beneficiaries from catastrophic medical expenses. Between 2008 and 2017, the percentage of Medicare beneficiaries who chose to enroll in an MA plan increased from 22 to 33 percent. CMS officials told us that the increases in MA enrollment may be due in part to the requirement that MA plans must include an annual cost-sharing cap. 
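The RAND HIE sensitivity estimate amounts to a simple elasticity calculation. A minimal sketch, restating the 10-percent/1-to-2-percent figures above as price elasticities of roughly -0.1 to -0.2 (our restatement for illustration, not a figure quoted from the RAND study):

```python
def projected_use_change_pct(cost_sharing_change_pct, elasticity):
    """Approximate percent change in service use implied by a percent
    change in cost-sharing, assuming a constant price elasticity."""
    return elasticity * cost_sharing_change_pct

# A 10 percent cost-sharing increase, at elasticities of -0.1 and -0.2,
# implies roughly a 1 to 2 percent decline in use:
decline_low = projected_use_change_pct(10.0, -0.1)
decline_high = projected_use_change_pct(10.0, -0.2)
```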
The Kaiser Family Foundation study found that a modernized design, similar to that of an MA plan, might incentivize some MA beneficiaries to move back to FFS. According to experts we interviewed and studies we reviewed, the different behavioral responses described above would also likely interact and affect beneficiaries’ out-of-pocket costs. CMS officials told us that when all of the factors contributing to out-of-pocket costs are combined, it is difficult to assess the net effect of a modernized cost-sharing design on beneficiaries’ out-of-pocket costs. For example, officials with the National Association of Insurance Commissioners emphasized that as both demand for supplemental insurance and expected utilization changed, supplemental premiums would also change, which would change out-of-pocket costs. Similarly, studies by both MedPAC and the Congressional Budget Office found that changes in beneficiaries’ level of supplemental insurance might trigger additional changes in utilization, which would also result in changes to the pricing of supplemental insurance. Specifically, if a number of relatively healthy beneficiaries dropped their supplemental insurance, and the beneficiaries left were sicker (that is, more costly), premiums for supplemental insurance might increase. Officials from the Congressional Budget Office told us that, conversely, if the more costly beneficiaries dropped their supplemental insurance, premiums might be lower. We provided a draft of this report to the Department of Health and Human Services for comment. The Department provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties. 
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. The direct effect of modernizing the Medicare fee-for-service (FFS) cost-sharing design (i.e., the effect when holding utilization and enrollment constant) on beneficiaries’ cost-sharing responsibilities would depend on the specific revisions and the time horizon examined. Tables 4, 5, and 6 present the direct effect of modernizing the Medicare FFS cost-sharing design on beneficiaries’ cost-sharing responsibilities under four illustrative designs. Each table presents the direct effect of each illustrative design over 1-, 4-, and 8-year time horizons. In addition to the contact named above, Greg Giusto (Assistant Director), Alison Binkowski, George Bogart, Reed Meyer, Beth Morrison, Brandon Nakawaki, and Brian O’Donnell made key contributions to this report. Also contributing were Todd Anderson, Emei Li, Yesook Merrill, Vikki Porter, and Frank Todisco.
|
To address concerns with the current Medicare FFS cost-sharing design, various groups have proposed modernizing the design to make it simpler and include features found in private plans. These proposals have generally included a single deductible, modified cost-sharing requirements (e.g., a uniform coinsurance), and the addition of a cap on beneficiaries' annual cost-sharing responsibilities. GAO was asked to review how modernized cost-sharing designs would affect beneficiaries' costs over multiple years. This report describes implications of the current cost-sharing design; options for modernizing; and how modernized cost-sharing designs could directly and indirectly affect beneficiaries' costs. GAO reviewed studies related to modernizing Medicare's cost-sharing design and interviewed authors of those studies and other experts. GAO also used summarized Medicare claims data from 2007 to 2014 (the most recent data available) to develop four illustrative modernized designs, each including a single deductible, uniform coinsurance, and an annual cap while maintaining Medicare program spending similar to the current design. For each design, GAO calculated how beneficiaries' annual cost-sharing responsibilities compared with the current design over a 1-, 4-, and 8-year time horizon. The Department of Health and Human Services provided technical comments on a draft of this report, which GAO incorporated as appropriate. GAO and others have raised concerns about the design of Medicare fee-for-service (FFS) cost-sharing—the portion of costs beneficiaries are responsible for when they receive care. The current cost-sharing design has been largely unchanged since Medicare's enactment in 1965, can be confusing for beneficiaries, and can contribute to overuse of services. Additionally, the design leaves some beneficiaries exposed to catastrophic costs that can exceed tens of thousands of dollars annually. 
The complexity of the design and lack of an annual cap on cost-sharing responsibilities also increase demand for supplemental insurance, which can cost beneficiaries thousands annually and further contribute to overuse of services. Modernizing Medicare FFS's cost-sharing design to include features found in private plans could help address these concerns, but would involve design trade-offs. For example, adding an annual cap on cost-sharing responsibilities while maintaining Medicare's aggregate share of costs similar to the current design would involve a trade-off between the level of the cap and other cost-sharing requirements. In analyzing four illustrative FFS cost-sharing designs, GAO found that the direct effect of modernizing the design on beneficiaries' cost-sharing responsibilities—that is, the effect when holding utilization and enrollment constant—would depend on the specific revisions and the time horizon examined. For example, GAO found the following: During year 1, cost-sharing designs that feature relatively low deductibles (costs a beneficiary is responsible for before Medicare starts to pay) and relatively high caps would result in a median annual beneficiary cost-sharing responsibility close to or below that of the current design. In contrast, designs with relatively low caps—and therefore greater beneficiary protection from catastrophic costs—would result in a median annual cost-sharing responsibility above that of the current design. By the end of 8 years, there would still be differences in the median annual beneficiary cost-sharing responsibility across different designs, but they would become less pronounced. Modernizing the Medicare FFS cost-sharing design would also affect beneficiaries' costs indirectly through altered incentives. 
The studies GAO reviewed and experts GAO interviewed identified several types of behavioral responses that would influence the net effect of a modernized design on beneficiaries' out-of-pocket costs, including changes in beneficiaries' demand for and insurers' supply of supplemental insurance; changes in beneficiaries' use of services; changes in Medicare beneficiaries' enrollment in FFS versus Medicare's private plan alternative; and interactions among these and other behavioral responses, including effects on the price of supplemental insurance.
|
At TSA headquarters, the Office of Security Operations (OSO) has primary responsibility for operation of the RAP and allocation of TSOs across airports. Within OSO, the Staffing and Scheduling Division oversees the RAP. To allocate staff to the nearly 440 TSA-regulated airports in the United States, OSO is to use a combination of computer-based modeling and line-item adjustments based on airport-specific information. First, the agency is to work with a contractor to evaluate the assumptions—such as rates of expedited screening—used by the computer-based staffing allocation model (model) to determine the optimal number of TSOs at each airport based on airport size and configuration, flight schedules, and the time it takes to perform checkpoint and baggage screening tasks. Second, after the model has determined how many TSOs are required for each airport, headquarters-level staff are to make line-item adjustments to account for factors such as differences in staff availability and training needs that affect each airport. Figure 1 below provides additional details regarding TSA’s process to determine the number of TSOs at airports. As previously discussed, in 2007, we recommended that TSA establish a mechanism to periodically assess the assumptions in the RAP (prior to fiscal year 2017, known as the Staffing Allocation Model) to ensure that staffing allocations accurately reflect operating conditions that may change over time. TSA implemented this recommendation by developing an evaluation plan for regularly assessing the assumptions used in the staffing model. Assumptions include the number of passengers or bags that can be screened each hour by TSA equipment and the time TSOs require to operate discrete sections of the screening process, such as conducting pat-downs or searches of passengers’ carry-on baggage. 
The evaluation plan states that TSA is to assess (1) the time it takes to screen passengers using TSA equipment and (2) the number of staff needed to operate the equipment. Results from these assessments are to inform the assumptions used in the model to determine the base allocation of TSOs to U.S. airports. TSA uses the evaluation plan as well as airport-level characteristics to systematically evaluate the assumptions used in the model on a regular basis: Evaluation plan: TSA’s evaluation plan recommends evaluating the time it takes to perform 19 aspects of passenger and checked baggage screening processes at least every two years and includes detailed procedures for doing so. For instance, the evaluation of passenger screening processes involves observing operations at selected airports to determine the average time it takes for one passenger to remove items of clothing and prepare his or her belongings for screening. Similarly, the evaluation determines how many passengers can be processed each hour during selected aspects of screening, such as by travel document checkers or via advanced imaging technology (AIT), often referred to as body scanners. Individual airport characteristics: Each year, TSA airport-level staff, such as FSDs or their designees, are to review the information in the model to ensure that information on the number of checkpoints, each checkpoint’s configuration, and the number of flights departing the airport each day is accurate. At the airport level, FSDs and their designees are responsible for overseeing TSA security activities, including passenger and checked baggage screening. TSOs at airports follow standard operating procedures that guide screening processes and utilize technology such as AITs or walk-through metal detectors (WTMD) to screen passengers and their accessible property. TSOs also inspect checked baggage to deter, detect, and prevent the carriage of any unauthorized explosive, incendiary, or weapon onboard an aircraft. 
Checked baggage screening is conducted in accordance with standard operating procedures and generally is accomplished through the use of explosives detection systems or explosives trace detection systems. TSA employs an expedited screening program, known as TSA Pre®, that assesses passenger risk to aviation security before passengers arrive at an airport checkpoint. According to TSA, expedited screening offers a more efficient and convenient screening process for individuals whom TSA has determined, based on sufficient information obtained in advance, to be of lower risk, compared with the standard screening process for travelers about whom TSA does not have such information. Finally, at each airport, TSA is to collect throughput data on the number of passengers screened under both expedited and standard screening and monitor passenger wait times at screening checkpoints. TSA airport officials are to submit passenger throughput and wait time data on a daily basis to OSO’s Performance Management Division at TSA headquarters, which compiles the data through the Performance Measurement Information System (PMIS), TSA’s web-based data collection system. TSA’s OSO and the Office of Security Policy and Industry Engagement (OSPIE) are both responsible for sharing information with stakeholders about airport operations. In response to the Aviation Security Act, OSO issued guidance in October 2016 intended to ensure that FSDs share information with stakeholders. OSPIE communicates TSA information about airport operations, such as how TSOs are allocated across airports, to stakeholders. In fiscal years 2016 and 2017, TSA modified the assumptions used in its model, as needed, to reflect changes identified through annual evaluations performed by a contractor. The contractor is specifically tasked with evaluating the assumptions related to the time needed to screen passengers and their baggage. 
For example, TSA officials stated that they increased the expected time needed to screen passengers for one type of passenger screening equipment in fiscal year 2017 because the contractor found that the actual time needed was more than the assumption TSA used in fiscal year 2016. Similarly, in fiscal year 2016, TSA allocated fewer staff to review images of checked baggage, compared to previous years, because the contractor’s evaluation determined it took TSOs less time to review the images than the time observed in previous years. In addition to modifying its model based on evaluations performed by contractors, TSA officials at the headquarters level review and modify other assumptions in the model to ensure they are accurate. For example, prompted by the long waits in the spring of 2016, officials stated that they modified the model for the 2017 fiscal year based on their evaluation of the 2016 assumptions. Specifically, TSA assumed that 50 percent of airline passengers would use expedited screening in 2016, but only an average of 27 percent of passengers used expedited screening that year. According to the officials, TSA modified this assumption in fiscal year 2017 and now uses TSA Pre® Program data specific to each individual airport in the model. Similarly, officials told us that, since TSA was established in November 2001, many employees would reach 15 years of service with the federal government in fiscal years 2016 and 2017, resulting in increased annual leave allowances. In response, officials have increased the amount of annual leave they expect employees to use and rely on airport-specific data regarding employee tenure to estimate annual leave for the coming year. TSA has also modified the way it develops assumptions regarding passenger throughput at each airport. For example, beginning in fiscal year 2016, TSA used passenger throughput forecasts to allocate staff commensurate with the expected rate of increase in passenger throughput at each airport. 
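The effect of the expedited-screening assumption on staffing needs can be illustrated with a toy version of the model's logic. The lane throughput rates below are hypothetical (the report does not give them); only the 50 percent assumed versus 27 percent actual expedited shares come from the text:

```python
def required_lane_hours(passengers, expedited_share, expedited_rate, standard_rate):
    """Lane-hours needed to screen a day's passengers, splitting volume
    between expedited and standard lanes by the expedited-screening share.
    Rates are passengers screened per lane-hour (hypothetical values)."""
    expedited_hours = passengers * expedited_share / expedited_rate
    standard_hours = passengers * (1 - expedited_share) / standard_rate
    return expedited_hours + standard_hours

# Overestimating the expedited share understates the lane-hours needed,
# because expedited lanes process passengers faster:
planned = required_lane_hours(30000, 0.50, expedited_rate=300, standard_rate=150)
actual = required_lane_hours(30000, 0.27, expedited_rate=300, standard_rate=150)
```

Under these assumed rates, planning for a 50 percent expedited share when only 27 percent of passengers actually use it leaves the airport short of screening capacity, consistent with the long waits described above.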
The estimated increase in passenger throughput for each fiscal year is based primarily on national and airport-level data from the previous 3 months from PMIS, TSA’s web-based data collection system, and flight forecast data from the airline industry, as well as additional input from other sources. Prior to fiscal year 2016, TSA planned for passenger throughput during the busiest 28 days from the previous fiscal year and did not adjust the assumption for the annual increase in passenger throughput, which increased two percent in 2014 and four percent in 2015. A TSA headquarters official responsible for overseeing the RAP stated that the agency compared projected passenger throughput to actual passenger throughput for fiscal year 2017 to determine the accuracy of the projections and concluded that no significant changes to the method of forecasting were necessary for fiscal year 2018. According to TSA officials, each airport in the United States has unique characteristics that make it difficult to apply a one-size-fits-all solution to staffing security operations. For instance, officials told us that some airports are allocated additional staff to account for the time needed to transport TSOs to off-site training facilities. Because the staffing allocation resulting from TSA’s model does not reflect the full range of operating conditions at individual airports, TSA headquarters officials use airport-specific information to further adjust allocations by changing individual line items within the allocation after running the model on both an annual and an ad hoc basis. TSA headquarters officials stated that they have developed methodologies for making standard line-item adjustments such as training requirements, overtime, and annual and sick leave. Officials told us they review the methodologies each year and use their professional judgment to modify the methodologies to account for changes in airport needs as well as budget constraints. 
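The forecasting approach described above can be sketched as projecting forward a trailing year-over-year growth rate. This is a simplified stand-in with hypothetical numbers; TSA's actual forecast also blends airline flight-schedule data and other inputs:

```python
def project_annual_throughput(recent_3mo, same_3mo_prior_year, prior_year_annual):
    """Project next year's passenger throughput by applying the trailing
    3-month year-over-year growth rate to last year's annual total
    (illustrative simplification of the forecasting approach)."""
    growth = recent_3mo / same_3mo_prior_year - 1
    return prior_year_annual * (1 + growth)

# A trailing quarter up 4 percent year over year implies roughly a
# 4 percent higher annual throughput (hypothetical airport figures):
projection = project_annual_throughput(2_600_000, 2_500_000, 10_000_000)
```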
We found that through its process of tailoring staffing allocations to individual airports’ needs, TSA is able to respond to the circumstances at each individual airport. TSA headquarters officials also use airport-specific data on staff availability, training needs, supervisory needs, and additional security layers to manually adjust the model’s staffing allocation output at a line-item level. For instance, headquarters officials use the previous years’ data on staff sick leave for each airport to evaluate whether they are allocating the appropriate amount of sick leave to their staff allocations on an individual airport basis. According to TSA headquarters officials, sick leave use can vary by airport and region of the country. Similarly, officials stated that they adjust the model’s output to account for individual airport staff’s training needs so that each airport’s staff can meet TSA’s annual training requirements. In addition, according to TSA officials at both the headquarters and airport levels, airport-level officials can request exceptions—modifications to their staffing allocation—based on unusual airport conditions that are difficult to address, such as problematic checkpoint configurations or lack of space for security operations. For instance, officials at one airport said that they had been granted exceptions for one checkpoint because pillars and curves within the checkpoint prevented the lanes in the checkpoint from screening passengers at the rate assumed by the model. TSA officials at the headquarters level review requests for exceptions and use their professional judgment to determine whether the exception will be granted. Finally, in some cases, TSA may adjust an airport’s staffing allocation outside of the annual staffing allocation process and may do so as the result of significant and unforeseen changes in airport operations. 
For instance, TSA officials stated that one airport was allocated additional staff for the remainder of the fiscal year when the airport opened a new terminal mid-year so that the additional checkpoints could be properly staffed. Officials at another airport we visited said that they had been allocated additional staff when an airline extended its operational hours to ensure appropriate staffing for the additional hours of operation. TSA collects passenger wait time and throughput data and uses those data to monitor daily operations at airports. TSA’s Operations Directive (directive), Reporting Customer Throughput and Wait Times, provides instructions for collecting and reporting wait time and passenger throughput data for TSA screening lanes. Regarding wait time data, according to the directive, FSDs or their designees at all Category X, I, and II airports must measure wait times every operational hour in all TSA expedited and standard screening lanes. The directive requires wait times to be measured in actual time, using a verifiable system such as wait time cards, closed circuit television monitoring, or another confirmable method. The directive indicates that wait times should be measured from the end of the line in which passengers are waiting to the WTMD or AIT units. FSDs or their designees at Category III and IV airports may estimate wait times initially, but the directive requires them to measure actual wait times when wait times are estimated at 10 minutes or greater. The directive also requires FSDs or their designees to collect passenger throughput data directly from the WTMD and AIT units. According to TSA headquarters officials, the machines have sensors that collect the number of passengers that pass through each hour, and TSOs retrieve the data directly from the units. 
All airports regardless of category are required to enter their wait time and throughput data daily into PMIS, TSA’s web-based data entry program, no later than 3:30 AM Eastern Time of the next calendar day so that the data can be included in the morning’s Daily Leadership Report (discussed in more detail below). To monitor operations for all airports, TSA compiles a daily report utilizing a variety of PMIS data points, including wait time and throughput data. The Office of Security Operations’ Performance Management Division disseminates the Daily Leadership Report to TSA officials, including regional directors and FSDs and their designees every morning detailing the previous day’s wait times and throughput figures, among other data points. The Performance Management Division includes a quality assurance addendum with each Daily Leadership Report, indicating missing or incorrect data, to include wait time and throughput data, and TSA has procedures in place intended to ensure officials at the airports correct the data in PMIS within 2 weeks. In addition to the Daily Leadership Report, TSA utilizes wait time and throughput data to monitor airport operations at 28 airports in near real time. In May 2016, TSA established the Airport Operations Center (AOC) that conducts near real time monitoring of the operations of 28 airports that, according to TSA headquarters officials, represent the majority of passenger throughput nationwide or are operationally significant. TSA requires the 28 airports monitored by the AOC to enter passenger wait time data and throughput data into PMIS hourly (whereas the remaining airports are only required to submit data once daily, by 3:30 AM Eastern Time, as described above) so that AOC officials can monitor the operations in near real time. 
In addition, TSA officials at airports are required to report to the AOC when an event occurs—such as equipment malfunctions, weather-related events, or unusually high passenger throughput—that affects airport screening operations and results in wait times that are greater than TSA’s standards of 30 minutes in standard screening lanes or greater than 15 minutes in expedited screening lanes. If an airport is undergoing a period of prolonged wait times, the AOC coordinates with the Regional Director and the FSD to assist in deploying resources. For example, over the course of the summer of 2016, after certain airports experienced long wait times in the spring of 2016 as confirmed by our analysis, the AOC assisted in deploying additional passenger screening canines and TSOs to those airports that experienced longer wait times. The AOC disseminates a morning and evening situational report to TSA airport-level officials and airport stakeholders summarizing nationwide wait times, highlighting wait times at the top airports and any hot spots (unexpected passenger volume or other operational challenges) that may have occurred since the most recent report was issued. In addition to the near real time monitoring of the 28 airports, the AOC also monitors operations at all other airports and disseminates information to airports and stakeholders as needed. To determine the extent to which TSA exceeded its wait time standards, we analyzed wait time data for the 28 airports monitored by the AOC for the period of January 2015 through May 2017 for both standard and expedited screening. Our analysis shows that TSA met its wait time standard of less than 30 minutes in standard screening at the 28 AOC airports 99.3 percent of the time for the period of January 2015 through May 2017. For expedited screening for the same time period at the same airports, we found that 100 percent of the time passengers were reported to have waited 19 minutes or less. 
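Compliance figures like the 99.3 percent reported above can be computed directly from hourly wait-time records. A minimal sketch, assuming a hypothetical record format and using the thresholds stated in the report (30 minutes for standard lanes, 15 minutes for expedited lanes):

```python
# Wait-time thresholds in minutes, as stated in the report.
STANDARDS = {"standard": 30, "expedited": 15}

def compliance_rate(records):
    """Share of hourly observations meeting the wait-time standard.
    Each record is a (lane_type, wait_minutes) pair (hypothetical format)."""
    met = sum(1 for lane, wait in records if wait < STANDARDS[lane])
    return met / len(records)

# Five hypothetical hourly observations; the 35-minute standard-lane
# reading is the only one exceeding its threshold:
records = [("standard", 12), ("standard", 35), ("expedited", 9),
           ("expedited", 14), ("standard", 28)]
rate = compliance_rate(records)
```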
Additionally, our analysis confirmed that the percentage of passengers in standard screening waiting over 30 minutes increased in 2016 during the months of March, April, and May as compared to 2015 at all 28 airports monitored by the AOC. FSDs and their staff at the airports we visited identified a variety of tools that they utilize to respond to increases in passenger wait times and/or throughput. TSOs from the National Deployment Force (NDF)—teams of additional TSOs—are available for deployment to airports to support screening operations during major events and seasonal increases in passengers. For example, TSA officials at one airport we visited received NDF officers during busy holiday seasons and officials at another airport received officers during the increase in wait times in the spring and summer of 2016. TSA officials at select airports use passenger screening canines to expedite the screening process and support screening operations during increased passenger throughput and wait time periods. For example, TSA officials at one airport we visited emphasized the importance of passenger screening canines as a useful tool to minimize wait times and meet passenger screening demands at times when throughput is high. Officials at another airport we visited rely on these canines in busy terminals during peak periods. According to officials at two of the airports we visited, the use of passenger screening canines helped them to reduce wait times due to increased passenger volumes in the spring and summer of 2016. TSA officials at airports also utilize part-time TSOs and overtime hours to accommodate increases in passenger throughput and wait times. For example, according to officials at all eight of the airports we visited, they use overtime during peak travel times, such as during holiday travel seasons, and officials usually plan the use of overtime in advance. 
Additionally, TSA officials at four of the airports we visited told us they use part-time TSOs to help manage peak throughput times throughout the day. According to TSA officials at two of the airports we visited, they move TSOs between checkpoints to accommodate increases in passenger throughput at certain checkpoints and to expedite screening operations. For example, TSA officials at one airport we visited have a team of TSOs that terminal managers can request on short notice. Officials at the other airport estimated that they move TSOs between terminals about 40 times per day. TSA headquarters has taken steps intended to improve information sharing with stakeholders about staffing and related screening procedures at airports. For example, TSA officials hold daily conference calls with industry association, airline, and airport officials at the 28 airports monitored by the AOC. According to TSA headquarters officials, TSA established the daily conference call as a mechanism intended to ensure timely communication with stakeholders and to help identify and address challenges in airport operations such as increases in passenger wait times. Also, TSA headquarters officials stated that they conducted a series of presentations and meetings with industry, airline, and airport officials to discuss TSA’s RAP, security enhancements at airports, and airport screening processes, among other things. For example, TSA’s headquarters officials shared information about the fiscal year 2017 RAP in October 2016 during a briefing at an industry conference and a meeting with airline representatives, airline engineers, and Federal Aviation Administration officials. Additionally, TSA headquarters officials facilitated a stakeholder meeting in May 2017 to discuss planned improvements for the TSA Pre® Program and met with stakeholders in June 2017 to discuss security enhancements and changes to screening procedures for carry-on baggage. 
In addition to headquarters-level initiatives, at the eight airports we visited, we found that FSDs shared information with airport and airline officials by meeting on an ongoing basis to discuss TSA staffing and related screening procedures. For example, according to the FSDs and airline and airport officials at all eight airports we visited, FSDs met with stakeholders on a daily, weekly, monthly, or quarterly basis. According to FSDs and airline and airport officials, during these meetings FSDs discussed TSO staffing levels at the airports, instances when passenger screening wait times were long at security checkpoints, and TSA screening equipment performance, among other things. Stakeholders told us that TSA headquarters officials and most FSDs have improved information sharing since fiscal year 2016. With regard to TSA headquarters officials’ information sharing efforts, officials from all three industry associations we interviewed stated that, since fiscal year 2016, TSA headquarters has improved information sharing with their association member companies and attributed that improvement, in part, to the daily conference call between TSA and stakeholders. For example, officials from one industry association stated that the calls benefited members by facilitating collaboration with TSA to more quickly identify and address problems, such as malfunctioning screening equipment, before the problems negatively affected passengers. An official from another industry association told us that the daily conference call substantially improved communication between TSA and the organization by providing a regular opportunity to discuss airport security issues and TSA’s plans to resolve those issues. Additionally, stakeholders we interviewed generally reported positive relationships or improved information sharing with FSDs, but also noted differences in the type and extent of information that FSDs shared.
For example, officials at seven of eight airlines and all eight airports we visited stated that they have positive relationships with their FSDs and that their FSDs were accessible and available when needed, while the remaining airline official noted improving access to information. Furthermore, officials from all three industry associations cited improved information sharing between their members at airports and FSDs since fiscal year 2016, but officials from two associations noted that some FSDs still do not regularly share information, such as changes in the number of TSOs staffed at individual airports. According to TSA headquarters officials, stakeholders can elevate any problems they experience with FSDs sharing information to regional directors, who are responsible for ensuring that FSDs engage regularly with stakeholders. We provided a draft of this product to DHS for comment. We received technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Homeland Security, the Administrator of TSA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7141 or groverj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix I. In addition to the contact named above, Ellen Wolfe, Assistant Director; Joel Aldape, David Alexander, Chuck Bausell, David Beardwood, Wendy Dye, Miriam Hill, Susan Hsu, Thomas Lombardi, Kevin Newak, Heidi Nielson, and Natalie Swabb made significant contributions to this report.
TSA employs about 43,000 TSOs who screen over 2 million passengers and their baggage each day at airports in the United States. TSA allocates TSOs to airports using a computer-based staffing model and information from airports, which together are intended to provide each airport with the optimum number of TSOs. In the spring of 2016, long screening checkpoint lines at certain U.S. airports raised questions about TSA's process for allocating TSOs. The Aviation Security Act of 2016 includes a provision for GAO to review TSA's process for allocating TSOs. This report examines how (1) TSA modifies staffing assumptions and tailors staffing levels to airports' needs, (2) TSA monitors wait times and throughput and adjusts resources accordingly, and (3) TSA shares information with stakeholders about staffing and related screening procedures at airports. GAO reviewed TSA documentation describing how the agency modifies staffing assumptions and manages stakeholder coordination. GAO also analyzed passenger wait time and throughput data from January 2015 through May 2017 for the 28 airports monitored by headquarters. GAO visited eight airports selected on the basis of passenger volume and other factors and interviewed TSA officials and stakeholders at those locations. GAO is not making any recommendations. The Transportation Security Administration (TSA) modifies staffing assumptions used in its computer-based staffing model (model) and tailors staffing levels to individual airport needs. Specifically, TSA works with a contractor annually to evaluate the assumptions used in the model and modifies the model's assumptions as needed. For example, TSA adjusted its model after contractor evaluations conducted in fiscal years 2016 and 2017 found that transportation security officers (TSO) needed more time to screen passengers and their baggage when using one type of screening equipment.
Moreover, in 2016, TSA began using forecasts on the number of passengers screened at each airport's checkpoints (throughput) to better allocate staff commensurate with the expected rate of increase in passenger throughput at each airport. Furthermore, prompted by the long wait times at some airports in 2016, for the 2017 model TSA officials used actual expedited screening data, specific to each individual airport, rather than relying on the system-wide estimate used in 2016. TSA officials also use other information specific to each airport—such as staff training needs—to further tailor the TSO allocation because the initial allocation resulting from the model does not reflect the full range of operating conditions at individual airports. TSA uses data to monitor passenger wait times and throughput on a daily basis and responds to increases. For example, TSA's Airport Operations Center (AOC) monitors daily wait times and passenger throughput from 28 airports that TSA officials say represent the majority of passenger throughput nationwide or are operationally significant. Furthermore, TSA officials at airports are required to report to the AOC when an event occurs—such as equipment malfunctions—that affects airport screening operations and results in wait times that are greater than 30 minutes in standard screening lanes. GAO analyzed wait time data for the AOC-monitored airports for the period of January 2015 through May 2017 and found that TSA's reported wait times met its standard of less than 30 minutes in standard screening 99 percent of the time. Within that time frame, two airports accounted for the longest wait times in the spring of 2016. TSA officials identified several tools, such as passenger screening canines, that they use to respond to increases in passenger wait times at these airports. 
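The compliance figure above amounts to a simple share calculation over wait-time records. As a hypothetical sketch only (the wait-time values, variable names, and sample size below are invented for illustration and are not GAO's actual data or analysis code), the percentage of standard-lane records meeting a 30-minute standard could be computed like this:

```python
# Illustrative only: made-up daily wait times (minutes) in standard
# screening lanes; these are not GAO's or TSA's actual data.
wait_times_minutes = [12, 8, 31, 22, 18, 45, 9, 14, 27, 16]

# A record "meets the standard" if the wait is under 30 minutes.
met_standard = sum(1 for w in wait_times_minutes if w < 30)
share_meeting_standard = met_standard / len(wait_times_minutes)

print(f"{share_meeting_standard:.0%} of records met the under-30-minute standard")
```

An actual analysis of the AOC data would compute this share per airport and per day over January 2015 through May 2017, but the arithmetic is the same.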
TSA has taken steps to improve information sharing with airline and airport officials (stakeholders) about staffing and related airport screening operations, and most stakeholders GAO interviewed reported improved satisfaction with information sharing. However, some stakeholders noted differences in the type and extent of information shared. According to TSA officials, stakeholders can elevate any problems they experience with information sharing within TSA to ensure information is shared regularly with stakeholders.
The National Defense Authorization Act (NDAA) for Fiscal Year 1995 authorized the Secretary of Defense to conduct personnel demonstration projects at the department’s laboratories designated as Science and Technology Reinvention Laboratories. The demonstration projects were established to give laboratory managers more authority and flexibility in managing their civilian personnel. These projects function as the vehicles through which the department can determine whether changes in personnel management concepts, policies, or procedures, such as flexible pay or hiring authorities, would result in improved performance and would contribute to improved DOD or federal personnel management. Table 1 presents a list of the 15 defense laboratories included in the scope of our review. The Defense Laboratories Office—within the Office of the Under Secretary of Defense for Research and Engineering (Research and Engineering)—carries out a range of core functions related to the defense labs, including the aggregation of data, analysis of capabilities, and alignment of activities, as well as advocacy for the defense labs. The National Defense Authorization Act for Fiscal Year 2017 gave authority to conduct and evaluate defense laboratory personnel demonstration projects to the Under Secretary of Defense for Research and Engineering and, accordingly, the Defense Laboratories Office. The Defense Laboratories Office supports the Research and Engineering mission by helping to ensure comprehensive department-level insight into the activities and capabilities of the defense laboratories. The Laboratory Quality Enhancement Program (LQEP) was chartered on April 15, 1994, to improve productivity and effectiveness of the defense laboratories through changes in, among other things, personnel management and contracting processes.
The NDAA for Fiscal Year 2017 established a new organizational structure for the program, adding two new panels while also specifying that two previously existing subpanels on personnel and infrastructure would continue to meet. The NDAA for Fiscal Year 2017 requires the department to maintain an LQEP Panel on Personnel, Workforce Development, and Talent Management—one of the four panels established by a February 14, 2018, charter signed by the Under Secretary of Defense for Research and Engineering. The purpose of the panel is to help the LQEP achieve the following goals: (1) review and make recommendations to the Secretary of Defense on current policies and new initiatives affecting the defense laboratories; (2) support implementation of quality enhancement initiatives; and (3) conduct assessments and data analysis. The LQEP Panel on Personnel, Workforce Development, and Talent Management includes representatives from each of the defense laboratories, as well as from the Army, Navy, Air Force, appropriate defense agencies, and Office of the Under Secretary of Defense for Research and Engineering. A hiring authority is the law, executive order, or regulation that allows an agency to hire a person into the federal civil service. Among other roles, hiring authorities determine the rules (or a subset of rules within a broader set) that agencies must follow throughout the hiring process. These rules may include whether a vacancy must be announced, who is eligible to apply, how the applicant will be assessed, whether veterans preference applies, and how long the employee may stay in federal service. Hiring authorities may be government-wide or granted to specific agencies. Competitive (Delegated) Examining. This is the traditional method for making appointments to competitive service positions, and it requires adherence to Title 5 competitive examining requirements.
The competitive examining process requires agencies to notify the public that the government will accept applications for a job, screen applications against minimum qualification standards, apply selection priorities such as veterans preference, and assess applicants’ relative competencies or knowledge, skills, and abilities against job-related criteria to identify the most qualified applicants. Federal agencies typically assess applicants by rating and ranking them based on their experience, training, and education. Figure 1 depicts the Office of Personnel Management’s (OPM) 80-day standard roadmap for hiring under the competitive process. Governmentwide (Title 5) Direct Hire Authority. This authority allows agencies to appoint candidates to positions without regard to certain requirements in Title 5 of the United States Code, with OPM approval. A direct hire authority expedites hiring by eliminating specific hiring rules. In order for an agency to use direct hire, OPM must determine that there is either a severe shortage of candidates or a critical hiring need for a position or group of positions. When using the direct hire authority, agencies must adhere to certain public notice requirements. The Pathways Programs. These programs were created to ensure that the federal government continues to compete effectively for students and recent graduates. The current Pathways Programs consist of the Internship Program, the Recent Graduates Program, and the Presidential Management Fellows Program. Initial hiring is made in the excepted service, but it may lead to conversion to permanent positions in the competitive service. Veterans-Related Hiring Authorities. These include both the Veterans Recruitment Appointment Authority and the Veterans Employment Opportunities Act authority. The Veterans Recruitment Appointment authority allows for certain exceptions from the competitive examining process. 
Specifically, agencies may appoint eligible veterans without competition under limited circumstances or otherwise through excepted service hiring procedures. The Veterans Employment Opportunities Act authority is a competitive service appointment authority that allows eligible veterans to apply for positions announced under merit promotion procedures when an agency accepts applications from outside of its own workforce. The Defense Laboratory Direct Hire Authorities. These include the following four types of direct hire authorities granted to the defense laboratories by Congress for hiring STEM personnel: (1) direct hire authority for candidates with advanced degrees; (2) direct hire authority for candidates with bachelor’s degrees; (3) direct hire authority for veterans; and (4) direct hire authority for students currently enrolled in a graduate or undergraduate STEM program. The purpose of these direct hire authorities is to provide a streamlined and accelerated hiring process to allow the labs to successfully compete with private industry and academia for high-quality scientific, engineering, and technician talent. The Expedited Hiring Authority for Acquisition Personnel. This authority permits the Secretary of Defense to designate any category of positions in the acquisition workforce as positions for which there exists a shortage of candidates or there is a critical hiring need, and to utilize specific authorities to recruit and appoint qualified persons directly to positions so designated. The Science, Mathematics, and Research for Transformation (SMART) Scholarship-for-Service Program. This program was established pursuant to 10 U.S.C. § 2192a, as amended, and is funded through the National Defense Education Program. The SMART Scholarship-for-Service Program provides academic funding in exchange for completing a period of full-time civilian employment with DOD upon graduation.
The labs have used the defense laboratory-specific direct hire authorities more than any other category of agency-specific or government-wide hiring authority. Defense laboratory officials we surveyed reported that these direct hire authorities had been the most helpful to the labs’ efforts to hire highly qualified candidates for STEM positions, and also reported that the use of certain incentives had been helpful in this effort. However, even with access to the authorities, these defense laboratory officials identified challenges associated with the hiring process that affected their ability to hire highly qualified candidates. For fiscal years 2015 through 2017, the defense laboratories used laboratory-specific direct hire authorities more often than any other category of hiring authorities when hiring STEM personnel. Moreover, the defense laboratories’ use of these direct hire authorities increased each year from fiscal year 2015 through fiscal year 2017. Of the 11,562 STEM hiring actions in fiscal years 2015 through 2017, approximately 46 percent were completed using one of the defense laboratory direct hire authorities. The second and third most used hiring authorities were internal hiring actions and the expedited hiring authority for acquisition personnel, each of which comprised approximately 12 percent of the hiring actions during the time period. Table 2 provides information on the overall number of hiring actions by hiring authority for fiscal years 2015 through 2017. The laboratory-specific direct hire authorities include the direct hire authorities for candidates with advanced degrees, candidates with bachelor’s degrees, and candidates who are veterans—authorities that were granted by Congress in prior legislation. Among the defense laboratory direct hire authorities, the direct hire authority for candidates with bachelor’s degrees was used for 55 percent of all direct hires, for a total of 2,920 hiring actions for fiscal years 2015 through 2017.
During the same time frame, the labs used the direct hire authority for candidates with advanced degrees for approximately 36 percent (1,919 hiring actions) of all direct hires, and the direct hire authority for veteran candidates for approximately 9 percent (455 hiring actions). In addition, for less than one percent of the direct hires, either the labs used another category of laboratory-specific direct hire authority or we were unable to determine which type of direct hire authority was used during those same three fiscal years. See table 3 for information on the defense labs’ use of the defense laboratory-specific direct hire authorities for fiscal years 2015 through 2017. In fiscal year 2017 the defense labs used the defense laboratory direct hire authorities for 54 percent of STEM hiring actions completed, representing an increase of approximately 16 percentage points relative to fiscal year 2015, when 38 percent were hired under defense lab direct hire authorities. For additional information on the labs’ use of hiring authorities in fiscal years 2015 through 2017, as well as hiring authority data by laboratory, see appendix IV. One laboratory official explained that the increased use of the direct hire authorities could be a result of the NDAA for Fiscal Year 2016, which increased the laboratories’ allowable use of the direct hire authority for candidates with bachelor’s degrees from 3 percent to 6 percent, and use of the direct hire authority for veterans from 1 percent to 3 percent, of the total number of scientific and engineering positions at each laboratory at the end of the preceding fiscal year. The direct hire authority for candidates with bachelor’s degrees was used most often—for 1,151 out of 1,835 hiring actions—as compared with the other direct hire authorities in fiscal year 2017. See table 4 for more information on the laboratories’ use of all hiring authorities in fiscal year 2017. 
In addition, table 5 provides more information on the labs’ use of the direct hire authorities in fiscal year 2017. Defense laboratory officials we surveyed most frequently identified the three defense laboratory-specific direct hire authorities as having helped to hire highly qualified candidates (see figure 2) and to hire quickly (see figure 3). Specifically, 15 of 16 respondents to our survey stated that each of the three direct hire authorities had been helpful in hiring highly qualified candidates, and that the direct hire authorities for veterans and for candidates with an advanced degree had helped them to hire quickly. Moreover, all 16 survey respondents stated that the direct hire authority for candidates with a bachelor’s degree had helped them to hire quickly. Among the three direct hire authorities, the one for candidates with bachelor’s degrees was reported to be the most helpful to the laboratories’ hiring efforts, according to our survey results. A majority of the laboratory officials we surveyed also stated that the Expedited Hiring Authority and the Science, Mathematics, and Research for Transformation (SMART) Program had both helped facilitate their efforts to hire highly qualified candidates and to hire them quickly. According to our survey, the least helpful hiring authority that lab officials reported using was the delegated examining unit authority. Six of 16 survey respondents stated that the delegated examining unit authority had helped them to hire highly qualified candidates, while 9 of 16 stated that the authority had hindered this effort. Three of 16 survey respondents stated that the delegated examining unit authority had helped them to hire quickly, while 12 of 16 stated that the use of this authority had hindered their ability to hire quickly. During our interviews with laboratory officials, hiring officials and supervisors described the defense laboratory direct hire authorities as being helpful in their hiring efforts.
For example, hiring officials from one lab stated that the direct hire authorities were the easiest authorities to use, and that since their lab had started using them, job offer acceptance rates had increased and their workload related to hiring had decreased. A hiring official from another laboratory stated that the use of direct hire authorities had allowed their lab to be more competitive with the private sector in hiring, which is useful due to the high demand for employees in research fields. A supervisor from one lab stated that the use of direct hire authorities was not only faster than the competitive hiring process, but it also allowed supervisors a greater ability to get to know candidates early in the process to determine whether they met the needs of a position. In comparison, hiring managers we interviewed at one laboratory stated that the Pathways Program is not an effective means of hiring students because the program requires a competitive announcement. Supervisors also stated that the application process for Pathways can be cumbersome and confusing for applicants and may cause quality applicants to be screened out early. Defense laboratory officials who responded to our survey also stated that the process takes too long and that quality applicants may drop out of the process due to the length of the process. Defense laboratory hiring data also indicated that use of the defense laboratory direct hire authorities resulted in faster than median hiring times. As shown in table 6, the median time to hire for STEM positions at the defense laboratories in fiscal year 2017 was 88 days. The median time to hire when using the defense laboratories’ direct hire authorities, Pathways, or the SMART program authority was faster than that of the median for all categories combined. The median time to hire when using the competitive hiring process was approximately twice as long as when using the labs’ direct hire authorities. 
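The median comparison described above reduces to computing a median of elapsed days per hiring-authority category. A minimal sketch follows; the category names and day counts are invented for illustration and are not the laboratories' actual figures from table 6:

```python
# Hypothetical sketch: median time to hire, in days, by hiring-authority
# category. All values below are made up for illustration.
from statistics import median

days_by_authority = {
    "lab direct hire": [35, 42, 44, 50, 61],
    "competitive examining": [80, 88, 95, 102, 110],
}

medians = {name: median(days) for name, days in days_by_authority.items()}
for name, med in medians.items():
    print(f"{name}: {med} days")
```

The median is used rather than the mean because, as the report's discussion of outlier cases suggests, a few unusually long hiring actions (for example, offers extended months before a candidate's graduation) would otherwise skew the comparison.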
Our full analysis of defense laboratory hiring data, including the time to hire by hiring authority category, for fiscal years 2015 through 2017 can be found in appendix V. Defense laboratory officials also cited the use of incentives as helpful in hiring highly qualified candidates, as shown in figure 4. According to our survey results, the defense laboratories’ flexibility in pay setting under their demonstration project authority was generally considered to be the most helpful incentive, with 13 of 16 survey respondents stating that this incentive had very much helped them to hire highly qualified candidates. During interviews, laboratory officials described the use of these incentives as being particularly helpful if a candidate is considering multiple job offers because the incentives can help make the lab’s offer more competitive with offers from other employers. Multiple hiring officials stated that they would generally not include such incentives in an initial offer, but that if the candidate did not accept that offer, they would consider increasing the salary or offering a bonus. A hiring official from one lab stated that his lab has not offered many recruitment bonuses in recent years, because their acceptance rate has been sufficiently high without the use of that incentive. Many of the recently hired lab employees whom we interviewed also cited incentives, including bonuses and student loan repayment, as factoring into their decisions to accept the employment offers for their current positions. For example, one recently hired employee stated that the lab’s student loan repayment program was a significant factor in his decision to accept employment at the lab rather than with private industry. Recently hired employees also cited less tangible benefits of working at the labs, including the work environment, job stability, and type of work performed, as key factors in their decisions to accept their current positions. 
One newly hired employee stated that, while she could earn more money in a private-sector job, the defense laboratory position would afford her the freedom to pursue the type of work she is currently doing, and that this was a major consideration in her decision to accept it. Another newly hired employee similarly stated that he was interested in the type of research conducted at the lab where he now works, and that he was attracted to the opportunity to contribute to the national defense, while also taking advantage of benefits that support the pursuit of higher education. Defense laboratory officials we surveyed reported that, although the available hiring authorities and incentives are helpful, they experience a range of challenges to their ability to hire highly qualified candidates, as shown in figure 5, ranging in order from the most to the least frequently cited. In addition, figure 6 shows the extent to which officials reported selected top challenges that hindered their respective labs’ abilities to hire highly qualified candidates. Defense laboratory officials described how hiring challenges identified in our survey affect their ability to hire high quality candidates. Specifically, these challenges are as follows: Losing quality candidates to the private sector: Fifteen of 16 survey respondents stated that this was a challenge, and 12 of the 15 stated that this challenge had somewhat or very much hindered their lab’s ability to hire highly qualified candidates for STEM positions since October 2015. Hiring officials and supervisors we interviewed stated that private-sector employers can make on-the-spot job offers to candidates at college career fairs or other recruiting events, whereas the labs are unable to make a firm job offer until later in the hiring process. 
Government-wide hiring freeze: Fifteen of 16 survey respondents identified this as a challenge, with 13 of those reporting that it had either somewhat or very much hindered their lab’s ability to hire highly qualified candidates for STEM positions since October 2015. Multiple hiring officials and supervisors we interviewed stated that they had lost candidates whom they were in the process of hiring because the candidates had accepted other offers due to the delays created by the hiring freeze. In addition, some officials stated that, although the freeze had been lifted, their labs’ hiring efforts were still affected by backlogs created by the freeze, or were adapting to new processes that were implemented as a result of the freeze. Delays with the processing of security clearances: Fifteen of 16 survey respondents cited this as a challenge; 12 of the 15 stated that this challenge had somewhat or very much hindered their lab’s ability to hire highly qualified candidates for STEM positions since October 2015. A supervisor from one lab stated that he was in the process of trying to hire two employees whose hiring actions had been delayed due to the security clearance process. The supervisor stated that he had been told it could potentially take an additional 6 months to 1 year to complete the process, and that he believed this may cause the candidates to seek other employment opportunities. In other cases, hiring officials stated that employees may be able to begin work prior to obtaining a clearance, but that they may be limited in the job duties they can perform while waiting for their clearance to be granted. The government-wide personnel security clearance process was added to GAO’s High Risk List in 2018, based on our prior work that identified, among other issues, a significant backlog of background investigations and delays in the timely processing of security clearances. 
Inability to extend a firm job offer until a final transcript is received: Fourteen of 16 survey respondents stated that this was a challenge, with 10 of the officials responding that it had somewhat or very much hindered their lab’s ability to hire highly qualified candidates. One hiring official stated that top candidates will often receive 5 to 10 job offers prior to graduation, and that his lab’s offer may be the only one of those characterized as tentative. Multiple officials noted that career fairs can often occur several months prior to graduation, so the lab would have to wait several months before extending a firm offer to a candidate it has identified. Delays with processing personnel actions by the external human resources office: Thirteen of 16 survey respondents stated that this presented a challenge, and 9 of the 13 stated that this challenge had somewhat or very much hindered their lab’s ability to hire highly qualified candidates for STEM positions since October 2015. Multiple hiring officials stated that employees at their human resource offices may not have an understanding of either the technical nature of the positions being filled at the lab or the lab’s unique hiring authorities, and that this lack of knowledge could create delays. Other officials noted that their servicing human resource offices seemed to be inflexible regarding certain paperwork requirements. For example, officials at one lab stated that their human resource office requires candidates’ resumes to be formatted in a particular way, and that they have been required to ask candidates to make formatting changes to their resumes. An official at another lab stated that the lab has faced similar challenges with regard to the formatting of transcripts and has had to request clarifying documentation from the university. In both cases, the officials described these requirements as embarrassing and as a source of delay in the hiring process.
Further, both a supervisor and a newly hired employee we interviewed noted that it is difficult to learn the status of an application when it is being processed by the human resource office. Overall length of the hiring process: Twelve of 16 survey respondents cited this as a challenge; 11 of the 12 stated that this challenge had somewhat or very much hindered their lab’s ability to hire highly qualified candidates for STEM positions since October 2015. Hiring officials and supervisors we interviewed stated that their lab had lost candidates due to the length of the hiring process. One supervisor we interviewed stated that he has encountered candidates who really wanted to work at his lab but had had to pursue other opportunities because they could not afford to wait to be hired by the lab. Multiple newly hired employees we interviewed described the process as slow or lengthy, but explained why they were willing to wait. For example, some employees were already working at their lab in a contractor or post-doctoral fellowship position, and accordingly they were able to continue in these positions while completing the hiring process for the permanent positions they now hold. One employee stated that if the process had gone on any longer, he likely would have accepted another offer he had received, while another employee stated that he knew of at least two post-doctoral fellows at his lab who chose not to continue in the hiring process for a permanent position at the lab due to the length of the hiring process. The department and the defense laboratories track hiring data that can be used to evaluate some aspects of the individual labs’ hiring efforts, but the Defense Laboratories Office has not routinely obtained or monitored these data or evaluated the effectiveness of hiring, including the use of hiring authorities, across the defense laboratories as a whole.
Laboratory hiring data are captured at the department level in the Defense Civilian Personnel Data System (DCPDS)—the department's system of record for personnel data. In addition, the individual defense laboratories track hiring data, including the type of hiring authority used and certain milestone dates that can be used to measure the length of the hiring process, known as time to hire. According to OPM guidance and our prior work, time to hire is a measure that can provide insight into the effectiveness of the hiring process, and federal agencies are required to report time to hire for certain types of hiring actions to OPM. Defense laboratory officials stated that, from their perspectives, the time-to-hire metric does not sufficiently reflect the effectiveness of the use of specific authorities, particularly when using the most commonly tracked milestones—from the initiation of a request for personnel action to an employee's entrance-on-duty date. For example, officials stated that when a direct hire authority is used to hire a candidate who is completing the final year of his or her educational program, the lab may identify and provide a tentative offer to this candidate several months prior to graduation, consistent with private-sector recruitment methods. In this case, officials stated that the length of time between the initiation of the request for personnel action and the candidate's entrance-on-duty date, following his or her graduation, could span several months. According to defense laboratory officials, the total number of days for this hiring action gives the appearance that the use of the hiring authority was not efficient; however, officials stated that it would have been effective from the supervisor's perspective, because the use of the hiring authority made it possible to recruit a highly qualified candidate in a manner that was more competitive with the private sector.
Further, time-to-hire data, as reflected by the milestone dates that are currently tracked across the defense laboratories, may not reflect a candidate’s perception of the length of the hiring process. More specifically, a candidate may consider the hiring process to be completed upon receiving a job offer (either tentative or final), which could occur weeks or months before the candidate’s entrance-on-duty date, the commonly used end-point for measuring time to hire. According to officials, the length of time from when the offer is extended to entrance on duty can be affected by a candidate’s individual situation and preferences, such as the need to complete an educational program or fulfill family or professional responsibilities prior to beginning work in the new position. In other cases, certain steps of the hiring process, such as completing the initial paperwork or obtaining management approval, may occur after a candidate has been engaged but prior to the initiation of a request for personnel action—the commonly used start-point for measuring time to hire. In this situation, the candidate’s perception of the length of the hiring process may be longer than what is reflected by the time-to-hire data. For the reasons described above, some defense laboratories measure time to hire using milestones that they have determined more appropriately reflect the effectiveness of their hiring efforts. For example, officials from one lab stated that they have sought to measure the length of the hiring process that occurs prior to the request for personnel action, while officials from some labs stated that they measure time to hire using the tentative offer date as an end-point. In addition, some laboratories informally collect other types of data that they use in an effort to evaluate their hiring efforts, such as the reasons why candidates decline a job offer or feedback on the hiring process from newly hired employees. 
However, officials from the Defense Laboratories Office stated that their office has not conducted any review of the effectiveness of defense laboratory hiring, including the use of hiring authorities, across the labs. The National Defense Authorization Act for Fiscal Year 2017 gave authority to conduct and evaluate defense laboratory personnel demonstration projects to the Office of the Under Secretary of Defense for Research and Engineering, under which the Defense Laboratories Office resides. Defense Laboratories Office officials stated that the office has not evaluated the effectiveness of defense laboratory hiring because it does not have access to defense laboratory hiring data, has not routinely requested these data from the labs or at the department level in order to monitor them, and has not developed performance measures to evaluate the labs' hiring. As noted, laboratory hiring data are captured at the department level in DCPDS and in a variety of service- and laboratory-specific systems and tools. However, the Defense Laboratories Office does not have access to these data and, according to one official, would not have access to defense laboratory hiring data unless officials specifically requested them from the labs or from the Defense Manpower Data Center, which maintains DCPDS. According to the official, the Defense Laboratories Office has not routinely requested such data in the past, in part because its role did not require evaluation of such data. In addition, the Defense Laboratories Office has not developed performance measures to evaluate the effectiveness of hiring across the defense laboratories or the labs' use of hiring authorities.
An official from the Defense Laboratories Office stated that the office may begin to oversee the effectiveness of the defense laboratories' hiring efforts and, in doing so, may consider establishing performance measures to be used consistently across the labs, which could include time to hire or other measures. However, as of March 2018, the office had not established such measures for use across the defense laboratories or provided documentation of any planned efforts. Standards for Internal Control in the Federal Government states that management should design appropriate types of control activities to achieve the entity's objectives, including top-level reviews of actual performance and comparison of actual performance with planned or expected results. Further, consistent with the principles embodied in the GPRA Modernization Act of 2010, establishing a cohesive strategy that includes measurable outcomes can provide agencies with clear direction for implementing activities in multi-agency cross-cutting efforts. We have previously reported that agencies are better equipped to address management and performance challenges when managers effectively use performance information for decision making. Without routinely obtaining and monitoring defense laboratory hiring data and developing performance measures, the Defense Laboratories Office cannot effectively oversee hiring, including the use of hiring authorities, at the defense laboratories. Specifically, without performance measures for evaluating the effectiveness of the defense laboratories' hiring, and more specifically the use of hiring authorities, the department lacks reasonable assurance that these authorities—in particular, those granted by Congress to the defense laboratories—are resulting in improved hiring outcomes.
In addition, without evaluating the effectiveness of the defense laboratories' hiring efforts, the department cannot understand any challenges experienced by the labs or determine appropriate strategies for mitigating those challenges. As a result, the department and the defense laboratories may be unable to demonstrate that they are using their authorities and flexibilities effectively, or that such authorities and flexibilities should be maintained or expanded for future use.

DOD does not have clear time frames for its process for approving and implementing new hiring authorities for the defense laboratories. Section 1105 of the Carl Levin and Howard P. "Buck" McKeon National Defense Authorization Act for Fiscal Year 2015 established a direct hire authority for students enrolled in a scientific, technical, engineering, or mathematics course of study at institutions of higher education on a temporary or term basis. Officials from the Defense Laboratories Office stated that the labs were unable to use the authority because the department's process for allowing the laboratories to use it—the publication of a federal register notice—took longer than anticipated. On June 28, 2017—2½ years after the authority was granted in the NDAA for Fiscal Year 2015—the department published a federal register notice allowing the defense laboratories to use the direct hire authority for students. DOD officials stated that the department has typically published a federal register notice whenever the defense laboratories are granted a new hiring authority in legislation—for example, when an NDAA is issued or when certain modifications to the demonstration projects are made. The Defense Civilian Personnel Advisory Service—through its personnel policymaking role for the department—at the time required that the federal register notice process be used to implement any hiring authorities granted to the defense labs by Congress in legislation.
These procedures were published in DOD Instruction 1400.37. DOD officials attributed the delay to coordination issues that occurred across the relevant offices during the approval process for the federal register notice. Changes to DOD organizational structures further complicated the process of implementing new hiring authorities for the defense laboratories. Specifically, in late 2016 a provision in the NDAA for Fiscal Year 2017 shifted the authority to conduct and evaluate defense laboratory personnel demonstration projects from the Office of the Under Secretary of Defense for Personnel and Readiness to the Office of the Under Secretary of Defense for Research and Engineering. Within the Office of the Under Secretary of Defense for Research and Engineering, the Defense Laboratories Office has been tasked with the responsibility for matters related to the defense laboratories. According to the Director of the Defense Laboratories Office, informal discussions about the transition began shortly after the NDAA for Fiscal Year 2017 was passed in late 2016. According to that official, despite the shift in oversight responsibility, coordination between the offices of the Under Secretaries for Research and Engineering and for Personnel and Readiness is required on issues related to civilian personnel, including defense laboratory federal register notices. Although a formal process for coordination did not exist at the start of our review, officials from the Defense Laboratories Office stated that representatives from the offices had met approximately five times since December 2016 and were taking steps to establish a coordination process for implementing new authorities. According to officials from the Defense Laboratories Office, during those meetings as well as during other, less formal interactions, officials have taken steps to formalize the roles and responsibilities of the relevant offices.
According to officials from the Defense Laboratories Office, as of May 2018 the office was drafting a memorandum to formalize the roles and responsibilities of the Defense Laboratories Office and the Office of the Under Secretary of Defense for Personnel and Readiness to correspond to the federal register notice approval process; however, officials did not provide a completion date. The Defense Laboratories Office established and documented its own federal register approval process in spring 2017 and updated it in early 2018. The aforementioned memorandum would further describe the roles and responsibilities of the Office of the Under Secretary of Defense for Research and Engineering and the Deputy Assistant Secretary of Defense for Civilian Personnel Policy in carrying out the updated process. According to officials, this is the process the office will use moving forward for coordination and approval of any future federal register notices. On March 6, 2018, the office published a federal register notice that rescinded the earlier instruction published by the Defense Civilian Personnel Advisory Service of the Office of the Under Secretary of Defense for Personnel and Readiness. By rescinding that instruction—including the earlier process for approving requests from the labs and federal register notices—the Defense Laboratories Office can, according to officials, publish its own process and guidance. In a 2016 presentation to the Joint Acquisition/Human Resources Summit on the defense laboratories, the Chair of the Laboratory Quality Enhancement Program Personnel Subpanel stated that a renewed and streamlined approval process would be beneficial to the creation of new authorities, among other things. Although Defense Laboratories Office officials provided a flowchart of the office's updated federal register approval process for coordination, this process did not include time frames for specific stages of the coordination.
Officials stated that they cannot arbitrarily assign time frames or deadlines for a review process because any time frames will be contingent on the other competing priorities of each office, and other tasks may take priority and thus push review of a federal register notice down in order of priority. Our prior work has found that other federal agencies identify milestones, significant events, or stages in the agency-specific rulemaking process, and track data associated with these milestones. That work also found that, despite variability across federal agencies in the length of time taken by the federal rulemaking process, scheduling and budgeting for rulemaking are useful tools for officials to manage regulation development and control the resources needed to complete a rule. Standards for Internal Control in the Federal Government further establishes that management should design control activities to achieve objectives and respond to risks. Further, management should also establish an organizational structure, assign responsibility, and delegate authority to achieve the entity’s objectives. Moreover, documentation is a necessary part of an effective internal control system. The level and nature of documentation may vary based on the size and complexity of the organization and its processes. The standards also underscore that specific terms should be fully and clearly set forth such that they can be easily understood. Our prior work on interagency collaboration has also found that overarching plans can help agencies overcome differences in missions, cultures, and ways of doing business, and can help agencies better align their activities, processes, and resources to collaborate effectively to accomplish a commonly defined outcome. 
Without establishing and documenting clear time frames for its process for departmental coordination efforts related to the approval and implementation of new hiring authorities, the department cannot be certain that it is acting in the most efficient or effective manner possible. Moreover, the defense laboratories may not promptly benefit from the use of congressionally granted hiring authorities, relying instead on other existing authorities. Doing so could, according to officials, have the unintended consequence of complicating the hiring process, increasing hiring times, or resulting in the loss of highly qualified candidates.

The future of the department's technological capabilities depends, in large part, on its investment in its people—the scientists and engineers who perform research, development, and engineering. To that end, Congress has granted the defense laboratories specific hiring authorities meant to encourage experimentation and innovation in their approaches to building and strengthening their workforces. The defense laboratories have used most of these authorities as a part of their overall hiring efforts. However, without obtaining and monitoring hiring data and developing performance measures, the Defense Laboratories Office may not be in a position to provide effective oversight of the defense laboratories' hiring, including the use of hiring authorities, or to evaluate the effectiveness of specific hiring authorities. Moreover, the absence of clear time frames to facilitate timely decision-making and implementation of any new hiring authorities may impede the laboratories' ability to make use of future authorities when authorized by Congress. Until the department addresses these issues, it lacks reasonable assurance that the defense laboratories are taking the most effective approach toward hiring a workforce that is critical to the military's technological superiority and ability to address existing and emerging threats.
We are making three recommendations to DOD. The Secretary of Defense should ensure that the Defense Laboratories Office routinely obtain and monitor defense laboratory hiring data to improve the oversight of the defense laboratories' use of hiring authorities. (Recommendation 1) The Secretary of Defense should ensure that the Defense Laboratories Office develop performance measures to evaluate the effectiveness of the defense laboratories' use of hiring authorities as part of the labs' overall hiring to better inform future decision making about hiring efforts and policies. (Recommendation 2) The Secretary of Defense should ensure that the Defense Laboratories Office, in collaboration with the Under Secretary of Defense for Personnel and Readiness and the Laboratory Quality Enhancement Panel's Personnel Subpanel, establish and document time frames for its coordination process to direct efforts across the relevant offices and help ensure the timely approval and implementation of hiring authorities. (Recommendation 3)

We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix VI, DOD concurred with our recommendations, citing steps the department has begun and plans to take to improve oversight and coordination of the defense laboratories' hiring efforts. DOD also provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and other interested parties, including the Defense Laboratories Office and defense laboratories. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact Brenda Farrell at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix VII.

The term "STEM" refers to the fields of science, technology, engineering, and mathematics. The following figure identifies the Department of Defense's broad categories of STEM occupations, as well as the specific occupational series within each category.

This report examines (1) the defense laboratories' use of existing hiring authorities and what officials view as the benefits of authorities and incentives and the challenges in hiring; (2) the extent to which the Department of Defense (DOD) evaluates the effectiveness of hiring, including hiring authorities, at the defense laboratories; and (3) the extent to which DOD has time frames for approving and implementing new hiring authorities. To address these objectives, we included in the scope of our review science, technology, engineering, and mathematics (STEM) hiring at the 15 defense laboratories designated as Science and Technology Reinvention Laboratories (STRL) that had been implemented at the time of our review within the Army, Navy, and Air Force. We included 9 Army laboratories: Armament Research, Development, and Engineering Center; Army Research Laboratory; Aviation and Missile Research, Development, and Engineering Center; Communications-Electronics Research, Development, and Engineering Center; Edgewood Chemical and Biological Center; Engineer Research and Development Center; Medical Research and Materiel Command; Natick Soldier Research, Development, and Engineering Center; and Tank Automotive Research, Development, and Engineering Center. We included 5 Navy laboratories: Naval Air Systems Command Warfare Centers, Weapons Division and Aircraft Division; Naval Research Laboratory; Naval Sea Systems Command Warfare Centers, Naval Surface and Undersea Warfare Centers; Office of Naval Research; and Space and Naval Warfare Systems Command, Space and Naval Warfare Systems Center, Atlantic and Pacific.
We included 1 Air Force laboratory: the Air Force Research Laboratory. We excluded 2 additional defense laboratories within the Army—the Army Research Institute and the Space and Missile Defense Command—because these defense laboratories were in the process of being implemented at the time of our review. For our first objective, we obtained and analyzed documentation, including past National Defense Authorization Acts (fiscal years 1995 through 2017), guidance related to government-wide hiring authorities, and federal register notices on existing hiring authorities used by the defense laboratories to hire STEM personnel. We obtained data that were coordinated by the Defense Manpower Data Center and prepared by the Defense Civilian Personnel Advisory Service's Planning and Accountability Directorate. These data included, among other things, hiring process milestone dates and the type of hiring authority used for each civilian hire at the defense laboratories for fiscal years 2015 through 2017. We selected these years because they were the three most recent years for which hiring data were available and because doing so would allow us to identify any trends in the use of hiring authorities or the length of time taken to hire. The data we obtained were extracted from DCPDS using the Corporate Management Information System. We refined the data to include only those hiring actions that were made by the 15 defense laboratories within the scope of our review. In addition, we excluded hiring actions that used a 700-series nature of action code—denoting position changes, extensions, and other changes—which we determined should not be included in our analysis. We included actions that used nature of action codes in the 100-series (appointments) and 500-series (conversions to appointments).
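The record-selection step described above can be sketched in code as follows. This is a hypothetical illustration only; the field names (`noa_code`, `id`) are assumptions for the sketch, not the actual DCPDS schema.

```python
# Sketch (not GAO's actual code): keep appointment (100-series) and
# conversion-to-appointment (500-series) nature of action codes; drop
# 700-series codes, which denote position changes, extensions, and
# other changes. Field names are illustrative.

def select_hiring_actions(records):
    """Return only records whose nature of action code is in scope."""
    kept = []
    for rec in records:
        series = rec["noa_code"] // 100  # leading digit of the three-digit code
        if series in (1, 5):             # 100-series or 500-series
            kept.append(rec)
    return kept

records = [
    {"id": "a", "noa_code": 170},  # appointment: kept
    {"id": "b", "noa_code": 570},  # conversion to appointment: kept
    {"id": "c", "noa_code": 702},  # extension or position change: dropped
]
print([r["id"] for r in select_hiring_actions(records)])  # → ['a', 'b']
```

In practice this filtering would run against the full DCPDS extract rather than an in-memory list, but the selection rule is the same.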
For the purpose of calculating time to hire, we also excluded records with missing dates and those for which the time-to-hire calculation resulted in a negative number (that is, the record's request for personnel action initiation date occurred after the enter-on-duty date). Specifically, we excluded 92 actions for which no request for personnel action initiation date was recorded and 205 actions for which that date occurred after the enter-on-duty date, for a total of 2.57 percent of all hiring actions. We included in our calculation 7 actions for which the request for personnel action initiation date was the same as the enter-on-duty date, resulting in a time to hire of zero days. To determine the extent to which the defense laboratories use existing hiring authorities, based on the department's data, we analyzed the current appointment authority codes identified for individual hiring actions. Current appointment authority codes are designated by the Office of Personnel Management and are used to identify the law, executive order, rule, regulation, or other basis that authorized an employee's most recent conversion or accession action. Based on our initial review of the data, we determined that, in some cases, more than one distinct current appointment authority code could be used to indicate the use of a certain hiring authority. Alternately, a single current appointment authority code could in some cases indicate more than one type of authority. In these cases, the details of the specific type of hiring authority used can be recorded in the description field associated with the current appointment authority code field. For this reason, in order to determine the type of hiring authority used, it was necessary to analyze the description fields for the current appointment authority code when certain codes were used. Two analysts independently reviewed each description and identified the appropriate hiring authority.
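The time-to-hire exclusion rules described above (drop records with a missing request-for-personnel-action date or a negative interval; keep zero-day records) can be sketched as follows. The field names are assumptions for illustration, not the actual data layout.

```python
from datetime import date

# Sketch (not GAO's actual code) of the time-to-hire calculation:
# exclude records with a missing request-for-personnel-action (RPA)
# initiation date or with an RPA date after the enter-on-duty (EOD)
# date; keep zero-day records; otherwise count elapsed days.

def time_to_hire(records):
    included, excluded = [], 0
    for rec in records:
        rpa, eod = rec.get("rpa_date"), rec["eod_date"]
        if rpa is None or (eod - rpa).days < 0:
            excluded += 1  # missing date, or RPA initiated after EOD
            continue
        included.append((eod - rpa).days)  # zero-day records are kept
    return included, excluded

records = [
    {"rpa_date": date(2017, 1, 2), "eod_date": date(2017, 3, 3)},  # 60 days
    {"rpa_date": None,             "eod_date": date(2017, 3, 3)},  # excluded
    {"rpa_date": date(2017, 4, 1), "eod_date": date(2017, 4, 1)},  # 0 days
    {"rpa_date": date(2017, 5, 1), "eod_date": date(2017, 4, 1)},  # excluded
]
days, dropped = time_to_hire(records)
print(days, dropped)  # → [60, 0] 2
```

The same rules applied to the full data set account for the 92 missing-date and 205 negative-interval exclusions described above.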
Following this process, the two analysts compared their work and resolved any instances in which the results of their analyses differed. A data analyst used the results to produce counts of the number of times various categories of hiring authorities were used, as well as the average time to hire for each hiring authority category. For those instances where the analysts could not identify a hiring authority on the basis of the three-digit codes or the description fields, the hiring actions were assigned to an "unknown" category. We note that the "unknown" category included 591 hiring actions, or approximately 5 percent of the total data for fiscal years 2015 through 2017. In addition, within the laboratory-specific direct hire authority category, if a determination could not be made about the specific type of laboratory-specific direct hire authority used, the hiring action was captured in the "direct hire authority, unspecified" category because the action was clearly marked as one of the laboratory-specific direct hire authorities but the type of authority (for example, direct hire for veterans) was unclear. Of the 5,303 hiring actions identified as using a laboratory-specific direct hire authority, 0.1 percent fell into the unspecified category. Based on the aforementioned steps, discussions with officials from the Defense Civilian Personnel Advisory Service and the Defense Manpower Data Center, reviews of additional documentation provided to support the data file, and interviews with officials from 13 of the laboratories about their data entry and tracking, we determined that these data were sufficiently reliable for the purposes of reporting the frequency with which the labs used specific hiring authorities and calculating the time it takes the labs to hire, or time to hire, for fiscal years 2015 through 2017.
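The dual-coding and reconciliation steps described above might be sketched as follows. This is a simplified illustration; the category labels and data structures are assumptions, and in the actual review disagreements were resolved by the analysts rather than by code.

```python
from collections import Counter

# Sketch (not GAO's actual code): each analyst independently maps a hiring
# action to a hiring-authority category (None = could not identify one).
# Agreements are tallied, with unidentifiable actions counted as "unknown";
# disagreements are flagged for the analysts to resolve jointly.

def reconcile(coding_a, coding_b):
    counts, disagreements = Counter(), []
    for action_id, cat_a in coding_a.items():
        cat_b = coding_b[action_id]
        if cat_a == cat_b:
            counts[cat_a or "unknown"] += 1
        else:
            disagreements.append(action_id)  # resolved jointly by the analysts
    return counts, disagreements

a = {"1": "lab direct hire", "2": "delegated examining", "3": None}
b = {"1": "lab direct hire", "2": "expedited hiring",    "3": None}
counts, flagged = reconcile(a, b)
print(dict(counts), flagged)  # → {'lab direct hire': 1, 'unknown': 1} ['2']
```

The resulting counts correspond to the per-authority tallies, with the "unknown" bucket matching the roughly 5 percent of actions that could not be categorized.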
To describe officials’ views of hiring authorities and other incentives, we conducted a survey of officials at each of the defense laboratories on (1) their perceptions of the various hiring authorities and incentives, (2) whether those authorities and incentives have helped or hindered hiring efforts, (3) the extent to which they experienced barriers to using hiring authorities, and (4) any challenges during the hiring process, among other things. We administered the survey to the official at each defense laboratory who was identified as the Laboratory Quality Enhancement Program Personnel, Workforce Development, and Talent Management Panel point of contact, because we determined that this individual would be the most knowledgeable about his or her lab’s hiring process and use of hiring authorities. One laboratory—the Space and Naval Warfare Systems Command Centers—had two designated Laboratory Quality Enhancement Program Personnel, Workforce Development, and Talent Management Panel points of contact, one for each of its command centers (Atlantic and Pacific). Because the contacts would each be knowledgeable about his or her lab’s hiring processes for their respective command centers, we chose to include both command centers in our survey. As a result, we included a total of 16 laboratory officials in our survey. We drafted our questionnaire based on the information obtained from our initial interviews with department, service, and laboratory personnel. We conducted pretests to check that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on agency officials, (4) the information could feasibly be obtained, and (5) the survey was comprehensive and unbiased. We conducted five pretests to include representatives from each of the three services, as well as from corporate research laboratories and from research, development, and engineering centers. 
We conducted the pretests—with the assistance of a GAO survey specialist—by telephone and made changes to the content and format of the questionnaire after each pretest, based on the feedback we received. Key questions from the questionnaire used for this study are presented in appendix II. We sent a survey notification email to each laboratory’s identified point of contact on July 6, 2017. On July 10, 2017, we sent the questionnaire by email as a Microsoft Word attachment that respondents could return electronically after marking checkboxes or entering responses into open answer boxes. One week later, we sent a reminder email, attaching an additional copy of the questionnaire, to everyone who had not responded. We sent a second reminder email and copy of the questionnaire to those who had not responded 2 weeks following the initial distribution of the questionnaire. We received questionnaires from all 16 participants by August 4, 2017, for a 100 percent response rate. Between July 26 and October 5, 2017, we conducted additional follow-up with 11 of the respondents via email to resolve missing or problematic responses. Because we collected data from every lab, there was no sampling error. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as non-sampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, how the responses were processed and analyzed, or the types of people who do not respond can influence the accuracy of the survey results. We took steps in the development of the survey, the data collection, and the data analysis to minimize these non-sampling errors and help ensure the accuracy of the answers that were obtained. For example, a survey specialist designed the questionnaire, in collaboration with analysts having subject matter expertise. 
Then, as noted earlier, the draft questionnaire was pretested to ensure that questions were relevant, clearly stated, and easy to comprehend. The questionnaire was also reviewed by internal subject matter experts and an additional survey specialist. Data were electronically extracted from the Microsoft Word questionnaires into a comma-delimited file that was then imported into a statistical program for quantitative analyses and into Excel for qualitative analyses. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error, and we addressed such issues as necessary. Quantitative data analyses were conducted by a survey specialist using statistical software. An independent data analyst checked the statistical computer programs for accuracy.

To obtain information on department- and service-level involvement in and perspectives on defense laboratory hiring, we interviewed officials at the Defense Civilian Personnel Advisory Service, the Defense Laboratories Office, the Army Office of the Assistant G-1 for Civilian Personnel, and the Navy Office of Civilian Human Resources. In addition, we interviewed hiring officials, first-line supervisors, and newly hired employees from a non-generalizable sample of six defense laboratories or subordinate-level entities within a laboratory (for example, a division or directorate) to obtain their perspectives on the hiring process. We selected the six laboratories based on the following two criteria: (1) two laboratories from each of the three services, and (2) a mix of both corporate research laboratories and research and engineering centers. In addition, because some hiring activities can occur at subordinate levels within a laboratory—such as a division or directorate—we included at least one subordinate-level entity for each service.
In total, we selected: Army Research Laboratory Sensors and Electron Devices directorate; Aviation and Missile Research, Development, and Engineering Center (Army); Naval Research Laboratory; Naval Air Warfare Center Weapons Division; Air Force Research Laboratory Information directorate; and Air Force Research Laboratory Space Vehicles directorate. For each lab, we requested to interview the official(s) most knowledgeable about the lab’s hiring process, supervisors who had recently hired, and newly hired employees. We initially requested to interview one group each of supervisors and newly hired employees. Following our first round of interviews at one laboratory, we requested to interview two groups each of supervisors and newly hired employees. Due to scheduling constraints, however, we were able to conduct only one supervisor interview at one lab and only one newly hired employee interview at a second lab. The views obtained from these officials, supervisors, and recent hires are not generalizable and are presented solely for illustrative purposes. For our second and third objectives, we reviewed guidance and policies for collecting and analyzing laboratory personnel data related to the implementation and use of hiring authorities by these labs. We interviewed DOD, military service, and defense laboratory officials to discuss and review their hiring processes and procedures for STEM personnel, the use of existing hiring authorities, and efforts to document and evaluate time-to-hire metrics. We also met with DOD officials from the Office of the Under Secretary of Defense for Personnel and Readiness and the Office of the Under Secretary of Defense for Research and Engineering to discuss processes and procedures for implementing new hiring authorities granted by Congress.
We evaluated their efforts to determine whether they met federal internal control standards, including that management should design appropriate types of control activities to achieve the entity’s objectives, including top-level reviews of actual performance, and should establish an organizational structure, assigning responsibilities and delegating authority to achieve an organization’s objectives. We conducted this performance audit from November 2016 to May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We analyzed three years of Department of Defense hiring data obtained from the Defense Civilian Personnel Data System to identify the defense laboratories’ use of hiring authorities. We found that the defense laboratories completed a total of 11,562 STEM hiring actions in fiscal years 2015 through 2017 and used the defense laboratory direct hire authorities the most often when hiring STEM personnel. Table 7 provides information on the laboratories’ use of hiring actions by hiring authority for fiscal years 2015, 2016, and 2017. Table 8 provides a breakdown of the individual labs’ use of hiring authorities in fiscal years 2015 through 2017. We also analyzed the three years of DOD hiring data to identify time to hire under various types of hiring authorities for STEM occupations at the defense laboratories.
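Summary statistics of this kind (frequency of actions plus the average, minimum, maximum, median, and 25th/75th percentiles of days to hire) can be computed directly from per-action day counts. The following is a minimal sketch using Python's standard library; the days-to-hire values are invented for illustration, not actual DOD data:

```python
import statistics

# Illustrative days-to-hire values for one hiring authority category.
# These numbers are invented for this sketch, not actual DOD data.
days_to_hire = [34, 52, 61, 75, 88, 97, 120, 143, 150, 210]

summary = {
    "actions": len(days_to_hire),
    "average": statistics.mean(days_to_hire),
    "minimum": min(days_to_hire),
    "maximum": max(days_to_hire),
    "median": statistics.median(days_to_hire),
}
# quantiles(n=4) returns the three quartile cut points; the first and
# third are the 25th and 75th percentiles.
q1, _, q3 = statistics.quantiles(days_to_hire, n=4, method="inclusive")
summary["p25"], summary["p75"] = q1, q3

print(summary)
```

Grouping actual records by hiring authority category and fiscal year before applying this computation would reproduce one row of such a table per group.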
Tables 9, 10, 11, and 12 below show the frequency of actions for each hiring authority category and the average, minimum, maximum, median, 25th percentile, and 75th percentile of the number of days to hire for each category in fiscal years 2015 through 2017 and for all three years combined. In addition to the contact named above, Vincent Balloon (Assistant Director), Isabel Band, Vincent Buquicchio, Joseph Cook, Charles Culverwell, Serena Epstein, Christopher Falcone, Robert Goldenkoff, Cynthia Grant, Chelsa Gurkin, Amie Lesser, Oliver Richard, Michael Silver, John Van Schaik, Jennifer Weber, and Cheryl Weissman made key contributions to this report.
DOD's defense labs help sustain, among other things, U.S. technological superiority and the delivery of technical capabilities to the warfighter. Over time, Congress has granted unique flexibilities—such as the ability to hire qualified candidates who meet certain criteria using direct hire authorities—to the defense labs to expedite the hiring process and facilitate efforts to compete with the private sector. Senate Report 114-255 included a provision for GAO to examine the labs' hiring structures and effective use of hiring authorities. This report examines (1) the defense labs' use of existing hiring authorities and officials' views on the benefits of authorities and challenges of hiring; (2) the extent to which DOD evaluates the effectiveness of hiring, including hiring authorities at the defense labs; and (3) the extent to which DOD has time frames for approving and implementing new hiring authorities. GAO analyzed DOD hiring policies and data; conducted a survey of 16 defense lab officials involved in policy-making; interviewed DOD and service officials; and conducted nongeneralizable interviews with groups of officials, supervisors, and new hires from 6 labs—2 from each of the 3 military services, selected based on the labs' mission. The Department of Defense's (DOD) laboratories (defense labs) have used the laboratory-specific direct hire authorities more than any other category of agency-specific or government-wide hiring authority for science, technology, engineering, and mathematics personnel. As shown below, in fiscal years 2015 through 2017 the labs hired 5,303 of 11,562 total personnel, or 46 percent, using these direct hire authorities. Lab officials, however, identified challenges to hiring highly qualified candidates, such as delays in processing security clearances, despite the use of hiring authorities such as direct hire. Source: GAO analysis of Department of Defense data. | GAO-18-417.
a Other includes all other defense laboratory-specific direct hiring authorities used.
b All other includes the remaining five categories of hiring authorities.
c Percentages may not sum to total due to rounding.
DOD and the defense labs track hiring data, but the Defense Laboratories Office (DLO) has not obtained or monitored these data or evaluated the effectiveness of the labs' hiring, including the use of hiring authorities. While existing lab data can be used to show the length of time of the hiring process, effectiveness is not currently evaluated. According to lab officials, timeliness data do not sufficiently inform about the effectiveness of the authorities and may not reflect a candidate's perception of the length of the hiring process. Further, the DLO has not developed performance measures to evaluate the effectiveness of hiring across the defense laboratories. Without routinely obtaining and monitoring hiring data and developing performance measures, DOD lacks reasonable assurance that the labs' hiring and use of hiring authorities—in particular, those granted by Congress to the labs—result in improved hiring outcomes. DOD does not have clear time frames for approving and implementing new hiring authorities. The defense labs were unable to use a direct hire authority granted by Congress in fiscal year 2015 because it took DOD 2½ years to publish a Federal Register notice—the process used to implement new hiring authorities for the labs—for that authority. DOD officials identified coordination issues associated with the process as the cause of the delay and stated that DOD is taking steps to improve coordination—including meeting to formalize roles and responsibilities for the offices and developing a new approval process—between offices responsible for oversight of the labs and personnel policy. However, DLO's new Federal Register approval process does not include time frames for specific stages of coordination.
Without clear time frames for its departmental coordination efforts related to the approval and implementation of new hiring authorities, officials cannot be certain they are taking action in a timely manner. GAO recommends that DOD (1) routinely obtain and monitor defense lab hiring data to improve oversight; (2) develop performance measures for evaluating the effectiveness of hiring; and (3) establish time frames to guide hiring authority approval and implementation. DOD concurred with the recommendations.
Agencies generally acquire equipment from commercial vendors and through GSA, which contracts for the equipment from commercial vendors. In acquiring heavy equipment from a commercial vendor or GSA, agencies can purchase or lease the equipment. Generally, agencies use the term “lease” to refer to acquisitions that are time-limited and therefore distinct from purchases; the term covers both long-term and short-term leases. For example, the three agencies we reviewed in depth use the term “rental” to refer to short-term leases of varying time periods. According to Air Force officials, they define rentals as leases that are less than 120 days, while FWS and NPS officials said they generally use the term rental to refer to leases that are a year or less. For the purposes of this report, we use the term “rental” to refer to short-term leases defined as rentals by the agency and “long-term lease” to refer to a lease that is not considered a rental by the agency. (See fig. 1.) In 2013, GSA began offering heavy equipment through its Short-Term Rental program, which had previously been limited to passenger vehicles, in part to eliminate ownership and maintenance costs for infrequently used heavy equipment. Under this program, agencies can request a short-term equipment rental (less than a year) from GSA, and GSA will work with a network of commercial vendors to provide the requested heavy equipment. Unlike for some other types of federal property, there are no central reporting requirements for agencies’ inventories of heavy equipment. However, each federal agency is required to maintain inventory controls for its property, which includes heavy equipment. Agencies maintain inventory data through the use of agency-specific databases, and each agency can set its own requirements for what data are required and how these data are maintained.
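The agency-specific rental definitions above can be written as a small classification rule. The sketch below assumes only the thresholds quoted in the text (under 120 days for the Air Force; a year or less for FWS and NPS); the function name and day-count interface are hypothetical, invented for illustration:

```python
# Thresholds taken from the agencies' descriptions: the Air Force treats
# leases of less than 120 days as rentals, while FWS and NPS generally
# treat leases of a year or less as rentals.
def classify_lease(agency: str, term_days: int) -> str:
    """Classify a lease term as a 'rental' or a 'long-term lease'."""
    if agency == "Air Force":
        return "rental" if term_days < 120 else "long-term lease"
    if agency in ("FWS", "NPS"):
        return "rental" if term_days <= 365 else "long-term lease"
    raise ValueError(f"no rental definition recorded for {agency}")

print(classify_lease("Air Force", 119))  # rental
print(classify_lease("FWS", 365))        # rental
```

Note that the same 6-month lease would be a "rental" at FWS or NPS but a "long-term lease" at the Air Force, which is why the report adopts each agency's own classification rather than a single threshold.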
For example, while an agency may choose to maintain data in a headquarters database, it could also choose to maintain data at the local level. As another example, an agency may decide to track and maintain data on the utilization of its heavy equipment (such as the hours used) or may choose not to have such data or require any particular utilization levels. The Federal Acquisition Regulation (FAR) governs the acquisition process of executive branch agencies when acquiring certain goods and services, including heavy equipment. Under the FAR, agencies should consider whether to lease equipment instead of purchasing it based on several factors. Specifically, the FAR provides that agency officials should evaluate cost and other factors by conducting a “lease-versus-purchase” analysis before acquiring heavy equipment. Additionally, DOD’s regulations require its component agencies to prepare a justification supporting lease-versus-purchase decisions if the equipment is to be leased for more than 60 days. Twenty agencies reported data on their owned heavy equipment, including the (1) number, (2) types, (3) acquisition year, and (4) location of agencies’ owned heavy equipment in their inventories as of June 2017. The 20 agencies reported owning over 136,000 heavy equipment items. DOD reported owning most of this heavy equipment—over 100,000 items, about 74 percent. (See app. I for more information on agencies’ ownership of these items.) The Department of Agriculture reported owning the second highest number of heavy equipment items—almost 9,000 items, about 6 percent. (See fig. 2.) Four agencies—the Nuclear Regulatory Commission, the Department of Housing and Urban Development, the Office of Personnel Management, and the Agency for International Development—reported owning five or fewer heavy equipment items each. 
The 20 agencies reported owning various types of heavy equipment, such as cranes, backhoes, and road maintenance equipment in five categories: (1) construction, mining, excavating, and highway maintenance equipment; (2) airfield-specialized trucks and trailers; (3) self-propelled warehouse trucks and tractors; (4) tractors; and (5) soil preparation and harvesting equipment. Thirty-eight percent (almost 52,000 items) were in the construction, mining, excavating, and highway maintenance category (see fig. 3). Fifteen of the 20 agencies reported owning at least some items in this category. Twenty-four percent (over 33,000 items) were in the airfield-specialized trucks and trailers category, generally used to service and reposition aircraft on runways. DOD reported owning 99 percent (over 32,000) of these items, while 9 other agencies, including the Department of Labor and the National Aeronautics and Space Administration, reported owning the other one percent (317 items). Twenty-two percent (over 29,000 items) were in the self-propelled warehouse trucks and tractors category, which includes equipment such as forklift trucks. All 20 agencies reported owning at least one item in this category, and five agencies—the Agency for International Development, Department of Housing and Urban Development, the Environmental Protection Agency, the Nuclear Regulatory Commission, and the Office of Personnel Management—reported owning only items in this category. (For additional information on agencies’ ownership of heavy equipment in various categories, see app. I.) The 20 agencies reported acquiring their owned heavy equipment between 1944 and 2017, with an average of about 13 years since acquisition (see fig. 4). One heavy equipment manager we interviewed reported that a dump truck can last 10 to 15 years, whereas other types of equipment can last for decades if regularly used and well-maintained.
The 20 agencies reported that over 117,000 heavy equipment items (86 percent) were located within the United States or its territories. Of these, about one-fifth (over 26,000) were located in California and Virginia, the two states with the most heavy equipment (see fig. 5). Of the equipment located outside of the United States and its territories, 94 percent was owned by the Department of Defense. The rest was owned by the Department of State (714 items in 141 countries from Afghanistan to Zimbabwe) and the National Science Foundation (237 items in areas such as Antarctica). The 20 agencies reported spending over $7.4 billion in 2016 dollars to acquire the heavy equipment they own (see table 1). However, actual spending was higher because this inflation-adjusted figure excludes over 37,000 heavy equipment items for which the agencies did not report acquisition cost or acquisition year, or both. Without this information, we could not determine the inflation-adjusted cost and therefore did not include the cost of these items in our calculation. The Army owns almost all of these items, having not reported acquisition cost or acquisition year, or both, for 36,589 heavy equipment items because, according to Army officials, the data were not available centrally but may have been available at individual Army units and would have been resource-intensive to obtain. The heavy equipment items reported by the 20 agencies ranged in acquisition cost from zero dollars to over $2 million in 2016 dollars, with an average acquisition cost in 2016 dollars of about $78,000, excluding assets with a reported acquisition cost of $0. Of the items that we adjusted to 2016 dollars and for which non-zero acquisition costs were provided, 94 percent cost less than $250,000 and accounted for 57 percent of the total adjusted acquisition costs, while 6 percent cost more than $250,000 and accounted for 43 percent of the adjusted acquisition costs. (See fig. 6.)
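The adjustment to 2016 dollars, and the exclusion of items missing an acquisition cost or year, can be sketched as follows. This is illustrative only: the deflator values and item records here are placeholders invented for the sketch, not the actual price indexes or data used in the analysis:

```python
# Hypothetical deflator index by acquisition year (2016 = 1.000). These
# values are placeholders for illustration, not the actual indexes used.
DEFLATOR = {1997: 0.655, 2009: 0.875, 2013: 0.951, 2016: 1.000}

def to_2016_dollars(cost, year):
    """Return cost in 2016 dollars, or None when cost or year is missing."""
    if cost is None or year is None or year not in DEFLATOR:
        return None  # excluded from the inflation-adjusted total
    return cost / DEFLATOR[year]

# Hypothetical item records; a None cost mirrors the unreported-data case.
items = [
    {"cost": 779_000, "year": 1997},
    {"cost": 1_400_000, "year": 2009},
    {"cost": None, "year": 2013},  # missing cost -> excluded
]
adjusted = [to_2016_dollars(i["cost"], i["year"]) for i in items]
total = sum(a for a in adjusted if a is not None)
```

The key point the sketch captures is that an item with a missing cost or year contributes nothing to the total, which is why the reported $7.4 billion understates actual spending.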
High-cost items included a $779,000 hydraulic crane acquired by the National Aeronautics and Space Administration in 1997 ($1.2 million in 2016 dollars), a $1.4 million ultra-deep drilling simulator acquired by the Department of Energy in 2009 ($1.6 million in 2016 dollars), and several $2.2 million well-drilling machines acquired by the Air Force in 2013 ($2.3 million in 2016 dollars). In calendar years 2012 through 2016, the Air Force, FWS, and NPS purchased almost 3,500 pieces of heavy equipment through GSA and private vendors at a total cost of about $360 million to support mission needs. (See table 2.) These agencies also spent over $5 million on long-term leases and rentals during this time period. The Air Force spent over $300 million to purchase over 2,600 heavy equipment assets in calendar years 2012 through 2016 that were used to support and maintain its bases globally. For example, according to Air Force officials, heavy equipment is often used to maintain runways and service and reposition aircraft on runways. While the majority of Air Force heavy equipment purchased in this time period is located in the United States, 41 percent of this heavy equipment is located outside the United States and its territories in 17 foreign countries to support global military bases. The Air Force could not provide complete information on its heavy equipment leases for fiscal years 2012 through 2016. Specifically, the Air Force provided data on 33 commercial heavy equipment leases that were ongoing as of August 2017 but could not provide cost data for these leases because this information is not tracked centrally. Additionally, the Air Force could not provide any data on leases that occurred previously because, according to Air Force officials, lease records are removed from the Air Force database upon termination of the lease.
Officials said that rentals are generally handled locally and obtaining complete data would require a data call to over 300 base contracting offices. Air Force officials stated that rentals are generally used in unique situations involving short-term needs such as responding to natural disasters. For example, following Hurricane Sandy, staff at Langley Air Force Base in Virginia used rental equipment to clean up and repair the base. Although the Air Force did not provide complete information on rentals, data we obtained from GSA’s Short-Term Rental program indicated that the Air Force rented heavy equipment in 46 transactions, not reflected in the Air Force data we received, totaling over $3.7 million since GSA began offering heavy equipment through the program in 2013. FWS spent over $32 million to purchase 348 heavy equipment assets in calendar years 2012 through 2016. FWS used its heavy equipment to maintain refuge areas throughout the United States and its territories, including maintaining roads and nature trails. FWS also used heavy equipment to respond to inclement weather and natural disasters. Most of the heavy equipment items purchased by FWS were in the construction, mining, excavating, and highway maintenance equipment category and include items such as excavators, which were used for moving soil, supplies, and other resources. FWS officials reported that they did not have any long-term leases for any heavy equipment in fiscal years 2012 through 2016 because they encourage equipment sharing and rentals to avoid long-term leases whenever possible. FWS officials provided data on 228 rentals for this time period with a total cost of over $1 million. Information regarding these rentals is contained in an Interior-wide property management system, the Financial Business Management System (FBMS).
FWS officials told us that they have not rented heavy equipment through GSA’s program because they have found lower prices through local equipment rental companies. NPS spent over $27 million to purchase 471 heavy equipment assets in calendar years 2012 through 2016. NPS uses heavy equipment—located throughout the United States and its territories—to maintain national parks and respond to inclement weather and natural disasters. For example, NPS used heavy equipment such as dump trucks, snow plows, road graders, and wheel loaders to clear and salt the George Washington Memorial Parkway in Washington, D.C., following snow and ice storms. Most of the heavy equipment items purchased by NPS were in the construction, mining, excavating, and highway maintenance equipment category and include items such as excavators, which are used for moving soil, supplies, and other resources. NPS reported spending about $360,000 on 230 long-term leases and rentals in fiscal years 2012 through 2016, not including rentals through GSA’s Short-Term Rental program. As with FWS, NPS leases and rentals are contained in FBMS, which is Interior’s property management system. Data we obtained from GSA’s Short-Term Rental program indicated that NPS rented heavy equipment in 26 transactions totaling over $200,000 since GSA began offering heavy equipment through the program in 2013, for a potential total cost of over $560,000 for these long-term leases and rentals.
As mentioned earlier, the FAR provides that executive branch agencies seeking to acquire equipment should consider whether it is more economical to lease equipment rather than purchase it and identifies factors agencies should consider in this analysis, such as estimated length of the period that the equipment is to be used, the extent of use in that time period, and maintenance costs. This analysis is commonly referred to as a lease-versus-purchase analysis. While the FAR does not specifically require that agencies document their lease-versus-purchase analyses, according to federal internal control standards, management should clearly document all transactions and other significant events in a manner that allows the documentation to be readily available for examination and also communicate quality information to enable staff to complete their responsibilities. As discussed below, we found that most acquisitions we reviewed from FWS, NPS, and the Air Force did not contain any documentation of a lease-versus-purchase analysis. Specifically, officials were unable to provide documentation of a lease-versus-purchase analysis for six of the eight acquisitions we reviewed. FWS officials were able to provide documentation for the other two. Officials told us that a lease-versus-purchase analysis was not conducted for five of the six acquisitions and did not know if such analysis was conducted for the other acquisition. According to agency officials, the main reason why analyses were not conducted or documented for these six acquisitions is that the circumstances in which such analyses were to be performed or documented were not always clear to FWS, NPS, and Air Force officials. In addition to the FAR, Interior has agency guidance stating that bureaus should conduct and document lease-versus-purchase analyses.
This July 2013 guidance—that FWS and NPS are to follow—states that requesters of equipment valued at $15,000 or greater should perform a lease-versus-purchase analysis when requesting heavy equipment. According to the guidance, this analysis should address criteria in the FAR and include a discussion of the financial and operating advantages of alternate approaches that would help contracting officials determine the final appropriate acquisition method. At the time the guidance was issued, Interior also provided a lease-versus-purchase analysis tool to aid officials in conducting this analysis. Additionally, in April 2016, Interior issued a policy to implement the July 2013 guidance. The 2016 policy clarifies that program offices are required to complete Interior’s lease-versus-purchase analysis tool and provide the completed analysis to the relevant contracting officer. Within Interior, bureaus are responsible for ensuring that procurement requirements are met, including the requirements and directives outlined in Interior’s 2013 guidance and 2016 policy on lease-versus-purchase analyses, according to agency officials. Within FWS, local procurement specialists prepare procurement requests and ensure that procurement requirements are met and that all viable options have been considered. Regional equipment managers review these procurement requests, decide whether to purchase or lease the requested equipment, and prepare the lease-versus-purchase analysis tool if the procurement specialist has indicated that it is required. Within NPS, local procurement specialists are responsible for ensuring that all procurements adhere to relevant requirements and directives, including documenting the lease-versus-purchase analysis.
Of the three FWS heavy equipment acquisitions we reviewed for which the 2013 Interior guidance was applicable, one included a completed lease-versus-purchase analysis tool; one documented the rationale for purchasing rather than leasing, although it did not include Interior’s lease-versus-purchase analysis tool; and one did not include any documentation related to a lease-versus-purchase analysis. (See table 3.) Regarding the acquisition for which no documentation of a lease-versus-purchase analysis was provided—a 12-month lease of an excavator and associated labor costs for over $19,000—FWS officials initially told us that a lease-versus-purchase analysis was not required because the equipment lease was less than $15,000, and Interior’s guidance required a lease-versus-purchase analysis for procurements of equipment valued at $15,000 or greater. However, we found the guidance did not specify whether the $15,000 threshold includes the cost of labor. We also found that Interior’s guidance did not specify if a lease-versus-purchase analysis was required if the total cost of a rental is less than the purchase price. FWS officials acknowledged that Interior guidance is not clear and that it would be helpful for Interior to clarify whether these leases require a lease-versus-purchase analysis. NPS officials were unable to provide documentation of a lease-versus-purchase analysis for the single heavy equipment acquisition we reviewed—the purchase of a wheeled tractor in 2015 for $43,177. According to these officials, they could not do so because of personnel turnover in the contracting office that would have documented the analysis. In addition, they told us that they believe that such analyses are not always completed for heavy equipment acquisitions because responsibility for completing these analyses is unclear.
Specifically, they told us that it was unclear whether the responsibility lies with the official requesting the equipment, the contracting personnel who facilitate the acquisition, or the property personnel who manage inventory data. However, when we discussed our findings with Interior and NPS officials, NPS officials were made aware of the 2016 Interior policy that specifically requires program offices—the officials requesting the equipment—to complete the lease-versus-purchase analysis and provide documentation of this analysis to the contracting officer. As a result, NPS officials told us at the end of our review that program office officials will now be required to complete the lease-versus-purchase analysis tool and document this analysis. According to Air Force officials responsible for managing heavy equipment, financial or budget personnel at individual bases are responsible for conducting lease-versus-purchase analyses, also called economic analyses, to support purchase and lease requests. Air Force fleet officials told us that they then review these requests from a fleet perspective, considering factors such as whether the cost information provided in the request is from a reputable source, expected maintenance costs, and whether a requesting base has the capability to maintain the requested equipment. However, they said they do not check to ensure that a lease-versus-purchase analysis was completed or review the analysis. Equipment rentals can be approved at individual bases. In our review of four Air Force heavy equipment acquisitions, we found no instances in which Air Force officials documented a lease-versus-purchase analysis (see table 4). For the acquisitions that we reviewed, Air Force officials told us they did not believe a lease-versus-purchase analysis was required because the new equipment was either replacing old equipment that was previously approved or could be deployed.
Accordingly, the Air Force purchased two forklifts in 2013 without conducting lease-versus-purchase analyses because the forklifts were replacing old forklifts that were authorized in 1997 and 2005. Furthermore, Air Force officials told us that both of these forklifts could be deployed and indicated that lease-versus-purchase analyses are not required for deployable equipment. However, the Air Force does not have guidance that describes the circumstances that require either a lease-versus-purchase analysis or documentation of the rationale for not completing such an analysis. Although we identified several instances in which officials in the three selected agencies did not document lease-versus-purchase analyses, officials from these agencies stated that they consider mission needs and equipment availability, among other factors, when making these decisions. For example, Air Force officials told us that, following Hurricane Sandy, staff at Langley Air Force Base in Virginia used rental equipment to clean up and repair the base because the equipment was needed immediately to ensure the base could meet its mission. Moreover, availability of heavy equipment for lease or rental, which can be affected by factors such as geography and competition for equipment, is a key consideration. For example, FWS officials told us that the specialized heavy equipment sometimes needed may not be available for long-term lease or rent in remote areas such as Alaska and the Midway Islands, so the agency purchases the equipment. In addition, some agency officials told us that they may purchase heavy equipment even if that equipment is needed only sporadically if there is likely to be high demand for rental equipment. For example, following inclement weather or a natural disaster, demand for certain heavy equipment rentals can be high and equipment may not be available to rent when it is needed.
While we recognize that mission needs and other factors are important considerations, without greater clarity regarding when to conduct or document lease-versus-purchase analyses, officials at FWS, NPS, and the Air Force may not be conducting such analyses when appropriate and may not always make the best acquisition decisions. These agencies could be overspending on leased equipment that would be more cost-effective if purchased or overspending to purchase equipment when it would be more cost-effective to lease or rent. Moreover, without documenting decisions on whether to purchase or lease equipment, they lack information that could be used to inform future acquisition decisions for similar types of equipment or projects. Air Force guidance requires that fleet managers collect utilization data for both vehicles and heavy equipment items, such as the number of hours used, miles traveled, and maintenance costs. The Air Force provided us with utilization data for over 18,000 heavy equipment items and uses such data to inform periodic base validations. Specifically, Air Force officials said that every 3 to 5 years, each Air Force base reviews its on-base equipment to ensure that the installation has the appropriate heavy equipment to complete its mission and reviews utilization data to identify items that are underutilized. If heavy equipment is considered underutilized, the equipment is relocated—either moved to another location or sent to the Defense Logistics Agency for reuse or transfer to another agency. According to Air Force officials, the Air Force has relocated over 700 heavy equipment items since 2014 based on the results of the validation process and other factors, such as replacing older items and agency needs. Similarly, FWS guidance for managing heavy equipment utilization sets forth minimum utilization hours for certain types of heavy equipment and describes requirements for reporting utilization data.
FWS provided us with utilization data on over 3,000 heavy equipment items. According to officials, condition assessments of heavy equipment are required by FWS guidance every 3 to 5 years. According to FWS officials, condition assessments inform regional-level decision making about whether to move equipment to another FWS location or dispose of the equipment. In contrast, NPS does not require the collection of utilization data to evaluate heavy equipment use and does not have guidance for managing heavy equipment utilization. However, NPS officials told us that they recognize the need for such guidance. NPS officials shared with us draft guidance that they have developed, which would require collection of utilization data for heavy equipment, such as hours or days of usage each month. According to NPS officials, they plan to send the guidance to the NPS policy office for final review in March 2018. Until this guidance is completed and published, NPS is taking interim actions to manage the utilization of its heavy equipment. For example, NPS officials stated that they have asked NPS locations to collect and post monthly utilization data, discussed the collection of utilization data at fleet meetings, and distributed job aids to support this effort. During the course of our review, NPS officials provided us with some utilization data for about 1,400 of the more than 2,400 NPS heavy equipment items. Specifically, of the 1,459 heavy equipment items for which NPS provided utilization data, 541 had utilization data for each month. For the remaining 918 items, utilization data were reported for some, but not all, months. The federal government has spent billions of dollars to acquire heavy equipment. There is no requirement that agencies report on the inventory of this equipment, as there is no standard definition of heavy equipment. 
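A tally like the one above—items with utilization data reported for every month versus only some months—can be sketched as follows. The function name and record layout are illustrative assumptions for this sketch, not NPS's actual data structure:

```python
from collections import defaultdict

def completeness_tally(records, months):
    """Count items with utilization data for every month in `months`
    versus items with data for only some months."""
    reported = defaultdict(set)
    for item_id, month in records:
        reported[item_id].add(month)
    complete = sum(1 for seen in reported.values() if seen >= set(months))
    partial = len(reported) - complete
    return complete, partial

# Toy example over three months: item "a" has data for every month,
# items "b" and "c" are each missing at least one month.
months = ["2017-01", "2017-02", "2017-03"]
records = [
    ("a", "2017-01"), ("a", "2017-02"), ("a", "2017-03"),
    ("b", "2017-01"), ("b", "2017-03"),
    ("c", "2017-02"),
]
print(completeness_tally(records, months))  # (1, 2)
```

Applied to reporting like NPS's, such a tally would yield the complete/partial split (e.g., 541 complete and 918 partial of 1,459 items).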
When deciding how to acquire this equipment, agencies should conduct a lease-versus-purchase analysis as provided in the FAR, which is a critical mechanism to ensure agencies are acquiring the equipment in the most cost-effective manner. Because FWS, NPS, and the Air Force were unclear about when such an analysis was required, they did not consistently conduct or document analyses of whether it was more economical to purchase or lease heavy equipment. In the absence of clarity on the circumstances in which lease-versus-purchase analyses for heavy equipment acquisitions are to be conducted and documented, the agencies may not be spending funds on heavy equipment cost-effectively. We are making two recommendations—one to the Air Force and one to the Department of the Interior. The Secretary of the Air Force should develop guidance to clarify the circumstances in which lease-versus-purchase analyses for heavy equipment acquisitions are to be conducted and documented. (Recommendation 1) The Secretary of the Interior should further clarify in guidance the circumstances in which lease-versus-purchase analyses for heavy equipment acquisitions are to be conducted and documented. (Recommendation 2) We provided a draft of this report to the Departments of Agriculture, Defense, Energy, Homeland Security, Housing and Urban Development, the Interior, Justice, Labor, State, and Veterans Affairs; General Services Administration; National Aeronautics and Space Administration; National Science Foundation; Nuclear Regulatory Commission; Office of Personnel Management; and U.S. Agency for International Development. The Departments of Agriculture, Energy, Homeland Security, Housing and Urban Development, Justice, State, and Veterans Affairs, as well as the General Services Administration, National Aeronautics and Space Administration, National Science Foundation, Nuclear Regulatory Commission, Office of Personnel Management, and U.S. 
Agency for International Development did not have comments. The Department of Labor provided technical comments, which we incorporated as appropriate. In written comments, reproduced in appendix III, the Department of Defense stated that it concurred with our recommendation and plans to issue a bulletin to Air Force contracting officials. In written comments, reproduced in appendix IV, the Department of the Interior stated that it concurred with our recommendation and plans to implement it. If you or members of your staff have any questions about this report, please contact me at (202) 512-2834 or RectanusL@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix V. [Appendix tables of heavy equipment counts by agency and federal supply class (including categories such as Specialized Trucks and Trailers and Self-Propelled Warehouse Trucks and Tractors) are not reproduced here.] This report addresses: (1) the number, type, and cost of heavy equipment items that are owned by the 24 CFO Act agencies; (2) the heavy equipment items selected agencies have recently acquired and how selected agencies decided to purchase or lease this equipment; and (3) how selected agencies manage the utilization of their heavy equipment. 
To identify the number, type, and cost of heavy equipment owned by federal agencies, we first interviewed officials at the General Services Administration to determine whether there were government-wide reporting requirements for owned heavy equipment and learned that there are no such requirements. We then obtained and analyzed data on agencies’ spending on equipment purchases and leases from the Federal Procurement Data System–Next Generation (FPDS-NG), which contains government-wide data on agencies’ contracts. However, in reviewing the data available and identifying issues with the reliability of the data, we determined that data on contracts would not be sufficient to answer the question of what heavy equipment the 24 CFO Act agencies own. We therefore conducted a data collection effort to obtain heavy equipment inventory information from the 24 CFO Act agencies, which are the Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, the Interior, Justice, Labor, State, Transportation, the Treasury, and Veterans Affairs; Environmental Protection Agency; General Services Administration; National Aeronautics and Space Administration; National Science Foundation; Nuclear Regulatory Commission; Office of Personnel Management; Small Business Administration; Social Security Administration; and Agency for International Development. Because there is no generally accepted definition of heavy equipment, we identified 12 federal supply classes in which the majority of items are self-propelled equipment but not passenger vehicles or items that are specific to combat and tactical purposes, as these items are generally not considered to be heavy equipment. (See table 5.) 
We then vetted the appropriateness of these selected supply classes with Interior, FWS, NPS, and Air Force agency officials, as well as with representatives from a fleet management consultancy and a rental company, and they generally agreed that items in the selected federal supply classes are considered heavy equipment. Federal supply classes are used in FPDS-NG and are widely used in agencies’ inventory systems. Overall, about 90 percent of the heavy equipment items that agencies reported were assigned a federal supply class in the agency’s inventory data. In discussing heavy equipment categories in the report, we use the category titles below. To identify points of contact at the 24 CFO Act agencies, we obtained GSA’s list of contact information for agencies’ national utilization officers, who are agency property officers who coordinate with GSA. As a preliminary step, we contacted these individuals at each of the 24 CFO Act agencies and asked them to either confirm that they were the appropriate contacts or provide contact information for the appropriate contacts, and to inform us if their agency did not own heavy equipment. Officials at 4 agencies—Department of Education, Department of the Treasury, General Services Administration, and Small Business Administration—indicated that the agency did not own any items in the relevant federal supply classes. Officials at 16 of these agencies indicated that they would be able to respond on a departmental level because the relevant inventory data are maintained centrally, while officials at 4 agencies indicated that we would need to obtain responses from officials at some other level because the relevant inventory data are not maintained centrally. (See table 7 for a list of organizations within the 20 CFO Act agencies that indicated they own relevant equipment and responded to our data collection effort.) 
After identifying contacts responsible for agencies’ heavy-equipment inventory data, we prepared data collection instruments for requesting information on heavy equipment and tested these documents with representatives from 4 of the 20 CFO Act agencies that indicated they own heavy equipment to ensure that the documents were clear and logical and that respondents would be able to provide the requested data and answer the questions without undue burden. These agency representatives were selected to provide a variety of spending on federal supply group 38 equipment as reported in FPDS-NG, civilian and military agencies, and different levels at which the agency would be responding to the data collection effort (e.g., at the departmental level or at a sub-departmental level). Our data collection instrument requested data on respondent organizations’ owned assets in 12 federal supply classes as of June 2017. Respondents provided data on original acquisition costs in nominal terms, with some acquisitions occurring over 50 years ago. In order to provide a fixed point of reference for appropriate comparison, we present in our report inflation-adjusted acquisition costs using calendar year 2016 as the reference. To adjust these dollar amounts for inflation, we used the Bureau of Labor Statistics’ Producer Price Index by Commodity for Machinery and Equipment: Construction Machinery and Equipment (WPU112), compiled by the Federal Reserve Bank of St. Louis. We conducted the data collection effort from July 2017 through October 2017 and received responses from all 20 agencies that indicated they own heavy equipment. In order to assess the reliability of agencies’ reported data, we collected and reviewed agencies’ responses regarding descriptions of their inventory systems, frequency of data entry, agency uses of the data, and agencies’ opinions on potential limitations of the use of their data in our analysis. 
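The inflation adjustment described above amounts to scaling a nominal cost by the ratio of the reference-year index value to the acquisition-year index value. A minimal sketch, using made-up placeholder index values rather than actual WPU112 figures:

```python
# Illustrative (made-up) index values; the report used BLS's Producer Price
# Index for Construction Machinery and Equipment (WPU112).
PPI = {2005: 160.0, 2012: 205.0, 2016: 221.0}

def to_2016_dollars(nominal_cost, acquisition_year, index=PPI):
    """Restate a nominal acquisition cost in 2016 dollars by scaling with
    the ratio of the 2016 index value to the acquisition-year value."""
    return nominal_cost * index[2016] / index[acquisition_year]

# A $100,000 purchase in 2005 becomes $138,125 in 2016 dollars under
# these placeholder index values.
print(round(to_2016_dollars(100_000, 2005)))  # 138125
```

In practice, one would apply annual (or monthly) index values matched to each item's acquisition date.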
We conducted some data cleaning, which included examining the data for obvious errors and eliminating outliers. We did not verify the data or responses received; the results of our data collection effort are used only for descriptive purposes and are not generalizable beyond the 24 CFO Act agencies. Based on the steps we took, we found these data to be sufficiently reliable for our purposes. To determine the heavy equipment items that selected agencies recently acquired and how these agencies decided whether to purchase or lease this equipment, we first used data from the FPDS-NG to identify agencies that appeared to have the highest obligations for construction or heavy equipment, or both, and used this information, along with other factors, to select DOD and Interior. At the time, in the absence of a generally accepted definition of heavy equipment, we reviewed data related to federal supply group 38—construction, mining, excavating, and highway maintenance equipment—because (1) we had not yet defined heavy equipment for the purposes of our review; (2) agency officials had told us that most of what could be considered heavy equipment was in this federal supply group; and (3) our analysis of data from usaspending.gov showed that about 80 percent of spending on items that may be considered heavy equipment were in this federal supply group. In meeting with officials at these departments, we learned that agencies within each department manage heavy equipment independently, so we requested current inventory data for Interior bureaus and the DOD military departments and selected three agencies that had among the largest inventories of construction and/or heavy equipment at the time, among other criteria: the U.S. Air Force (Air Force); the Fish and Wildlife Service (FWS); and the National Park Service (NPS). 
We then used information from our data collection effort—which included the number, type, cost, acquisition year, and other data elements—to determine the heavy equipment items that these agencies acquired during 2012 through 2016. We interviewed agency officials to determine what lease data were available from the three selected agencies. We assessed the reliability of these data through interviews with agency officials and reviewed the data for completeness and potential outliers. We determined that the data provided were sufficiently reliable for the purposes of documenting leased and rented heavy equipment. We also obtained data from GSA’s Short-Term Rental program, which had previously been limited to passenger vehicles, for August 2012, when the first item was rented under this program, through February 2017, when GSA provided the data. We used these data to identify selected agencies’ rentals of heavy equipment through the program and the associated costs. We interviewed officials from GSA’s Short-Term Rental program to discuss the program’s history as well as the reliability of their data on these rented heavy equipment items. To determine how the three selected agencies decide whether to purchase or lease heavy equipment, we interviewed fleet and property managers at these selected agencies and asked them to describe their process for making these decisions as well as to identify relevant federal and agency regulations and guidance. 
We reviewed relevant federal and agency regulations and guidance regarding how agencies should make these decisions, including: the Federal Acquisition Regulation; Office of Management and Budget Circular A-94, Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs; the Defense Federal Acquisition Regulation Supplement; Air Force Manual 65-506; the Air Force Guidance Memorandum to Air Force Instruction 65-501; and Interior’s Guidance On Lease Versus Purchase Analysis and Capital Lease Determination for Equipment Leases. We also reviewed the Standards for Internal Control in the Federal Government for guidance on documentation as well as past GAO work that reviewed agencies’ lease-versus-purchase analyses. To determine whether the three selected federal agencies documented lease-versus-purchase decisions for selected acquisitions and adhered to relevant agency guidance, we selected and reviewed a non-generalizable sample of 10 heavy equipment acquisitions—two purchases each from the Air Force, FWS, and NPS, and two leases each from the Air Force and FWS. Specifically, we used inventory data obtained through our data collection effort, described above, to randomly select two heavy equipment purchases from each selected agency using the following criteria: calendar years 2012 through 2016; the two federal supply classes most prevalent in each selected agency’s heavy equipment inventory, as determined by the data collection effort described above; and, for NPS and FWS, acquisition costs of over $15,000. In addition, we used lease data provided by the Air Force and FWS to randomly select two heavy equipment leases per agency. Because NPS could not provide data on heavy equipment leases, we did not select or review any NPS lease decisions. 
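A random selection under criteria like those above (acquisition-year window, most-prevalent supply classes, cost threshold) could be sketched as follows. The record layout, supply-class codes, and function name are illustrative assumptions, not the agencies' actual data:

```python
import random

# Illustrative inventory records; field names and supply-class codes are
# assumptions for this sketch, not the agencies' actual data elements.
inventory = [
    {"id": i, "year": 2010 + (i % 8), "supply_class": sc, "cost": 12_000 + 900 * i}
    for i, sc in enumerate(["3930", "3805"] * 20)
]

def select_purchases(records, years, top_classes, min_cost, k=2, seed=0):
    """Randomly pick k purchases meeting the year, supply-class, and
    cost-threshold criteria."""
    eligible = [
        r for r in records
        if r["year"] in years
        and r["supply_class"] in top_classes
        and r["cost"] > min_cost
    ]
    return random.Random(seed).sample(eligible, k)

sample = select_purchases(inventory, range(2012, 2017), {"3930", "3805"}, 15_000)
print([s["id"] for s in sample])
```

Seeding the random generator makes the draw reproducible, which matters when a sample must be documented and revisited later.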
To select the Air Force and FWS leases, we used the following criteria: fiscal years 2012 through 2016; the two most prevalent federal supply classes (taken from the lease data for the Air Force, whose lease data included federal supply classes, and from the purchase data for FWS, whose lease data did not); and, for FWS, leases over $15,000. After selecting these acquisitions, we determined that one FWS lease and one NPS purchase we selected pre-dated Interior’s 2013 guidance on lease-versus-purchase analysis and excluded these acquisitions from our analysis, for a total of eight acquisitions. In reviewing agencies’ documentation related to these acquisitions, we developed a data collection instrument to assess the extent to which agencies documented lease-versus-purchase analyses and, in the case of FWS and NPS, adhered to relevant Interior guidance. We supplemented our review of these acquisition decisions by interviewing officials at the three selected agencies and requesting additional information to understand the specific circumstances surrounding each procurement. Our findings are not generalizable across the federal government or within each selected department. To determine how selected agencies manage heavy equipment utilization, we interviewed officials at the three selected agencies to identify departmental and agency-specific guidance and policies and to determine whether utilization requirements exist. We reviewed guidance identified by these officials, including Interior and Air Force vehicle guidance, both of which apply to heavy equipment, and FWS’s Heavy Equipment Utilization and Replacement Handbook. We also compared their practices to relevant Standards for Internal Control in the Federal Government. 
For the selected agencies with guidance for managing heavy equipment—the Air Force and FWS—we reviewed the guidance to determine if and how these agencies measured and documented heavy equipment utilization. For example, we reviewed whether selected agencies developed reports for managing heavy equipment utilization, such as Air Force validation reports and FWS condition assessment reports. We also reviewed Air Force, FWS, and NPS utilization data for heavy equipment, but we did not independently calculate or verify the utilization rate for individual heavy equipment items because each heavy equipment item (backhoe, forklift, tractor, etc.) has different utilization requirements depending on various factors such as the brand, model, or age of the equipment. However, we did request information about agency procedures to develop and verify utilization rates. We assessed the reliability of the utilization data through interviews with agency officials and a review of the data for completeness and potential outliers. We determined that the data were sufficiently reliable for the purposes of providing evidence of utilization data collection for heavy equipment assets. We also visited the NPS George Washington Memorial Parkway to interview equipment maintenance officials regarding the procurement and management of heavy equipment and to photograph heavy equipment. We selected this site because of its range of heavy equipment and its close proximity to the capital region. We conducted this performance audit from October 2016 to February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, John W. 
Shumann (Assistant Director), Rebecca Rygg (Analyst in Charge), Nelsie Alcoser, Melissa Bodeau, Terence Lam, Ying Long, Josh Ormond, Kelly Rubin, Crystal Wesco, and Elizabeth Wood made key contributions to this report.
|
Federal agencies use heavy equipment such as cranes and forklifts to carry out their missions, but there are no government-wide data on federal agencies' acquisition or management of this equipment. GAO was asked to review federal agencies' management of heavy equipment. This report, among other objectives, examines: (1) the number, type, and costs of heavy equipment items that are owned by 20 federal agencies and (2) the heavy equipment that selected agencies recently acquired as well as how they decided whether to purchase or lease this equipment. GAO collected heavy equipment inventory data as of June 2017 from the 24 agencies that have chief financial officers responsible for overseeing financial management. GAO also selected three agencies (using factors such as the heavy equipment fleet's size) and reviewed their acquisitions of and guidance on heavy equipment. These agencies' practices are not generalizable to all acquisitions but provide insight into what efforts these agencies take to acquire thousands of heavy equipment items. GAO also interviewed officials at the three selected agencies. Of the 24 agencies GAO reviewed, 20 reported owning over 136,000 heavy equipment items such as cranes, backhoes, and forklifts, and spending over $7.4 billion (in 2016 dollars) to acquire this equipment. The remaining 4 agencies reported that they do not own any heavy equipment. The three selected agencies GAO reviewed in depth—the Air Force within the Department of Defense (DOD), and the Fish and Wildlife Service and the National Park Service within the Department of the Interior (Interior)—spent about $360 million to purchase about 3,500 heavy equipment assets in calendar years 2012 through 2016 and over $5 million to lease heavy equipment from fiscal years 2012 through 2016. Officials from all three agencies stated that they consider mission needs and the availability of equipment leases when deciding whether to lease or purchase heavy equipment. 
Federal regulations provide that agencies should consider whether it is more economical to lease or purchase equipment when acquiring heavy equipment, and federal internal control standards require that management clearly document all transactions in a manner that allows the documentation to be readily available for examination. However, in reviewing selected leases and purchases of heavy equipment from these three agencies, GAO found that officials did not consistently conduct or document lease-versus-purchase analyses. Officials at the Air Force and Interior said that there was a lack of clarity in agency policies about when they were required to conduct and document such analyses. Without greater clarity on when lease-versus-purchase analyses should be conducted and documented, these agencies may not be spending funds on heavy equipment cost-effectively. The Department of the Interior and the Air Force should clarify the circumstances in which lease-versus-purchase analyses for heavy equipment acquisitions are to be conducted and documented. The Departments of the Interior and Defense concurred with these recommendations.
|
In the U.S. commercial airline industry, passengers travel by air on network, low-cost, and regional airlines. With thousands of employees and hundreds of aircraft, network airlines support large, complex hub-and-spoke operations, which provide service at various fare levels to many destinations. Low-cost airlines generally operate less costly point-to-point service using fewer types of aircraft. Regional airlines typically operate small aircraft—turboprops or regional jets with up to 100 seats—and generally provide service to smaller communities on behalf of network airlines. The U.S. airline industry’s financial health has improved greatly in recent years due in part to increased demand for air travel as a result of the improved economy, industry reorganization, and changes in business practices. Starting in 2007, airlines faced a number of major challenges, including volatile fuel prices, the financial crisis, and the ensuing recession of 2007–2009. These events led to a wave of domestic airline bankruptcies, five airline mergers, and changes in airlines’ business practices. In all, these circumstances—such as the improved economy and new airline business practices—contributed to record-level profits for airlines. For example, in 2017, U.S. airlines reported an after-tax net profit of $13.4 billion for domestic operations, according to DOT data. As the industry recovered from the recession and passenger traffic began to rebound, airlines began to exercise “capacity restraint” by carefully controlling the number of seats on flights to achieve higher load factors in order to control costs and improve profitability. Because capacity restraint may result in fewer empty seats on many flights, this practice also limits airlines’ ability to rebook passengers if a flight is delayed or cancelled. Airlines have also made changes in their ticket pricing. 
For example, airlines now generally “unbundle” optional services from the base ticket price and charge ancillary fees for those services. Unbundling may result in passengers paying for services that were previously included in the price of the ticket. Additionally, certain aspects of customer service quality are tied to the class of ticket passengers purchase. For example, purchasing a “basic economy” ticket may include restrictions, such as not allowing passengers to select seats or charging for carry-on bags, that would not apply to a higher-priced ticket class. Similarly, the quality of seating varies based on the ticket class purchased—even within the main cabin of the aircraft. Moreover, while the recent airline mergers have resulted in some new service options for passengers in certain markets, they have also reduced consumers’ choice of airlines on some routes and can result in higher ticket prices. At the same time, low-cost airlines provide greater competition in the markets they serve, which may help to keep prices in check. Many factors—from booking a flight through collecting checked baggage—may contribute to passengers’ level of satisfaction with an airline’s service, according to an airline industry association and market research organizations (see fig. 1). For example, one industry survey found that passengers most valued affordable airfare, convenient flight schedules, and reliable on-time departures and arrivals. DOT’s regulatory activities include issuing consumer protection regulations. Specifically, DOT may issue or amend consumer protection regulations under its statutory authority to prohibit unfair or deceptive practices, or unfair methods of competition by airlines, among others. As mentioned previously, under this authority DOT has promulgated various regulations to enhance airline consumer protections since 2009 (see table 1). 
When regulations are promulgated, agency officials must determine how to promote compliance and deter noncompliance. Agencies charged with promoting regulatory compliance, including DOT, usually adopt a program that consists of two types of activities: those that encourage compliance and those that enforce the regulations. Compliance assistance helps regulated entities, such as U.S. airlines, understand and meet regulatory requirements, whereas activities such as monitoring, enforcement, and data reporting deter noncompliance and ensure that entities follow requirements. Agencies choose a mix of compliance activities that will achieve their desired regulatory outcome. DOT promotes airlines’ compliance with consumer protection requirements through a number of activities, and it educates passengers on their rights. For example, DOT has the authority to investigate whether an airline has been, or is, engaged in an unfair or deceptive practice or an unfair method of competition in air transportation or the sale of air transportation. If DOT finds that an airline has violated consumer protection requirements, DOT may take enforcement action against the airline by, for example, assessing civil penalties. In addition to promoting airlines’ compliance with consumer protection requirements, DOT also conducts activities aimed at educating passengers about their rights and the services provided by airlines. For example, DOT has an aviation consumer protection website where it highlights passengers’ rights and describes how to file complaints with DOT, in addition to other consumer resources. Within DOT’s Office of the Secretary (OST), the Office of the Assistant General Counsel for Aviation Enforcement and Proceedings and its Aviation Consumer Protection Division are responsible for these efforts. According to DOT officials, the annual appropriation to OST’s Office of the General Counsel provides funding for DOT’s consumer protection activities, among other things. 
At the end of fiscal year 2017, DOT employed 38 staff—including 18 attorneys and 15 analysts—to conduct these activities, according to DOT officials. DOT’s data, which include both operational measures of airline service and passenger complaints received by DOT, provide mixed information on whether service improved from 2008 through 2017. DOT requires reporting airlines to provide operational data, including information on late, cancelled, or diverted flights; mishandled baggage; and denied boardings. These data showed some general improvement in the quality of airline service from 2008 through 2017. However, during the same time period, the total number of passenger complaints filed with DOT increased for “selected” airlines. Moreover, while these data may be imperfect measures of service quality, they do provide some indication of the passenger experience. DOT publishes data on both operational performance and passengers’ complaints in its monthly Air Travel Consumer Report to inform the public about the quality of services provided by airlines. Representatives from all 11 selected airlines highlighted actions they took to enhance passenger service since 2013, including in some of the areas discussed above. While customer service is important for airlines, these actions can also be motivated in part by other factors—including compliance with certain consumer protection requirements or DOT consent orders, or competition with other airlines. For example, one airline developed a wheelchair tracking system in response to DOT enforcement, which also contributed to the airline’s goal to improve its services to passengers with disabilities. Additional examples of service improvements are listed below. On-time performance. Representatives we interviewed from almost all selected airlines (10 of 11) reported taking actions intended to improve on-time performance or mitigate challenges associated with flight delays and cancellations. 
These actions varied across airlines from those intended to improve operational performance to those intended to improve the comfort of passengers. For example, one airline began tracking flights that were “at-risk” of meeting DOT’s definition of a chronically delayed flight, so it could, among other things, swap crews or substitute aircraft and avoid these types of delays. According to DOT regulations, airlines with a chronically delayed flight for more than four consecutive one-month periods are engaging in a form of unrealistic scheduling, which is an unfair or deceptive practice and an unfair method of competition. Airlines have also used technology, such as text-messaging updates, to communicate with passengers during delays and cancellations (8 of 9); increased the number of situations where passengers are compensated during delays and cancellations (5 of 9); and empowered customer service agents to provide food, beverages, and entertainment to passengers during flight delays (1 of 9). For example, one airline e-mails all passengers that experience long delays with an apology and voucher for future travel, regardless of whether the delay was within the airline’s control. While DOT has some requirements for airlines on delays and cancellations, such as on tarmac delays and chronically delayed flights, it generally does not require airlines to compensate passengers for delays. Baggage handling. Representatives we interviewed from almost all network and low-cost airlines (8 of 9) reported investing resources in order to improve baggage-handling efforts and minimize the effects on passengers whose bags are lost or delayed. Among other things, airlines upgraded baggage technology (5 of 9); modernized the claims process, so passengers could complete forms on-line (3 of 9); and instituted replacement baggage programs, where passengers get a replacement bag at the airport (2 of 9). 
For example, one airline invested several million dollars in radio frequency identification (RFID) technology to track bags and to allow passengers to track their baggage via an application on their smartphone. Another airline introduced a policy to use FedEx to deliver delayed bags if the airline cannot return them within 24 hours. Since 2011, DOT has required certain airlines to make every reasonable effort to return mishandled baggage within 24 hours. Quality of interaction with airline staff. Representatives we interviewed from almost all selected airlines (10 of 11) reported improving training programs in an attempt to enhance interactions between airline staff and passengers. For example, one airline worked with the Disney Institute to provide training to staff on relating to guests during travel disruptions and de-escalating conflict. While airlines have increased customer service training, representatives from one industry association said that the training would be more beneficial if it were provided on a more regular basis. Two airlines also expanded their customer service departments’ hours to better match when passengers travel. According to DOT officials, airlines are not required to provide customer service training to staff. Passengers with disabilities. Representatives we interviewed from almost all network and low-cost airlines (8 of 9) reported taking actions intended to improve services for passengers with disabilities. These actions included programs to replace damaged or misplaced wheelchairs or other assistive devices (3 of 9); improving seating and access to lavatories in the aircraft (1 of 9); and using RFID technology to track wheelchairs (1 of 9). For example, representatives from one airline told us they have retrofitted their larger single-aisle aircraft lavatories to be wheelchair accessible. Two airlines also reported changing policies pertaining to emotional support animals.
For example, one airline has an online registration for emotional support animals where passengers must submit all documentation at least 48 hours in advance of the flight; according to representatives, the process allows the airline to validate the required paperwork, while providing relevant information to passengers with emotional support animals and ensuring the safety of everyone onboard the aircraft. Involuntary denied boardings. Representatives we interviewed from network and low-cost airlines (9) reported taking steps to reduce or eliminate involuntary denied boardings. Representatives from three airlines said they have reduced or stopped overbooking flights, and other representatives (5 of 9) said their airlines have begun soliciting volunteers to be “bumped” off a flight (i.e., give up their seat) earlier in the process. Two conduct reverse auctions where they ask passengers what compensation they would accept to take an alternative flight. Airlines are also offering additional incentives to encourage passengers to voluntarily switch to flights with available seats (5 of 9)—including travel vouchers with fewer restrictions or that cover ancillary fees, gift cards for Amazon and other retailers, or large travel credits of up to $10,000. DOT promotes and monitors airlines’ compliance with consumer protection requirements and deters noncompliance in five key ways, such as by reviewing passenger complaint data and taking enforcement action where it identifies violations. However, we found that DOT could improve its procedures to provide additional assurances that analysts consistently code passengers’ complaints and properly identify potential consumer protection violations, in addition to more fully utilizing data from DOT’s information systems to inform its compliance program. Further, while DOT has objectives for each of its five key compliance activities, it lacks performance measures for three of these objectives. 
As a result, DOT is limited in its ability to assess progress toward achieving its goal of promoting airlines’ compliance with consumer protection requirements or to identify and make any needed improvements. DOT conducts five key activities to help airlines understand and comply with consumer protection requirements: (1) providing compliance assistance to airlines, (2) processing complaints from passengers, (3) conducting compliance inspections of airlines at headquarters and airports, (4) conducting airline investigations, and (5) enforcing airlines’ compliance with consumer protection requirements. Collectively, these key compliance activities are intended to help airlines understand and meet consumer protection requirements and deter noncompliance. Providing compliance information to airlines. DOT attorneys assist airlines in meeting consumer protection requirements by developing guidance materials and responding to questions. DOT publishes these materials—such as topic-specific webpages and frequently asked questions—on its website. Attorneys and analysts also informally respond to questions or requests for information from airline representatives. Processing complaints from passengers. As previously stated, passengers may file complaints with DOT via its website, by mail, or through DOT’s telephone hotline. DOT analysts use a web application—the Consumer Complaints Application system—to receive, code, and track passenger complaints. In 2017, DOT’s 15 analysts processed about 18,000 air travel-related complaints. Initial processing involves reviewing the information in the complaint, notifying complainants that their complaint was received, and transmitting the complaint to the relevant airline for action. 
Analysts assign one of 15 high-level complaint category codes (e.g., “advertising” or “discrimination”) to each complaint as well as more specific lower-level complaint codes and codes indicating a potential violation of consumer protection requirements as necessary. Analysts initially code a complaint based on the passenger’s perception of events and not on an assessment of whether the complaint is a potential violation of consumer protections. According to DOT officials, when initially coding passenger complaints, analysts generally use their judgment to code each passenger’s complaint based on the primary issue. While analysts handle a variety of complaints, DOT may designate specific analysts to handle more complex complaint codes, such as disability complaints. On a monthly basis, DOT provides airlines the opportunity to review the complaints received and the agency’s categorization of each complaint. At that time, airlines have an opportunity to challenge DOT’s categorizations. According to DOT officials, a limited number of complaints are recoded as a result of this process. Conducting compliance inspections of airlines at headquarters and airports. DOT analysts and attorneys inspect airlines at airline headquarters and airports to assess their compliance with consumer protection requirements. From 2008 through 2016, analysts and attorneys conducted compliance inspections of airlines at the airlines’ headquarters, but DOT has not conducted any such inspections since September 2016. Beginning in 2015, DOT initiated compliance inspections of airlines at airports, and DOT continued to conduct these inspections through 2018. According to DOT officials, they have exclusively conducted on-site inspections of airlines at airports in recent years due, in part, to limited resources and budget unpredictability. However, officials stated that they would consider conducting more inspections of airlines at airline headquarters in the future. 
Inspections of airlines at airlines’ headquarters examine customer service policies and passenger complaints received directly by airlines, among other things. According to DOT officials, these inspections represent a “deep dive” into an airline’s relevant policies and involve collecting and analyzing data prior to and after their weeklong visit, as well as interviewing corporate personnel. DOT analysts and attorneys use the agency’s inspection checklist to assess compliance with a variety of regulated areas such as the inclusion of certain information on the airline’s website and the proper reporting of data to DOT (e.g., mishandled baggage and on-time performance data). According to DOT data, between 2008 and 2016, DOT completed inspections at 33 U.S. airlines’ headquarters. Of these 33 inspections, 23 identified systemic violations that resulted in consent orders, two resulted in warning letters, and eight did not identify any systemic violations. The assessed penalty amounts for these inspections ranged from $40,000 to $1,200,000. Inspections of airlines at airports examine staff’s knowledge of certain consumer protection requirements and the availability and accuracy of signage and documentation. Such inspections provide DOT the opportunity to examine multiple airlines in one visit. According to DOT officials, during these unannounced inspections, attorneys and analysts focus on assessing compliance through observation and interviews with randomly selected airline employees. For example, analysts and attorneys may confirm the availability of information on compensation for denied boarding from an airline gate agent or review an airline’s required signage on compensation for mishandled baggage to determine whether the information is accurate. According to DOT data, DOT inspected 12 to 14 U.S. airlines annually—most multiple times—at 51 domestic airports from 2015 through 2017.
In 2017, DOT conducted inspections at 18 domestic airports that included inspecting 12 U.S. airlines multiple times. In total, from 2015 through 2017, DOT found violations of various consumer protection requirements for 13 airlines that DOT addressed through warning letters. In addition, DOT found violations related to incorrect (e.g., out-of-date) or missing notices regarding baggage liability limits or oversales compensation for 8 airlines that were settled by consent orders with penalties between $35,000 and $50,000. Conducting airline investigations. According to DOT officials, attorneys determine whether to open an investigation by weighing numerous factors, including whether they believe an airline is systematically violating consumer protection requirements. Attorneys may initiate an investigation based on findings from trends in passenger complaints, compliance inspections, monitoring of airline websites and news media, or information supplied by other entities, including other DOT offices or governmental agencies. According to DOT officials, after gathering preliminary information, an attorney may notify the airline of his or her investigation, request information for further analysis, and then determine whether a violation has occurred and which enforcement action, if any, is appropriate. Attorneys document these investigations using DOT’s case management system. From 2008 to 2017, DOT initiated almost 2,500 investigations as shown in table 2 below. Enforcing airlines’ compliance with consumer protection requirements. When investigations result in a determination that a violation occurred, DOT may pursue enforcement action against the airline by, for example: (1) seeking corrective actions through warning letters; (2) entering into consent orders (which may include fines); or (3) commencing a legal action (see table 2).
According to DOT officials, attorneys consider a number of factors in determining the appropriate enforcement action, including whether there is evidence of recidivism or systemic misconduct, and the number of passengers affected. According to DOT data, most investigations result in administrative closures and findings of no violation. According to DOT officials, when attorneys decide to issue a consent order, they work with their managers to arrive at an initial civil penalty level and then negotiate with the airline to arrive at a final settlement agreement and civil penalty amount, if applicable. DOT has criteria for setting civil penalties, but officials describe the process as “more art than science” because facts and circumstances always vary. Civil penalties assessed in consent orders often include three parts: mandatory penalties, credits, and potential future penalties (see table 3). A mandatory penalty is the portion of the assessed penalty that must be paid immediately or in installments over a specified period of time. A credit is the portion of the assessed penalty that DOT allows an airline to not pay in order to give the airline credit for spending funds on passenger compensation or toward specific service improvements, both of which must be above and beyond what is required by existing requirements. A potential future penalty is the portion of the assessed penalty that the airline will pay if DOT determines that the airline violated certain requirements during a specified period of time. Our review of 76 consent orders for our 12 selected airlines where a penalty was assessed found that DOT issued penalties totaling $17,967,000 from 2008 through 2017. Of this total, 47 percent ($8,437,700) consisted of mandatory penalties paid by the airlines. The remaining amounts were either credits or potential future penalties. According to DOT officials, credits are a better way to effect positive change than merely assessing a mandatory penalty.
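As a simple arithmetic check, the aggregate figures above imply that a little over half of the assessed penalty total was never paid as a mandatory penalty. The dollar amounts below come from the consent orders reviewed; the split of the remainder between credits and potential future penalties is not itemized here:

```python
# Aggregate penalty figures from the 76 consent orders reviewed (2008-2017).
total_assessed = 17_967_000   # total civil penalties DOT assessed
mandatory = 8_437_700         # portion airlines were required to pay outright

# Mandatory penalties were about 47 percent of the assessed total;
# the remainder was made up of credits and potential future penalties.
mandatory_share = round(100 * mandatory / total_assessed)
remainder = total_assessed - mandatory

print(mandatory_share)  # -> 47
print(remainder)        # -> 9529300
```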
For example, one recent consent order included violations of regulations regarding assistance for passengers with disabilities, among other things. The airline and DOT agreed to an assessed civil penalty amount of $400,000, $75,000 of which was credited to the airline for compensation to customers filing disability-related complaints in certain years and for implementation of an application to provide real-time information and response capabilities to a wheelchair dispatch and tracking system, among other things. However, our review found that consent orders do not always ensure future compliance. Specifically, we found 14 instances where an airline received multiple consent orders for the same regulatory violation. Three of these instances—each for different airlines—related to violations of the “full fare rule,” and two—also for different airlines—related to airlines’ failure to adhere to customer service plans. We found that while DOT has some procedures (i.e., guidance documents and on-the-job training) in place for coding passenger complaints, it lacks others that could help ensure that analysts consistently code complaints and that potential consumer protection violations are properly identified. Federal internal control standards state that agencies should design control activities to achieve objectives and establish and operate monitoring activities to evaluate results. By designing and assessing control activities, such as procedures and training, agencies are able to provide management with assurance that the program achieves its objectives, which in this case involve identifying instances of airline noncompliance. DOT has taken some steps to help analysts code passenger complaints and properly identify potential violations of consumer protection requirements: Guidance documents. 
DOT developed two documents to guide complaint processing and evaluation—a coding sheet that helps analysts determine how to code complaints and identify potential consumer protection violations, and a user guide that describes how analysts should enter complaint information into the web application. However, we found that these documents may not be clear or specific enough to ensure that analysts consistently coded complaints or properly identified potential consumer protection violations. For example, while the coding sheet includes explanatory notes for 9 of the 15 complaint categories, it does not include definitions and examples for each of DOT’s 15 complaint categories that would illustrate appropriate use of a complaint code, a gap that could result in inconsistent coding. On-the-job training. DOT supplements its guidance documents with on-the-job training, which officials told us helps analysts consistently code complaints and identify potential consumer protection violations; however, DOT has not established formal training materials to ensure all new analysts get the same information. DOT pairs each newly hired analyst with a senior analyst to be their coach and instruct them on how to code complaints. According to DOT officials, senior and supervisory analysts determine when new analysts are able to code and work independently but continue to monitor their work as needed, as determined by the senior analyst. DOT officials stated that while the agency does not regularly check the extent to which complaints are consistently coded, supervisory analysts check analysts’ complaint coding on an as-needed basis throughout the year, as well as during semi-annual performance reviews. However, DOT does not provide formal training materials or other guidance to ensure that senior analysts are conveying the same information during these informal, on-the-job training sessions.
DOT officials stated that the combination of the existing guidance, procedures, and hands-on training provides adequate assurance that analysts share a common understanding of the complaint categories, resulting in complaints being consistently coded. As a result, DOT officials have not developed additional guidance documents or established formal training materials. While DOT officials said they believe their procedures and on-the-job training are sufficient to ensure that complaints are consistently coded and that potential consumer protection violations are properly identified, a recent DOT Office of Inspector General (OIG) report found that DOT analysts did not identify when to code complaints as potential consumer protection violations for a sample of frequent flyer complaints the agency reviewed. As a result, in 2016, the DOT OIG recommended that DOT provide additional training on what constitutes an unfair or deceptive practice to strengthen oversight of airlines’ frequent flyer programs. In response, DOT created a special team to process frequent flyer complaints and developed and provided review team analysts and other members with training on how to review complaints and identify potential violations related to airlines’ frequent flyer programs. Improving the procedures that analysts use to code complaints and identify potential consumer protection violations could provide DOT with additional assurances that analysts share a common understanding of the definitions of all the complaint codes, code complaints in each category consistently, and identify potential consumer protection violations. Consistent coding among analysts is important for a number of reasons. First, according to DOT officials, passengers use complaint data—which are publicly reported in DOT’s Air Travel Consumer Report—to make decisions about air travel, including which airlines to fly.
Second, DOT analysts and attorneys use complaint data to guide their compliance activities (e.g., selecting airlines for inspections and investigations, and determining proper enforcement actions). We found that while DOT’s case management system allows attorneys to track investigations, it lacks functionality that would allow DOT officials to more efficiently use data from the system to inform other key activities, such as making compliance and enforcement decisions. Federal internal control principles state that agencies should design an entity’s information system and related control activities to achieve objectives and respond to risks, which in this case involve using data from DOT’s case management system to inform its compliance activities. Our review of DOT’s case management system identified the following limitations that affect DOT’s ability to use data from its case management system to target resources and accurately monitor trends in violations, compliance activities, and the results of its enforcement actions: Key data are optional. Attorneys are not required to complete certain key data fields in the case management system. For example, attorneys are not required to document the outcome of an investigation in the “enforcement action” field. According to officials, while attorneys do not always complete this field, they often choose to document the outcome of investigations in the case notes. Even if that information is captured in the case notes section, attorneys can only access that information by individually reviewing each case file. Data entries are limited. Attorneys cannot record multiple consumer protection violations for a single investigation in the case management system. As a result, when multiple violations occur, attorneys must use their professional judgment to select the primary violation to record.
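The single-violation limitation is essentially a data-model choice: a case record that holds one violation field instead of a list of violations. A minimal sketch of a record structure that would let attorneys capture every violation, while still serving the system's original role of tracking ongoing investigations, might look like the following. All class and field names here are hypothetical illustrations, not DOT's actual system design:

```python
from dataclasses import dataclass, field

@dataclass
class InvestigationRecord:
    """Hypothetical case record allowing one-to-many violations per investigation."""
    case_id: str
    source: str                    # e.g., "passenger complaint", "airport inspection"
    enforcement_action: str = ""   # outcome field; ideally required, not optional notes
    violations: list = field(default_factory=list)

# An investigation that found two distinct regulatory violations can record
# both, rather than forcing a choice of one "primary" violation code.
record = InvestigationRecord(case_id="example-001", source="airport inspection")
record.violations.extend(["full fare rule", "customer service plan"])
print(len(record.violations))  # -> 2
```

The violation names in the example are drawn from the consent-order findings discussed in this report; the point of the sketch is only that a list-valued field removes the need for attorneys' workaround of burying additional violations in free-text case notes.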
Our review of the 76 consent orders against selected airlines resulting from airline investigations identified 24 instances—or more than 30 percent—where an airline violated multiple consumer protection regulations. While this is a small subset of all investigations (2,464) DOT completed across our timeframe, it suggests investigations could include violations of multiple consumer protection regulations. Data entries do not reflect DOT’s compliance activities. While the case management system includes a field for attorneys to document the source of investigations, the field’s response options do not fully correspond to DOT’s key compliance activities or align to DOT’s documentation listing the sources of investigations. For example, the field that tracks the source of an investigation includes an option to identify passenger complaints as the source but not an inspection of an airline. Officials told us that, like the outcomes of investigations, attorneys often document the source of an investigation in the case notes. However, as mentioned previously, information captured in the case notes section can only be accessed by individually reviewing each case file. Limited reporting capabilities exist. Attorneys are limited in their ability to run reports to understand trends across multiple investigations, according to DOT officials. For example, the case management system lacks a function to run reports by certain data fields. Specifically, according to DOT officials, attorneys cannot run reports by the airline name data field and must instead type in the airline name to create a report, a process that could produce unreliable results if an airline’s name is inconsistently entered into the database. According to DOT officials, the case management system’s capabilities are limited largely because the database was designed as a mechanism for attorneys to manage ongoing investigations. 
DOT officials told us that, while the database has successfully fulfilled that role, officials have increasingly used data from the case management system to make enforcement decisions. For example, DOT attorneys use information from the case management system to inform civil penalty amounts. In addition, DOT uses data from the case management system to analyze the results of investigations and inspections, as well as the details of consent orders in order to target future compliance activities. However, because of limited reporting capabilities, attorneys and managers must manually create summary documents from the case management system’s data, work that could be time consuming and subject to manual errors, and that does not address the issue that some data are not entered into various data fields in the first place. Recognizing limitations with the case management system, DOT has taken steps to improve the system. Specifically, starting in June 2018, DOT began working with a contractor to update the case management system’s functionality. Among other things, the updates are intended to improve the system’s ability to run reports, which could enhance DOT’s ability to examine trends in enforcement actions and penalty amounts, and allow the system to track investigation milestones. While DOT’s planned updates may help DOT officials better examine trends in enforcement actions, the planned updates do not fully address the issues we identified above, particularly related to collecting complete data. Collecting complete and comprehensive data in the case management system could allow DOT to better track trends in its investigations, inspections, and enforcement actions and to use that information to make data-driven decisions about future compliance activities and enforcement actions. While DOT has five objectives for its key compliance program activities, it has not established performance measures for three of these objectives. 
Objectives communicate what results the agency seeks to achieve, and performance measures show the progress the agency is making toward achieving those objectives. Federal internal control standards state that agencies should define objectives clearly to enable the identification of risks and define risk tolerances. They further state that management defines objectives in measurable terms, so that performance toward those objectives can be assessed. Additionally, the Government Performance and Results Act of 1993 (GPRA), as enhanced by the GPRA Modernization Act of 2010, requires agencies to develop objective, measurable, and quantifiable performance goals and related measures and to report progress in performance reports in order to promote public and congressional oversight, as well as to improve agency program performance. In fiscal years 2017 and 2018, DOT developed objectives for each of its five key compliance activities; however, as illustrated in table 4 below, DOT does not have performance measures for three of its objectives. For the three objectives for which DOT has not established performance measures, it has documented qualitative measures in internal agency documents. For example, while DOT has not developed a performance measure related to enforcing airlines’ compliance with consumer protection requirements, it summarized enforcement cases in fiscal year 2017 that illustrated actions the agency had taken to achieve this objective. For instance, one enforcement action included a consent order against an airline with an assessed penalty of $1.6 million for violating DOT’s tarmac delay rule. DOT highlighted similar accomplishments for educating airlines and conducting inspections. 
For example, DOT issued guidance to help airlines understand their legal obligations to not discriminate against passengers in air travel on the basis of race, color, national origin, religion, sex, or ancestry, and the agency highlighted identifying unlawful practices by multiple airlines during an inspection of airlines at an airport. While the actions described may provide DOT with some information on whether it is achieving its objectives, they fall short of internal control standards that call for federal agencies to define objectives in measurable terms to assess performance. DOT officials stated that they have not developed performance measures to monitor progress toward achievement of some objectives because it is difficult to develop quantifiable performance measures. We have previously reported that officials from other enforcement agencies with similar objectives found it challenging to develop performance measures, in part due to the reactive nature of enforcement and the difficulty of quantifying deterrence, but were ultimately able to do so. Developing performance measures for all objectives would allow DOT to more fully assess the effectiveness of its efforts at promoting airlines’ compliance with consumer protection requirements. Specifically: Providing compliance information to airlines. DOT has not developed quantifiable performance measures to assess how well DOT educates airlines about consumer protection requirements. For example, DOT does not have a performance measure for developing and disseminating guidance for specific rules or for issuing information on new rules within a certain time frame. Rather, officials told us that they proactively e-mail stakeholders new consumer protection rules—rather than relying on stakeholders having to find them on DOT’s website or Regulations.gov—and if officials receive the same question about the same requirement repeatedly, they might issue guidance on the topic.
According to DOT officials, these activities help ensure that stakeholders are complying with relevant consumer protection requirements. DOT officials did not provide a specific reason why they do not have a performance measure related to this objective. However, without such a measure, DOT cannot be sure that it is providing timely educational materials to clarify new consumer protection requirements and assist airlines in complying with these requirements. Conducting compliance inspections of airlines at headquarters and airports. DOT lacks quantifiable performance measures related to conducting inspections of airlines at airlines’ headquarters and at airports. Having such measures could help ensure that DOT conducts these activities. Specifically, we found that while DOT continues to conduct inspections of airlines at airports, it has not conducted inspections at airlines’ headquarters since 2016, despite having identified this compliance activity as a key priority in planning documents. According to DOT officials, they have not conducted inspections at airlines’ headquarters for two primary reasons. First, DOT officials said inspections at airlines’ headquarters require significant staff resources, which DOT has allocated to other compliance activities in recent years. Second, officials said that no airline was an obvious choice for an inspection at its headquarters because DOT had not received a disproportionate number of complaints against a specific airline to suggest an inspection was warranted. However, the DOT OIG previously directed the agency to make these inspections a priority and to allocate resources accordingly, and DOT officials themselves have said that these inspections provide incentives for airlines’ continued compliance regardless of whether one airline has an obvious problem. Establishing performance measures for conducting both types of inspections would provide greater assurance that DOT conducts these activities on a regular basis.
Moreover, officials told us that inspections at airlines’ headquarters examine specific consumer protection requirements that are not examined during inspections at airports, and that inspections at headquarters help promote compliance. Among other things, inspections at airlines’ headquarters allow DOT officials to: (1) review training manuals and training records; (2) examine a sample of passengers’ complaint data received directly by the airlines, including disability and discrimination complaints; and (3) verify that airlines are current on reporting data, such as mishandled baggage and denied boardings, to DOT. Performance measures related to how often and under what circumstances compliance inspections should take place could provide assurance that DOT conducts these activities and is not missing opportunities to monitor airlines’ compliance with consumer protection requirements. Enforcing airlines’ compliance with consumer protections. DOT officials told us that they have not developed performance measures for enforcement actions because they would not want to have performance measures that were punitive or reactive by, for example, requiring the agency to collect a certain penalty amount from airlines. While we acknowledge the complexity and risks involved in setting these types of performance measures, as mentioned previously, other agencies have done so. For example, one of the Federal Trade Commission’s performance measures is to focus 80 percent of enforcement actions on consumer complaints. Without a performance measure for enforcement activities, DOT is missing opportunities to assess the effectiveness of these activities and make any needed changes. We have previously reported that performance measurement gives managers crucial information to identify gaps in program performance and plan any needed improvements. DOT’s primary vehicle for educating passengers is its aviation consumer protection website, which it relaunched in November 2017 (see fig.
3). According to DOT officials, as part of the relaunch, DOT improved the navigability and accessibility of the website by, among other things, arranging material by topic, adding icons for various subjects, and including a link for the website on DOT’s aviation homepage. The website now includes summaries of passengers’ rights on a number of issues including tarmac delays, overbookings, mishandled baggage, and disability issues, as well as DOT’s rules, guidance issued to airlines and others, and enforcement orders on key consumer protection issues. Moreover, the website is now accessible to people with disabilities. Moving forward, DOT has a number of additional updates planned through fiscal year 2019. For example, DOT plans to update its website with information on frequent flyer issues, optional services and fees, and codeshare agreements by the end of calendar year 2018. According to DOT officials, while DOT is not statutorily required to conduct these education activities, passenger education is a key part of ensuring airlines’ compliance. DOT also has numerous other efforts to educate passengers on their rights. For example: Establishing resources for passengers. DOT developed Fly Rights—an online brochure that details how passengers can avoid common travel problems—in addition to material on unaccompanied minors, family seating, and a glossary of common air travel terms. DOT also developed training tools (e.g., brochures, digital content, and videos) on the rights of passengers with disabilities under the Air Carrier Access Act of 1986 and its implementing regulations, including wheelchair assistance at airports and onboard aircraft, traveling with a service animal, and traveling with assistive devices. While some of these materials were developed primarily for airline employees and contractor staff, others were developed to directly assist passengers with disabilities by providing helpful tips on airlines’ responsibilities, according to DOT officials.
Building consumer education information into existing regulations. Passenger education is built into certain consumer protection requirements, according to DOT officials. For example, when an airline involuntarily denies a passenger boarding, immediately after the denied boarding occurs, the airline must provide a written statement explaining the terms, conditions, and limitations of denied boarding compensation, and describing the airline’s boarding priority rules and criteria. Responding to complaints. DOT officials said they include information on an airline’s responsibilities when responding to passenger complaints. For example, if a passenger submits a complaint to DOT about not receiving compensation for a delayed or cancelled flight, the DOT analyst may inform the passenger that airlines are generally not required to compensate passengers in these instances. We compared DOT’s efforts to educate airline passengers about their rights against key practices for consumer outreach GAO identified in prior work and found that DOT’s efforts fully align with five of the nine key practices (see fig. 4). For example, we found that DOT has successfully identified the goals and objectives of its passenger education program and identified the appropriate media mix for disseminating its materials. Similarly, we found that DOT had identified and engaged stakeholders, a step that, according to DOT officials, allowed them to better tailor materials. However, as summarized in the figure below, we found that DOT only partially met or did not meet the remaining four key practices. For example, DOT’s actions do not align with the key practice to “identify resources” based on an established budget and only partially align with the key practice to “develop consistent, clear messages.” According to a senior DOT official, DOT has not identified budgetary resources because, while important, DOT’s educational efforts are secondary to the office’s other efforts.
Further, officials said that it has been difficult for the agency to develop a budget when it has been operating under a continuing resolution for some part of the fiscal year for the last decade. However, without identifying short- and long-term budgetary resources and planning activities accordingly, DOT is missing an opportunity to plan educational efforts or prioritize needs based on available resources. In addition, we found DOT’s efforts only partially align with the key practice that calls for an agency to research its target audience. While DOT has solicited some input from stakeholder groups such as those representing passengers with disabilities, DOT has not solicited feedback directly from passengers to understand what they know about their rights. DOT officials said they have not sought such feedback because they have not identified a method for doing so that would be statistically generalizable and not cost prohibitive. While costs are always an issue when considering budget priorities, we have previously reported on other agencies’ direct consumer outreach efforts that, while not statistically generalizable, were nonetheless useful for understanding the effect of the agencies’ efforts. For example, the Bureau of Consumer Financial Protection has used focus groups to evaluate its outreach efforts. Bureau of Consumer Financial Protection officials previously told GAO that while obtaining information through such efforts was resource intensive, it allowed them to assess the performance of their outreach activities. In another case, an agency surveyed users who access its website to help it understand whether its outreach efforts were effective. Obtaining input from passengers directly on what information they want or what they know about their rights would provide DOT with greater assurance that educational materials are appropriately tailored to meet a wide range of passengers’ needs.
Finally, DOT has not established performance measures to understand the quality of its passenger education materials (i.e., process measures) or the effectiveness of its efforts (i.e., outcome measures). DOT officials said that they receive informal input from stakeholders on the quality of the materials and track website traffic to understand whether materials are reaching passengers. Officials said they believe that these mechanisms provide them with some assurance that the materials are meeting passengers’ needs and that passengers are accessing and using the materials. While these mechanisms may provide DOT with some information on how often materials are accessed online, they do not help it understand the quality of the materials or measure the success of its passenger education efforts. For example, while DOT officials track website traffic, they have not established a related performance measure. A number of different measures could be used to track processes and outcomes related to the use of its website, including the time consumers spend on the website, number of website pages viewed, bounce rate (i.e., percentage of visitors who looked at only one page and immediately left the site), or users’ perceptions of their visit experience. Establishing such measures would provide DOT with greater assurance that its educational efforts are appropriately tailored to passengers and leading to improved understanding of passengers’ rights, including whether any adjustments are needed. To enforce consumer protection requirements, such as those preventing unfair or deceptive practices or unfair methods of competition by airlines, DOT has conducted almost 2,500 investigations and issued about 400 consent orders over the last decade. However, DOT lacks reasonable assurance that its approach is achieving the highest level of airlines’ compliance, given its available resources.
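The candidate website measures named above (bounce rate, pages viewed, time on site) can all be computed from basic per-visit analytics records. A minimal sketch, assuming hypothetical session data and field names that are not DOT's actual analytics:

```python
# Computing candidate website performance measures from per-session
# records. The session data and field names below are hypothetical,
# for illustration only; they are not DOT's actual web analytics.

sessions = [
    {"pages_viewed": 1, "seconds_on_site": 15},
    {"pages_viewed": 4, "seconds_on_site": 180},
    {"pages_viewed": 2, "seconds_on_site": 95},
    {"pages_viewed": 1, "seconds_on_site": 10},
]

total_visits = len(sessions)
# Bounce rate: share of visitors who viewed only one page and left.
bounce_rate = sum(1 for s in sessions if s["pages_viewed"] == 1) / total_visits
avg_pages_per_visit = sum(s["pages_viewed"] for s in sessions) / total_visits
avg_seconds_on_site = sum(s["seconds_on_site"] for s in sessions) / total_visits

print(f"Bounce rate: {bounce_rate:.0%}")            # Bounce rate: 50%
print(f"Pages per visit: {avg_pages_per_visit}")    # Pages per visit: 2.0
print(f"Time on site: {avg_seconds_on_site:.0f}s")  # Time on site: 75s
```

Tracked over time against a target (for example, a declining bounce rate on key passenger-rights pages), statistics like these would turn raw traffic counts into the kind of performance measure the report describes.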
For example, DOT has not assessed whether its procedures and training materials help analysts consistently code passengers’ complaints and identify potential consumer protection violations. Additionally, DOT has not fully used data from its case management system to inform its compliance program. Moreover, in the absence of comprehensive performance measures, DOT lacks a full understanding of the extent to which it is achieving its goal of airlines’ compliance with consumer protection requirements and whether any programmatic changes may be warranted. Improvements in these areas would provide DOT with additional information to target its resources and improve compliance. DOT has taken positive steps to educate passengers about their rights—through its revamped website and other educational resources. Nevertheless, DOT could improve its efforts by more fully following key practices GAO previously identified for conducting consumer education, such as by: seeking feedback directly from consumers; identifying short- and long-term budget resources; and establishing performance measures. Taking such actions would provide DOT with greater assurance that its efforts are meeting passengers’ needs. We are making the following six recommendations to DOT: The Office of the Secretary should assess its procedures and training materials for coding airline passengers’ complaints, as appropriate, to help ensure that passengers’ complaints are consistently coded and that potential consumer protection violations are properly identified. (Recommendation 1) The Office of the Secretary should assess the feasibility and cost of updating its airline case management system to address data and reporting limitations, and undertake those updates that are cost-effective and feasible. (Recommendation 2) The Office of the Secretary should establish performance measures for each of its objectives for its five key airline-compliance activities.
(Recommendation 3) The Office of the Secretary should capture feedback directly from airline passengers or identify other mechanisms to capture passengers’ perspectives to inform DOT’s education efforts. (Recommendation 4) The Office of the Secretary should identify available short- and long-term budgetary resources for DOT’s airline-passenger education efforts. (Recommendation 5) The Office of the Secretary should develop performance measures for DOT’s efforts to educate airline passengers. (Recommendation 6) We provided a draft of this report to DOT for review and comment. DOT provided written comments, which are reprinted in appendix IV, and technical comments, which we incorporated as appropriate. DOT concurred with our recommendations and officials said that they had begun taking steps to address the recommendations. We are sending copies of this report to the appropriate congressional committees, DOT, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at 202-512-2834 or vonaha@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Since the airline industry’s deregulation in 1978, numerous studies have examined the effects of competition in the industry. Most have examined the link between competition and pricing on specific airline routes—i.e., airline service between two airports or cities. These routes are viewed as the relevant markets for competitive analysis because they reflect the products that consumers purchase and for which airlines set prices. These studies have examined the pricing effect: (1) of route competition, (2) of the extent of an airline’s presence at airports, and (3) of mergers in the evolving airline industry.
Studies have generally shown (1) that prices tend to be higher when fewer airlines serve a city-pair market and (2) that airline dominance at airports can be associated with higher market prices. Other studies have also shown that the presence of a low-cost airline on a route—or even the threat of entry by a low-cost airline—is associated with lower fares. In addition, some studies have examined whether there is a link between the level of competition in city-pair markets and certain elements of customer service quality, such as the incidence and length of delays, cancellations, lost baggage, flight frequency, and denied boarding. While competition generally lowers prices, the effect of competition on the quality of service is more ambiguous. On the one hand, firms may compete on quality of service; in this instance, competition leads to higher-quality service. On the other hand, it is also possible that a firm facing less competition may invest in quality of service to more fully differentiate among passengers. A variety of factors could influence the association between competition and customer service. These factors include, for example: the cost of providing higher levels of quality, the extent to which consumers have full knowledge of quality, the extent to which consumers change future purchasing decisions based on quality, and the value consumers place on product quality relative to product price. In the context of the airline industry, airline investments that underlie the provision of consumer services are not necessarily route-specific, as they more likely relate to investments airlines make at airports, or at the overall airline level. For example, airlines make decisions about the extent to which resources—such as the number of aircraft and customer service personnel—are available at a given airport.
Moreover, policies regarding training of gate and customer service personnel likely take place at the corporate level, as do decisions about the configuration of aircraft, which may have related quality of service factors. Also, because airlines provide a service that involves a large network, some elements of quality may relate to the broad decisions regarding the management of that network. For example, if a flight is delayed on one route, it may affect the timeliness of several downstream flights due to the late arrival of the aircraft, pilots, and flight attendants, and airlines may take these networked effects into consideration in ways that could affect customer service. Still, some decisions that airlines make do have route-specific consequences that could influence customer service, such as decisions on flight scheduling and which flights to cancel or delay in the face of operational disruptions. Some empirical airline literature on the impact of competition on certain quality factors predates several airline mergers, and some was conducted more recently. In the earlier literature, several studies found a linkage between the competitiveness of airline markets and customer service outcomes such as on-time performance, cancellations, mishandled baggage, and flight frequency. These studies generally found that more competitive markets are associated with an improvement in one or more of these aspects of customer service. For example, one study found a small increase in the number of cancelled flights when a route was served by only one airline, and another found that such routes had, on average, slightly longer delays. However, the extent of these improvements has typically been small, such as an association with a small reduction in cancellations or a reduced average delay of just a few minutes.
On the other hand, some studies found that delays and cancellations are less common when they involve airlines’ hub airports—especially when a flight is destined for an airline’s hub airport. In order to look more closely at the relationship between market competition and airline customer service in recent years, we reviewed several more current studies. Specifically, because the nature of the airline industry—particularly its competitive landscape—transformed after the 2007–2009 recession, we selected studies for which at least part of the study period fell after the recession. We identified six studies that met our criteria for inclusion, each of which examined some aspect of the link between airline market competition and one or more elements of customer service. As with the earlier studies, these more recent studies generally found that greater competition was associated with some improved customer service. Specifically, some studies found that flight delays were, on average, a little longer, and flight cancellations more likely when markets were more highly concentrated or in the aftermath of an airline merger. For example, one study found that a particular level of increased route concentration was associated with about a 4-minute average increase in flight delay. Another study found a similar effect on delay and also found a slightly higher incidence of cancellations on more concentrated routes. These increases in delays and cancellations were generally small. In the case of mergers, the findings are somewhat mixed. One study we reviewed found increased cancellations and more delays after mergers, but the effects tended to diminish over time, while another study did not find an effect of mergers on these measures of customer service.
Another study found that the effect of mergers on consumer welfare—as measured by both price and flight frequency—may be idiosyncratic to the specific airlines involved in the merger and the state of competition in the broader market at the time of the merger. In addition, a GAO study that examined the effect of the tarmac delay rule on flight cancellations found that flights on routes where either the originating or destination airport was a hub airport for the airline had a lower likelihood of cancellation, possibly indicating a focus by airlines on maintaining smooth operations as much as possible. Generally, the differing findings on the extent or existence of quality impacts could be the result of varied methodologies in these analyses, including differing model specifications, variable measurements, and analysis time frames. Finally, while these studies provide insight into the link between competition and certain aspects of service quality, some elements of airlines’ service quality are harder to explore in this way. For example, there are no data that would be readily usable in empirical analyses on the effect of competition on certain quality measures such as the extent to which airline websites are user-friendly, the ability to be rebooked on a different flight when a flight is missed or cancelled, the helpfulness of airline staff, and consumer satisfaction with airline cabin amenities, such as seat comfort and availability and quality of food for sale. Moreover, while studies examine effects of competition at the route level, the national airline industry has become more concentrated in the past decade due to a series of bankruptcies and mergers. The reduced competition at this broad level may also have implications for customer service, such as the level of service provided at airports and policies on flight cancellations and rebooking.
Our objectives for this report were to: (1) describe trends in DOT data on airline service from 2008 through 2017 and airlines’ actions to improve service; (2) assess how effectively DOT ensures airlines’ compliance with consumer protection requirements; and (3) assess the extent to which DOT’s airline passenger education efforts align with key practices for consumer outreach. We also examined the relationship between airline competition and customer service (app. I). The scope of this report focused on issues regarding consumer protections for airline passengers (i.e., “consumer protections”) overseen by DOT. We focused our analysis on the time period 2008 through 2017 unless otherwise noted because it encompassed key additions or amendments to consumer protection regulations, including Enhancing Airline Passenger Protections I, II, and III. For each of our objectives, we reviewed documents and data from DOT and airlines, to the extent possible. We also conducted multiple interviews with officials from DOT’s Office of the Assistant General Counsel for Aviation Enforcement and Proceedings and its Aviation Consumer Protection Division, in addition to a non-generalizable sample of 25 stakeholders—including representatives from 11 airlines, 3 market research organizations, 3 aviation academics, and 8 industry associations representing airlines, airline staff, and airline passengers. To describe trends in airline service, we analyzed DOT operational data and passenger complaints submitted to DOT from 2008 through 2017. Specifically, we analyzed DOT’s data on late flights; cancellations; diverted flights (i.e., flights operated from the scheduled origin point to a point other than the scheduled destination point in the airline’s published schedule); voluntary and involuntary denied boardings; and mishandled baggage to describe airlines’ operational performance. 
From 2008 through 2017, DOT required airlines with at least one percent of domestic scheduled-passenger revenues in the most recently reported 12-month period to report these data for reportable flights—we refer to these airlines as “reporting airlines” throughout our report. We also obtained data for passenger complaints submitted to DOT and analyzed the data to identify the frequency, types, and changes in complaints over time. We limited our analysis of passenger complaint data to “selected” airlines that were required to report operational data to DOT in 2017—the most recent year of available data when we started our review—because they were the 12 largest U.S. domestic passenger airlines in 2016. To assess the reliability of the operational data and complaints, we conducted electronic testing of the data to identify any outliers, compared our results to DOT published data, and interviewed DOT officials about how the data were collected and used. Because our interviews with DOT officials indicated that no changes had been made to the processes used to collect and maintain both data sources, we also relied on our past data reliability assessments from recently issued GAO reports, assessments that found that both data sources are sufficiently reliable for providing information on trends over time. Therefore, we determined that the data were sufficiently reliable for our purposes, including to present high-level trends in service over time. We also reviewed analyses from three market research organizations that we identified during the course of our work—J.D. Power and Associates, the American Customer Satisfaction Index, and the Airline Quality Rankings—to provide additional information on airline service quality. We interviewed the authors to understand how they conducted the analyses; however, we did not evaluate the underlying methodologies.
We determined that the results were reliable enough to report their high-level trends on passenger satisfaction. To understand airlines’ actions to enhance service, we interviewed or received written responses from 11 of the 12 selected airlines. We conducted interviews with airline representatives using a semi-structured interview instrument, which included questions pertaining to business practices aimed at improving service from 2013 through 2017, among other things. We conducted three pretests with one airline and two industry groups. Representatives from each group provided technical comments, which we incorporated, as appropriate. We limited our timeframe to the most recent 5 years because business practices in the industry evolve quickly and we wanted to highlight the most relevant and recent practices. During interviews, we asked selected airline representatives whether these practices were documented in contracts of carriage or other customer commitment documents and reviewed those documents as appropriate. During these interviews, we also asked selected airline representatives if they considered certain aspects of the passenger complaint data they receive directly from passengers to be proprietary, and all airline representatives said the data were proprietary. To inform interviews with selected airline representatives and to understand recent airline business practices aimed at improving service for passengers, we also conducted a literature search of trade publications and industry reports from 2013 through 2017. Where relevant, we used information from this literature search as additional context and as a basis for our questions to airline representatives regarding specific business practices. To describe how DOT ensures airlines’ compliance with consumer protection requirements, we reviewed DOT’s documentation of the policies, procedures, and guidance that describe its five key compliance activities.
In addition, we conducted multiple interviews with staff from DOT’s Office of the Assistant General Counsel for Aviation Enforcement and Proceedings and its Aviation Consumer Protection Division. To identify trends in DOT’s key compliance activities from 2008 through 2017, we analyzed reports and data DOT provided on the number and results of its airline inspections, investigations, enforcement actions, and civil penalties—including data from DOT’s case management system. To assess the reliability of the data, we interviewed DOT officials to understand how the data are collected and used and the steps DOT takes to ensure the data are accurate, complete, and reliable. We determined that the data were reliable enough to summarize trends in DOT’s investigation and enforcement actions from 2008 through 2017. To determine how effectively DOT implements its compliance program, we assessed selected key compliance activities—i.e., coding passenger complaints, using the case management system to inform compliance activities, and developing objectives and related performance measures—against selected principles of Standards of Internal Control in the Federal Government related to control activities. We also summarized other leading practices for developing performance measures, in addition to our past work, which has identified other agencies with successful performance measures. To understand the extent to which passenger education materials developed by DOT align with key practices for consumer outreach, we reviewed DOT’s educational materials and assessed them against nine key practices we previously developed for consumer education planning. In that prior work, GAO convened an expert panel of 14 senior management-level experts in strategic communications to identify the key practices of a consumer education campaign. We believe the key practices the expert panel identified in 2007 remain relevant today since the practices are not time-sensitive. 
In addition to reviewing relevant materials, we also conducted interviews with DOT officials to understand their outreach efforts. During these interviews, DOT officials agreed that these criteria were relevant to conducting consumer outreach. For a complete list of the criteria and corresponding definitions, see appendix III. To understand the impact of airline competition on customer service provided to passengers, we conducted a literature search of pertinent studies in scholarly, peer-reviewed journals, conference papers, and government publications. We restricted our review to results published between January 1, 2012, and December 31, 2017, and our search yielded 57 academic results and 10 government studies. Of these results, we reviewed each abstract to determine whether it was relevant to our objective based on criteria we established. For example, we limited results to those looking at the U.S. airline system and eliminated results that focused solely on airfares. In total, we found that 5 academic studies and 1 government study were ultimately relevant and sufficiently reliable for our report. To summarize prior work in this area, we also summarized 6 additional studies that we identified by reviewing the bibliographies of our selected studies or that were identified as key pieces of research in the field. We conducted this performance audit from September 2017 to November 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. GAO previously identified nine key practices that are important to conducting a consumer education campaign (see table 5). Andrew Von Ah, (202) 512-2834 or vonaha@gao.gov.
In addition to the individual named above, other key contributors to this report were Jonathan Carver, Assistant Director; Melissa Swearingen, Analyst-in-Charge; Amy Abramowitz; Lacey Coppage; Caitlin Cusati; Delwen Jones; Kelsey Kreider; Ethan Levy; Gail Marnik; SaraAnn Moessbauer; Malika Rice; Minette Richardson; Pamela Snedden; and Laurel Voloder.
Airlines recently came under scrutiny for their treatment of passengers—including a high-profile incident in which a passenger was forcibly removed from an overbooked flight. However, airlines maintain that service has improved, citing better on-time performance and lower airfares. DOT has the authority to issue and enforce certain consumer protection requirements. DOT also educates passengers about their rights. GAO was asked to examine airline consumer protection issues. This report examines, among other issues, (1) trends in DOT's data on airline service; (2) the effectiveness of DOT's compliance efforts; and (3) the extent to which DOT's passenger education efforts align with key practices for consumer outreach. GAO reviewed DOT data on airline service and analyzed passenger complaint data for the 12 largest domestic airlines from 2008 through 2017; reviewed relevant documents and data on DOT's compliance program; assessed DOT's educational efforts against key practices for successful consumer outreach; and interviewed DOT officials. GAO interviewed or obtained written information from 11 of the 12 airlines. The Department of Transportation's (DOT) data offered mixed information on whether airlines' service improved from 2008 through 2017. While DOT's operational data on rates of late flights, denied boardings, and mishandled baggage generally suggested improvement, the rate of passenger complaints received by DOT increased about 10 percent—from about 1.1 complaints per 100,000 passengers to 1.2 complaints per 100,000 passengers. DOT conducts five key activities to ensure airlines' compliance with consumer protection requirements (see table). However, GAO found that DOT lacked performance measures to help it evaluate some of these activities and that it could improve its procedures (i.e., guidance documents and training materials) that analysts use to code passenger complaints.
Performance measures: DOT has established objectives for each of its five key compliance activities that state what it seeks to achieve; however, DOT lacks performance measures for three objectives. For example, DOT lacks a performance measure for conducting inspections of airlines' compliance with consumer protection requirements at airlines' headquarters and at airports. As a result, DOT is missing opportunities to capture critical information about airlines' compliance with consumer protection requirements. Procedures: DOT has procedures to help analysts code passenger complaints and identify potential consumer protection violations. GAO found that DOT's guidance for coding passenger complaints did not consistently include definitions or examples that illustrate appropriate use or help analysts select among the various complaint categories. Additional procedures would help DOT ensure that complaints are consistently coded and that potential violations are properly identified. GAO found that while DOT has taken steps to educate passengers on their rights, its efforts did not fully align with four of nine key practices GAO previously identified for conducting consumer education. For example, while DOT has defined the goals and objectives of its outreach efforts, it has not used budget information to prioritize efforts or established performance measures to assess the results. DOT has also not solicited input directly from passengers to understand what they know about their rights. Taking such actions would provide DOT with greater assurance that its efforts are meeting passengers' needs. GAO is making six recommendations, including that DOT: develop performance measures for compliance activities, improve its procedures for coding airline passengers' complaints, and improve how passenger education aligns with GAO's key practices. DOT concurred with the recommendations and provided technical comments, which GAO incorporated as appropriate.
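The complaint rate cited above is a simple normalization: complaints divided by enplaned passengers, scaled to 100,000. A sketch of that calculation, with hypothetical complaint and passenger counts chosen only to produce rates near 1.1 and 1.2 (these are not DOT's actual figures):

```python
# Complaint-rate normalization: complaints per 100,000 enplaned
# passengers, and the percent change between two years. The complaint
# and passenger counts below are hypothetical illustrations chosen to
# produce rates near those in the text; they are not DOT data.

def complaints_per_100k(complaints: int, passengers: int) -> float:
    return complaints / passengers * 100_000

rate_2008 = complaints_per_100k(complaints=8_000, passengers=727_000_000)
rate_2017 = complaints_per_100k(complaints=10_200, passengers=842_000_000)
pct_change = (rate_2017 - rate_2008) / rate_2008 * 100

print(round(rate_2008, 1))  # 1.1
print(round(rate_2017, 1))  # 1.2
print(round(pct_change))    # 10
```

Normalizing by passenger volume is what makes the two years comparable: raw complaint counts can rise simply because more people are flying.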
While HUD has primary responsibility for addressing lead paint hazards in federally-assisted housing, EPA also has responsibilities related to setting federal lead standards for housing. EPA sets federal standards for lead hazards in paint, soil, and dust. Additionally, EPA regulates the training and certification of workers who remediate lead paint hazards. CDC sets a health guideline known as the “blood lead reference value” to identify children exposed to more lead than most other children. As of 2012, CDC began using a blood lead reference value of 5 micrograms of lead per deciliter of blood. Health care providers and public health agencies can use the reference value to identify children who may benefit the most from early intervention. CDC’s blood lead reference value is based on the 97.5th percentile of the blood lead distribution in U.S. children (ages 1 to 5), using data from the National Health and Nutrition Examination Survey. Children with blood lead levels above CDC’s blood lead reference value have blood lead levels in the highest 2.5 percent of all U.S. children (ages 1 to 5). HUD, EPA, and the Department of Health and Human Services (HHS) are members of the President’s Task Force on Environmental Health Risks and Safety Risks to Children. HUD co-chairs the lead subcommittee of this task force with EPA and HHS. The task force published the last national lead strategy in 2000. The primary federal legislation to address lead paint hazards and the related requirements for HUD is the Residential Lead-Based Paint Hazard Reduction Act (Title X of the Housing and Community Development Act of 1992). We refer to this law as Title X throughout this report. Title X required HUD to, among other things, promulgate lead paint regulations, implement the lead hazard control grant programs, and conduct research and reporting, as discussed throughout this report.
The two key regulations that HUD has issued under Title X are the Lead Disclosure Rule and the Lead Safe Housing Rule: Lead Disclosure Rule. In 1996, HUD and EPA jointly issued the Lead Disclosure Rule. The rule applies to most housing built before 1978 and requires sellers and lessors to disclose any known information, available records, and reports on the presence of lead paint and lead paint hazards and provide an EPA-approved information pamphlet prior to sale or lease. Lead Safe Housing Rule. In 1999, HUD first issued the Lead Safe Housing Rule, which applies only to housing receiving federal assistance or federally-owned housing being sold. The rule established procedures for evaluating whether a lead paint hazard exists, controlling or eliminating the hazard, and notifying occupants of any lead paint hazards identified and related remediation efforts. The rule established an “elevated blood lead level” as a threshold that requires landlords and PHAs to take certain actions if a child’s blood test shows lead levels meeting or exceeding this threshold. In 2017, HUD amended the rule to align its definition of an “elevated blood lead level” with CDC’s blood lead reference value. This change lowered the threshold that generally required landlords and PHAs to act from 20 micrograms to 5 micrograms of lead per deciliter of blood. According to the rule, when a child under age 6 living in HUD-assisted housing has an elevated blood lead level, the housing provider must take several steps. These generally include testing the home and other potential sources of the child’s lead exposure within 15 days, ensuring that identified lead paint hazards are addressed within 30 days of receiving a report detailing the results of that testing, and reporting the case to HUD. Office of Lead Hazard Control and Healthy Homes (Lead Office). 
HUD’s Lead Office is primarily responsible for administering HUD’s two lead hazard control grant programs, providing guidance on HUD’s lead paint regulations, and tracking HUD’s efforts to make housing lead-safe. The Lead Office collaborates with HUD program offices on its oversight and enforcement of lead paint regulations. For instance, the Lead Office issues guidance, responds to questions about requirements of lead paint regulations, and provides training and technical assistance to HUD program staff, PHA staff, and property owners. The Lead Office’s oversight efforts also include maintaining email and telephone hotlines to receive complaints and tips from tenants or homeowners, among others, as they pertain to lead paint regulations. Additionally, the Lead Office, in collaboration with EPA, contributes to the operation of the National Lead Information Center––a resource that provides the general public and professionals with information about lead, lead hazards, and their prevention. Office of Public and Indian Housing (PIH). HUD’s PIH oversees and enforces HUD’s lead paint regulations for the rental assistance programs. As discussed earlier, this report focuses on the two largest rental assistance programs serving the most families with children––the Housing Choice Voucher and public housing programs. Housing Choice Voucher program. In the voucher program, eligible families and individuals are given vouchers as rental assistance to use in the private housing market. Generally, eligible families with vouchers live in the housing of their choice in the private market. The voucher generally pays the difference between the family’s contribution toward rent and the actual rent for the unit. Vouchers are portable; once a family receives one, it can take the voucher and move to other areas where the voucher program is administered. In 2017, there were roughly 2.5 million vouchers available. Public housing program. 
Public housing consists of reduced-rent developments owned and operated by the local PHA and subsidized by the federal government. PHAs receive several streams of funding from HUD to help make up the difference between what tenants pay in rent and what it costs to maintain public housing. For example, PHAs receive operating and capital funds through a formula allocation process. PHAs use operating funds to pay for management, administration, and day-to-day costs of running a housing development. Capital funds are used for modernization needs, such as replacing roofs or remediating lead paint hazards. According to HUD rules, generally families that are income-eligible to live in public housing pay 30 percent of their adjusted income toward rent. In 2017, there were roughly 1 million public housing units available. For both of these rental assistance programs, the Office of Field Operations (OFO) within PIH oversees PHAs’ compliance with lead paint regulations, in conjunction with HUD field office staff. The office has a risk-based approach to overseeing PHAs and performs quarterly risk assessments. Also within PIH, staff from the Real Estate Assessment Center are responsible for inspecting the physical condition of public housing properties. Office of Policy Development and Research (PD&R). HUD’s PD&R is the primary office responsible for data analysis, research, and program evaluations to inform the development and implementation of programs and policies across HUD offices. of the total grant amount, while the Lead Hazard Reduction Demonstration grant program has required at least a 25 percent match. For fiscal years 2013–2017, HUD awarded $527 million for its lead hazard control grants, which included 186 grants to state and local jurisdictions (see fig. 1). In these 5 years, about 40 percent of grants awarded went to jurisdictions in the Northeast and 31 percent to jurisdictions in the Midwest––regions of the country known to have a high prevalence of lead paint hazards.
Additionally, in these 5 years, 90 percent of grant awards went to grantees at the local jurisdiction level (cities, counties, and the District of Columbia). The other 10 percent of grant awards went to state governments. During this time period, HUD awarded the most grants to jurisdictions in Ohio (17 grants), Massachusetts and New York (15 grants each), and Connecticut (14 grants). HUD’s Lead-Based Paint Hazard Control grant and the Lead Hazard Reduction Demonstration grant programs have incorporated Title X statutory requirements through recent annual funding notices and their grant processes. Title X contains applicant eligibility requirements and selection criteria HUD should use to award lead grants. To be eligible to receive a grant, applicants need to be a state or local jurisdiction, contribute matching funds to supplement the grant award, have an approved comprehensive affordable housing strategy, and have a certified lead abatement program (if the applicant is a state government). HUD has incorporated these eligibility requirements in its grant programs’ 2017 funding notices, which require applicants to demonstrate that they meet these requirements when they apply for a lead grant. According to the 2017 funding notices, applicants must detail the sources and amounts of their matching contributions in their applications. Similarly, applicants must submit a form certifying that the proposed grant activities are consistent with their local affordable housing strategy. HUD’s 2017 funding notices state that if applicants did not meet these eligibility requirements, HUD would not consider their applications. 
Additionally, Title X requires HUD to award lead grants according to the following applicant selection criteria: the extent to which an applicant’s proposed activities will reduce the risk of lead poisoning for children under the age of 6; the degree of severity and extent of lead paint hazards in the applicant’s jurisdiction; the applicant’s ability to supplement the grant award with state, local, or private funds; the applicant’s ability to carry out the proposed grant activities; and other factors determined by the HUD Secretary to ensure that the grants are used effectively. In its 2017 funding notices, HUD incorporated the Title X applicant selection criteria through five scoring factors that it used to assess lead grant applications. HUD allocated a certain number of points to each scoring factor. Applicants are required to develop their grant proposals in response to the scoring factors. When reviewing applications, HUD staff evaluated an applicant’s response to the factors and assigned points for each factor. See table 1 for a description of the 2017 lead grant programs’ scoring factors and points. As shown in table 1, HUD awarded the most points (46 out of 100) to the “soundness of approach” scoring factor, according to HUD’s 2017 funding notices. Through this factor, HUD incorporated Title X selection criteria on an applicant’s ability to carry out the proposed grant activities and supplement a grant award with state, local, or private funds. For example, HUD’s 2017 funding notices required applicants to describe their detailed plans to implement grant activities, including how the applicants will establish partnerships to make housing lead-safe. Specifically, HUD began awarding 2 of the 100 points to applicants who demonstrated partnerships with local public health agencies to identify families with children for enrollment in the lead grant programs. 
Additionally, HUD asked applicants to identify partners that can help provide assistance to complete the lead hazard control work for high-cost housing units. Furthermore, HUD required applicants to identify any nonfederal funding, including funding from the applicants’ partners. Appendix I includes examples of state, local, and nongovernmental funds that selected grantees planned to use to supplement their lead grants. In its lead grant programs, HUD has taken actions that were consistent with OMB’s requirements for competitively awarded grants. OMB generally requires federal agencies to: (1) establish a merit-review process for competitive grants that includes the criteria and process to evaluate applications; and (2) develop a framework to assess the risks posed by applicants for competitive grants, among other things. Through a merit-review process, an agency establishes and applies criteria to evaluate the merit of competitive grant applications. Such a process helps to ensure that the agency reviews grant applications in a fair, competitive, and transparent manner. Consistent with the OMB requirement to establish a merit review process, HUD has issued annual funding notices that communicate clear and explicit evaluative criteria. In addition, HUD has established processes for reviewing and scoring grant applications using these evaluative criteria, and selects grant recipients based on the review scores (see fig. 2). For example, applicants that score at or above 75 points are qualified to receive awards from HUD. Also, HUD awards funds beginning with the highest scoring applicant and proceeds by awarding funds to applicants in descending order until funds are exhausted.
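The award sequence described above—qualify at or above the 75-point threshold, then fund applicants in descending score order until funds run out—can be sketched as follows. This is an illustrative sketch with hypothetical applicant names and amounts, not HUD's actual system; for simplicity it partially funds the last applicant rather than modeling HUD's handling of tied scores.

```python
# Illustrative sketch of the lead grant award sequence described in the
# 2017 funding notices: applicants scoring at or above 75 points qualify,
# and funds are awarded in descending score order until exhausted.
# Applicant names, scores, and amounts are hypothetical.

def award_grants(applications, available_funds, threshold=75):
    """Return a list of (applicant, amount) awards."""
    qualified = [a for a in applications if a["score"] >= threshold]
    qualified.sort(key=lambda a: a["score"], reverse=True)

    awards = []
    remaining = available_funds
    for app in qualified:
        if remaining <= 0:
            break
        amount = min(app["requested"], remaining)  # partial funding if money runs short
        awards.append((app["name"], amount))
        remaining -= amount
    return awards

applications = [
    {"name": "City A", "score": 92, "requested": 3_000_000},
    {"name": "State B", "score": 81, "requested": 2_500_000},
    {"name": "County C", "score": 74, "requested": 1_000_000},  # below 75-point threshold
    {"name": "City D", "score": 78, "requested": 2_000_000},
]

print(award_grants(applications, available_funds=5_000_000))
```

With $5 million available, the two highest scorers are funded (the second only partially) and lower scorers receive nothing, mirroring the descending-order exhaustion the notices describe.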
Furthermore, consistent with the OMB requirement to develop a framework to assess applicant risks, HUD has developed a framework to assess the risk posed by lead grant applicants by, among other things, deeming ineligible those applicants with past performance deficiencies or those that do not have a financial management system that meets federal standards. However, HUD has not fully documented or evaluated its lead grant processes in reviewing and scoring the grants and making award decisions: Documenting grant processes and award decisions. While HUD has established processes for its lead grant programs, it lacks documentation, including detailed guidance to help ensure that staff carry out processes consistently and appropriately. Federal internal control standards state that agency management should develop and maintain documentation of its internal control system. Such documentation assists agency management by establishing and communicating the processes to staff. Additionally, documentation of processes can provide a means to retain organizational knowledge and communicate that knowledge as needed to external parties. The Lead Office’s Application Review Guide describes its grant application review and award processes at a high level but does not provide detailed guidance for staff as to how tasks should be performed. For example, the Guide notes that reviewers score eligible applications according to factors contained in the funding notices but does not describe how the reviewers should allocate points to the subfactors that make up each factor. Lead Office staff told us that creating detailed scoring guidance would be challenging because applicants’ proposed grant activities differ widely, and they said that scoring grant applications is a subjective process. 
While scoring grant applications may involve subjective judgments, improved documentation of grant review and scoring processes, including additional direction to staff, can help staff apply their professional judgment more consistently in evaluating applications. By better documenting processes, HUD can better ensure that staff evaluate applications consistently. Additionally, HUD has not fully documented its rationale for deciding which applicants receive lead grant awards and for deciding the dollar amounts of grant awards to successful applicants. In prior work examining federal grant programs, one recommended practice we identified is that agencies should document the rationale for award decisions, including the reasons individual applicants were selected or not and how award funding amounts were determined. While HUD’s internal memorandums listed the applicants selected and the award amounts, these memorandums did not document the rationale for these decisions or provide information sufficient to help applicants understand award outcomes. Lead Office staff told us that most grantees have received the amount of funding they requested in their applications, which was generally based on HUD’s maximum grant award amount. Lead Office staff said they could use their professional judgment to adjust award amounts to extend funding to more applicants when applicants received similar scores. However, the Lead Office’s documentation we reviewed did not explain this type of decision making. For example, in 2017, when two applicants received identical scores on their applications, HUD awarded each applicant 50 percent of the remaining available funds rather than awarding either applicant the amount they requested. Representatives of one of the two grantees told us they did not know why the Lead Office had not provided them the full amount they had requested. Lead Office staff told us that, to date, HUD has not considered alternative ways to award grant funding amounts. 
By fully documenting grant award processes, including the rationale for award decisions and amounts, HUD could provide greater transparency to grant applicants about its grant award decisions. Evaluating processes. HUD lacks a formal process for reviewing and updating its lead grant funding notices, including the factors and point allocations used to score applications. Federal internal control standards state that agencies should implement control activities through policies and that periodic review of policies and procedures can provide assurance of their effectiveness in achieving the agency’s objectives. Lead Office staff told us that previous changes to the factors and point allocation used to score applicants have been made based on informal discussions among staff. However, the Lead Office does not have a formal process to review and evaluate the relevance and appropriateness of the factors or points used to score applicants. Lead Office staff told us that they have never analyzed the scores applicants received for the factors to identify areas where applicants may be performing well or poorly or to help inform decisions about whether changes may be needed to the factors or points. Additionally, HUD has not changed the threshold criteria used to make award decisions since the threshold was established in 2003. As previously shown in figure 2, applicants who received at least 75 points (out of 100) have been qualified to receive a grant award. However, HUD grant documentation, including the funding notices and the Application Review Guide, does not explain the significance of this 75-point threshold. Lead Office staff stated that this threshold was first established in 2003 by HUD based on OMB guidance. A formal review of this 75-point threshold can help HUD determine whether it remains appropriate for achieving the grant programs’ objectives. 
Furthermore, by periodically evaluating processes for reviewing and scoring grant applications, HUD can better determine whether these processes continue to help ensure that lead grants reach areas of the country at greater risk for lead paint hazards. HUD has begun to develop analyses and tools to inform its efforts to target outreach and ensure that grant awards go to areas of the country that are at risk for lead paint hazards. However, HUD has not developed time frames for incorporating the results of the analyses into its lead grant programs’ processes. HUD has required jurisdictions applying for lead grants to include data on the need or extent of the problem in their jurisdiction (i.e., scoring factor 2). Additionally, Lead Office staff told us that HUD uses information from the American Healthy Homes Survey to obtain information on lead paint hazards across the country. However, the staff explained that the survey was designed to provide meaningful results at the regional level and did not include enough homes in its sample to provide information about housing conditions, such as lead paint hazards, at the state or local level. Because HUD awards lead grants to state and local jurisdictions, it cannot effectively use the survey results to help the agency make award decisions or inform decisions about areas for potential outreach. In early 2017, the Lead Office began working with PD&R to develop a model to identify local jurisdictions (at the census-tract level) that may be at heightened risk for lead paint hazards. Lead Office staff said that they hope to use results of this model to develop geographic tools to help target HUD funding to areas of the country at risk for lead paint hazards but not currently receiving a HUD lead grant. Lead Office staff said that they could reach out to these at-risk areas, help them build the capacity needed to administer a grant, and encourage them to apply. 
For example, HUD has identified that Mississippi and two major metropolitan areas in Florida (Miami and Tampa) had not applied for a lead grant. HUD has conducted outreach to these areas to encourage them to apply for a lead grant. In 2016, the City of Jackson, Mississippi, applied for and received a lead grant. Though the Lead Office has collaborated with PD&R on the model, HUD has not developed specific time frames to operationalize the model and incorporate the results of the model for using local-level data to help better identify areas at risk for lead paint hazards. Federal internal control standards require agencies to define objectives clearly to enable the identification of risks. This includes clearly defining time frames for achieving the objectives. Setting specific time frames could help to ensure that HUD operationalizes this model in a timely manner. By operationalizing a model that incorporates local data on lead paint hazard risk, HUD can better target its limited grant resources towards areas of the country with significant potential for lead hazard control needs. We performed a county-level analysis using HUD and Census Bureau data and found that most lead grants from 2013 through 2017 have gone to counties with at least one indicator of lead paint hazard risk. Information we reviewed, such as relevant literature, suggests that the two common indicators of lead paint hazard risk are the prevalence of housing built before the 1978 lead paint ban and the prevalence of individuals living below the poverty line. We defined areas with lead paint hazard risk as counties that had percentages higher than the corresponding national percentages for both of these indicators. The estimated average percentage nationwide of total U.S. housing stock constructed before 1980 was 56.9 percent and the estimated average percentage nationwide of individuals living below the poverty line was 17.5 percent. 
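The county-level classification described above can be expressed as a simple comparison against the two national benchmarks (56.9 percent pre-1980 housing, 17.5 percent poverty). The sketch below is illustrative only; the county values are invented, and GAO's actual analysis used Census Bureau estimates.

```python
# Illustrative version of the county-level risk classification described
# above: a county is flagged for an indicator when its estimated rate is
# higher than the corresponding national estimate. County values invented.

NATIONAL_PRE1980_HOUSING = 56.9  # percent of housing stock built before 1980
NATIONAL_POVERTY = 17.5          # percent of individuals below the poverty line

def classify_county(pre1980_pct, poverty_pct):
    old_housing = pre1980_pct > NATIONAL_PRE1980_HOUSING
    poverty = poverty_pct > NATIONAL_POVERTY
    if old_housing and poverty:
        return "both indicators"
    if old_housing:
        return "old housing only"
    if poverty:
        return "poverty only"
    return "neither indicator"

# Hypothetical counties:
print(classify_county(68.2, 21.0))  # above both national percentages
print(classify_county(61.5, 12.3))  # above the old-housing percentage only
print(classify_county(40.0, 19.8))  # above the poverty percentage only
```

Note that the comparison is strictly greater-than, matching the report's definition of counties with "percentages higher than the corresponding national percentages."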
As shown in figure 3, our analysis estimated that 18 percent of lead grants from 2013 through 2017 have gone to counties with both indicators above the estimated national percentages, 59 percent of grants have gone to counties with estimated percentages of old housing above the estimated national percentage, and 7 percent of grants have gone to counties that had estimated poverty rates above the estimated national percentage. When HUD finalizes its model and incorporates information into its lead grant processes, HUD will be able to better target its grant resources to areas that may be at heightened risk for lead paint hazards. In 2016, HUD began to incorporate new steps to monitor PHAs’ compliance with lead paint regulations for nearly 4,000 PHAs. Previously, according to PIH staff, HUD required only that PHAs annually self-certify their compliance with lead paint laws and regulations, and that HUD’s Real Estate Assessment Center inspectors check for lead paint inspection reports and disclosure forms at public housing properties during physical inspections. Starting in June 2016, PIH began using new tools for HUD field staff to track PHAs’ compliance with lead paint requirements in the voucher and public housing programs. As shown in figure 4, PIH’s compliance oversight processes for the voucher and public housing programs include various monitoring tools for overseeing PHAs. Key components of PIH’s lead paint oversight processes include the following: Tools for tracking lead hazards and cases of elevated blood lead levels in children.
HUD uses two databases to monitor PHAs’ compliance with lead paint regulations: (1) the Lead-Based Paint Response Tracker, which PIH uses to collect and monitor information on the status of lead paint-related documents, including lead inspection reports and disclosure forms, in public housing properties but not in units with voucher-assisted households; and (2) the Elevated Blood Lead Level Tracker, which PIH uses to collect and monitor information reported by PHAs on cases of elevated blood lead levels in children living in voucher and public housing units. In June 2016, OFO began using the Lead-Based Paint Response Tracker database to store information on public housing units and to help HUD field office staff follow up with PHAs that have properties missing required lead documentation. In July 2017, OFO began using information recorded in the Elevated Blood Lead Level Tracker to track whether PHAs started lead remediation activities in HUD-assisted housing within the time frames required by the Lead Safe Housing Rule. Lead paint hazards included in PHAs’ risk assessment scores. OFO assigns scores to PHAs based on their relative risk in four categories: physical condition, financial condition, management capacity, and governance. OFO uses these scores to identify high- and very high-risk PHAs that will receive on-site full compliance reviews. In July 2017, OFO incorporated data from the Real Estate Assessment Center into the physical condition category of its Risk Assessment Protocol to help account for potential lead paint hazards at public housing properties. Questions about lead paint included as part of on-site full compliance reviews. In fiscal year 2016, HUD field offices began conducting on-site full compliance reviews at high- and very high-risk PHAs as part of HUD’s compliance monitoring program to enhance oversight and accountability of PHAs.
In fiscal year 2017, as part of the reviews, HUD field office staff started using a compliance monitoring checklist to determine if PHAs comply with major HUD rules and to gather additional information on the PHAs. This checklist included lead-related questions that PIH field office staff use to determine whether PHAs meet the requirements in lead paint regulations for both the voucher and public housing programs. In 2016, OFO and HUD field offices began using information from the new monitoring efforts to identify potential noncompliance by PHAs with lead paint regulations and help the PHAs resolve the identified issues. According to HUD data, as of November 2017, the Lead-Based Paint Response Tracker indicated that 9 percent (357) of PHAs were missing both lead inspection reports and lead disclosure forms for one or more properties. There were 973 PHAs missing one of the two required documents. OFO staff told us that they prioritized following up with PHAs that were missing both documents. According to OFO staff, PHAs can resolve potential noncompliance by submitting adequate lead documentation to HUD. OFO staff told us the agency considers missing lead documentation as “potential” noncompliance because PHAs may provide the required documentation or they may be exempt from certain requirements (e.g., HUD-designated elderly housing). While HUD has taken steps to strengthen compliance monitoring processes, it does not have a plan to identify and address the risks of noncompliance by PHAs with lead paint regulations. Federal internal control standards state that agencies should identify, analyze, and respond to risks related to achieving the defined objectives. Furthermore, when an agency has made significant changes to its processes—as HUD has done with its compliance monitoring processes—management review of changes to these processes can help the agency determine that its control activities are designed appropriately. 
Our review found that HUD does not have a plan to help mitigate and address risks related to noncompliance with lead paint regulations by PHAs (i.e., ensuring lead safety in assisted housing). Additionally, our review found several limitations with HUD’s new compliance monitoring approach, which include the following: Reliance on PHA self-certifications. HUD’s compliance monitoring processes rely in part on PHAs self-certifying that they are in compliance with lead paint regulations, but recent investigations have found that some PHAs may have falsely certified that they were in compliance. In November 2017, HUD filed a fraud complaint against two former officials of the Alexander County (Illinois) Housing Authority, alleging that the former officials, among other things, falsely certified to HUD that the Housing Authority was in compliance with lead paint regulations. Further, PIH staff told us there are ongoing investigations related to potential noncompliance with lead paint regulations and false certifications at two other housing authorities. Lack of comprehensive data for the public housing program. OFO started to collect data for the public housing program in the Lead-Based Paint Response Tracker in June 2016, and the inventory of all public housing properties includes units inspected since 2012. In addition, HUD primarily relies on the presence of lead inspection reports but does not record in the database when inspections and remediation activities occurred and does not determine whether they are still effective. Because of this, the information contained in the lead inspection reports may no longer be up-to-date. For example, a lead inspection report from the 1990s may provide evidence that abatement work was conducted at that time, but according to PIH staff, the housing may no longer be lead-safe. Lack of readily available data for the voucher program.
The voucher program does not have readily available data on housing units’ physical condition and compliance with lead paint regulations because data on the roughly 2.5 million units in the program are kept at the PHA level. According to PIH staff, HUD plans to adopt a new system for the voucher program that will include standardized, electronic data for voucher units. PIH staff said the new system (Uniform Physical Condition Standards for Vouchers Protocol) will allow greater oversight and provide HUD the ability to conduct data analysis for voucher units. Challenges identifying children with elevated blood lead levels. For several reasons, PHAs face ongoing challenges receiving information from state and local public health departments on the number of children identified with elevated blood lead levels. First, children across the U.S. are not consistently screened and tested for exposure to lead. Second, according to CDC data, many states use a less stringent health guideline to identify children compared to the health standard that HUD uses (i.e., CDC’s current blood lead reference value). PIH staff told us that some public health departments may not report children with elevated blood lead levels to PHAs because they do not know that a child is living in a HUD-assisted unit and needs to be identified using the more stringent HUD standard. Lastly, Lead Office staff told us that privacy laws in some states may impose restrictions on public health departments’ ability to share information with PHAs. Limited coverage of on-site compliance reviews. While full on-site compliance reviews can be used to determine if PHAs are in compliance with lead paint regulations, OFO conducts a limited number of these reviews annually. For example, in fiscal year 2017, OFO conducted 72 reviews of the roughly 4,000 total PHAs.
Based on OFO information, there are 973 PHAs that are missing either lead inspection reports or lead disclosure forms indicating some level of potential noncompliance. HUD’s steps since June 2016 to enhance monitoring of PHAs’ compliance with lead paint regulations have some limitations that create risks in its new compliance monitoring approach. By developing a plan to help mitigate and address the various limitations associated with the new compliance monitoring approach, HUD could further strengthen its oversight and help ensure that PHAs maintain lead-safe housing units. HUD does not have detailed procedures to address PHA noncompliance with lead paint regulations or to determine when enforcement decisions may be needed. Lead Office staff told us that their enforcement program aims to ensure that PHAs have the information necessary to remain in compliance with lead paint regulations. According to federal internal control standards, agencies should implement control activities through policies and procedures. Effective design of procedures to address noncompliance would include documenting specific actions to be performed by agency staff when deficiencies are identified and related time frames for these actions. While HUD staff stated that they address PHA noncompliance through ongoing communication and technical assistance to PHAs, HUD has not documented specific actions to be performed by staff when deficiencies are identified. OFO staff told us that in general, PIH has not needed to take many enforcement actions because field offices are able to resolve most lead paint regulation compliance concerns with PHAs through ongoing communication and technical assistance. For example, HUD field offices sent letters to PHAs when Real Estate Assessment Center inspectors could not locate required lead inspection reports and lead disclosure forms, and requested that the PHA send the missing documentation within 30 days. 
However, OFO’s fiscal years 2015–2017 internal memorandums on monitoring and oversight guidance for HUD field offices did not contain detailed procedures, including time frames or criteria HUD staff would use to determine when a more formal enforcement action might be warranted. Additionally, Lead Office staff said if efforts to bring a PHA into compliance are unsuccessful, the Lead Office would work in conjunction with PIH and HUD’s Office of General Counsel’s Departmental Enforcement Center to determine if an enforcement action is needed, such as withholding or delaying funds from a PHA or imposing civil money penalties on a PHA. Lead Office staff also told us that instead of imposing a fine on a PHA, HUD would rather work with the PHA to resolve the lead paint hazard. However, the Lead Office provided no documentation detailing the specific steps or time frames HUD staff would follow to determine when a noncompliance case is escalated to the Office of General Counsel. In a March 2018 report to Congress, HUD noted that children continued to test positive for lead in HUD-assisted housing in 2017. In the same report, HUD notes PIH and the Lead Office will continue to work with PHAs to ensure compliance with lead paint regulations. By adopting procedures that clearly describe when lead paint hazard compliance efforts are no longer sufficient and enforcement decisions are needed, HUD can better hold PHAs accountable in a consistent and timely manner.

The standard HUD uses to identify children with elevated blood lead levels and initiate lead hazard control activities in its rental assistance programs aligns with the health guideline set by CDC in 2012. HUD also uses CDC’s health guideline in its lead grant programs.
In HUD’s January 2017 amendment to the Lead Safe Housing Rule, HUD made its standard for lead in a child’s blood more stringent by lowering it from 20 micrograms to 5 micrograms of lead per deciliter of blood, matching CDC’s health guideline (i.e., blood lead reference value). Specifically, HUD’s stronger standard allows the agency to respond more quickly when children under 6 years old are exposed to lead paint hazards in voucher and public housing units. The January 2017 rule also established more comprehensive testing for children and evaluation procedures for HUD-assisted housing. According to HUD’s press release that accompanied the rule, by aligning HUD’s standard with CDC’s guidance, HUD can respond more quickly in cases when a child who lives in HUD-assisted housing shows early signs of lead in their blood. The 2017 rule notes HUD will revise the agency’s elevated blood lead level to align with future changes HHS may make to its recommended environmental intervention level.

HUD’s standards for lead dust levels align with EPA standards for its rental assistance programs and exceed EPA standards for the lead grant programs. In 2001, EPA published a final rule on lead paint hazard standards, including lead dust clearance standards. The rule established standards to help property owners, contractors, and government agencies identify lead hazards in residential paint, dust, and soil and address these hazards in and around homes. Under these standards, lead is considered a hazard when equal to or exceeding 40 micrograms of lead in dust per square foot sampled on floors and 250 micrograms of lead in dust per square foot sampled on interior window sills. In 2004, HUD amended the Lead Safe Housing Rule to incorporate the 2001 EPA lead dust standards as HUD’s standards. Since this time, HUD has used EPA’s 2001 lead hazard standards in its rental assistance programs.
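The dust-hazard criterion described above is a simple threshold test. The sketch below illustrates it; the function name and surface labels are ours, and the threshold values are the EPA 2001 figures cited in this report:

```python
# EPA 2001 lead dust hazard thresholds, in micrograms of lead in dust
# per square foot sampled (values as cited in this report).
EPA_2001_THRESHOLDS = {"floor": 40, "interior_window_sill": 250}

def is_dust_hazard(surface: str, micrograms_per_sqft: float,
                   thresholds: dict = EPA_2001_THRESHOLDS) -> bool:
    """Lead dust is considered a hazard when the sampled level equals
    or exceeds the applicable threshold for that surface."""
    return micrograms_per_sqft >= thresholds[surface]
```

Under these values, a floor sample of exactly 40 micrograms per square foot is a hazard, while one just below is not; a stricter threshold set can be passed in place of the default.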
In February 2017, HUD released policy guidance for its lead grantees requiring them to meet new and more protective requirements for identifying and addressing lead paint hazards in the lead grant programs than those imposed by EPA’s 2001 standards that HUD uses in the rental assistance programs. For example, the policy guidance requires grantees to consider lead dust a hazard on floors at 10 micrograms per square foot sampled (down from 40) and on window sills at 100 micrograms per square foot sampled (down from 250). The policy guidance noted that the new requirements are supported by scientific evidence on the adverse effects of lead exposure at low blood lead levels in children. Further, the policy guidance established a standard for porch floors, an area that EPA has not covered, because porch floors can be both a direct exposure source for children and a source of lead dust that can be tracked into the home. On December 27, 2017, the United States Court of Appeals for the Ninth Circuit ordered EPA to issue a proposed rule updating its lead dust hazard standard and the definition of lead-based paint within 90 days of the decision becoming final and a final rule within 1 year of the proposed rule. Because HUD’s Lead Safe Housing Rule generally defines lead paint hazards and lead dust hazards to mean the levels promulgated by EPA, if EPA changes its 2001 standards, those new standards would be used in HUD’s rental assistance programs. On March 16, 2018, EPA filed a request asking the court to clarify when EPA is required to issue the proposed rule and followed up with a motion seeking clarification or an extension. In response to EPA’s motion, on March 26, 2018, the court issued an order clarifying time frames and ordered that the proposed rule be issued within 90 days from March 26, 2018.

HUD’s Lead Safe Housing Rule requires a stricter lead inspection standard for public housing than for voucher units.
According to HUD staff, HUD does not have the authority to require the more stringent inspection in the voucher program. While HUD has acknowledged that moving to a stricter inspection standard for voucher units would provide greater assurance that these units are lead-safe and expressed its plan to support legislative change to authorize it to impose a more stringent inspection standard, HUD has not requested authority from Congress to amend its inspection standard for the voucher program. For voucher units, HUD requires PHAs to ensure that trained inspectors conduct visual assessments to identify deteriorated paint for housing units inhabited by a child under 6 years old. In a visual assessment, an inspector looks for deteriorated paint and visible surface dust but does not conduct any testing of paint chips or dust samples from surfaces to determine the presence of lead in the home’s paint. By contrast, for public housing units, HUD requires a stronger inspection process. Lead-based paint inspections are required for pre-1978 public housing units. If that inspection identifies lead-based paint, PHAs must then perform a risk assessment. In a risk assessment, in addition to conducting a visual inspection, an inspector tests for the presence of lead paint by collecting and testing samples of paint chips and surface dust and typically by using a specialized device (an X-ray fluorescence analyzer) to measure the amount of lead in the paint on a surface, such as a wall, door, or window sill. Staff from HUD’s Lead Office and the Office of General Counsel told us that Title X did not include specific risk assessment requirements for voucher units, and HUD does not believe, therefore, that it has the statutory authority to require an assessment more thorough than a visual assessment of voucher units. As of May 2018, HUD had not requested statutory authority to change the visual assessment standard used in the voucher program.
However, HUD previously acknowledged the limitation of the weaker inspection standard in a June 2016 publication titled Lead-Safe Homes, Lead-Free Kids Toolkit. In this publication, HUD noted its plans to support legislative change to strengthen lead safety in voucher units by eliminating reliance on visual-only inspections. Staff from HUD’s Lead Office and Office of General Counsel told us the agency recognizes that risk assessments are more comprehensive than visual assessments. The staff noted that, by definition, a risk assessment is a stronger inspection standard than a visual-only assessment because it includes additional identification and testing. In responding to a draft of this report, HUD cited the need to conduct and evaluate the results of a statistically rigorous study on the impacts of requiring a lead risk assessment versus a visual assessment, such as the impact on leasing times and the availability of housing for low-income families. HUD further noted that such a study could explore whether alternative options to the full risk assessment standard (such as targeted dust sampling) could achieve similar levels of protection for children in the voucher program. Requesting and obtaining authority to amend the standard for the voucher program would not preclude HUD from doing such a study. Such analysis might support a range of options based on consideration of health effects for children, housing availability, and other relevant factors. Because HUD’s Lead Safe Housing Rule contains a weaker lead inspection standard for the voucher program, children living in voucher units may be less protected from lead paint hazards than children living in public housing. By requesting and obtaining statutory authority to amend the voucher program inspection standard, HUD would be positioned to take steps to ensure that children in the voucher program are provided better protection as indicated by analysis of the benefits and costs of amending the standard.
HUD has taken limited steps to measure, evaluate, and report on the performance of its programmatic efforts to ensure that housing is lead-safe. First, HUD has tracked one performance measure for its lead grant programs but lacks comprehensive performance goals and measures. Second, while HUD has evaluated the effectiveness of its Lead-Based Paint Hazard Control grant program, it has not formalized plans and does not have a time frame for evaluating its lead paint regulations. Third, HUD has not issued an annual report on the results of its lead efforts since 1997. A key aspect of promoting improved federal management and greater efficiency and effectiveness is that agencies set goals and report on performance. We have previously reported that a program performance assessment contains three key elements: program goals, performance measures, and program evaluations (see fig. 5). In our prior work, we have noted that both the executive branch and congressional committees need evaluative information to help them make decisions about the programs they oversee: information that tells them whether, and why, a program is working well or not.

Program goals and performance measures. HUD has tracked one performance measure for making private housing units lead-safe as part of its lead grant programs but lacks goals and performance measures that more fully cover the range of its lead efforts. In addition to our prior work on program goals and performance measures, federal internal control standards state that management should define objectives clearly and that defining objectives in measurable terms allows agency management to assess performance toward achieving objectives. According to Lead Office staff, HUD provides information on its goals and performance measures related to its lead efforts in the agency’s annual performance reports.
For example, the fiscal year 2016 report contains information about the number of private housing units made lead-safe as part of HUD’s lead grant programs but does not include any performance measures on HUD’s lead efforts for the voucher and public housing programs. Lead Office staff told us HUD does not have systems to count the number of housing units made lead-safe in these two housing programs. The staff said the Lead Office and PIH recently began discussing whether data from an existing HUD database could be used to count units made lead-safe within these programs. However, they could not provide additional details on the status of these efforts. Without comprehensive goals and performance measures, HUD does not know the results it is achieving with all its lead paint hazard reduction efforts. Moreover, HUD may be missing opportunities to use performance information to improve the results of its lead efforts.

Program evaluations. HUD has evaluated the effectiveness of its Lead-Based Paint Hazard Control grant program but has not taken similar steps to evaluate the Lead Safe Housing Rule or Lead Disclosure Rule. As previously stated, our prior work on program performance assessment has noted the importance of program evaluations to know how well a program is working relative to its objectives. Additionally, Title X required HUD to conduct research to evaluate the long-term cost-effectiveness of interim lead hazard control and abatement strategies. For its Lead-Based Paint Hazard Control grant program, HUD has contracted with outside experts to conduct evaluations. For example, the National Center for Healthy Housing and the University of Cincinnati’s Department of Environmental Health evaluated whether the lead hazard control methods used by grantees continued to be effective 1, 3, 6, and 12 years later.
The evaluations concluded that the lead hazard control activities used by grantees substantially reduced lead dust levels; the original evaluation and those completed 1 and 3 years later also found substantial declines in the blood lead levels of children living in the housing remediated using lead grant program funds. HUD has general plans to conduct evaluations of the Lead Safe Housing Rule and the Lead Disclosure Rule, but Lead Office and PD&R staff said they did not know when or if the studies will begin. In a 2016 publication, HUD noted its plans to evaluate the Lead Safe Housing Rule requirements and noted that such an evaluation would contribute toward policy recommendations and program improvements. Additionally, in its 2017 Research Roadmap, PD&R outlined HUD’s plans for two studies to evaluate the effectiveness of requirements within the Lead Safe Housing and Lead Disclosure Rules. However, PD&R and Lead Office staff were not able to provide a time frame for when the studies would begin. PD&R staff told us that the plans noted within the Research Roadmap were HUD’s first step in research planning and prioritization but that appropriations for research have been prescriptive in recent years (i.e., tied to specific research topics) and fell short of the agency’s research needs. By studying the effectiveness of requirements included within the Lead Safe Housing and Lead Disclosure Rules, including the cost-effectiveness of the various lead hazard control methods, HUD could have more complete information to assess how effectively it uses federal dollars to make housing units lead-safe.

Reporting. HUD has not reported on its lead efforts as required since 1997. Title X includes annual and biennial reporting requirements for HUD.
Staff from HUD’s Lead Office and Office of General Counsel told us that in 1998 the agency agreed with the congressional committees of jurisdiction that HUD could satisfy this reporting requirement by including the required information in its annual performance reports. Lead Office staff told us HUD’s recent annual performance reports do not contain specific information required by law and that HUD has not issued other publicly available reports that contain the Title X reporting requirements. Title X requires HUD to annually provide Congress information on its progress in implementing the lead grant programs; a summary of studies looking at the incidence of lead poisoning in children living in HUD-assisted housing; the results of any required lead technical studies; and estimates of federal funds spent on lead hazard evaluation and reduction in HUD-assisted housing. As previously stated, the annual performance reports have provided information on the number of housing units made lead-safe through the agency’s lead grant programs, but not through the voucher or public housing programs. In March 2018, Lead Office staff told us HUD plans to submit separate reports on the agency’s lead efforts, covering the Title X reporting requirements, starting in fiscal year 2019. By complying with Title X statutory reporting requirements, HUD would better position Congress and the public to know the progress it is making toward ensuring that housing is lead-safe.

Lead exposure can cause serious, irreversible cognitive damage that can impair a child for life. Through its lead grant programs and oversight of lead paint regulations, HUD is helping to address lead paint hazards in housing. However, our review identified specific areas where HUD could improve the effectiveness of its efforts to identify and address lead paint hazards and protect children in low-income housing from lifelong health problems:

Documenting and evaluating grant processes.
HUD could improve documentation for its lead grant programs’ processes by providing more specific direction to staff and documenting grant award rationale. In doing so, HUD could better ensure that grant program staff score grant applications consistently and appropriately and provide greater transparency about its award decisions. Additionally, periodically evaluating its grant processes and procedures could help HUD better ensure that its lead grants reach areas most at risk for lead paint hazards.

Identifying areas at risk for lead hazards. By developing specific time frames to finalize and incorporate the results of its model to more fully identify areas at risk for lead paint hazards, HUD can better identify and conduct outreach to at-risk localities that its lead grant programs have not yet reached.

Overseeing compliance with lead paint regulations. False self-certifications of compliance by some PHAs and other limitations in HUD’s compliance monitoring approach make it essential for HUD to develop a plan to mitigate and address limitations, as well as establish procedures to determine when enforcement decisions are needed. These actions could further strengthen HUD’s oversight and keep PHAs accountable for ensuring that housing units are lead-safe.

Amending the inspection standard in the voucher program. Children living in voucher units may receive less protection from lead paint hazards than children living in public housing units because HUD applies different lead inspection standards to the two programs. HUD could ensure that children in the voucher program are provided better protection from lead by requesting and obtaining statutory authority to amend the voucher program inspection standard as indicated by analysis of the benefits and costs of amending the standard.

Assessing and reporting on performance.
Fully incorporating key elements of performance assessment—by developing comprehensive goals, improving performance measures, and adhering to reporting requirements—could better enable HUD to assess its own progress and target its resources toward lead efforts that maximize impact. Additionally, HUD may be missing opportunities to inform Congress and the public about how HUD’s lead efforts have helped reduce lead poisoning in children.

We are making the following nine recommendations to HUD:

The Director of HUD’s Lead Office should ensure that the office more fully documents its processes for scoring and awarding lead grants and its rationale for award decisions. (Recommendation 1)

The Director of HUD’s Lead Office should ensure that the office periodically evaluates its processes for scoring and awarding lead grants. (Recommendation 2)

The Director of HUD’s Lead Office, in collaboration with PD&R, should set time frames for incorporating relevant data on lead paint hazard risks into the lead grant programs’ processes. (Recommendation 3)

The Director of HUD’s Lead Office and the Assistant Secretary for PIH should collaborate to establish a plan to mitigate and address risks within HUD’s lead paint compliance monitoring processes. (Recommendation 4)

The Director of HUD’s Lead Office and the Assistant Secretary for PIH should collaborate to develop and document procedures to ensure that HUD staff take consistent and timely steps to address issues of PHA noncompliance with lead paint regulations. (Recommendation 5)

The Secretary of HUD should request authority from Congress to amend the inspection standard to identify lead paint hazards in the Housing Choice Voucher program as indicated by analysis of health effects for children, the impact on landlord participation in the program, and other relevant factors. (Recommendation 6)

The Director of the Lead Office should develop performance goals and measures to cover the full range of HUD’s lead efforts, including its efforts to ensure that housing units in its rental assistance programs are lead-safe. (Recommendation 7)

The Director of the Lead Office, in conjunction with PD&R, should finalize plans and develop a time frame for evaluating the effectiveness of the Lead Safe Housing and Lead Disclosure Rules, including an evaluation of the long-term cost-effectiveness of the lead remediation methods required by the Lead Safe Housing Rule. (Recommendation 8)

The Director of the Lead Office should complete statutory reporting requirements, including but not limited to its efforts to make housing lead-safe through its lead grant programs and rental assistance programs, and make the report publicly available. (Recommendation 9)

We provided a draft of this report to HUD for review and comment. We also provided the relevant excerpts of the draft report to CDC and EPA for their review and technical comments. In written comments, reproduced in appendix III, HUD disagreed with one of our recommendations and generally agreed with the remaining eight. HUD and CDC also provided technical comments, which we incorporated as appropriate. EPA did not have any comments on the relevant excerpts of the draft report provided to them. In its general comments, HUD noted that the lead grant programs and HUD’s compliance assistance and enforcement of lead paint regulations have contributed significantly to, among other things, the low prevalence of lead-based paint hazards in HUD-assisted housing. Further, HUD said the lead grant programs and compliance assistance and enforcement of lead paint regulations have played a critical part in developing and maintaining the national lead-based paint safety infrastructure. HUD asked that this contextual information be included in the background of the report.
The draft report included detailed information on the purpose and scope of HUD’s lead grant programs, two key regulations related to lead paint hazards, and efforts to make housing lead-safe. Furthermore, the draft report provided context on other federal agencies’ roles in establishing relevant standards and guidelines for lead paint hazards. We made no changes in response to this comment because we did not think additional background information was necessary. HUD disagreed with the draft report’s sixth recommendation to request authority from Congress to use the risk assessment inspection standard to identify lead paint hazards in the Housing Choice Voucher program. As discussed in the report, HUD’s Lead Safe Housing Rule requires a more stringent lead inspection standard (risk assessments) for public housing than for Housing Choice Voucher units, for which a weaker inspection standard is used (visual assessments). In its written comments, HUD said that before deciding whether to request the statutory authority to implement risk assessments for voucher units, it would need to conduct and evaluate the results of a statistically rigorous study on the impacts of requiring a lead risk assessment versus a visual assessment, such as the impact on leasing times and the availability of housing for low-income families. HUD further noted that such a study could explore whether alternative options to the full risk assessment standard (such as targeted dust sampling) could achieve similar levels of protection for children in the voucher program. We note that requesting and obtaining authority to amend the standard for the Housing Choice Voucher program would not preclude HUD from doing such a study. We acknowledge that the results of such a study might support a range of options.
Therefore, we revised our recommendation to provide HUD with greater flexibility in how it might amend the lead inspection standard for the voucher program based on consideration of not only leasing time and availability of housing, as HUD emphasized in its written comments, but also the health effects on children. The need for HUD to review the lead inspection standard for the voucher program is underscored by the greater number of households with children served by the voucher program compared to public housing, as well as recent information indicating that more children with elevated blood lead levels are living in voucher units than in public housing. HUD generally agreed with our remaining eight recommendations and provided specific information about planned steps and other considerations related to implementing them. For example, in response to our first three recommendations on the lead grant programs, HUD outlined specific steps it plans to take, such as updating its guidance for scoring grant applications and reviewing its grant application scoring methods to identify potential improvements. In response to our fourth and fifth recommendations to the Director of HUD’s Lead Office on compliance monitoring and enforcement of lead paint regulations, HUD noted that PIH should be the primary office for these recommendations, with the Lead Office providing support. While these recommendations had already recognized the need for the Lead Office to collaborate with PIH, we reworded them to clarify that it is not necessary for the Lead Office to have primary responsibility for their implementation. HUD generally agreed with our seventh and eighth recommendations but noted some considerations for implementing them.
For our seventh recommendation, about performance goals and measures, HUD noted that it will re-examine the availability of information from the current housing databases to determine whether data on housing unit production can be added to the existing data collected. HUD noted that if that information is not sufficient, it would need to obtain Office of Management and Budget approval and have sufficient funds for such an information technology project. For our eighth recommendation, about evaluating the Lead Safe Housing and Lead Disclosure Rules, HUD noted that if its own resources are insufficient, the time frame for implementing this recommendation may depend on the availability of funding for contracted resources. Finally, in response to our ninth recommendation, HUD said that it will draft and submit annual and biennial reports to the congressional authorizing and appropriations committees and then post the reports on the Lead Office’s public website.

We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Housing and Urban Development, the Administrator of the Environmental Protection Agency, the Secretary of Health and Human Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Under the Department of Housing and Urban Development’s (HUD) Lead-Based Paint Hazard Control and Lead Hazard Reduction Demonstration grant programs, HUD competitively awards grants to state and local jurisdictions, as authorized by the Residential Lead-Based Paint Hazard Reduction Act (Title X of the Housing and Community Development Act of 1992). Title X requires each grant recipient to make matching contributions with state, local, and private funds (i.e., nonfederal funds) toward the total cost of activities. For the Lead-Based Paint Hazard Control grant and Lead Hazard Reduction Demonstration grant programs, the matching contribution has been set at no less than 10 percent and 25 percent, respectively, of the total grant amount. For example, if the total grant amount is $3 million, then state or local jurisdictions must provide at least $300,000 and $750,000, respectively, for each grant program, in additional funding toward the cost of activities. HUD requires lead grant applicants to include information on the sources and amounts of grantees’ matching contributions as part of their grant applications. Additionally, Title X requires HUD to award grants in part based on an applicant’s ability to leverage state, local, and private funds to supplement the federal grant funds.

To identify the nonfederal funding sources grantees used in the lead hazard control grants, we selected and reviewed the lead grant applications of 20 HUD grantees and interviewed representatives from 10 of these. We selected these grantees based on their geographic locations; the number of HUD lead grants they had previously received; experience with HUD’s lead hazard control grants; and whether they had received both grants from 2013 through 2017. Grantees we selected included entities at the state, municipality, and county levels. Information from our grant application reviews and interviews of grantees cannot be generalized to all HUD grantees.
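The minimum-match arithmetic described above can be sketched as a small illustration (the function and variable names are ours, not HUD's):

```python
def minimum_match(total_grant_dollars: int, match_percent: int) -> int:
    """Minimum nonfederal matching contribution under Title X:
    at least 10 percent of the total grant amount for the Lead-Based
    Paint Hazard Control program, or 25 percent for the Lead Hazard
    Reduction Demonstration program."""
    return total_grant_dollars * match_percent // 100

# The $3 million example from the text:
print(minimum_match(3_000_000, 10))  # 300000 (Hazard Control)
print(minimum_match(3_000_000, 25))  # 750000 (Demonstration)
```

Integer dollars and integer percentages keep the computation exact; the stated percentages are floors, so grantees may contribute more than these minimums.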
Based on our review of the selected grant applications and interviews of selected grantees, we found that grantees planned to use the following types of nonfederal funding sources as their matching contributions to support their lead grant activities:

State and local funds. Eighteen of the 20 grantees we selected noted that they planned to use state or local funding sources to supplement HUD’s grant funds. The state and local funding sources included state or local general funds and local property taxes or fees. For example, grantees in Connecticut, Baltimore, and Philadelphia used state or local general funds to cover personnel and operating costs. Additionally, grantees in Alameda County (California), Hennepin County (Minnesota), Malden, St. Louis, and Winnebago County (Illinois) planned to use local taxes, including property taxes, or fees, such as real estate recording and building permit fees, to cover some costs associated with their lead hazard control grant activities.

Community Development Block Grant funds. Ten of the 20 grantees we selected indicated that they planned to use Community Development Block Grant (CDBG) program funds to cover part of the costs of their lead hazard control grants. CDBG program funds can be used by states and local communities for housing, economic development, neighborhood revitalization, and other community development activities. For example, grantees in Baltimore and Memphis noted in their grant applications that they planned to use the funds to cover costs related to personnel, operations, and training.

Nongovernmental contributions or discounts. Eight of the 20 grantees we selected stated that they anticipated some form of nongovernmental contributions from nonprofit organizations or discounts from contractors to supplement the lead grants. For example, all eight grantees stated that they expected to receive matching contributions from nonprofit organizations.
Table 2 summarizes the nonfederal funds by source that the 20 selected grantees planned to use, based on our review of these grantees’ applications. Furthermore, almost all of the selected grantees stated in their grant applications or told us that they expected to receive or had received other nonfederal funds in excess of their matching contributions. For example, 15 grantees stated that they generally required or encouraged property owners or landlords to contribute toward the lead hazard remediation costs. Also, grantees in Baltimore, the District of Columbia, Lewiston, and Providence indicated that they expected to receive monetary or in-kind donations from organizations to help carry out lead hazard remediation, blood lead-level testing, or training. Additionally, the grantee in Alameda County (California) told us that it had received nonfederal funds from a litigation settlement with a private paint manufacturer.

This report examines the Department of Housing and Urban Development’s (HUD) efforts to (1) incorporate statutory requirements and other relevant federal standards in its lead grant programs; (2) monitor and enforce compliance with lead paint regulations for its rental assistance programs; (3) adopt federal health guidelines and environmental standards for lead hazards in its lead grant and rental assistance programs; and (4) measure and report on its performance related to making housing lead-safe. In this report, we examine lead paint hazards in housing, and we focus on HUD’s lead hazard control grant programs and its two largest rental assistance programs that serve the most families with children: the Housing Choice Voucher (voucher) and public housing programs.
To address all four objectives, we reviewed relevant laws, such as the Residential Lead-Based Paint Hazard Reduction Act (Title X of the Housing and Community Development Act of 1992, referred to as Title X throughout this appendix), and relevant HUD regulations, such as the Lead Safe Housing Rule and a January 2017 amendment to this rule. To examine trends in funding for HUD’s lead grant programs for the past 10 years, we also reviewed HUD’s budget information for fiscal years 2008 through 2017. We interviewed HUD staff from the Office of Lead Hazard Control and Healthy Homes (Lead Office), Office of Public and Indian Housing (PIH), Office of Policy Development and Research (PD&R), and other relevant HUD program and field offices. Finally, we reviewed our prior work and that of HUD’s Office of Inspector General.

To address the first objective, we reviewed HUD’s Notices of Funding Availability (funding notices), policies, and procedures to identify HUD’s grant award processes for the Lead-Based Paint Hazard Control grant and Lead Hazard Reduction Demonstration grant programs. For example, we reviewed HUD’s annual notices of funding availability from 2013 through 2017 to identify HUD’s scoring factors for evaluating grant applications. We compared HUD’s grant award processes in 2017 with Title X statutory requirements, the Office of Management and Budget (OMB) requirements for awarding federal grants, and relevant federal internal control standards. We also interviewed HUD staff about the agency’s grant application review and award processes. To determine the extent to which HUD’s grants have gone to counties in the United States potentially at high risk for lead paint hazards, we compared grantee locations from HUD’s lead grant data for grants awarded from 2013 through 2017 with county-level data on two indicators of lead paint hazard risk from the 2011–2015 American Community Survey—a continuous survey of households conducted by the U.S. Census Bureau. 
We analyzed HUD’s grant data to determine the number and dollar amount of grants received by each grantee, and the grantees’ addresses. We then conducted a geographic analysis to determine whether each HUD lead grant went to a county that met at least one, both, or neither of the two commonly known indicators of lead paint hazard risk—the age of housing and poverty level. We identified these two indicators through a review of relevant academic literature, agency research, and state lead modeling methodologies. We used data from the 2011–2015 American Community Survey because the data covered a time frame that best aligned with the 5 years of lead grant data (2013 through 2017). Using its county-level data, we calculated an estimated average percentage nationwide of housing units built before 1980 (56.9 percent) and an estimated average percentage nationwide of individuals living below the poverty level (17.5 percent). We used 1980 as a benchmark for age of housing because the American Community Survey data for age of housing are separated by the decade of construction and 1980 was closest in time to the 1978 federal lead paint ban. We categorized counties based on whether their levels of pre-1980 housing and poverty were above one, both, or neither of the respective national average percentages for each indicator. The estimated average nationwide and county-level percentages of the two indicators (i.e., pre-1980 housing and poverty rate) are expressed as a range of values. For the lower and upper ends of the range, we generated a 95 percent confidence interval that was within plus or minus 20 percentage points. We classified a county as above the estimated average percentages nationwide if the county’s confidence interval was higher and did not overlap with the nationwide estimate’s confidence interval. We omitted the data for 12 counties that we determined were unreliable for our purposes. 
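The county classification described above can be sketched in a few lines of code. The sketch below is a hypothetical illustration only: the confidence intervals and county values are made up and are not the actual American Community Survey estimates.

```python
# Hypothetical sketch of the county classification method described above.
# All interval values below are illustrative, not actual ACS estimates.

def above_national(county_ci, national_ci):
    """A county counts as above the national estimate only if its 95%
    confidence interval is entirely higher, i.e., the county's lower
    bound exceeds the national estimate's upper bound (no overlap)."""
    return county_ci[0] > national_ci[1]

def risk_category(housing_above, poverty_above):
    """Bucket a county by how many risk indicators it exceeds."""
    if housing_above and poverty_above:
        return "both indicators"
    if housing_above or poverty_above:
        return "one indicator"
    return "neither indicator"

# Illustrative national 95% confidence intervals around the estimated
# averages for pre-1980 housing (56.9 percent) and poverty (17.5 percent).
national_housing_ci = (55.0, 58.8)
national_poverty_ci = (16.5, 18.5)

# One illustrative county: older housing is clearly above the national
# range, while poverty is not.
county = {"pre1980_ci": (62.0, 66.0), "poverty_ci": (15.0, 17.0)}
category = risk_category(
    above_national(county["pre1980_ci"], national_housing_ci),
    above_national(county["poverty_ci"], national_poverty_ci),
)
```

Because a county must clear the upper bound of the national estimate's confidence interval, counties whose intervals merely overlap the national range are conservatively treated as not above the national average.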
We analyzed data starting in 2013 because that was the first year for which these grant data were available electronically. We also interviewed HUD staff to understand their efforts and plans to perform similar analyses using indicators of lead paint hazard risk. To assess the reliability of HUD’s grant data, we reviewed documentation of HUD’s grant database, interviewed Lead Office staff on the processes HUD used to collect and ensure the reliability of the data, and tested the data for missing values, outliers, and obvious errors. To assess the reliability of the American Community Survey data, we reviewed statistical information from the Census Bureau and other publicly available documentation on the survey and conducted electronic testing of the data. We determined that the HUD grant data and American Community Survey county-level data on age of housing and poverty were sufficiently reliable for identifying areas at risk of lead paint hazards and determining the extent to which lead grants from 2013 through 2017 have gone to at-risk areas. Furthermore, to obtain information about how HUD works with grantees to achieve program objectives, we conducted in-person site visits to five grantees located in five localities (Alameda County, California; Atlanta, Georgia; Baltimore, Maryland; District of Columbia; and San Francisco, California); and interviewed an additional five grantees on the telephone (Hennepin County, Minnesota; Lewiston, Maine; Malden, Massachusetts; Providence, Rhode Island; and Winnebago County, Illinois). In addition, we reviewed the grant applications of the 10 grantees we spoke to and an additional 10 grantees from 10 additional jurisdictions (State of Connecticut; Cuyahoga County, Ohio; Denver, Colorado; Monroe County, New York; Philadelphia, Pennsylvania; Memphis, Tennessee; San Antonio, Texas; St. Louis, Missouri; Tucson, Arizona; and State of Vermont). 
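The electronic testing for missing values, outliers, and obvious errors mentioned above might look like the following minimal sketch. The grant records, field names, and plausibility thresholds are hypothetical and do not come from HUD's actual grant database.

```python
# Minimal sketch of data reliability testing (missing values, outliers,
# obvious errors). Records and thresholds below are hypothetical.

grants = [
    {"grantee": "City A", "year": 2014, "amount": 2_500_000},
    {"grantee": "City B", "year": 2016, "amount": 3_100_000},
    {"grantee": "City C", "year": None, "amount": 2_900_000},    # missing year
    {"grantee": "City D", "year": 2015, "amount": 250_000_000},  # implausible
]

def reliability_flags(record, valid_years=range(2013, 2018),
                      max_plausible=10_000_000):
    """Return a list of data quality problems found in one record."""
    flags = []
    if record["year"] is None or record["year"] not in valid_years:
        flags.append("missing or out-of-range year")
    if record["amount"] is None or not (0 < record["amount"] <= max_plausible):
        flags.append("implausible amount")
    return flags

# Collect only the records that fail at least one check.
flagged = {}
for record in grants:
    flags = reliability_flags(record)
    if flags:
        flagged[record["grantee"]] = flags
```

In practice such checks would run over the full dataset, and flagged records would be followed up with the agency rather than silently dropped.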
We selected the 10 grantees for site visits or interviews based on the following criteria: geographic variation, the number of years the grantees had received HUD lead grants, and whether the grantees had received both types of lead grants from 2013 through 2017. We selected the 10 additional grantees’ applications for review based on geographic diversity and to achieve a total of two applications for each year during our 5-year time frame, with at least one application from each of the two HUD lead grant programs. As part of our review of selected grant applications, we identified nonfederal funding sources used by grantees, such as local tax revenues, contractor discounts, and property owner contributions. Information from the selected grantees and grant application review cannot be generalized to those grantees we did not include in our review. Additionally, we interviewed representatives from housing organizations to obtain additional examples of any nonfederal funding sources, such as state or local bond measures, or low-interest loans to homeowners.

To address the second objective, we also reviewed HUD guidance and internal memorandums related to its efforts to monitor and enforce compliance with lead paint regulations for public housing agencies (PHA), the entities that manage HUD’s voucher and public housing rental assistance programs. In addition, we reviewed HUD’s documentation of databases it uses to monitor compliance, including the Lead-Based Paint Response Tracker and the Elevated Blood Lead Level Tracker, and observed HUD staff’s demonstrations of these databases. HUD staff also provided a demonstration of the Record and Process Inspection Data database (known as “RAPID”) used by HUD’s Real Estate Assessment Center to collect physical inspection data for public housing units. We obtained and reviewed information from HUD about instances of potential noncompliance with lead paint regulations by PHAs as of November 2017 and enforcement actions HUD has taken. 
We compared HUD’s regulatory compliance monitoring and enforcement approach to federal internal control standards. We interviewed staff from HUD’s Lead Office, Office of General Counsel, Office of Field Operations, and field staff, including four HUD regional directors in areas of the country known to have a high prevalence of lead paint hazards, about internal procedures for monitoring and enforcing compliance with lead paint regulations by the PHAs within their respective regions. To address the third objective on HUD’s adoption of federal health guidelines and environmental standards for lead paint hazards in its lead grant and rental assistance programs, we reviewed relevant rules and HUD documentation. To identify relevant federal health guidelines and environmental standards, we reviewed guidelines and regulations from the Centers for Disease Control and Prevention (CDC) and the Environmental Protection Agency (EPA) and interviewed staff from each agency. To identify state and local laws with different requirements than these federal guidelines and standards, we obtained information from and interviewed staff from CDC’s Public Health Law Program and the National Conference of State Legislatures. We compared HUD’s requirements to CDC’s health guideline known as the “blood lead reference value” and EPA’s standards for lead-based paint hazards and lead-dust clearance standards. Finally, we reviewed information in HUD’s 2017 funding notices and lead grant programs’ policy guidance about requirements for grantees as they pertain to health guidelines and environmental standards. We also interviewed HUD staff about how HUD has used the findings from lead technical study grants to consider changes to HUD’s requirements and processes regarding identifying and addressing lead paint hazards for the grant programs. To address the fourth objective, we reviewed HUD documentation related to performance goals and measures, program evaluations, and reporting. 
For example, we reviewed HUD’s recent annual performance reports to identify goals and performance measures related to HUD’s efforts to make housing lead-safe. Further, we reviewed Title X to identify requirements related to evaluating and reporting on HUD’s lead efforts. We reviewed program evaluations and related studies completed by outside experts for the lead grant programs and interviewed staff from one of the organizations that conducted the evaluations. In addition, we interviewed Lead Office and PD&R staff about the agency’s plans to evaluate the requirements in the Lead Safe Housing Rule and reviewed corresponding agency documentation about these plans. Additionally, we reviewed the Lead Office’s most recent strategic plan (2009) and annual report (1997) on the agency’s lead efforts. We compared HUD’s use of performance goals and measures, program evaluations, and reporting against leading practices for assessing program performance and federal internal control standards. Finally, we interviewed staff from HUD to understand the goals and performance measures used by the agency to assess its lead efforts.

We conducted this performance audit from March 2017 to June 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, John Fisher (Assistant Director), Beth Faraguna (Analyst in Charge), Enyinnaya David Aja, Farah Angersola, Carol Bray, William R. Chatlos, Anna Chung, Melinda Cordero, Elizabeth Dretsch, Christopher Lee, Marc Molino, Rebecca Parkhurst, Tovah Rom, Tyler Spunaugle, and Sonya Vartivarian made key contributions to this report.
|
Lead paint in housing is the most common source of lead exposure for U.S. children. HUD awards grants to state and local governments to reduce lead paint hazards in housing and oversees compliance with lead paint regulations in its rental assistance programs. The 2017 Consolidated Appropriations Act, Joint Explanatory Statement, includes a provision that GAO review HUD’s efforts to address lead paint hazards. This report examines HUD’s efforts to (1) incorporate statutory requirements and other relevant federal standards in its lead grant programs, (2) monitor and enforce compliance with lead paint regulations in its rental assistance programs, (3) adopt federal health guidelines and environmental standards for its lead grant and rental assistance programs, and (4) measure and report on the performance of its lead efforts. GAO reviewed HUD documents and data related to its grant programs, compliance efforts, performance measures, and reporting. GAO also interviewed HUD staff and some grantees.

The Department of Housing and Urban Development’s (HUD) lead grant and rental assistance programs have taken steps to address lead paint hazards, but opportunities exist for improvement. For example, in 2016, HUD began using new tools to monitor how public housing agencies comply with lead paint regulations. However, HUD could further improve efforts in the following areas:

Lead grant programs. While its recent grant award processes incorporate statutory requirements on applicant eligibility and selection criteria, HUD has not fully documented or evaluated these processes. For example, HUD’s guidance is not sufficiently detailed to ensure consistent and appropriate grant award decisions. Better documentation and evaluation of HUD’s grant program processes could help ensure that lead grants reach areas at risk of lead paint hazards. Further, HUD has not developed specific time frames for using available local-level data to better identify areas of the country at risk for lead paint hazards, which could help HUD target its limited resources.

Oversight. HUD does not have a plan to mitigate and address risks related to noncompliance with lead paint regulations by public housing agencies. We identified several limitations with HUD’s monitoring efforts, including reliance on public housing agencies’ self-certifying compliance with lead paint regulations and challenges identifying children with elevated blood lead levels. Additionally, HUD lacks detailed procedures for addressing noncompliance consistently and in a timely manner. Developing a plan and detailed procedures to address noncompliance with lead paint regulations could strengthen HUD’s oversight of public housing agencies.

Inspections. The lead inspection standard for the Housing Choice Voucher program is less strict than that of the public housing program. By requesting and obtaining statutory authority to amend the standard for the voucher program, HUD would be positioned to take steps to better protect children in voucher units from lead exposure as indicated by analysis of benefits and costs.

Performance assessment and reporting. HUD lacks comprehensive goals and performance measures for its lead reduction efforts. In addition, it has not complied with annual statutory reporting requirements, last reporting as required on its lead efforts in 1997. Without better performance assessment and reporting, HUD cannot fully assess the effectiveness of its lead efforts.

GAO makes nine recommendations to HUD, including to improve lead grant program and compliance monitoring processes, request authority to amend its lead inspection standard in the voucher program, and take additional steps to report on progress. HUD generally agreed with eight of the recommendations. 
HUD disagreed that it should request authority to use a specific, stricter inspection standard. GAO revised this recommendation to allow HUD greater flexibility to amend its current inspection standard as indicated by analysis of the benefits and costs.
|
Cobra Dane and other radar systems can provide capabilities that contribute to a range of missions, such as ballistic missile defense, space surveillance, and intelligence-gathering missions. DOD uses Cobra Dane and other radar systems to provide timely information to ground-based interceptors so they can hit their targets. Such radar systems contribute to ballistic missile defense by tracking incoming missile threats, classifying the missile threat, and determining if a threat was intercepted successfully. In addition, some radar systems can provide discrimination capabilities, which allow the radar to identify a warhead when a missile threat simultaneously deploys decoys. Radar systems can also have the capability to contribute to a space surveillance mission, which provides an awareness of space objects within or near the Earth’s orbit and their movements, capabilities, and intent. Finally, radars can also contribute intelligence-gathering capabilities. Each radar system’s ability to contribute to various missions can depend on that radar’s inherent capabilities and physical location. See table 1 for a description of selected radar systems that can provide some or all of these capabilities.

Various offices within the Air Force, in coordination with MDA, are responsible for the operation and sustainment of the Cobra Dane radar. Since 2013, Air Force Space Command has overseen the operation of Cobra Dane and contributes to the sustainment of Cobra Dane’s site at Shemya Island. The Air Force Life Cycle Management Center has overall responsibility for the sustainment of the Cobra Dane radar. In addition, MDA works in coordination with the Air Force and combatant commands to develop, test, and field ballistic missile defense assets. MDA also shares funding with the Air Force to operate and sustain Cobra Dane. U.S. Northern Command and U.S. 
Strategic Command define priorities for the overall radar infrastructure and establish the various missions that those radar systems are intended to meet. U.S. Northern Command oversees the homeland ballistic missile defense mission and establishes operational objectives for radar systems operating in its region. U.S. Northern Command officials told us that they are the end user for Cobra Dane. U.S. Strategic Command has established a ballistic missile defense and a space surveillance mission, both of which are supported by Cobra Dane. Further, U.S. Strategic Command’s components coordinate global missile defense and space operations planning.

In its January 2018 report to Congress, the Air Force described how Cobra Dane and LRDR can meet mission requirements through their shared and unique capabilities, as well as how their locations affect their ability to provide those capabilities for DOD’s ballistic missile defense mission. MDA studies we reviewed found that locating LRDR at Clear Air Force Station allows for operational advantages and cost savings. The Air Force included information in its report to Congress on the ballistic missile defense capabilities of Cobra Dane and LRDR, and the effects of each radar’s location on those capabilities. Specifically, the Air Force report stated that both radars have the capabilities to track and classify missile threats. However, the report incorrectly stated that both radar systems have the inherent capability to determine if a missile threat is successfully intercepted. MDA documentation that we reviewed shows that Cobra Dane does not yet have this capability. When we shared our finding with Air Force and MDA officials, they agreed that this reported capability was incorrectly identified in the Air Force report to Congress. MDA officials also told us that Cobra Dane could provide this capability in the future if it implements software changes, but they are unlikely to do so until calendar year 2025. 
The Air Force report also noted that LRDR would have a unique capability, once it is operational, to discriminate missile threats from any deployed decoys. See table 2 for a summary of what the Air Force reported for the ballistic missile defense capabilities of Cobra Dane and LRDR. In addition to identifying ballistic missile defense capabilities of each radar, the Air Force report noted that both Cobra Dane and LRDR will have the inherent capabilities to support space surveillance and intelligence-gathering missions. DOD officials we spoke to confirmed that they have plans to use those inherent capabilities to support these other missions. For example, U.S. Strategic Command identified that DOD needs Cobra Dane to support its space surveillance mission. Further, Air Force and MDA officials told us that they use Cobra Dane to track small objects that no other radar system can track. MDA officials told us that LRDR could be used for space surveillance. However, Air Force and U.S. Strategic Command officials stated that there are no plans to use LRDR’s space surveillance capabilities as a replacement for Cobra Dane. Additionally, Air Force officials told us that neither Cobra Dane nor LRDR is required to support an intelligence-gathering mission. The Air Force also included information in its report on how the locations of Cobra Dane and LRDR affect their abilities to contribute to the ballistic missile defense mission. For example, the Air Force reported that Cobra Dane’s location at Shemya Island, Alaska, allows it to track missile threats from North Korea earlier in their trajectories than LRDR would be able to track at Clear Air Force Station, Alaska. This is consistent with an MDA analysis that we reviewed that outlined additional advantages provided by Cobra Dane’s location at Shemya Island. According to that analysis, Cobra Dane can begin tracking missile threats approximately 210 seconds earlier than LRDR. 
Air Force officials told us that the additional time to track missile threats allows the warfighter an earlier opportunity to intercept a missile threat and deploy additional interceptors if the first attempt fails. Further, the MDA analysis described a tracking gap between the areas covered by LRDR—once it is operational at Clear Air Force Station—and the two sets of AN/TPY-2 radars that are currently located in Japan. Without Cobra Dane’s coverage of this gap, the analysis found that the warfighter would have a more limited opportunity to intercept a missile threat from North Korea. Figure 2 shows how Cobra Dane covers a gap between the LRDR (once operational) and the two AN/TPY-2 radars in Japan. The Air Force report also noted that LRDR’s geographic location has its own advantages in contributing to ballistic missile defense compared to Cobra Dane’s location. For example, the Air Force report noted that LRDR’s location would allow it to track missile threats later in their trajectories beyond Cobra Dane’s coverage as those threats make their way to the continental United States. We also found that MDA has determined LRDR will have other advantages due to its location. For example, an MDA analysis that we reviewed found that LRDR’s location will allow for the radar system to contribute to ballistic missile defense from North Korean and Iranian threats. Absent LRDR, this analysis determined that there are no other radar systems that are located in a position to provide the capability to discriminate missile threats and determine if a threat was successfully intercepted. In addition to what the Air Force reported, we found that DOD decided to locate LRDR at Clear Air Force Station in Alaska after considering the advantages and disadvantages of other locations. For example, MDA completed studies that examined how LRDR could perform at various locations in Alaska, and the cost-effectiveness of constructing and sustaining the radar at those sites. 
In a June 2015 analysis, MDA compared how LRDR could perform in discriminating missile threats when co-locating it with Cobra Dane at Shemya Island or placing it at Clear Air Force Station. MDA determined that LRDR could provide more real-time discrimination information for missile threats targeting Alaska and the continental United States if it constructed the radar at Clear Air Force Station rather than at Shemya Island. Additionally, MDA identified in an October 2016 study that the department could obtain operational advantages and cost savings by constructing LRDR at Clear Air Force Station, Alaska, when compared to constructing it at Shemya Island, Alaska. Specifically, MDA determined that Clear Air Force Station could provide better results for 11 of the 13 factors it reviewed compared to Shemya Island. For example, MDA determined that locating LRDR at Clear Air Force Station would result in lower costs and enhanced system performance.

According to DOD officials and documents we reviewed, other radar investments may reduce the department’s reliance on Cobra Dane for ballistic missile defense and space surveillance, even though U.S. Northern Command has identified a need for Cobra Dane after DOD begins operating LRDR in fiscal year 2021. Specifically, the Pacific Radar and Space Fence may reduce DOD’s reliance on Cobra Dane to support ballistic missile defense and space surveillance, respectively.

Pacific Radar: According to DOD officials, the department may no longer need Cobra Dane to meet the ballistic missile defense mission after MDA fields a new radar in the Pacific region in fiscal year 2025. MDA began developing the Pacific Radar to provide additional missile threat tracking and discrimination capabilities. According to U.S. Northern Command and MDA officials, the Pacific Radar may fill the gap in tracking missile threats currently covered by Cobra Dane. 
Space Fence: The Air Force has also determined it will no longer have a requirement for Cobra Dane to provide space surveillance once the Space Fence is fully operational. The Air Force plans for the Space Fence to be operational in fiscal year 2019. According to a U.S. Strategic Command briefing, the Space Fence will provide the same capabilities as Cobra Dane. Air Force officials noted that they want to continue relying on Cobra Dane for space surveillance when the Space Fence is operational, as long as the radar is available and used to contribute to ballistic missile defense.

In its January 2018 report to Congress, the Air Force noted that Cobra Dane met its requirement for operational availability—i.e., the percentage of time that the radar system is able to meet its ballistic missile defense and space surveillance missions. Specifically, the Air Force report noted that Cobra Dane had been available an average of 91 percent of the time over a 2-year period (January 2016 through December 2017), which exceeded the 90 percent requirement for operational availability. Information that we reviewed from a more recent 2-year period (August 2016 through July 2018) showed that Cobra Dane’s 2-year average for operational availability had declined to approximately 88 percent—below the 90 percent requirement. Air Force officials stated that the decline in operational availability over the more recent 2-year period was due to a few instances where they needed to take Cobra Dane off-line for extended periods of scheduled downtime (e.g., regular operations and maintenance, calibration of instruments). Further, they noted that when Cobra Dane is not operationally available, the reason is usually scheduled downtime. Officials also told us there was one instance of unscheduled downtime (e.g., part or system failure) in that 2-year period, which required emergency maintenance on the radar’s mission control hardware. 
We also reviewed Air Force data on the frequency of unscheduled downtime between August 2016 and July 2018, which show that Cobra Dane is able to contribute to its missions without unscheduled downtime 99.7 percent of the time. According to U.S. Northern Command and MDA officials, they can mitigate the effect on the ballistic missile defense mission if they know far enough in advance that Cobra Dane will not be operationally available— such as during scheduled downtime. Officials stated that they do this by moving a transportable radar, known as the Sea-Based X-band radar, to specific locations in the Pacific Ocean to provide additional tracking coverage of missile threats. A U.S. Northern Command analysis that we reviewed describes how DOD can deploy the Sea-Based X-band radar at particular locations in the Pacific Ocean to supplement Cobra Dane. This analysis found that U.S. Northern Command can lose the ability to track some missile threat trajectories if Cobra Dane is not available and the Sea-Based X-band radar is not deployed. We also reviewed Air Force data on space surveillance, which shows that the Air Force would face some limitations in its ability to complete its space surveillance mission when Cobra Dane is not operationally available. According to the data, Cobra Dane tracks 3,300 space objects each day that cannot be tracked by any other radar system. Air Force officials noted that when Cobra Dane is not operationally available for space surveillance for short periods (less than 24 hours), they can overcome that downtime without losing track of those unique objects. However, officials told us that it would take six months to reacquire all of the small space objects that Cobra Dane tracks, if they encounter any significant scheduled or unscheduled downtime. MDA officials told us there are no scheduled plans to take Cobra Dane down long enough to compromise DOD’s ability to conduct space surveillance. 
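The operational availability figures discussed above reduce to simple arithmetic over a 2-year window. In the sketch below, the downtime hours are hypothetical, chosen only to reproduce a roughly 88 percent average; the 90 percent requirement comes from the Air Force report.

```python
# Illustrative arithmetic behind the operational availability figures.
# Downtime hours are hypothetical; the 90 percent bar is the reported
# requirement.

HOURS_PER_2_YEARS = 2 * 365 * 24  # 17,520 hours in a 2-year window

def operational_availability(downtime_hours, total_hours=HOURS_PER_2_YEARS):
    """Percentage of the window in which the radar could meet its missions."""
    return 100.0 * (total_hours - downtime_hours) / total_hours

# Roughly 2,100 hours of combined scheduled and unscheduled downtime over
# 2 years works out to about 88 percent availability.
availability = operational_availability(2100)
meets_requirement = availability >= 90.0  # below the 90 percent requirement
```

The same formula shows why scheduled downtime dominates the metric: at 99.7 percent availability against unscheduled downtime alone, nearly all of the gap to the 90 percent requirement must come from planned maintenance windows.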
In its January 2018 report to Congress, the Air Force projected that the Air Force and MDA would contribute total funding of $278.6 million based on their fiscal year 2019 budget plans for the operation and sustainment of Cobra Dane. According to the report, the Air Force and MDA plan to share funding for the operation and maintenance of the Cobra Dane radar, and for three modernization projects that make up their sustainment plan for the radar. Table 3 outlines the plan for how the Air Force and MDA will share funding for the operation and maintenance of Cobra Dane. In addition, the Air Force included information in its report on how the Air Force and MDA plan to share funding to support Cobra Dane’s three modernization projects. Specifically, the Air Force and MDA plan to redesign parts for three sets of obsolete systems: (1) mission system replacement; (2) traveling wave tubes; and (3) transmitter groups. The Air Force has identified that it no longer has vendors that manufacture some critical parts, and failure of any of the three systems could result in Cobra Dane not being available to meet mission requirements. As such, the Air Force determined that it could sustain these three systems more effectively if they were redesigned. Table 4 summarizes the reported funding for the three projects that make up the Cobra Dane sustainment plan. In addition to what the Air Force reported, we identified that the Air Force developed a total cost estimate for its transmitter group replacement, but not for its other two projects. For the other two projects, Air Force officials stated that they plan to complete estimates for the total costs in conjunction with their fiscal year 2020 budget submission. In August 2016, the Air Force estimated that the transmitter group replacement would have a total cost of $91.2 million, but reported it would fund this project at $94.0 million through fiscal year 2023 (see table 4). 
Air Force officials plan to request the transfer of any unused funding to the other projects once the Air Force completes the transmitter group project. The Air Force also completed a partial cost estimate for the traveling wave tube redesign—covering the redesign of the parts and replacement of 1 of 12 groups of parts—estimating that the first phase would cost $16.0 million. Further, Air Force officials told us that they have not yet developed a total cost estimate for the mission system replacement. We also found that the Air Force and MDA expedited Cobra Dane’s mission system replacement project, but Air Force officials told us they face challenges in expediting the other two projects without compromising Cobra Dane’s operational availability. For the mission system replacement, MDA requested additional funding in fiscal year 2018. Air Force and MDA officials told us that the additional funding they received allowed them to prioritize the mission system replacement and advance its timeline earlier that year. Air Force officials stated that they explored ways to expedite the two other projects: the traveling wave tubes and transmitter groups. However, they stated that replacing too many parts at the same time would result in their having to take Cobra Dane off-line for longer periods of time. According to Air Force and MDA officials, they may look for opportunities to expedite time frames for the other two projects as long as the amount of scheduled downtime is kept to acceptable levels.

In its report to Congress, the Air Force identified that it plans to provide $140 million in funding for the sustainment and maintenance of operational access to Cobra Dane’s site at Shemya Island based on its fiscal year 2019 budget plans. According to the report, the Air Force is solely responsible for funding all work related to the operation and sustainment of Shemya Island, shared between two of its major commands: Air Force Space Command and Pacific Air Forces. 
Table 5 summarizes the information the Air Force included in its report on how funding will be shared for Shemya Island. We also reviewed a support agreement between Air Force Space Command and Pacific Air Forces that identifies how they will sustain the site and the calculation for sharing costs. The agreement describes the specific work to sustain the site, including maintaining the airfield, support facilities, and communication infrastructure. Air Force officials told us that they are constantly addressing challenges related to operational access to the site at Shemya Island, but Air Force Space Command and Pacific Air Forces work together to address those challenges. We provided a draft of this report to DOD for review and comment. DOD told us that it had no comments on the draft report. We are sending copies of this report to the Secretary of Defense; the Under Secretary of Defense for Acquisition and Sustainment; the Secretary of the Air Force; the Director of the Missile Defense Agency; and the Commanders of U.S. Northern Command and U.S. Strategic Command. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Joe Kirschbaum at (202) 512-9971 or kirschbaumj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to the report are listed in appendix I. In addition to the contact named above, Kevin O’Neill (Assistant Director), Scott Bruckner, Vincent Buquicchio, Martin De Alteriis, Amie Lesser, and Richard Powelson made key contributions to the report.
|
First fielded in 1976 on Shemya Island in Alaska, the Cobra Dane radar faces growing sustainment challenges that the Department of Defense (DOD) plans to address through modernization projects. Anticipating future needs, DOD began investing in new radar systems that share capabilities with Cobra Dane to support ballistic missile defense and space surveillance, including the Long Range Discrimination Radar (LRDR) in Alaska, the Space Fence in the Marshall Islands, and the Pacific Radar (location to be determined). The conference report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2018 included a provision that GAO review the Air Force's report to Congress on the operation and sustainment of Cobra Dane. This report identifies information included in the Air Force's report and describes additional information that GAO reviewed on (1) the capabilities of the Cobra Dane radar and other planned radars to meet DOD's mission requirements, (2) Cobra Dane's operational availability and the plan to mitigate the effect on those missions when Cobra Dane is not available, and (3) DOD's funding plan and project cost estimates for the operation and sustainment of Cobra Dane and its site at Shemya Island. GAO reviewed the Air Force report and related documentation, and interviewed relevant officials. In its January 2018 report to Congress, the Air Force reported how the Cobra Dane radar and the LRDR have shared and unique capabilities to support ballistic missile defense and space surveillance missions. The report noted that the respective locations of both radar systems affect their ability to provide those capabilities. DOD also has other radar investments—the Pacific Radar and the Space Fence—which, according to DOD officials, may reduce DOD's reliance on Cobra Dane to provide ballistic missile defense and space surveillance capabilities. 
The Air Force's report to Congress noted that Cobra Dane met its requirement for operational availability, which refers to the percentage of time that the radar is able to meet its missions. GAO found that the Air Force has developed procedures to mitigate risks when Cobra Dane is not available. For example, U.S. Northern Command and Missile Defense Agency (MDA) officials stated that they can mitigate risks when Cobra Dane is not available by using the Sea-Based X-band radar to provide support for ballistic missile defense. The Air Force would face some limitations in its ability to conduct space surveillance if Cobra Dane were not available, as Cobra Dane tracks objects no other radar can track. However, MDA officials noted there are no plans to take Cobra Dane offline long enough to compromise space surveillance. The Air Force and MDA plan to contribute total funding of $278.6 million for the operation and sustainment of Cobra Dane, according to their fiscal year 2019 budget plans. Specifically, the Air Force and MDA plan to share funding for the operation and maintenance of the Cobra Dane radar and for three modernization projects that make up their sustainment plan for the radar. Further, the Air Force report noted that the Air Force also plans to provide $140 million in funding for the sustainment and maintenance of operational access to Cobra Dane's site at Shemya Island. In addition, GAO found that the Air Force developed a total cost estimate for one project—known as the transmitter group replacement—but not for its other two projects. Air Force officials plan to complete cost estimates for those two projects in conjunction with their fiscal year 2020 budget submission.
|
Concerned that the federal government was more focused on program activities and processes than the results to be achieved, Congress passed the Government Performance and Results Act of 1993 (GPRA). GPRA sought to focus federal agencies on performance by requiring agencies to develop long-term and annual goals, and measure and report on progress towards those goals annually. Based on our analyses of the act’s implementation, we concluded in March 2004 that GPRA’s requirements had laid a solid foundation for results-oriented management. At that time, we found that performance planning and measurement had slowly yet increasingly become a part of agencies’ cultures. For example, managers reported having significantly more performance measures in 2003 than in 1997, when GPRA took effect government-wide. However, the benefit of collecting performance information is fully realized only when that information is actually used by managers to make decisions aimed at improving results. Although our 2003 survey found greater reported availability of performance information than in 1997, it also showed managers’ use of that information for various management activities generally had remained unchanged. Based on those results, and in response to a request from Congress, in September 2005, we developed a framework intended to help agencies better incorporate performance information into their decision making. As shown in figure 1, we identified five leading practices that can promote the use of performance information for policy and program decisions; and four ways agency managers can use performance information to make program decisions aimed at improving results. Our September 2005 report also highlighted examples of how agencies had used performance information to improve results. 
For example, we described how the Department of Transportation’s National Highway Traffic Safety Administration used performance information to identify, develop, and share effective strategies that increased national safety belt usage—which can decrease injuries and fatalities from traffic accidents— from 11 percent in 1985 to 80 percent in 2004. Subsequently, the GPRA Modernization Act of 2010 (GPRAMA) was enacted, which significantly expanded and enhanced the statutory framework for federal performance management. The Senate Committee on Homeland Security and Governmental Affairs report accompanying the bill that would become GPRAMA stated that agencies were not consistently using performance information to improve their management and results. The report cited the results of our 2007 survey of federal managers. That survey continued to show little change in managers’ use of performance information. The report further stated that provisions in GPRAMA are intended to address those findings and increase the use of performance information to improve performance and results. For example, GPRAMA requires certain agencies to designate a subset of their respective goals as their highest priorities—known as agency priority goals—and to measure and assess progress toward those goals at least quarterly through data-driven reviews. Our recent work and surveys suggest that data-driven reviews are having their intended effect. For example, in July 2015, we found that agencies reported that their reviews had positive effects on progress toward agency goals and efforts to improve the efficiency of operations, among other things. In addition, for those managers who were familiar with their agencies’ data-driven reviews, our 2013 and 2017 surveys showed that the more managers viewed their programs as being subject to a review, the more likely they were to report their agencies’ reviews were driving results and conducted in line with our leading practices. 
Recognizing the important role these reviews were playing in improving data-driven decision making, our management agenda for the presidential and congressional transition in 2017 included a key action to expand the use of data-driven reviews beyond agency priority goals to other agency goals. More broadly, our recent surveys of federal managers have continued to show that reported government-wide uses of performance information generally have not changed or in some cases have declined. As we found in September 2017, and as illustrated in figure 2, the 2017 update to our index suggests that government-wide use of performance information did not improve between 2013 and 2017. In addition, it is statistically significantly lower relative to our 2007 survey, when we created the index. Moreover, in looking at the government-wide results on the 11 individual survey questions that comprise the index, we found few statistically significant changes in 2017 when compared to (1) our 2013 survey or (2) the year each question was first introduced. For example, in comparing 2013 and 2017 results, two questions had results that were statistically significantly different: The percentage of managers who reported that employees who report to them pay attention to their agency’s use of performance information was statistically significantly higher (from 40 to 46 percent). The percentage of managers who reported using performance information to adopt new program approaches or change work processes was statistically significantly lower (from 54 to 47 percent). As we stated in our September 2017 report, the decline on the latter question was of particular concern as agencies were developing plans to improve their efficiency, effectiveness, and accountability, as called for by an April 2017 memorandum from OMB. In early 2017, the administration announced several efforts intended to improve government performance. 
OMB issued several memorandums detailing the administration’s plans to improve government performance by reorganizing the government, reducing the federal workforce, and reducing federal agency burden. As part of the reorganization efforts, OMB and agencies were to develop government-wide and agency reform plans, respectively, designed to leverage various GPRAMA provisions. For instance, the April 2017 memorandum mentioned above stated that OMB intends to monitor implementation of the reforms using, among other things, agency priority goals. While many agency-specific organizational improvements were included in the President’s fiscal year 2019 budget, released in February 2018, OMB published additional government-wide and agency reform proposals in June 2018. The President’s Management Agenda (PMA), released in March 2018, outlines a long-term vision for modernizing federal operations and improving the ability of agencies to achieve outcomes. To address the issues outlined in the PMA, the administration established a number of cross-agency priority (CAP) goals. CAP goals, required by GPRAMA, are to address issues in a limited number of policy areas requiring action across multiple agencies, or management improvements that are needed across the government. The PMA highlights several root causes for the challenges the federal government faces. Among them is that agencies do not consistently apply data-driven decision-making practices. The PMA states that smarter use of data and evidence is needed to orient decisions and accountability around service and results. To that end, in March 2018, the administration established the Leveraging Data as a Strategic Asset CAP goal to improve the use of data in decision making to increase the federal government’s effectiveness. Over the past 25 years, various organizations, roles, and responsibilities have been created by executive action or in law to provide leadership in federal performance management. 
At individual agencies and across the federal government, these organizations and officials have key responsibilities for improving performance, as outlined below.

OMB: At least every four years, OMB is to coordinate with other agencies to develop CAP goals—such as the one described earlier on leveraging data as an asset—to improve the performance and management of the federal government. OMB is also required to coordinate with agencies to develop annual federal government performance plans to define, among other things, the level of performance to be achieved toward the CAP goals. Following GPRAMA’s enactment, OMB issued guidance for initial implementation, as required by the act, and continues to provide updated guidance in its annual Circular No. A-11, additional memorandums, and other means.

Chief Operating Officer (COO): The deputy agency head, or equivalent, is designated as the COO, with overall responsibility for improving agency management and performance through, among other things, the use of performance information.

President’s Management Council (PMC): The PMC is comprised of OMB’s Deputy Director for Management and the COOs of major departments and agencies, among other individuals. Its responsibilities include improving overall executive branch management and implementing the PMA.

Performance Improvement Officer (PIO): Agency heads designate a senior executive as the PIO, who reports directly to the COO. The PIO is responsible for assisting the head of the agency and COO to ensure that agency goals are achieved through, among other things, the use of performance information.

Performance Improvement Council (PIC): The PIC is charged with assisting OMB to improve the performance of the federal government. It is chaired by the Deputy Director for Management at OMB and includes PIOs from each of the 24 Chief Financial Officers Act agencies, as well as other PIOs and individuals designated by the chair. 
Among its responsibilities, the PIC is to work to resolve government-wide or cross-cutting performance issues, and facilitate the exchange among agencies of practices that have led to performance improvements. Previously, the General Services Administration’s (GSA) Office of Executive Councils provided analytical, management, and administrative support for the PIC, the PMC, and other government-wide management councils. In January 2018, the office was abolished and its functions, staff, and authorities, along with those of the Unified Shared Services Management Office, were reallocated to GSA’s newly created Shared Solutions and Performance Improvement Office. As at the government-wide level—where, as described earlier, the use of performance information did not change from 2013 to 2017—managers’ reported use of performance information at most agencies also did not improve since 2013 (illustrated in figure 3). At the agency level, 3 of the 24 agencies had statistically significant changes in their index scores—1 increase (National Science Foundation) and 2 decreases (Social Security Administration and the Office of Personnel Management). Also, in 2017, 6 agencies had results that were statistically significantly different—4 higher and 2 lower—than the government-wide average (as discussed below). Throughout the report, we highlight two different types of statistically significant results—changes from our last survey in 2013 and differences from the 2017 government-wide average. The former indicates when an agency’s reported use of performance information or leading practices has measurably improved or declined. The latter indicates when it is statistically significantly higher or lower than the rest of government. Together, these comparisons indicate where agencies have taken actions that improved their use of performance information and where they face challenges. 
For example, when a result is a statistically significant increase since 2013, as with the National Science Foundation index score in 2017, this suggests that the agency has adopted practices that led to a measurable increase in the use of performance information by managers. When a result is statistically significantly higher than the government-wide average, like GSA’s 2017 index score, this suggests that the agency’s use of performance information is among the highest results when compared to the rest of government. These agencies could also have insights into practices that led to relatively high levels of performance information use. Finally, when a result is a statistically significant decrease since 2013, as with the Social Security Administration’s index score in 2017, or statistically significantly lower than the government-wide average, like the Department of Homeland Security’s 2017 index score, this suggests the agencies face challenges that are hampering their ability to use performance information. Appendix III provides each agency’s index scores from 2007, 2013, and 2017 to show changes between survey years. When we disaggregated the index and analyzed responses from the 11 questions that comprise the index—which could help pinpoint particular actions that improved the use of performance information—we similarly found relatively few changes in agencies’ recent results. Specifically, we identified 16 instances where agency responses on individual questions were statistically significantly different from 2013 to 2017—10 increases and 6 decreases. This represents about 6 percent of the total possible responses to the 11 survey questions from each of the agencies. In addition, we found 12 instances where an agency’s result on a question was statistically significantly higher (11) or lower (1) than the government-wide average in 2017. 
For example, the percentage of Social Security Administration (SSA) managers reporting that their peers use performance information to share effective approaches was statistically significantly higher than the government-wide average. Although SSA’s index score had a statistically significant decline in 2017 compared to 2013, the agency’s index score remains relatively high, as it has in prior years. The scope of our work has not allowed us to determine definitively what factors caused the decline in SSA’s index score and whether the decline is likely to continue, although its result on this particular question may indicate a continued strength. Each agency’s results on the 11 questions that comprise the index are presented in appendix I. The agencies’ respective statistically significant results are identified in figure 4. While some agencies had statistically significant improvements on individual questions, and could point to actions that led to greater use of performance information, these improvements should be considered in relation to the range of agency results and the government-wide average. In figure 4, there are five agencies with statistically significant increases on responses to individual questions, where those results were not statistically significantly higher than the government-wide average (see arrows without plus signs for the Departments of Agriculture, Defense, and Justice; the Environmental Protection Agency; and the National Science Foundation). While these represent improvements, the agency summaries in appendix I provide the full range of agency results and the government-wide average for context. For example, in 2017, the percentage of managers at the Department of Agriculture who reported that upper management uses performance information to inform decisions about program changes was statistically significantly higher than in 2013. 
However, the department’s 2017 result (37 percent) was relatively lower when compared to the maximum agency result on that question (60 percent). Appendix I presents the results on the index and the 11 questions that comprise it for each of the 24 agencies. When we compared government-wide and agency-level results on selected survey questions that reflect practices that promote the use of performance information, we found that results between 2013 and 2017 generally remained unchanged. As described earlier, there are 10 survey questions that both reflect the five leading practices identified in our past work and had statistically significant associations with higher index scores. As shown in figure 5, government-wide results on 2 of the 10 questions were statistically significantly different, both increases, from 2013 to 2017. Despite these two increases, the overall results suggest these practices are not widely followed government-wide. On most of the 10 questions, only about half (or fewer) of the managers reported their agencies were following them to a “great” or “very great” extent. When we analyzed agency-level responses to these 10 questions, we also found relatively few changes in recent results. Specifically, our analysis found 20 instances—16 increases and 4 decreases—where agencies’ responses on individual questions were statistically significantly different from 2013 to 2017. This represents about 8 percent of the total possible responses to the 10 survey questions from each of the agencies. In addition, we found 10 instances where an agency’s result on a question was statistically significantly higher (8) or lower (2) than the government-wide average in 2017. Each agency’s results on these 10 questions are presented in appendix I, and the statistically significant results are identified in figure 6. 
Those agencies with results on individual questions that are either statistically significantly higher than 2013, higher than the 2017 government-wide average, or both may have taken actions in line with our leading practices for promoting the use of performance information. For example, the National Science Foundation had both types of statistically significant results on a question about having sufficient information on the validity of their performance data. Here, the agency’s result increased 27 percentage points from 2013 to 2017. While the scope of our review does not allow us to definitively determine the reasons for the National Science Foundation’s higher results, they suggest the agency has taken recent actions that greatly improved the availability and accessibility of information on the validity of performance data. In both 2013 and 2017, our analyses found this particular question to be the strongest predictor of higher performance information use when we tested for associations between the questions that reflect leading practices and our index. Our 2017 survey results show that managers who reported their programs were subject to data-driven reviews also were more likely to report using performance information in decision making to a greater extent (see figure 7). For the 35 percent of managers who reported being familiar with data-driven reviews, those who reported their programs had been subject to data-driven reviews to a “great” or “very great” extent had index scores that were statistically significantly higher than those whose programs were subject to these reviews to a lesser extent. Similarly, we found that being subject to data-driven reviews to a greater extent was also related to greater reporting of agencies following practices that can promote the use of performance information. 
As figure 8 shows, managers who reported their programs were subject to these reviews to a “great” or “very great” extent more frequently reported that their agencies followed the five leading practices that promote the use of performance information, as measured by the 10 related survey questions associated with higher scores on the index. For example, of the estimated 48 percent of managers who reported their programs were subject to data-driven reviews to a “great” or “very great” extent, 72 percent also reported that managers at their level (peers) effectively communicate performance information on a routine basis to a “great” or “very great” extent. Conversely, for the 24 percent of managers who reported their programs were subject to data-driven reviews to a “small” or “no” extent, only 30 percent reported that managers at their level do this to a “great” or “very great” extent. Our past work has found that the Executive Branch has taken steps to improve the use of performance information in decision making by senior leaders at federal agencies. However, our survey results indicate those steps have not led to similar improvements in use by managers at lower levels. Through its guidance to implement GPRAMA, OMB developed a framework for performance management in the federal government that involves agencies setting goals and priorities, measuring performance, and regularly reviewing and reporting on progress. This includes expectations for how agency senior leaders should use performance information to assess progress towards achieving agency priority goals through data-driven reviews, and strategic objectives through strategic reviews. For example, GPRAMA requires, and OMB’s guidance reinforces, that data-driven reviews should involve the agency head, Chief Operating Officer, Performance Improvement Officer, and other senior officials responsible for leading efforts to achieve each goal. 
OMB’s guidance also identifies ways in which agency leaders should use the results of those reviews to inform various decision-making activities, such as revising strategies, formulating budgets, and managing risks. Our past work also found that agencies made progress in implementing these reviews and using performance information. In July 2015, we found that agencies generally were conducting their data-driven reviews in line with GPRAMA requirements and our related leading practices, including that agency leaders used the reviews to drive performance improvement. In addition, in September 2017, we reported on selected agencies’ experiences in implementing strategic reviews and found that the reviews helped direct leadership attention to progress on strategic objectives. Despite those findings, our survey results continue to show that the reported use of performance information by federal managers has generally not improved, and actually declined at some agencies. This could be because of the two different groups of agency officials covered by our work. GPRAMA’s requirements, and the federal performance management framework established by OMB’s guidance, apply at the agency-wide level and generally involve senior leaders. Our past work reviewing implementation of the act therefore focused on improvements in the use of performance information by senior leaders at the agency-wide level. In contrast, our surveys covered random samples of mid- and upper-level managers within those agencies, including at lower organizational levels such as component agencies. Their responses indicate that the use of performance information more broadly within agencies—at lower organizational levels—generally has not improved over time. The exception to this was managers whose programs were subject to the data-driven reviews required by GPRAMA. As described above, those managers were more likely to report greater use of performance information in their agencies. 
This reinforces the value of the processes and practices put in place by GPRAMA. Our survey results suggest that limited actions have been taken to diffuse processes and practices related to the use of performance information to lower levels within federal agencies, where mid-level and senior managers make decisions about managing programs and operations. Although OMB staff agreed that diffusing processes and practices to lower levels could lead to improved use of performance information, they told us they have not directed agencies to do so for a few reasons. First, OMB staff expressed concerns about potentially imposing a “one-size-fits-all” approach on agencies. They stated that agencies are best positioned to improve their managers’ use of performance information, given their individual and unique missions and cultures, and the environments in which they operate. We agree that it makes sense for agencies to be able to tailor their approaches for those reasons. OMB’s existing guidance provides an overarching framework that recognizes the need for flexibility and for agencies to tailor their approaches. Moreover, given the long-standing and cross-cutting nature of this challenge, a government-wide approach also would provide a consistent focus on improving the use of performance information more extensively within agencies. OMB staff also told us that they believed it would go beyond their mandate to direct agencies to extend GPRAMA requirements to lower levels. GPRAMA requires OMB to provide guidance to agencies to implement its requirements, which only apply at the agency-wide level. As noted earlier, however, GPRAMA also requires OMB to develop cross-agency priority (CAP) goals to improve the performance and management of the federal government. The President’s Management Agenda established a CAP goal to leverage data as a strategic asset, in part, to improve the use of data for decision making and accountability throughout the federal government. 
This new CAP goal presents an opportunity for OMB and agencies to identify actions to expand the use of performance information in decision making throughout agencies. As of June 2018, the action plan for implementing the Leveraging Data as a Strategic Asset CAP goal is limited. According to the President’s Management Agenda and initial CAP goal action plan, the goal primarily focuses on developing and implementing a long-term, enterprise-wide federal data strategy to better govern and leverage the federal government’s data. It is through this strategy that, among other things, the administration intends to improve the use of data for decision making and accountability. However, the strategy is under development and not expected to be released until January 2019, with a related plan to implement it expected in April 2019. The existing action plan, released in March 2018 and updated in June 2018, does not yet include specific steps needed to improve the use of data—including performance information—more extensively within agencies. According to the action plan for the goal, potential actions currently under consideration focus on establishing agency “learning agendas” that prioritize the development and use of data and other evidence for decision making; building agency capacity to use data and other evidence; and improving the timeliness of performance information and other data, and making that information available to decision makers and the public. Although developing learning agendas and building capacity could help improve the use of performance information in agencies, improving availability of data may be less effective. For example, as our past survey results have shown, increasing the availability of performance information has not resulted in corresponding increases in its use in decision making. We recognize that the CAP goal was created in March 2018. 
Nonetheless, it is important that OMB and its fellow goal leaders develop the action plan and related federal data strategy consistent with all key requirements to better ensure successful implementation. The action plan does not yet include complete information related to the following GPRAMA requirements: performance goals that define the level of performance to be achieved each year for the CAP goal; the various federal agencies, organizations, programs, and other activities that contribute to the CAP goal; performance measures to assess overall progress towards the goal as well as the progress of each agency, program, and other activity contributing to the goal; and clearly defined quarterly targets. Consistent with GPRAMA, Standards for Internal Control in the Federal Government identifies information that agencies are required to include in their plans to help ensure they achieve their goals. The standards state that objectives—such as improving the use of data in decision making—should be clearly defined to enable the identification of risks. Objectives are to be defined in specific terms so they can be understood at all levels of the entity—in this case, government-wide as well as within individual agencies. This involves defining what is to be achieved, who is to achieve it, how it will be achieved, and the time frames for achievement. Ensuring that future updates to the new CAP goal’s action plan include all required elements is particularly important, as our previous work has found that some past CAP goal teams did not meet all planning and reporting requirements. For example, in May 2016 we found that most of the CAP goal teams we reviewed had not established targets for all performance measures they were tracking. This limited the transparency of their efforts and the ability to track progress toward established goals.
We recommended that OMB, working with the Performance Improvement Council (PIC), report on actions that CAP goal teams are taking, or plan to take, to develop such targets and performance measures. OMB staff generally agreed and, in July 2017, told us they were working, where possible, to assist the development of measures for CAP goals. However, the recommendation has not been addressed and OMB staff said the next opportunity to address it would be when the administration established new CAP goals (which took place in March 2018). Following the initial release of the new CAP goals, CAP goal teams are to more fully develop the related action plans through quarterly updates. Given the ongoing importance of meeting these planning and reporting requirements, we will continue to monitor the status of actions to address this recommendation as implementation of the new CAP goals proceeds. While the PIC, which is chaired by OMB, has contributed to efforts to enhance the use of performance information, our survey results identify additional opportunities to further those efforts. The PIC’s past efforts have included hosting various working groups and learning events for agency officials to provide performance management guidance, and developing resources with relevant practices. For example, the PIC created a working group focused on agency performance reviews, which was used to share recommendations for how agencies can implement reviews, along with a guide with practices for effectively implementing strategic reviews. In January 2018, staff supporting the PIC joined with staff from another GSA office to create a new group called Fed2Fed Solutions. This group consults with agencies and provides tailored support, such as data analysis and performance management training for agency officials, to help them address specific challenges related to organizational transformation, data-driven decision making, and other management improvement efforts.
Our survey results identify useful information related to potential promising practices and challenges that OMB and the PIC could use to inform efforts to enhance the use of performance information more extensively within agencies (e.g., at lower levels). As was previously described, the PIC has responsibilities to (1) facilitate the exchange among agencies of proven practices, and (2) work to resolve government-wide or cross-cutting performance issues, such as challenges. Our analyses of 2017 survey results identified instances where agencies may have found effective ways to enhance the use of performance information by agency leaders and managers in decision making, as well as instances where agencies (and their managers) face challenges in doing so. Specifically, based on analyses of our survey responses, we identified 14 agencies that may have insights into specific practices that led to recent improvements in managers’ use of performance information, or ways that they maintain relatively high levels of use by their managers when compared to the rest of the government. Figure 9 summarizes the agencies identified earlier in the report that had statistically significant increases, or results higher than the government-wide average, on our index or individual survey questions. As the figure shows, several agencies had statistically significant results across all three sets of analyses and therefore may have greater insights to offer: the General Services Administration, National Aeronautics and Space Administration, and the National Science Foundation. In addition, our analyses identified nine agencies where results suggest managers face challenges that have hampered their ability to use performance information. Figure 10 summarizes the agencies identified earlier in the report that had statistically significant decreases, or results lower than the government-wide average, on our index or individual survey questions.
As the figure shows, the Office of Personnel Management had statistically significant decreases in all three sets of analyses. Four agencies—the Departments of the Treasury and Veterans Affairs, the Nuclear Regulatory Commission, and the Social Security Administration—were common to both of the figures above. That is, they had results that indicate they may have insights on some aspects of using performance information and face challenges in other aspects. As was mentioned earlier, to provide proper context, these results should be considered in relation to the range of agency results and the government-wide average (provided in detail in the agency summaries in appendix I). Given the prioritization of other activities, such as the recent creation of the Fed2Fed Solutions program, the PIC has not yet undertaken a systematic approach that could improve the use of performance information by managers at lower levels within agencies. Such an approach would involve identifying and sharing practices that have led to improved use, as well as identifying common or cross-cutting challenges that have hampered such use. The results of our analyses could help the PIC do so, and in a more targeted manner. By identifying and sharing proven practices, the PIC could further ensure that agency leaders and managers are aware of effective or proven ways they can use performance information to inform their decisions across the spectrum of activities they manage within their agencies. Those proven practices also may help agency leaders and managers resolve any identified challenges. Furthermore, in September 2017, we found that, for the estimated 35 percent of managers who reported familiarity with data-driven reviews, the more they viewed their programs being subject to a review, the more likely they were to report the reviews were driving results and were conducted in line with our leading practices for using performance information.
Despite the reported benefits of and results achieved through data-driven reviews, they were not necessarily widespread. As noted above, GPRAMA requires agencies to conduct such reviews for agency priority goals, which represent a small subset of goals, and they are required at the departmental level. These reasons may explain why most managers reported they were not familiar with the reviews. As a result, we recommended that OMB should work with the PIC to identify and share among agencies practices for expanding the use of data-driven reviews. OMB staff agreed with our recommendation but have yet to address it. In June 2018, OMB updated its annual guidance to agencies to explicitly encourage them to expand data-driven reviews to include other goals, priorities, and management areas as applicable to improve organizational performance. However, as of June 2018, OMB and the PIC have yet to take any steps to identify and share practices for expanding the use of these reviews in line with our recommendation. Given the additional analyses we conducted for this report—which show that being subject to data-driven reviews is related to greater reported use of performance information and leading practices that promote such use—we continue to believe these further actions would help agencies implement these reviews more extensively. We reiterate the importance of the September 2017 recommendation and will continue to monitor OMB’s progress to address it. For more than 20 years, our work has highlighted weaknesses in the use of performance information in federal decision making. While the Executive Branch has taken some actions in recent years, such as establishing a framework for performance management across the federal government, our survey results underscore that more needs to be done to improve the use of performance information more extensively within agencies and government-wide. 
The President’s Management Agenda and its related CAP goal to leverage data as a strategic asset present an opportunity to do so, as it aims to improve data-driven decision making. As OMB and its fellow goal leaders more fully develop the action plan for achieving this goal, providing additional details for its plans to improve data-driven decision making would help provide assurance that it can be achieved. As part of those initiatives, our survey results could provide a useful guide for targeting efforts. Officials at each agency could use these results to identify areas for additional analysis and potential actions that could help improve the use of performance information across the agency and at lower levels. Similarly, OMB and the PIC could use the results to identify broader issues in need of government-wide attention. It will also be important, however, for OMB and the PIC to go beyond this analysis and work with agencies to identify and share proven practices for increasing the use of performance information at lower levels within agencies, as well as challenges that may be hampering agencies’ ability to do so. We are making the following two recommendations to OMB: The Director of OMB should direct the leaders of the Leveraging Data as a Strategic Asset CAP Goal to ensure future updates to the action plan, and the resulting federal data strategy, provide additional details on improving the use of data, including performance information, more extensively within federal agencies. The action plan should identify performance goals; contributing agencies, organizations, programs, and other activities; those responsible for leading implementation within these contributors; planned actions; time frames; and means to assess progress. 
(Recommendation 1) The Director of OMB, in coordination with the PIC, should prioritize efforts to identify and share among agencies proven practices for increasing, and challenges that hamper, the use of performance information in decision making more extensively within agencies. At a minimum, this effort should involve the agencies that our survey suggests may offer such insights. (Recommendation 2) We provided a draft of this report to the Director of the Office of Management and Budget for review and comment. We also provided a draft of the report to the heads of each of the 24 federal agencies covered by our survey. OMB had no comments, and informed us that it would assess our recommendations and consider how best to respond. We are sending copies of this report to congressional requesters, the Director of the Office of Management and Budget, the heads of each of the 24 agencies, and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or mcneilt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in appendix IV.
(Appendix I presents agency summaries: for each of the 24 CFO Act agencies, charts compare the percentage of managers reporting a “great” or “very great” extent on each survey question, including question 10, “The individual I report to,” and question 11, “Employees that report to me,” with the government-wide average.)
This report responds to a request that we analyze agency-level results from our 2017 survey of federal managers at the 24 agencies covered by the Chief Financial Officers (CFO) Act of 1990, as amended, to determine the extent to which agencies are using performance information. This report assesses the extent to which: 1. the reported use of performance information and related leading practices at 24 agencies has changed compared to our prior survey in 2013; 2. being subject to data-driven reviews is related to managers’ reported use of performance information and leading practices; and 3. the Executive Branch has taken actions to enhance agencies’ use of performance information in various decision-making activities. From November 2016 through March 2017, we administered our online survey to a stratified random sample of 4,395 individuals from a population of 153,779 mid- and upper-level civilian managers and supervisors at the 24 CFO Act agencies. The management levels covered general schedule (GS) or equivalent schedules at levels comparable to GS-13 through GS-15, and career Senior Executive Service (SES) or equivalent.
We obtained the sample from the Office of Personnel Management’s Enterprise Human Resources Integration database as of September 30, 2015—the most recent fiscal year data available at the time. The sample was stratified by agency and whether the manager or supervisor was a member of the SES. To help determine the reliability and accuracy of the database elements used to draw our sample of federal managers for the 2017 survey, we checked the data for reasonableness and the presence of any obvious or potential errors in accuracy and completeness and reviewed our past analyses of the reliability of this database. We concluded in our September 2017 report that the data used to draw our sample were sufficiently reliable for the purpose of the survey. For the 2017 survey, we received usable questionnaires from about 67 percent of the eligible sample. The weighted response rate at each agency generally ranged from 57 percent to 82 percent, except the Department of Justice, which had a weighted response rate of 36 percent. The overall survey results are generalizable to the population of managers government-wide and at each individual agency. To assess the potential bias from agencies with lower response rates, we conducted a nonresponse bias analysis using information from the survey and sampling frame as available. The analysis confirmed discrepancies in the tendency to respond to the survey related to agency and SES status. The analysis also revealed some differences in response propensity by age and GS level; however, the direction and magnitude of the differences on these factors were not consistent across agencies or strata. Our data may be subject to bias from unmeasured sources for which we cannot control. Results, and in particular estimates from agencies with low response rates such as the Department of Justice, should be interpreted with caution. 
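The stratified design described above, with strata defined by agency and SES status and results weighted back to the population, can be illustrated with a small sketch. The strata, counts, and weight formula below are illustrative assumptions, not GAO's actual sampling frame or figures.

```python
# Sketch of stratum weights for a stratified survey sample: each
# respondent in a stratum "stands in for" population / responded
# managers, so results project back to the full population.
# All agencies and counts here are hypothetical examples.
strata = {
    # (agency, is_SES): (population, sampled, usable responses)
    ("DOJ", False): (11000, 180, 65),
    ("DOJ", True): (600, 40, 30),
    ("GSA", False): (2500, 150, 110),
}

def stratum_weight(population, _sampled, responded):
    """Weight applied to each respondent so stratum totals
    project to the stratum's full population of managers."""
    return population / responded

weights = {key: stratum_weight(*counts) for key, counts in strata.items()}
response_rates = {key: counts[2] / counts[1] for key, counts in strata.items()}
```

With this weighting, estimates from strata with fewer respondents carry larger per-respondent weights, which is one reason results from low-response agencies should be interpreted with caution.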
However, the survey’s results are comparable to five previous surveys we conducted in 1997, 2000, 2003, 2007, and 2013. To address the first objective, we used data from our 2017 survey to update agency scores on our use of performance information index. This index, which was last updated using data from our 2013 survey, averages managers’ responses on 11 questions related to the use of performance information for various management activities and decision making. Using 2017 survey data, we conducted statistical analyses to ensure these 11 questions were still positively correlated. That analysis confirmed that no negative correlations existed and therefore no changes to the index were needed. Figure 11 shows the questions that comprise the index. After calculating agency index scores for 2017, we compared them to previous results from 2007 and 2013, and to the government-wide average for 2017, to identify any statistically significant differences. We focus on statistically significant results because these indicate that observed relationships between variables and differences between groups are likely to be valid, after accounting for the effects of sampling and other sources of survey error. For each of the 11 questions that comprise the index, we identified individual agency results, excluding missing and “no basis to judge” responses, and determined when they were statistically significantly different from (1) the agency’s results on the same question in 2013, or (2) the government-wide average results on the question in 2017. In this report, we analyzed and summarized the results of our 2017 survey of federal managers. Due to the limited scope of the engagement, we did not conduct additional audit work to determine what may have caused statistically significant changes between our 2017 and past survey results.
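The index construction described above, averaging each manager's responses to the 11 questions and confirming that the questions remain positively correlated, can be sketched as follows. This is an illustrative reconstruction, not GAO's actual code; the 1-5 response scale and the toy data are assumptions.

```python
import numpy as np

def use_index(responses):
    """Average each manager's answers to the 11 index questions.
    responses: (n_managers, 11) array of 1-5 ratings; NaN marks
    missing or "no basis to judge" answers, which are excluded."""
    return np.nanmean(responses, axis=1)

def no_negative_correlations(responses):
    """Check that every pair of index questions is non-negatively
    correlated -- the condition verified before reusing the index."""
    corr = np.corrcoef(responses, rowvar=False)
    return bool((corr >= 0).all())

# Toy responses: 6 managers x 11 questions on an assumed 1-5 scale,
# with one missing answer.
rng = np.random.default_rng(0)
answers = rng.integers(1, 6, size=(6, 11)).astype(float)
answers[0, 3] = np.nan
scores = use_index(answers)
```

Averaging after excluding missing responses keeps a manager in the index even when a few questions do not apply to them.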
To further address this objective we completed several statistical analyses that allowed us to assess the association between the index and 22 survey questions that we determined relate to leading practices we previously found promote the use of performance information. See figure 12 for the 22 specific questions related to these five practices that we included in the analysis. When we individually tested these 22 survey questions (bivariate regression), we found that each was statistically significantly and positively related to the index in 2017. This means that each question, when tested in isolation from other factors, was associated with higher scores on the index. However, when all 22 questions were tested together (multivariate regression), we found that 5 questions continued to be positively and significantly associated with the index in 2017, after controlling for other factors. To conduct this multivariate analysis, we began with a base model that treated differences in managers’ views of agency performance management use as a function of the agency where they worked. We found, however, that a model based on agency alone had little predictive power (R-squared of 0.04). We next examined whether managers’ responses to these questions reflecting practices that promote the use of performance information related to their perceptions of agency use of performance information, independent of agency. The results of this analysis are presented in table 1 below. Each coefficient reflects the increase in our index associated with a one-unit increase in the value of a particular survey question. Our final multivariate regression model had an R-squared of 0.67, suggesting that the variables in this model explain approximately 67 percent of the variation in the use index. We also tested this model controlling for whether a respondent was a member of the SES and found similar results. 
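The bivariate-versus-multivariate comparison described above can be illustrated with a generic ordinary least squares R-squared. This sketch uses synthetic data; the five coefficients are borrowed from table 1 purely for flavor, and nothing here reproduces GAO's actual model or data.

```python
import numpy as np

def r_squared(X, y):
    """Fit OLS of y on X (with an intercept) and return R-squared,
    the share of variation in y the predictors explain."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())

# Synthetic stand-ins for five practice questions and an index built
# from them with known weights plus noise (all invented for illustration).
rng = np.random.default_rng(1)
practices = rng.normal(size=(500, 5))
index = practices @ np.array([0.08, 0.18, 0.16, 0.07, 0.10])
index = index + rng.normal(scale=0.2, size=500)

# Each predictor tested alone (bivariate) versus all together
# (multivariate); the joint model never explains less variation.
bivariate = [r_squared(practices[:, [j]], index) for j in range(5)]
multivariate = r_squared(practices, index)
```

Because each bivariate model is nested in the multivariate one, the multivariate R-squared is at least as large as any single-predictor R-squared, which is why a question can be significant in isolation yet drop out once other factors are controlled for.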
As shown above in table 1, five questions related to three of the leading practices that promote agencies’ use of performance information were statistically significant in 2017. These results suggest that, when controlling for other factors, certain specific efforts to increase agency use of performance information—such as providing information on the validity of performance data—may have a higher return and lead to higher index scores. With respect to aligning agency-wide goals, objectives, and measures, we found that each increase in terms of the extent to which individuals felt that managers aligned performance measures with agency-wide goals and objectives was associated with a 0.08 increase in their score on the use index. In terms of improving the usefulness of performance information, we found that having information on the validity of performance data for decision making was the strongest predictor in our model (0.18). As measured here, taking steps to ensure the performance information is useful and appropriate was associated with almost as large a change in a manager’s index score (0.16). In terms of developing agency capacity to use performance information, we found that having sufficient analytical tools to collect, analyze, and use performance information (0.07), and providing or paying for training that would help link their programs to achievement of agency strategic goals (0.10), were also statistically significantly related to a manager’s reported use of performance information. When we combined these results with what we previously found through a similar analysis of 2013 survey results in September 2014, we identified 10 questions that have had a statistically significant association with higher index scores. This reinforces the importance of the five leading practices to promote the use of performance information.
For each of these questions, which are outlined in figure 13 below, we determined when agency results were statistically significantly different from 2013 results or the 2017 government-wide average. For the second objective, we examined differences in managers’ use index scores, and in their responses on questions related to practices that promote the use of performance information, based on the extent to which they responded that their programs had been subject to agency data-driven reviews. We grouped managers based on the extent they reported their programs had been subject to these reviews, from “no extent” through “very great extent.” We then calculated the average index scores for the managers in each of those five categories. We also examined differences in how managers responded to the 10 questions reflecting practices that can promote the use of performance information based on the extent they reported their programs had been subject to data-driven reviews. We grouped managers into three categories based on the extent they reported their programs had been subject to these reviews (no-small extent, moderate extent, great-very great extent). We then compared how these groups responded to the 10 questions. For the third objective, we reviewed our past work that assessed Executive Branch activities to enhance the use of performance information; various resources (i.e., guidance, guides, and playbooks) developed by the Office of Management and Budget (OMB) and the Performance Improvement Council (PIC) that could support agencies’ use of performance information; and the President’s Management Agenda, and related materials with information on cross-agency efforts to improve the use of data in federal decision making. Lastly, for the third objective we also interviewed OMB and PIC staff about any actions they have taken, or planned to take, to further support the use of performance information across the federal government.
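The grouping step described above for the second objective, averaging index scores by the extent managers reported being subject to data-driven reviews, can be sketched as follows. The five-category scale matches the text; the manager data are invented for illustration.

```python
from collections import defaultdict
from statistics import mean

# Five response categories, from "no extent" through "very great extent".
EXTENTS = ["no", "small", "moderate", "great", "very great"]

# Toy (index score, reported extent of data-driven review exposure) pairs.
managers = [
    (2.1, "no"), (2.4, "small"), (3.0, "moderate"),
    (3.6, "great"), (3.9, "great"), (4.2, "very great"),
]

# Collect index scores by exposure category.
by_extent = defaultdict(list)
for score, extent in managers:
    by_extent[extent].append(score)

# Average index score per category, as in the comparison above.
avg_index = {e: mean(by_extent[e]) for e in EXTENTS if by_extent[e]}
```

The same grouping, collapsed into three bins (no-small, moderate, great-very great), supports the comparison against the 10 practice questions.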
We conducted this performance audit from October 2017 to September 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the above contact, Benjamin T. Licht (Assistant Director) and Adam Miles (Analyst-in-Charge) supervised this review and the development of the resulting report. Arpita Chattopadhyay, Caitlin Cusati, Meredith Moles, Dae Park, Amanda Prichard, Steven Putansu, Alan Rozzi, Shane Spencer, and Khristi Wilkins also made key contributions. Robert Robinson developed the graphics for this report. Alexandra Edwards, Jeff DeMarco, Mark Kehoe, Ulyana Panchishin, and Daniel Webb verified the information presented in this report.

Results of the Periodic Surveys on Organizational Performance and Management Issues

Managing for Results: Further Progress Made in Implementing the GPRA Modernization Act, but Additional Actions Needed to Address Pressing Governance Challenges. GAO-17-775. Washington, D.C.: September 29, 2017.
Supplemental Material for GAO-17-775: 2017 Survey of Federal Managers on Organizational Performance and Management Issues. GAO-17-776SP. Washington, D.C.: September 29, 2017.
Program Evaluation: Annual Agency-wide Plans Could Enhance Leadership Support for Program Evaluations. GAO-17-743. Washington, D.C.: September 29, 2017.
Managing for Results: Agencies’ Trends in the Use of Performance Information to Make Decisions. GAO-14-747. Washington, D.C.: September 26, 2014.
Managing for Results: Executive Branch Should More Fully Implement the GPRA Modernization Act to Address Pressing Governance Challenges. GAO-13-518. Washington, D.C.: June 26, 2013.
Managing for Results: 2013 Federal Managers Survey on Organizational Performance and Management Issues, an E-supplement to GAO-13-518. GAO-13-519SP. Washington, D.C.: June 26, 2013.
Program Evaluation: Strategies to Facilitate Agencies’ Use of Evaluation in Program Management and Policy Making. GAO-13-570. Washington, D.C.: June 26, 2013.
Government Performance: Lessons Learned for the Next Administration on Using Performance Information to Improve Results. GAO-08-1026T. Washington, D.C.: July 24, 2008.
Government Performance: 2007 Federal Managers Survey on Performance and Management Issues, an E-supplement to GAO-08-1026T. GAO-08-1036SP. Washington, D.C.: July 24, 2008.
Results-Oriented Government: GPRA Has Established a Solid Foundation for Achieving Greater Results. GAO-04-38. Washington, D.C.: March 10, 2004.
Managing for Results: Federal Managers’ Views on Key Management Issues Vary Widely Across Agencies. GAO-01-592. Washington, D.C.: May 25, 2001.
Managing for Results: Federal Managers’ Views Show Need for Ensuring Top Leadership Skills. GAO-01-127. Washington, D.C.: October 20, 2000.
The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven. GAO/GGD-97-109. Washington, D.C.: June 2, 1997.
|
To reform the federal government and make it more efficient and effective, agencies need to use data about program performance. The benefit of collecting performance information is only fully realized when it is used by managers to make decisions aimed at improving results. GAO was asked to review agencies' use of performance information. This report assesses, among other things, the extent to which: (1) 24 agencies' reported use of performance information and related leading practices has changed since 2013 and (2) the Executive Branch has taken actions to enhance the use of performance information. To address the first objective, GAO analyzed results from its 2017 survey of federal managers, and compared them to 2013 results. The survey covered a stratified random sample of 4,395 managers from the 24 Chief Financial Officers Act agencies. The survey had a 67 percent response rate and results can be generalized to the population of managers government-wide and at each agency. For the second objective, GAO reviewed agency documents and interviewed staff from OMB and the PIC. Agencies' reported use of performance information to make decisions, and leading practices that can promote such use, generally has not improved since GAO's last survey of federal managers in 2013. However, GAO's survey results continue to point to certain practices that could help agencies improve managers' use of performance information. For example, as shown in the table below, GAO's survey found that managers whose programs were subject to data-driven reviews (regular reviews used to assess progress on select agency goals) to a greater extent reported statistically significantly greater use of performance information to make decisions. The Executive Branch has begun taking steps to improve the use of performance information within agencies and across the government. 
For example, in the President's Management Agenda and government-wide reform plan, released in March and June 2018 respectively, the administration acknowledged the need to do more, and announced a goal, among other actions, to improve the use of data in federal decision making. However, the Office of Management and Budget (OMB) and others responsible for this goal have yet to fully develop action plans to hold agencies accountable for achieving it. The Performance Improvement Council (PIC), which is chaired by OMB, has undertaken efforts to improve the use of performance information by, for example, creating a working group on agency performance reviews. But it has not yet taken a systematic approach to identify and share proven practices that led to, or challenges that may be hampering, increased use of performance information by managers. GAO's survey results identified agencies that may have insights into such practices and challenges. More fully developing action plans for the new goal, and identifying and sharing proven practices and challenges, could help ensure the Executive Branch takes further steps to improve the use of performance information by managers within agencies and across the federal government. To improve the use of performance information within agencies and across the federal government, GAO recommends that OMB work with (1) fellow goal leaders to more fully develop action plans for the new goal to improve the use of data and (2) the PIC to prioritize efforts to identify and share proven practices and challenges. OMB had no comments on this report.
The Marine Corps, within the Department of the Navy, organizes itself into different Marine Air Ground Task Forces. Each Marine Air Ground Task Force consists of a command element, a ground combat element, an air combat element, and a logistics combat element, and can conduct operations across a broad range of crisis and conflict situations. As shown in figure 1, there are four types of Marine Air Ground Task Forces: Marine Expeditionary Forces (MEFs), Marine Expeditionary Brigades, Marine Expeditionary Units, and Special Purpose Marine Air Ground Task Forces. The MEF is the principal warfighting organization for the Marine Corps and consists of one or more divisions, including subordinate units such as regiments and battalions. There are three MEFs in the active component of the Marine Corps: I MEF at Camp Pendleton, California; II MEF at Camp Lejeune, North Carolina; and III MEF at Okinawa, Japan. Headquarters Marine Corps consists of the Commandant of the Marine Corps (Commandant) and the staff organizations that are responsible for advising and assisting the Commandant in carrying out the Commandant's duties. For example, the Deputy Commandant for Programs and Resources is responsible for developing, defending, and overseeing the Marine Corps' financial requirements, and the Deputy Commandant for Plans, Policies, and Operations is responsible for establishing policy, procedures, training, and guidance on unit readiness reporting. Marine Corps units train to their core missions—the fundamental missions a unit is organized or designed to perform—and their assigned missions—those missions that an organization or unit is tasked to carry out. Units train to a list of Mission Essential Tasks that are assigned based on the unit's required operational capabilities and projected operational environments. For example, the Mission Essential Tasks for a Marine Corps infantry battalion include amphibious operations, offensive operations, defensive operations, and stability operations.
Marine Corps Training and Readiness manuals describe the training events, the frequency of training required to sustain skills, and the conditions and standards that a unit must accomplish to be certified in a Mission Essential Task. Unit commanders are responsible for their units' readiness, including assessing and reporting their units' capabilities to accomplish Mission Essential Tasks to specified conditions and standards. Unit readiness assessments are tracked in the Defense Readiness Reporting System-Marine Corps. This information provides Marine Forces Command, Headquarters Marine Corps, the Office of the Secretary of Defense, the Joint Staff, and the Combatant Commands, among others, a means to assess ground combat forces' readiness trends and to assist with strategic and operational planning. The Marine Corps' O&M budget funds a wide range of activities, including the recruiting, organizing, training, sustaining, and equipping of the service. The Department of Defense (DOD) uses the Planning, Programming, Budgeting, and Execution (PPBE) process to allocate resources to provide capabilities necessary to accomplish the department's missions. In this report, we refer to the PPBE process as the budget cycle. The budget cycle includes the following phases:

The planning phase examines the military role and defense posture of the United States and DOD in the world environment and considers enduring national security objectives, as well as the need for efficient management of defense resources.

The programming phase involves developing proposed programs consistent with planning, programming, and fiscal guidance, reflecting, among other things, the effective allocation of resources.

The budgeting phase refers to developing and submitting detailed budget estimates for programs.

The execution phase involves spending funds.
The Marine Corps' Office of Programs and Resources has multiple divisions that support Program Objective Memorandum (POM) development, strategy, independent analysis, budget justification, and legislative coordination, among other functions. Two key divisions that have responsibilities regarding Marine Corps resources are:

The Budget and Execution Division, which is responsible for leading development and submission of the POM, providing quality control over programmatic and financial data, and allocating funds to major commands. According to a Marine Corps official, the division also assists with defending the Marine Corps' budget request to Congress and others.

The Program Analysis and Evaluation Division, which is responsible for providing Marine Corps senior leaders with independent and objective analysis to inform resource allocation decisions and for assessing institutional risk.

The Program Budget Information System (PBIS) is the primary information system used by the Navy and Marine Corps in the programming and budgeting phases of the budget cycle to develop and submit financial plans (i.e., the POM and the budget) to the Office of the Secretary of Defense. Once appropriated, funds are passed via allocation and allotment to subordinate units and executed via the Standard Accounting, Budgeting, and Reporting System (SABRS). SABRS is used to (1) record and report financial information; (2) provide an accounting and reporting system for the execution of appropriations; and (3) record financial transactions that originate from source systems. Our analysis of data from the three MEFs for fiscal year 2017 funds shows that the MEFs had some data available that could be used to track some training funds from budget request to obligation. According to the Marine Corps' Financial Guidebook for Commanders, as part of the budget cycle, commanders should determine the cost involved in meeting requirements, among other things.
To help develop a sound budget, commanders need to know what they were and were not able to accomplish as a result of funding in previous years. However, Marine Corps officials told us they faced limitations tracking training funds, as discussed below. Specifically, as shown in table 1, we found that I MEF and II MEF were able to provide data on their fiscal year 2017 budget request, allotment, and obligations for training exercises directed at the MEF and division level, but data on exercises at smaller unit levels, such as regiments and battalions, were not consistently available because officials at those levels do not always track funds for these exercises. We found that III MEF was able to provide obligations data for fiscal year 2017 training exercises at all unit levels, but was not able to provide data on funds requested and allotted by training exercise. Officials at III MEF stated that these data were not available because III MEF incurs several large one-time expenses that contribute to training, but allocating those costs across specific training exercises is difficult. One of the primary reasons that the Marine Corps cannot fully track all training funds through the budget cycle is that the Office of Programs and Resources has not established the consistent use of fiscal codes to provide greater detail about the use of funds across the budget cycle phases, and the accuracy of these fiscal codes is sometimes questionable. The Marine Corps uses a variety of fiscal codes to track funds in the programming and execution phases of the budget cycle in the PBIS and SABRS systems, respectively. Some of these codes are used across DOD, while others are specific to the Marine Corps. Two key fiscal codes that officials identified as relevant to efforts to track funds for unit-level training are the Marine Corps Programming Code (MCPC) and the Special Interest Code (SIC). However, we identified limitations with how these fiscal codes are applied, as detailed below. 
MCPCs are used to program funds for intended use, but are not clearly linked to executed funds. When the Marine Corps programs funds for intended use, it uses MCPCs to identify the funds; however, when it executes those funds, it uses a different set of fiscal codes to identify them. As a result, the Marine Corps cannot link the programmed intent of the funds to the execution of the funds, making it difficult to track funds through the budget cycle. In fiscal years 2011, 2012, and 2013, the Marine Corps found in a series of reports that it faced challenges tracking funds through the budget cycle, in part because MCPCs were used to program funds, but not to track them in the execution phase. According to the fiscal year 2012 report, such tracking would enable the Marine Corps to improve financial traceability and add consistently reliable program execution data that would promote an understanding of the current fiscal environment among Marine Corps financial managers, comptrollers, and others. In 2014, the Marine Corps implemented a process to include MCPCs in the execution phase of the budget cycle. The process enabled SABRS to automatically generate MCPCs for executed funds, based on the fiscal codes already used in the execution phase of the budget cycle. According to officials in the Office of Programs and Resources, this process increased the amount of executed funding that could be linked to an MCPC. However, Marine Corps officials told us that the mapping of MCPCs used in the programming phase to those used in the execution phase was not cleanly aligned, causing uncertainty about their linkage. The MCPCs associated with executed funds are estimates based on mappings of fiscal codes to an MCPC developed by subject matter experts and working groups, and they require continuous manual validation to ensure their accuracy.
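The auto-generation process described above can be illustrated with a small sketch. This is not the actual SABRS logic; the codes, field names, and mapping table below are invented solely to show how a subject-matter-expert mapping from execution-phase fiscal codes to MCPCs works, and why codes outside the mapping force manual validation.

```python
# Hypothetical sketch: deriving estimated MCPCs from execution-phase
# fiscal codes, and flagging transactions whose code has no mapping.
# All codes, document numbers, and amounts are invented.

# Subject-matter-expert mapping of execution fiscal codes to MCPCs.
EXEC_CODE_TO_MCPC = {
    "AB1": "MCPC-TRNG-01",
    "AB2": "MCPC-TRNG-01",
    "CD7": "MCPC-OPS-04",
}

def tag_with_mcpc(transactions):
    """Return (tagged, unmapped): transactions annotated with an
    estimated MCPC, and those requiring manual review."""
    tagged, unmapped = [], []
    for tx in transactions:
        mcpc = EXEC_CODE_TO_MCPC.get(tx["exec_code"])
        if mcpc is None:
            unmapped.append(tx)           # no mapping: manual validation
        else:
            tagged.append({**tx, "mcpc": mcpc})
    return tagged, unmapped

txs = [
    {"doc": "0001", "exec_code": "AB1", "obligated": 25_000},
    {"doc": "0002", "exec_code": "ZZ9", "obligated": 4_000},
]
tagged, unmapped = tag_with_mcpc(txs)
```

Because the estimated MCPC is only as good as the mapping table, any record falling into the unmapped bucket (or any ambiguity in the table itself) propagates uncertainty into every downstream comparison of programmed intent against execution.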
Additionally, the data quality of the multiple execution fiscal codes that are used to generate MCPCs is questionable because the data quality of the various underlying systems that feed data into SABRS is poor, according to officials in the Office of Programs and Resources. Senior Marine Corps officials from the Office of Programs and Resources told us that due to these limitations, analysts cannot be certain that executed funds associated with an MCPC as reflected in SABRS correspond to the purpose for which the funds associated with the same MCPC were programmed in the Program Budget Information System. This limits the Marine Corps’ ability to assess the extent to which funds were executed consistent with their programmed intent and track funds through the budget cycle. SICs are not used consistently across units. The Marine Corps uses SICs to track funds associated with individual training exercises. However, units, including the MEF and its subordinate units, do not consistently use SICs in identifying funds associated with all training exercises. Specifically, officials at all three MEFs told us that units generate SICs for large-scale training exercises directed at the MEF or division level, but may not generate SICs to track expenses for small-scale exercises at lower unit levels such as the regiment and battalion, making it difficult to track those funds. Officials at I MEF and II MEF stated that tracking costs associated with small-scale exercises is less consistent because units are not required to use SICs to track funds associated with exercises at those levels, and SICs associated with each exercise may change from year to year. Further, officials at I MEF and II MEF stated that supply officers are responsible for financial management at units below the division level, and they may not prioritize use of SICs. 
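The attribution gap officials described, where funds obligated without a SIC cannot be tied to any training exercise, can be sketched roughly as follows. The exercise names, amounts, and record layout are hypothetical, not drawn from SABRS.

```python
# Hypothetical sketch of why inconsistent SIC use breaks exercise-level
# cost tracking: obligations recorded without a SIC cannot be attributed
# to any exercise, so totals per exercise are incomplete.

def obligations_by_exercise(records):
    """Sum obligations per SIC; records lacking a SIC fall into an
    'UNATTRIBUTED' bucket that no exercise-level report can explain."""
    totals = {}
    for r in records:
        key = r.get("sic") or "UNATTRIBUTED"
        totals[key] = totals.get(key, 0) + r["obligated"]
    return totals

records = [
    {"sic": "EX-ALPHA", "obligated": 1_200_000},  # MEF-level exercise
    {"sic": "EX-ALPHA", "obligated": 300_000},
    {"sic": None, "obligated": 85_000},  # battalion-level, no SIC used
]
totals = obligations_by_exercise(records)
```

In this toy example the exercise total understates true cost by the unattributed amount, which mirrors the report's point: without consistent SIC use at the regiment and battalion levels, commanders cannot see complete costs per exercise.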
Officials at III MEF stated that tracking costs associated with specific exercises was difficult because officials could not attribute several large one-time training expenses to specific training exercises. Officials at all three MEFs stated that there is currently no systematic way to ensure that SICs are used accurately to associate executed funds with training exercises, which means they do not have complete or consistent data on the costs associated with individual training exercises. As a result, commanders may lack accurate data for making resource decisions about the training exercises needed to complete Mission Essential Tasks and improve units' training readiness. In 2014, the Marine Corps issued Marine Corps Order 5230.23, Performance Management Planning, with the mission of linking resources to readiness and requiring the Deputy Commandant for Programs and Resources to ensure visibility and traceability of funds through the budget cycle and accounting systems for all organizational units and programs. Officials in the Office of Programs and Resources cited one effort to align inconsistent fiscal codes, but this effort will not directly address the challenges we have identified. According to officials in the Office of Programs and Resources, the Marine Corps is currently conducting a fiscal code alignment effort to address inconsistent use of fiscal codes, but this effort is in its early stages, and the Marine Corps has not yet developed clear guidance for implementing it. Further, while the Marine Corps uses a variety of fiscal codes to track funds in the programming and execution phases of the budget cycle, an official from the Budget and Execution Division told us that, due to manpower limitations, this effort will focus on fiscal codes that are used across DOD. However, MCPCs are unique to the Marine Corps and are not recognized in larger DOD budgeting systems.
As a result, the fiscal code alignment effort will not include aligning MCPCs across the programming and execution phases of the budget cycle, even though the Marine Corps will continue to use MCPCs. Additionally, although an official told us that SIC codes will be a part of this effort, implementation guidance for the effort was still under development and as a result, it is unclear whether the effort will address the inconsistent use of SICs across unit-level training exercises. Without the ability to track unit-level training funds through the budget cycle, including aligning MCPCs and ensuring consistent use of SIC codes, the Marine Corps lacks data to assess the extent to which funds were obligated consistent with their programmed intent and to adequately forecast and defend budget requests for training. As a result, commanders may face challenges making informed resource decisions. Although internal Marine Corps assessments and guidance state that the Marine Corps needs an enterprise-wide process to link resources to readiness, the Marine Corps has made little progress fulfilling this need. The Marine Corps has been aware for years of the challenges it faces in explaining its resource needs in its budget estimates to Congress. As stated in its 2009 Financial Guidebook for Commanders, “Many of the congressional cuts the Marine Corps receives are because of an inability to explain why we spent the money the way we did.” From fiscal years 2009 through 2014, the Marine Corps Office of Programs and Resources issued a series of classified and unclassified reports—referred to as the Marine Corps Strategic Health Assessments—that evaluated the health of the Marine Corps. The reports cited a number of factors inhibiting the Marine Corps’ ability to link funding to readiness, including stove-piped efforts, lack of an analytical framework, limited data availability, and poor data quality. 
For example, the fiscal year 2013 and 2014 reports found that the lack of a comprehensive model to connect the output of institutional processes to readiness measures hindered the Marine Corps' ability to link funding to readiness. Table 2 below summarizes some of the key related findings in the reports. In fiscal year 2014, the Marine Corps stopped issuing the Marine Corps Strategic Health Assessments, in part because the person responsible for preparing the analyses moved to another position. A senior Marine Corps official also told us that the reports were discontinued because producing them was no longer a priority for Marine Corps leadership. However, the Marine Corps also issued guidance in August 2014 calling for an enterprise-wide effort to link institutional resources to readiness. Specifically, Marine Corps Order 5230.23 called for the development and implementation of an enterprise-wide performance management process that links resources to institutional readiness via a robust analytic framework. The order included requirements to, among other things, identify readiness goals, develop strategic performance indicators, and improve data and business processes, including ensuring the visibility and traceability of funds. While implementing this order could address a number of the findings in the Marine Corps Strategic Health Assessments, Marine Corps officials told us that the service had not prioritized implementation of the order. Specifically, the Marine Corps did not designate a single oversight entity with the authority to enforce the order and directly oversee and coordinate efforts to link training funds to readiness. For example, although the order directed the Deputy Commandant for Programs and Resources to organize a quarterly coordination event of key stakeholders to synchronize activities within each major line of effort, officials from this office told us that they have not been given the authority to direct the various efforts.
As a result, problems identified in the Marine Corps Strategic Health Assessments have persisted, and the Marine Corps does not have a comprehensive model to connect the output of institutional processes to readiness measures, as called for in the fiscal year 2013 Marine Corps Strategic Health Assessment. According to Standards for Internal Control in the Federal Government, management should establish an organizational structure, assign responsibility, and delegate authority to achieve its objective. Marine Corps officials told us that the benefits of having a single entity to oversee efforts to tie funds to readiness include having one authority responsible for ensuring a consistent data architecture—how data will be collected, stored, and transferred across the Marine Corps—and data quality. Further, having a single entity would help ensure a unified approach that would help analysts better answer questions about how funds affect readiness. In the absence of a single entity responsible for overseeing the Marine Corps' efforts to link training funds to readiness, two different organizations within the Marine Corps developed separate and overlapping initiatives. First, in 2012, the Commanding General of II MEF directed the development of C2RAM, a tool that attempts to link funding to readiness for ground combat forces by capturing and correlating resources and requirements associated with specific unit-level training exercises. C2RAM was developed in response to our recommendation that the Marine Corps develop results-oriented performance metrics that can be used to evaluate the effectiveness of its training management initiatives. The tool, a complex Excel-based spreadsheet, is used to capture day-to-day operating costs for the training exercises that units conduct to meet their core and assigned Mission Essential Tasks for training readiness requirements.
For example, unit operations and resource officials enter data on training exercise costs and the Mission Essential Tasks expected to be accomplished by each exercise, and the tool uses these data to project the unit's expected training readiness levels. Further, commanders can use the tool to project the expected effect of decreases in funding on training readiness levels. According to Marine Corps officials, they spent approximately $11 million on the C2RAM initiative from fiscal years 2012 through 2017. Second, in 2015, the Headquarters Marine Corps Office of Programs and Resources adopted the Air Force's Predictive Readiness Assessment system, adjusted it for Marine Corps purposes, and test-piloted it with Marine Corps units. The Marine Corps' system was known as the Predictive Readiness Model (PRM). PRM was designed to evaluate the complex interactions between resources and readiness to help inform decisions about resource allocations and readiness outcomes. According to Headquarters Marine Corps officials, PRM attempted to map approximately 500 causal factors related to readiness ratings. The effort involved input from more than 70 subject matter experts from multiple Marine Corps organizations. In addition, data input into PRM were obtained from various authoritative sources, including readiness, financial, and training systems of record, as well as from non-authoritative sources, including C2RAM. According to Marine Corps officials, as of June 2018, the Marine Corps had spent approximately $4 million to develop PRM. In March 2019, while responding to a draft of this report, the Marine Corps stated that it decided to discontinue development of PRM because the model did not meet its objectives. While these initiatives were both designed to help the Marine Corps link dollars to readiness, each had its own particular use and design.
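The cost-to-readiness projection that C2RAM reportedly performs, entering exercise costs and the Mission Essential Tasks each exercise trains and then projecting expected readiness, can be sketched in simplified form. All exercise names, costs, and tasks below are invented; the real tool is a complex Excel workbook, and the greedy funding rule here is only an illustrative assumption.

```python
# Hypothetical sketch of a C2RAM-style projection: given a training
# budget, estimate which Mission Essential Tasks (METs) a unit could
# train to. Exercises, costs, and METs are invented for illustration.

EXERCISES = {
    # exercise: (cost in dollars, METs the exercise trains)
    "live_fire_range":   (400_000, {"offensive_ops"}),
    "defensive_fieldex": (250_000, {"defensive_ops"}),
    "amphib_landing":    (900_000, {"amphibious_ops", "offensive_ops"}),
}

def projected_mets(budget):
    """Fund exercises in listed order until funds run out; return the
    set of METs the funded exercises would train (a crude stand-in for
    a projected training readiness level)."""
    trained = set()
    remaining = budget
    for cost, mets in EXERCISES.values():
        if cost <= remaining:
            remaining -= cost
            trained |= mets
    return trained
```

In this toy model, a $700,000 budget funds the first two exercises but not the amphibious landing, so amphibious operations goes untrained, which is the kind of "effect of decreased funding on readiness" question the text says commanders use the tool to explore.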
For example, unlike C2RAM, which focuses only on the training pillar of readiness for ground combat forces, PRM focused on all pillars of readiness tracked by the Marine Corps for ground combat forces and air combat forces. In addition, while PRM attempted to capture all training data, C2RAM does not. For example, it does not capture data on individual training. Moreover, while C2RAM is primarily used at the MEF level and below to help inform commanders' decisions about how much training funding to request and to identify the effect of funding on readiness, PRM was designed to help officials at Headquarters Marine Corps make service-wide decisions about budget development and resource allocation. During our review, we found data quality and classification challenges faced by both PRM and C2RAM, as discussed below. Data quality limitations. Some Headquarters Marine Corps officials questioned the accuracy and reliability of some of the data planned for use in PRM because the data had to be aggregated from multiple sources with varying degrees of internal control. In addition, officials told us that existing data were insufficient or are not currently collected, so, in some cases, PRM had to rely on the opinions of subject matter experts to determine how causal factors affect readiness. According to Marine Corps officials at various levels, C2RAM data quality is questionable because data are manually input by various sources with varying degrees of expertise. This is exacerbated by weak processes for conducting quality checks of the data. Moreover, officials stated that cost data may be inaccurate because units may neglect to update cost estimates with actual costs after a training event is completed. Further, C2RAM is not consistently used across all three MEFs. For example, when we visited II MEF, we learned that its resource management officials do not use C2RAM to build their budgets because of concerns about data quality. Classification of data.
Another challenge that both efforts faced is the classification of aggregated data. Readiness data are classified; budget data generally are not. When these data are combined, the resulting data are classified, potentially making the tool less useful and less available to officials seeking to make informed decisions about resource allocation. For example, C2RAM is currently an unclassified system that captures fiscal and training data, but not readiness data. However, officials at I MEF told us that if readiness data were incorporated into the tool, it could become classified, which would limit its availability and usefulness to lower unit levels. As the Marine Corps found in its Fiscal Year 2012 Strategic Health Assessment, its stove-piped processes often require integration at the senior leadership level to develop a comprehensive view of issues, including the effect of dollars on readiness. Development of C2RAM and PRM, however, was not integrated, resulting in two separate systems, each devoted to tackling the same problem, but in different ways and with similar weaknesses, such as data quality limitations. Moreover, there was some overlap between the two systems. For example, C2RAM was one of the many data sources for PRM. In addition, both PRM and C2RAM used some of the same data sources. For instance, both systems relied on information captured in the Marine Corps Training Information Management System as well as on data captured in SABRS. The Marine Corps assessed the feasibility of moving forward with the PRM tool and, in March 2019, while responding to a draft of this report, stated that it had decided to discontinue its development. However, the Marine Corps has not assessed C2RAM as part of an enterprise-wide performance management process that links resources to readiness.
For example, the Marine Corps could learn from the experience of commanders at the MEF level who find C2RAM useful and consider the extent to which those usability considerations could and should be brought into a service-wide model. Without conducting this analysis, the Marine Corps is unlikely to make headway in tackling the challenges posed by trying to link resources to readiness. To meet the demands of its missions, the future security environment will require military forces to train across the full range of military operations, according to DOD. While the Marine Corps continues to ask for increased funding, according to a congressional report, the Marine Corps is unable to provide sufficient detail in its O&M budget estimates for training that would allow Congress to determine the benefits gained from additional funding. The Marine Corps has been aware for many years of the importance of providing accurate budget justifications to Congress. A number of factors have made it challenging for the Marine Corps to provide Congress the information it needs. First, the Marine Corps cannot fully track training funds through the budget cycle, making it difficult for the Marine Corps to, among other things, show that training funds were spent as planned. Second, the Marine Corps has not prioritized tackling the longstanding problem of how to link training resources to readiness. Although the Marine Corps has a standing order to develop an enterprise-wide performance management process that links resources to readiness via a robust analytical framework, no single entity has been assigned the authority to enforce this order. In the absence of that leadership, certain components of the Marine Corps have developed their own independent initiatives that were designed to achieve the same objective of linking funding to readiness, but had their own specific approaches and intended uses.
Moreover, the Marine Corps has not assessed whether C2RAM provides an enterprise-wide performance management process linking resources to readiness. Until the Marine Corps assigns the authority needed to oversee development and implementation of a methodologically sound approach and assesses the degree to which C2RAM could be used, it will continue to face challenges making fully informed decisions about how much money it needs for training purposes and what it can reasonably expect to deliver for that money in terms of readiness gains. We are making the following three recommendations: The Secretary of the Navy should ensure that the Deputy Commandant for the Office of Programs and Resources oversee development and implementation of an approach to enable tracking of unit-level training funds through the budget cycle. This approach should include aligning MCPCs across the Marine Corps and ensuring consistent use of SIC codes. (Recommendation 1) The Secretary of the Navy should ensure that the Commandant of the Marine Corps designates a single entity responsible for directing, overseeing, and coordinating efforts to achieve the objective of establishing an enterprise-wide performance management process that links resources to readiness. (Recommendation 2) The Secretary of the Navy should ensure that the Commandant of the Marine Corps assesses C2RAM to determine the extent to which this system, or elements of this system, should be adapted for use in an enterprise-wide performance management process linking resources to readiness. (Recommendation 3) We provided a draft of this report to DOD for review and comment. In written comments, DOD concurred with all three of the recommendations in the draft report and stated that the Marine Corps would take actions to track unit-level training funds, link resources to readiness, and examine C2RAM, as discussed below. DOD’s comments are reprinted in appendix II. 
DOD also provided technical comments, which we incorporated as appropriate. DOD concurred with the third recommendation in the draft report that the Secretary of the Navy should ensure that the Commandant of the Marine Corps assesses C2RAM and PRM and determines the extent to which these systems, or elements of these systems, could and should be adapted for use in the enterprise-wide performance management process linking resources to readiness. In its comments, the Marine Corps stated that work to develop PRM had been discontinued because the model did not satisfy the Marine Corps' objectives. Given that the Marine Corps' decision to stop development of PRM mitigates the potential for overlapping initiatives moving forward, we revised the report and recommendation to focus on the Marine Corps assessing C2RAM for use in the enterprise-wide performance management process linking resources to readiness. The Marine Corps stated in its written response that C2RAM has potential utility for supporting an understanding of the relationship between resources and readiness and that it will examine the system further. We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Defense, the Secretary of the Navy, and the Commandant of the Marine Corps. In addition, the report is available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2775 or FieldE1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report evaluates the extent to which the Marine Corps (1) tracks unit-level Operations and Maintenance (O&M) training funds for ground combat forces through the budget cycle; and (2) links unit-level training funds for ground combat forces to readiness.
This report focuses on ground combat forces, which conduct a myriad of training activities at the Marine Expeditionary Forces (MEFs). For our first objective, we requested and analyzed budget request, allotment, and obligations data on training exercises for fiscal year 2017 from I MEF, II MEF, and III MEF. We collected data for this fiscal year because it was the most recently completed fiscal year for which actual obligated amounts could be obtained. We used this data request to determine the Marine Corps' ability to provide the data, as well as to determine the source or sources used to provide it. We discussed the systems used to provide these data—Cost to Run a Marine Expeditionary Force (C2RAM) and the Standard Accounting, Budgeting, and Reporting System (SABRS)—with knowledgeable Marine Corps officials, including the data reliability concerns with these systems that are identified in this report. We interviewed knowledgeable officials about the systems, reviewed the user guide for one of the systems, and observed how data were input and extracted to form reports. Although we found the data to be insufficient to consistently identify and fully track unit-level O&M training funding data through the budget cycle, we determined that the data we obtained were sufficiently reliable to provide information about the availability of fiscal year 2017 funding amounts requested, allotted, and obligated for unit-level training exercises, as discussed in this report. We also reviewed and analyzed data from a series of classified and unclassified reports that were issued by the Marine Corps from fiscal year 2009 through fiscal year 2014. These reports, known as the Marine Corps Strategic Health Assessments (MCSHA), evaluated the health of the Marine Corps, including its use of fiscal codes, through an enterprise-wide study of resource investments, organizational activities, and readiness outcomes.
We also reviewed data about Marine Corps Programming Codes (MCPC) and Special Interest Codes (SIC) in Marine Corps documents such as the MCSHAs as well as the Standard Accounting, Budgeting, and Reporting System (SABRS) Customer Handbook. We assessed this information against Marine Corps Order 5230.23, Performance Management Planning, which requires the Deputy Commandant for Programs and Resources to ensure visibility and traceability of funds through the budget cycle and accounting systems for all organizational units and programs, as well as Standards for Internal Control in the Federal Government, which states that management should design an entity’s information system to ensure, among other things, that data is readily available to users when needed. For our second objective, we reviewed reports and supporting documentation on Marine Corps efforts to evaluate readiness levels achieved from O&M obligations for ground combat forces training and observed the operation of systems used to track training funds and readiness. Specifically, we reviewed and analyzed the MCSHAs to identify challenges that the Marine Corps reported facing in attempting to link training funds to readiness. As a part of our review of supporting documentation, we reviewed and analyzed the MCSHAs from fiscal years 2011 through 2014 issued by the Marine Corps Office of Program Analysis and Evaluation to summarize some of the key findings identified by the Marine Corps related to linking training funds to readiness. We reviewed these reports because they were intended to provide a comprehensive overview of the health of the Marine Corps. From these reports, we identified and summarized key findings related to our review.
Specifically, one GAO analyst reviewed the four reports to identify reported findings that prevent the Marine Corps from linking resources to readiness, such as stove-piped processes and inconsistent data management processes, while a second analyst confirmed the summary from this review. We shared our summary of key findings with Marine Corps officials and they concurred. In addition, we reviewed guidance and other related documents on the Predictive Readiness Model (PRM) and Cost to Run a Marine Expeditionary Force (C2RAM). We were briefed on and observed data being input into the C2RAM model and queries being run from that data. We were able to observe the summary reports that resulted from the queries, which helped to enhance our understanding of the Marine Corps’ efforts to link training funds to readiness. In addition, we reviewed previously issued GAO reports related to the issue. We assessed this information against Marine Corps Order 5230.23, Performance Management Planning, which calls for the development and implementation of an enterprise-wide performance management process that links resources to institutional readiness via a robust analytic framework, as well as Standards for Internal Control in the Federal Government, which states that management should establish an organizational structure, assign responsibility, and delegate authority to achieve its objective. To answer the two objectives for this review, we interviewed knowledgeable officials from the following offices:
Office of the Secretary of Defense
Cost Assessment and Program Evaluation
Personnel and Readiness, Force Readiness
Headquarters Marine Corps, Washington, D.C.
Office of Programs and Resources
Budget and Execution Division
Program Analysis and Evaluation Division
Command, Control, Communications, and Computers
Marine Forces Command – Norfolk, Virginia
Marine Corps Training and Education Command – Quantico, Virginia
I Marine Expeditionary Force – Camp Pendleton, California
II Marine Expeditionary Force – Camp Lejeune, North Carolina
III Marine Expeditionary Force – Okinawa, Japan.
We conducted this performance audit from August 2017 to April 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact name above, Margaret Best, Assistant Director; William J. Cordrey; Pamela Davidson; Angela Kaylor; Amie Lesser; Tamiya Lunsford; Samuel Moore, III; Shahrzad Nikoo; Clarice Ransom; Cary Russell; Matthew Ullengren; and Sonja Ware made key contributions to this report.
|
Training is key to building readiness—the military's ability to fight and meet the demands of its missions. Through the Department of Defense (DOD) budget cycle, the Marine Corps estimates or programs its funding needs for training and spends funds to accomplish its training mission. Questions have been raised about whether the Marine Corps' training budget estimates are sufficiently detailed to determine training costs at the unit level or the expected readiness generated by those costs. House Report 115-200 included a provision for GAO to examine the military services' budgeting processes to build unit-level training readiness. This report examines the extent to which the Marine Corps (1) tracks unit-level training funds for ground combat forces through the budget cycle, and (2) links ground combat forces' unit-level training funds to readiness. GAO analyzed budget data and studies conducted by the Marine Corps and others, examined tools used by units to link training funds with readiness, and interviewed knowledgeable officials at various levels in the Marine Corps. The Marine Corps cannot fully track all unit-level training funds for ground combat forces through the budget cycle. According to GAO's analysis of data provided by the Marine Expeditionary Forces (MEFs), the Marine Corps' principal warfighting organizations, units can track some, but not all, funds for training exercises from the budget request through use of the funds. The Marine Corps cannot fully track all training funds through the budget cycle, in part, because it has not established the consistent use of fiscal codes. Two key fiscal codes that officials identified as relevant to track funds for unit-level training are the Marine Corps Programming Code (MCPC) and the Special Interest Code (SIC). The Marine Corps uses MCPCs to program funds, but GAO found that when the Marine Corps spends those funds, it uses a different set of fiscal codes.
This makes it difficult to link the programmed intent of funds to the execution of those funds. The Marine Corps uses SICs to track funds associated with training exercises, but GAO found that units do not use SICs consistently. For example, officials at all three MEFs told GAO that units generate SICs for large-scale training exercises, but may not do so for small-scale exercises. The Marine Corps is taking steps to align fiscal codes across the budget cycle, but this effort is in its early stages and will not include MCPCs, and may not address the inconsistent use of SICs. Without the ability to track unit-level training funds through the budget cycle, the Marine Corps lacks readily available data to assess whether funds were obligated consistent with their programmed intent and to adequately forecast and defend budget requests for training. Although internal Marine Corps assessments and guidance state that the Marine Corps needs an enterprise-wide process to link resources to readiness, the Marine Corps has made little progress establishing a link between training funds for ground combat forces and readiness. The Marine Corps identified challenges with linking funds to readiness in a series of reports from fiscal years 2009 through 2014, citing factors such as stove-piped efforts and limited data availability and quality. Guidance directed that the Deputy Commandant for Programs and Resources organize quality coordination events with key stakeholders to synchronize activities within major lines of effort, but officials from this office stated that they have not been given the authority to direct the various efforts. Therefore, challenges have persisted, in part, because the Marine Corps has not designated a single entity with authority to oversee and coordinate efforts to link training funds to readiness. 
In the absence of a single oversight entity, two separate and overlapping tools were developed—the Cost to Run a MEF (C2RAM) tool and the Predictive Readiness Model (PRM). Although each tool had its own particular use and design, both were intended to link resources to readiness. Moreover, both faced similar challenges, such as data quality limitations, and relied on some of the same data sources. The Marine Corps recently assessed and discontinued development of PRM; however, it has not assessed C2RAM and how it could support an enterprise-wide performance management process linking resources to readiness. Without designating a single entity with authority, and conducting an assessment of C2RAM, the Marine Corps is unlikely to make headway in addressing the challenges posed by trying to link resources to readiness. GAO recommends that the Marine Corps (1) track training funds through the budget cycle, (2) designate a single entity to oversee establishment of a process that links resources to readiness, and (3) conduct an assessment of C2RAM. DOD concurred, and based on its comments, GAO modified one recommendation.
|
This section describes (1) NNSA’s weapons design and production sites; (2) the framework for managing LEPs, known as the Phase 6.X process, and NNSA’s program execution instruction; and (3) NNSA’s technology development and assessment process. NNSA oversees three national security laboratories—Lawrence Livermore in California, Los Alamos in New Mexico, and Sandia in New Mexico and California. Lawrence Livermore and Los Alamos are the design laboratories for the nuclear components of a weapon, while Sandia works with both to design nonnuclear components and serves as the system integrator. Los Alamos led the original design of the W78, but Lawrence Livermore is leading current efforts to design the replacement warhead. NNSA also oversees four nuclear weapons production plants—the Pantex Plant in Texas, the Y-12 National Security Complex in Tennessee, the Kansas City National Security Campus in Missouri, and the Savannah River Site in South Carolina. In general, the Pantex Plant assembles, maintains, and dismantles nuclear weapons; the Y-12 National Security Complex produces the secondary and the radiation case; the Kansas City National Security Campus produces nonnuclear components; and the Savannah River Site replenishes a component known as a gas transfer system that transfers boost gas to the primary during detonation. DOD and NNSA have established a process, known as the Phase 6.X process, to manage life extension programs. According to a Nuclear Weapons Council document, NNSA’s Office of Defense Programs will follow this process to manage a W78 replacement program. As shown in figure 1, this process includes key phases or milestones that a nuclear weapon LEP must undertake before proceeding to subsequent steps. In January 2017, while the program was still suspended, NNSA issued a supplemental directive that defines additional activities that NNSA offices should conduct in support of the Phase 6.X process.
For example, as discussed below, NNSA’s supplemental directive established a new requirement during Phase 6.1 (Concept Assessment) that NNSA conduct a technology readiness assessment of technologies proposed for potential use in the warhead. In addition, NNSA’s Office of Defense Programs issued a program execution instruction that defines enhanced program management functions for an LEP and other programs. This instruction also describes the level of program management rigor that the LEP must achieve as it advances through the Phase 6.X process. According to NNSA’s Fiscal Year 2018 Stockpile Stewardship and Management Plan, NNSA extends the life of existing U.S. nuclear warheads by replacing aged nuclear and non-nuclear components with modern technologies. In replacing these components, NNSA seeks approaches that will increase safety, improve security, and address defects in the warhead. Several technologies are frequently developed concurrently before one approach is selected. According to NNSA’s Fiscal Year 2018 Stockpile Stewardship and Management Plan, this approach allows selection of the option that best meets warhead requirements and reduces the risks and costs associated with an LEP. NNSA conducts technology readiness assessments to provide a snapshot in time of the maturity of technologies and their readiness for insertion into a program’s design and schedule, according to NNSA’s guidance. NNSA’s assessments also look at the ability to manufacture the technology. NNSA measures technological maturity using technology readiness levels (TRLs) on a scale from TRL 1 (basic principles developed) through TRL 9 (actual system operation). Similarly, NNSA measures manufacturing readiness using manufacturing readiness levels (MRLs) on a scale from MRL 1 (basic manufacturing implications identified) through MRL 9 (capability in place to begin full rate production).
According to NNSA’s guidance, NNSA recommends but does not require that an LEP’s critical technologies reach TRL 5 (technology components are integrated with realistic supporting elements) at the beginning of Phase 6.3 (Development Engineering). At the end of Phase 6.3, it recommends that a technology be judged to have achieved MRL 5 (capability to produce prototype components in a production relevant environment). However, according to NNSA officials, lower TRLs and MRLs may be accepted in circumstances where a technology is close to achieving the desired levels or the program team judges that the benefit of the technology is high and worth the increased risk that it may not be sufficiently mature when the program needs it. NNSA has taken steps to prepare to restart a program to replace the W78 nuclear warhead capability. According to NNSA officials, these steps are typically needed to conduct any LEP. Therefore, they can be undertaken despite the uncertainty about whether the final program will develop the warhead for the Air Force only or for both the Air Force and the Navy. Specifically, NNSA has (1) taken initial steps to establish the program management functions needed to execute the program and assemble personnel for a program management team; (2) assessed technologies that have been under development while the program was suspended that could potentially be used to support a W78 replacement; and (3) initiated plans for the facilities and capabilities needed to provide the nuclear and nonnuclear components for the warhead. At the time of our review, NNSA and DOD officials stated that, in response to the 2018 NPR, they planned to restart a program that would focus on replacing the capabilities of the W78 for the Air Force; however, the extent to which the program would focus on providing a nuclear explosive package for the Navy was uncertain. 
DOD officials said that the Navy plans to complete a study examining the feasibility of using the nuclear explosive package developed for the W78 replacement warhead in its SLBM system by the end of fiscal year 2019. According to DOD officials, the Nuclear Weapons Council will make a decision about developing an interoperable warhead for the Air Force and the Navy based on the results of the study but, as of August 2018, had not established time frames for making that decision. According to Air Force and NNSA officials, if the Nuclear Weapons Council decided that the Navy should participate in the program, then NNSA would not need to redo the work planned for fiscal year 2019. NNSA has taken initial steps to establish the program management functions needed to execute the program and assemble personnel for a program management team, as follows: Program management. In fiscal year 2018, NNSA started to establish the program management functions needed to execute a W78 replacement program, as required in the Office of Defense Programs’ program execution instruction. In preparation for the program restart, NNSA assigned a manager for a W78 replacement program who is taking or plans to take steps to implement these functions. For example, among other steps, the W78 replacement program manager told us that he had started developing the risk management plan to define the process for identifying and mitigating risks that may impact the program. The program manager also said NNSA had started to adapt a standardized work breakdown structure for life extension programs to define and organize the W78 replacement program’s work scope for restart. An initial version of this work breakdown structure would be completed before the program restarts in fiscal year 2019, according to the program manager. Further, as NNSA refines the scope of work, the agency will refine and tailor the work breakdown structure. 
At the time of our review, this work was under development and therefore we were not able to review these plans and tools. In addition, as of July 2018, NNSA had created a preliminary schedule for a W78 replacement program under the Phase 6.X process (see fig. 2). According to NNSA’s preliminary schedule, the program will:
Restart in Phase 6.2 (Feasibility and Design Options) in the third quarter of fiscal year 2019. NNSA previously completed Phase 6.1 and was authorized by the Nuclear Weapons Council to start Phase 6.2 in June 2012. During Phase 6.2, NNSA plans to, among other things, select design options and develop cost estimates of the selected design options.
Conduct Phase 6.2A (Design Definition and Cost Study) for one year beginning in the fourth quarter of fiscal year 2021. During this phase, for example, NNSA plans to develop a preliminary cost estimate for the program, called a weapons design and cost report, and also produce an independent cost estimate.
Start Phase 6.3 (Development Engineering) in the fourth quarter of fiscal year 2022 and transition to Phase 6.4 (Production Engineering) in the mid-2020s. During these phases, NNSA will develop the final design as well as begin producing selected acquisition reports, which detail the total program cost, schedule, and performance, among other things. According to the W78 program manager, the military characteristics will be finalized in Phase 6.4, and before that point DOD will continue to update the requirements.
Achieve production of the first warhead—Phase 6.5—by the second quarter of fiscal year 2030 so that it can be fielded on the Air Force’s planned Ground Based Strategic Deterrent that same year.
Start Phase 6.6 (Full Scale Production) by the second quarter of fiscal year 2031.
When the program restarts in fiscal year 2019, NNSA intends to develop or finalize initial versions of other plans and tools such as a requirements management plan, according to the program manager.
(See appendix I for a detailed description of the steps NNSA is taking or plans to take to establish the program management functions needed to execute a W78 replacement program, according to the manager for the W78 replacement program.) The program manager also told us that as the program progresses through Phases 6.2 (Feasibility and Design Options), 6.2A (Design Definition and Cost Study), and 6.3 (Development Engineering), NNSA will increase the maturity of the program management processes and tools, consistent with the Office of Defense Programs’ program execution instruction. For example, in Phases 6.2 and 6.2A, NNSA intends to establish an earned value management (EVM) system—used to measure the performance of large, complex programs. In Phase 6.3, NNSA will further develop the system to be consistent with DOE and industry standards, as specified in the program execution instruction. NNSA officials said they will need to achieve sufficient program management rigor in Phase 6.3 to effectively report to Congress on the status and performance of the program as NNSA develops cost and schedule baselines. Personnel. At the time of our review, NNSA was reconstituting a program management team. Specifically, as mentioned above, NNSA assigned a new program manager in March 2017. In the spring of 2018, NNSA began assigning additional federal staff and contractor support to help ramp up the program in advance of the fiscal year 2019 restart date. According to the program manager, he expected to complete a plan in the late summer or early fall of 2018 that NNSA could use to hire additional federal staff needed to manage the program in fiscal year 2019. The advanced development and implementation of staffing plans prior to each phase of an LEP was a key lesson learned from an NNSA review of another LEP—the W76-1.
While the program was suspended, NNSA supported other programs that developed weapons technologies—including materials and manufacturing processes—that could potentially be used by the W78 replacement program and potentially by other future life extension programs. Specifically, according to NNSA officials, NNSA supported the development of technologies through ongoing LEPs (such as the W80-4 LEP) and other technology maturation projects (such as the Joint Technology Demonstrator) that could support future LEPs. For example, the W80-4 program has supported development at Lawrence Livermore of certain new materials as a risk mitigation strategy in case certain legacy materials used in the secondary are not available. According to NNSA officials, NNSA will likely continue to develop these new materials for use in future weapons, including the W78 replacement. In addition, contractors at Lawrence Livermore told us that test demonstrations conducted under the Joint Technology Demonstrator have helped to mature potential technologies for a W78 replacement. Examples they cited included additively manufactured mounts and cushions for securing and stabilizing the nuclear explosive package inside the Air Force’s aeroshell. In May 2018, in anticipation of the restart of a W78 replacement program and to retroactively address NNSA’s new supplemental requirement to conduct a technology readiness assessment in Phase 6.1, NNSA’s Office of Systems Engineering and Integration completed a technology readiness assessment that evaluated the maturity of technologies potentially available for the W78 replacement program. According to NNSA officials, the assessment identified and evaluated technologies that NNSA would have available for the next LEP, irrespective of whether the final program will replace the W78 warhead in ICBMs only or will also be used in the Navy’s SLBMs. The assessment evaluated 126 technologies based on proposals from the laboratories and production sites. 
As shown in table 1 below, the proposals related to key functional areas of the warhead, including the nuclear explosive package and the arming, fuzing, and firing mechanism—which provides signaling that initiates the nuclear explosive chain. For the W78 warhead replacement, DOD divided the military characteristics into two categories: threshold or minimum requirements (or “needs”) and objective or optional requirements (or “wants”). NNSA’s assessment grouped the technologies into one of three categories, as follows:
Must do. A technology deemed “must do” means that it is the only technology available that can meet a minimum requirement (or “need”) for the warhead to function. The technology that previously fulfilled this requirement is generally obsolete or no longer produced, and there are no alternatives.
Must do (trade space). “Must do (trade space)” technologies fulfill a minimum requirement (or “need”) for the warhead, but there are two or more technologies that could meet this need. NNSA must evaluate and select which technology it will use to fulfill the need.
Trade space. “Trade space” technologies are those that can meet an optional requirement (or “want”) for the warhead.
Among the nine “must do” technologies that NNSA evaluated, for example, was a new manufacturing process being developed at Sandia to produce a type of magnesium oxide—needed for use in the thermal batteries that power the warhead’s firing mechanism—that is no longer available from a vendor and for which NNSA’s existing supplies are limited. For this new process, the assessment team estimated that it had completed TRL 1 (basic principles developed) but had not yet reached MRL 1 (basic manufacturing implications identified). The technology readiness assessment noted that for technologies with a TRL of 3 or less, an MRL of 1 or less is expected.
In addition, according to the report, Sandia estimated that it may cost about $7.1 million to develop the material and manufacturing process to TRL 5 and MRL 4 during fiscal years 2018 through 2023—when the program is slated to reach Phase 6.3—to achieve a level of readiness where it could potentially be included in the design of the W78 replacement warhead. Among the 59 “must do (trade space)” technologies that NNSA evaluated were, for example, two new gas transfer system technologies developed by Sandia that may offer advantages compared with the existing technology. A gas transfer system is a required capability (or “must do”) but, according to the technology readiness assessment report, NNSA needs to compare the costs, benefits, and risks of these new technologies with the traditional technology (i.e., evaluate the “trade space”) and make a selection among them. The first new technology was a gas transfer system bottle made out of aluminum that could be cheaper, weigh less, and last longer than the gas transfer system used in the W78. According to the technology readiness assessment report, the assessment team estimated the aluminum-based bottle had completed TRL 2 but did not have enough information to estimate an MRL. Sandia estimated that it would cost about $6.5 million to achieve TRL 5 and MRL 4 during fiscal years 2018 through 2023. The second Sandia technology involved an advanced gas transfer system technology. The assessment team estimated that this technology had completed TRL 3 but did not have enough information to estimate an MRL. Sandia estimated that it would cost about $5.4 million to achieve TRL 5 and MRL 4 during fiscal years 2018 through 2023. According to the technology readiness assessment report, NNSA will need to further evaluate these approaches as well as the traditional technology to make a selection for a W78 replacement program. 
The 75 “trade space” technologies that the assessment team evaluated included, for example, several proposed by Lawrence Livermore, Los Alamos, and Sandia for providing an advanced safety feature to prevent unauthorized detonation of the warhead. As mentioned above, when NNSA extends the life of existing U.S. nuclear warheads it also seeks approaches that will increase the safety and improve security of the warhead. According to the report, the laboratories proposed similar concepts that varied in maturity levels and estimated costs for further development. Specifically, the assessment team estimated the Lawrence Livermore and Los Alamos technologies to have completed TRL 4 and Sandia’s proposal to have completed TRL 3. Regarding MRLs, the assessment team also estimated Lawrence Livermore’s technology to have completed MRL 1, Los Alamos’s technology to be at MRL 1, and did not have enough information to estimate the MRL for Sandia’s technology. In addition, according to the report, Lawrence Livermore estimated costs of about $31.2 million to $45.6 million to further mature its technology during fiscal years 2018 through 2023. Los Alamos estimated costs of about $72.1 million to $154.5 million to further mature its technology during the same period. Sandia estimated costs of about $8.2 million to further mature its technology during the same period. Because the feature is not a minimum requirement, NNSA officials told us that they are continuing to evaluate the costs, benefits, and risks of including the feature. According to NNSA’s manager for the W78 replacement program and key staff involved in preparing to restart the program, when the program restarts in fiscal year 2019 they will use the assessment to identify specific technologies or groups of technologies (i.e., trade spaces) to further evaluate for potential use in the warhead. 
These officials said they will continue evaluating technologies and make selections of preferred options at the same time that the warhead’s program requirements and priorities are refined during Phases 6.2 and 6.2A. According to the program manager, NNSA will produce a technology development plan for technologies selected for a W78 replacement during Phases 6.2 and 6.2A that will identify the current readiness levels of the technologies, key risks, and estimated costs to bring them to TRL 5 in Phase 6.3. In addition, the technology readiness assessment team made several recommendations to the NNSA Deputy Administrator for Defense Programs regarding the development of technologies that could provide benefits to the nuclear security enterprise overall. For example, the assessment team observed that 21 of the proposed technologies for a W78 replacement involved the use of additive manufacturing. The assessment noted that, if successful, these technologies could reduce component production costs and schedule risks for future LEPs compared to current methods. The team recommended that the Office of Defense Programs conduct an analysis to validate these capabilities and develop a nuclear enterprise-wide effort to address additive manufacturing for a W78 replacement, future LEPs, and other applications. According to the NNSA official who led the assessment, at the time of our review, the assessment team was preparing to present its enterprise-wide recommendations to the Office of Defense Programs’ senior leadership; therefore, specific follow-on actions had not yet been decided. The manager of the W78 replacement program said that he has begun to identify the facilities and capabilities at the laboratories and production sites that will be needed to provide the nuclear and nonnuclear components for a W78 replacement, and plans to draft formal agreements to help ensure coordination with them.
According to the program manager, collecting the information that identifies facilities and capabilities—including a rough idea of key milestone dates for when the program will need to use them—is the first step in producing a major impact report, which is required upon completion of Phase 6.2 and accompanies the final Phase 6.2 study report delivered to the Nuclear Weapons Council. Among other things, a major impact report identifies aspects of the program—including facilities and capabilities to support it— that could affect the program’s schedule and technical risk, according to the Phase 6.X guidelines. According to an NNSA official and contractor representatives, many of the existing nuclear and nonnuclear components of the W78 are outdated or unusable and a W78 replacement will need all newly manufactured components. As a result, NNSA will need to exercise numerous manufacturing capabilities in support of this effort, and the facilities and capabilities must be ready to support the work. However, many of the facilities that may be needed to provide components for a W78 replacement program are outdated and are undergoing modernization to either build new facilities or repair existing facilities and capabilities, which represents a critical external risk to the program. According to NNSA’s Fiscal Year 2018 Stockpile Stewardship and Management Plan, these planned modernization activities will require sustained and predictable funding over many years to ensure they are available to support the weapons programs. Some examples of NNSA activities to build or repair facilities and capabilities that will provide nuclear or nonnuclear components for a W78 replacement warhead—and which may have schedule, cost, or capacity issues that could impact the program— include: Plutonium pit production facilities. NNSA does not currently have the capability to manufacture sufficient quantities of plutonium pits for a W78 replacement program. 
NNSA’s Fiscal Year 2018 Stockpile Stewardship and Management Plan stated that the agency will increase its capability to produce new pits over time, from 10 pits per year in fiscal year 2024 to 30 pits per year in fiscal year 2026, and as many as 50 to 80 pits per year by 2030. NNSA is refurbishing its pit production capabilities at Los Alamos to produce at least 30 pits per year. In addition, in May 2018, NNSA announced its intention to repurpose the Mixed Oxide Fuel Fabrication Facility at the Savannah River Site in South Carolina to produce at least an additional 50 pits per year by 2030. NNSA officials told us that they will need both the Los Alamos and Savannah River pit production capabilities to meet anticipated pit requirements for the W78 replacement program and for future warhead programs.

Uranium processing facilities. NNSA’s construction of the Uranium Processing Facility at the Y-12 National Security Complex will help ensure NNSA’s continued ability to produce uranium components for the W78 replacement program. NNSA plans to complete the facility for no more than $6.5 billion by the end of 2025—approximately 4 years before the scheduled delivery of the first production unit of a W78 replacement warhead. This effort is part of a larger NNSA plan to relocate and modernize other enriched uranium capabilities, moving them from a legacy building at the Y-12 National Security Complex to other existing buildings or to newly constructed buildings.

Lithium production facility. NNSA will require lithium for a W78 replacement warhead. The United States no longer maintains full lithium production capabilities and relies on recycling as the only source of lithium for nuclear weapon systems. According to the Fiscal Year 2018 Stockpile Stewardship and Management Plan, NNSA has analyzed options to construct a new lithium production facility; a conceptual design effort is next, and the new facility has an estimated completion date of fiscal year 2027.
Until the facility is available, NNSA has developed a bridging strategy to fill the interim supply gaps.

Radiation-hardened microelectronics facility. Nuclear warheads, such as a W78 replacement warhead, include electronics that must function reliably in a range of operational environments. NNSA has a facility at Sandia that produces custom, strategic radiation-hardened microelectronics for nuclear weapons. In August 2018, NNSA officials told us that this facility, known as Microsystems and Engineering Sciences Applications, can remain viable until 2040 but would need additional investment.

The W78 replacement program manager told us that the need for newly manufactured components, coupled with the scale of NNSA’s modernization activities, means that a comprehensive coordination effort will be necessary to ensure that the facilities and capabilities are ready to provide components for the warhead by the end of the 2020s. Because these activities are separately managed and supported outside the W78 replacement program, NNSA considers progress on them to represent a critical external risk to the program. NNSA is taking or plans to take some actions to mitigate this external risk at the program and agency levels. One step the program plans to take to address this risk is to draft formal agreements—called interface requirements agreements—with other NNSA program offices that oversee the deliverables and schedules for the design, production, and test facilities needed for the program. These agreements describe the work to be provided by these external programs, including milestone dates for completing the work; funding; and any risks to cost, schedule, or performance. The W78 program manager stated that such agreements are generally drafted toward the end of Phase 6.2 through Phase 6.2A and largely finalized in Phase 6.3—though small adjustments may be made into Phase 6.4 (Production Engineering).
At the agency level, in response to a direction in the 2018 NPR, NNSA officials told us that the agency is also developing an agency-wide integrated master schedule that is intended to align NNSA’s enterprise-wide modernization schedule with milestone delivery dates for nuclear weapons components. The W78 program manager and other NNSA officials told us that the information they provide on the facilities and capabilities needed, as well as milestone dates, will be integrated into this schedule and used to help ensure that the facilities and capabilities are ready to support the program.

We provided a draft of this report to NNSA and DOD for comment. NNSA and DOD provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Defense and Energy, the Administrator of NNSA, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or bawdena@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to the report are listed in appendix II.

The table below identifies the steps NNSA is taking or plans to take to establish the program management functions needed to execute a W78 replacement program. NNSA was directed by the Nuclear Weapons Council to suspend the program in fiscal year 2014, and the 2018 Nuclear Posture Review directed NNSA to restart the program in fiscal year 2019. The NNSA Office of Defense Programs’ program execution instruction defines enhanced program management functions for a warhead life extension program (LEP) such as the W78 replacement program and other programs.
The instruction also describes the level of program management rigor that the LEP must achieve as it advances through the Department of Defense and NNSA process for managing life extension programs, called the Phase 6.X process. This process includes key phases or milestones that a nuclear weapon life extension program must complete before proceeding to subsequent steps. NNSA completed Phase 6.1 (Concept Assessment) and started Phase 6.2 (Feasibility and Design Options) activities before the program was suspended in fiscal year 2014. NNSA, therefore, plans to restart the program in Phase 6.2.

Allison B. Bawden, (202) 512-3841 or bawdena@gao.gov. In addition to the individual named above, William Hoehn (Assistant Director), Brian M. Friedman (Analyst in Charge), and Julia T. Coulter made significant contributions to this report. Also contributing to this report were Antoinette Capaccio, Pamela Davidson, Penney Harwell Caramia, Greg Marchand, Diana Moldafsky, Cynthia Norris, Katrina Pekar-Carpenter, and Sara Sullivan.
|
The Department of Defense and NNSA have sought for nearly a decade to replace the capabilities of the aging W78 nuclear warhead used by the U.S. Air Force. NNSA undertakes LEPs to refurbish or replace the capabilities of nuclear weapons components. In fiscal year 2014, NNSA was directed to suspend a program that was evaluating a capability that could replace the W78 and also be used by the U.S. Navy. NNSA's most recent estimate—reported in October 2018—was that the combined program would cost about $10 billion to $15 billion. NNSA has been directed by the 2018 Nuclear Posture Review to restart a program to replace the W78 for the Air Force in fiscal year 2019. The 2018 Nuclear Posture Review also directed NNSA and the Navy to further evaluate whether the Navy could also use the warhead.

Senate report 115-125 included a provision for GAO to review NNSA's progress on the program to replace the W78. GAO's report describes NNSA's steps in key early planning areas—including program management, technology assessment, and coordination with facilities and capabilities—to prepare to restart a program to replace the W78. GAO reviewed documentation on areas such as program management, technologies, and facilities needed for the program, and interviewed NNSA and DOD officials.

The Department of Energy's National Nuclear Security Administration (NNSA) has taken steps to prepare to restart a life extension program (LEP) to replace the capabilities of the Air Force's W78 nuclear warhead—a program that was previously suspended. According to NNSA officials, these steps are typically needed to conduct any LEP. Therefore, they can be undertaken despite the current uncertainty about whether the final program will develop the warhead for the Air Force only or for both the Air Force and the Navy. Specifically, NNSA has taken the steps described below.

Program management.
NNSA has begun to establish the program management functions needed to execute a W78 replacement program, as required by NNSA's program execution instruction. For example, NNSA has started to develop a risk management plan to define the process for identifying and mitigating risks. In addition, NNSA has created a preliminary schedule to restart the program in fiscal year 2019 in the feasibility and design options phase, with the goal of producing the first unit in fiscal year 2030. (See figure.)

Technology assessment. In May 2018, NNSA completed an assessment of 126 technologies for potential use in a W78 replacement. These included nine technologies that are needed to replace obsolete or no longer available technologies or materials. These are considered “must-do” because they are the only technologies or materials available to meet minimum warhead requirements established by the Department of Defense and NNSA. NNSA officials said that in fiscal year 2019 they will use the assessment to further evaluate technologies for potential use in the warhead.

Coordination with facilities and capabilities. NNSA's program manager is identifying the facilities and capabilities needed to provide components for the warhead. This information will be used to produce a report that identifies aspects of the program—including facilities and capabilities to support it—that could affect the program's schedule and technical risk. However, several of the needed facilities must be built or repaired, and these activities are separately managed and supported outside the W78 replacement program—representing a critical external risk to the program. As mitigation, the program intends to coordinate with the offices that oversee these facilities to draft agreements that describe the work to be performed and timeframes, among other things.

GAO is not making recommendations. NNSA and DOD provided technical comments, which GAO incorporated as appropriate.
|
The Navy currently has 51 attack submarines—comprising 33 Los Angeles class, 3 Seawolf class, and 15 Virginia class submarines (see fig. 1). Most attack submarines are homeported at bases in the United States—in New London, Connecticut; Pearl Harbor, Hawaii; Norfolk, Virginia; San Diego, California; and Bangor, Washington—while 4 are homeported overseas, in the U.S. territory of Guam.

Submarine Safety Controls and Culture. On April 10, 1963, the USS Thresher (SSN 593) sank during deep submergence tests off the coast of New England. One hundred and twelve officers and enlisted sailors and 17 civilians perished in the tragedy. The accident investigation concluded that the Navy did not have adequate procedures in place to prevent and respond to a catastrophic flooding incident.

Submarine fleet and squadron officials emphasized the strict safety culture that permeates the submarine community, noting that maintenance delays reduce the amount of time during which ships and submarines are available for training and operations. This emphasis on meeting safety certification criteria means that the Navy operates a supply-based submarine force that does not compromise on adherence to training and maintenance standards to meet combatant commander demands, according to these officials (see sidebar). Officials added that the Navy will delay deployment dates if necessary to ensure that these standards are met. As a result, deployed readiness is high and attack submarines are in excellent materiel condition as compared with the rest of the Navy fleet.

The loss of the USS Thresher and its crew spurred the Navy to establish stringent safety requirements for submarines to prevent another loss at sea. Following the accident, the Navy established submarine safety certification criteria to provide maximum reasonable assurance that critical systems would protect the crew from flooding and allow the submarine to conduct an emergency surfacing should flooding occur.
This program, known as SUBSAFE, is still in use today to ensure that these critical systems receive a high quality of work and that all work is properly documented. According to the Navy, the SUBSAFE certification status of a submarine is fundamental to its mission capability, as it provides a thorough and systematic approach to quality and contributes to a culture that permeates the entire submarine community. According to Navy officials, since the SUBSAFE program was established in 1963, no SUBSAFE-certified submarine has ever been lost.

The Navy has been unable to begin or complete the vast majority of its attack submarine maintenance periods on time, resulting in significant maintenance delays and operating and support cost expenditures. Our analysis of Navy maintenance data shows that between fiscal year 2008 and the end of fiscal year 2018, attack submarines will have incurred 10,363 days of idle time and maintenance delays as a result of delays in getting into and out of the shipyards. Our analysis found that the primary driver affecting attack submarines is delays in completing depot maintenance. For example, of the 10,363 total days of lost time since fiscal year 2008, 8,472 (82 percent) were due to depot maintenance delays. As we previously reported, completing ship and submarine maintenance on time is essential to Navy readiness, as maintenance periods lasting longer than planned could reduce the number of days during which ships and crews are available for training or operations.

Attack submarines also face delays in beginning maintenance when the public shipyards have no available capacity, in some cases forcing submarines to idle pierside because they are no longer certified to conduct normal operations. According to Navy officials, the SUBSAFE program—its program to ensure and certify submarine safety—requires submarines to adhere to strict maintenance schedules and pass materiel condition assessments before they are allowed to submerge.
Attack submarines that go too long without receiving required maintenance are at risk of having their materiel certification expire. Should this certification expire, these submarines are restricted to sitting idle, pierside, while they wait until a shipyard has the capacity to begin their maintenance period (see fig. 2). We found that since fiscal year 2008, 14 attack submarines have spent a combined 61 months (1,891 days) idling while waiting to enter shipyards for maintenance. Idle time incurred while waiting to begin a maintenance period is often coupled with maintenance delays while at the shipyards, thus compounding total delays.

We also found that the Navy incurs significant costs in operating and supporting submarines that are experiencing maintenance delays and idle time. We analyzed the operating and support costs the Navy incurs on average to estimate the costs of crewing, maintaining, and supporting attack submarines that are delayed in getting into and out of the shipyards. Using historical daily cost data, adjusted by the Navy for inflation, we estimated that since fiscal year 2008 the Navy has spent more than $1.5 billion in fiscal year 2018 constant dollars on attack submarines sitting idle while waiting to enter the shipyards, and on those delayed in completing their maintenance at the shipyards (see table 1). While the Navy would incur these costs regardless of whether the submarines were delayed, idled, or deployed, our estimate of $1.5 billion represents costs incurred from fiscal year 2008 through fiscal year 2018 on attack submarines without the Navy receiving any operational capability in return. While acknowledging the magnitude of these costs, Navy officials stated that some benefit may be realized from these operating and support costs, since crews on idle attack submarines can conduct some limited training.
Operating and support costs include payment of crew salaries, purchasing of spare parts, and conducting of maintenance, among other things, but they do not represent the full operational impact that the Navy incurs from the idle time and maintenance delays. For example, attack submarine depot-level maintenance requires the use of a drydock, and officials from the three public shipyards we visited told us that their drydock capacity was limited. A delayed attack submarine maintenance period can restrict the use of a drydock for much longer than originally anticipated, thereby preventing the shipyard from using that drydock to maintain other vessels, including other types of ships, or to conduct necessary repairs on the facilities.

The Navy has started to address workforce shortages and facilities needs at the public shipyards. These efforts to address the Navy’s maintenance challenges are important steps, but they will require several years of sustained management attention to reach fruition. As we reported in September 2017, maintenance on ships and submarines may be delayed for numerous reasons, including workforce gaps and inexperience, the poor condition of facilities and equipment, parts shortages, changes in planned maintenance work, and weather. According to Navy officials, all of these issues continue to affect the Navy’s ability to complete attack submarine maintenance on time. According to officials, the Navy has begun to address some of these challenges. For example:

The public shipyards have been hiring to address workforce shortages. The number of civilian full-time employees at the shipyards increased from 25,087 in 2007 to 34,160 in 2017, with a goal to reach 36,100 by 2020. Navy officials cautioned that this newly hired workforce is largely inexperienced and will require time to attain full proficiency.

The Navy has released a plan to guide public shipyard capital investments.
In September 2017 we reported that the Navy projected an inability to support 50 planned submarine maintenance periods over the ensuing 23 years, due to capacity and capability shortfalls at the public shipyards. We recommended that the Navy develop a comprehensive plan for shipyard capital investment. In February 2018 the Navy published its shipyard optimization plan, outlining an estimated $21 billion investment needed to address shipyard facility and equipment needs over 20 years to meet the operational needs of the current Navy fleet, but not the larger fleet size planned for the future.

While the public shipyards have operated above capacity for the past several years, attack submarine maintenance delays are getting longer and idle time is increasing. The Navy expects the maintenance backlogs at the public shipyards to continue. We estimate that, as a result of these backlogs, the Navy will incur approximately $266 million in operating and support costs in fiscal year 2018 constant dollars for idle submarines from fiscal year 2018 through fiscal year 2023, as well as additional depot maintenance delays.

The Navy may have options to mitigate idle time and maintenance delays. For example, officials at the private shipyards—General Dynamics Electric Boat and Huntington Ingalls Industries-Newport News Shipbuilding—told us that they will have available capacity for repair work for at least the next 5 years. Although the Navy has shifted about 8 million man-hours of attack submarine maintenance to private shipyards over the past 5 years, it has done so sporadically, in some cases deciding to do so only after experiencing lengthy periods of idle time. According to private shipyard officials, the sporadic shifts in workload have resulted in repair workload gaps that have disrupted private shipyard workforce, performance, and capital investment—creating costs that are ultimately borne in part by the Navy.
We believe that the Navy has not fully mitigated this challenge because it has not completed a comprehensive business case analysis to inform maintenance workload allocation across public and private shipyards, and to proactively minimize attack submarine idle time and maintenance delays. Such an analysis would help the Navy better assess private shipyard capacity to perform attack submarine maintenance and would help it incorporate a complete accounting of all costs, benefits, and risks, including: the large operating and support costs of having attack submarines sitting idle; the qualitative benefits associated with providing additional availability to the combatant commanders; and the potential for additional work at private shipyards to reduce schedule risk to submarine construction programs by allowing the yards to build and maintain a stable shipyard workforce. The April 2011 DOD Product Support Business Case Analysis Guidebook provides standards for DOD’s process for conducting analyses of costs, benefits, and risks. It states that data sources used to conduct a business case analysis should be comprehensive and should include both quantitative and qualitative values. It notes that benefits, such as the availability of a weapon system, may be qualitative in nature, and that DOD should evaluate all possible support options, to include government- and contractor-provided maintenance. Navy leadership has acknowledged that they need to be more proactive in leveraging private shipyard repair capacity, but officials cautioned that maintenance could cost more at a private shipyard than at a public shipyard. However, without a complete accounting of all costs, benefits, and risks, the Navy will remain unable to determine whether the cost of performing a maintenance period at a private shipyard would outweigh the mission benefits of having reduced idle time, additional operational availability, and the potential for reduced risk to submarine construction programs. 
The nation’s investment in attack submarines provides the United States an asymmetric advantage to gather intelligence undetected, attack enemy targets, and insert special forces, among other capabilities. However, the Navy’s attack submarine fleet has suffered from persistent and costly maintenance delays. Although the Navy has several activities underway to reduce maintenance delays for the attack submarine fleet, it has not yet taken additional steps to maximize attack submarine readiness that fully address challenges such as the allocation of maintenance periods between public and private shipyards. Without addressing this challenge, the Navy will not achieve the full benefit of the nation’s investment in its attack submarines, and it risks continued expenditure of operating and support funding to crew, maintain, and support attack submarines that provide no operational capability because they are delayed in getting into and out of maintenance. The Secretary of the Navy should ensure that the Chief of Naval Operations conducts a business case analysis to inform maintenance workload allocation across public and private shipyards; this analysis should include an assessment of private shipyard capacity to perform attack submarine maintenance, and should incorporate a complete accounting of both (a) the costs and risks associated with attack submarines sitting idle, and (b) the qualitative benefits associated with having the potential to both mitigate risk in new submarine construction and provide additional availability to the combatant commanders. We provided a draft of the classified version of the report to DOD for review and comment. That draft contained the same recommendation as this unclassified version as well as three additional recommendations DOD deemed sensitive. 
In written comments provided by DOD (reprinted in appendix II), DOD concurred with our recommendation, stating that it has taken the first steps toward a more holistic view of submarine maintenance requirements and impacts across both the public and private shipyards. The Navy also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to congressional committees; the Secretary of Defense; the Secretary of the Navy; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

To assess the extent to which the Navy has experienced maintenance delays in its attack submarine fleet, we analyzed attack submarine maintenance delay and idle time data from Naval Sea Systems Command, and we reviewed prior GAO work on shipyard maintenance delays. The Navy determines depot maintenance delays by counting each day by which a submarine maintenance period extends beyond the planned completion date. Two Navy offices within Naval Sea Systems Command—the Logistics, Maintenance, and Industrial Operations office and Program Executive Office Submarines—track days incurred from depot-level maintenance delays and idle time. To determine the total number of days of maintenance delays for each fiscal year within our scope, we subtracted the planned completion date from the actual completion date to produce the number of days of maintenance delays for each maintenance period for each submarine. We added together the days of maintenance delays across all attack submarines for each fiscal year, and then added the fiscal year totals to produce the overall total.
Although the data included some maintenance periods that began before fiscal year 2008, we counted only days of maintenance delay that were incurred in fiscal years 2008 through 2018. We also tracked the total number of days by which the Navy completed maintenance periods ahead of schedule—153—but we noted these separately instead of subtracting them from the total number of days of maintenance delays.

To estimate costs associated with these delays, we analyzed annual data from fiscal years 2011 through 2017 (the most current data available at the time of our review) from the Navy’s Visibility and Management of Operating and Support Costs system. We also reviewed prior work on determining the operating and support costs of Navy ships. The Navy calculates total operating and support expenditures for each attack submarine on an annual basis, as well as the yearly average expenditure for each attack submarine class, including the Los Angeles class, the Seawolf class, and Virginia class blocks one and two. For each class, we converted the Navy’s annual class averages into daily average costs by adding the annual class averages together for each year that data were available (fiscal years 2011 through 2017) and then dividing that sum by the total number of days. We then multiplied the daily class average by the total number of days of maintenance delays and idle time incurred by submarines within that class, according to our calculations outlined above, between fiscal year 2008 and fiscal year 2018, and we added these totals together to produce the total estimated operating and support cost for days of maintenance delays and idle time incurred during this period. The data did not include annual class average costs for fiscal years 2008, 2009, 2010, or 2018. However, the annual class averages for fiscal years 2011 through 2017 did not show significant variation, so we applied these averages to 2008, 2009, 2010, and 2018.
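The delay-day and cost calculations described above can be sketched in a few lines of Python. This is only an illustration of the arithmetic: the maintenance dates, class-average costs, and day counts below are hypothetical placeholders, not actual Navy data, and the record structure is an assumption for the example.

```python
from datetime import date

# Hypothetical maintenance-period records (planned vs. actual completion dates).
periods = [
    {"class": "Los Angeles", "planned": date(2016, 3, 1), "actual": date(2016, 9, 28)},
    {"class": "Virginia", "planned": date(2017, 5, 15), "actual": date(2017, 5, 1)},
]

# Delay days per period = actual completion minus planned completion.
# Early completions are tracked separately, not netted against delays.
delay_days = {}
early_days = 0
for p in periods:
    diff = (p["actual"] - p["planned"]).days
    if diff > 0:
        delay_days[p["class"]] = delay_days.get(p["class"], 0) + diff
    else:
        early_days += -diff

# Convert each class's average annual operating-and-support cost (hypothetical
# figures here) into a daily average, then multiply by that class's delay days
# and sum across classes to estimate the total cost of the delays.
annual_class_avg = {"Los Angeles": 55_000_000, "Virginia": 60_000_000}
total_cost = sum(
    (annual_class_avg[cls] / 365) * days for cls, days in delay_days.items()
)

print(sum(delay_days.values()), early_days, round(total_cost))
```

Because the placeholder figures stand in for the Navy's per-class annual averages and per-period delay counts, only the structure of the calculation carries over to the actual estimate, not the resulting dollar amount.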
To assess the extent to which the Navy has addressed any challenges and developed mitigation plans for any maintenance delays, we reviewed the Navy’s plans to address attack submarine maintenance delays and interviewed Navy headquarters, fleet, and squadron officials, attack submarine crews, and public and private shipyard officials to understand any plans to address attack submarine maintenance delays and idle time. We analyzed data on factors contributing to attack submarine maintenance delays, such as cannibalization rates. We visited three of the four public shipyards, including Pearl Harbor Naval Shipyard and Intermediate Maintenance Facility, Portsmouth Naval Shipyard, and Norfolk Naval Shipyard, to observe operations, training, and the condition of the facilities and equipment, and to interview officials about challenges affecting operational efficiency and performance. We also met with Navy maintainers at Naval Station Norfolk and Naval Submarine Base New London, and with the crew of the submarine tenders USS Frank Cable (AS-40) and USS Emory S. Land (AS-39) in Guam. We toured the two private shipyards that conduct attack submarine repair work—General Dynamics Electric Boat and Huntington Ingalls Industries-Newport News Shipbuilding—and interviewed executives at both locations. We also toured attack submarines and met with crew leadership, selected according to which submarines and crews were available for tours at each of the sites we visited. We visited the USS Boise (SSN 764) at Naval Station Norfolk and four attack submarines in depot-level maintenance: the USS Albany (SSN 753), the USS Jefferson City (SSN 759), the USS New Mexico (SSN 779), and the USS Springfield (SSN 761). We met with the crews of two attack submarines assigned to the operating forces at the time of our visit, the USS Missouri (SSN 780) and the USS North Dakota (SSN 784). 
We evaluated the Navy’s plans to address any challenges against criteria in federal standards for internal control, which state that agencies should evaluate performance in achieving key objectives and addressing risks; the Department of Defense’s business case analysis guidebook, which provides standards for the process used to conduct analyses of costs, benefits, and risks; the Project Management Body of Knowledge, which provides best practices for project management; and the Secretary of the Navy’s December 2017 Strategic Readiness Review, which calls for the early identification of systemic risks before problems occur. To assess the reliability of the data sources for conducting analyses to address all of the objectives in this report, we reviewed systems documentation and interviewed officials to understand system operating procedures, organizational roles and responsibilities, and error-checking mechanisms. We selected the time frames for each of the data series above after assessing their availability and reliability, to maximize the amount of data available for us to make meaningful comparisons. We assessed the reliability of each of the data sources. The Navy provided information based on our questions regarding data reliability, including information on an overview of the data, data-collection processes and procedures, data quality controls, and overall perceptions of data quality. The Navy provided documentation of how the systems are structured and what written procedures are in place to help ensure that the appropriate information is collected and properly categorized. Additionally, we interviewed Navy officials to obtain further clarification on data reliability, discuss how the data were collected and reported, and explain how we planned to use the data. We also conducted our own error checks to look for inaccurate or questionable data, and we discussed with officials any data irregularities we found.
We conducted these assessments on the following data for attack submarines: Navy deployed and surge-ready submarines from fiscal years 2011 through 2018; maintenance timeliness from fiscal years 2000 through 2018; idle time from fiscal years 2008 through 2018; operating and support costs from fiscal years 2011 through 2017; and cannibalization rates from 2012 through 2017. Some of these data were used in prior reports, and their reliability had previously been assessed. After further assessing any data that we had not recently used, we determined that they were sufficiently reliable for the purposes of summarizing attack submarine readiness trends and related information. We interviewed officials, and where appropriate obtained documentation, at the following locations:

Office of the Chief of Naval Operations, Undersea Warfare Division (N97)
Office of the Chief of Naval Operations, Warfare Integration Division (N83)
U.S. Fleet Forces Command
Commander, Submarine Force, U.S. Atlantic Fleet
Commander, Submarine Squadron 4
Commander, Regional Support Group Groton
Commander, Submarine Force, U.S. Pacific Fleet
Commander, Submarine Squadron 1
Commander, Submarine Squadron 7
Commander, Submarine Squadron 15
Naval Sea Systems Command (NAVSEA), Logistics, Maintenance, and Industrial Operations (NAVSEA 04)
Program Executive Office, Submarines, Attack Submarine Program Office (PMS 392)
Submarine Maintenance Engineering, Planning, and Procurement (SUBMEPP)
Supervisor of Shipbuilding, Conversion, and Repair (SUPSHIP) Newport News, Virginia
Navy Education and Training Command, Submarine Learning Facility Norfolk
Navy Board of Inspection and Survey
Norfolk Naval Shipyard, Norfolk, Virginia
Pearl Harbor Naval Shipyard and Intermediate Maintenance Facility, Pearl Harbor, Hawaii
Portsmouth Naval Shipyard, Kittery, Maine
Newport News Shipbuilding, Virginia, operated by Huntington Ingalls Industries
Electric Boat, Groton, Connecticut, operated by General Dynamics

The performance audit upon which this report is based was conducted from August 2017 to October 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We worked with DOD to prepare this unclassified version of the report for public release. This public version was also prepared in accordance with these standards.

Report numbers with a C or RC suffix are classified. Classified reports are available to personnel with the proper clearances and need to know, upon request.

Columbia Class Submarine: Immature Technologies Present Risks to Achieving Cost, Schedule, and Performance Goals. GAO-18-158. Washington, D.C.: Dec. 21, 2017.

Navy Readiness: Actions Needed to Address Persistent Maintenance, Training, and Other Challenges Affecting the Fleet. GAO-17-809T. Washington, D.C.: Sept. 19, 2017.

Naval Shipyards: Actions Needed to Improve Poor Conditions that Affect Operations. GAO-17-548. Washington, D.C.: Sept. 12, 2017.

Navy Readiness: Actions Needed to Address Persistent Maintenance, Training, and Other Challenges Facing the Fleet. GAO-17-798T. Washington, D.C.: Sept. 7, 2017.

Military Readiness: DOD’s Readiness Rebuilding Efforts May Be at Risk without a Comprehensive Plan. GAO-16-841. Washington, D.C.: Sept. 7, 2016.

Navy and Marine Corps: Services Face Challenges to Rebuilding Readiness. GAO-16-481RC. Washington, D.C.: May 25, 2016. (SECRET//NOFORN)

Military Readiness: Progress and Challenges in Implementing the Navy’s Optimized Fleet Response Plan. GAO-16-466R. Washington, D.C.: May 2, 2016.

Navy Force Structure: Sustainable Plan and Comprehensive Assessment Needed to Mitigate Long-Term Risks to Ships Assigned to Overseas Homeports. GAO-15-329. Washington, D.C.: May 29, 2015.

In addition to the contact named above, Suzanne Wren, Assistant Director; Chris Watson, Analyst in Charge; Herb Bowsher; Chris Cronin; Ally Gonzalez; Cynthia Grant; Carol Petersen; Amber Sinclair; and Cheryl Weissman made key contributions to this report.
According to the Navy, its 51 attack submarines provide the United States an asymmetric advantage to gather intelligence undetected, attack enemy targets, and insert special forces, among other missions. These capabilities make attack submarines some of the most-requested assets by the global combatant commanders. GAO was asked to review the readiness of the Navy's attack submarine force. This report discusses the extent to which the Navy (1) has experienced maintenance delays in its attack submarine fleet and costs associated with any delays; and (2) has addressed any challenges and developed mitigation plans for any maintenance delays. GAO analyzed readiness information from fiscal years 2008-2018, operating and support costs, maintenance performance, and other data; visited attack submarines and squadrons; and interviewed public and private shipyard and fleet officials. This is a public version of a classified report issued in October 2018. Information the Department of Defense deemed classified or sensitive, such as attack submarine force structure requirements and detailed data on attack submarine maintenance delays, has been omitted. The Navy has been unable to begin or complete the vast majority of its attack submarine maintenance periods on time, resulting in significant maintenance delays and operating and support cost expenditures. GAO's analysis of Navy maintenance data shows that between fiscal years 2008 and 2018, attack submarines incurred 10,363 days of idle time and maintenance delays as a result of delays in getting into and out of the shipyards. For example, the Navy originally scheduled the USS Boise to enter a shipyard for an extended maintenance period in 2013 but, due to heavy shipyard workload, the Navy delayed the start of the maintenance period. In June 2016, the USS Boise could no longer conduct normal operations, and the boat has since remained idle pierside for more than two years, waiting to enter a shipyard (see figure).
GAO estimated that, since fiscal year 2008, the Navy has spent more than $1.5 billion in fiscal year 2018 constant dollars to support attack submarines that provide no operational capability—those sitting idle while waiting to enter the shipyards, and those delayed in completing their maintenance at the shipyards. The Navy has started to address challenges related to workforce shortages and facilities needs at the public shipyards. However, it has not effectively allocated maintenance periods among public shipyards and private shipyards that may also be available to help minimize attack submarine idle time. GAO's analysis found that while the public shipyards have operated above capacity for the past several years, attack submarine maintenance delays are getting longer and idle time is increasing. The Navy may have options to mitigate idle time and maintenance delays by leveraging private shipyard capacity for repair work. But the Navy has not completed a comprehensive business case analysis as recommended by Department of Defense guidelines to inform maintenance workload allocation across public and private shipyards. Navy leadership has acknowledged the need to be more proactive in leveraging potential private shipyard repair capacity. Without addressing this challenge, the Navy risks continued expenditure of operating and support funding to crew, maintain, and support attack submarines that provide no operational capability because they are delayed in getting into and out of maintenance. GAO recommends that the Navy conduct a business case analysis to inform maintenance workload allocation across public and private shipyards. The Department of Defense concurred with GAO's recommendation.
To identify inmates with mental illness, BOP screens inmates prior to designation to a facility by reviewing an inmate’s pre-sentence report and assigning preliminary medical and mental health screening levels. Once an inmate is designated to a BOP institution, institution staff assess the inmate to provide an accurate mental health diagnosis, determine the severity of any mental illness, and determine the inmate’s suicide risk. BOP also identifies the mental health needs of each inmate and matches the inmate to an institution with the appropriate resources. Institution mental health care levels range from 1 to 4, with 1 being institutions that care for the healthiest inmates and 4 being institutions that care for inmates with the most acute needs. Inmate mental health care levels are also rated in this manner, from level 1 to level 4. After an inmate arrives at a BOP institution, during the admission and orientation process, every inmate receives information on mental health services available at that site. Table 1 identifies inmate mental health care levels and the percentage of all inmates by designated level. Throughout an inmate’s incarceration, BOP psychologists, psychiatrists, and qualified mid-level practitioners (i.e., a physician assistant or nurse practitioner who is licensed in the field of medicine and possesses specialized training in mental health care) can determine a new mental health care level following a review of records and a face-to-face clinical interview. BOP’s Psychology Services Branch, which the Reentry Services Division oversees, provides most mental health services to inmates in BOP-operated institutions, including providing individualized psychological care and residential and non-residential treatment programs (Figure 1 shows BOP’s organization for providing mental health services). BOP’s Health Services Division manages psychiatry and pharmacy services.
Most mental health treatment is provided in what BOP calls its mainline, or regular, institutions. Acutely ill inmates in need of psychiatric hospitalization, such as inmates suffering from schizophrenia or bipolar disorder, may receive these services at one of BOP’s five medical referral centers, which provide inpatient psychiatric services as part of their mission. At BOP institutions, psychologists are available for formal counseling and treatment on an individual or group basis. In addition, staff in an inmate’s housing unit are available for informal counseling. Psychiatric services available at the institution are enhanced by contract services from the community. Prior to the passage of the 21st Century Cures Act, and at the beginning of our work, BOP defined serious mental illness in accordance with the agency’s program statement—which states that classification of an inmate as seriously mentally ill requires consideration of diagnoses; the severity and duration of symptoms; the degree of functional impairment associated with the illness; and treatment history and current treatment needs. In accordance with BOP’s program statement, BOP used this guidance along with other variables to develop six criteria to identify the population of inmates with serious mental illness who were incarcerated in fiscal years 2016 and 2017—the most recent fiscal years for which data on these criteria are available. The six criteria to identify the population of inmates with serious mental illness are as follows:

1. Inmate was evaluated by BOP and assigned a mental health care level 3: the inmate requires enhanced outpatient mental health care, such as weekly psychosocial intervention, or residential mental health care.

2. Inmate was evaluated by BOP and assigned a mental health care level 4: the inmate requires acute care in a psychiatric hospital; the inmate is gravely disabled and cannot function in a general population environment.

3. Inmate was assigned a mental health study level 4: the inmate was subject to a court-ordered forensic study that required an inpatient setting.

4. Inmate was diagnosed with one or more of 74 Diagnostic and Statistical Manual of Mental Disorders (DSM) diagnoses, both active and in remission, that BOP considers a serious mental illness.

5. Inmate was evaluated by BOP and identified as having a chronic suicide risk, due to the inmate having a history of two or more suicide attempts.

6. Inmate was evaluated by BOP and assigned a psychology alert status: this designation was applied to inmates who were evaluated as having substantial mental health concerns and requiring extra care when changing housing or transferring institutions.

On August 15, 2017, in a memorandum for the Comptroller General of the United States from the Acting Director of BOP, BOP defined “serious mental illness” for purposes of section 14016 of the 21st Century Cures Act as follows. Individuals with a serious mental illness are persons:

who currently, or at any time during the past year,
have had a diagnosable mental, behavioral, or emotional disorder of sufficient duration to meet diagnostic criteria specified within the most current edition of the Diagnostic and Statistical Manual of Mental Disorders,
that has resulted in functional impairment which substantially interferes with or limits one or more major life activities.

The memorandum also stated that BOP may further operationalize this definition by identifying specific mental disorders which are to be classified as serious mental illness and providing examples of functional impairment specific to BOP’s settings and/or populations.
BOP officials indicated that BOP’s program statement and the six criteria to identify the population of inmates with serious mental illness who were incarcerated in fiscal years 2016 and 2017 would coincide with the definition for “serious mental illness” provided in the memorandum for the Comptroller General of the United States for purposes of the 21st Century Cures Act and identify an identical set of BOP inmates with “serious mental illness” for fiscal years 2016 and 2017. The periods during incarceration in federal and state prisons and reentry into the community are considered to be key periods to implement interventions to reduce recidivism among individuals with serious mental illness, according to public health and correctional stakeholders. The Bureau of Justice Statistics has found that for all offenders, regardless of their mental health status, the highest rate of recidivism occurs during the first year after release from prison. Further, researchers have found that offenders with serious mental illness return to prison sooner than those without serious mental illness. Multiple factors may contribute to the cycle of repeated incarceration among individuals with serious mental illness. SAMHSA reports that individuals with mental illness face additional challenges upon reentering the community, including those associated with finding treatment providers, stable housing, and employment. Federal agencies have established interagency groups and other mechanisms to share information on how to address the challenges related to recidivism among offenders with serious mental illness. Examples of these information sharing mechanisms are described in appendix III. 
While the periods of incarceration and reentry are the focus of this review, there are other points in the criminal justice system where there are opportunities to intervene to prevent individuals with serious mental illness from becoming further involved with the system, such as during the initial law enforcement response or during court proceedings. Further, SAMHSA has identified connecting those in need of treatment to community mental health services before a behavioral health crisis begins as a way to prevent individuals with mental illness from becoming involved in the criminal justice system. About two-thirds of BOP inmates with a serious mental illness were incarcerated for four types of offenses—drug offenses (23 percent), sex offenses (18 percent), weapons and explosives offenses (17 percent), and robbery (8 percent)—as of May 27, 2017. As shown in figure 2, some differences in offenses exist between inmates with and without serious mental illness in BOP custody. Specifically, our analysis found that BOP inmates with serious mental illness were incarcerated for sex offenses, robbery, and homicide or aggravated assault at about twice the percentage of inmates without serious mental illness, and were incarcerated for drug and immigration offenses at about half the rate of inmates without serious mental illness, or less. Additionally, we found some differences between BOP inmates with and without serious mental illness in the length and severity of sentences. Although a similar percentage of inmates with and without serious mental illness have life sentences (2.8 percent and 2.5 percent, respectively), a lower percentage of inmates with serious mental illness had sentences of 10 years or less (43.5 percent versus 49.2 percent). About 0.06 percent (5 inmates) of inmates with serious mental illness and about 0.03 percent (52 inmates) of inmates without serious mental illness received a death sentence.
See appendix I for additional information on the characteristics of BOP inmates with and without serious mental illness. Based on our analysis of available data provided by selected states’ departments of corrections, the most common crimes committed by inmates with serious mental illness varied from state to state. The difference in types of crimes reported by states and BOP may be due to different laws, priorities, and enforcement practices across the state and federal criminal justice systems, among other things. The federal and state governments also define serious mental illness differently, and they track different categories of crime in their respective data systems. The percentages and types of crimes committed by incarcerated inmates are shown in figures 3 through 5 below for three selected states’ departments of corrections. The New York State Department of Corrections and Community Supervision (DOCCS) cared for 2,513 inmates with serious mental illness out of a total of 51,436 inmates as of December 31, 2016. Figure 3 shows the categories of offenses committed by inmates defined by DOCCS as having serious mental illness. Three out of four inmates with serious mental illness under the care of DOCCS were incarcerated for violent crimes. According to DOCCS program descriptions, diagnostic criteria for serious mental illness are: (1) an inmate is determined by the New York State Office of Mental Health to have specified mental health diagnoses; (2) an inmate is actively suicidal or has made a recent, serious suicide attempt; or (3) an inmate is diagnosed with serious mental illness, organic brain syndrome, or a severe personality disorder that is manifested in significant functional impairment such as acts of self-harm or other behaviors that have a serious adverse effect on life or on mental or physical health. The Virginia Department of Corrections cared for 527 inmates with serious mental illness out of a total of 30,052 inmates as of September 29, 2017.
Figure 4 shows the crimes committed by inmates that Virginia defined as having serious mental illness. About one quarter of the inmates with serious mental illness in Virginia committed rape, sexual assault, and other assault crimes. Virginia policy defines an inmate with serious mental illness as an offender diagnosed with a psychotic disorder, bipolar disorder, major depressive disorder, PTSD or anxiety disorder, or any diagnosed mental disorder (excluding substance use disorders) currently associated with serious impairment in psychological, cognitive, or behavioral functioning that substantially interferes with the person’s ability to meet the ordinary demands of living and requires an individualized treatment plan by a qualified mental health professional(s). The Washington Department of Corrections cared for 1,881 inmates with serious mental illness out of a total of 17,234 inmates as of June 30, 2017. Figure 5 shows the crimes committed by Washington inmates that Washington defined as having serious mental illness. About half of the inmates with serious mental illness in Washington committed assault or sex crimes. The Washington Department of Corrections defines serious mental illness as a substantial disorder of thought or mood which significantly impairs judgment, behavior, or capacity to recognize reality or cope with the ordinary demands of life within the prison environment and is manifested by substantial pain or disability. The Washington Department of Corrections’ definition does not include inmates who are substance abusers or substance dependent—including alcoholics and narcotics addicts—or persons convicted of any sex offense, who are not otherwise diagnosed as seriously mentally ill. According to BOP officials, the agency does not track costs specifically associated with inmates with serious mental illness due to resource restrictions and the administrative burden such tracking would require. 
BOP officials stated that BOP, unlike a hospital, is not structured to bill individual interactions, and noted that, generally, the correctional industry does not account for costs by tracking individual costs. BOP officials said that requiring BOP staff to gather individual cost data manually would be an extremely time-consuming and burdensome process. In addition, BOP does not maintain the mental health care cost data necessary to calculate the individual inmate costs related to specific program areas (i.e., psychology and psychiatric services). BOP tracks the costs associated with incarcerating its overall inmate population and with providing mental health care services to inmates system-wide and separately by institution. For fiscal year 2016, BOP’s institution-level data show that total incarceration costs vary by BOP institution (ranging from $15 million to over $247 million), for a number of reasons, including varying amounts of medical and mental health care available at each institution. Table 2 identifies BOP’s costs for mental health care services provided to all inmates (including inmates with serious mental illness) for fiscal year 2016, the last year for which BOP had complete data during our audit work. The costs below are the most readily available BOP-wide costs directly related to mental health care. BOP’s Psychology Services staff provides most inmate mental health services in BOP-operated institutions, including the provision of individualized psychological care. Psychotropic medication may be used to treat mental illness, although in some instances, BOP uses psychotropic medication to treat individuals with other kinds of health conditions. Residential Reentry Centers, also known as halfway houses, provide assistance to inmates nearing release, including some inmates with serious mental illness. BOP includes psychiatric treatment and services under medical care costs, but BOP does not track psychiatric costs separately.
In July 2013, we reported that BOP also does not track its contractors’ costs of providing mental health services to the 13 percent of BOP inmates housed in privately managed facilities. The performance-based, fixed- price contracts that govern the operation of BOP’s privately managed facilities give flexibility to the contractors to decide how to provide mental health services. BOP tracks and maintains information on the number and types of inmate interactions with Psychology Services personnel. These interactions include clinical and non-clinical interactions between Psychology Services staff and inmates that may be crisis-oriented or routine, such as individual and group therapy. Based on our analysis of these data, in fiscal year 2016, BOP inmates with serious mental illness were more likely than other inmates to use 18 of the 20 services or programs tracked by Psychology Services. On average, we found that an inmate with serious mental illness had 9.6 clinical interventions compared to 0.24 clinical interventions for inmates without serious mental illness during fiscal year 2016. As a result, an average BOP inmate with serious mental illness was 40 times more likely to receive a clinical intervention than an average inmate without serious mental illness. BOP data do not capture the time and resources associated with any of the Psychology Services interactions; thus we cannot assign a cost value to differences between populations in receipt of these services. Appendix IV shows the extent to which BOP’s inmate population received specific types of psychology services in fiscal year 2016. The selected state departments of corrections provided us with estimates for different types of mental health care costs, but did not identify mental health care costs specifically for inmates with serious mental illness. Additionally, the states did not provide us with the total cost to incarcerate inmates with serious mental illness. 
For example, officials from one state said staff did not calculate costs separately for inmates with mental illness compared to inmates without mental illness as they did not believe an accurate comparison could be made. Officials from another state said that they did not track costs of incarceration or mental health services per inmate based on whether or not an inmate has mental illness, while officials from another state said they were not able to track costs for mental health services for inmates at the individual level. The selected state departments of corrections also used different methods to determine the costs of the mental health services they provided to their inmate population. For example:

Two state departments of corrections provided us with the average per-inmate costs of incarceration for a mental health treatment unit or treatment center where some inmates with serious mental illness are treated, but these per-inmate costs also included incarceration costs for inmates without serious mental illness who were housed in these facilities.

Another state department of corrections provided total psychotropic medication costs for all inmates and mental health care costs per offender. Mental health care costs per offender were averaged across all offenders, not exclusively those with serious mental illness.

Two other states provided total costs for one budget item related to mental illness: total mental health program spending in one state, and psychiatric care expenditures in the other state. These costs were for all inmates, not exclusively for inmates with serious mental illness.

Another state department of corrections provided an estimate for average mental health care costs per inmate with mental illness, but this estimate included all inmates diagnosed as having a mental illness, not exclusively those inmates diagnosed with serious mental illness.
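The "40 times more likely" figure reported earlier follows directly from dividing the two per-inmate clinical intervention averages for fiscal year 2016. A minimal sketch of that arithmetic (the variable names are ours for illustration, not BOP's):

```python
# Reproducing the fiscal year 2016 utilization ratio from the per-inmate
# clinical intervention averages cited in this report. Variable names are
# illustrative only; this is not BOP's own computation.
smi_rate = 9.6     # average clinical interventions per inmate with serious mental illness
other_rate = 0.24  # average clinical interventions per inmate without serious mental illness

ratio = smi_rate / other_rate
print(round(ratio))  # 40, matching the "40 times more likely" figure
```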
In 2012, the Council of State Governments Justice Center developed the Criminogenic Risk and Behavioral Health Needs Framework in collaboration with DOJ’s National Institute of Corrections and Bureau of Justice Assistance, SAMHSA, and experts from correctional, mental health, and substance abuse associations. The framework is an approach to reduce recidivism and promote recovery among adults under correctional supervision with mental illness, substance use disorders, or both. It calls for correctional agencies to assess individuals’ criminogenic risk (the risk of committing future crimes) and their substance abuse and mental health needs. The agencies are to use the results of the assessment to target supervision and treatment resources based on these risks and needs. Additionally, the framework states that individuals with the highest criminogenic risks should be prioritized for treatment to achieve the greatest effect on public safety outcomes.

Mental health and substance abuse treatment. There are a number of different approaches that can be tailored and combined to address an individual’s mental health and substance abuse treatment needs. Examples include:

Psychopharmacology. The use of psychotropic medication to treat the symptoms of mental illness.

Psychotherapy. An approach that aims to address dysfunctional thoughts, moods, or behavior through time-limited counseling.

To help implement the principles set forth in the framework, SAMHSA developed additional guidance in collaboration with the Council of State Governments Justice Center, the Bureau of Justice Assistance, and experts from correctional, mental health, and substance abuse associations. This guidance is for mental health, correctional, and community stakeholders, and uses the Assess, Plan, Identify, Coordinate model to provide procedural guidelines to reduce recidivism and promote recovery at different points during incarceration and reentry.
Table 3 below describes selected guidelines and examples of strategies that were identified by BOP and the six selected states that correspond to each element of the model.

Additional treatment approaches include the following:

Modified therapeutic community. A residential treatment program for individuals with both substance use and mental disorders that uses a peer community to address substance abuse, psychiatric symptoms, cognitive impairments, and other common impairments.

Peer support. Peers who are in recovery and have previously been involved in the criminal justice system provide support to others who are also involved in the criminal justice system.

Forensic intensive case management. A case manager coordinates services in the community to help clients sustain recovery and prevent further involvement with the criminal justice system.

Forensic Assertive Community Treatment (FACT). Treatment is coordinated by a multidisciplinary team, which may include psychiatrists, nurses, peer specialists, and probation officers. FACT teams have high staff-to-client ratios and are available around-the-clock to address clients’ case management and treatment needs.

Assess (selected guidelines): Screen as early in the booking/intake process as feasible and throughout the criminal justice continuum to detect substance use disorders, mental disorders, co-occurring substance use and mental disorders, and criminogenic risk. Follow up with comprehensive assessment to guide program placement and service delivery. Assessment should include clinical needs, social support needs (e.g., housing, education, employment, and transportation), and risk factors.

Examples of Bureau of Prisons (BOP) and selected state strategies: All six selected states and BOP have developed mental health assessments during the intake process. BOP officials stated that the agency is in the process of enhancing the predictive validity of its criminogenic risk assessment and expects to complete this project in 2018.
One of the six selected states uses a multidisciplinary treatment team composed of a clinician, psychiatrist, and correctional counselor to assess the treatment and programming needs of inmates with serious mental illness. In addition to mental health treatment, the multidisciplinary team assesses if the inmate is ready for and would benefit from institutional services such as academic and vocational education programs, work, or substance abuse counseling. These assessments occur at least annually, but may occur whenever an inmate’s treatment needs have changed. To identify strategies to reduce recidivism among offenders with mental illness during incarceration and reentry, we searched for studies that analyzed the relationship between programs and recidivism among offenders with mental illness. Our search identified about 200 publications. We used a systematic process to conduct the review, which appendix II describes in more detail. We ultimately identified 14 studies that (1) assessed correctional institution or reentry programs for offenders with mental illness implemented in the United States, (2) contained quantitative analyses of the effect of a program on recidivism, and (3) used sufficiently sound methodologies for conducting such analyses. The studies examined different kinds of recidivism outcomes (e.g., re-arrest, re-incarceration, reconviction), and a single study often examined more than one recidivism outcome. We categorize the findings for each study as follows: Statistically significant reduction in recidivism: the study reported that one or more outcome measures indicated a statistically significant reduction in recidivism among program participants; the study may also have one or more recidivism outcome measures that were not statistically significant.
Statistically significant increase in recidivism: the study reported that one or more outcome measures indicated a statistically significant increase in recidivism among program participants; the study may also have one or more recidivism outcome measures that were not statistically significant. No statistically significant effect on recidivism: the study reported only outcomes indicating no statistically significant effect on recidivism among program participants. The statistical significance finding categories are based on the effect of the program as a whole and do not indicate if or how all individual elements of the programs impacted recidivism. For additional information on recidivism findings, see appendices V and VI. See appendix VII for a bibliography of the studies. The results of the literature review provide insights into factors that can affect recidivism among individuals with mental illness; however, the following considerations should be taken into account: (1) the type of mental illness of program participants varied within and across programs making it difficult to generalize results to individuals with all types of mental illness; (2) the studies may not provide a full description of the programs; (3) not all participants may have used available program services; (4) studies assessed the programs as a whole and did not determine to what extent different elements of the programs impacted recidivism; and (5) some studies used designs which cannot control for all unobserved factors that could affect the recidivism results. Nine of the 14 studies we reviewed found statistically significant reductions in recidivism. The studies that found statistically significant reductions generally involved programs that offered multiple support services, as shown in figure 6. 
Providing mental health and substance abuse treatment (8 of 9 studies), case management (5 of 9 studies), release planning (5 of 9 studies), housing (6 of 9 studies), and employment assistance (4 of 9 studies) were the most common services across the programs where studies we reviewed found statistically significant reductions in recidivism. In addition, more than half of the programs that resulted in statistically significant reductions in recidivism were coordinated with multidisciplinary stakeholders, such as mental health providers, correctional officials, substance use specialists, social workers, and peer support specialists (7 of 9 studies), and community corrections agencies, such as probation or parole offices (6 of 9 studies). However, other studies found that programs that offered multiple support services did not reduce recidivism, suggesting that other factors may also affect recidivism. Such factors may include the extent to which participants used services, as well as other unique programmatic factors, such as addressing criminogenic risk or criminal thinking. We further discuss examples of programs that did and did not reduce recidivism below. For example, study 9 examined Washington's Dangerously Mentally Ill Program, in which a multidisciplinary committee determines, six months prior to release from prison, which offenders meet the program criteria of having a mental illness and being at high risk of posing a danger to themselves or others. Members of the committee include representatives from the Department of Social and Health Services, Department of Corrections, law enforcement, and community mental health and substance abuse treatment agencies. Offenders designated for participation are immediately assigned a community mental health treatment provider and receive special transition planning prior to their release from prison. After release, and for up to five years, a variety of services are available to participants based on assessed needs.
Services may include mental health and substance abuse treatment, housing and medical assistance, training, and other support services. Researchers found that program participants were about 42 percent less likely to be reconvicted of a new felony than similar offenders in the comparison group four years after release (recidivism rates were 28 percent and 48 percent, respectively). Two other studies (numbers 3 and 6) evaluated Colorado's Modified Therapeutic Community, a residential program that was provided both as a 12-month prison program and a 6-month reentry program after release from prison for offenders with co-occurring mental illness and substance use disorders. Participants may have participated in only the prison program, only the reentry program, or both. Both programs use a cognitive-behavioral curriculum designed to help participants recognize and respond to the interrelationship of substance abuse, mental illness, and criminality and to use strategies for symptom management. The reentry program was coordinated with the community corrections agency, which provided the residential facility and monitored medication and compliance with parole terms for both participants and the comparison group. The reentry program also assisted with housing placement and employment. Researchers found that both the prison program and the reentry program resulted in statistically significant reductions in recidivism among participants. Specifically, the studies found that at 12 months post-release, prison program participants had a 9 percent reincarceration rate versus a 33 percent rate for the comparison group that did not participate in either program; and reentry program participants had a 19 percent reincarceration rate versus 38 percent for the comparison group.
Further, researchers found that those who participated in both the prison and reentry programs experienced the greatest reductions in recidivism, with a reincarceration rate of 5 percent versus a rate of 33 percent for the comparison group that did not participate in either program 12 months after release from prison. Studies that did not find a reduction in recidivism also provide insights on factors that may affect recidivism. For example, study 10 examined a Washington program to help enroll inmates with severe mental illness in Medicaid prior to their release from prison and found that jail and prison stays were higher among program participants than non-participants. The researchers hypothesized that receiving mental health treatment may have led to more interaction with authorities, putting participants at a greater risk of being caught violating the terms of their parole than non-participants. There was some evidence to support this: they found that most of the difference in prison days between participants and non-participants was the result of noncompliance with conditions of parole (technical violations) rather than the commission of new crimes. Further, the researchers conclude that Medicaid benefits alone are not enough to reduce arrests or keep people with severe mental illness out of jail or prison. In addition, study 11 examined Minnesota's release planning services for inmates with serious and persistent mental illness, which provided some of the same types of services as the programs that did reduce recidivism. For example, while incarcerated, inmates were provided pre-release planning to address vocational, housing, chemical dependency, psychiatric, disability, medical, medication, and transportation needs. However, this program did not result in any significant reduction in recidivism.
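The "about 42 percent less likely" figure reported for study 9 follows directly from the reconviction rates given in the text (28 percent for participants versus 48 percent for the comparison group), and the same arithmetic applies to the Colorado reincarceration rates. A brief check of that calculation; the function name is ours, for illustration:

```python
def relative_reduction(participant_rate, comparison_rate):
    """Share of the comparison group's recidivism rate avoided by program
    participants: (comparison - participant) / comparison."""
    return (comparison_rate - participant_rate) / comparison_rate

# Study 9 (Washington): 28% vs. 48% reconviction four years after release
# -> roughly 0.42, i.e., "about 42 percent less likely" to be reconvicted.
washington = relative_reduction(0.28, 0.48)

# Studies 3 and 6 (Colorado), reincarceration at 12 months post-release:
prison_program = relative_reduction(0.09, 0.33)   # prison program vs. no program
reentry_program = relative_reduction(0.19, 0.38)  # reentry program vs. no program
```

Note that this is a relative reduction; the absolute difference in study 9 is 20 percentage points.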
The researchers conclude that including programming to target criminogenic risks and providing a continuum of care from the institution to the community, instead of only providing services in the institution, may make the program more effective at reducing recidivism. We provided a draft of this report to DOJ and HHS for review and comment. DOJ and HHS did not provide official written comments or technical comments. We are sending copies of this report to the Assistant Attorney General for Administration, Department of Justice; the Secretary of Health and Human Services; selected congressional committees; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. The population of Federal Bureau of Prisons (BOP) inmates with and without serious mental illness varies across several characteristics, as shown in table 4. To address all three objectives, we reviewed documents, interviewed officials, and analyzed data obtained from BOP and selected states' departments of corrections. For objective 3, we also reviewed documents and interviewed officials from the Department of Justice's (DOJ) Office of Justice Programs and the Department of Health and Human Services' (HHS) Substance Abuse and Mental Health Services Administration (SAMHSA) and the National Institute of Mental Health.
We selected six state departments of corrections (California, New York, Ohio, Texas, Virginia, and Washington) based on three factors: variation in the rate of incarcerated adults per capita, to obtain a mix of states with high, medium, and low rates; specialist recommendations on data quality and on the quality of programs for inmates with serious mental illness; and variation in geography. To identify candidate states, we contacted officials from SAMHSA and the National Institute of Mental Health, representatives from correctional accreditation organizations, and subject matter specialists from Pew Charitable Trusts and the Treatment Advocacy Center whom we identified through previous work, and asked for their recommendations of states that, in their view, had reliable data sources on the number of incarcerated individuals with serious mental illness and the costs of providing mental health services, as well as noteworthy programming for inmates with serious mental illness. The results from these six states are not generalizable, but provide insights. For purposes of this review, we based our work on the definitions of serious mental illness provided by each of the selected federal agencies and selected states' departments of corrections. We analyzed policies and guidance at BOP and the departments of corrections in selected states to determine how, if at all, the agencies define serious mental illness and the processes used to identify inmates with serious mental illness. To determine the population of inmates with serious mental illness for the purposes of our work, BOP operationalized its definition of serious mental illness using six criteria, covering the required degree of mental health care, mental illness diagnoses, and suicide risk. BOP defined "serious mental illness" in accordance with the agency's program statement, BOP Program Statement 5310.16, Treatment and Care of Inmates with Mental Illness, May 1, 2014.
On August 15, 2017, in a memorandum for the Comptroller General of the United States from the Acting Director of BOP, BOP defined "serious mental illness" for purposes of section 14016 of the 21st Century Cures Act. BOP officials indicated that applying the program statement's six criteria to inmates incarcerated in fiscal years 2016 and 2017 would identify an identical set of BOP inmates with serious mental illness as the definition provided in the memorandum for purposes of the act. BOP applied these criteria to inmate information in its SENTRY, Bureau Electronic Medical Record (BEMR), and Psychology Data System (PDS) data systems to identify inmates with serious mental illness. To assess the reliability of these data, we performed electronic data testing for obvious errors in accuracy and completeness, and interviewed agency officials knowledgeable about these systems to determine the processes in place to ensure the integrity of the data. We determined that the data were sufficiently reliable for identifying the population of BOP inmates with serious mental illness for the purposes of this report. To determine what types of crimes were committed by inmates with serious mental illness who were incarcerated by the federal and selected state governments, we analyzed available data from BOP and the departments of corrections in selected states on the most serious types of crimes for which inmates with serious mental illness were incarcerated during fiscal year 2017. BOP officials track and maintain information on the types of crimes for which inmates have been incarcerated via SENTRY.
We interviewed officials from BOP's Office of Research and Evaluation, Reentry Services Division, and Correctional Programs Division to discuss the number and types of crimes committed by BOP inmates with serious mental illness. To assess the reliability of BOP's criminal offense data, tracked in BOP's SENTRY system, we performed electronic data testing for obvious errors in accuracy and completeness, and interviewed agency officials from BOP's Office of Research and Evaluation knowledgeable about BOP's inmate tracking system to determine the processes in place to ensure the integrity of the data. We determined that the data were sufficiently reliable for the purposes of this report. We also interviewed and received written responses from officials from the selected state departments of corrections to determine the challenges they faced in recording, tracking, and maintaining data on inmates with serious mental illness, but we did not independently assess the internal controls associated with the selected states' data systems. We provided state-level data as illustrative examples of the crimes committed by inmates with serious mental illness in selected states. To identify what is known about the costs to the federal and selected state governments to incarcerate and provide mental health services to incarcerated individuals with serious mental illness, we interviewed and received written responses from officials from BOP's Reentry Services Division, Correctional Programs Division, Administration Division, Program Review Division, and Health Services Division, and from the departments of corrections in selected states. Through these interviews and written responses, we obtained documentation on the processes and systems used to track the costs to incarcerate and provide mental health services to inmates with serious mental illness, and obtained their perspectives on any challenges faced in tracking such costs.
We analyzed BOP obligation data from fiscal year 2016 for the following budget categories: Psychology Services, psychotropic medications, and Residential Reentry Center mental health care costs. We included these obligation categories as indicators of BOP mental health care costs because our prior work identified that these services were used by inmates with mental illness. To assess the reliability of BOP's obligations data, we performed electronic testing for obvious errors in accuracy and completeness, and interviewed agency officials knowledgeable about BOP's budget to determine the processes in place to ensure the integrity of the data. We determined that the data were sufficiently reliable for the purposes of this report. In response to our inquiries, the selected states provided various data on costs to incarcerate and provide mental health care to inmates under their supervision. We did not independently assess the internal controls associated with the selected states' data systems. We provided state-level data as illustrative examples of the manner in which state correctional agencies tracked costs of incarceration and mental health care services for inmates under their supervision. Additionally, we obtained and analyzed BOP data from PDS on the extent to which inmates interacted with Psychology Services personnel and programs during fiscal year 2016 to calculate the average number of psychology services interactions (by category) per inmate. To assess the reliability of BOP's psychology services utilization data, we performed electronic testing for obvious errors in accuracy and completeness, and interviewed agency officials knowledgeable about BOP's psychology services to determine the processes in place to ensure the integrity of the data. We determined that the data were sufficiently reliable for the purposes of this report.
To determine what strategies for reducing recidivism among individuals with serious mental illness have been identified by the federal and selected state governments and in literature, we obtained and analyzed documents and interviewed officials from BOP and the selected states' corrections departments, as well as from DOJ and HHS organizations that support research, training, and programs related to mental health and recidivism. These DOJ organizations included the National Institute of Corrections, within BOP, and the Bureau of Justice Assistance and National Institute of Justice, within the Office of Justice Programs. The HHS organizations included SAMHSA and the National Institute of Mental Health. We also interviewed subject matter experts from the Council of State Governments Justice Center, Pew Charitable Trusts, and the Treatment Advocacy Center, which we selected to obtain perspectives from researchers and mental health and criminal justice organizations. Further, we conducted a literature review of studies that have sound methodologies and use primary data collection or secondary analysis to assess the impact of programs or interventions during incarceration or reentry on recidivism among adult offenders with mental illness. To identify relevant studies, we took the following steps:

1. A GAO research librarian conducted searches of various research databases and platforms, including ProQuest, MEDLINE, PsycINFO, Social SciSearch, and Scopus, among others, to identify scholarly and peer-reviewed publications; government reports; and publications by trade associations, nonprofits, and think tanks from 2008 through 2017, a period chosen to identify a comprehensive set of relevant and timely research.

2. We identified and reviewed selected additional studies that were cited within literature reviews, meta-analyses, and studies referenced on information-sharing websites, including the Council of State Governments' "What Works in Reentry" website, the National Institute of Justice's "Crime Solutions" website, and SAMHSA's Registry of Evidence-Based Practices and Programs, and other secondary sources published from 2000 through 2017. We chose this time period to ensure we identified key older, reliable studies we may have missed by virtue of our database search timeframe. We identified these secondary resources during the course of our audit through the previously discussed database search, interviews with agency officials and representatives from research, criminal justice, and mental health organizations, and by reviewing websites of relevant agencies.

The literature search produced about 200 publications. To select studies that were relevant to our research objective, two reviewers independently assessed the abstracts for each publication using the following criteria: (1) the program studied was implemented in the United States, and (2) the study described in the publication includes original data analysis to assess the impact of a program for adults with mental illness on recidivism. For publications that met both criteria, we obtained and reviewed the full text, using the same criteria. We also further categorized the studies that met the two criteria above into the following categories: (1) studies that evaluated programs implemented during the period of incarceration or reentry, (2) studies that evaluated programs meant to divert individuals with serious mental illness from jail or prison (e.g., mental health courts), and (3) other, for interventions that did not fall into either of these categories. As our review focused on strategies to reduce recidivism during incarceration and reentry, we excluded the studies on diversion programs (the second category).
We evaluated the 31 studies that fell into the incarceration and reentry and the other categories using a data collection instrument. The data collection instrument captured information on the elements of the program, the recidivism effects, and the study’s methodology. The data collection instrument was initially filled out by one individual and then verified for accuracy by another individual; any differences in the individuals’ assessments were discussed and reconciled. To determine if the findings of the 31 studies should be included in our review of the literature, the study reviewers conferred regarding each study and assessed if: 1) the study was sufficiently relevant to the objective; and 2) the study’s methodology was sufficiently rigorous. With regard to the study’s relevance, we included studies that evaluated: a program for individuals with mental illness incarcerated in prison or jail or provided directly upon release from prison or jail; or a program for individuals with mental illness that is not provided in a prison, jail, or directly upon release from prison or jail (e.g., in a psychiatric hospital or in the community after a psychiatric hospitalization), but is hypothesized to impact criminal justice involvement and could potentially be applied in a correctional setting. With regard to methodological rigor, two GAO methodologists used generally accepted social science standards to assess the design and analytic strategy of each study to ensure analyses were sufficiently sound to support the results and conclusions. Specifically, the methodologists examined such factors as how the effects of the programs were isolated (i.e., use of comparison groups and statistical controls); the appropriateness of treatment and comparison group selection, if used; and the statistical analyses used. As a result of this process, we found 18 studies within the scope of our review that used sufficiently sound methodologies. 
Some studies used a randomized controlled trial methodology or quasi-experimental research designs, and some studies used non-experimental designs to compare recidivism outcomes for a single population before and after the intervention. These studies used various recidivism measures, and some used more than one measure. For each of the 18 studies, we reviewed the study’s findings related to recidivism, and categorized the findings based on statistical significance as follows: Statistically significant reduction in recidivism: the study reported that one or more outcome measures indicated a statistically significant reduction in recidivism among program participants; the study may also have one or more recidivism outcome measures that were not statistically significant. Statistically significant increase in recidivism: the study reported that one or more outcome measures indicated a statistically significant increase in recidivism among program participants; the study may also have one or more recidivism outcome measures that were not statistically significant. No statistically significant effect on recidivism: the study reported only outcomes indicating no statistically significant effect on recidivism among program participants. For a list of the 18 studies, see appendix VII. We conducted this performance audit from February 2017 through February 2018, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Federal agencies have established interagency groups and other mechanisms, such as web-based resources, to share information related to correctional mental health and reducing recidivism among individuals with serious mental illness, among other things. Examples of these information sharing mechanisms are described in table 5 below. Our literature review also identified four studies that met the criteria of (1) containing quantitative analyses of the effect of a program for individuals with mental illness on recidivism, and (2) using sufficiently sound methodologies for conducting such analyses; but were in non-correctional settings, such as in a psychiatric hospital or in the community after a psychiatric hospitalization. While the findings from these studies may not be generalizable to a correctional setting, they may offer insights on effective strategies for reducing recidivism, as many of the program participants had a history of involvement with the criminal justice system. As shown in figure 7, half (2 of 4) of the studies found statistically significant reductions in recidivism. The non-correctional programs that were found to reduce recidivism included some of the same elements as the correctional programs that reduced recidivism, including mental health treatment (2 of 2 studies), substance abuse treatment (1 of 2 studies), case management (2 of 2 studies), release planning (1 of 2 studies), employment assistance (2 of 2 studies), housing assistance (1 of 2 studies), and multidisciplinary coordination among mental health providers, substance use specialists, social workers, and/or peer support specialists, for example (1 of 2 studies). 
However, similar to the literature on correctional programs, there were also studies that found that programs that offered multiple support services did not reduce recidivism, suggesting other factors may affect recidivism; such factors may include the extent to which participants used services, as previously noted, as well as other unique programmatic factors. We further discuss examples of programs that did and did not reduce recidivism below. For example, study 15 evaluated New York’s Assisted Outpatient Treatment, a court-ordered treatment program for individuals with mental illness and a history of multiple hospitalizations or violence toward self or others. Individuals entering the program are assigned a case manager and prioritized for enhanced services that include housing and vocational services. Researchers found that the comparison group who never received Assisted Outpatient Treatment had nearly double the odds (odds ratio of 1.91) of being arrested than program participants during and shortly after the period of assignment to the program. The programs that were found not to reduce recidivism also provide some insights into factors that affect recidivism. For example, study 18 evaluated a Pennsylvania-based modified outpatient therapeutic community treatment program for individuals with co-occurring substance use disorder and emotional distress or mental illness and found that it had no significant effect on recidivism. Researchers attributed this finding to the program’s emphasis on substance use rather than on addressing criminogenic risks. 
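The "nearly double the odds" finding for study 15 is stated as an odds ratio, which compares odds (p / (1 - p)) rather than raw arrest probabilities. The following sketch illustrates how an odds ratio is computed; the arrest probabilities used are hypothetical, chosen only to show the scale, since the study reports the ratio but not the underlying rates.

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1.0 - p)

def odds_ratio(p_comparison, p_participants):
    """Odds of arrest for the comparison group relative to participants."""
    return odds(p_comparison) / odds(p_participants)

# Hypothetical illustration: arrest probabilities of 30% for the comparison
# group vs. about 18.3% for participants yield an odds ratio near the
# reported 1.91. Many other probability pairs produce the same ratio.
illustration = odds_ratio(0.30, 0.183)
```

Because an odds ratio is not a ratio of probabilities, "nearly double the odds" does not mean the comparison group was arrested twice as often; the gap in probabilities is smaller than the gap in odds whenever the rates are above zero.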
The 14 studies we identified through our literature review that (1) assessed correctional institution or reentry programs for offenders with mental illness implemented in the United States, (2) contained quantitative analyses of the effect of a program on recidivism, and (3) used sufficiently sound methodologies for conducting such analyses used a number of different recidivism outcome measures, and some assessed more than one recidivism outcome measure. Tables 7, 8, and 9 below show the recidivism results for studies that measured reincarceration rates, reconviction rates, and number of days in jail or prison, which were reported by multiple studies. These do not represent all recidivism findings; some studies used other recidivism measures, such as the number of arrests or convictions, the odds ratio or hazard ratio of reincarceration, and self-reported criminal activity. This bibliography contains citations for the 18 studies we reviewed regarding programs for individuals with mental illness that may affect recidivism. (See appendix II for more information about how we identified these studies.) Following each citation we include the study number that we used to reference the study earlier in this report.

Burke, C., and S. Keaton. San Diego County's Connections Program Board of Corrections Final Report. San Diego, CA: SANDAG, June 2004. (Study 1)

Chandler, D.W., and G. Spicer. "Integrated Treatment for Jail Recidivists with Co-occurring Psychiatric and Substance Use Disorders." Community Mental Health Journal, vol. 42, no. 4 (2006): 405-425. (Study 2)

Compton, M.T., M.E. Kelley, A. Pope, K. Smith, B. Broussard, T.A. Reed, J.A. DiPolito, B.G. Druss, C. Li, and N.L. Haynes. "Opening Doors to Recovery: Recidivism and Recovery Among Persons With Serious Mental Illnesses and Repeated Hospitalizations." Psychiatric Services, vol. 62, no. 2 (2016): 169-175. (Study 17)

Cusack, K.J., J.P. Morrissey, G.S. Cuddeback, A. Prins, and D.M. Williams. "Criminal Justice Involvement, Behavioral Health Service Use, and Costs of Forensic Assertive Community Treatment: A Randomized Trial." Community Mental Health Journal, vol. 46 (2010): 356-363. (Study 4)

Duwe, G. "Does Release Planning for Serious and Persistent Mental Illness Offenders Reduce Recidivism? Results From an Outcome Evaluation." Journal of Offender Rehabilitation, vol. 54, no. 1 (2015): 19-36. (Study 11)

Link, B.G., M.W. Epperson, B.E. Perron, D.M. Castille, and L.H. Yang. "Arrest Outcomes Associated with Outpatient Commitment in New York State." Psychiatric Services, vol. 62, no. 5 (2011): 504-508. (Study 15)

Mayfield, J. The Dangerous Mentally Ill Offender Program: Four-Year Felony Recidivism and Cost Effectiveness. Olympia, WA: Washington State Institute for Public Policy, February 2009. (Study 9)

Morrissey, J.P., G.S. Cuddeback, A.E. Cuellar, and H.J. Steadman. "The Role of Medicaid Enrollment and Outpatient Service Use in Jail Recidivism Among Persons with Severe Mental Illness." Psychiatric Services, vol. 58, no. 6 (2007): 794-801. (Study 5)

Morrissey, J.P., M.E. Domino, and G.S. Cuddeback. "Expedited Medicaid Enrollment, Mental Health Service Use, and Criminal Recidivism Among Released Prisoners With Severe Mental Illness." Psychiatric Services, vol. 67, no. 8 (2016): 842-849. (Study 10)

Sacks, J.Y., K. McKendrick, and Z. Hamilton. "A Randomized Clinical Trial of a Therapeutic Community Treatment for Female Inmates: Outcomes at 6 and 12 Months After Prison Release." Journal of Addictive Diseases, vol. 31, no. 3 (2012): 258-269. (Study 7)

Sacks, S., M. Chaple, J.Y. Sacks, K. McKendrick, and C.M. Cleland. "Randomized Trial of a Reentry Modified Therapeutic Community for Offenders with Co-Occurring Disorders: Crime Outcomes." Journal of Substance Abuse Treatment, vol. 42 (2012): 247-259. (Study 3)

Sacks, S., K. McKendrick, J.Y. Sacks, S. Banks, and M. Harle. "Enhanced Outpatient Treatment for Co-Occurring Disorders: Main Outcomes." Journal of Substance Abuse Treatment, vol. 34 (2008): 48-60. (Study 18)

Sacks, S., J.Y. Sacks, K. McKendrick, S. Banks, and J. Stommel. "Modified TC for MICA Offenders: Crime Outcomes." Behavioral Sciences and the Law, vol. 22 (2004): 477-501. (Study 6)

Taylor, N. An Analysis of the Effectiveness of Santa Clara County's Mentally Ill Offender Crime Reduction Program. Ann Arbor, MI: ProQuest Information and Learning Company, May 2005. (Study 14)

Theurer, G., and D. Lovell. "Recidivism of Offenders with Mental Illness Released from Prison to an Intensive Community Treatment Program." Journal of Offender Rehabilitation, vol. 47, no. 4 (2008): 385-406. (Study 8)

Van Stelle, K.R., and D.P. Moberg. "Outcome Data for MICA Clients After Participation in an Institutional Therapeutic Community." Journal of Offender Rehabilitation, vol. 39, no. 1 (2004): 37-62. (Study 12)

Yates, K.F., M. Kunz, A. Khan, J. Volavka, and S. Rabinowitz. "Psychiatric Patients with Histories of Aggression and Crime Five Years after Discharge from a Cognitive-Behavioral Program." The Journal of Forensic Psychiatry and Psychology, vol. 21, no. 2 (2010): 167-188. (Study 16)

Zlotnick, C., J. Johnson, and L.M. Najavits. "Randomized Controlled Pilot Study of Cognitive-Behavioral Therapy in a Sample of Incarcerated Women with Substance Use Disorder and PTSD." Behavior Therapy, vol. 40 (2009): 325-336. (Study 13)

In addition to the contact above, Tom Jessor (Assistant Director); Frederick Lyles, Jr. (Analyst-in-Charge); Pedro Almoguera; David Blanding, Jr.; Billy Commons, III; Thomas C. Corless; Dominick Dale; Michele Fejfar; Eric Hauswirth; Valerie Kasindi; Heather May; Leia J. Dickerson; Sam Portnow; and Cynthia Saunders all made key contributions to this report.
In 2016, SAMHSA estimated that about 10.4 million adults in the United States suffered from a serious mental illness, which generally includes conditions such as schizophrenia and bipolar disorder. As of May 27, 2017, BOP was responsible for overseeing 187,910 inmates, and 7,831 of these inmates were considered to have a serious mental illness. Research has shown that inmates with serious mental illness are more likely to recidivate than those without. The 21st Century Cures Act directed GAO to report on the prevalence of crimes committed by persons with serious mental illness and the costs to treat these offenders—including identifying strategies for reducing recidivism among these individuals. This report discusses (1) what is known about crimes committed by inmates with serious mental illness incarcerated by the federal and selected state governments; (2) what is known about the costs to the federal and selected state governments to incarcerate and provide mental health care services to those individuals; and (3) what strategies the federal and selected state governments and studies have identified for reducing recidivism among individuals with serious mental illness. GAO selected six states that varied in their adult incarceration rates and provided geographic diversity. At BOP and the six states' departments of corrections, GAO analyzed criminal offense data and incarceration and mental health care cost data, and interviewed officials about strategies for reducing recidivism for inmates with serious mental illness. The results from these six states are not generalizable, but provide insights. GAO also reviewed studies that analyzed the relationship between various programs and recidivism among offenders with mental illness.
About two-thirds of inmates with a serious mental illness in the Department of Justice's (DOJ) Federal Bureau of Prisons (BOP) were incarcerated for four types of offenses—drug (23 percent), sex offenses (18 percent), weapons and explosives (17 percent), and robbery (8 percent)—as of May 27, 2017. GAO's analysis found that BOP inmates with serious mental illness were incarcerated for sex offenses, robbery, and homicide/aggravated assault at about twice the rate of inmates without serious mental illness, and were incarcerated for drug and immigration offenses at about half or less the rate of inmates without serious mental illness. GAO also analyzed available data on three selected states' inmate populations; the most common crimes committed by inmates with serious mental illness varied from state to state due to differing law enforcement priorities, definitions of serious mental illness, and methods of tracking categories of crime in their respective data systems. BOP does not track costs related to incarcerating or providing mental health care services to inmates with serious mental illness, but BOP and selected states generally track these costs for all inmates. BOP does not track costs for inmates with serious mental illness in part because it does not track costs for individual inmates, due to resource restrictions and the administrative burden such tracking would require. BOP does track costs associated with mental health care services system-wide and by institution. System-wide, for fiscal year 2016, BOP spent about $72 million on psychology services, $5.6 million on psychotropic drugs, and $4.1 million on mental health care in residential reentry centers. The six state departments of corrections each used different methods and provided GAO with estimates for different types of mental health care costs. 
For example, two states provided average per-inmate costs of incarceration for mental health treatment units where some inmates with serious mental illness are treated; however, these included costs for inmates without serious mental illness housed in those units. DOJ, the Department of Health and Human Services' Substance Abuse and Mental Health Services Administration (SAMHSA), and criminal justice and mental health experts have developed a framework to reduce recidivism among adults with mental illness. The framework calls for correctional agencies to assess individuals' recidivism risk and substance abuse and mental health needs and target treatment to those with the highest risk of reoffending. To help implement this framework, SAMHSA, in collaboration with DOJ and other experts, developed guidance for mental health, correctional, and community stakeholders on (1) assessing risk and clinical needs, (2) planning treatment in custody and upon reentry based on risks and needs, (3) identifying post-release services, and (4) coordinating with community-based providers to avoid gaps in care. BOP and the six states also identified strategies for reducing recidivism consistent with this guidance, such as memoranda of understanding between correctional and mental health agencies to coordinate care. Further, GAO's literature review found that programs that reduced recidivism among offenders with mental illness generally offered multiple support services, such as mental health and substance abuse treatment, case management, and housing assistance.
|
Grade-crossing safety has improved significantly since 1975, but since 2009, the number of crashes and fatalities at grade crossings has plateaued (see fig. 1). The yearly number of grade-crossing crashes declined from 12,126 in 1975 to 2,117 in 2017. In that time frame, fatalities dropped from 917 to 273. The most significant reductions in grade-crossing crashes and fatalities were achieved from 1975 to 1985, when states closed or improved the most dangerous crossings. Grade-crossing safety continued to improve until the mid-2000s, though at a slower rate. Since 2009, the number of grade-crossing crashes and fatalities has remained at around 2,100 crashes and 250 fatalities a year. These fatalities typically make up less than one percent of all highway-related fatalities. The decrease in crashes and fatalities occurred as the volume of train and highway traffic generally increased over the years. FRA expects the traffic volumes to continue to increase and has expressed concern that grade-crossing crashes and fatalities may also increase. As a set-aside portion of FHWA’s much larger Highway Safety Improvement Program (HSIP), the Section 130 Program provides funds to state DOTs for the elimination of hazards at highway-rail grade crossings. States determine what improvements need to be made at grade crossings. FHWA has oversight responsibilities regarding the use of federal funds as part of its administration of federal-aid highway programs and funding, including HSIP funds. FHWA uses a statutory formula to distribute to states Section 130 Program funds, which averaged $235 million per year during the last 10 years (fiscal years 2009 through 2018). Section 130 Program projects are funded at a 90 percent federal share, with the state or the roadway authority funding the remaining 10 percent. 
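The 90/10 cost-share arithmetic is simple but worth making explicit; the sketch below is a minimal illustration, and the function name is invented rather than drawn from any FHWA system:

```python
def section130_shares(project_cost: float) -> tuple[float, float]:
    """Split a project's cost into the 90 percent federal share and the
    10 percent match owed by the state or roadway authority."""
    federal = round(project_cost * 0.90, 2)
    state_match = round(project_cost - federal, 2)
    return federal, state_match

# A hypothetical $300,000 warning-device installation.
federal, match = section130_shares(300_000)
print(federal, match)  # 270000.0 30000.0
```

Computing the match as the remainder, rather than multiplying by 0.10 separately, keeps the two shares summing exactly to the project cost.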
States have 4 years to obligate their program funds before they expire, meaning that in any given fiscal year, states can obligate funds appropriated in that year as well as any unobligated funds from the previous 3 fiscal years. In addition, states may choose to combine funds from multiple years to fund relatively expensive projects. The Section 130 Program’s requirements direct states to establish an implementation schedule for grade-crossing-safety improvement projects that, at a minimum, include warning signs for all public grade crossings. Grade crossings are generally categorized as “active” or “passive” depending on the type of traffic control devices that are present. As of July 2018, according to FRA’s National Highway-Rail Crossing Inventory, there were approximately 68,000 public grade crossings with electronic, or active, traffic control devices in the United States. Another approximately 58,000 public grade crossings have passive traffic-control devices, which include signs and supplementary pavement markings. The requirements also specify that at least 50 percent of Section 130 Program funding must be dedicated to the installation of protective devices at grade crossings, including traffic control devices. States can use remaining program funds for any hazard elimination project. States may also use program funds to improve warning signs and pavement markings or to improve the way the roadway aligns with the tracks (e.g., to ensure low-clearance vehicles do not get stuck on the tracks). In addition, states can use up to 2 percent of the funds to improve their grade-crossing inventories and to collect and analyze data. See figure 2 for examples of the types of projects eligible for Section 130 Program funds and graphical depictions of grade crossings before and after safety improvements have been made. FHWA and FRA are the primary agencies responsible for safety at grade crossings, and they both play key—yet distinct—roles. 
FHWA oversees the Section 130 Program and monitors states’ uses of program funds through 52 division offices located in each state, the District of Columbia, and Puerto Rico and through headquarters staff in Washington, D.C. In addition, FHWA’s division staff reviews states’ processes for prioritizing and selecting grade-crossing-safety improvement projects. FHWA does not evaluate the appropriateness of individual grade-crossing projects, but instead helps states determine that projects meet program eligibility requirements. Division staff assists in the implementation of Section 130 Program state-administered projects, and they may participate in state-DOT-led, on-site reviews of grade crossings under consideration for Section 130 Program projects. FHWA headquarters staff is responsible for FHWA-wide initiatives, such as working with stakeholders to establish standards for traffic control devices and systems at grade crossings and for engineering oversight of state-administered safety improvement projects. FRA provides safety oversight of both freight and passenger railroads by collecting and analyzing data; issuing and enforcing numerous safety regulations, including on grade-crossings’ warning systems; conducting focused inspections, audits, and accident investigations; and providing technical assistance to railroads and other stakeholders. Specifically, FRA oversees rail safety through eight regional offices and through headquarters staff in Washington, D.C. Regional staff monitor railroads’ compliance with federal safety regulations through inspections and provide technical assistance and guidance to states. In 2017, FRA created a new discipline for grade-crossing safety and is hiring new grade-crossing inspectors. 
These inspectors conduct field investigations, identify regulatory defects and violations, recommend civil penalty assessments when appropriate, and may participate in state-DOT-led teams that conduct on-site reviews of grade crossings to evaluate potential safety improvements. According to FRA documentation, FRA’s new inspectors will also work with a variety of stakeholders to institute new types of training, explore new safety concepts and technologies, and assist in the development of new or modified highway-rail grade-crossing-safety regulations, initiatives, and programs. The inspectors will also work with FHWA and other DOT operating administrations in a cooperative effort to improve grade-crossing safety. FRA regional staff also investigates select railroad crashes, including those at grade crossings, to determine root causation and any contributing factors, so that railroads can implement corrective actions. FRA headquarters staff develops analytical tools for states to use to prioritize grade-crossing projects. In addition, headquarters staff manages research and development to support improved railroad safety, including at grade crossings. FRA’s Office of Railroad Safety maintains the National Highway-Rail Crossing Inventory database and the Railroad Accident/Incident Reporting System on grade-crossing crashes. Both states and railroads submit information to FRA’s crossing inventory, which is designed to contain information on every grade crossing in the nation. Railroads submit information such as train speed and volume; states submit information such as highway speed limits and average annual daily traffic. The Rail Safety Improvement Act of 2008 added requirements for both railroads and states to periodically update the inventory; however, the Moving Ahead for Progress in the 21st Century Act (MAP-21) repealed a provision providing DOT authority to issue implementing regulations that would govern states’ reporting to the inventory. 
According to FRA officials, while FRA’s regulations do not require states to report the information, FRA encourages them to do so. FRA regulations require railroads to report and update their information in the inventory every 3 years or sooner in some instances, such as if new warning devices are installed or the grade crossing is closed. FRA’s accident system contains details about each grade-crossing accident that has occurred. In addition to submitting immediate reports of fatal grade-crossing crashes, railroads are required to submit accident reports within 30 days after the end of the month in which the accident occurred and describe conditions at the time of the accident (e.g., visibility and weather); information on the grade crossing (e.g., type of warning device); and information on the driver (e.g., gender and age). In its role overseeing grade-crossing safety, FRA has sponsored a number of research efforts to better understand the causes of grade-crossing crashes and identify potential ways to improve engineering, education, and enforcement efforts. For example, FRA sponsored an in-depth data analysis of grade-crossing crashes to better identify which crossing characteristics increase the risk of an accident. The report, issued in 2017, found that the volumes of train and vehicle traffic at a crossing are the biggest predictors of grade-crossing crashes. Changes in vehicle and train traffic therefore affect the annual number of grade-crossing crashes. For example, as highway traffic decreased in 2008, possibly due to the economic recession and higher gas prices, so too did the number of grade-crossing crashes. As previously noted, FRA expects that the number of grade-crossing crashes will likely grow with anticipated increases in future train and highway traffic. As discussed below, vehicle and train volume are included in the U.S. DOT Accident Prediction Model, which some states use to select grade-crossing improvement projects. 
According to FRA officials, FRA is using the results of this recent in-depth data analysis to, in part, evaluate whether additional risk factors, such as the number of male drivers or trains carrying toxic materials, should be added to the model. FRA has targeted other research into understanding driver behavior at grade crossings, which is the leading cause of crashes. According to FRA’s accident data, in 2017, 71 percent of fatal crashes at public grade crossings occurred at those with gates. In 2004, the DOT Inspector General (IG) reported that 94 percent of grade-crossing crashes from 1994 to 2003 could be attributed to risky driver behavior or poor judgment. State officials we spoke with explained that drivers may become impatient waiting at a grade crossing and decide to go around the gates. Drivers may also line up over the grade crossing in heavy vehicular traffic, and be unable to exit before the gates come down. See figure 3 for examples of risky driver behavior at grade crossings. To better understand driver behavior, FRA sponsored a John A. Volpe National Transportation Systems Center (Volpe Center) study that recorded and analyzed drivers’ actions as they approached grade crossings. The researchers found that almost half of drivers were doing another task, such as eating, and over a third did not look in either direction while approaching passive grade crossings. We have previously reported, and many stakeholders we interviewed agreed, that in light of inappropriate driver behavior, technological solutions alone may not fully resolve safety issues at grade crossings. In addition, public-education and law-enforcement efforts can augment the effectiveness of technological solutions. According to FRA officials, they shared information on driver education with DOT’s National Highway Traffic Safety Administration (NHTSA) as NHTSA works more closely with states on driver education manuals. 
According to DOT officials, NHTSA updates its driver education materials every 2–3 years and plans to consider including grade-crossing-safety materials in the next versions. FRA is also working with states and localities to research and develop new protective devices and other safety measures targeted at improving driver behavior at grade crossings. As most fatal crashes happen at grade crossings already equipped with gates, FRA and state and local agencies are exploring whether additional safety measures can improve safety at those locations. For example, in 2016 and 2017, FRA’s Grade Crossing Task Force worked with the Volpe Center and the City of Orlando to test whether photo enforcement at grade crossings could reduce risky driver behavior. The City of Orlando installed automated photo-enforcement devices at a grade crossing, and instead of issuing fines to drivers who had violated its warning devices, sent drivers a warning notice and educational safety materials. Eight months after the photo-enforcement system was installed, grade-crossing violations decreased by 15 percent. While FRA judged these enforcement efforts successful at changing driver behavior, a 2015 FRA whitepaper noted that photo-enforcement equipment is costly—on average costing over $300,000 per crossing to install and operate for 2 years—and may not be cost-effective for most grade crossings. FRA found that due to costs and state laws prohibiting photo enforcement, only two photo-enforcement cameras were currently in operation at grade crossings across the country. States, localities, and FHWA are also exploring whether new types of pavement markings at grade crossings can improve driver behavior. According to DOT officials, FHWA is working with two states to develop new cross-hatch pavement markings for grade crossings that would comply with the Manual on Uniform Traffic Control Devices, similar to the “don’t block the box” type pavement markings used in intersections. 
FHWA also worked with a city to test the use of in-roadway lights to delineate the crossing. (See fig. 4). FRA and state DOTs are also trying to improve pedestrian safety at grade crossings by developing new safety measures. Grade-crossing accidents involving pedestrians are less frequent than those involving automobiles at grade crossings but have a higher fatality rate. While pedestrians were involved in only 9 percent of accidents at public crossings in 2017, almost 40 percent of fatal grade-crossing accidents involved pedestrians. To try to improve pedestrian safety, in 2012 the Volpe Center worked with New Jersey Transit to study whether adding pedestrian gate skirts—hanging gates that further block a crossing (see fig. 5)—would prevent people from ducking under the gates. The Volpe Center reported that these new gates had mixed success. While incidents of people going under and around the gates decreased, more people chose to cross the tracks in the street rather than at the sidewalk. Finally, FRA is exploring new automated and connected vehicle technologies that could reduce risky driver behavior at grade crossings. FRA, FHWA, and officials from one state we interviewed said they anticipate that such technology will be critical to further improving safety. Specifically, FRA and FHWA are coordinating with DOT’s Intelligent Transportation Systems Joint Program Office to develop pilot technology that would enable crossing infrastructure or trains to communicate wirelessly with vehicles. Vehicles can use this information to warn the driver that a crash or violation is imminent, or integrate with onboard active safety systems. According to FRA officials, they completed a proof of concept in 2013 and completed and tested a prototype of the technology in 2017. 
DOT officials said that DOT does not have a time frame for when automakers might begin incorporating such connected vehicle technologies and noted that retrofitting older cars with new equipment will likely make this a long-term effort. FRA shares information on its research in various ways with state DOTs, because states are responsible for deciding which safety measures to install at grade crossings. Specifically, FRA and FHWA jointly hold quarterly webinars with stakeholders, including state DOT officials, and conduct presentations at highway-rail safety workshops. Information on safety measures such as grade-crossing devices, signs, and markings is also included in the Railroad-Highway Grade Crossing Handbook. According to DOT officials, the handbook was developed jointly by FHWA and FRA. The last version of the handbook was updated in 2007 and includes some out-of-date information. FRA and FHWA officials said they began working on an update in 2017, but missed the July 2018 target completion date. According to FHWA officials, updating the handbook is a complex undertaking that has taken more time than they anticipated due to the extensive collaboration required among stakeholders. FHWA officials said they anticipate completing the update during the spring of 2019. The risk of crashes at public grade crossings within a state factors into states’ selection of over 1,000 new Section 130 Program projects nationally each fiscal year. FHWA requires states to develop a grade crossing program that considers relative risk. FHWA officials said they review the methods that states use to select projects to ensure that risk is considered. According to a 2016 academic study of 50 states, most states use mathematical formulas, or “accident prediction models,” to help assess risk and identify grade crossings for potential projects. 
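To make the idea concrete, here is a deliberately simplified ranking sketch. The weights and field names are invented for illustration; this is not the U.S. DOT Accident Prediction Model, which uses calibrated nonlinear formulas:

```python
# Hypothetical, simplified risk scoring in the spirit of an accident
# prediction model: traffic volumes, accident history, and warning-device
# type feed a score used to rank crossings. All weights are invented.
def risk_score(crossing: dict) -> float:
    return (
        0.4 * crossing["trains_per_day"]
        + 0.001 * crossing["vehicles_per_day"]  # average daily highway traffic
        + 5.0 * crossing["crashes_last_5yr"]    # accident history
        + (3.0 if crossing["warning"] == "passive" else 0.0)
    )

crossings = [
    {"id": "A", "trains_per_day": 10, "vehicles_per_day": 2000,
     "crashes_last_5yr": 0, "warning": "passive"},
    {"id": "B", "trains_per_day": 40, "vehicles_per_day": 12000,
     "crashes_last_5yr": 2, "warning": "gates"},
    {"id": "C", "trains_per_day": 5, "vehicles_per_day": 500,
     "crashes_last_5yr": 1, "warning": "passive"},
]
ranked = sorted(crossings, key=risk_score, reverse=True)
print([c["id"] for c in ranked])  # ['B', 'C', 'A']
```

Even this toy version reflects the report's central finding: train and vehicle volumes dominate the ranking, so the busy gated crossing outranks the quiet passive ones.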
More specifically, these accident prediction models use factors such as grade crossing characteristics and accident history to rank grade crossings by risk. DOT provides one such model—the Accident Prediction Model—and some states have developed their own models. The study reported that 19 states used DOT’s model and 20 states used a different model. It also found that the DOT and commonly used state models include some similar grade-crossing characteristics to predict accident risk. For example, the selected models reviewed all considered vehicle- and train-traffic volume, which FRA has found to be the strongest predictors of grade-crossing crashes. FRA makes its Accident Prediction Model available to states online through its Web Accident Prediction System. This system is an online tool that uses FRA’s crossing inventory, crossing collision history, and the DOT Accident Prediction Model to predict accident risk for grade crossings in each state. Only one of the eight states in our review used the system as its primary source for ranking grade-crossing risk. Most of the other states perform their own calculations to rank grade crossings. Officials from two states said that they believe their state-maintained data are more reliable than FRA’s crossing inventory and explained that they go directly to their contacts at railroads to get updated information on factors such as train volume. Accident prediction models are only one source of information states use when selecting Section 130 Program projects. According to the state officials we spoke with, a variety of other considerations can also influence their decisions, including the following: Proximity of projects together along a railroad “corridor” in order to gain efficiencies and reduce construction costs. Requests from local jurisdictions or railroads. 
These stakeholders may have information on upcoming changes at a grade crossing, such as higher train volume or new housing developments nearby, which would increase risk but would not be reflected yet in the accident prediction model. Availability of local funding to provide the required 10 percent match for Section 130 Program projects, while trying to spread the funds fairly across the state. States may also consider grade crossings that have had close calls in the past, such as where a car narrowly avoided being hit by a train. FRA does not require railroads to report on these close calls, or “near misses”; however, according to state officials, railroads sometimes provide this information to states on an ad hoc basis. State officials from four of the eight states we spoke with said they considered near misses when selecting Section 130 Program projects. A 2004 Volpe Center report noted that studying close calls was a proactive way to improve safety. According to the report, FRA sponsored a workshop to learn about the benefits of collecting and analyzing close calls. However, stakeholders we interviewed noted challenges formalizing near-miss reporting. For example, Volpe Center officials said these reports are subjective in nature—what one engineer considers a close call, others may not. FRA developed another online tool—GradeDec—to allow states to compare the costs and benefits for various grade-crossing improvement projects. GradeDec uses models to analyze a project’s risk and calculate cost-benefit ratios and net present value for potential projects. FRA provides state DOTs with on-site GradeDec workshops upon request. While FRA officials noted that many state and local governments have registered to use the program, none of the state officials we spoke with identified GradeDec as a tool that they use to conduct cost-benefit analysis. 
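The net-present-value comparison that GradeDec-style tools perform can be sketched as follows; the discount rate, benefit figures, and horizon below are invented for illustration, not drawn from GradeDec itself:

```python
def npv(annual_benefit: float, cost: float, years: int, rate: float = 0.07) -> float:
    """Net present value of a safety project: the discounted stream of
    annual benefits (e.g., expected crash costs avoided) minus the
    up-front installation cost."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1)) - cost

# Two hypothetical projects compared over a 20-year horizon.
gates = npv(annual_benefit=40_000, cost=250_000, years=20)   # four-quadrant gates
lights = npv(annual_benefit=18_000, cost=150_000, years=20)  # flashing lights
print(round(gates), round(lights))
```

Ranking candidate projects by a figure like this, rather than by up-front cost alone, is what lets a tool surface the more expensive option when its lifetime safety benefit justifies the price.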
Officials from two state DOTs we spoke with said that cost-benefit analyses could help them better identify and select the most cost-effective crossing safety projects in the future. According to the academic study of 50 states noted above, because of limited funding for grade-crossing improvements, states should consider the life-cycle costs of the projects as well as net present value to help select projects. As discussed later in this report, the small number of crashes at grade crossings can make it challenging to distinguish between different projects in terms of their effectiveness in reducing accidents. Finally, after they have considered risk factors and created a list of potential grade crossings for improvement, state officials, along with relevant stakeholders from railroads and local governments, conduct field reviews of the potential projects. According to state officials, these reviews help identify grade-crossing characteristics that may not be included in the accident prediction models, such as vegetation that would obstruct drivers’ views. In 2008, legislation was enacted mandating reporting by states and railroads to the National Highway-Rail Crossing Inventory. However, the fact that reporting to the inventory remained voluntary until 2015 has had lingering effects on the completeness of the data in the inventory. In 2015, as mandated by statute, FRA issued regulations requiring railroads to update certain data elements for all grade crossings every 3 years. However, our analysis of FRA’s crossing inventory found that 4 percent of grade crossings were last updated in 2009 or earlier. In addition, because MAP-21 repealed DOT’s authority to issue regulations that would govern state reporting to the inventory, state reporting of grade-crossing data remains voluntary, according to FRA officials, and state-reported information is incomplete. Our analysis of state-reported data in FRA’s crossing inventory found varying levels of completeness. 
For example, while some state-reported data fields were almost entirely complete, 33 percent of public grade crossings were missing data on posted highway speed. We also found that, of the crossings for which states reported the year when the highway-traffic count was conducted, 64 percent of the counts for public grade crossings—another important risk factor—dated from 2009 or earlier. According to the 2015 final rule, FRA will continue to evaluate whether additional regulations to address state reporting are needed to maintain the crossing inventory’s accuracy. FRA officials told us that improving inventory data will help them better deploy their limited resources, particularly their grade-crossing inspectors, and said that they have taken steps to help improve the data. In 2017, FRA regional officials conducted field reviews to verify the latitude and longitude data for grade crossings in the inventory, data that states are responsible for updating. In addition, FRA expects its grade-crossing inspectors as part of their inspections to review and identify issues with the railroad- and state-reported inventory data. According to FRA officials, FRA has begun to both transition its 19 grade-crossing managers into grade-crossing inspectors and also hire new inspectors, for an eventual total of 24 inspectors and eight regional specialists to supervise their activities. To help ensure railroads’ compliance with crossing inventory regulations, officials said that the inspectors will use spot checks to validate the inventory data by comparing grade-crossing characteristics in the field with the information railroads submitted to the inventory. In addition, FRA has incorporated information on inventory-reporting requirements into the grade-crossing inspectors’ training. Finally, FRA is currently developing guidelines for the grade-crossing inspections similar to those for other FRA safety disciplines. 
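A completeness analysis like the one described above reduces to counting missing and stale fields across inventory records; the sketch below uses invented field names and records, not FRA's actual inventory schema:

```python
def completeness_report(records: list, fields: list, stale_year: int = 2009):
    """Share of inventory records missing each field, plus the share of
    reported highway-traffic counts dating from stale_year or earlier."""
    n = len(records)
    missing = {f: sum(1 for r in records if r.get(f) is None) / n for f in fields}
    counted = [r for r in records if r.get("traffic_count_year") is not None]
    stale = sum(1 for r in counted if r["traffic_count_year"] <= stale_year) / len(counted)
    return missing, stale

# Three invented inventory records.
records = [
    {"highway_speed": 45, "traffic_count_year": 2016},
    {"highway_speed": None, "traffic_count_year": 2008},
    {"highway_speed": 30, "traffic_count_year": 2005},
]
missing, stale = completeness_report(records, ["highway_speed"])
print(round(missing["highway_speed"], 2), round(stale, 2))  # 0.33 0.67
```

Run against the full inventory, checks of this kind yield exactly the sort of figures cited above: the share of crossings missing posted highway speed, and the share of traffic counts that have not been refreshed since 2009.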
FRA headquarters officials acknowledged that they are still clarifying the details for the inspections that will be included in the compliance manuals that inspectors will use. Specifically, they said they are still determining appropriate inspector workloads and drafting specific guidelines that will need to be integrated into FRA’s regional inspection plans. FRA officials said they are working to develop and make available inventory inspection guidance to the grade-crossing managers and inspectors by December 31, 2018. In the meantime, FRA held training that included information on inventory-reporting requirements. In August 2018, FRA developed guidance for grade-crossing inspections specific to quiet zones in response to a recommendation we made in 2017. It is important that FRA meets its goal to issue similar guidance specific to reviewing the accuracy of the inventory data, as FRA cannot have reasonable assurance that inspections that are already under way are being conducted in such a manner that would allow them to consistently identify data reliability issues at each crossing. About 75 percent of all Section 130 Program projects states implemented in fiscal year 2016 involved installing or updating active grade-crossing equipment, including warning lights and protective gates (see fig. 6). The prevalence of this type of project is in part due to the Section 130 Program requirement that states spend at least 50 percent of funds on protective devices. Other than eliminating a grade crossing, adding protective devices has long been considered the most effective way of reducing the risk of a crash. Officials from six of eight state DOTs we interviewed told us that the numbers and types of grade-crossing projects they implement are dependent on the amount of Section 130 Program funding they receive and the cost of the projects. 
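The program's statutory spending constraints (at least 50 percent of funds on protective devices, and up to 2 percent for inventory improvement and data analysis) can be expressed as simple checks; the spending-category names below are invented for illustration:

```python
def meets_protective_device_minimum(spending: dict) -> bool:
    """At least 50 percent of Section 130 funds must go to protective devices."""
    return spending.get("protective_devices", 0.0) >= 0.5 * sum(spending.values())

def within_data_cap(spending: dict) -> bool:
    """At most 2 percent may go to inventory improvement and data analysis."""
    return spending.get("data_improvement", 0.0) <= 0.02 * sum(spending.values())

# A hypothetical state spending plan.
plan = {"protective_devices": 1_800_000, "crossing_closures": 400_000,
        "signs_and_markings": 300_000, "data_improvement": 40_000}
print(meets_protective_device_minimum(plan), within_data_cap(plan))  # True True
```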
As previously described, funds are set aside from the Highway Safety Improvement Program and distributed to states by a statutory formula that includes factors such as the number of grade crossings in each state. Officials from six of the eight state DOTs we spoke to agreed that the set-aside nature of the program was crucial in allowing them to implement projects, many of which they said would not have been possible without Section 130 Program funds. For example, many said the formula funding ensures that grade-crossing projects are completed along with highway safety projects, particularly given the fact that fatalities resulting from grade-crossing crashes account for so few when compared to highway deaths. Overall, fatalities resulting from grade-crossing crashes account for less than 1 percent of all highway-related fatalities. In fiscal year 2018, the funds distributed ranged from a low of approximately $1.2 million for eight states and Washington, D.C., to over $16 million for California and over $19 million for Texas. The number of grade crossings in the eight states and Washington, D.C. ranged from 5 to 380, while California had almost 6,000 and Texas had over 9,000. Project implementation costs varied by project type and ranged widely depending on project scope. Based on 2016 DOT data, some typical project costs ranged as follows: adding signs to passive grade crossings, $500 to $1,500; adding flashing lights and two gates to passive grade crossings, $150,000 to $300,000; adding four gates to grade crossings with flashing lights, $250,000; closing a grade crossing, $25,000 to $100,000; and separating a grade crossing from traffic (grade separation), $5 million to $40 million. State officials we spoke with cited several challenges in pursuing certain types of controversial, innovative, and expensive projects that could help them address the evolving nature of risk at grade crossings, as well as difficulty in measuring the effectiveness of their projects. 
First, most state DOT officials said that the cost of grade-separation projects and, at times, the controversy of eliminating grade crossings through closure reduces the number of these projects, while acknowledging that they are the most effective ways to improve safety. These types of projects made up only 3 percent of Section 130 Program projects in fiscal year 2016 (see fig. 6). Grade-separation projects are often more expensive than the annual Section 130 Program funding available to states. In 2018, only eight states received annual Section 130 Program funding sufficient to fund a $7-million grade-separation project. As discussed previously, to fund relatively expensive projects, states may choose to combine funds from multiple years. Also, states and railroads may make incentive payments to localities for the permanent closure of a grade crossing. In addition to the cost, most state DOT officials reported challenges obtaining local support for closing grade crossings. They said closures may inconvenience residents who use the road and force emergency responders to take longer routes, potentially slowing response times. Grade-separation projects address these safety concerns and may be more agreeable to residents, but they are substantially more expensive. While up to $7,500 in Section 130 Program funding can be used to help incentivize communities to close grade crossings, officials from some of our selected state DOTs said this amount is generally not enough to persuade local officials to support the closing. Second, officials from many state DOTs we interviewed also reported that the requirements of the Section 130 Program create challenges for them in implementing what they considered to be innovative projects. 
For example, the program requirement that 50 percent of funds be used on protective devices, combined with what one researcher described to us as the tendency by states to implement “known” projects—i.e., protective devices—may impede states’ selection of new, more innovative safety projects. Officials we interviewed from many state DOTs described challenges related to the program’s requirements. They noted that they are prevented from using Section 130 Program funds for new types of safety technologies not yet incorporated into FHWA’s Manual on Uniform Traffic Control Devices. As noted previously in this report, outside the Section 130 Program, FHWA is working with states and localities to explore whether new types of pavement markings at grade crossings, not in the manual, can improve driver behavior. One state DOT official we interviewed suggested changes to allow states to fund one grade-crossing pilot project per year or to use a set percentage of program funds to finance a pilot project that could help them explore promising but as yet unproven technologies. Third, state DOT officials from four of the eight selected states also said it can be difficult to find funding for the required 10 percent state match. As previously mentioned, while certain rail-safety projects are eligible for up to 100 percent federal funding, Section 130 Program projects are funded at a 90 percent federal share. According to DOT documentation we reviewed, only some states have a dedicated source for such a match, and state DOT officials from one of our selected states said their state cannot use state funds for the 10 percent match. Some state DOT officials said this situation can drive project selection. For example, they sometimes chose projects based on which localities or railroads were willing to provide matching funds or offer cost savings.
Finally, many state officials cited challenges in measuring the effectiveness of grade-crossing projects in reducing crashes or the risk of crashes. In particular, state officials we spoke to said it can be difficult to use before-and-after crash statistics as a measure of effectiveness because of the low number and random nature of crashes. Also, as FRA research has shown and as FHWA and FRA have noted, before-and-after grade-crossing accident statistics can be misleading, given the infrequency of crashes and the fact that some crashes are not the result of grade-crossing conditions. States’ required Section 130 Program annual progress reports to the Secretary of DOT call for states to report on the effectiveness of the improvements they made. FHWA reporting guidance suggests states define effectiveness as the reduction in the number of fatalities and serious injuries after grade-crossing projects were implemented, consistent with statutory requirements. In addition, FHWA guidance states that consideration should be given to quantifying effectiveness in the context of fatalities and serious injuries. However, states often report no differences in crashes after specific projects were implemented, and there have been instances where states reported a slight increase in crashes. Such an increase does not necessarily mean that the project was not effective in reducing the overall risk of a crash. Also, not all projects are implemented at grade crossings where there has been a crash. Among other information, states also typically report information on funding and data on the numbers and types of projects implemented. In addition, the extent to which states report projects’ effectiveness varies greatly. Given states’ responsibility for implementing the Section 130 Program and the differences in the amounts of funding they receive, FHWA officials said states should determine and report on the appropriate effectiveness metrics for their programs.
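The noise problem the officials describe can be made concrete with a small simulation: even when a project genuinely halves the crash rate at a crossing, low counts mean the "after" period frequently shows as many or more crashes than the "before" period. The rates and horizons below are illustrative assumptions, not program data.

```python
import math
import random

def poisson(rng, lam):
    # Knuth's method; adequate for the small per-crossing rates used here
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def share_misleading(rate_before, effectiveness, years, trials, seed=0):
    """Fraction of trials in which an effective project still shows
    no observed decrease (after-count >= before-count)."""
    rng = random.Random(seed)
    rate_after = rate_before * (1 - effectiveness)
    hits = 0
    for _ in range(trials):
        before = sum(poisson(rng, rate_before) for _ in range(years))
        after = sum(poisson(rng, rate_after) for _ in range(years))
        if after >= before:
            hits += 1
    return hits / trials

# With 0.3 crashes/year before the project and a 50 percent risk reduction,
# compare total crashes over 3-year windows: share_misleading(0.3, 0.5, 3, 20000)
```

With these illustrative numbers, roughly half of before-and-after comparisons show no improvement even though the project halved the underlying risk—one way to see why FHWA and FRA caution against crash counts at individual crossings as an effectiveness measure.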
According to FHWA officials, during the 2017 reporting year, a few states requested examples of what to include when reporting effectiveness, and FHWA responded with examples of various methods they could use, such as a benefit-cost ratio or the percentage decrease in fatalities, serious injuries, and crashes. Regardless of the difficulty in measuring the effectiveness of specific projects, most state DOT officials we interviewed stressed the importance of the Section 130 Program in funding grade-crossing projects. FHWA’s biennial report to Congress is intended to provide information to Congress on the progress being made by the states in implementing projects to improve safety and, in addition, make recommendations for future implementation of the program. FHWA reviews states’ annual Section 130 Program reports and uses them to formulate the report to Congress every 2 years. FHWA’s 2018 report highlights that the Section 130 Program has seen great success since 1975, with a decrease of approximately 74 percent in fatalities at the same time that there was an increase in vehicle and train traffic. The report described the latest available 10-year trend, from 2007 to 2016, as showing a 31 percent decrease in fatalities. Fatalities have also decreased when adjusted for train traffic. However, FHWA officials acknowledged in interviews with us that crashes and fatalities have remained constant since about 2009, with more recent data showing a slight increase in fatalities over the last 2 to 3 years, data that are consistent with the increases in overall roadway fatalities. The officials said increased train- and vehicle-traffic volumes could be contributing to that increase, in addition to other factors, such as more bicycle riders and pedestrians using grade crossings. As described earlier, states have generally already used Section 130 Program funding to address safety at the riskiest grade crossings by adding protective measures, typically lights and gates. 
Yet crashes continue to occur at these improved grade crossings. Given these trends and the challenges discussed earlier related to the requirements of the Section 130 Program, it is not clear whether the program remains effective in continuing to reduce the risk of crashes and fatalities at grade crossings. As required, FHWA’s biennial report includes a section on “recommendations for future implementation” of the Section 130 Program. As part of this, FHWA reports on challenges and actions being taken to address them. FHWA’s 2018 report identified one of the same challenges we heard about from state DOT officials related to the inability or unwillingness of local agencies to provide matching funds and the relatively low amount of funding designed to incentivize localities to close crossings. FHWA reported on its efforts to address these challenges, including by providing guidance, resources, and supportive training to states and local agencies and serving as a clearinghouse for innovative methods of supporting projects. However, with the exception of the funding challenge, FHWA’s most recent report does not include the other challenges state officials identified to us related to the requirements of the Section 130 Program discussed above. These include program funding requirements that may impede innovative approaches and the difficulties of using before-and-after crash statistics to measure effectiveness. Many state DOT officials we spoke with said there may be an opportunity to more broadly assess the Section 130 Program at the national level. It could be more informative to comprehensively assess more detailed crash trends, such as those that look forward over multiple years across the more than 1,700 crashes nationwide rather than the approximately 35 that occur on average within a state, and to identify strategies to address those trends. Doing so could help FHWA learn more about why crashes are continuing and what types of projects may be effective.
There could be ways to evaluate the program more comprehensively; many state DOT officials we interviewed told us such a comprehensive evaluation could help improve program effectiveness in a number of ways, including by enabling the program to better keep up with the rapid pace of technological change and by re-examining eligibility requirements that limit the flexibility of states to consider other types of projects beyond engineering. Also, most state DOT officials we interviewed agreed that education and enforcement efforts are crucial to further improving safety, as did 8 out of 10 other stakeholders we spoke to, as well as officials from the Volpe Center and NTSB. However, according to FHWA officials, those project types are not allowed under the Section 130 Program’s requirements. The officials said FHWA has partnered with FRA and NHTSA on research efforts, such as driver-behavior studies, to inform grade-crossing safety issues. However, the officials said that FHWA has not conducted a program evaluation of the Section 130 Program to consider whether the program’s funding and other requirements allow states to adequately address ongoing safety issues such as driver behavior. FHWA officials said that there is no federal requirement for them to conduct such a program evaluation. We have previously reported that an important component of effective program management is program performance assessment, which helps establish a program’s effectiveness—the extent to which a program is operating as it was intended and the extent to which it achieves what the agency proposes to accomplish. This type of evaluative information helps the executive branch and congressional committees make decisions about the programs they oversee. Assessing program performance includes conducting program evaluations, which are individual systematic studies that answer specific questions about how well a program is meeting its objectives.
In addition, federal internal-control standards state that management should identify, analyze, and respond to significant changes in a program’s environment that could pose new risks. FHWA officials said the fact that crashes and fatalities have held steady while the volume of train and vehicle traffic has increased is an indication that grade-crossing safety has continued to improve. However, specific to fatalities per million train-miles, FHWA’s 2018 biennial report shows this rate to be fairly constant since 2009. As noted previously, FRA expects train and traffic volumes to continue to increase and has expressed concern that grade-crossing crashes and fatalities may also increase. Without conducting a program evaluation, FHWA cannot ensure that the Section 130 Program is achieving one of the national goals of the federal-aid highway program: to reduce fatalities and injuries. In addition, it is difficult to see how FHWA, in its biennial reports to Congress, could make informed recommendations for future program implementation without conducting a program evaluation to assess, among other things, whether program requirements first established some four decades ago continue to reduce fatalities and injuries. We note that as part of a program evaluation, some changes that FHWA, working with FRA, identifies as potentially having merit to improve the program’s effectiveness could require a statutory change. The continued number of crashes and fatalities at grade crossings with devices intended to warn of a train’s presence calls into question whether the Section 130 Program is structured to help states continue making progress toward the national goal to reduce fatalities and injuries. An evaluation of the program’s requirements could help determine whether Congress should consider better ways to focus federal funds to address the key factor in crashes—risky driver behavior.
An FHWA program evaluation could also help determine whether, for example, states could more strategically target emerging safety problems if changes were made to the types of projects eligible for funding under the Section 130 Program. FRA’s new grade-crossing inspectors are meant to increase the effectiveness of FRA’s rail-safety oversight activities, and accordingly, these FRA inspectors, along with FRA researchers, may be well positioned to help FHWA evaluate potential changes to improve the effectiveness of the Section 130 Program. The Administrator of FHWA, working with FRA, should evaluate the Section 130 Program’s requirements to determine whether they allow states sufficient flexibility to adequately address current and emerging grade-crossing safety issues. As part of this evaluation, FHWA should determine whether statutory changes to the program are necessary to improve its effectiveness. (Recommendation 1) We provided a draft of this report to DOT for review and comment. In written comments, reproduced in appendix II, DOT concurred with our recommendation. DOT also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, the Administrator of the Federal Highway Administration, and the Administrator of the Federal Railroad Administration. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
This report examines (1) what has been the focus of the Federal Railroad Administration’s (FRA) grade-crossing-safety research, (2) how states select and implement grade-crossing projects and what railroad- and state-reported data are available from FRA to inform states’ decisions, and (3) the challenges states reported in implementing and assessing projects and the extent to which the Federal Highway Administration (FHWA) assesses the program’s effectiveness. The scope of this work focused on the nation’s more than 128,000 public grade crossings. We did not include private grade crossings, as states can only use Railway-Highway Crossings Program (commonly referred to as the Section 130 Program) funds to improve safety at public grade crossings. While FRA provides safety grants to address rail issues, including for grade-crossing projects, we focused our work on the Section 130 Program because it is the primary source of federal funding directed at grade-crossing-safety improvement. For each objective, we reviewed pertinent statutes and FHWA and FRA regulations and documents; interviewed FHWA and FRA program officials in headquarters; and conducted in-depth interviews with a non-generalizable sample of organizations that included officials from 4 freight and passenger railroads, 12 state agencies from 8 states, 6 FRA regional offices, and 8 FHWA state division offices. We also spoke with representatives from relevant associations and officials from NTSB and the Volpe Center. We selected these organizations based on our initial background research, prior work, and input from other stakeholders, among other things. See the paragraph below for additional selection details and table 5 for a complete list of organizations we spoke with. We selected eight states as part of our non-generalizable sample for interviews. These states included Arizona, California, Florida, Illinois, Missouri, New Jersey, North Carolina, and Pennsylvania.
The states were selected to include a mix of state experiences based on a variety of factors, including the number of grade crossings and crashes at those crossings, and the amount of Section 130 Program funding they received. Specifically, we selected four states from those in the top 25 percent of all states in terms of their number of grade crossings and the amount of Section 130 Program funds they received. We selected the other four states to include a mix of these factors. We also considered geographical diversity and recommendations from FRA and FHWA officials. Within these eight states, we conducted in-depth interviews with FHWA division staff, FRA regional staff, and state officials. A variety of state agencies administer the Section 130 Program within their state; the state officials we spoke with from our eight selected states worked for agencies such as state departments of transportation, corporation commissions, and public utility commissions. We also spoke with a non-generalizable sample of four railroads: Amtrak, CSX, Norfolk Southern, and Sierra Northern. We selected railroads based on a variety of factors including geographic location and stakeholder recommendations. We also conducted additional work related to each of the objectives. To describe the focus of FRA’s grade-crossing-safety research, we examined FRA research aimed at understanding the causes of grade- crossing crashes and identifying potential improvements and described FRA efforts to test new approaches that could improve safety. We did not assess the quality of FRA’s research, as that was beyond the scope of this engagement. Instead, we described the nature of the research. We also spoke with FRA research and development staff, Volpe researchers, and state partners about this work. 
To describe how states select and implement grade-crossing projects, and what FRA data are available to inform their decisions, we reviewed an academic study that included a literature review and interviews with state officials to describe how states select Section 130 Program projects. We spoke with the researcher and determined the study to be reliable for the purposes of our reporting objectives. We also spoke with officials from our eight selected states, FHWA division staff, and FRA regional staff, and reviewed the states’ 2017 Section 130 Program reports. As part of this objective, we also assessed the reliability of data reported for all railroads in FRA’s National Highway-Rail Crossing Inventory as of August 31, 2018. For public grade crossings that were not closed, we examined a selection of fields within the database to identify the frequency of missing data (see table 1), data anomalies (see table 2), relational errors, where two related data fields had values that were incompatible (see table 3), and when the data were last updated (see table 4). Specifically, we conducted electronic tests on the crossing inventory data to determine whether the data were within reasonable ranges, were internally consistent, and appeared complete. Before conducting our analysis, we filtered the inventory data to include only open, public, at-grade crossings. To understand FRA’s efforts to improve its crossing inventory data, we interviewed FRA regional and headquarters staff and reviewed job descriptions for FRA’s new grade-crossing inspectors. Finally, to determine the challenges states reported in implementing and assessing grade-crossing safety projects and the extent to which FHWA assesses the program’s effectiveness, we reviewed program requirements and state project data and other components from FHWA’s 2016 and 2018 Section 130 Program biennial reports to Congress.
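Checks of the kind described above—missing fields, out-of-range values, relational inconsistencies between paired fields, and stale records—can be sketched as follows. The field names, thresholds, and the relational rule are illustrative assumptions, not the actual inventory schema or the specific test definitions used in the review.

```python
from datetime import date

def check_crossing(rec, as_of=date(2018, 8, 31)):
    """Run simple data-quality tests on one inventory record (a dict)."""
    issues = []
    # Missing data: required fields that are absent or blank
    for field in ("highway_traffic", "train_traffic", "warning_device"):
        if rec.get(field) in (None, ""):
            issues.append(f"missing:{field}")
    # Range anomaly: implausible daily highway traffic count
    traffic = rec.get("highway_traffic")
    if isinstance(traffic, (int, float)) and not 0 <= traffic <= 500_000:
        issues.append("anomaly:highway_traffic")
    # Relational error: gates reported at a crossing listed as passive
    if rec.get("gates", 0) > 0 and rec.get("warning_device") == "passive":
        issues.append("relational:gates_vs_passive")
    # Staleness: record not updated in over five years
    updated = rec.get("last_updated")
    if updated is not None and (as_of - updated).days > 5 * 365:
        issues.append("stale:last_updated")
    return issues
```

Run over the filtered set of open, public, at-grade crossings, per-field issue counts of this kind are what populate frequency tables like tables 1 through 4.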
We also reviewed FHWA’s summary of fiscal year 2018 program funds provided to states and federal laws and guidance related to implementing projects and measuring performance. We interviewed state DOT officials from the eight selected states and other stakeholders on the challenges states reported in implementing and assessing projects, and FHWA and FRA officials for their perspectives on managing the program, including how FHWA measures performance and assesses program effectiveness. We compared information collected from FHWA and FRA to federal internal-control standards and criteria on program evaluation identified in our previous work. In addition, we reviewed FHWA and FRA documents designed to guide states, such as the Grade Crossing Handbook, the Manual on Uniform Traffic Control Devices, the Action Plan and Project Prioritization Noteworthy Practices Guide, and other related documents. We conducted this performance audit from November 2017 to November 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Susan A. Fleming, (202) 512-2834, Flemings@gao.gov. In addition to the individual named above, Maria Edelstein (Assistant Director); Gary Guggolz (Analyst in Charge); Steven Campbell; Tim Guinane; Ben Licht; Catrin Jones; Delwen Jones; SaraAnn Moessbauer; Malika Rice; Larry Thomas; and Crystal Wesco made key contributions to this report.
|
Crashes at highway-rail grade crossings are one of the leading causes of railroad-related deaths. According to FRA data, in 2017, there were more than 2,100 crashes resulting in 273 fatalities. Since 2009 crashes have occurred at a fairly constant rate. The federal government provides states funding to improve grade-crossing safety through FHWA's Section 130 Program. The persistence of crashes and deaths raises questions about the effectiveness of the federal grade-crossing-safety program. GAO was asked to review federal efforts to improve grade-crossing safety. This report examines: (1) the focus of FRA's grade-crossing-safety research, (2) how states select and implement grade-crossing projects and what data are available from FRA to inform their decisions, and (3) the challenges states reported in implementing and assessing projects and the extent to which FHWA assesses the program's effectiveness. GAO analyzed FRA data; reviewed FRA's, FHWA's, and states' documents; reviewed a study of states' selection of projects; and interviewed FRA and FHWA headquarters and field staff, and officials from a non-generalizable sample of eight states, selected to include a mix in the number of grade crossings and crashes, and geographic diversity. Research sponsored by the Federal Railroad Administration (FRA) has identified driver behavior as the main cause of highway-rail grade crossing crashes and that factors such as train and traffic volume can contribute to the risk of a crash. (See figure.) Over 70 percent of fatal crashes in 2017 occurred at grade crossings with gates. To meet the requirements of the federal grade-crossing program, states are responsible for selecting and ensuring the implementation of grade-crossing improvement projects. Most state DOT officials and other relevant transportation officials use local knowledge of grade crossings to supplement the results of models that rank grade crossings based on the risk of an accident. 
These states generally consider the same primary risk factors, such as vehicle and train traffic. FRA is taking steps to improve the data used in its model to help states assess risk factors at grade crossings. For example, FRA's grade-crossing inspectors will review and identify issues with railroad- and state-reported inventory data. FRA is currently developing guidelines, which it plans to finalize by the end of 2018, to implement these inspections as it has for other types of FRA inspections. Officials we spoke with in eight states reported challenges in pursuing certain types of projects that could further enhance safety, in part because of federal requirements. While safety has improved, many crashes occur at grade crossings with gates, and officials said there could be additional ways to focus program requirements to continue improving safety. States' and the Federal Highway Administration's (FHWA) reporting focuses on the program's funding and activity, such as the number and types of projects, yet the low number of crashes makes it difficult to assess the effectiveness of projects in reducing crashes and fatalities. FHWA reports the program has been effective in reducing fatalities by about 74 percent since 1975. However, since 2009, annually there have been about 250 fatalities—almost one percent of total highway fatalities. FRA expects future crashes to grow, in part, due to the anticipated increase in rail and highway traffic. An evaluation of the program should consider whether its funding and other requirements allow states to adequately address ongoing safety issues. FHWA officials said they are not required to perform such evaluations. GAO has previously reported on the importance of program evaluations to determine the extent to which a program is meeting its objectives. An evaluation of the program could lead FHWA to identify changes that could allow states to more strategically address problem areas. 
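The ranking models mentioned above can be illustrated with a simplified exposure-based score: the product of daily train and vehicle traffic, adjusted for warning-device type and recent crash history. This is a stand-in for the actual accident prediction models states use, whose factors and weights differ; the weights here are illustrative assumptions only.

```python
def risk_score(crossing):
    """Simplified risk score for one grade crossing (illustrative weights)."""
    exposure = crossing["daily_trains"] * crossing["daily_vehicles"]
    score = exposure ** 0.5                      # diminishing returns on raw exposure
    device_weight = {"passive": 2.0, "lights": 1.5, "gates": 1.0}
    score *= device_weight[crossing["warning"]]  # less protection -> higher weight
    score *= 1 + crossing.get("crashes_5yr", 0)  # recent crash history raises priority
    return score

def rank_crossings(crossings):
    # Highest-risk first: candidates for Section 130 Program funding
    return sorted(crossings, key=risk_score, reverse=True)
```

States then apply local knowledge on top of a ranking like this, which is the supplementing step the report describes.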
GAO recommends that FHWA evaluate the program's requirements to determine if they allow states the flexibility to address ongoing safety issues. The Department of Transportation concurred with GAO's recommendation.
|
As we have previously reported, transportation systems and facilities are vulnerable and difficult to secure given their size, easy accessibility, large number of potential targets, and proximity to urban areas. TSA’s mission is to protect the nation’s transportation systems by providing effective and efficient security to ensure freedom of movement for people and commerce. Accordingly, TSA is responsible for managing vetting and credentialing programs to ensure that individuals who transport hazardous materials or have unescorted access to secure or restricted areas of transportation facilities at maritime ports and TSA-regulated airports do not pose a security threat. In order to carry out this responsibility, TSA conducts background checks—known as security threat assessments—on individuals seeking an endorsement, credential, access, and/or privilege (hereafter called a credential). Specifically, TSA reviews applicant information and searches government databases, such as criminal history records from federal, state, and local sources in the Federal Bureau of Investigation’s National Crime Information Center database and the Terrorist Screening Database, which is the federal government’s consolidated terrorist watchlist. This information is used to determine whether the applicant has known ties to terrorism and whether the applicant may be otherwise precluded from obtaining a credential based on his or her immigration status and criminal history, among other factors. If TSA determines that an applicant does not pose a security threat, a credential may be supplied by an issuing entity. If it determines an applicant should be denied, the agency issues a preliminary determination of ineligibility letter to the applicant. The applicant may seek redress by appealing the determination or requesting a waiver. TSA’s security threat assessments support over 30 credentialing programs in the maritime, surface, and aviation transportation segments.
The largest programs include the Transportation Worker Identification Credential program for maritime workers, Hazardous Materials Endorsement program for commercially licensed drivers, the Aviation Worker program, and TSA Pre® for travelers at airport checkpoints. According to TIM program officials, these transportation programs are collectively estimated to have processed about 12.8 million enrollments by October 2017. Table 1 describes the largest transportation credentialing programs, by segment, and purpose of each. TSA’s legacy IT systems that are currently used to help conduct its security threat assessment and credentialing functions are an aggregation of stove-piped solutions that were developed over a period of time to support individual transportation screening programs. These systems are duplicative and lack needed sophistication to effectively detect, for example, if an individual is attempting to gain access to multiple facilities across different transportation programs in an effort to find any successful entry point. Early detection of this type of threat is difficult and time consuming because many aspects of the current systems are not fully automated. Additionally, we and the DHS Office of Inspector General (OIG) have previously reported numerous shortfalls with TSA’s security threat assessment and credentialing systems. We reported in 2011 that the demand for security threat assessments is expected to continue to grow and the existing credentialing systems will not be able to accommodate this growing enrollment demand. In July 2013, we reported on functional limitations and technical problems with TSA’s legacy credentialing systems that were to be addressed by the TIM system. These limitations included the inability to run reports to measure TSA response times to applicants, track adjudication of cases, and address case workload backlogs. We also reported on delays in processing new cases. 
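The cross-program "credential shopping" described above is hard for stove-piped systems to catch, but becomes a simple grouping problem once applications from all programs sit in one store: group them by a normalized biographic key and flag identities appearing across multiple programs. The field names and matching key below are illustrative assumptions; a real system would also match on biometrics and use fuzzier entity resolution.

```python
from collections import defaultdict

def flag_cross_program_applicants(applications):
    """Return biographic keys that applied under more than one program."""
    programs_by_person = defaultdict(set)
    for app in applications:
        # Naive normalization of name and date of birth; illustrative only
        key = (app["name"].strip().lower(), app["dob"])
        programs_by_person[key].add(app["program"])
    # Keep only identities seen in two or more programs
    return {k: v for k, v in programs_by_person.items() if len(v) > 1}
```

With the legacy per-program systems, the equivalent check would require manual correlation across databases, which is why early detection is described as difficult and time consuming.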
We made recommendations to address these issues and DHS agreed with our recommendations. DHS has taken several actions to implement the recommendations, such as establishing a process for developing accurate workload projections and hiring additional adjudicators. In June 2015, DHS’s OIG reported on issues with TSA’s lack of continuous vetting once a credential was issued, referred to as recurrent vetting. For example, the OIG reported on the need for recurrent vetting of aviation workers. Specifically, it found that TSA did not have effective controls in place for ensuring that aviation workers had not committed crimes that would disqualify them from having unescorted access to secure areas of airports, and that they had lawful immigration status and were authorized to work in the United States. Instead, TSA depended on the commercial airports and air carriers to verify criminal histories of workers who already hold credentials, and on the credential holders themselves to report disqualifying crimes to the airports where they worked. The DHS OIG recommended that TSA pilot the Federal Bureau of Investigation’s Rap Back program and take steps to institute recurrent vetting of criminal histories at all commercial airports. TSA concurred with the recommendation and stated that it planned to initiate a pilot Rap Back program to help ensure full implementation across all eligible TSA-regulated populations in the future. In September 2016, DHS’s OIG reported that, although TSA required Transportation Worker Identification Credential cardholders to self-report to the administration and surrender their card when charged with a disqualifying offense, this self-reporting occurred only once between 2007 and 2016.
The report also stated that TSA was testing two methods for incorporating recurrent vetting into its credentialing programs—the Federal Bureau of Investigation’s Rap Back program to check for criminal violations and the use of DHS’s Automated Biometric Identification System to check for both criminal and immigration violations. However, TSA’s plans did not include a method for determining the best approach, and the OIG reported that this would impede TSA’s ability to implement recurrent vetting successfully and efficiently. Accordingly, the OIG recommended that TSA establish measurable and comparable criteria to use in evaluating and selecting the best criminal and immigration recurrent vetting option, and TSA concurred with this recommendation. Also in September 2016, the DHS OIG reported that the background checks for the Transportation Worker Identification Credential program were not as reliable as they could be. For example, the OIG found that TSA did not have processes in place to ensure the proper separation of duties for adjudicators, who had the ability to assign, review, and perform quality assurance on the same case. The OIG also found missing supervisory review controls in the terrorism vetting process. Accordingly, the OIG recommended that TSA identify and implement additional internal controls and quality assurance procedures; TSA agreed with the recommendation. In response, TSA planned to improve the TIM system by adding a quality assurance component in which the system would automatically select cases for senior adjudicators to review and incorporate this review into the overall reporting and monitoring activities. The TIM system is intended to address the shortfalls identified in these prior reports by providing a modern and centralized end-to-end credentialing system.
The system is also intended to provide counterterrorism and trend analytic capabilities to help identify unusual activities (e.g., credential shopping and using multiple aliases) across the entire credentialing process and all transportation populations supported by TSA’s security threat assessments. In addition, the system is expected to enable automated recurrent vetting of individuals against criminal and immigration databases to ensure that a credential or endorsement is revoked if an individual commits a disqualifying act. The planned credentialing process that is to be supported by the TIM system includes:

Registration and enrollment: Individuals seeking a credential or endorsement under one of the transportation programs supported by the system are expected to be able to apply for a security threat assessment at a Universal Enrollment Center or via the system’s online portal. The biographic and biometric information collected from the applicant is to be received and processed by the system.

Eligibility vetting and risk assessment: The system is to conduct automated vetting of the applicant’s information against criminal, immigration, and terrorism watchlists to determine the security risk associated with allowing access privileges based on the criteria for the credential or endorsement that the individual is seeking to obtain. If the results return a flag for a potentially disqualifying factor, the applicant’s case is to be sent for adjudication. TSA adjudicators are to use the system to review and adjudicate cases that did not pass automated vetting by comparing the applicant’s information to the criteria for the credential or endorsement that the individual is seeking to obtain. The adjudicators are to determine the applicant’s eligibility for the credential or endorsement, and approve or deny the individual’s application.
Issuance: When an applicant is approved through eligibility vetting or adjudication, the system is to notify the applicant of approval and provide instructions on how to receive the credential, which is to be activated by the system and supplied by the issuing entity. The applicant also is to be able to log in to the online portal to view the status of the application.

Verification and use: Use of the credential in secured areas is to be verified, including determining that the credential is authentic, that the individual is the correct recipient of the credential, and that the credential’s status is valid (not revoked or expired).

Revocation and expiration: The system is expected to conduct subsequent automated recurrent vetting of previously approved individuals against criminal, immigration, and terrorist databases on an ongoing basis. If, as a result of recurrent vetting or self-reporting, there is new information indicating that an applicant’s credential should be revoked, the system is to alert the adjudicators, who are then to determine if revocation is needed. The system is to prompt credential expiration at the end of a specified period of time.

Redress or waiver: An applicant who is denied a credential is to be able to apply to TSA to either appeal the decision, to include providing documentation to prove that he/she is eligible, or request a waiver from having to meet the eligibility criteria.

Trend analytics: The system is to allow TSA’s Office of Intelligence and Analysis users to select from a standardized suite of analysis tools that would allow them to identify unusual activities across transportation populations. A key objective would be to identify through analysis those adversaries and terrorists who may attempt to hide behind multiple personas and aliases.

Figure 1 provides an overview of the intended future credentialing process which the TIM system is expected to support.
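The staged credentialing process above can be illustrated as a simple routing function: clean automated-vetting results proceed toward issuance, while any flag sends the case to an adjudicator, and recurrent vetting can later route an approved holder back to adjudication. This is a minimal, hypothetical sketch; the state names, record fields, and functions are invented for illustration and are not TSA's actual design.

```python
from enum import Enum, auto

class Status(Enum):
    """Illustrative application states (invented, not TSA's actual model)."""
    ENROLLED = auto()      # biographic/biometric data captured
    ADJUDICATION = auto()  # manual review of flagged cases
    APPROVED = auto()      # credential may be issued and activated
    DENIED = auto()
    REVOKED = auto()

def vet(application: dict) -> Status:
    """Automated eligibility vetting: any flag for a potentially
    disqualifying factor routes the case to adjudication."""
    return Status.ADJUDICATION if application.get("flags") else Status.APPROVED

def recurrent_vet(new_flags: list) -> Status:
    """Ongoing vetting of an approved holder: new derogatory information
    alerts adjudicators, who decide whether revocation is needed."""
    return Status.ADJUDICATION if new_flags else Status.APPROVED

# Example: a clean applicant versus one with a criminal-history flag.
print(vet({"name": "applicant-1", "flags": []}))            # Status.APPROVED
print(vet({"name": "applicant-2", "flags": ["crim-hit"]}))  # Status.ADJUDICATION
```

In this sketch, denial, revocation, redress, and waiver outcomes would be decisions recorded by adjudicators rather than automated transitions, mirroring the report's description that adjudicators, not the system, make the final eligibility determination.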
TIM program officials decided to adopt an Agile software development approach—a type of incremental development—which calls for the rapid delivery of software in small, short increments rather than in the typically long, sequential phases of a traditional waterfall software development approach. This decision is consistent with OMB’s guidance as specified in its IT Reform Plan, as well as the legislation commonly referred to as the Federal Information Technology Acquisition Reform Act. Agile emphasizes early and continuous software delivery, as well as using collaborative teams and measuring progress with working software. Figure 2 provides a depiction of software development using the Agile approach compared to a waterfall approach. The Agile approach significantly differs in several ways from traditional waterfall software development. Table 2 highlights major differences between the Agile and waterfall software development approaches. Additionally, Agile practices integrate planning continuously throughout the life-cycle. Although Agile requires some high-level, up front planning, in general, planning in Agile focuses on the near term of the next few software releases. Planning sessions are conducted to support each release, iteration, and every work day. For example, development teams have daily meetings, where the team members discuss what they did yesterday and what they plan to do that day. Frequent planning is aimed at ensuring the program is delivering the needed capabilities to the end users. As we have previously reported, numerous frameworks are available to Agile practitioners to guide their Agile software development activities. Scrum is one common framework, which is widely used in the public and private sectors and its terminology is often used in Agile discussions. 
The following are key Scrum terminology and concepts:

Product owners represent the end user community and have the authority to set business priorities, make decisions, and accept completed work.

Scrum iterations (also called sprints) are where development teams build a piece of working software during a short, set period of time (e.g., 2 weeks). A collective set of sprints is bundled into a software release.

Sprint teams (or development teams) conduct the Agile software development and testing work. These teams collaborate with minimal management direction, often co-located in work rooms. They meet daily and post their task status visibly, such as on wall charts.

Scrum masters, similar to project managers, are responsible for removing impediments to the sprint teams’ ability to deliver the product goals and deliverables.

User stories convey the customers’ requirements at the smallest and most discrete unit of work that must be done to create working software. Each user story is assigned a level of effort, called story points, which is a relative unit of measure used to communicate complexity and progress between the business and development sides of the project. To ensure that the product is usable at the end of every iteration, teams adhere to an agreed-upon definition of what constitutes acceptable, completed work.

Backlogs are lists of requirements, such as user stories, to be addressed by working software. If new requirements or defects are discovered, these can be stored in the backlog to be addressed in future iterations.

Velocity is a metric used to track the rate of work completed, based on the number of story points completed or expected to be completed in an iteration (i.e., sprint) or release. For example, if a team completed 100 story points during a 4-week iteration, the velocity for the team would be 100 story points every 4 weeks.
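The velocity metric described above is straightforward to compute; the sketch below uses invented sprint data for illustration.

```python
def velocity(completed_points: list[int]) -> float:
    """Average story points completed per iteration (sprint), used to
    forecast how much backlog work fits into future iterations."""
    return sum(completed_points) / len(completed_points)

# Invented example: three 4-week iterations at 100, 90, and 110 points.
print(velocity([100, 90, 110]))  # 100.0 story points per 4-week iteration
```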
Another framework, referred to as the Scaled Agile Framework (SAFe), is a governance model that organizations can use to align and coordinate product delivery across modest to large numbers of Agile software development teams. The framework is intended to be applied to several organizational levels, including the development team level, the program level, and the portfolio level. It is also intended to provide a scalable and flexible governance framework that defines roles, artifacts, and processes for Agile software development across all levels of an organization. DHS has sought to establish Agile software development as the preferred method for acquiring and delivering IT capabilities. Specifically, in February 2016, the DHS Under Secretary for Management initiated an Agile software development pilot to improve the execution and oversight of the department’s IT acquisitions. The Under Secretary for Management selected five DHS programs that were in various stages of the acquisition life-cycle, including the TIM program, to be part of the pilot. As part of this pilot initiative, DHS established integrated product teams designed to support each of the five programs in their efforts to adopt Agile practices. These teams were directed to focus on effectively planning and executing the pilot programs, as well as developing appropriate documentation to support program execution. According to the Under Secretary for Management, the department plans to use lessons learned from the pilots to develop and update policies and procedures for executing the pilot programs and future IT acquisitions. As of May 2017, department officials had not determined a completion date for the pilot. Additionally, DHS established a headquarters-level Agile team intended to collaborate across the department on improvements to policy, governance, and acquisition guidance.
This group is intended to support Agile delivery; codify and publicize process improvement artifacts generated by the program-level integrated product teams; and eliminate redundancies and conflicting guidance so that oversight groups speak with one voice, reducing time through the acquisition process. In addition to the use of Agile software development principles, the TIM program is subject to the department’s oversight framework. Specifically, the program is to adhere to DHS’s acquisition policy, including its systems engineering life-cycle framework, which is intended to support efficient and effective delivery of IT capabilities. The Under Secretary for Management serves as the decision authority for the program, and is responsible for overseeing adherence to DHS’s acquisition policies for the department’s largest acquisition programs (i.e., those with life-cycle cost estimates of $1 billion or more). The Under Secretary for Management is supported by two offices within the department. The first of these offices—the Office of Program Accountability and Risk Management (PARM)—is responsible for DHS’s overall acquisition governance process. PARM is responsible for, among other things, periodically conducting program health assessments to evaluate acquisition programs, in terms of a program’s management, resources, planning and execution activities, requirements, cost and schedule, and how these factors are impacting a program’s ability to deliver a capability. The other key supporting office—the DHS Chief Information Officer (CIO)—is responsible for, among other things, setting departmental IT policies, processes, and standards. The CIO is also responsible for ensuring that acquisitions comply with the department’s IT management processes, technical requirements, and the approved enterprise architecture. 
Within the Office of the Chief Information Officer (OCIO), the Enterprise Business Management Office is to ensure that the department’s IT investments align with its missions and objectives. As part of its responsibilities, this office periodically assesses investments to gauge how well they are performing through a review of program risk, human capital, cost and schedule, and requirements—referred to as the CIO’s program health assessment. According to the CIO, the Chief Technology Officer, who is responsible for leading the development of IT and standards across the department and for managing the Agile pilot initiative, offers guidance and assistance to programs to help improve their execution. In addition, the Director of the Office of Test and Evaluation is to provide oversight of components’ independent test and evaluation activities. The DHS Acquisition Review Board is chaired by the Under Secretary for Management and is made up of many executive-level members, including the CIO, the Executive Director of the Office of PARM, and the Chief Procurement Officer. The board is to meet periodically to oversee programs’ business strategies, resources, management, accountability, and alignment to strategic initiatives. Additionally, the department has established executive steering committees, which generally are composed of component and DHS executive-level members, such as the component CIO and Chief Financial Officer, as well as the DHS Chief Technology Officer and the Executive Director of the Office of PARM. The committees are to provide governance, oversight, and guidance to programs and their related projects and initiatives to help ensure successful development and operations. Figure 3 shows the organizational structure of the key DHS organizations with IT acquisition management responsibilities. The TIM program office resides within the Mission Operations component of TSA’s Office of Information Technology.
The expected users of the TIM system come from multiple offices under the Office of Intelligence and Analysis, including the Security Threat Assessment Operations office, which is responsible for conducting the security threat assessments, and the Program Management office, which is responsible for managing TSA’s maritime, surface, and aviation credentialing programs. The TIM program’s Executive Steering Committee is chaired by the TSA CIO, who is the head of the Office of Information Technology, and the TSA Deputy Component Acquisition Executive, and meets quarterly. In addition, the TSA Operational Test Agent is to perform operational testing and evaluation of the TIM system’s operational effectiveness, interoperability, cybersecurity, and suitability. As previously mentioned, the DHS Director of the Office of Test and Evaluation is to provide oversight of these test and evaluation activities. Figure 4 shows the key TSA organizations involved with the TIM program. The TIM program experienced significant cost, schedule, and performance issues during its initial implementation efforts. Specifically, in May 2014, TSA launched an initial version of a commercial-off-the-shelf (COTS) system for the maritime transportation segment of TIM that was to support the Transportation Worker Identification Credential program. However, as we previously reported, in September 2014, TSA reported to DHS that the program had breached its baseline because it had significant cost, schedule, and performance issues due to, among other things, the addition of newly created credentialing programs that were added to the program’s scope, such as TSA Pre® and Chemical Facility Anti-Terrorism Standards. TIM program officials also reported in the breach remediation plan other issues that led to the breach, including different expectations between TSA officials and the contractor regarding the extent of reuse of system functionality among the different transportation segments. 
Specifically, TSA expected that it would be able to reuse more of the maritime functionality for the surface and aviation populations, while the contractor expected there to be less reuse. In January 2015, the Acting Under Secretary for Management directed program officials to suspend all planning and development efforts related to the other two segments of the program—surface and aviation—until the issues with the maritime segment could be resolved. In August 2015, program officials prepared a revised life-cycle cost estimate which increased costs to approximately $1.34 billion (about $713 million more than the original 2011 estimate), and delayed full deployment of the TIM system (to include all three transportation segments) to fiscal year 2022 (7 years later than originally planned). Also, in September 2015, the Director of the Office of Test and Evaluation issued a letter of assessment which concluded that initial operational testing of the COTS system for the maritime segment had determined that the system was not operationally effective and not operationally suitable. The Under Secretary for Management directed the DHS CIO to conduct a thorough review of the proposed plans for moving forward with the TIM program. After conducting the review, the CIO did not support the program’s proposal. As a result, in November 2015, the Under Secretary for Management continued the suspension of all developmental efforts for the surface and aviation transportation segments, but authorized the program to continue resolving problems that were identified during initial operational testing for the COTS system being used by the maritime segment. The Under Secretary for Management also directed the CIO to form and lead an integrated product team with senior TSA representatives and the TIM program office to develop a new strategy for the program. In March 2016, DHS and TSA officials completed a new strategy for delivering TIM capabilities. 
This strategy included the following changes: replace proprietary COTS applications with custom-developed applications using open source code; transition traditional, large development teams using a waterfall system development methodology to an Agile software development framework to enable rapid, incremental development and deployment; and migrate from a defined, fixed data center environment to a scalable Federal Risk and Authorization Management Program (FedRAMP) certified cloud computing environment. Also, according to the new strategy, the move from the COTS product to an open source solution is to include replacing the COTS product that had already been deployed to the maritime segment with the open source solution. It is also to include replacing the legacy systems that support the credentialing programs from the other two transportation segments (surface and aviation) with the open source solution. TSA plans to incrementally transition the program from these legacy systems between fiscal years 2018 and 2021. Additionally, the system is expected to interface with at least 19 other information systems, including the following key systems:

TSA’s Transportation Vetting System, which conducts initial and recurrent name-based matching against defined terrorist-related data sets.

The Federal Bureau of Investigation’s National Crime Information Center, which is an electronic clearinghouse of crime data.

DHS’s Automated Biometric Identification System, also referred to as IDENT, which is the central DHS-wide system for storage and processing of biometric and associated biographic information for national security, law enforcement, immigration and border management, intelligence, and other background investigative purposes.

TSA’s Secure Flight, which identifies individuals who may pose a threat to aviation or national security and designates them for enhanced screening or prohibition from boarding an aircraft, as appropriate.

The U.S.
Citizenship and Immigration Services’ Systematic Alien Verification for Entitlements, which is the primary data source for government agencies to verify legal entry and presence in the United States of a non-U.S. citizen or naturalized U.S. citizen. In April 2016, the Under Secretary for Management approved the TIM program’s new strategy and, in September 2016—almost 2 years after the program was initially suspended—the program was rebaselined to reflect the new strategy. As we previously reported, the estimated cost and schedule in the revised baseline were significantly different from those in the initial baseline. The revised baseline estimate was for about $1.27 billion (a $74 million decrease from the previous 2015 cost estimate and an overall increase of $639 million from the original 2011 estimate), with full deployment planned for 2021 (a 1-year acceleration from the previous 2015 schedule and an overall delay of 6 years from the original 2011 schedule). Table 3 shows the estimated costs and schedules reflected in the initial and revised estimates. According to TIM officials, in the program’s first 8 years (between October 2008 and September 2016), TSA spent over $280 million to deploy the initial COTS solution to the maritime segment and address critical fixes in the solution (i.e., the solution that TSA determined it needs to replace). Also during 2016, TSA began transitioning to an Agile software development framework. In September 2016, TSA issued two task orders to a contractor to provide Agile software development services. The orders were issued to the same design and development contractor that had assisted with the initial deployment of the TIM COTS solution. From October 2016 to June 2017, the program deployed four software releases using Agile software development practices. These releases were focused on, for example, deploying new functionality to the COTS system to enhance the criminal and immigration vetting data provided to adjudicators.
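The vetting interfaces listed above amount to fanning a single individual's record out to multiple external sources and aggregating any hits. The sketch below is hypothetical: the function names echo the kinds of systems the report names (criminal history, immigration status, watchlists), but the interfaces and record fields are invented placeholders, not the actual system APIs.

```python
def check_criminal_history(person: dict) -> list[str]:
    """Placeholder for a criminal-history source such as the FBI's
    National Crime Information Center (invented interface)."""
    return person.get("criminal_hits", [])

def check_immigration_status(person: dict) -> list[str]:
    """Placeholder for an immigration/biometric source such as DHS's
    IDENT (invented interface)."""
    return person.get("immigration_hits", [])

def check_watchlists(person: dict) -> list[str]:
    """Placeholder for terrorism watchlist matching (invented interface)."""
    return person.get("watchlist_hits", [])

def collect_vetting_flags(person: dict) -> list[str]:
    """Aggregate flags from all sources; any flag would route the case
    to an adjudicator for a revocation decision."""
    flags: list[str] = []
    for check in (check_criminal_history, check_immigration_status,
                  check_watchlists):
        flags.extend(check(person))
    return flags

holder = {"name": "holder-1", "criminal_hits": ["disqualifying-offense"]}
print(collect_vetting_flags(holder))  # ['disqualifying-offense']
```

Running such checks on a schedule, rather than only at enrollment, is what distinguishes recurrent vetting from the one-time background check the OIG reports criticized.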
In December 2016, between the first and second Agile releases, the program suspended new development for 1 month while officials reconsidered the order in which they would deliver functionality. Also during this period, the program developed and deployed a smaller release, which program officials referred to as a “half release.” According to program officials, this release did not produce any new capabilities and instead addressed operations and maintenance-related fixes to the deployed COTS system. After development of the second software release, at the end of March 2017, the program was reviewed by DHS’s Acquisition Review Board. The purpose was to review the results of follow-on operational testing that was performed to determine whether the program had adequately addressed the prior system and usability issues, as well as implementation of the program’s new strategy. The meeting was also intended to discuss the status of several action items from a prior review board meeting that occurred in September 2016, such as finalizing a test and evaluation master plan, conducting a cybersecurity threat assessment, updating the program’s mission needs statement and concept of operations, and establishing software development cost metrics. Implementation of the new strategy continues to be monitored by DHS and TSA oversight bodies. The new strategy for the TIM program addressed a number of major challenges that the program faced during earlier efforts to develop and deploy the system; nevertheless, key challenges remain. Specifically, of the seven major challenges that the program faced during its initial implementation of a COTS solution for the maritime segment, four challenges have been addressed related to (1) system performance and usability issues, (2) data migration issues, (3) information security testing, and (4) the inadequacy of the program’s previous hosting facility.
However, the remaining three challenges, regarding constraints with the COTS product, the significant addition of new transportation programs (e.g., TSA Pre®), and insufficient stakeholder coordination and communication, have not been fully addressed. According to DHS guidance, among other things, an operational test and evaluation examines systems for operational effectiveness. Specifically, it tests for the ability of a system to accomplish a mission when used by representative users in the expected environment. The 2015 initial operational testing of the maritime segment (supporting the Transportation Worker Identification Credential program) found that the COTS system was extremely unreliable due to frequent critical failures, and had several system performance and usability issues that limited users’ ability to execute tasks in a timely and accurate manner. These issues included lags, freezes, the need for excessive refreshes, and inadequate reporting and case management functionalities, as well as an interface that was not user-friendly. For example, the system was unable to produce accurate reports on case workload and status, so users expended significant effort creating spreadsheets to manually assign cases and manage their progress. The system was also unable to perform certain waiver functions in a timely and complete manner, which resulted in a significant backlog. The program office has addressed the issues identified in the initial operational test report by first identifying a list of over 900 action items. According to TIM officials, they validated this list with the operational test agent and prioritized the action items with the product owners (i.e., end users) to identify which were the most critical to complete. For example, critical items included addressing issues with the waiver functions, assigning cases, and issuing credentials. The program implemented the critical fixes by developing seven software releases from September 2015 to October 2016.
In January 2017, the TSA operational test agent reported that follow-on operational testing of the COTS system confirmed that the program had adequately addressed the prior system and usability issues. As a result, according to the test agent, the program’s previously deployed maritime segment of the system performed as intended. According to leading practices, IT programs should identify potential problems before they occur. This allows programs to plan and execute activities to mitigate the risk of such problems having adverse impacts on the program. When the TIM program transitioned maritime users from the legacy system to the COTS system, according to TSA’s breach remediation plan, program officials found that cleaning and properly migrating data was very difficult and time consuming because the legacy systems were old and the data mapping information was not readily evident. Program officials stated that the data migration efforts were also difficult because of the proprietary nature of the COTS product, which impacted the ability to effectively migrate data from legacy systems. The additional time needed for data migration resulted in higher than anticipated costs for the maritime transportation segment. Program officials have taken action to better account for the TIM program’s future data migration efforts. Specifically, as part of the new strategy, the officials plan to defer legacy data migration until after system deployment efforts are complete to avoid disrupting deployment efforts. The strategy focuses on the program migrating only closed case data from the legacy systems to the new system. As such, adjudicators are to continue to complete and close any security threat assessment cases opened in the legacy system even after the new system is deployed, and the new system is to only handle newly opened security threat assessment cases. 
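The closed-case-only migration strategy described above reduces, in essence, to filtering the legacy case store by disposition. A minimal sketch follows, with an invented 'status' field standing in for however the legacy systems actually record disposition.

```python
def select_for_migration(legacy_cases: list[dict]) -> list[dict]:
    """Return only cases with a final disposition (closed); open security
    threat assessment cases stay in the legacy system until adjudicators
    complete and close them. The 'status' field is an invented placeholder."""
    return [case for case in legacy_cases if case.get("status") == "closed"]

cases = [
    {"id": 1, "status": "closed"},
    {"id": 2, "status": "open"},    # remains in the legacy system for now
    {"id": 3, "status": "closed"},
]
print([c["id"] for c in select_for_migration(cases)])  # [1, 3]
```

Deferring this filter-and-copy step until development is complete, as the strategy describes, keeps migration work from competing with deployment work.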
Once final disposition of the cases in the legacy system is complete, those cases would then be included in the closed case data migration effort, which is planned to occur at the end of development, around fiscal years 2020 to 2021. In addition, the new strategy includes streamlining the data migration by using the open source solutions to help simplify the migration of data on transportation populations from the legacy systems. As a result of the new approach, the program should be better positioned to more effectively migrate data during future transitions between the legacy systems and new system. According to DHS guidance, the operational test and evaluation also should examine the department’s systems for operational suitability, which is the degree to which a system is deployable and sustainable. The evaluation is to take into account factors such as reliability, maintainability, availability, and interoperability. The 2015 initial operational testing of the COTS system found that it was not suitable because the system had significant information security weaknesses. Specifically, the system inappropriately provided users with greater access than was necessary to do their jobs, which undermined the security benefits of controlling what different users were able to do in the system based on their role. The COTS system also contained critical and high-risk system security vulnerabilities which could result in the compromise of sensitive system information, such as passwords, and could hinder TSA officials’ ability to effectively respond to incidents. Program officials took actions to address the security weaknesses previously identified. For example, in response to the findings from the initial operational testing, between September 2015 and October 2016, they developed and released fixes to the significant security weaknesses. 
In April 2017, the results of the follow-on operational testing confirmed that the COTS system was free of critical or high-risk system security vulnerabilities and that it appropriately restricted access to the system by only allowing users to access areas of the system needed to support their specific business tasks. In addition, critical steps to evaluate the system’s cybersecurity have been planned, but not yet completed. Specifically, testing for realistic cybersecurity threats, which is used to help categorize the system’s risk level in terms of confidentiality, integrity, and availability, was deferred until March 2018. Program officials decided to defer this test until new hosting environments for TIM are implemented, rather than testing TIM in an environment that will soon be retired. These environments are intended to enable the development, testing, and production of the system. However, implementation of those environments has been delayed until December 2017, and as a result, the cybersecurity vulnerability assessment has been deferred to March 2018. The identification of a time frame in which the program plans to conduct this important cybersecurity test is a step in the right direction, and avoiding additional delays will be important. According to OMB, a hosting facility or data center is to process or store data and must meet stringent availability requirements. Additionally, cloud computing can be used as a means for enabling on-demand access to shared and scalable pools of computing resources. During the initial implementation of TIM, the system was hosted in a cloud that operated out of a DHS data center (referred to as DHS Data Center 1). However, the operations and maintenance costs of the DHS cloud were higher than the program originally planned, which presented a challenge for the program.
To address this challenge, in 2016, TIM program officials decided to move the COTS system that was previously deployed (the maritime segment) out of the DHS cloud and set it up in a public cloud environment. They also planned to use the public cloud environment to develop, test, and operate the future TIM open-source based system. The officials planned to use a phased migration that consisted of first establishing hosting environments at two data centers—DHS Data Center 1 and TSA Colorado Springs Operations Center. The officials planned to use the data centers for the development, testing, and production of the future TIM open-source based system, and then eventually transition to a public or hybrid cloud once the system reaches full operational capability in fiscal year 2021. As part of this approach, officials planned to establish 10 development, testing, and production environments at these data centers from January to July 2017, so that TIM’s development teams did not have to compete for the same environments during Agile software development and testing efforts. While the program experienced delays in setting up its production environment, officials recently took actions to address these delays. Specifically, the program was expected to have a new production environment available at the TSA Colorado Springs Operations Center by March 2017; however, it was delayed until May 2017. Additionally, while migration of the TIM system to the new hosting environments was planned to occur by September 2017, it has been delayed. These delays have contributed, in part, to delays in other aspects of the program, including the execution of the cybersecurity vulnerability assessment, as well as delays in the implementation of automated testing and deployment tools (discussed later in this report). In response to these delays, program officials recently established a revised schedule in May 2017 for setting up the new environments by December 2017. 
Effectively executing against this updated schedule should help to keep the program on track with delivering these important environments and fully addressing the related challenge that the program experienced during its prior implementation efforts. According to leading practices and guidance, technology decisions should seek to enable services to scale easily and cost-effectively and to avoid vendor lock-in by, for example, using open source solutions. The benefits of using open source solutions can include improved software reliability and security through the identification and elimination of defects from continuous and broad peer review of publicly available source code that might otherwise go unrecognized by a more limited core development team; unrestricted ability to modify software source code; no reliance on a particular software vendor due to proprietary restrictions; reduced software licensing costs; and the ability to “test drive” the software with minimal costs and administrative delays in a rapid prototyping and experimentation environment. Also, according to leading practices, IT programs should ensure that their plans include how they will transition from the current state to the final state of system operations. Such planning provides a mutual understanding to relevant stakeholders of how programs are to accomplish the transition. According to TSA’s breach remediation plan, the TIM program’s use of a COTS solution led to several challenges. For example, program officials reported that the COTS product restricted their ability to make changes to the product to improve system usability and, as previously discussed, impacted the ability to effectively migrate data from legacy systems because of the proprietary COTS product. Program officials also reported that they were highly dependent on the COTS vendor to remediate compatibility issues and resolve problems, which required additional time. 
The plan also stated that the COTS product required a complex system architecture, which prevented the program from implementing modern software development and testing tools. Finally, use of the COTS product resulted in higher software licensing costs. The TIM program's new strategy is intended to address these challenges by moving away from using a COTS product to a custom-developed open source solution. However, the program's approach for developing and delivering this new solution has been in a continual state of fluctuation and implementation plans have not been defined. As such, this challenge has yet to be fully addressed. Specifically, in September 2016—after the 2-year pause in the program and completion of its extensive rebaselining effort—DHS and TSA officials decided that TSA would incrementally retire legacy systems as the transportation programs that use those systems are migrated to the open source solution; they also decided to eventually replace the COTS system that was previously deployed to support the maritime Transportation Worker Identification Credential program and migrate to the open source solution. This was to be completed using a staged approach to the migrations, with two versions of the COTS system operating alongside the open source system in the interim. However, the program lacked a plan detailing how it was going to migrate from the current legacy state to the interim environment (with the two versions of COTS plus an open source system), and then to the final state. As previously mentioned, in December 2016, new development for the TIM system was paused once again to, among other things, further evaluate the transitioning approach that was agreed to 3 months prior. Four months later (in mid-March 2017), program officials decided to continue pursuing the approach that was agreed to in September.
Subsequently, the high-level implementation schedule was revised to adjust for delays that this most recent replanning effort contributed to (other contributing factors for the delay are discussed later in this report). The revised schedule delayed deployment of the initial Pre® capabilities by 6 months and other key functionality up to 12 months. Further adding to the fluctuation in the program, at the end of March 2017, the DHS Acquisition Review Board requested that the program’s implementation approach be revised to accelerate the delivery of the TIM program’s front-end interface for adjudication and redress functions. However, it is unclear how the acceleration of the development and implementation of these functions will impact the delivery of the other planned functionality, and what tradeoffs the program will need to make. Program officials were expected to develop an overview of the acceleration efforts associated with cost, schedule, risk, and impacts on the program and deliver it to PARM and the Office of the Chief Technology Officer in August 2017. As a result, while it has been 8 months since the TIM program was rebaselined, the details of how the program will transition from its current state, to an interim state, then to the final state of full open source, have yet to be determined. This is contrary to leading practices that we have previously identified, which state that when pursuing an IT modernization effort, organizations should develop a plan for transitioning from the current to the target environment. In response to our concerns, program officials stated that after they determine how they will adjust to incorporate the Acquisition Review Board’s recent acceleration request, they will determine the details of how the program will achieve the desired final state. 
However, until the program establishes and implements specific time frames for determining key implementation details, including how it will transition the program from its current state to an interim state and to the final state, the TIM program office and the TSA and DHS oversight bodies cannot be certain about how the program will ultimately deliver its complete open source solution. According to leading practices, programs should manage changes to requirements as they evolve during the project. Programs should also ensure that planned schedules provide a realistic forecast for completion of activities, including providing reasonable slack (i.e., flexibility in the schedule). After the TIM program was initiated in 2008, it experienced significant increases in scope, such as the addition of TSA Pre® and Chemical Facility Anti-Terrorism Standards populations in 2012, which required more functionality and considerably greater processing capacity than originally planned. The TIM program was challenged to accommodate the additional work needed to incorporate these new transportation populations and capabilities; this additional work contributed, in part, to a significant breach in the program's original cost and schedule estimates. To address the challenge, the TIM program incorporated the additional functionality and processing requirements into its cost and schedule rebaseline that was approved in September 2016. In addition, the program's new strategy addressed the need to be adaptable to accommodate any new transportation populations and capabilities that could be added in the future by taking an enterprise-level approach to providing capabilities. Nevertheless, while the TIM program incorporated TSA Pre® into its new plans, the implementation schedule for the program was very compressed and program officials did not establish a schedule that realistically forecasted when activities would be completed.
Specifically, program officials planned to deploy initial TSA Pre® capabilities by May 2017 without any slack in the schedule. According to program officials, this approach was taken because TSA Pre® was considered a high priority for migrating from its legacy system in order to accommodate an expected influx of applicants during the summer months. However, because no slack was incorporated in the implementation schedule, when the program experienced schedule delays it missed the May 2017 implementation deadline, and deployment was rescheduled to November 2017. The 6-month delay in delivering initial Pre® capabilities was due to the delays discussed in the prior section associated with replanning the strategy for transitioning to the open source system, as well as delays in onboarding additional development team members and setting up new development and production environments. The delay in delivering Pre® capabilities is especially problematic because program officials have reported that the legacy system is at risk of exceeding its processing capacity. Additionally, as previously mentioned, the program's revised schedule shows the delivery dates for almost all (8 of 10) capabilities being significantly pushed back—with 2 capabilities being delayed up to 12 months. Moreover, not only were the implementation dates delayed for these efforts, the time to complete a number of these efforts was reduced by about 1 to 12 months—thus further exacerbating our concerns about unrealistic schedules. Without a schedule that realistically forecasts when activities will be completed, TIM program officials cannot ensure that they will meet the dates that they have committed to, such as when key capabilities for TSA Pre® are to be deployed.
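Slack is the margin between a forecast finish date and a committed deadline; with zero slack, as in the TSA Pre® schedule described above, any delay slips the milestone one-for-one. The idea can be sketched minimally as follows (the dates below are illustrative only, not the program's actual schedule data):

```python
from datetime import date, timedelta

def total_slack(finish_forecast: date, deadline: date) -> timedelta:
    """Slack = time between the forecast finish and the committed deadline.

    Zero or negative slack means any further delay slips the milestone.
    """
    return deadline - finish_forecast

# Hypothetical example: work forecast to finish on the deadline itself
# leaves zero slack, so a 2-week delay slips the milestone by 2 weeks.
deadline = date(2017, 5, 31)
forecast = date(2017, 5, 31)
print(total_slack(forecast, deadline))                       # no buffer
print(total_slack(forecast + timedelta(weeks=2), deadline))  # negative: milestone slips
```

A realistic schedule would carry positive slack on activities feeding high-priority milestones, so routine delays are absorbed rather than propagated to the committed date.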
According to leading practices, programs should coordinate and collaborate with relevant stakeholders (i.e., those that are affected by or in some way accountable for the outcome of the program, such as program or work group members, suppliers, and end users). Stakeholder coordination includes, for example, involving stakeholders in reviewing and committing to program plans, agreeing on revisions to the plans, and identifying risks. Programs should also identify the needs and expectations of stakeholders and translate them into end user requirements. However, during prior implementation efforts with the COTS solution, the program experienced challenges with effectively coordinating and communicating with end users. For example, according to program documentation, it had not adequately collaborated with end users in developing and implementing business requirements and conducting post-deployment user satisfaction assessments. This led to frustration among end users who felt inadequately informed and prepared for the new COTS system. To address this challenge, the TIM program's new strategy includes establishing a product owner role, which, as previously mentioned, is intended to represent the end user community and have the authority to set business priorities, make decisions, and accept completed work. The program's adoption of the Agile software development approach has also significantly increased the frequency of the program's engagement with stakeholders to define, test, and implement software releases. In addition, program officials established an organizational change management strategy in October 2016 that is intended to, among other things, focus broadly on establishing overall communication processes for program stakeholders.
This strategy identifies key steps, such as establishing a communication team and hiring a communication lead who is to oversee the development and execution of the communication action plans, establish a communication working group, and serve as chair of that working group. This group is to be responsible for developing four communication action plans for key stakeholder groups (e.g., new transportation populations, existing transportation populations, and management). These particular steps were to be completed from November 2016 through January 2017. However, while the TIM program had implemented certain steps from the organizational change management strategy as of May 2017, such as establishing a communication team, it has been delayed in implementing other steps. Specifically, the communication lead position was to be filled in November 2016. However, in March 2017 TIM program officials stated that the position had not yet been filled due to the federal hiring freeze. Additionally, because of the vacancy in the communication lead position, other key actions have been delayed, such as the development and execution of the communication action plans. Program officials have not established new time frames for completing the remaining steps outlined in the organizational change management strategy. Until these time frames are established and effectively executed, program officials will have less assurance that there will be effective communication with stakeholders and customers to ensure that the program is meeting their needs. As discussed previously, transitioning a program from waterfall development to Agile software development is a significant effort, and requires the implementation of fundamental practices to ensure that the transition is successful.
According to leading guidance, an organization transitioning to Agile software development should establish critical practices to help ensure successful adoption of the Agile approach, such as obtaining full support from leadership to adopt Agile processes, enhancing Agile knowledge, ensuring product owners are engaged with the development teams and have clearly defined roles, establishing a clear product vision, prioritizing backlogs of requirements, and implementing automated tools to enable rapid system development and deployment. While the TIM program has fully implemented the first two of these leading practices necessary to ensure the successful adoption of Agile, the remaining four practices have not been fully implemented. The gaps we have identified with the program’s implementation of Agile are concerning given that it did not follow key IT acquisition best practices when using its waterfall development approach during the program’s first 8 years and spent over $280 million on a system that TSA has determined it needs to replace. According to leading practices and guidance, an organization transitioning to Agile software development should get and maintain full support from the organization’s leadership to adopt Agile processes. Leadership support helps empower employees to continuously improve the use of Agile software development practices. DHS and TSA leadership have approved the TIM program’s adoption of Agile software development, and continue to support the transition. For example, the DHS OCIO worked closely with TSA officials in 2015 and 2016 to develop the new strategy for the program which included moving away from a waterfall development approach to Agile software development. As previously mentioned, the Under Secretary for Management selected the TIM program to be part of the DHS Agile pilot initiative in February 2016 and approved the program’s new strategy in April 2016. 
Moreover, the DHS Office of the Chief Technology Officer has continued to provide guidance and resources to the program since it adopted Agile. For example, TIM program officials stated that the DHS Chief Technology Officer added two of the office’s full-time and one part-time staff members to the TIM program. DHS and TSA officials stated that the Chief Technology Officer also provided an Agile coach to assist the TIM Program Manager about 3 days per week with establishing an Agile governance framework. Finally, DHS established an Agile Integrated Product Team that is co-chaired by PARM and the TIM Program Manager. The team meets bi-weekly to provide guidance on adopting Agile processes. As a result of the sustained leadership commitment, the program is better positioned to continuously improve its Agile practices. According to leading practices and guidance, an organization transitioning to Agile software development should ensure that the entire program team receives Agile training. This allows organizations to achieve a faster shift away from the previous culture and processes and toward a more agile culture. Toward this end, the TIM program requires its Agile contractor to ensure that development teams are trained and skilled in Agile methods, as well as in the specific Agile frameworks the program has adopted, which include the Scrum and SAFe frameworks. Additionally, the program provided initial Agile training for key program staff when it began transitioning to Agile software development. Specifically, the program provided a mandatory 2-day Agile workshop in October and December 2016 which covered basic Agile principles and the Scrum and SAFe frameworks. This training was provided to many key staff members, including contractor support staff, a contracting officer representative, and product owners. Further, in December 2016, the program began providing training on the SAFe framework to its government employees. 
This training was tailored based on different roles, such as Agile practitioner, program manager or product owner, and scrum master. The training courses were provided to key staff members, including TIM program leadership, team leads, branch managers, and scrum masters. As a result of providing Agile training, the program’s staff should be able to more effectively adopt and apply Agile software development processes. According to leading practices and guidance, an organization transitioning to Agile software development should designate a product owner who represents the user community and establishes priorities based on business needs, approves user stories and their acceptance criteria, and decides whether completed work meets the acceptance criteria and can be considered done. The product owner should also maintain close collaboration with the development teams by, among other things, providing daily support to help clarify requirements and attending key Agile meetings, such as sprint- and release-level planning sessions and system demonstrations. Additionally, roles and responsibilities among relevant stakeholders, such as the product owner, should be clearly defined and documented by the organization that is transitioning to Agile software development, so that the stakeholders are aware of their responsibilities and given the authority to perform their roles. The TIM program has two different groups of individuals that collectively share the responsibilities of product owner, and while these groups frequently engage with the development teams, program officials have not yet clearly defined the groups’ roles and responsibilities. Specifically, according to program officials, the first group consists of five product owners that represent end users and are collectively responsible for supporting all development teams, attending all Agile meetings, and prioritizing and approving planned and completed work. 
In addition, according to program officials, these five individuals are also responsible for approving user stories associated with new system functionality. The other group is referred to as the solutions team, which includes, for example, the TIM Chief Architect and Chief Engineer. According to program officials, the technical work (which is to help enable the system functionality, such as ensuring network connectivity and proper software licenses) is approved by the solutions team. Nevertheless, while program officials told us about these high-level roles and responsibilities, the program’s documentation does not clearly define them among the five product owners and the solutions team. Moreover, program officials have not defined the rules of engagement for these product owners, such as how competing priorities among different product owners should be handled. According to program officials, the lack of clearly defined roles and responsibilities has not been a problem for the program because the product owners and the solutions team regularly communicate and coordinate with each other, and thus far, have been in agreement on the priorities for the program. However, the program recently scaled up the amount of work being conducted simultaneously, which adds to the volume of the decisions that need to be made and the coordination that has to occur among the five product owners and solutions team. Thus, even if the program has not yet experienced issues with coordination, without more clarity in the roles and responsibilities among the groups that are responsible for prioritizing and accepting work, the program risks facing challenges in establishing priorities, approving user stories, and deciding whether completed work meets the acceptance criteria. According to leading practices and guidance, a program transitioning to Agile software development should have a clearly defined vision. 
This can be in the form of a product roadmap, to guide the development of the product and to help inform the planning and requirements development of Agile software development releases. Consistent with leading practices, TSA established a vision for the TIM program. This vision is articulated in multiple documents—including the Mission Needs Statement, Concept of Operations, and Operational Requirements Document. Officials also use a strategic roadmap to articulate the program’s vision, which specifies the high-level system capabilities that are to be deployed over the life-cycle of the program through 2021. However, the program’s vision has not always informed the planning of requirements for the software releases, as intended by leading practices. Specifically, the capabilities outlined in the program vision documents, such as the strategic roadmap, do not consistently map to program requirements. While 5 of the 10 capabilities in the strategic roadmap align to the high-level and large scope requirements, referred to as epics, the other half of the capabilities do not clearly align to the epics. For example, the adjudication and redress capabilities that are in the strategic roadmap do not align to any epic. In addition, the capability for public-facing portals does not clearly track to any epic. TIM officials recognized the alignment issues, and in August 2017, stated that they are in the process of establishing alignment from the program’s vision down to the lowest level of requirements, by refining the program’s vision and requirements. Officials also stated that they expected this effort to be completed by 2018. Effective execution of this effort should help ensure the program’s vision is informing requirements planning. According to leading practices and guidance, a program transitioning to Agile software development should have a prioritized list of the requirements that are to be delivered—referred to as the backlog. 
This backlog should be maintained so that the program can ensure it is always working on the highest priority requirements that will deliver the most value to the users. In addition, according to TIM Agile management documentation and program officials, the program’s backlog of features (i.e., mid-sized requirements) is expected to represent the features that are to be delivered over the next several software releases. These features are to be assigned priority levels to help determine which should be selected for development when planning the next release. According to TIM Agile management documentation, the TIM program is expected to manage a backlog for each software release, which is to identify the features and their derived user stories (i.e., the smallest and most detailed requirements) that are to be delivered in a specific release. The documentation also indicates that each feature and user story is to be assigned priority levels to determine which should be included in the development of the next release and associated sprint. Figure 5 illustrates the intended prioritization in the features, releases, and user stories backlogs. However, as of July 2017, the program’s backlogs did not contain specific prioritization levels for each of the features and user stories, as called for in DHS guidance. According to program officials, instead of assigning specific prioritization levels, they had more generally identified which features should be developed within the near-term (e.g., in the next several Agile releases). Program officials recognized that they still needed to prioritize their backlogs by assigning priority levels to all features and user stories, but they did not have a time frame for completing this effort. 
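The prioritization that DHS guidance calls for can be as simple as an explicit priority level on each feature and user story, which then mechanically determines what is selected into the next release. A minimal sketch of this, using hypothetical feature names and priorities rather than the program's actual backlog:

```python
# A backlog where lower priority number = higher priority. Feature names
# and priority levels are hypothetical placeholders, for illustration only.
backlog = [
    {"feature": "case-status dashboard", "priority": 3},
    {"feature": "fee payment",           "priority": 1},
    {"feature": "document upload",       "priority": 2},
    {"feature": "audit reporting",       "priority": 4},
]

def next_release(features: list, capacity: int) -> list:
    """Return the names of the top-priority features that fit in the release."""
    ranked = sorted(features, key=lambda f: f["priority"])
    return [f["feature"] for f in ranked[:capacity]]

# With capacity for two features, the two highest-priority items are selected.
print(next_release(backlog, capacity=2))
```

Without such explicit levels—the program's current state, in which features are only loosely grouped as "near-term"—there is no mechanical way to demonstrate that each release is drawing the highest-value work from the backlog.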
Without ensuring full prioritization of current and future features and user stories, the program is at risk of delivering functionality that is not aligned with the highest needs of those that are responsible for conducting security threat assessments to protect the nation's critical transportation infrastructure. According to leading practices and guidance, automating system development and deployment work and avoiding manual work is especially important for Agile programs, as it enhances the ability for rapid development and delivery of high-quality software. Specifically, a program transitioning to Agile software development should use an automated tool for managing Agile activities, such as maintaining the product backlog and tracking the status of completed work. The program should also establish automated testing and deployment capabilities to improve the quality of the system. For example, according to DHS's Agile development instruction manual, the vast majority of software defects are discovered during system integration testing, and—if automated—this testing can be run multiple times on a sprint or release in order to identify more defects sooner. In addition, automated tools can enable more efficient processes for frequently integrating computer code that is developed by different team members (e.g., hourly or daily), in order to quickly detect any code integration errors. Automation of testing can also help decrease the risk of introducing security flaws due to human error. However, program officials deferred implementation of an automated Agile program management tool and many other testing and deployment tools. Specifically, while the program had been using Agile software development practices since October 2016, the program has not used an automated management tool for tracking the status of completed work for its first three Agile software releases.
Instead, the program has used spreadsheets that require TIM program officials to manually populate and track large amounts of program status information. Program officials had planned to implement an automated management tool by October 2016, but did not do so until the end of April 2017. According to the officials, the delay occurred because they were in the process of tailoring the SAFe governance framework and the management tool needed to be customized to reflect the tailored approach. Regarding tools for testing and deployment, as of May 2017, the program was only using 4 of the 16 automated tools that program officials planned to use. These included tools that enable the management of software code development, defect tracking, and components of automated functional testing. However, the remaining 12 testing and deployment tools had not yet been implemented. These include, among others, tools that enable the automated building of software code; frequent merging of an individual piece of software code with the main code repository so that new changes are tested continuously (referred to as continuous integration); small automated tests to verify that each individual unit of code written by the developer works as intended; and installation of application patches to protect against known vulnerabilities. TIM program officials stated that these testing and deployment tools are not expected to be implemented until the new development, testing, and production environments are set up. However, as previously mentioned, the program has experienced challenges in implementing these environments. As a result, the program's use of manual processes has been time-consuming, has impeded visibility into the process, and has hindered software testing. In addition, without automated tools, program performance metrics were being manually calculated, which increases the risk of incomplete and inaccurate data.
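The payoff of the deferred test automation is easiest to see at the unit level: an automated suite reruns at no marginal cost on every sprint or code merge, so a regression surfaces immediately rather than in a late manual pass. A minimal sketch using Python's built-in unittest module (the function under test is a hypothetical stand-in, not actual TIM code):

```python
import unittest

def normalize_applicant_id(raw: str) -> str:
    """Hypothetical stand-in for a small unit of system code:
    trim surrounding whitespace and upper-case an applicant ID."""
    return raw.strip().upper()

class NormalizeApplicantIdTest(unittest.TestCase):
    """Automated checks that a build pipeline can rerun on every change."""

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(normalize_applicant_id("  ab123 "), "AB123")

    def test_leaves_clean_input_unchanged(self):
        self.assertEqual(normalize_applicant_id("AB123"), "AB123")

if __name__ == "__main__":
    unittest.main(exit=False)
```

In a continuous-integration setup, a suite like this runs automatically on each merge to the main code repository, which is how integration errors are detected within hours rather than during a late testing phase.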
While the automated Agile management tool has just been implemented, until the remainder of the automated Agile testing and deployment tools are implemented, the program is likely to continue to operate at reduced efficiency levels, and be limited in its ability to ensure product quality. According to leading practices, to ensure effective program oversight of cost, schedule, and performance, organizations should: ensure that corrective actions are identified and tracked until the desired outcomes are achieved; document relevant governance and oversight policies; monitor program performance and progress; and rely on complete and accurate data to review performance against expectations. While TSA fully implemented the first practice, the remaining three practices were not fully implemented by DHS and TSA. As a result, the effectiveness with which the governance bodies oversee and monitor the program has been limited. According to leading practices, effective program oversight includes ensuring that corrective actions are identified and tracked until the desired outcomes are achieved. In this regard, governance bodies should collect and analyze data on program risks and issues and determine corrective actions to address them and track them to completion. TSA has established a process for ensuring that corrective actions are identified and tracked. Specifically, the program has a process for identifying corrective actions and monitoring the status of these actions in its weekly program status reviews. The program also uses an automated tool to track and maintain a complete list of all actions that have been identified. As of February 2017, the list contained 89 actions and included the status of the actions—83 of which had been tracked to completion. As a result of the program having a process that can identify and track corrective actions, it is better positioned to address significant deviations in cost, schedule, and performance parameters.
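Tracking corrective actions to completion, as the program's weekly status reviews do, amounts to keeping a statused list and flagging whatever remains open. A minimal sketch (the actions and statuses shown are hypothetical, not the program's actual items):

```python
# Corrective-action log; an action stays on the open list until its
# desired outcome is achieved. Entries here are hypothetical examples.
actions = [
    {"id": 1, "description": "revise release schedule",  "status": "complete"},
    {"id": 2, "description": "staff communication lead",  "status": "open"},
    {"id": 3, "description": "update tailoring plan",     "status": "open"},
]

def open_actions(items: list) -> list:
    """Return the IDs of actions not yet tracked to completion."""
    return [a["id"] for a in items if a["status"] != "complete"]

# A weekly status review would surface these remaining open items.
print(open_actions(actions))
```

The value of such a log is less in the data structure than in the discipline: every identified deviation gets an entry, and an entry only leaves the open list when the outcome is verified.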
According to leading practices, effective program oversight includes the use of documented policies and procedures for program governance and oversight, such as reporting and control processes. These processes may include, among others, requiring programs to report on the status and progress of activities; expected or incurred program resource requirements; known risks, risk response plans, and escalation criteria; and benefits realized. Oversight and governance documentation may also include threshold criteria to use when analyzing performance, and the conditions under which a program or project would be terminated. TSA and DHS have documented selected policies and procedures for governance and oversight of the TIM program. Specifically, DHS documented procedures for its Acquisition Review Board and its Executive Steering Committee for the TIM program on how these governance bodies are to review the cost, schedule, and performance of the program. For example, according to the Committee’s charter, it is responsible for assessing the health of the program and identifying major issues and risks, utilizing a standard reporting format at oversight meetings. TSA has also documented processes for the program’s Agile milestone reviews, such as conducting workshops at the end of the release cycle to perform a system demonstration, review qualitative metrics, and promote continuous quality improvement. TSA also developed a risk management plan tailored for the Agile approach to guide TIM staff members in identifying, managing, and mitigating risks and issues impacting cost, schedule, and performance of the program. The agency also developed a test and evaluation master plan that outlines how it and DHS will conduct and oversee testing and evaluation of the program’s capabilities under the new Agile software development approach. However, TSA and DHS have not developed or finalized other key oversight and governance documents. 
Specifically, three oversight and governance policies have not been finalized or appropriately updated: the TIM program's tailoring plan for SAFe, a DHS-level oversight policy for Agile programs, and the DHS Office of the Chief Technology Officer's guidance for Agile programs to use for collecting and reporting on performance metrics. The TIM program has not updated its Systems Engineering Life Cycle Tailoring Plan (which outlines the Agile governance process and all milestone reviews that are required for planning and deploying Agile releases) to reflect changes in the way officials have reported using the SAFe governance framework. As a result, there are inconsistencies in the governance documentation. For example, the Systems Engineering Life Cycle Tailoring Plan describes four levels of governance (portfolio, value stream, program, and team), while program officials have reported omitting the value stream level from the governance framework. According to TSA officials in May 2017, they planned to update the Systems Engineering Life Cycle Tailoring Plan to reflect the revised governance framework, but they did not have a specific time frame for completing the revision. Until the TIM program fully updates its Systems Engineering Life Cycle Tailoring Plan to reflect the revised governance framework, the program lacks a clearly documented and repeatable governance process to effectively oversee the program. DHS officials stated that they plan to conduct biannual oversight reviews of the five Agile pilot programs (including TIM), instead of the annual reviews that are typically conducted for traditional waterfall development programs. According to the officials, the purpose of moving to biannual reviews is to better ensure that cost, schedule, and performance remain on track for these Agile programs.
However, officials in the Office of the Chief Technology Officer stated that DHS-level Agile governance and oversight policies and procedures have not been revised to reflect this new oversight approach because consensus among DHS leadership on related changes needs to be established before the new approach can be documented in the department's guidance. As of May 2017, officials had not specified a time frame for reaching such consensus. Until DHS leadership reaches consensus on needed oversight and governance changes, and then documents and implements associated changes, the program will continue to plan as though it is undergoing annual oversight reviews, rather than biannual reviews. As of early May 2017, officials in the Office of the Chief Technology Officer were also in the process of drafting guidance for Agile programs to use for collecting and reporting on performance metrics, but did not know when this guidance would be finalized. According to TSA officials, in the absence of complete Agile guidance, the TIM program receives support from DHS's Agile team supporting the pilot initiative, which, as specified in the team's charter, is intended to help the program (as well as the other four pilot programs) facilitate Agile software development. However, this team is not intended to perform oversight functions to ensure that the program is meeting cost, schedule, and performance targets. Thus, until the Office of the Chief Technology Officer completes guidance for Agile programs to use for collecting and reporting on performance metrics, TIM program officials may not report the most informative Agile performance metrics to oversight entities. According to leading practices, effective program oversight includes monitoring program performance and progress by comparing actual cost, schedule, and performance data with estimates in the plan and identifying significant deviations from established targets or thresholds for acceptable performance levels.
Program reviews are to be conducted at predetermined checkpoints or milestones in order to determine progress by measuring programs against cost, schedule, and performance metrics. In addition, Agile programs should be measured on, among other things, velocity (i.e., the number of story points completed per sprint or release), development progression (e.g., the number of features and user stories planned and accepted), product quality (e.g., the number of defects and unit test coverage), and user satisfaction. The TIM program management office conducts frequent and regular performance reviews and focuses on several important Agile release-level metrics. Specifically, program management officials monitor TIM's performance and progress during weekly program status review meetings and in periodic Agile reviews that are conducted at the end of each release. These reviews also include officials from the development teams and program stakeholders. The reviews focus on, among other things, velocity, progress, and product quality. They also include the status of key activities and risks impacting cost, schedule, and performance. Nevertheless, while the program management office uses performance metrics, the program has not established thresholds or targets for acceptable performance levels for these metrics. For example, program status reports showed that about 47 percent of the work that was planned to be completed in the first Agile release was accepted by the product owners. While the program appears to have been improving on this metric (74 percent was accepted in the second Agile release and 94 percent in the third), program officials have not established thresholds or targets to determine the acceptable level of performance. Program officials stated that they considered the performance in the first Agile release to be low, but they have not yet established targets or thresholds.
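An acceptance-rate check of the kind the report says is missing could look like the following sketch. The 47, 74, and 94 percent figures come from the report's first three Agile releases; the 80 percent target is purely hypothetical, since the program had not set one:

```python
def acceptance_rate(points_planned: int, points_accepted: int) -> float:
    """Share of planned work accepted by the product owners in a release."""
    return points_accepted / points_planned


def meets_target(rate: float, target: float) -> bool:
    """Flag whether a release clears the acceptance-rate target.

    The TIM program had no such target; the value passed in is illustrative.
    """
    return rate >= target


# Approximate acceptance rates from the report's first three Agile releases.
releases = {"release 1": 0.47, "release 2": 0.74, "release 3": 0.94}
TARGET = 0.80  # hypothetical threshold, not an established program target

for name, rate in releases.items():
    status = "meets" if meets_target(rate, TARGET) else "below"
    print(f"{name}: {rate:.0%} accepted ({status} the {TARGET:.0%} target)")
```

With an explicit threshold like this, an oversight body can say whether a 47 percent release is out of tolerance rather than merely observing that the number later improved.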
According to program officials, they planned to establish targets based on the capacity of work that development teams are expected to complete in a release, which can be better predicted as the teams spend more time together. However, the program has since developed three releases and continues to lack performance thresholds and targets. Until program officials establish performance thresholds or targets, oversight bodies may lack important information to ensure the program is meeting acceptable performance levels. In addition, the program management office's performance reviews have included limited information on program cost. According to TIM officials, the program manager holds weekly meetings with the contract, finance, and budget groups to review costs associated with TIM's contracts. However, management does not review or produce reports on overall life-cycle cost performance for the program or Agile software development cost performance. Program officials said they have not yet determined how best to measure cost performance in an Agile software development environment. In September 2016, the Under Secretary for Management instructed the program to collaborate with DHS's Cost Analysis Division and the headquarters-level Agile integrated product team to establish agreed-upon software development cost metrics, as well as a method for collecting and reporting on those metrics, by the end of March 2017. However, as of May 2017, this effort was still in progress. Until the TIM program begins collecting and reporting on Agile-related costs, oversight bodies will have limited information by which to monitor TIM costs. Department-level oversight bodies have focused on reviewing certain program life-cycle metrics for the TIM program. Specifically, the DHS Acquisition Review Board conducts periodic reviews of the program to monitor the program's performance and hold the program accountable.
Since the program was rebaselined in September 2016 and transitioned to Agile software development, the Acquisition Review Board has conducted one review. In addition, the Executive Steering Committee, which is chaired by the TSA CIO and Deputy Component Acquisition Executive, and includes representatives from the DHS Chief Technology Officer and PARM, reviews the program quarterly. As of July 2017, the Executive Steering Committee had conducted three reviews of the TIM program since it implemented its new development approach. These oversight bodies reviewed, for example, performance information such as comparisons of the dates that milestones were actually achieved against the planned schedule, and the burnup charts for the program (i.e., graphical representations of accumulated story points planned and completed per release). However, the Acquisition Review Board and the Executive Steering Committee have not been measuring the program against the rebaselined life-cycle costs, or against important Agile release-level metrics, which are essential for providing early indicators of issues with the program. For example, these oversight bodies did not review the program's velocity, the number of features and user stories planned and accepted, product quality, or Agile software development cost metrics. In addition, while we have previously reported that there was overlap in the DHS OCIO's and the PARM office's assessments of certain IT programs, neither of these offices assessed the TIM program's progress against key Agile performance metrics or cost performance. Specifically, the DHS OCIO and the PARM office conducted periodic (monthly or quarterly) health assessments of the program that included, among other things, schedule and system performance indicators for the entire life-cycle of the program (similar to what is used to review traditional waterfall programs).
While these metrics are useful for understanding the program's progress against the full schedule (60 months to full operational capability, or 30 Agile releases), they do not offer insight into the progress of individual Agile releases, which are deploying high-priority capabilities for the TIM program every 2 months. For example, as of April 2017, these two oversight bodies did not include Agile performance metrics that would have offered important insights into the progress of individual releases, such as velocity, progress metrics, quality metrics, post-deployment user satisfaction, or Agile software development costs. Thus, until DHS-level oversight bodies review key Agile performance and cost metrics and use them to inform management oversight decisions, the oversight bodies will be limited in their ability to obtain early indicators of any issues with the program, and to call for course correction, if needed. Recently, the TIM program also began measuring user satisfaction. Specifically, in April 2017, the DHS Acting Under Secretary for Management directed TSA's Operational Test Agent to implement a continuous evaluation dashboard based on the results from the program's third Agile release by the end of June 2017. This dashboard was to measure, among other things, post-deployment user satisfaction. TSA subsequently implemented the continuous evaluation dashboard in June 2017. Table 4 summarizes the extent to which performance metrics are reviewed by various oversight bodies. According to leading practices, effective program oversight includes relying on complete and accurate data to review program performance against stated expectations. Complete and accurate data allow oversight bodies to have transparency into the performance of programs and help them identify when course correction is needed. However, TIM's reported performance data were not always complete and accurate.
Specifically, when reporting on the velocity (i.e., the total number of story points completed per sprint and/or release across the development teams) of TIM's first release after it was deployed, program officials reported velocity inconsistently among the program's performance reports, calling into question the accuracy and completeness of the information. Because the data were being reported on a completed release, the velocity should have been reported as one consistent number that did not change. According to program officials, the reason for the inconsistent reporting was that, contrary to best practices, the program's methodology for measuring velocity was not consistent, and velocity was calculated differently each time. For example, table 5 shows three different numbers that were to represent the collective velocity across the development teams, and that officials reported to program management after the deployment of the first software release. While there was less variation in the velocity data reported after the second software release was deployed, discrepancies were still present. For example, table 6 shows the different numbers that officials reported to TIM program management after the deployment of the second software release. Program officials stated that the reason for the inconsistencies in reported velocity data was that, during the first release, they were still adapting to Agile and working to determine how best to calculate velocity. However, as shown in table 6, inconsistent data continued to occur beyond that first release. These inconsistencies call into question the completeness and accuracy of the reported velocity numbers, and raise concerns about oversight bodies' ability to hold the program accountable. For example, velocity is most useful when tracked over time to ensure consistent performance and for forecasting how quickly development teams can work through the items in a backlog.
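The forecasting use of velocity described above can be illustrated with a short sketch. The story-point figures are invented, and the calculation assumes the consistency the report found lacking: one agreed-upon velocity number per completed release.

```python
import math
from typing import List


def forecast_releases(backlog_points: int, velocities: List[int]) -> int:
    """Estimate releases remaining, given one consistent velocity figure
    per completed release (average velocity rounded up against the backlog)."""
    average = sum(velocities) / len(velocities)
    return math.ceil(backlog_points / average)


# Invented figures: one agreed-upon story-point total for each completed release.
velocities = [180, 210, 240]
remaining = forecast_releases(backlog_points=2100, velocities=velocities)
print(f"average velocity {sum(velocities) / len(velocities):.0f} points/release; "
      f"~{remaining} releases to clear the backlog")
```

If each release instead reports several conflicting velocity numbers, as tables 5 and 6 show, there is no single input to feed a forecast like this, which is why the inconsistency undermines oversight.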
However, without a complete and accurate velocity number from each release, it is difficult for oversight bodies to ensure the program is producing work at an acceptable pace to enable the program to meet its cost, schedule, and performance targets. In addition, the program had been reporting inaccurate unit test coverage data using a manual measurement approach. Specifically, from December 2016 to March 2017, program officials were reporting that, for each release, they tested every line of code, based on a manual estimate (i.e., 100 percent). However, testing each line of code manually is unrealistic because with manual tests, it is difficult to determine which function, line of code, or logic decision is executed, and which is not. As such, program officials were reporting that they were testing every line of code, even though they were unable to confirm that they were actually doing so, thus calling into question the reliability and accuracy of the data reported. In response to our concerns, program officials acknowledged that they could not confirm whether they had tested every line of code. Accordingly, program officials stopped estimating this metric manually and stated that they planned to begin measuring unit test coverage again once lines of code could be tracked using automated tools. As previously discussed, program officials stated that the testing and deployment tools are not expected to be implemented until the new development, testing, and production environments are set up. However, until the program has complete and accurate unit test code coverage data, program officials will not know if portions of its code are going untested, which could lead to undetected issues and impact the quality of the product. 
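The difference between an estimated and a measured coverage figure can be illustrated as follows. In practice an automated tool (such as coverage.py for Python) records which lines the tests actually execute; the line sets below are invented stand-ins for that tool's output:

```python
from typing import Set


def line_coverage(executable_lines: Set[int], executed_lines: Set[int]) -> float:
    """Fraction of executable lines actually exercised by the test suite."""
    if not executable_lines:
        return 0.0
    return len(executable_lines & executed_lines) / len(executable_lines)


# Invented data: a module with 100 executable lines, of which the tests hit 1-86.
executable = set(range(1, 101))
executed = set(range(1, 87))

measured = line_coverage(executable, executed)
untested = sorted(executable - executed)
print(f"measured coverage: {measured:.0%}; first untested lines: {untested[:3]}")
```

The point of instrumented measurement is exactly the set difference computed above: it names the lines that went untested, something a manual "100 percent" estimate cannot do.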
TSA’s TIM program has taken notable steps to address several of the major issues it faced during prior system development and deployment efforts, such as implementing system fixes to address critical performance and usability issues found in the maritime segment. Nonetheless, a number of significant challenges have not been fully addressed. In particular, until the TIM program establishes specific time frames for determining key implementation details, ensures its schedule provides planned completion dates based on realistic estimates, and establishes new time frames for implementing the actions identified in the strategy, it is at significant risk of repeating past mistakes and experiencing the same pitfalls as it did during its initial implementation attempts. An indication of concern is that the program is currently experiencing a delay of at least 6 months in the rebaselined schedule for delivering TSA Pre® capabilities. While the program has also taken certain steps to successfully make the transition from a waterfall development approach to Agile software development—a substantial and complex effort—TIM has not defined key roles and responsibilities, prioritized features and user stories, or implemented automated capabilities that are essential to ensuring effective adoption of Agile. The gaps we identified with the program’s implementation of Agile are concerning given that it did not follow key IT acquisition best practices when using its waterfall development approach, in which the program spent approximately 8 years and over $280 million on a system that TSA has determined it needs to replace. While selected corrective actions have been taken, until the TIM program is implemented in accordance with leading practices, the program will be putting at risk its ability to deliver a quality system that strengthens and enhances the sophistication of TSA’s security threat assessment and credentialing programs. 
In addition, while TSA and DHS have implemented certain practices for overseeing and governing the program, the lack of other practices has impeded their oversight effectiveness, including the lack of thresholds or targets for acceptable performance levels, the lack of reporting on Agile-related cost metrics, and inconsistent measuring and reporting of program velocity and unit test coverage for software releases. These gaps limit the ability of DHS oversight bodies to obtain early indicators of any issues with the program, and to call for course corrections, if needed. Further, until DHS leadership reaches consensus on needed oversight and governance changes related to Agile programs, and then documents and implements associated changes to align oversight reviews with the timing of Agile software releases, the department will not be well positioned to hold the program accountable. Moreover, until the Office of the Chief Technology Officer completes guidance for Agile programs to use for collecting and reporting on performance metrics, and DHS-level oversight bodies require the TIM program to report on key Agile performance and cost metrics and use them to inform management oversight decisions, the department will also be limited in its ability to hold the TIM program accountable and ensure that it is meeting its cost, schedule, and performance targets.

We are making the following 14 recommendations to DHS:

The TSA Administrator should ensure that the TIM program management office establishes and implements specific time frames for determining key strategic implementation details, including how the program will transition from the current state to the final TIM state. (Recommendation 1)

The TSA Administrator should ensure that the TIM program management office establishes a schedule that provides planned completion dates based on realistic estimates of how long it will take to deliver capabilities. (Recommendation 2)

The TSA Administrator should ensure that the TIM program management office establishes new time frames for implementing the actions identified in the organizational change management strategy and effectively executes against these time frames. (Recommendation 3)

The TSA Administrator should ensure that the TIM program management office defines and documents the roles and responsibilities among product owners, the solution team, and any other relevant stakeholders for prioritizing and approving Agile software development work. (Recommendation 4)

The TSA Administrator should ensure that the TIM program management office establishes specific prioritization levels for current and future features and user stories. (Recommendation 5)

The TSA Administrator should ensure that the TIM program management office implements automated Agile management, testing, and deployment tools, as soon as possible. (Recommendation 6)

The TSA Administrator should ensure that the TIM program management office updates the Systems Engineering Life Cycle Tailoring Plan to reflect the current governance framework and milestone review processes. (Recommendation 7)

The TSA Administrator should ensure that the TIM program management office establishes thresholds or targets for acceptable performance levels. (Recommendation 8)

The TSA Administrator should ensure that the TIM program management office begins collecting and reporting on Agile-related cost metrics. (Recommendation 9)

The TSA Administrator should ensure that the TIM program management office ensures that program velocity is measured and reported consistently. (Recommendation 10)

The TSA Administrator should ensure that the TIM program management office ensures that unit test coverage for software releases is measured and reported accurately. (Recommendation 11)

The Secretary of Homeland Security should direct the Under Secretary for Management to ensure that appropriate DHS leadership reaches consensus on needed oversight and governance changes related to the frequency of reviewing Agile programs, and then documents and implements associated changes. (Recommendation 12)

The Secretary of Homeland Security should direct the Under Secretary for Management to ensure that the Office of the Chief Technology Officer completes guidance for Agile programs to use for collecting and reporting on performance metrics. (Recommendation 13)

The Secretary of Homeland Security should direct the Under Secretary for Management to ensure that DHS-level oversight bodies review key Agile performance and cost metrics for the TIM program and use them to inform management oversight decisions. (Recommendation 14)

DHS provided written comments on a draft of this report, which are reprinted in appendix II. In its comments, the department concurred with all 14 of our recommendations and described actions it has planned or taken to address them. For example, with regard to recommendation 6, which calls for DHS to implement automated Agile management, testing, and deployment tools, the department stated that TSA plans to implement such tools by June 30, 2018. Additionally, for recommendation 14, the department stated that DHS intends to ensure that oversight bodies review key Agile performance and cost metrics for the TIM program by June 30, 2018. If implemented effectively, these actions should address the weaknesses we identified. The department also described recent actions that it and TSA had taken to address three of the recommendations, and requested that we consider these recommendations resolved.
Specifically, in response to recommendation 9, calling for TSA to ensure that the TIM program management office begins collecting and reporting on Agile-related cost metrics, the department stated that the program is now reporting these metrics on a monthly basis. In response to recommendation 10, calling for TSA to ensure that the program’s velocity is measured and reported consistently, the department stated that velocity is now being reported consistently and in accordance with DHS guidelines. Further, in response to recommendation 13, which calls for DHS to complete guidance for Agile programs to use for collecting and reporting on performance metrics, the department stated that the guidance had recently been published and provided to us. However, to date, we have received only draft versions of the guidance. We will work with the department to obtain finalized documentation related to the three recommendations, to determine if the recent actions fully address the recommendations. In addition to the aforementioned comments, we received technical comments from DHS and TSA officials, which we incorporated, as appropriate. We are sending copies of this report to the Secretary of Homeland Security and interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4456 or harriscc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
Our objectives were to (1) describe the Transportation Security Administration's (TSA) past implementation efforts for the Technology Infrastructure Modernization (TIM) program and its new implementation strategy; (2) determine the extent to which TSA's new strategy for the program addresses the challenges encountered during earlier implementation attempts; (3) determine the extent to which TSA has implemented selected key practices for transitioning to an Agile software development framework for the program; and (4) determine the extent to which TSA and the Department of Homeland Security (DHS) are effectively overseeing and governing the TIM program to ensure that it is meeting cost, schedule, and performance requirements. To address our first objective, we reviewed program documentation, such as initial and current acquisition program baselines, initial and current life-cycle cost estimates, acquisition decision memorandums, and program plans documenting a new strategy for implementing the program. We used the information in this documentation to summarize the program's earlier attempts to implement TIM capabilities and its new implementation strategy for delivering the program, including estimated costs, schedule, and key decisions made. We also interviewed TSA officials, including the TIM Director and Deputy Director, on the status of TIM program office efforts. To determine the extent to which the TIM program's new strategy addresses the challenges encountered during earlier implementation attempts, we reviewed documentation on the challenges the TIM program faced when it breached cost and schedule thresholds and experienced system performance issues, such as those described in initial operational test reports, the breach remediation plan, and the results of a technical evaluation of program challenges. We synthesized the information in these documents to identify a consolidated list of key challenges the program had faced.
We did not include challenges that were already being evaluated as part of other objectives, such as the use of the waterfall software development approach. We then reviewed documentation on the program's new strategy, such as plans documenting the new strategy, follow-on operational test reports, program schedules, program status reports, and identified risks. We assessed the extent to which the new strategy outlined in these documents addressed the prior challenges by comparing them against criteria identified in leading practices and guidance, such as DHS's Systems Engineering Lifecycle Guide and the Software Engineering Institute's Capability Maturity Model® Integration for Development. In addition, we conducted a site visit at the TSA Adjudication Center in Reston, Virginia. During this site visit, we observed demonstrations of the current commercial off-the-shelf system and legacy systems for TSA Pre® and Aviation Workers, and we interviewed adjudicators and supervisors on current security threat assessment processes and limitations. Further, we interviewed TSA officials, including the TIM Director and Deputy Director, on the program office's efforts to address prior challenges.
To determine the extent to which the program has implemented selected key practices for transitioning to an Agile software development framework, we identified leading practices and guidance outlined in the following sources:

GAO, Software Development: Effective Practices and Federal Challenges in Applying Agile Methods
Software Engineering Institute, Agile Readiness and Fit
TechFAR handbook
TSA Agile Scrum guidance
CMMI® for Development, version 1.3
Software Engineering Institute, Agile Metrics

After reviewing these sources, in consultation with our internal expert, we grouped the practices identified as critical to establish when transitioning to an Agile software development framework, selected the practices most relevant to the status of the program's transition, and discussed the practice areas with TSA officials. The practices included: full support from leadership to adopt Agile processes; enhancing Agile knowledge; ensuring product owners are engaged with the development teams and have clearly defined roles; establishing a clear product vision; establishing prioritized backlogs of requirements; and implementing automated tools to enable rapid system development and deployment. We reviewed program management documentation against these practices, such as Agile training records, Agile contracts, program roadmaps, backlogs, test plans, Agile release artifacts, program status reports, and identified risks. Additionally, we observed Agile release and sprint development activities at TSA facilities in Annapolis Junction, Maryland, and at a contractor's facilities in Beltsville, Maryland, and we observed a demonstration of how user stories map from high-level capabilities and are tracked through development and testing. We also interviewed TSA officials, including the TIM Director and Deputy Director and the five TIM product owners, on their efforts to transition the program to an Agile software development framework.
Further, we interviewed DHS officials, including the Chief Technology Officer, on their efforts to conduct an Agile pilot to assist programs like TIM in adopting Agile software development processes. We assessed the evidence against leading practices to determine the extent to which TSA met the practices. To determine the extent to which TSA and DHS are effectively overseeing and governing the program to ensure that it is meeting cost, schedule, and performance requirements, we identified leading practices and guidance outlined in the following sources:

TSA Agile Scrum guidance
CMMI for Development, version 1.3
Software Engineering Institute, Agile Metrics

After reviewing these sources, we grouped practices related to oversight and governance for programs using Agile software development into four key practice areas and discussed the practices with DHS and TSA officials. These areas included:

Document relevant governance and oversight policies and procedures.
Monitor program performance and progress.
Rely on complete and accurate data to review performance against expectations.
Ensure that corrective actions are identified and tracked until the desired outcomes are achieved.

To assess the extent to which TSA and DHS had addressed these key practices, we reviewed the most current program management and governance documentation as of April 2017.
Specifically, we analyzed documentation on program management processes, such as TIM’s Systems Engineering Life Cycle Tailoring Plan, TIM Agile and Technical Strategy, TIM Agile software development contract, and draft DHS Agile Acquisition Program Delivery Metrics Playbook; and artifacts from TIM’s program execution and review, such as Agile release artifacts, program status reports, contractor status reports, program schedules, life-cycle cost estimates, risk registers, TSA Executive Steering Committee reviews, DHS program health assessments, DHS Agile pilot integrated product team meetings, DHS Office of the Chief Technology Officer Agile pilot reviews, and DHS Acquisition Review Board reviews. Additionally, we interviewed TSA officials, including the TIM Director and Deputy Director, on their efforts to oversee TIM’s development. Further, we interviewed DHS officials, including the Chief Technology Officer, on their efforts to oversee the program’s Agile software development activities. We compared this evidence against leading practices to determine the extent to which TSA and DHS met the practices. To assess the reliability of the data that we used to support the findings in this report, we reviewed relevant program documentation to substantiate evidence obtained through interviews with agency officials. We determined that the data used in this report were sufficiently reliable for the purposes of our reporting objectives. We made appropriate attribution indicating the sources of the data. We conducted this performance audit from September 2016 to October 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the contact named above, the following staff made key contributions to this report: Shannin G. O’Neill (Assistant Director), Jeanne Sung (Analyst in Charge), Jennifer Beddor, Rebecca Eyler, Bruce Rackliff, and Dwayne Staten.
TSA conducts security threat assessment screening and credentialing activities for millions of workers and travelers in the maritime, surface, and aviation transportation industries that are seeking access to transportation systems. In 2008, TSA initiated the TIM program to enhance the sophistication of its security threat assessments and to improve the capacity of its supporting systems. However, the program experienced significant cost and schedule overruns and performance issues, and was suspended in January 2015 while TSA established a new strategy. The program was rebaselined in September 2016 and is estimated to cost approximately $1.27 billion and be fully operational by 2021 (about $639 million more and 6 years later than originally planned). GAO was asked to review the TIM program's new strategy. This report addresses, among other things, the extent to which (1) TSA implemented selected key practices for transitioning to Agile software development for the program; and (2) TSA and DHS are effectively overseeing the program's cost, schedule, and performance. GAO compared program documentation to key practices identified by the Software Engineering Institute and the Office of Management and Budget as being critical to transitioning to Agile and for overseeing and governing programs. The Transportation Security Administration's (TSA) new strategy for the Technology Infrastructure Modernization (TIM) program includes using Agile software development, but the program fully implemented only two of six leading practices necessary to ensure successful Agile adoption. Specifically, the Department of Homeland Security (DHS) and TSA leadership fully committed to adopt Agile and TSA provided Agile training. Nonetheless, the program had not defined key roles and responsibilities, prioritized system requirements, or implemented automated capabilities that are essential to ensuring effective adoption of Agile. 
Until TSA adheres to all leading practices for Agile implementation, the program puts at risk its ability to deliver a quality system that strengthens and enhances the sophistication of TSA's security threat assessments and credentialing programs. TSA and DHS fully implemented one of the key practices for overseeing the TIM program, by establishing a process for ensuring corrective actions are identified and tracked. However, TSA and DHS did not fully implement the remaining three key practices, impeding the effectiveness of their oversight. Specifically, TSA and DHS documented selected policies and procedures for governance and oversight of the TIM program, but they did not develop or finalize other key oversight and governance documents. For example, TSA officials developed a risk management plan tailored for Agile; however, they did not update the TIM system life-cycle plan to reflect the Agile governance framework they were using. The TIM program management office conducted frequent performance reviews, but did not establish thresholds or targets for oversight bodies to use to ensure that the program was meeting acceptable levels of performance. In addition, department-level oversight bodies have focused on reviewing selected program life-cycle metrics for the TIM program; however, they did not measure the program against the rebaselined cost or important Agile release-level metrics. TIM's reported performance data were not always complete and accurate. For example, program officials reported that they were testing every line of code, even though they were unable to confirm that they were actually doing so, thus calling into question the accuracy of the data reported. These gaps in oversight and governance of the TIM program were due to, among other things, TSA officials not updating key program management documentation and DHS leadership not obtaining consensus on needed oversight and governance changes related to Agile programs. 
Given that TIM is a historically troubled program and is at least 6 months behind its rebaselined schedule, it is especially concerning that TSA and DHS have not fully implemented oversight and governance practices for this program. Until TSA and DHS fully implement these practices to ensure the TIM program meets its cost, schedule, and performance targets, the program is at risk of repeating past mistakes and not delivering the capabilities that were initiated 9 years ago to protect the nation's transportation infrastructure. GAO is making 14 recommendations, including that DHS should prioritize requirements and obtain leadership consensus on oversight and governance changes. DHS concurred with all 14 recommendations.
The Freedom of Information Act establishes a legal right of access to government information on the basis of the principles of openness and accountability in government. Before FOIA’s enactment in 1966, an individual seeking access to federal records faced the burden of establishing a “need to know” before being granted the right to examine a federal record. FOIA established a “right to know” standard, under which an organization or person could receive access to information held by a federal agency without demonstrating a need or reason. The “right to know” standard shifted the burden of proof from the individual to a government agency and required the agency to provide proper justification when denying a request for access to a record. Any person, defined broadly to include attorneys filing on behalf of an individual, corporations, or organizations, can file a FOIA request. For example, an attorney can request labor-related workers’ compensation files on behalf of his or her client, and a commercial requester, such as a data broker who files a request on behalf of another person, may request a copy of a government contract. In response, an agency is required to provide the relevant record(s) in any readily producible form or format specified by the requester, unless the record falls within a permitted exemption that provides limitations on the disclosure of information. Various amendments have been enacted and guidance issued to help improve agencies’ processing of FOIA requests, including: The Electronic Freedom of Information Act Amendments of 1996 (e-FOIA amendments) strengthened the requirement that federal agencies respond to a request in a timely manner and reduce their backlogged requests. 
The amendments, among other things, made a number of procedural changes, including allowing a requester to limit the scope of a request so that it could be processed more quickly and requiring agencies to determine within 20 working days whether a request would be fulfilled. This was an increase from the previously established time frame of 10 business days. The amendments also authorized agencies to multi-track requests—that is, to process simple and complex requests concurrently on separate tracks to facilitate responding to a relatively simple request more quickly. In addition, the amendments encouraged online, public access to government information by requiring agencies to make specific types of records available in electronic form. Executive Order 13392, issued by the President in 2005, directed each agency to designate a senior official as its chief FOIA officer. This official was to be responsible for ensuring agency-wide compliance with the act by monitoring implementation throughout the agency and recommending changes in policies, practices, staffing, and funding, as needed. The chief FOIA officer was directed to review and report on the agency’s performance in implementing FOIA to agency heads and to Justice on an annual basis. (These are referred to as chief FOIA officer reports.) The OPEN Government Act, which was enacted in 2007, made the 2005 executive order’s requirement for agencies to have a chief FOIA officer a statutory requirement. It also required agencies to submit an annual report to Justice outlining their administration of FOIA, including additional statistics on timeliness. Specifically, the act called for agencies to adequately track their FOIA request processing information throughout the reporting year and then produce reports on that topic to comply with FOIA reporting requirements and Justice guidance for reporting. 
The FOIA Improvement Act of 2016 addressed procedural issues, including requiring that agencies: (1) make records available in an electronic format if they have been requested three or more times; (2) notify requesters that they have a minimum of 90 days to file an administrative appeal; and (3) provide dispute resolution services at various times throughout the FOIA process. This act also created more duties for chief FOIA officers, including requiring them to offer training to agency staff regarding FOIA responsibilities. The act also revised and added new obligations for OGIS, and created the Chief FOIA Officers Council to assist in compliance and efficiency. Further, the act required OMB, in consultation with Justice, to create a consolidated online FOIA request portal that allows the public to submit a request to any agency through a single website. In responding to requests, FOIA authorizes agencies to utilize one of nine exemptions to withhold portions of records, or the entire record. Agencies may use an exemption when it has been determined that disclosure of the requested information would harm an interest related to certain protected areas. These nine exemptions can be applied by agencies to withhold various types of information, such as information concerning foreign relations, trade secrets, and matters of personal privacy. One such exemption, the statutory (b)(3) exemption, specifically authorizes withholding information under FOIA on the basis of a law that: (1) requires that matters be withheld from the public in such a manner as to leave no discretion on the issue; or (2) establishes particular criteria for withholding or refers to particular types of matters to be withheld; and (3) if enacted after October 28, 2009, specifically refers to section 552(b)(3) of title 5, United States Code. 
To account for agencies’ use of the statutory (b)(3) exemption, FOIA requires each agency to submit, in its annual report to Justice, a complete listing of all statutes that the agency relied on to withhold information under exemption (b)(3). The act also requires that the agency describe for each statute identified in its report (1) the number of occasions on which each statute was relied upon; (2) a description of whether a court has upheld the decision of the agency to withhold information under each such statute; and (3) a concise description of any information withheld. Further, to provide an overall summary of the statutory (b)(3) exemptions used by agencies in a fiscal year, Justice produces consolidated annual reports that list the statutes used by agencies in conjunction with (b)(3). As previously noted, agencies are generally required by the e-FOIA amendments of 1996 to respond to a FOIA request within 20 working days. Once received, the request is to be processed through multiple phases, which include assigning a tracking number, searching for responsive records, and releasing the responsive records to the requester. Also, as relevant, FOIA allows a requester to challenge an agency’s final decision on a request through an administrative appeal or a lawsuit. Specifically, a requester has the right to file an administrative appeal if he or she disagrees with the agency’s decision on the request. Agencies have 20 working days to respond to an administrative appeal. Figure 1 provides a simplified overview of the FOIA request and appeals process. In a typical agency, as indicated, during the intake phase, a request is logged into the agency’s FOIA tracking system, and a tracking number is assigned. The request is then reviewed by FOIA staff to determine its scope and level of complexity. 
The agency then typically sends a letter or email to the requester acknowledging receipt of the request, with a unique tracking number that the requester can use to check the status of the request. Next, FOIA staff (non-custodian) begin the search to retrieve the responsive records by routing the request to the appropriate program office(s). This step may include requesting that the custodian (owner) of the record search and review paper and electronic records from multiple locations and program offices. Agency staff then process the responsive records, which includes determining whether a portion or all of any record should be withheld based on FOIA’s exemptions. If a portion or all of any record is the responsibility of another agency, FOIA staff may consult with the other agency or may send (“refer”) the document(s) to that other agency for processing. After processing and redaction, a request is reviewed for errors and to ensure quality. The documents are then released to the requester, either electronically or by regular mail. In addition, FOIA allows requesters to sue an agency in federal court if the agency does not respond to a request for information within the statutory time frames or if the requesters believe they are entitled to information that is being withheld by the agency. Further, the act requires the Office of Special Counsel (OSC) to initiate a proceeding to determine whether disciplinary action is warranted against agency personnel in cases involving lawsuits where a court has found, among other things, that agency personnel may have acted arbitrarily or capriciously in responding to a FOIA request. The act requires Justice to notify OSC when a lawsuit meets this requirement. Responsibility for the oversight of FOIA implementation is spread across several federal offices and other entities. These include Justice’s OIP, NARA’s OGIS, and the Chief FOIA Officers Council. 
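The intake-to-release workflow described above can be thought of as a simple state machine with a statutory determination clock. The following Python sketch is purely illustrative: the phase names and the 20-working-day deadline come from the report, but the class, function, and tracking-number format are hypothetical inventions, not any agency's actual system, and the deadline calculation ignores federal holidays.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Phases of a typical request, as described in the report.
PHASES = ["intake", "search", "processing", "release"]

@dataclass
class FoiaRequest:
    tracking_number: str          # assigned during the intake phase
    received: date
    phase: str = "intake"
    complexity: str = "simple"    # agencies may multi-track simple vs. complex requests

    def advance(self) -> str:
        """Move the request to the next phase, if any remain."""
        i = PHASES.index(self.phase)
        if i < len(PHASES) - 1:
            self.phase = PHASES[i + 1]
        return self.phase

def response_deadline(received: date, working_days: int = 20) -> date:
    """Statutory determination deadline: 20 working days after receipt.
    Simplified: skips weekends only, not federal holidays."""
    d, remaining = received, working_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday-Friday
            remaining -= 1
    return d

# Hypothetical request received on a Monday.
req = FoiaRequest("2016-HQFO-00001", date(2016, 10, 3))
req.advance()                           # intake -> search
print(req.phase)                        # search
print(response_deadline(req.received))  # 2016-10-31
```

A real tracking system would also record the acknowledgment letter, referrals to other agencies, and the quality-review step before release; the sketch keeps only the phase sequence and the deadline arithmetic.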
These oversight agencies and the council have taken steps to assist agencies in addressing the provisions of FOIA. Justice’s OIP is responsible for encouraging agencies’ compliance with FOIA and overseeing their implementation of the act. In this regard, the office, among other things, provides guidance, compiles information on FOIA compliance, provides FOIA training, and prepares annual summary reports on agencies’ FOIA processing and litigation activities. The office also offers FOIA counseling services to government staff and the public. Issuing guidance. OIP has developed guidance, available on its website, to assist federal agencies by instructing them in how to ensure timely determinations on requests, expedite the processing of requests, and reduce backlogs. The guidance also informs agencies on what should be contained in their annual FOIA reports to the Attorney General. The office also has documented ways for federal agencies to address backlogged requests. In March 2009, the Attorney General issued guidance and related policies to encourage agencies to reduce their backlogs of FOIA requests. In addition, in December 2009, OMB issued a memorandum on the OPEN Government Act, which called for a reduction in backlogs and the publishing of plans to reduce backlogs. Further, in August 2014, OIP held a best practices workshop and issued guidance to agencies on reducing FOIA backlogs and improving the timeliness of agencies’ responses to FOIA requests. The OIP guidance instructed agencies to obtain leadership support, routinely review FOIA processing metrics, and set up staff training on FOIA. Overseeing agencies’ compliance. OIP collects information on compliance with the act by reviewing agencies’ annual FOIA reports and chief FOIA officer reports. These reports describe the number of FOIA requests received and processed in a fiscal year, as well as the total costs associated with processing and litigating requests. Providing training. 
The office offers an annual training class that provides a basic overview of the act, as well as hands-on courses about the procedural requirements involved in processing a request from start to finish. In addition, it offers a seminar outlining successful litigation strategies for attorneys who handle FOIA cases. Preparing administrative and legal annual reports. OIP prepares two major reports yearly—one related to agencies’ annual FOIA processing and one related to agencies’ FOIA litigation and compliance. The first report, compiled from agencies’ annual FOIA reports, contains statistics on the number of requests received and processed by each agency, the time taken to respond, and the outcome of each request, as well as other statistics on FOIA administration, such as the number of backlogged requests and the use of exemptions to withhold information from a requester. The second report describes Justice’s efforts to encourage compliance with the act and provides a listing of all FOIA lawsuits filed or determined in that year, the exemptions and/or dispositions involved in each case, and any court-assessed costs, fees, and penalties. NARA’s OGIS was established by the OPEN Government Act of 2007 to oversee and assist agencies in implementing FOIA. OGIS’s responsibilities include reviewing agency policies and procedures, reviewing agency compliance, recommending policy changes, and offering mediation services. The 2016 FOIA amendments required agencies to update response letters to FOIA requesters to include information concerning the roles of OGIS and agencies’ FOIA public liaisons. As such, OGIS and Justice worked together to develop a response letter template that includes the required language for agency letters. In addition, OGIS, which is charged with reviewing agencies’ compliance with FOIA, launched a FOIA compliance program in 2014. 
OGIS also developed a FOIA compliance self-assessment program, which is intended to help OGIS look for potential compliance issues across federal agencies. The Chief FOIA Officers Council is co-chaired by the Director of OIP and the Director of OGIS. Council members include senior representatives from OMB, OIP, and OGIS, together with the chief FOIA officers of each agency, among others. The council’s FOIA-related responsibilities include: developing recommendations for increasing compliance and disseminating information about agency experiences, ideas, best practices, and innovative approaches; identifying, developing, and coordinating initiatives to increase transparency and compliance; and promoting the development and use of common performance measures for agency compliance.

Selected Agencies Collect and Maintain Records That Can Be Subject to FOIA Requests

The 18 agencies selected for our review are charged with a variety of operations that affect many aspects of federal service to the public. Thus, by the nature of their missions and operations, the agencies have responsibility for vast and varied amounts of information that can be subject to a FOIA request. For example, the Department of Homeland Security’s (DHS) mission is to protect the American people and the United States homeland. As such, the department maintains information covering, among other things, immigration, border crossings, and law enforcement. As another example, the Department of the Interior’s (DOI) mission includes protecting and managing the Nation’s natural resources and, thus, providing scientific information about those resources. Table 1 provides details on each of the 18 selected agencies’ mission and the types of information they maintain. The 18 selected agencies reported that they received and processed more than 2 million FOIA requests from fiscal years 2012 through 2016. Over this 5-year period, the number of reported requests received fluctuated among the agencies. 
In this regard, some agencies saw a continual rise in the number of requests, while other agencies experienced an increase or decrease from year to year. For example, from fiscal years 2012 through 2014, DHS saw an increase in the number of requests received (from 190,589 to 291,242), but in fiscal year 2015, saw the number of requests received decrease to 281,138. Subsequently, in fiscal year 2016, the department experienced an increase to 325,780 requests received. In addition, from fiscal years 2012 through 2015, the reported numbers of requests processed by the selected agencies showed a relatively steady increase. However, in fiscal year 2016, the reported number of requests processed by these agencies declined. Figure 2 provides a comparison of the total number of requests received and processed in this 5-year period. Among other things, the FOIA Improvement Act of 2016 and the OPEN Government Act of 2007 call for agencies to (1) update response letters, (2) implement tracking systems, (3) provide FOIA training, (4) provide required records online, (5) designate chief FOIA officers, and (6) update and publish timely and comprehensive regulations. As part of our ongoing work, we determined that the 18 selected agencies included in our review had implemented the majority of the six FOIA requirements evaluated. Specifically, all 18 agencies updated response letters and implemented tracking systems, 15 agencies provided required records online, and 12 agencies designated chief FOIA officers. However, only 5 of the agencies published and updated their FOIA regulations in a timely and comprehensive manner. Figure 3 summarizes the extent to which the 18 agencies implemented the selected FOIA requirements. Beyond these selected agencies, Justice’s OIP and OMB also had taken steps to develop a government-wide FOIA request portal that is intended to allow the public to submit a request to any agency from a single website. 
The 2016 amendments to FOIA required agencies to include specific information in their responses when making their determinations on requests. Specifically, agencies must inform requesters that they may seek assistance from the FOIA Public Liaison; file an appeal to an adverse determination within a period of time that is not less than 90 days after the date of such adverse determination; and seek dispute resolution services from the FOIA Public Liaison of the agency or OGIS. Among the 18 selected agencies, all had updated their FOIA response letters to include this required information. Various FOIA amendments and guidance call for agencies to use automated systems to improve the processing and management of requests. In particular, the OPEN Government Act of 2007 amended FOIA to require that federal agencies establish a system to provide individualized tracking numbers for requests that will take longer than 10 days to process and establish telephone or Internet service to allow requesters to track the status of their requests. Further, the President’s January 2009 Freedom of Information Act memorandum instructed agencies to use modern technology to inform citizens about what is known and done by their government. In addition, FOIA processing systems, like all automated information technology systems, are to comply with the requirements of Section 508 of the Rehabilitation Act (as amended). This act requires federal agencies to make their electronic information accessible to people with disabilities. Each of the 18 selected agencies had implemented a system that provides capabilities for tracking requests received and processed, including an individualized number for tracking the status of a request. Specifically, ten agencies used commercial automated systems (DHS, EEOC, FDIC, FTC, Justice, NTSB, NASA, Pension Benefit Guaranty Corporation, and USAID); three agencies developed their own agency systems (State, DOI, and TVA); and five agencies used Microsoft Excel or Word to track requests (Administrative Conference of the United States, American Battle Monuments Commission, Broadcasting Board of Governors, OMB, and U.S. African Development Foundation). Further, all of the agencies had established telephone or Internet services to assist requesters in tracking the status of requests; and they used modern technology (e.g., mobile applications) to inform citizens about FOIA. For example, the commercial systems allow requesters to submit a request and track the status of that request online. In addition, DHS developed a mobile application that allows FOIA requesters to submit requests and check the status of existing requests. The 2016 FOIA amendments require agencies’ chief FOIA officers to offer training to agency staff regarding their responsibilities under FOIA. In addition, Justice’s OIP has advised every agency to make such training available to all of their FOIA staff at least once each year. The office has also encouraged agencies to take advantage of FOIA training opportunities available throughout the government. The 18 selected agencies’ chief FOIA officers offered FOIA training opportunities to staff in fiscal years 2016 and 2017. For example: Eleven agencies provided training that gave an introduction and overview of FOIA (the American Battle Monuments Commission, EEOC, Justice, FDIC, FTC, NARA, Pension Benefit Guaranty Corporation, State, TVA, U.S. African Development Foundation, and USAID). Three agencies offered training for their agencies’ new online FOIA tracking and processing systems (DOI, NTSB, and Pension Benefit Guaranty Corporation). Three agencies provided training on responding to, handling, and processing FOIA requests (DHS, DOI, and State). Three agencies offered training on understanding and applying the exemptions under FOIA (FDIC, FTC, and U.S. African Development Foundation). Two agencies offered training on the processing of costs and fees (NASA and TVA). 
Memorandums from both the President and the Attorney General in 2009 highlight the importance of online disclosure of information and further direct agencies to make information available without a specific FOIA request. Further, the 2016 FOIA amendments require online access to government information and require agencies to make information available to the public in electronic form for up to four categories: agency final opinions and orders, statements of policy and interpretations, administrative staff manuals of interest to the public, and frequently requested records. While all 18 agencies that we reviewed post records online, only 15 of them had posted all categories of information, as required by the FOIA amendments. Specifically, 7 agencies—the American Battle Monuments Commission, the Pension Benefit Guaranty Corporation, EEOC, FDIC, FTC, DOJ, and State—had, as required, made records in all four categories publicly available online. In addition, 5 agencies that were only required to publish online records in three of the categories—the Administrative Conference of the United States, Broadcasting Board of Governors, DHS, OMB, and USAID—had done so. Further, 3 agencies that were only required to publish online records in two of the categories—U.S. African Development Foundation, NARA, and TVA—had done so. The remaining 3 agencies—DOI, NASA, and NTSB—had posted records online for three of four required categories. Regarding why the three agencies did not post all of their four required categories of online records, DOI officials stated that the agency does not make publicly available all FOIA records that have been requested 3 or more times, as it does not have the time to post all such records that have been requested. NASA officials explained that, while the agency issues final opinions, it does not post them online. 
As for NTSB, while its officials said they try to post information that is frequently requested, they do not post the information on a consistent basis. Making the four required categories of information available in electronic form is an important step in allowing the public easy access to government documents. Until these agencies make all required categories of information available in electronic form, they cannot ensure that they are providing the required openness in government. In 2005, the President issued an executive order that established the role of a Chief FOIA Officer. In 2007, amendments to FOIA required each agency to designate a chief FOIA officer who shall be a senior official at the Assistant Secretary or equivalent level. Of the 18 selected agencies, 12 agencies have Chief FOIA Officers who are senior officials at the Assistant Secretary or equivalent level. The Assistant Secretary level is comparable to senior executive level positions at levels III, IV, and V. Specifically, State has designated its Assistant Secretary for the Bureau of Administration; DOI and NTSB have designated their Chief Information Officers; Administrative Conference of the United States, Broadcasting Board of Governors, FDIC, NARA, and U.S. African Development Foundation have designated their general counsels; and Justice, NASA, TVA, and USAID designated their Associate Attorney General, Associate Administrator for Communications, the Vice President for Communications, and the Assistant Administrator for the Bureau of Management, respectively. However, 6 agencies—American Battle Monuments Commission, DHS, EEOC, Pension Benefit Guaranty Corporation, FTC, and OMB—do not have chief FOIA officers who are senior officials at the Assistant Secretary or equivalent level. Officials from 5 of these agencies stated that their agencies all have chief FOIA officers and that they believed they had designated the appropriate officials. 
Officials at FTC acknowledged that the chief FOIA officer position is not designated at a level equivalent to an Assistant Secretary but is a senior position within the agency. However, while there are chief FOIA officers at these agencies, until the chief FOIA officers are designated at the Assistant Secretary or equivalent level, the agencies will lack assurance that these officials have the necessary authority to make decisions about agency practices, personnel, and funding. FOIA requires federal agencies to publish regulations in the Federal Register that inform the public of their FOIA operations. Specifically, in 2016, FOIA was amended to require agencies to update their regulations regarding their FOIA operations. To assist agencies in meeting this requirement, OIP created a FOIA regulation template for agencies to use as they update their regulations. Among other things, OIP’s guidance encouraged agencies to: describe their dispute resolution process; describe their administrative appeals process; notify requesters in response letters that they have a minimum of 90 days to file an administrative appeal; inform requesters that the agency may charge fees for requests determined to involve “unusual” circumstances; and update the regulations in a timely manner (i.e., update regulations within 180 days after the enactment of the 2016 FOIA amendments). Five agencies in our review—DHS, DOI, FDIC, FTC, and USAID—addressed all five requirements in updating their regulations. In addition, seven agencies addressed four of the five requirements: the Administrative Conference of the United States, EEOC, Justice, NARA, NTSB, Pension Benefit Guaranty Corporation, and TVA did not update their regulations in a timely manner. Further, four agencies addressed three or fewer requirements (U.S. African Development Foundation, State, NASA, and Broadcasting Board of Governors) and two agencies (American Battle Monuments Commission and OMB) did not address any of the requirements. 
Figure 4 indicates the extent to which the 18 agencies had addressed the five selected requirements. Agencies that did not address all five requirements provided several explanations as to why their regulations were not updated as required: American Battle Monuments Commission officials stated that, while they updated the agency’s draft regulation in August 2017, it remains unpublished pending internal reviews with the General Counsel in preparation for submission to the Federal Register. No new posting date has been established. American Battle Monuments Commission last updated its regulation on February 26, 2003. State officials noted that their regulation was updated two months prior to the new regulation requirements but did not provide a specific reason for not reissuing it. They explained that a working group is reviewing the regulation for updates, with no timeline identified. State last updated its regulation on April 6, 2016. NASA officials did not provide a reason for not updating its regulation as required. Officials did, however, state that its draft regulation is with the Office of General Counsel for review. NASA last updated its regulations on August 11, 2017. Broadcasting Board of Governors officials did not provide a reason for not updating its regulation as required. Officials did, however, note that the agency is in the process of updating its regulation and anticipates it will complete this update by the end of 2018. The Broadcasting Board of Governors last updated its regulation on February 2, 2002. OMB officials did not provide a reason for not updating the agency’s regulation as required. Officials did, however, state that due to a change in leadership they do not have a time frame for updating their regulation. OMB last updated its regulation on May 27, 1998. The chief FOIA officer at the U.S. 
African Development Foundation stated that, while the agency had updated and submitted its regulation to be published in December 2016, the regulation went unpublished due to an error with the acknowledgment needed to publish it in the Federal Register. The regulation was subsequently published on February 3, 2017. The official further noted that, when the agency responds to FOIA requests, it has not charged a fee for unusual circumstances, and therefore the agency did not believe it had to disclose information regarding fees in its regulation. Until these six agencies publish updated regulations that address the necessary requirements, as called for in FOIA and OIP guidance, they likely will be unable to provide the public with required regulatory and procedural information to ensure transparency and accountability in the government. The 2016 FOIA amendments required OMB to work with Justice to build a consolidated online FOIA request portal. This portal is intended to allow the public to submit a request to any agency from a single website and include other tools to improve the public’s access to the benefits of FOIA. Further, the act required OMB to establish standards for interoperability between the consolidated portal and agency FOIA systems. The 2016 FOIA amendments did not provide a time frame for developing the portal and standards. With OMB’s support, Justice developed an initial online portal. Justice’s OIP officials stated that they expect to update the portal to provide basic functionality that aligns with requirements of the act, including the ability to make a FOIA request, and technical processes for interoperability among agencies’ various FOIA systems.
According to OIP officials, in partnership with OMB, OIP was able to identify a dedicated funding source to operate and maintain the portal to ensure its success in the long term, with major agencies sharing in the costs to operate, maintain, and fund any future enhancements designed to improve FOIA processes. The first iteration of the National FOIA portal launched on Justice’s foia.gov website on March 8, 2018. In our draft report, we determined that the 18 selected agencies in our review had FOIA request backlogs of varying sizes, ranging from no backlogged requests at some agencies to 45,000 or more requests at other agencies. Generally, the agencies with the largest backlogs had received the most requests. In an effort to aid agencies in reducing their backlogs, Justice’s OIP identified key practices that agencies can use. However, while the agencies reported using these practices and other methods, few of them managed to reduce their backlogs during the period from fiscal year 2012 through 2016. In particular, of the four agencies with the largest backlogs, only one—NARA—reduced its backlog. Agencies attributed their inability to decrease backlogs to the number and complexity of requests, among other factors. However, agencies also lack comprehensive plans to implement practices on an ongoing basis. The selected agencies in our review varied considerably in the size of their FOIA request backlogs. Specifically, from fiscal year 2012 through 2016, 10 of the 18 selected agencies reported a backlog of 60 or fewer requests (and, of these 10 agencies, 6 reported having no backlog in at least 1 year); 4 agencies had backlogs between 61 and 1,000 requests per year; and 4 agencies had backlogs of over 1,000 requests per year. The four agencies with backlogs of more than 1,000 requests for each year we examined were Justice, NARA, State, and DHS. Table 2 shows the number of requests and the number of backlogged requests for the 18 selected agencies during the 5-year period.
Over the 5-year period, 14 of the 18 selected agencies experienced an increase in their backlogs in at least 1 year. By contrast, 2 agencies (the Administrative Conference of the United States and the U.S. African Development Foundation) reported no backlogs, and 3 agencies (American Battle Monuments Commission, NASA, and NARA) reported reducing their backlogs. Further, of the four agencies with the largest backlogs (DHS, State, Justice, and NARA), only NARA reported a backlog lower in fiscal year 2016 than in fiscal year 2012. Figure 5 shows the trends for the four agencies with the largest backlogs, compared with the rest of the 18 agencies. In most cases, agencies with small or no backlogs (60 or fewer requests) also received relatively few requests. For example, the Administrative Conference of the United States and the U.S. African Development Foundation reported no backlogged requests during any year but also received fewer than 30 FOIA requests a year. The American Battle Monuments Commission also received fewer than 30 requests a year and reported only 1 backlogged request per year in 2 of the 5 years examined. However, the Pension Benefit Guaranty Corporation (PBGC) and FDIC received thousands of requests over the 5-year period but maintained no backlog in a majority of the years examined. PBGC received a total of 19,120 requests during the 5-year period and reported a backlog of 8 requests in only one year, fiscal year 2013. FDIC received a total of 3,405 requests during the 5-year period and reported a backlog of 13 requests in fiscal year 2015 and 4 in fiscal year 2016. The four agencies with backlogs of 1,000 or more (Justice, NARA, State, and DHS) received significantly more requests each year. For example, NARA received between about 12,000 and 50,000 requests each year, while DHS received from about 190,000 to 325,000 requests. In addition, the number of requests NARA received in fiscal year 2016 was more than double the number received in fiscal year 2012.
DHS received the most requests of any agency—a total of 1,320,283 FOIA requests over the 5-year period. The Attorney General’s March 2009 memorandum called on agency chief FOIA officers to review all aspects of their agencies’ FOIA administration and report to Justice on steps that have been taken to improve FOIA operations and disclosure. Subsequent Justice guidance required agencies to include in their chief FOIA officer reports information on their FOIA request backlogs, including whether the agency experienced a backlog of requests; whether that backlog decreased from the previous year; and, if not, the reasons the backlog did not decrease. In addition, agencies that had more than 1,000 backlogged requests in a given year were required to describe their plans to reduce their backlogs. Beginning in fiscal year 2015, these agencies were to describe how they implemented their plans from the previous year and whether that resulted in a backlog reduction. In addition, Justice’s OIP identified best practices for reducing FOIA backlogs. The office held a best practices workshop on reducing backlogs and improving timeliness. The office then issued guidance in August 2014 that highlighted key practices to improve the quality of a FOIA program. OIP identified the following methods in its best practices guidance. Utilize resources effectively. Agencies should allocate their resources effectively by using multi-track processing, making use of available technology, and shifting priorities and staff assignments to address needs and effectively manage workloads. Routinely review metrics. Agencies should regularly review their FOIA data and processes to identify challenges or barriers. Additionally, agencies should identify trends to effectively allocate resources, set goals for staff, and ensure needs are addressed. Emphasize staff training. Agencies should ensure FOIA staff are properly trained so they can process requests more effectively and with more autonomy.
Training and engagement of staff can also solidify the importance of the FOIA office’s mission. Obtain leadership support. Agencies should ensure that senior management is involved in and supports the FOIA function in order to increase awareness and accountability, as well as make it easier to obtain necessary resources or personnel. Agencies identified a variety of methods that they used to address their backlogs. These included both the practices identified by Justice and additional methods. Ten agencies maintained relatively small backlogs of 60 or fewer requests and were thus not required to develop plans for reducing backlogs. However, 2 of these 10 agencies, both of which received significant numbers of requests, described various methods used to maintain a small backlog: PBGC officials credit the agency’s success to training, not only for FOIA staff but for all incoming personnel, while also rewarding staff for going above and beyond in facilitating FOIA processing. PBGC has incorporated all the best practices identified by OIP, including senior leadership involvement that supports FOIA initiatives and program goals, routine review of metrics to optimize workflows, effective utilization of resources, and staff training. According to FDIC officials, the agency’s overall low backlog numbers are attributable to a trained and experienced FOIA staff, senior management involvement, and coordination among FDIC divisions. However, FDIC stated that the increase in its backlog in fiscal year 2015 was due to the increased complexity of requests. The 4 agencies with backlogs greater than 60 but fewer than 1,000 (EEOC, DOI, NTSB, and USAID) reported using various methods to reduce their backlogs. However, all 4 showed an increase over the 5-year period. EEOC officials stated that the agency had adopted practices recommended by OIP such as multi-track processing, reviewing workloads to ensure sufficient staff, and using temporary assignments to address needs.
However, it has seen a large increase in its backlog numbers, going from 131 in fiscal year 2012 to 792 in fiscal year 2016. EEOC attributed the rise in backlogs to an increase in requests received, loss of staff, and the complex and voluminous nature of requests. DOI, according to agency officials, has also tried to incorporate reduction methods and best practices, including proactively releasing information that may be of interest to the public, thus avoiding the need for a FOIA request; enhanced training for its new online FOIA tracking and processing system; improved inter-office collaboration; monthly reports on backlogs and weekly charts on incoming requests to heighten awareness among leadership; and monitoring trends. Yet, DOI has seen an increase in its backlog, from 449 in fiscal year 2012 to 677 in fiscal year 2016, an increase of 51 percent. DOI attributed the increase to the loss of FOIA personnel, an increase in the complexity of requests, an increase in FOIA-related litigation, an increase in incoming requests, and staff having additional duties. Officials at NTSB stated that the agency utilized contractors and temporary staff assignments to augment staffing and address backlogs. Despite these efforts, NTSB saw a large increase in its backlog, from 62 in fiscal year 2012 to 602 in fiscal year 2016. Officials stated that the increase was due to the increased complexity of requests, including requests for “any and all” documentation related to a specific subject, often involving hundreds to thousands of pages per request. According to USAID officials, the agency conducts and reviews inventories of its backlog and requests to remove duplicates and closed cases, group and classify requests by necessary actions and responsive offices, and initiate immediate action. In addition, USAID seeks to identify tools and solutions to streamline records for review and processing.
However, its backlog numbers have continually increased, from 201 in fiscal year 2012 to 318 in fiscal year 2016. USAID attributes that growth to an increase in the number of requests, loss of FOIA staff, increased complexity and volume of requests, competing priorities, and world events that may drive surges in requests. Of the four agencies with the largest backlogs, all reported taking steps that in some cases included best practices identified by OIP; however, only NARA successfully reduced its backlog by the end of the 5-year period. Justice noted that it made efforts to reduce its backlog by incorporating best practices. Specifically, OIP worked with components within Justice through the Component Improvement Initiative to identify causes contributing to a backlog and assist components in finding efficiencies and overcoming challenges. The Chief FOIA Officer continued to provide top-level support to reduction efforts by convening the department’s FOIA Council to manage overall FOIA administration. In addition, many of the components created their own reduction plans, which included hiring staff, utilizing technology, and providing more training, requester outreach, and multi-track processing. However, despite these efforts, the backlog steadily increased during the 5-year period, from 5,196 requests in fiscal year 2012 to 10,644 in fiscal year 2016, an overall increase of 105 percent. Justice attributes the increase in backlogs to several challenges, including an increase in incoming requests and an increase in the complexity of those requests. Other challenges that Justice noted were staff shortages and turnover, reorganization of personnel, time to train incoming staff, and the ability to fill positions previously held by highly qualified professionals. NARA officials stated that one key step NARA took was to make corrections in its Performance Measurement and Reporting System.
They noted that this system previously commingled backlogged requests with the number of pending FOIA requests, skewing the backlog numbers higher. The improvements included better accounting for pending and backlogged cases, distinguishing between simple and complex requests, and no longer counting cases closed within 20 days as open until the beginning of the following fiscal year. In addition, officials also stated that the FOIA program offices have been successful at working with requesters to narrow the scope of requests. NARA also stated that it was conducting an analysis of FOIA across the agency to identify any barriers in the process. Officials also identified other methods, including using multi-track processing, shifting priorities to address needs, improved communication with agencies, proactive disclosures, and the use of mediation services. NARA has shown significant progress in reducing its backlog. In fiscal year 2012 it had a backlog of 7,610 requests, which spiked to 9,361 in fiscal year 2014. However, by fiscal year 2016 the number of backlogged requests had dropped to 2,932, even though the number of requests received that fiscal year had more than doubled. However, NARA did note challenges to reducing its backlog numbers, namely, the increase in the number of requests received. State developed and implemented a plan to reduce its backlog in fiscal year 2016. The plan incorporated two best practices by focusing on identifying the extent of the backlog problem and developing ways to address the backlog with available resources. According to State officials, effort was dedicated to improving how FOIA data was organized and reported. Expedited and litigation cases were top priorities, whereas in other cases a first-in, first-out method was employed. Even with these efforts, however, State experienced a 117 percent increase in its backlog over the 5-year period. State’s backlog more than doubled from 10,045 in fiscal year 2014 to 22,664 in fiscal year 2016.
Among the challenges to managing its backlog, State reported an increase in incoming requests, a high number of litigation cases, and competing priorities. Specifically, the number of incoming requests for State increased by 51 percent during the 5-year period. State also reported that it has allocated 80 percent of its FOIA resources to meet court-ordered productions associated with litigation cases, resulting in fewer staff to work on processing routine requests. This included, among other efforts, a significant allocation of resources in fiscal year 2015 to meet court-imposed deadlines to process emails associated with the former Secretary of State, resulting in a surge in the backlog. In 2017 State began an initiative to actively address its backlog. The Secretary of State issued an agency-wide memorandum announcing the department’s renewed efforts by committing more resources and workforce to backlog reduction. The memo states that new processes are to be implemented for both the short and long term, and the FOIA office plans to work with the various bureaus to outline the tasks, resources, and workforce necessary to ensure success and compliance. With renewed leadership support, State has reported significant progress in its backlog reduction efforts. DHS, in its chief FOIA officer reports, reported that it implemented several plans to reduce backlogs. The DHS Privacy Office, which is responsible for oversight of the department’s FOIA program, worked with components to help eliminate the backlog. The Privacy Office sent monthly emails to component FOIA officers on FOIA backlog statistics, convened management meetings, conducted oversight, and reviewed workloads. Leadership met weekly to discuss the oldest pending requests, appeals, and consultations, and determined the steps needed to process those requests. In addition, several other DHS components implemented actions to reduce backlogs.
Customs and Border Protection hired and trained additional staff, encouraged requesters to file requests online, established productivity goals, updated guidance, and utilized better technology. U.S. Citizenship and Immigration Services, the National Protection and Programs Directorate, and Immigration and Customs Enforcement increased staffing or developed methods to better forecast future workloads to ensure adequate staffing. Immigration and Customs Enforcement also implemented a commercial off-the-shelf web application, awarded a multi-million dollar contract for backlog reduction, and detailed employees from various other offices to assist in the backlog reduction effort. Due to efforts by the Privacy Office and other components, the backlog dropped 66 percent in fiscal year 2015, decreasing to 35,374 requests. Yet, despite continued efforts in fiscal year 2016, the backlog increased again, to 46,788. DHS attributes the increases in its backlog to several factors, including an increase in the number of requests received, increased complexity and volume of responsive records for those requests, loss of staff, and active litigation with demanding production schedules. One reason the eight agencies with significant backlogs may be struggling to consistently reduce their backlogs is that they lack documented, comprehensive plans that would provide a more reliable, sustainable approach to addressing backlogs. In particular, they do not have documented plans that describe how they will implement best practices for reducing backlogs over time, including specifying how they will use metrics to assess the effectiveness of their backlog reduction efforts and ensure that senior leadership supports backlog reduction efforts, among other best practices identified by OIP.
While agencies with backlogs of 1,000 or more are required to describe backlog reduction efforts in their chief FOIA officer reports, these descriptions consist of a high-level narrative and do not include a specific discussion of how the agencies will implement best practices over time to reduce their backlogs. In addition, agencies with backlogs of fewer than 1,000 requests are not required to report on backlog reduction efforts; however, the selected agencies in our review with backlogs in the hundreds still experienced an increase over the 5-year period. Without a more consistent approach, agencies will continue to struggle to reduce their backlogs to a manageable level, particularly as the number and complexity of requests increase over time. As a result, their FOIA processing may not respond effectively to the needs of requesters and the public. FOIA requires agencies to report annually to Justice on their use of statutory (b)(3) exemptions. This includes specifying which statutes they relied on to exempt information from disclosure and the number of times they did so. To assist agencies in asserting and accounting for their use of these statutes, Justice instructs agencies to consult a running list of all the statutes that have been found by the courts to qualify as proper (b)(3) statutes. However, agencies may also use a statute not included in the Justice list, because many statutes that appear to meet the requirements of (b)(3) have not been identified by a court as qualifying statutes. If an agency uses a (b)(3) statute that is not on the qualifying list, Justice guidance instructs the agency to include information about that statute in its annual report submission. Justice reviews the statute and provides advice to the agency, but does not make a determination on the appropriateness of using that statute under the (b)(3) exemption.
Based on data agencies reported to Justice, during fiscal years 2010 to 2016, agencies claimed 237 statutes as the basis for withholding information. Of these statutes, 75 were included on Justice’s list of qualifying statutes under the (b)(3) exemption. Further, we identified 140 additional statutes that were not among the 237 statutes claimed by agencies during fiscal years 2010 to 2016, but that have provisions similar to other (b)(3) statutes authorizing an agency to withhold information from the public. We found that the 237 statutes cited as the basis for (b)(3) exemptions during the period from fiscal year 2010 to 2016 fell into eight general categories of information. These categories were (1) personally identifying information, (2) national security, (3) commercial, (4) law enforcement and investigations, (5) internal agency, (6) financial regulation, (7) international affairs, and (8) environmental. Figure 6 identifies the eight categories and the number of agency-claimed (b)(3) statutes in each of the categories. Of the 237 (b)(3) statutes cited by agencies, the majority—178—fell into four of the eight categories: Forty-nine of these statutes related to withholding personally identifiable information, including, for example, a statute related to withholding death certificate information provided to the Social Security Administration. Forty-five statutes related to the national security category. For example, one statute exempted files of foreign intelligence or counterintelligence operations of the National Security Agency. Forty-two statutes were in the law enforcement and investigations category, including a statute that exempts from disclosure information provided to Justice pursuant to civil investigative demands pertaining to antitrust investigations. Forty-two statutes fell into the commercial category.
For example, one statute in this category related to withholding trade secrets and other confidential information related to consumer product safety. The remaining 59 statutes were in four categories: internal agency functions and practices, financial regulation, international affairs, and environmental. The environmental category contained the fewest statutes and included, for example, a statute related to withholding certain air pollution analysis information. As required by FOIA, agencies also reported the number of times they used each (b)(3) statute. In this regard, 33 FOIA-reporting agencies indicated that they had used 10 of the 237 (b)(3) statutes more than 200,000 times. Of these 10 most-commonly used statutes, the single most-used statute (8 U.S.C. § 1202(f)) related to withholding records pertaining to the issuance or refusal of visas to enter the United States. It was used by 4 agencies over 58,000 times. Further, of the 10 most-commonly used statutes, the statute used by the greatest number of agencies (26 U.S.C. § 6103) related to the withholding of certain tax return information; it was used by 24 FOIA-reporting agencies about 30,000 times. By contrast, some statutes were used by only a single agency. Specifically, the Department of Veterans Affairs used a statute related to withholding certain confidential veteran medical records (38 U.S.C. § 7332) more than 16,000 times. Similarly, EEOC used a statute related to employment discrimination on the basis of disability (42 U.S.C. § 12117) more than 10,000 times. Table 4 shows the 10 most-used statutes under the (b)(3) exemption, the agency that used each one most frequently, and the number of times each was used by that agency for the period covering fiscal years 2010 through 2016. The OPEN FOIA Act of 2009 amended FOIA to require that any federal statute enacted subsequently must specifically cite paragraph (b)(3) of FOIA to qualify as a (b)(3) exemption statute.
Prior to 2009, a federal statute qualified as a statutory (b)(3) exemption if it (1) required that the matters be withheld from the public in such a manner as to leave no discretion on the issue, or (2) established particular criteria for withholding or referred to particular types of matters to be withheld. In response to the amendment, in 2010, Justice released guidance to agencies stating that any statute enacted after 2009 must specifically cite the (b)(3) exemption to qualify as a withholding statute. Further, the guidance encouraged agencies to contact Justice with questions regarding the implementation of the amendment. Even with this guidance, we found that a majority of the statutes claimed by agencies during fiscal years 2010 through 2016 that were subject to this requirement did not specifically cite the (b)(3) exemption. Specifically, of the 237 (b)(3) statutes claimed by agencies, 103 were enacted or amended after 2009 and, thus, were subject to the requirement of the OPEN FOIA Act. Of those 103 statutes, 86 lacked the required statutory text citing exemption (b)(3) of FOIA. Figure 7 shows the number of agency-claimed statutes subject to the OPEN FOIA Act of 2009 requirement that did not cite the (b)(3) exemption. Agencies are using these statutes as the basis for withholding information when responding to FOIA requests, despite the statutes not referencing the (b)(3) exemption as required by the 2009 FOIA amendments. In our report, being issued today, we found that, according to the available information and Justice and OSC officials, since fiscal year 2008, no court orders have been issued that have required OSC to initiate a proceeding to determine whether disciplinary action should be taken against agency FOIA personnel. Specifically, officials in Justice’s Office of Information Policy stated that there have been no lawsuits filed by a FOIA requester that have led the courts to take all three requisite actions needed for Justice to refer a court case to OSC.
Justice’s litigation and compliance reports identified six court cases (between calendar years 2013 and 2016) in which the requesters sought a referral from the courts in an attempt to have OSC initiate an investigation. However, in all six cases, the courts denied those requests, finding that each case did not result in the courts taking the three actions necessary to involve OSC. Thus, given these circumstances, Justice has not referred any court orders to OSC to initiate a proceeding to determine whether disciplinary action should be taken against agency FOIA personnel. For its part, OSC officials confirmed that the office has neither received, nor acted on, any such referrals from Justice. As such, OSC has not had cause to initiate disciplinary actions for the improper withholding of FOIA records. In summary, the 18 agencies we selected for review fully implemented half of the six FOIA requirements reviewed, and the vast majority of agencies implemented two additional requirements. However, only 5 agencies published and updated their FOIA regulations in a timely and comprehensive manner. Fully implementing FOIA requirements will better position agencies to provide the public with necessary access to government records and ensure openness in government. The selected agencies in our review varied considerably in the size of their backlogs. While 10 reported a backlog of 60 or fewer requests, 4 had backlogs of over 1,000 per year. Agencies identified a variety of methods that they used to address their backlogs, including practices identified by Justice, as well as additional methods. However, the selected agencies varied in their success at reducing their backlogs. This was due, in part, to a lack of plans that describe how the agencies will implement best practices for reducing backlogs over time. Until agencies develop such plans, they will be limited in their ability to respond effectively to the needs of requesters and the public.
Accordingly, our draft report contains 23 planned recommendations to selected agencies. These recommendations address posting records online, designating chief FOIA officers, updating regulations consistent with requirements, and developing plans to reduce backlogs. Implementation of our recommendations should better position these agencies to address FOIA requirements and ensure the public is provided with access to government information. Chairman Grassley, Ranking Member Feinstein, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you have any questions on matters discussed in this testimony, please contact David A. Powner at (202) 512-9286 or at pownerd@gao.gov. Individuals who made key contributions to this testimony are Anjalique Lawrence (assistant director), Lori Martinez (analyst in charge), Gerard Aflague, David Blanding, Christopher Businsky, Rebecca Eyler, James Andrew Howard, Carlo Mozo, David Plocher, and Sukhjoot Singh. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
FOIA requires federal agencies to provide the public with access to government records and information based on the principles of openness and accountability in government. Each year, individuals and entities file hundreds of thousands of FOIA requests for information on numerous topics that contribute to the understanding of government actions. In the last 9 fiscal years, federal agencies subject to FOIA have received about 6 million requests. GAO was asked to summarize its draft report on federal agencies' compliance with FOIA requirements. GAO's objectives, among others, were to (1) determine the extent to which agencies have implemented selected FOIA requirements; (2) describe the methods established by agencies to reduce backlogged requests and the effectiveness of those methods; and (3) identify any statutory exemptions that have been used by agencies as the basis for withholding (redacting) information from requesters. To do so, GAO selected 18 agencies based on their size and other factors and assessed their policies against six FOIA requirements. GAO also reviewed the agencies' backlog reduction plans and developed a catalog of statutes that agencies have used to withhold information. In its draft report, GAO determined that all 18 selected agencies had implemented three of six Freedom of Information Act (FOIA) requirements reviewed. Specifically, all agencies had updated response letters to inform requesters of the right to seek assistance from FOIA public liaisons, implemented request tracking systems, and provided training to FOIA personnel. For the three additional requirements, 15 agencies had provided online access to government information, such as frequently requested records, 12 agencies had designated chief FOIA officers, and 5 agencies had published and updated their FOIA regulations to inform the public of their operations. 
Until these agencies address all of the requirements, they increase the risk that the public will lack information that ensures transparency and accountability in government operations. The 18 selected agencies had backlogs of varying sizes, with 4 agencies having backlogs of 1,000 or more requests during fiscal years 2012 through 2016. These 4 agencies reported using best practices identified by the Department of Justice, such as routinely reviewing metrics, as well as other methods, to help reduce their backlogs. Nevertheless, these agencies' backlogs fluctuated over the 5-year period (see figure). The 4 agencies with the largest backlogs attributed challenges in reducing their backlogs to factors such as increases in the number and complexity of FOIA requests. However, these agencies lacked plans that described how they intend to implement best practices to reduce backlogs. Until agencies develop such plans, they will likely continue to struggle to reduce backlogs to a manageable level. Agencies used various types of statutory exemptions to withhold information when processing FOIA requests during fiscal years 2010 to 2016. The majority of these fell into the following categories: personally identifiable information, national security, law enforcement and investigations, and confidential and commercial business information. GAO's draft report contains recommendations to selected agencies to post records online, designate chief FOIA officers, update regulations consistent with requirements, and develop plans to reduce backlogs.
The BSA established reporting, recordkeeping, and other AML requirements for financial institutions. By complying with BSA/AML requirements, U.S. financial institutions assist government agencies in the detection and prevention of money laundering and terrorist financing by, among other things, maintaining compliance policies, conducting ongoing monitoring of customers and transactions, and reporting suspicious financial activity. Regulation under and enforcement of BSA involves several federal agencies. FinCEN is responsible for administering the BSA and has authority to enforce compliance with its requirements and implementing regulations, including through civil money penalties. FinCEN issues regulations under BSA and relies on the examination functions performed by other federal regulators, including the federal banking regulators. FinCEN also collects, analyzes, and maintains the reports and information filed by financial institutions under BSA and makes those reports available to law enforcement and regulators. FinCEN has delegated BSA/AML examination authority for banks to the federal banking regulators. The federal banking regulators have issued their own BSA regulations that require banks to establish and maintain a BSA compliance program which, among other things, requires banks to identify and report suspicious activity. The banking regulators are also required to review compliance with BSA/AML requirements and regulations, which they generally do every 12 to 18 months as a part of their routine safety and soundness examinations. Federal banking regulators take a risk-based approach to BSA examinations—that is, they focus on key areas of risk or specific problems identified by the bank. Among other things, examiners review whether banks have an adequate system of internal controls to ensure ongoing compliance with BSA/AML regulations. 
The federal banking regulators may take enforcement actions using their prudential authorities for violations of BSA/AML requirements. They may also assess civil money penalties against financial institutions and individuals independently, or concurrently with FinCEN. All banks are required to establish an AML compliance program that includes policies, procedures, and processes which, at a minimum, must provide for: a system of internal controls to ensure ongoing compliance, a designated individual or individuals responsible for managing BSA compliance (BSA compliance officer), training for appropriate personnel, independent testing for BSA/AML compliance, and appropriate risk-based procedures for conducting ongoing customer due diligence. BSA/AML regulations require that each bank tailor a compliance program that is specific to its size and own risks based on factors such as the products and services offered, customers, types of transactions processed, and locations served. BSA/AML compliance programs may include the following components: Customer Identification Program (CIP)—Banks must have written procedures for opening accounts and, at a minimum, must obtain from each customer their name, date of birth, address, and identification number before opening an account. In addition, banks’ CIPs must include risk-based procedures for verifying the identity of each customer to the extent reasonable and practicable. Banks must also collect information on individuals who are beneficial owners of a legal entity customer in addition to the information they are required to collect on the customer under the CIP requirement. Customer Due Diligence (CDD)—CDD procedures enable banks to predict with relative certainty the types of transactions in which a customer is likely to engage, which assists banks in determining when transactions are potentially suspicious. 
Banks must document their process for performing CDD and implement and maintain appropriate risk-based procedures for conducting ongoing customer due diligence. These procedures include, but are not limited to, understanding the nature and purpose of customer relationships for the purpose of developing a customer risk profile, and conducting ongoing monitoring to identify and report suspicious transactions and, on a risk basis, to maintain and update customer information. Enhanced Due Diligence (EDD)—Customers who banks determine pose a higher money laundering or terrorist financing risk are subject to EDD procedures. EDD for higher-risk customers helps banks understand these customers’ anticipated transactions and implement an appropriate suspicious activity monitoring system. Banks review higher-risk customers and their transactions more closely at account opening and more frequently throughout the term of their relationship with the bank. Suspicious Activity Monitoring—Banks must also have policies and procedures in place to monitor transactions and report suspicious activity. Banks use different types of monitoring systems to identify or alert staff of unusual activity. A manual transaction monitoring system typically targets specific types of transactions (for example, those involving large amounts of cash and those to or from foreign areas) and includes a manual review of various reports generated by the bank’s information systems in order to identify unusual activity. An automated monitoring system can cover multiple types of transactions and use various rules, thresholds, and scenarios to identify potentially suspicious activity. These systems typically use computer programs to identify individual transactions, patterns of unusual activity, or deviations from expected activity. Banks that are large, operate in many locations, or have a large volume of higher-risk customers typically use automated monitoring systems. 
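The rule-and-threshold approach that automated monitoring systems take can be sketched in a few lines. The following is a hypothetical illustration only; the rule names, thresholds, and transaction fields are invented for the example and do not represent any bank's actual system:

```python
# Hypothetical sketch of a rule-based automated monitoring system.
# Rule names, thresholds, and transaction fields are illustrative only.
from dataclasses import dataclass

@dataclass
class Transaction:
    customer_id: str
    amount: int    # USD
    kind: str      # e.g., "cash_deposit", "intl_wire"

def large_cash_rule(txn):
    # Flag single cash transactions at or above an illustrative threshold.
    return txn.kind == "cash_deposit" and txn.amount >= 10_000

def foreign_wire_rule(txn):
    # Flag international wires above an illustrative scenario threshold.
    return txn.kind == "intl_wire" and txn.amount >= 3_000

RULES = [("LARGE_CASH", large_cash_rule), ("INTL_WIRE", foreign_wire_rule)]

def run_monitoring(transactions):
    """Apply each rule to each transaction; return alerts for analyst review."""
    alerts = []
    for txn in transactions:
        for name, rule in RULES:
            if rule(txn):
                alerts.append((name, txn.customer_id, txn.amount))
    return alerts

txns = [
    Transaction("C1", 12_500, "cash_deposit"),
    Transaction("C2", 4_000, "intl_wire"),
    Transaction("C3", 900, "cash_deposit"),
]
print(run_monitoring(txns))  # [('LARGE_CASH', 'C1', 12500), ('INTL_WIRE', 'C2', 4000)]
```

In a real system, alerts like these would feed the manual review step described above, since flagged transactions still require an analyst to decide whether the activity is actually suspicious.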
Banks also must comply with certain reporting requirements, including: CTR: A bank must electronically file a CTR for each transaction in currency—such as a deposit or withdrawal—of more than $10,000. SAR: Banks are required to electronically file a SAR when a transaction involves or aggregates at least $5,000 in funds or other assets, and the institution knows, suspects, or has reason to suspect that the transaction meets certain criteria qualifying as suspicious. Generally, the federal banking regulators do not direct banks to open, close, or maintain individual accounts. However, banks generally include in their BSA/AML compliance programs policies and procedures describing criteria for not opening, or for closing, an account. For example, although there is no requirement for a bank to close an account that is the subject of a SAR filing, a bank should develop policies and procedures that indicate when it will escalate issues identified as the result of repeat SAR filings on accounts, including criteria on when to close an account. Additionally, a bank’s CIP should contain procedures for circumstances in which the bank cannot verify a customer’s identity, including when the bank should not open an account and when it should close an account. Federal banking regulators also cannot prohibit banks from closing branches. However, FDIC-insured banks are required to submit a notice of any proposed branch closing to their primary banking regulator no later than 90 days prior to the date of the proposed branch closing. The notice must include a detailed statement of the reasons for closing the branch and statistical or other information in support of the reasons. Banks are also required to mail a notice to the customers of the branch proposed to be closed at least 90 days prior to the proposed closing and must post a notice to customers in the branch proposed to be closed at least 30 days prior to the proposed closing. 
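The CTR and SAR filing thresholds described at the start of this passage can be expressed as a simple decision check. This is a hypothetical, simplified sketch: the real rules involve aggregation and other conditions, and the function and its parameters are invented for illustration:

```python
# Hypothetical helper applying the reporting thresholds described above:
# CTRs for currency transactions of more than $10,000, and SARs when a
# transaction involves or aggregates at least $5,000 and the activity is
# deemed suspicious. Simplified for illustration.
def reports_required(amount, is_currency, deemed_suspicious):
    reports = []
    if is_currency and amount > 10_000:
        reports.append("CTR")
    if deemed_suspicious and amount >= 5_000:
        reports.append("SAR")
    return reports

print(reports_required(12_000, is_currency=True, deemed_suspicious=False))  # ['CTR']
print(reports_required(6_000, is_currency=False, deemed_suspicious=True))   # ['SAR']
```

Note that a single transaction can trigger both reports: a suspicious cash deposit of $12,000, for instance, would require both a CTR and a SAR under these thresholds.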
The notice should state the proposed date of closing and either identify where branch customers may obtain service following that date or provide a telephone number for customers to call to determine such alternative sites. In October 2017, Mexico was the second-largest goods trading partner of the United States in terms of both imports and exports, according to U.S. Census trade data. Trade with Mexico is an important component of Southwest border states’ economies, which benefit from their proximity to the international border and the related seaports and inland ports for the exportation and importation of goods. The fresh produce industry is an example of a key industry in the border region. The fresh produce industry encompasses several activities involved with importation, inspection, transportation, warehousing, and distribution of Mexican-grown produce to North American markets, all of which provide employment opportunities and revenues to local economies. Another key industry in the region is manufacturing. The Southwest border has played a role in a growing trend known as production sharing, in which companies—predominantly based in the United States—locate some operations in Mexico, thus achieving lower costs in the overall production process. Local Southwest border communities also benefit from pedestrians crossing into the United States from Mexico to visit and shop in their communities. For example, Department of Transportation border crossing data show that in September 2017, nearly 750,000 pedestrians entered the United States at the San Ysidro, California, border crossing—the busiest pedestrian port of entry into the country. The Department of State has identified Mexico as a major money laundering country. As a result of its proximity to Mexico, the Southwest border region faces high money laundering and related financial crime risks. 
The U.S.-Mexico border includes major population centers, transportation hubs, and large tracts of uninhabited desert. According to Treasury’s 2015 National Money Laundering Risk Assessment, criminal organizations have used the vast border to engage in cross-border drug trafficking, human smuggling, and money laundering. The 2015 assessment also states that bulk cash smuggling remains the primary method Mexican drug trafficking organizations use to move illicit proceeds across the Southwest border into Mexico. Some cash collected domestically to pay the drug trafficking organizations for drugs is channeled from distribution cells across the United States to cities and towns along the Southwest border, and from there is smuggled into Mexico. All counties within the Southwest border region have been identified as either a High Intensity Financial Crime Area (HIFCA) or a High Intensity Drug Trafficking Area (HIDTA) with the vast majority being identified as both (see fig. 1). HIFCAs and HIDTAs aim to concentrate law enforcement efforts at the federal, state, and local levels to combat money laundering and drug trafficking in designated high-intensity money laundering zones and in areas determined to be critical drug-trafficking regions of the United States, respectively. Several characteristics of the Southwest border region make the region a high-risk area for money laundering activity. These characteristics, which require additional efforts for Southwest border banks to comply with BSA/AML requirements, include high volumes of cash transactions, cross-border transactions, and foreign accountholders. Bank representatives we spoke with said that they manage these added BSA/AML compliance challenges through activities such as more frequent monitoring and investigating of suspicious activities, but that these efforts require an investment of resources. 
Money laundering risk is high in the Southwest border region because of the high volume of cash transactions, the number of cross-border transactions, and foreign accountholders, according to bank representatives, federal banking regulators, and others. Cash transactions increase the BSA/AML compliance risk for banks because the greater anonymity associated with using cash results in greater risk for money laundering or terrorist financing. A regional economic development specialist noted, for example, that Mexican nationals who shop in border communities typically use cash as a payment form. Further, representatives from a regional trade group told us that border businesses prefer payment in cash over checks from Mexican banks because of potential variations in the exchange rate before a peso-denominated check clears. The trade group representatives we spoke with also noted that currency exchanges add to the volume of cash transactions in the region. In June 2010, the Mexican finance ministry published new AML regulations that restricted the amounts of physical cash denominated in U.S. dollars that Mexican financial institutions could receive. According to FinCEN officials and some of the federal bank examiners we spoke with, these regulations altered the BSA/AML risk profile of some U.S. banks, particularly those in the Southwest border region. For example, U.S. banks started receiving bulk shipments of currency directly from Mexican nationals and businesses, rather than from Mexican banks. This increased BSA/AML compliance risk for the U.S. banks because they now had to assess the risk of each individual customer shipping them currency, rather than the collective risk from their Mexican banking counterparts. In addition, according to FinCEN, the regulations added to the level of cash in the Southwest border region because businesses in the region saw higher levels of cash payments from Mexican customers. This also created additional risk for U.S. 
banks when these businesses deposited the cash payments. Our review of data on banks’ CTR filings confirmed that bank branches that operate in Southwest border region counties handle more large cash transactions than bank branches elsewhere. For example, our analysis found that bank branches in Southwest border region counties generally file more CTRs than bank branches in comparable counties in the same border states or in other high-risk financial crime or drug trafficking counties that are not in border states. Specifically, in 2016, bank branches in Southwest border region counties filed nearly 30 percent more CTRs, on average, than bank branches in comparable counties elsewhere in their same state, and about 60 percent more than those in other high-risk counties outside the region. Similar differences occurred in 2014 and 2015 (see fig. 2). Cross-border transactions are also higher risk for money laundering because international transfers can present an attractive method to disguise the source of funds derived from illegal activity. Certain industries, such as agriculture, that are prevalent in the Southwest border region have legitimate business practices that could appear suspicious without sufficient context, regional representatives said. For example, representatives of one produce industry association we spoke with said produce distributors often import produce from Mexican farmers and pay them via wire transfer, which the farmers may then immediately withdraw in cash to pay laborers. Transactions that involve cross-border wire transfers and immediate withdrawals of cash may raise suspicion of money laundering that requires further scrutiny by the bank. BSA/AML regulations generally require banks to keep additional documentation for domestic and international fund transfers of $3,000 or more, including specific identifying information about the originator and beneficiary of the transaction. 
If the bank sends or receives funds transfers to or from institutions in other countries, especially those with strict privacy and secrecy laws, the bank should have policies and procedures to determine whether the amounts, the frequency of the transfer, and countries of origin or destination are consistent with the nature of the business or occupation of the customer. Southwest border banks cited foreign accountholders as another type of high-risk customer for money laundering and terrorist financing. These types of customers are prevalent in the Southwest border region, examiners said, and can create challenges for banks to verify and authenticate their identification, source of funds, and source of wealth. Southwest border banks and others cited these types of customers as adding BSA/AML compliance risk for banks, particularly if the accountholders do not reside in the United States. These customers may also have more frequent funds transfers to other countries. Foreign accountholders who are “senior foreign political figures” also create additional money laundering and terrorist financing risk because of the potential for their transactions to involve the proceeds from foreign-official corruption. Some Southwest border banks told us they provide accounts to senior foreign political figures, but may limit the number of those types of accounts. The volume of high-risk customers and cross-border transactions can lead to more intensive account monitoring and investigation of suspicious transactions, Southwest border bank representatives said. Performing effective due diligence and complying with CIP requirements for higher-risk customers and transactions can be more challenging because banks might need more specialized processes for them than for lower-risk customers and transactions. 
For example, representatives from some Southwest border banks told us their BSA/AML compliance staff travel to Mexico or collect information from sources in Mexico to establish the legitimacy of businesses across the border. Another bank said they ask to see 3 months of some high-risk businesses’ previous bank statements to determine the typical volume of cash and wire transfers and that this type of due diligence is very time-consuming. The bank also collects details about the recipients of the wired funds in an effort to determine the legitimacy of the payments. Some Southwest border banks also described using special processes to evaluate BSA/AML compliance risks for foreign customers and said they used extra caution before accepting them as customers. These special processes included translating business documents from Spanish to English to certify the legitimacy of business customers and developing internal expertise on currently acceptable identity documents issued by foreign governments. Southwest border bank representatives we spoke with said addressing these compliance challenges also can require more resources for monitoring high-risk customers and investigating suspicious transactions. High-risk customers require additional detail to be collected when accounts are opened and on an ongoing basis. Representatives of one Southwest border bank explained that they monitor high-risk customers’ transactions more frequently—every 3 months, compared to every 6 months for medium-risk customers. Further, high volumes of cash activity can generate substantial numbers of alerts in bank monitoring systems, and these alerts are evaluated by banks to determine whether SARs should be filed. Transaction structuring, which involves attempts to evade the $10,000 CTR filing requirement by, for example, making several smaller transactions, is a common source of alerts, some bank representatives said. 
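A simple screen for the structuring pattern just described might aggregate a customer's individually sub-threshold cash transactions over a short window and flag customers whose totals exceed the CTR trigger. The window length, data layout, and function names below are illustrative assumptions, not a description of any bank's actual monitoring logic:

```python
# Hypothetical structuring screen: flag customers whose sub-$10,000 cash
# transactions within a short look-back window sum past the CTR threshold.
from collections import defaultdict
from datetime import date, timedelta

CTR_THRESHOLD = 10_000
WINDOW = timedelta(days=3)  # illustrative look-back window

def structuring_alerts(transactions):
    """transactions: list of (customer_id, date, cash_amount) tuples."""
    by_customer = defaultdict(list)
    for cust, day, amount in transactions:
        if amount < CTR_THRESHOLD:  # individually below the filing trigger
            by_customer[cust].append((day, amount))
    alerts = set()
    for cust, items in by_customer.items():
        items.sort()
        for day, _ in items:
            # Sum this customer's sub-threshold cash activity in the window.
            window_total = sum(a for d, a in items if day <= d <= day + WINDOW)
            if window_total > CTR_THRESHOLD:
                alerts.add(cust)
    return alerts

txns = [
    ("C1", date(2024, 1, 1), 4_000),
    ("C1", date(2024, 1, 2), 4_500),
    ("C1", date(2024, 1, 3), 3_000),   # C1 totals $11,500 over 3 days
    ("C2", date(2024, 1, 1), 9_000),   # single sub-threshold deposit: no alert
]
print(structuring_alerts(txns))  # {'C1'}
```

As the report notes, alerts like these are only the starting point: flagged activity is typically investigated manually before a bank decides whether a SAR filing is warranted, and many such alerts turn out to be false positives.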
Several banks we interviewed cited the investigation of potential structuring as one of their common BSA/AML compliance activities. Although many banks have monitoring software to generate suspicious activity alerts, representatives said the flagged transactions generally are investigated manually and can be a labor-intensive part of banks’ overall BSA/AML compliance programs. Southwest border bank representatives we spoke with also told us that their suspicious activity monitoring systems often generate “false positives”—meaning further investigation leads to a determination that no SAR filing is warranted. As a result, the total number of SAR filings can actually understate banks’ total BSA/AML compliance efforts associated with suspicious transaction monitoring. We found that bank branches in Southwest border region counties filed more SARs, on average, from 2014 through 2016 than bank branches in comparable counties in the same border states or in other high-risk financial crime or drug trafficking counties that are not in border states. For example, in 2016, bank branches in Southwest border region counties filed three times as many SARs, on average, as bank branches operating in other counties within Southwest border states and about 2.5 times as many SARs, on average, as bank branches in other high-risk financial crime or drug trafficking counties in nonborder states. These differences in SAR filings showed a similar pattern in 2014 and 2015 (see fig. 3). Federal banking regulators cited some Southwest border banks for noncompliance with BSA/AML requirements from January 2009 through June 2016. Those citations included 41 formal or informal enforcement actions taken against Southwest border banks. FinCEN also took two formal enforcement actions during that period. As part of the bank examination process, the federal banking regulators also cited Southwest border banks for 229 BSA/AML violations from January 2009 through June 2016. 
Of these, SAR-related violations were the most common type of violation (33 percent). This was followed closely by violations related to BSA/AML monitoring and compliance (31 percent)—a category we defined to include competencies such as having an adequate system of BSA/AML internal controls and providing adequate BSA/AML training (see fig. 4). Our nationally representative survey found that most Southwest border banks terminated accounts for reasons related to BSA/AML risk from January 2014 through December 2016 and limited, or did not offer, accounts to certain customer types, consistent with BSA/AML purposes. However, our survey also found that many Southwest border banks may also be engaging in derisking. Nationally, our econometric analysis suggests that counties that were urban, had younger populations, had higher incomes, or had higher money laundering-related risk were more likely to lose branches. Money laundering-related risks were likely to have been relatively more important drivers of branch closures in the Southwest border region. Most Southwest border banks reported terminating accounts for reasons related to BSA/AML risk. Based on our survey results, from January 1, 2014, through December 31, 2016, we estimate that almost 80 percent of Southwest border banks had terminated personal or business accounts for reasons related to BSA/AML risk. For the subset of Southwest border banks whose operations extend outside of the Southwest border region, we estimate that almost 60 percent reported that they terminated business or personal accounts domiciled in their Southwest border branches. For banks that did not operate in the Southwest border region (non-Southwest border banks), account terminations related to BSA/AML risk varied by the size of the bank. For example, an estimated 93 percent of medium banks and an estimated 95 percent of large banks terminated accounts for reasons related to BSA/AML risk, compared to an estimated 26 percent of small banks. 
Among the five types of businesses we identified for our survey as high risk for money laundering and terrorist financing, cash-intensive small businesses (for example, retail stores, restaurants, and used car dealers) were the most common types of business accounts that Southwest border banks reported terminating for reasons related to BSA/AML risk. For example, over 70 percent of Southwest border banks reported terminating cash-intensive small business accounts. Between 45 percent and 58 percent of Southwest border banks cited terminating accounts for the remaining four categories of high-risk business accounts we identified: money services businesses, domestic businesses engaged in cross-border trade, nontrade-related foreign businesses, and foreign businesses engaged in cross-border trade.

Bank-Reported Data on Accounts Terminated in 2016 for BSA/AML Reasons

In response to our survey, several banks provided data on the number of accounts they terminated in 2016 for reasons related to BSA/AML risk. We found that two extra-large banks (those banks with $50 billion or greater in assets) were responsible for the majority of these account terminations for both business and personal accounts. These terminations accounted for less than half a percent of the extra-large banks’ overall accounts. These numbers only represent account terminations for the banks that provided data and are not generalizable to the population of banks.

The most common BSA/AML-related reason banks reported for terminating accounts from January 2014 through December 2016 was the filing of SARs associated with the accounts. Based upon our survey, we estimate that 93 percent of Southwest border banks terminated accounts because of the filing of SARs. 
Through discussions with Southwest border bank representatives, we found that banks vary the level of internal investigations they conduct into the suspicious activity before deciding to terminate an account as a result of a certain number of SAR filings. Representatives from 3 of the 19 Southwest border banks we spoke with told us that their account closure policies generally required the automatic termination of an account when a certain number of SARs—ranging from 1 to 4—were filed for an account. Representatives from two other Southwest border banks said a certain number of SARs filed for one account would lead to an automatic review of the account that would determine whether or not the account should be closed. Other Southwest border bank representatives we interviewed did not indicate having a specific policy for terminating accounts related to the number of SAR filings, but some of these representatives said that SAR filings were one of the factors that could lead to account terminations. Figure 5 shows the survey estimates for the other BSA/AML reasons Southwest border banks cited for terminating accounts. Some commonly cited reasons were the failure of the customer to respond adequately to requests for information as part of customer due diligence processes and the reputational risk associated with the customer type. For example, an estimated 80 percent of Southwest border banks cited the failure of the customer to respond adequately to requests for information as part of customer due diligence processes. Some Southwest border bank representatives told us that sometimes customers do not provide adequate documentation in response to their due diligence inquiries. These representatives said that after a certain number of attempts to obtain the documentation, the lack of customer responsiveness results in them terminating the account. A bank may also terminate an account if the activity of the customer could risk the reputation of the bank. 
About 68 percent of Southwest border banks that terminated accounts cited the reputational risk associated with the customer type as a reason for terminating an account. Some Southwest border bank representatives we spoke with said they have closed accounts due to the nature of the business. For example, some bank representatives said they have closed accounts for gambling and marijuana businesses. In addition, law enforcement officials from the Southwest Border Anti-Money Laundering Alliance told us that they thought that some of the accounts terminated by Southwest border banks were a result of the information the banks were given from local law enforcement and other federal agencies. For example, when funnel accounts—accounts in one geographic area that receive multiple cash deposits and from which funds are withdrawn in a different geographic area with little time elapsing between the deposits and withdrawals—were first identified by law enforcement as a money laundering method, banks responded by closing these types of accounts. Non-Southwest border banks generally reported the same primary reasons for terminating accounts as Southwest border banks. The top two reasons for terminating accounts cited by non-Southwest border banks that responded to the survey were the filing of SARs associated with the accounts and the failure of the customer to respond adequately to requests for information as part of customer due diligence processes. A majority of Southwest border banks and non-Southwest border banks reported limiting or not offering accounts to certain types of businesses considered high risk for money laundering and terrorist financing, particularly money services businesses and foreign businesses. For example, an estimated 76 percent of Southwest border banks limited, or did not offer, accounts to nontrade-related foreign businesses; 75 percent, to money services businesses; and 72 percent, to foreign businesses engaged in cross-border trade. 
The most common reason (cited by 88 percent of Southwest border banks) for limiting, or not offering, an account to these types of businesses was that the business type fell outside of the bank’s risk tolerance—the level of risk an organization is willing to accept around specific objectives. Similarly, 69 percent of Southwest border banks cited the inability to manage the BSA/AML risk associated with the customer (for example, because of resource constraints) as a factor for limiting, or not offering, accounts. Representatives from some Southwest border banks we spoke with explained that they do not have the resources needed to conduct adequate due diligence and monitoring for some of the business types considered high risk for money laundering and terrorist financing. As a result, they told us that they no longer offer accounts for certain business lines. For example, a representative from one Southwest border bank told us that the bank no longer offers accounts to money services businesses because of the BSA/AML compliance requirements and monitoring needed to service those types of accounts. In particular, they stated they do not have the resources to monitor whether the business has the appropriate BSA/AML compliance policies and procedures in place and to conduct site visits to ensure it is operating in compliance with BSA/AML requirements. Another Southwest border bank representative told us they have stopped banking services for used clothing wholesalers who export their product to Mexico because they were unable to mitigate the risk associated with these types of businesses. They explained that these companies’ business models involve many individuals crossing the U.S.-Mexico border to purchase pallets of clothing with cash to import to Mexico. The bank representative explained that the business model for this industry made it very hard to identify the source of the large volumes of cash. 
Other reasons Southwest border banks reported for limiting, or not offering, certain types of business accounts are shown in figure 6. Similar to the reasons given by Southwest border banks, the most common reason non-Southwest border banks reported for limiting, or not offering, accounts to certain types of businesses considered high risk for money laundering and terrorist financing was that the customer type fell outside of the bank’s risk tolerance. The second most common reason—cited by 80 percent of Southwest border banks—for limiting, or not offering, accounts to certain types of businesses considered high risk for money laundering and terrorist financing was that the customer type drew heightened BSA/AML regulatory oversight—behavior that could indicate derisking. For example, representatives from one Southwest border bank explained that they no longer offer accounts to money services businesses because they want to be viewed favorably by their regulator. They added that banking these types of customers is very high risk for the bank with very little reward. Another bank that operates in the Southwest border region explained that rather than being able to focus on its own BSA/AML risk assessment and the performance of accounts, it feels pressured to make arbitrary decisions to close accounts based on specific concerns of its examiners. Several Southwest border bank representatives also described how recent BSA/AML law enforcement and regulatory enforcement actions have caused them to become more conservative in the types of businesses for which they offer accounts.
For example, representatives from one Southwest border bank we spoke with stated that many of the banks that do business in the Southwest border region have stopped servicing cross-border businesses due to a large enforcement action in which the allegations against the bank cited an ineffective AML program that exposed it to illicit United States/Mexico cross-border cash transactions. A representative from another Southwest border bank explained that his bank could have a large banking business in one of the state’s border towns, but the bank has chosen not to provide services there because if BSA/AML compliance deficiencies are identified from servicing that area, the penalties could be high enough to shut down the whole bank. In addition, while banks may terminate accounts because of SAR filings as a method to manage money laundering and terrorist financing risk and to comply with BSA/AML requirements, some of these terminations may be related to derisking. For example, some Southwest border bank representatives we spoke with as part of this review, as well as other banks and credit unions we spoke with in a previous review, told us that they have filed SARs to avoid potential criticism during examinations, not because they thought the observed activity was suspicious. Non-Southwest border banks also commonly cited the inability to manage risk associated with the customer type and heightened regulatory oversight as reasons for limiting, or not offering, accounts. Our survey results and discussions with Southwest border bank representatives are consistent with what a senior Treasury official identified in a 2015 speech as causing correspondent banking and money services business account terminations. The speech noted that a number of interrelated factors may be resulting in the terminations, but that the most frequently mentioned reason related to efforts to comply with AML and terrorist financing requirements. 
In particular, banks raised concerns about (1) the cost of complying with AML and terrorist financing regulations, (2) uncertainty about supervisors’ expectations regarding what is appropriate due diligence, and (3) the nature of the enforcement and supervisory response if they get it wrong. The speech noted that banks said that they made decisions to close accounts not so much because they were unable to manage the illicit finance risks but because the costs associated with taking on those risks had become too high. It further stated that there is a gap between what supervisory agencies have said about the standards they hold banks to and banks’ assessment of those standards, and that there was still a perception among banks that supervisory and enforcement expectations lack transparency, predictability, and consistency. The senior Treasury official noted that this perception feeds into higher anticipated compliance costs, and when banks input this perceived risk into their cost-benefit analysis, it may eclipse the potential economic gain of taking on a new relationship. Counties in the Southwest border region have been losing bank branches since 2012, similar to national and regional trends, as well as trends in other high-risk financial crime or drug trafficking counties that are outside the region. Most of the 32 counties that make up the Southwest border region (18 counties, or nearly 60 percent) did not lose bank branches from 2013 through 2016, but 5 counties lost 10 percent or more of their branches over this time period (see top panel of fig. 7). Those 5 counties are Cochise, Santa Cruz, and Yuma, Arizona; Imperial, California; and Luna, New Mexico. Within the counties we identified as having the largest percentage loss of branches, losses were sometimes concentrated in smaller communities within the county (see bottom panel of fig. 7). For example, Calexico in Imperial County, California, lost 5 of its 6 branches from 2013 through 2016.
In Santa Cruz County in Arizona, one zip code in Nogales accounted for all of the branch losses in the county from 2013 through 2016, losing 3 of its 9 branches. More generally, branch losses can vary substantially across different zip codes in a county (see for example bottom panel of fig. 7). In other instances, counties that lost a relatively small share of their branches can contain communities that lost a more substantial share—for example San Ysidro in San Diego County lost 5 of its 12 branches (about 42 percent) while the county as a whole lost only 5 percent of its branches from 2013 through 2016. Based on our analysis, counties losing branches in the Southwest border region tended to have substantially higher SAR filings, on average, than Southwest border region counties that did not lose branches. That is, counties that lost branches from 2013 through 2016 had about 600 SAR filings per billion dollars in deposits, on average, and counties that did not lose branches had about 60 SAR filings per billion dollars in deposits, on average (see fig. 8). The econometric models we developed and estimated generally found that demographic and money laundering-related risk factors were important predictors of national bank branch closures. These models are subject to certain limitations, some of which we detail later in this section as well as appendix III, and as such, we interpret the results with some degree of caution. In general, our results suggest that counties were more likely to lose branches, all else equal, if they were (1) urban, had a higher per capita personal income, and had a younger population (proportion under 45); or (2) designated as a HIFCA or HIDTA county, or had higher SAR filings. 
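The SAR comparison above normalizes raw filing counts by county bank deposits. A minimal sketch of that normalization follows; the county figures are hypothetical (not GAO's data) and were chosen only so the resulting rates match the cited averages of about 600 and 60 filings per billion dollars in deposits.

```python
# Illustrative SAR-rate normalization: filings per $1 billion in deposits.
# The county inputs below are hypothetical, not from GAO's analysis.
counties = {
    "lost_branches": {"sar_filings": 1_800, "deposits_usd": 3.0e9},
    "kept_branches": {"sar_filings": 240,   "deposits_usd": 4.0e9},
}

def sars_per_billion(county):
    """SAR filings per $1 billion in bank deposits."""
    return county["sar_filings"] / (county["deposits_usd"] / 1e9)

for name, c in counties.items():
    print(f"{name}: {sars_per_billion(c):.0f} SARs per $1B in deposits")
```

Normalizing by deposits, rather than comparing raw SAR counts, keeps counties with large banking sectors from appearing riskier simply because they process more activity.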
We term the latter three characteristics (HIFCA, HIDTA, and SAR filings) “money laundering-related risk factors.” While our models are unable to definitively identify the causal effect of BSA/AML regulation on branch closures from these money laundering-related risk factors, the impact of the SAR variables, in particular, could reflect a combination of BSA/AML compliance effort and the underlying level of suspicious or money laundering-related activity in a county. Our econometric models are based on all counties with bank branches in the United States and are designed to predict whether a county will lose a branch the following year based on the characteristics of the county. The models included demographic, economic, and money laundering-related risk factors that might have influenced branch closures nationally since 2010 (see app. III for additional information on our models). The demographic factors included in our models are Rural-Urban Continuum Codes, age profile (proportion of the county over 45), and the level of per capita income. We chose these demographic factors, in particular, because they are associated with the adoption of mobile banking, which may explain the propensity to close branches in a community. The economic factors included in our models—intended to reflect temporary or cyclical economic changes affecting the county—are the growth of per capita income, growth in building permits (a measure of residential housing conditions), and growth of the population. The money laundering-related risk factors, as described previously, are whether a county has been designated a HIFCA or a HIDTA and the level of suspicious or possible money laundering-related activity reported by bank branches in the county, as represented by SAR filings. Demographic characteristics of counties were important predictors of branch closures. Our results were consistent with those demographic characteristics associated with the adoption of mobile banking.
As such, our results are consistent with the hypothesis that mobile banking is among the factors leading some banks to close branches. The most urban counties were about 22 percentage points more likely to lose one or more branches over the next year than the most rural counties. A county with 70 percent of the population under 45 was about 9 percentage points more likely to lose one or more branches over the next year than a county with half the population under 45. A county with per capita income of $50,000 was about 7 percentage points more likely to lose one or more branches over the next year than a county with per capita income of $20,000. Money laundering-related characteristics of a county were also important predictors of branch closures in our models. HIDTA counties were about 11 percentage points more likely to lose one or more branches over the next year than non-HIDTA counties (the effect in HIFCA counties is less significant statistically and smaller in magnitude). A county with 200 SARs filed per billion dollars in bank deposits was about 8 percentage points more likely to lose one or more bank branches over the next year than a county where no bank branch had filed a SAR. Southwest border bank officials we spoke with generally said that SAR filing was a time- and resource-intensive process, and that the number of SAR filings—to some extent—reflected the level of effort, and overall BSA compliance risk, faced by the bank. That said, the impact of SAR variables in our models could reflect a combination of (1) the extent of BSA/AML compliance effort and risk faced by the bank, as expressed by bank officials, and (2) the underlying level of suspicious or money laundering-related activity in a county.
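The marginal effects described above come from GAO's county-level models, whose specification and coefficients are not reproduced in this excerpt. As a rough illustration only, a logit-style model over the same kinds of covariates could be sketched as follows; every coefficient and county profile below is hypothetical.

```python
import math

# Purely illustrative logit model of next-year branch loss. The coefficients
# are hypothetical and are NOT the estimates from GAO's models.
COEFS = {
    "intercept": -2.0,
    "urban": 1.2,               # 1 = most urban county, 0 = most rural
    "income_k": 0.02,           # per $1,000 of per capita income
    "under_45": 1.0,            # proportion of the population under 45
    "hidta": 0.6,               # 1 if designated a HIDTA county
    "sars_per_billion": 0.002,  # SAR filings per $1 billion in deposits
}

def p_branch_loss(county):
    """Predicted probability that a county loses one or more branches next year."""
    z = COEFS["intercept"] + sum(COEFS[k] * county[k] for k in county)
    return 1.0 / (1.0 + math.exp(-z))

# County profiles loosely based on averages cited in the text
# (income and SAR rates differ; both profiles assumed urban).
border = {"urban": 1, "income_k": 35, "under_45": 0.60, "hidta": 1, "sars_per_billion": 600}
other  = {"urban": 1, "income_k": 41, "under_45": 0.55, "hidta": 0, "sars_per_billion": 60}
print(f"border: {p_branch_loss(border):.2f}, other: {p_branch_loss(other):.2f}")
```

With these hypothetical inputs, the border-county profile receives a higher predicted closure probability, driven by the HIDTA indicator and the much higher SAR rate, mirroring the qualitative pattern the models found.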
Money laundering-related risk factors were likely to have been relatively more important drivers of branch closures in the Southwest border region because the region had much higher SAR filings and a larger share of counties designated as HIDTAs than the rest of the country. More generally, given the characteristics of Southwest border counties and the rest of the United States, our models suggest that while demographic factors have been important drivers of branch closures in the United States overall, risks associated with money laundering were likely to have been relatively more important in the Southwest border region. Specifically, the Southwest border region is roughly as urban as the rest of the country, has a somewhat lower per capita income (about $35,000 in the Southwest border region versus about $41,000 elsewhere), and is somewhat younger on average (about 40 percent 45 and over in the Southwest border region versus about 45 percent elsewhere), but money laundering-related risk factors were relatively more prevalent, based on our measures, in the Southwest border region. Southwest border bank representatives we interviewed told us they considered a range of factors when deciding whether or not to close a branch. For example, most Southwest border bank representatives that we spoke with about the reasons for branch closures (6 of 10) told us that BSA/AML compliance challenges were not part of the decision to close a branch. However, most Southwest border bank representatives said that the financial performance of the branch is one of the most important factors they consider when deciding to close a branch, and as described previously, BSA/AML compliance can be resource intensive, which may affect the financial performance of a branch. Further, nearly half of the Southwest border bank representatives we spoke with (4 of 10) did mention that BSA/AML compliance costs could be among the factors considered in determining whether or not to close a branch.
In addition, at least one bank identified closing a branch as one option to address considerable BSA/AML compliance challenges. Finally, some Southwest border bank representatives (3 of 10) also mentioned customer traffic in the branch or the availability of mobile banking as relevant to their decision to close a branch. Communities we visited in Arizona, California, and Texas experienced multiple bank branch closures from 2013 through 2016. Some local banking customers that participated in the discussion groups we held in these communities also reported experiencing account terminations. While perspectives gathered from our visits to the selected cities cannot be generalized to all locations in Southwest border counties, stakeholders we spoke with noted that these closures affected key businesses and local economies and raised concerns about economic growth. According to some discussion group participants, local businesses, economic development specialists, and other stakeholders (border stakeholders) in the three Southwest border communities we visited, banks in their communities terminated the accounts of longtime established customers, sometimes without notice or explanation. They acknowledged that, because of their proximity to the U.S.-Mexico border, their communities were susceptible to money laundering-related activity and described how banks’ increased efforts to comply with BSA/AML requirements may have influenced banks’ decisions to terminate accounts. Each of the three Southwest border communities we visited—Nogales, Arizona; San Ysidro, California; and McAllen, Texas—also experienced multiple bank branch closures from 2013 through 2016 (see fig. 9). Our analysis shows that from 2013 through 2016, these communities lost a total of 12 bank branches, 9 of which were branches of large or extra-large banks, based on asset size.
But branch closures were proportionally more significant in communities that already had a limited number of branch options. For instance, Nogales (3 of its 9 branches closed) and San Ysidro (5 of its 12 branches closed) both lost a third or more of all their bank branches, compared to McAllen, where approximately 6 percent of branches closed (4 of its 63 branches). According to border stakeholders we spoke with, businesses engaged in cross-border trade, cash-intensive businesses, and Mexican nationals—all significant parts of the border economy—were affected by account terminations and branch closures in the three communities we visited. For example, the cross-border produce industry accounts for almost 25 percent of jobs and wages in Nogales, according to a 2013 study prepared for Nogales Community Development. One produce business owner who had an account terminated told us that she was told that the volume of funds deposited into the account from her affiliated Mexican business created security risks that the bank was no longer willing to sustain, and she was unable to negotiate with the bank to keep it open. She said that it took almost 7 months to open a new account and that it involved coordination among bankers in multiple cities on both sides of the border. While some produce businesses and economic development specialists we spoke with explained that some regional banks in their communities have opened accounts for some small- to medium-sized produce businesses, they still have concerns about the long-term effects of limited access to banking services on smaller produce firms. One economic development specialist explained that these small companies often rely on local banks for funding, which enables them to develop and bring innovation to the produce industry.
Some discussion group participants who we spoke with also described challenges related to account terminations that cash-intensive businesses face in operating in the Southwest border region because of banks’ increased emphasis on BSA/AML compliance. They explained that cash transactions raised suspicions for banks because of their associated money laundering risk; however, cash is a prevalent payment source for legitimate businesses in the region. For example, one money services business owner who participated in our discussion group in San Ysidro said that because his business generates large volumes of cash, he struggles to keep a bank account as a result of banks’ oversight of and caution regarding cash transactions. He said his business account has been closed three times over the past 35 years and that banks have declined his requests to open an account at least half a dozen times. Similarly, another discussion group participant explained that companies that import automobiles into Mexico use cash to pay for cars in the United States and that trying to make these large cash deposits raised suspicions for U.S. banks. Border stakeholders we spoke with also described how challenges associated with branch closures and terminations of accounts of Mexican nationals affected the Southwest border communities we visited. Border communities like San Ysidro are home to retail businesses, such as restaurants and clothing stores. According to our analysis of Bureau of Transportation Statistics data, an average of almost 69,000 personal vehicle passengers and 25,000 pedestrians entered the United States daily in September 2017 through the San Ysidro land port of entry. Economic development specialists told us that these visitors spend money on goods and services in local border communities. 
For example, one economic development specialist in Arizona estimated that Mexican nationals spend about $1 billion in Pima County alone each year, and another estimated that 70 percent of the sales taxes collected in Nogales are paid by Mexican customers who cross the border to shop. One of the specialists explained that Mexicans—both day travelers to Tucson and those who own U.S. real estate and travel to the United States for other investment business—used to visit the region, withdraw money from their U.S. bank accounts, and subsequently spend money in border communities. He explained that Mexican nationals find it easier to have U.S. bank accounts to use while visiting and shopping on the U.S. side of the border. However, some discussion group participants said that because Mexican nationals have faced difficulties maintaining U.S. bank accounts, they have made fewer trips across the border and engaged in less commerce, which has affected the economies in their communities. Some participants also said that branch closures have affected businesses’ sales volumes in their communities. For example, one participant said that when branches closed in the San Ysidro Boulevard area—which is at the base of the pedestrian border crossing—businesses have had difficulty thriving due to reduced foot traffic by customers. According to border stakeholders we spoke with, branch closures also resulted in fewer borrowing options and limited investment in the communities, which they thought hindered business growth. For example, one discussion group participant explained that middle-sized businesses, such as those with revenues of approximately $2 million–$25 million, have fewer borrowing options when branches close in the community because the remaining regional and smaller banks may not have the capital to support the lending needs of businesses that size.
One economic development specialist and some discussion group participants also suggested that branch closures limited opportunities for local business expansion when banks outside the community are reluctant to lend to them. For example, in Tucson, Arizona, one specialist said that small businesses are having difficulty getting loans, which affects the ability of businesses to grow. To fill the void, some local businesses have turned to alternative lending options, such as title loan companies, accounts receivable lending companies, and family members as alternative funding sources. Rigorous academic research we reviewed suggests that branch closures reduce small business lending and employment growth in the area immediately around the branch. Our analysis of branch closure data based on estimates from this research suggests closed branches in the communities we visited could have amounted to millions of dollars in reduced lending and hundreds of fewer jobs. For example, in McAllen, Texas, this research suggests that the loss of four bank branches could have reduced employment growth by over 400 jobs and small business lending by nearly $3.5 million. Some discussion group participants said that as a result of branch closures and account terminations in the Southwest border communities we visited, they traveled further to conduct banking activities, paid higher fees for new banking alternatives, and experienced difficulty completing banking transactions. Some participants told us that they had to travel further to their new banking location, which resulted in additional costs and inconvenience for customers. For instance, some participants in Nogales and San Ysidro said they had to travel 20 to 40 minutes further to the next closest bank branch, with one participant noting that this especially created difficulty for elderly bank customers. 
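The McAllen figures cited above (4 closed branches, over 400 fewer jobs, nearly $3.5 million in reduced small business lending) imply simple per-branch magnitudes. The back-of-the-envelope division below is purely illustrative and is not an estimate from the cited research.

```python
# Per-branch impact implied by the McAllen figures cited in the text.
# Illustrative arithmetic only, not estimates from the cited research.
closed_branches = 4
jobs_reduced = 400            # approximate reduction in employment growth
lending_reduced_usd = 3_500_000  # approximate reduction in small business lending

jobs_per_branch = jobs_reduced / closed_branches
lending_per_branch = lending_reduced_usd / closed_branches
print(f"per closed branch: ~{jobs_per_branch:.0f} jobs, ~${lending_per_branch:,.0f} in lending")
```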
One discussion group participant said that when their local bank branch closed, they kept their account with that bank and traveled more than 70 miles to the next closest branch because they were afraid that they would not be able to open an account with another bank. Another participant also noted the additional cost of gas and time lost for other important matters as a result of traveling further to a branch. Other participants also noted that they experienced longer lines at their new branches because of the higher volume of customers from closed branches. Some participants also found that some banking alternatives were more expensive than their previous banking options when their accounts were terminated or a local branch closed. For instance, some discussion group participants said they paid higher fees at their new bank and one participant mentioned that she received a lower interest rate on her deposits at her new bank. Some participants also mentioned that some banking alternatives they used, such as currency exchanges, were more expensive than their previous banking options. Some discussion group participants also told us that they experienced difficulty completing banking transactions in their communities as a result of branch closures or banks’ increased efforts to comply with BSA/AML requirements. For example, some participants from one discussion group session said that only an automated teller machine (ATM) was available in their community after their branch closed and it was not appropriate for all types of banking transactions. Further, some participants were unsatisfied with not being able to get in-person assistance from bank staff when their branch closed. For instance, one participant said that without a local branch, there was no nearby bank personnel to help her when the local ATM malfunctioned. 
Further, while acknowledging banks’ need to comply with BSA/AML requirements, some discussion group participants explained that some banking transactions have become more difficult because of measures such as banks requiring additional forms of identification and placing limitations on cash transactions. Some participants, many of whom were longtime customers of their bank, also noted their disapproval of banks’ additional questioning and documentation requirements, saying there was little acknowledgment by the bank of their value as legitimate customers or of the bank’s existing knowledge of them as customers. Some participants acknowledged that they did not experience this challenge because of the increasing availability of mobile banking options, which allow customers to complete some transactions without going to a physical branch location. As another example, one business owner said she mostly used online banking and has a check reader in her office that she uses to deposit checks directly into her business accounts. The results of our survey (for both Southwest border banks and non-Southwest border banks) and discussions with Southwest border bank representatives indicate that banks are terminating accounts and limiting services, in part, as a way to manage perceived regulatory concerns about facilitating money laundering. In addition, the econometric models we developed and estimated also generally found that money laundering-related risk factors, which could reflect, in part, BSA/AML compliance effort and risks, were an important predictor of national bank branch closures and likely to have been relatively more important in the Southwest border region. Regulators have taken some actions in response to derisking, including issuing guidance and conducting some agency reviews. Regulators have also conducted retrospective reviews on some BSA/AML requirements.
However, regulators have taken limited steps aimed at addressing how banks’ regulatory concerns and BSA/AML compliance efforts may be influencing banks to engage in derisking or close branches. FinCEN and the federal banking regulators have responded to concerns about derisking on a national level by issuing guidance to banks and conducting some evaluations within their agencies to understand the extent to which derisking is occurring. The guidance issued by regulators has been aimed at clarifying BSA/AML regulatory expectations and discouraging banks from terminating accounts without evaluating risk presented by individual customers or banks’ abilities to manage risks. The guidance has generally encouraged banks to use a risk-based approach to evaluate individual customer risks and not to eliminate entire categories of customers. Some of the guidance issued by regulators attempted to clarify their expectations specifically for banks’ offering of services to money services businesses. For example, in March 2005, the federal banking regulators and FinCEN issued a joint statement on providing banking services to money services businesses to clarify the BSA requirements and supervisory expectations as applied to accounts opened or maintained for this type of customer. The statement acknowledged that money services businesses were losing access to banking services as a result of concerns about regulatory scrutiny, the risks presented by these types of accounts, and the costs and burdens associated with maintaining such accounts. In addition, in November 2014, OCC issued a bulletin which explained that OCC-supervised banks are expected to assess the risks posed by an individual money services business customer on a case-by-case basis and to implement controls to manage the relationship commensurate with the risks associated with each customer. 
More recently, Treasury and the federal banking regulators issued a joint fact sheet on foreign correspondent banking which summarized key aspects of federal supervisory and enforcement strategy and practices in the area of correspondent banking. In addition to issuing guidance, FDIC and OCC have taken some steps aimed at trying to determine why banks may be terminating accounts because of perceived regulatory concerns. For example, in January 2015, FDIC issued a memorandum to examiners establishing a policy that examiners document and report instances in which they recommend or require banks to terminate accounts during examinations. The memorandum noted that recommendations or requirements to terminate accounts must be made and approved in writing by the Regional Director before being provided to and discussed with bank management and the board of directors. As of December 2017, FDIC officials stated that there were no instances of recommendations or requirements for account terminations being documented by examiners. In 2016, OCC reviewed how the institutions it supervises develop and implement policies and procedures for evaluating customer risks as part of their BSA/AML programs and for making risk-based determinations to close customer accounts. OCC focused its review on certain large banks’ evaluation of risk for foreign correspondent bank accounts. This effort resulted in OCC issuing guidance to banks on periodic evaluation of the risks of foreign correspondent accounts. The guidance describes corporate governance best practices for banks’ consideration when conducting these periodic evaluations of risk and making account retention or termination decisions on their foreign correspondent accounts. Further, OCC’s Fiscal Year 2018 Bank Supervision Operating Plan noted that examiners should be alert to banks’ BSA/AML strategies that may inadvertently impair financial inclusion. 
However, as of September 2017, OCC officials stated that the agency has not identified any concerns related to financial inclusion. Treasury and the federal banking regulators have also participated in a number of international activities related to concerns about the decline in the number of correspondent banking and money services business accounts. For example, FDIC, OCC, and the Federal Reserve participate in the Basel Committee on Banking Supervision’s Anti-Money Laundering/Counter Financing of Terrorism Experts Group. Recent efforts of the group involved revising guidelines to update and clarify correspondent banking expectations. Treasury leads the U.S. engagement to the Financial Action Task Force (FATF)—an intergovernmental body that sets standards for combating money laundering, financing of terrorism, and other related threats to the integrity of the international financial system—which has issued guidance on correspondent banking and money services businesses. Treasury also participates in the efforts to combat derisking that are occurring through the Financial Stability Board’s Correspondent Banking Coordination Group, the Global Partnership for Financial Inclusion, and the International Monetary Fund. The federal banking regulators also met with residents and businesses in the Southwest border region to discuss concerns related to derisking in the region. For example, FDIC officials hosted a BSA/AML workshop in Nogales, Arizona, in 2015 for banks, businesses, trade organizations, and others. Officials from the Federal Reserve and OCC also participated in the workshop, during which the regulators tried to clarify BSA/AML regulatory requirements and expectations. In addition, OCC officials told us that they met with representatives of the Fresh Produce Association of the Americas, who had concerns about banks not providing services in the region.
OCC officials spoke to the produce industry representatives about various money laundering schemes and the role of the agency’s examiners during the meeting. Evaluation of BSA/AML regulations and their implementation is essential to ensuring the integrity of the financial system while facilitating financial inclusion. Without oversight of regulations after implementation, they might prove to be less effective than expected in achieving their intended goals, become outdated, or create unnecessary burdens. Regulations may also change the behaviors of regulated entities and the public in ways that cannot be predicted prior to implementation. Some regulators and international standard setters recognize that establishing a balanced BSA/AML regulatory regime is challenging. For example, in a 2016 speech, then-Comptroller of the Currency Thomas Curry stated that preventing money laundering and terrorist financing are important goals, but that a banking system that is truly safe and sound must also meet the legitimate needs of its customers and communities. FinCEN officials also told us that while the agency’s mission is to safeguard the financial system from illicit use and combat money laundering, they also must be cautious that their efforts do not prevent people from using the system. Further, FATF acknowledged that AML and counter-terrorism financing safeguards can affect financial inclusion efforts. FATF explained that applying an overly cautious approach to safeguards for money laundering and terrorist financing can have the unintended consequence of excluding legitimate businesses and consumers from the formal financial system. Executive orders encourage, and legislation requires, agencies to review existing regulations to determine whether they should be retained, amended, or rescinded, among other things. Retrospective reviews of existing rules help agencies evaluate how existing regulations are working in practice.
A retrospective review is an important tool that may reveal that an existing rule—while needed—has not operated as well as expected, and that changes may be warranted. Retrospective reviews seek to make regulatory programs more effective or less burdensome in achieving their regulatory objectives. Many recent presidents have directed agencies to evaluate or reconsider existing regulations. For example, in 2011 President Obama issued Executive Orders 13563 and 13579. Among other provisions, Executive Orders 13563 and 13579 require executive branch agencies and encourage independent regulatory agencies, such as the federal banking regulators, respectively, to develop and implement retrospective review plans for existing significant regulations. Further, the Trump Administration has continued to focus on the need for agencies to improve regulatory effectiveness while reducing regulatory burdens. Executive Order 13777, issued by President Trump in February 2017, also reaffirms the objectives of previous executive orders and directs agency task forces to identify regulations which, among other criteria, are outdated, unnecessary, or ineffective. In addition to the executive orders, the Economic Growth and Regulatory Paperwork Reduction Act (EGRPRA) requires federal banking regulators to review the regulations they prescribe not less than once every 10 years and request comments to identify outdated, unnecessary, or unduly burdensome statutory or regulatory requirements. FinCEN and the federal banking regulators have all participated in retrospective reviews of different parts of the BSA/AML regulations. For example, FinCEN officials told us that they review each new or significantly amended regulation to assess its clarity and effectiveness within 18 months of its effective date. Each assessment is targeted to the specific new regulation, or significant change to existing regulations, and a determination is made on how best to evaluate its effectiveness. 
FinCEN officials explained that the agency consistently receives feedback from all of the relevant stakeholders, including law enforcement, regulated entities, relevant federal agencies, and the public, which informs its retrospective reviews. Based on the specific findings of an assessment, FinCEN considers whether to publish guidance or whether additional rulemaking is required. For example, FinCEN officials explained that they revised the money services business definitions to adapt to evolving industry practice as part of the regulatory review process. As part of fulfilling their requirements under EGRPRA, the federal banking regulators—through the Federal Financial Institutions Examination Council (FFIEC)—have also participated in retrospective reviews of BSA/AML regulations. As part of the 2017 EGRPRA review, FFIEC received several public comments on BSA/AML requirements, including comments on raising the thresholds for filing CTRs and SARs and on the increasing overall cost and burden of BSA compliance. The federal banking regulators referred the comments to FinCEN. FinCEN is not a part of the EGRPRA review and is not required to consider the comments; however, in its response in the 2017 EGRPRA report, the agency stated that it finds the information helpful when assessing BSA requirements. FinCEN officials and the federal banking regulators stated that the agencies are working to address the BSA-related EGRPRA comments—particularly those related to CTR and SAR filing requirements—through the BSA Advisory Group (BSAAG), which established three subcommittees to address some of the concerns raised during the EGRPRA process. One subcommittee is reviewing the metrics used by industry, law enforcement, and FinCEN to assess the value and effectiveness of BSA reporting. 
Another subcommittee is focusing on how SAR filing requirements could be streamlined or reduced while maintaining the value of the data, and the third subcommittee is focusing on issues related to the filing of CTRs. FinCEN and the federal banking regulators are also considering, through the advisory group, the EGRPRA comments that involve the supervisory process and expectations related to BSA examinations of financial institutions. FinCEN officials stated that there have been significant discussions during two BSAAG meetings since the 2017 EGRPRA report was issued and that, as of November 2017, all of these efforts are ongoing. In addition to the BSAAG, regulators told us that the FFIEC BSA/AML working group has discussed EGRPRA and other compliance burden issues at its recent meetings and is trying to promote BSA examination consistency through its monthly meetings and with the interagency FFIEC BSA/AML examination manual. The actions FinCEN and the federal banking regulators have taken related to derisking—issuing guidance, conducting internal agency reviews, and meeting with affected Southwest border residents—have not been aimed at addressing and, if possible, ameliorating the full range of factors that influence banks to engage in derisking, in particular banks’ regulatory concerns and BSA/AML compliance efforts. Further, the actions regulators have taken to address concerns raised in BSA/AML retrospective reviews have focused primarily on the burden resulting from the filing of CTRs and SARs, but again, these actions have not evaluated how regulatory concerns may influence banks to engage in derisking or close branches. Federal internal control standards call for agencies to analyze and respond to risks to achieving their objectives. Further, guidance implementing Executive Orders 13563 and 13579 states that agencies should consider conducting retrospective reviews of rules that have been overtaken by unanticipated circumstances. 
Our evidence shows that derisking may be an unanticipated response from the banking industry to BSA/AML regulations and their implementation. For example, our evidence demonstrates that banks not only terminate or limit customer accounts as a way to address legitimate money laundering and terrorist financing threats, but also, in part, as a way to manage regulatory concerns. Further, our econometric models and discussions with bank representatives suggest that BSA/AML compliance costs and risks can play a role in the decision to close a branch. The actions FinCEN and the federal banking regulators have taken to address derisking and the retrospective reviews that have been conducted have not been broad enough to evaluate all of the BSA/AML factors banks consider when they derisk or close branches, including banks’ regulatory concerns which may influence their willingness to provide services. Without assessing the full range of BSA/AML factors that may be influencing banks to derisk or close branches, FinCEN, the federal banking regulators, and Congress do not have the information they need to determine if adjustments are needed to ensure that the BSA/AML regulations and their implementation are achieving their regulatory objectives in the most effective and least burdensome way. BSA/AML regulations promote the integrity of the financial system by helping a number of regulatory and law enforcement agencies detect money laundering, drug trafficking, terrorist financing, and other financial crimes. As with any regulation, oversight after implementation is needed to ensure the goals are being achieved and that unnecessary burdens are identified and ameliorated. The collective findings from our work indicate that BSA/AML regulatory concerns have played a role in banks’ decisions to terminate and limit accounts and close branches. 
However, the actions taken to address derisking by the federal banking regulators and FinCEN and the retrospective reviews conducted on BSA/AML regulations have not fully considered or addressed these effects. Retrospective reviews help agencies evaluate how existing regulations are working in practice and can help make regulatory programs more effective or less burdensome in achieving their regulatory objectives. BSA/AML regulations have helped to detect money laundering and other financial crimes, but there are also real concerns about the unintended effects, such as derisking, that these regulations and their implementation may be having. While it is important to evaluate how effective BSA/AML regulations are in helping to identify money laundering, terrorist financing, and other financial crimes, it is also important to identify and attempt to address any unintended outcomes. We have found that reduced access to banking services can have consequential effects on local communities. However, without evaluating how banks’ regulatory concerns may be affecting their decisions to provide services, the federal banking regulators, FinCEN, and Congress do not have the information to determine if BSA/AML regulations and their implementation can be made more effective or less burdensome in achieving their regulatory objectives. We are making four recommendations to FinCEN and the three federal banking regulators in our review—FDIC, the Federal Reserve, and OCC—to jointly conduct a retrospective review of BSA/AML regulations and their implementation for banks. The Director of FinCEN should jointly conduct a retrospective review of BSA/AML regulations and their implementation for banks with FDIC, the Federal Reserve, and OCC. This review should focus on how banks’ regulatory concerns may be influencing their willingness to provide services. 
In conducting the review, FDIC, the Federal Reserve, OCC, and FinCEN should take steps, as appropriate, to revise the BSA regulations or the way they are being implemented to help ensure that BSA/AML regulatory objectives are being met in the most effective and least burdensome way. (Recommendation 1) The Chairman of FDIC should jointly conduct a retrospective review of BSA/AML regulations and their implementation for banks with the Federal Reserve, OCC, and FinCEN. This review should focus on how banks’ regulatory concerns may be influencing their willingness to provide services. In conducting the review, FDIC, the Federal Reserve, OCC, and FinCEN should take steps, as appropriate, to revise the BSA regulations or the way they are being implemented to help ensure that BSA/AML regulatory objectives are being met in the most effective and least burdensome way. (Recommendation 2) The Chair of the Federal Reserve should jointly conduct a retrospective review of BSA/AML regulations and their implementation for banks with FDIC, OCC, and FinCEN. This review should focus on how banks’ regulatory concerns may be influencing their willingness to provide services. In conducting the review, FDIC, the Federal Reserve, OCC, and FinCEN should take steps, as appropriate, to revise the BSA regulations or the way they are being implemented to help ensure that BSA/AML regulatory objectives are being met in the most effective and least burdensome way. (Recommendation 3) The Comptroller of the Currency should jointly conduct a retrospective review of BSA/AML regulations and their implementation for banks with FDIC, the Federal Reserve, and FinCEN. This review should focus on how banks’ regulatory concerns may be influencing their willingness to provide services. 
In conducting the review, FDIC, the Federal Reserve, OCC, and FinCEN should take steps, as appropriate, to revise the BSA regulations or the way they are being implemented to help ensure that BSA/AML regulatory objectives are being met in the most effective and least burdensome way. (Recommendation 4) We provided a draft of this report to CFPB, the Department of Justice, the Federal Reserve, FDIC, Treasury/FinCEN, and OCC. The Federal Reserve, FDIC, and OCC provided written comments that have been reproduced in appendixes IV–VI, respectively. Treasury/FinCEN did not provide a written response to the report. FDIC, Treasury/FinCEN, and OCC provided technical comments on the draft report, which we have incorporated, as appropriate. CFPB and the Department of Justice did not have any comments on the draft of this report. In their written responses, the Federal Reserve, FDIC, and OCC agreed to leverage ongoing interagency work reviewing BSA/AML regulations and their implementation for banks to address our recommendation. We agree that using existing interagency efforts is an appropriate means for conducting a retrospective review of BSA/AML regulations that focuses on evaluating how banks’ BSA/AML regulatory concerns may be influencing their willingness to provide services. The Federal Reserve, FDIC, and OCC also raised concerns with some of the findings of our report and the methodologies we used. For example, in their responses, each agency noted that the report did not take into consideration the extent to which law enforcement activities may be a driver of account terminations and branch closures in the Southwest border region. In response to this comment, we added some information to the report that we received from law enforcement officials about instances in which some account terminations were the result of law enforcement’s identification of suspicious accounts. 
This type of account termination, however, is not included in our definition of the term “derisking,” because such terminations are consistent with BSA/AML purposes. In addition, when we discuss the role that enforcement actions have played in making Southwest border banks more conservative in their account offerings, we have clarified the language to ensure it encompasses both regulatory enforcement actions taken by the federal banking regulators and criminal enforcement actions taken by law enforcement agencies. Treasury/FinCEN’s technical comments also noted that the report did not take into consideration the 2010 Mexican exchange control regulations and their subsequent changes, which it considers to be the most important catalyst of changes to BSA risk profiles for banks in the Southwest border region. To address this comment, we added language describing these regulations and their potential effects on Southwest border banks. In its written response, the Federal Reserve stated that the report does not find a causal linkage between the agency’s regulatory oversight and derisking decisions made by some banks that operate along the Southwest border (see app. IV). OCC made a similar comment in its technical comments on the draft report. While the methodologies used in our report, which included a nationally representative survey of banks, econometric modeling of potential drivers of branch closures, and discussions with bank representatives, do not on their own allow us to make a definitive causal linkage between regulation and derisking, the collective evidence we gathered indicates that banks’ BSA/AML regulatory concerns have played a role in their decisions to terminate and limit accounts and close branches. We believe that, based on this evidence, further examination by the federal banking regulators and FinCEN into how banks’ perceived regulatory concerns are affecting their offering of services is warranted. 
OCC’s written response noted that the definition of derisking we used is inconsistent with definitions used by other regulatory bodies and that our definition encompasses a wide range of situations in which banks limit certain services or end customer relationships (see app. VI). Treasury/FinCEN also made a similar comment in its technical comments on the draft report. OCC’s letter notes that FATF and the World Bank define derisking as situations in which financial institutions terminate or restrict business relationships with entire countries or classes of customers in order to avoid, rather than to manage, AML-related risks. We, however, defined derisking for the purposes of our report as the practice of banks limiting certain services or ending their relationships with customers to, among other things, avoid perceived regulatory concerns about facilitating money laundering, because this definition best described the bank behavior we wanted to examine. While we recognize that there are narrower definitions of derisking that focus solely on the treatment of entire countries or classes of customers, we chose to focus on banks’ perceived regulatory concerns because these concerns could influence banks’ decisions to provide services in a variety of ways. Moreover, including perceived regulatory concerns as a factor enabled us to examine whether there were ways the federal regulators may be able to improve the implementation of BSA/AML to reduce the effects of derisking on different populations of banking customers. Furthermore, our definition is broader and allows us to include individual decisions banks make to terminate or limit accounts, as well as decisions affecting whole categories of customer accounts. Our decision to define derisking in this manner was based on, among other things, discussions we had with representatives of Southwest border banks who indicated such behavior was occurring. 
We added additional information on the definition of derisking we chose to our scope and methodology section (see app. I). OCC’s response letter also notes that because we focus exclusively on BSA/AML regulatory issues, the report does not take into consideration other reasons that banks terminate account relationships. We recognize that banks may terminate accounts for a variety of reasons, some of which are not related to BSA/AML regulatory issues. However, because the focus of our review was to determine why banks are terminating accounts for BSA/AML regulatory reasons, we did not seek to identify all the potential reasons banks may terminate accounts. Finally, OCC’s letter states that the agency has concerns regarding our econometric analysis and the conclusions that can be drawn from it. FDIC made similar comments in its technical comments on the draft report. In response to these comments, we have clarified how we interpret the effect of money laundering-related risk in our models. We agree that the econometric results on their own do not provide definitive evidence that regulatory burden is causing branch closures, but our econometric models and discussions with bank representatives together suggest that BSA/AML compliance costs and risks can play a role in the decision to close a branch. FDIC’s written letter states that the report does not distinguish account or branch closures resulting from suspected money laundering or other illicit financial transactions from closures that may have resulted from ineffective or burdensome regulations. In response to this concern, we revised language in the report to ensure that we do not imply that instances in which banks limit services or terminate relationships based on credible evidence of suspicious or illegal activity reflect derisking behavior. 
As noted above, we also clarified how we interpret the effect of money laundering-related risk on branch closures in our models and recognize that our econometric results alone do not provide definitive evidence that regulatory burden is causing branch closures. However, our econometric models coupled with discussions we had with bank representatives suggest that BSA/AML compliance costs and risks can play a role in the decision to close a branch. FDIC’s letter also stated that our report highlighted that 1 in 10 branch closures may be due to “compliance challenges.” This statement is incorrect. The report states that nearly half of the Southwest border bank representatives (4 of 10) we spoke with mentioned that BSA/AML compliance costs could be among the factors considered in whether or not to close a branch. Further, we identified one bank that considered closing a branch as an option to address considerable BSA/AML compliance challenges. In addition, most Southwest border bank representatives we spoke with said that the financial performance of the branch is one of the most important factors they consider when deciding to close a branch, and as we describe in the report, BSA/AML compliance can be resource intensive, which may affect the financial performance of a branch. We are sending copies of this report to the appropriate congressional committees, the Director of Financial Crimes Enforcement Network, the Chairman of the Federal Deposit Insurance Corporation, the Chair of the Board of Governors of the Federal Reserve System, the Comptroller of the Currency, the Attorney General, the Acting Director of the Bureau of Consumer Financial Protection, and other interested parties. The report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or evansl@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs are listed on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. The objectives of this report were to (1) describe the types of heightened Bank Secrecy Act/anti-money laundering (BSA/AML) compliance risks that Southwest border banks may face and the BSA/AML compliance challenges they may experience; (2) determine the extent to which banks are terminating accounts and closing bank branches in the Southwest border region and their reasons for any terminations or closures; (3) describe what Southwest border banking customers and others told us about any effects of account terminations and branch closures on Southwest border communities; and (4) evaluate how the Department of the Treasury’s (Treasury) Financial Crimes Enforcement Network (FinCEN) and the federal banking regulators—the Board of Governors of the Federal Reserve System (Federal Reserve), Federal Deposit Insurance Corporation (FDIC), and Office of the Comptroller of the Currency (OCC)—have assessed and responded to concerns about derisking in the Southwest border region and elsewhere, and the effectiveness of those efforts. We defined “derisking” to mean the practice of banks limiting certain services or ending their relationships with customers to, among other things, avoid perceived regulatory concerns about facilitating money laundering. We developed this definition by reviewing various existing definitions used by international banking industry standard setters and others, including the Financial Action Task Force (FATF)—an intergovernmental body that, among other things, sets standards for combating money laundering; the Bank for International Settlements; the World Bank; and the Global Partnership for Financial Inclusion. 
We also reviewed guidance and other documentation issued by the federal banking regulators, Treasury, and FinCEN; research reports on derisking; an industry survey; and testimonial evidence from several banks we interviewed. The methodologies we used allowed us to gather information on a variety of factors that may be causing banks to limit services, while our definition of derisking allowed us to focus on the role played by the federal regulators in implementing BSA/AML requirements. We defined the Southwest border region as all counties that have at least 25 percent of their landmass within 50 miles of the U.S.-Mexico border. Thirty-three counties fell within this definition. They are: Cochise, Pima, Santa Cruz, and Yuma, Arizona; Imperial and San Diego, California; Dona Ana, Hidalgo, and Luna, New Mexico; and Brewster, Brooks, Cameron, Culberson, Dimmit, Edwards, El Paso, Hidalgo, Hudspeth, Jeff Davis, Jim Hogg, Kenedy, Kinney, La Salle, Maverick, Presidio, Starr, Terrell, Uvalde, Val Verde, Webb, Willacy, Zapata, and Zavala, Texas. We excluded credit unions from the scope of our review based on discussions with and information received from the National Credit Union Administration (NCUA)—which oversees credit unions for compliance with BSA/AML requirements—and two regional credit union groups that cover the Southwest border states. These groups noted that neither branch closures nor account terminations by credit unions were prevalent in the Southwest border region. To describe the types of heightened BSA/AML compliance risks that Southwest border banks may face and the BSA/AML compliance challenges they may experience, we analyzed data from FinCEN on the volume of Suspicious Activity Reports (SAR) and Currency Transaction Reports (CTR) filed by bank branches in Southwest border counties and compared the volume of those filings to filings in similar geographic areas outside the Southwest border region from 2014 through 2016. 
To adjust for variances in the size of counties, which may be reflected in the number of SAR and CTR filings by counties, we standardized the quantity of SARs and CTRs filed by county by calculating the number of SAR and CTR filings per billion dollars in bank branch deposits. We used data from FDIC’s Summary of Deposits database for information on bank branch deposits. To construct comparison groups that were comparable along some key dimensions, we matched Southwest border counties to counties with the same 2013 Rural-Urban Continuum Code (RUCC), which measures how urban or rural a county is, and by population if there was more than one potential matching county. We undertook this process for two comparison groups, one for counties in Southwest border states, but not directly on the U.S.-Mexico border, and one for counties outside the Southwest border states that were designated as High Intensity Financial Crimes Areas (HIFCA) or High Intensity Drug Trafficking Areas (HIDTA). In addition, we analyzed data on BSA/AML bank examination violations using nonpublic data provided by FDIC, OCC, and the Federal Reserve from January 2009 through June 2016. We obtained data for all Southwest border banks (if they had been cited for a BSA/AML compliance violation during the period we reviewed), as well as aggregated data for all banks in the United States that received a BSA/AML compliance violation during the period we reviewed. Because each regulator categorized violations differently, we developed a set of categories to apply to violations across all three regulators. We analyzed the distribution of violations by category. In addition, we analyzed data on BSA/AML informal enforcement actions provided by the federal banking regulators and formal BSA/AML enforcement actions taken by the federal banking regulators and FinCEN from January 2009 through June 2016. 
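The deposit-based standardization described above can be sketched in a few lines of code. The county names, filing counts, and deposit amounts below are illustrative assumptions for the sketch, not actual figures from our analysis.

```python
# Illustrative sketch of standardizing SAR filings by bank branch deposits.
# All county names and figures below are hypothetical, not actual GAO data.

def filings_per_billion(filings, deposits_dollars):
    """Number of filings per billion dollars in bank branch deposits."""
    return filings / (deposits_dollars / 1_000_000_000)

counties = {
    # county: (SAR filings, total branch deposits in dollars)
    "Border County A": (1_200, 2_500_000_000),
    "Comparison County B": (400, 2_000_000_000),
}

for name, (sars, deposits) in counties.items():
    rate = filings_per_billion(sars, deposits)
    print(f"{name}: {rate:.0f} SARs per $1 billion in deposits")
```

Dividing by deposits rather than, say, population keeps counties with very different banking footprints on a comparable footing, which is the intent of the standardization described above.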
We also reviewed documentation from BSA/AML examinations of selected Southwest border banks to gain additional context about BSA/AML violations. In addition, we interviewed representatives from 19 Southwest border banks. Using data from FDIC’s Summary of Deposits database, we identified all Southwest border banks as of June 30, 2016. We then selected banks to interview in the following ways. First, we interviewed four of the five largest Southwest border banks (based on asset size). Second, as part of our site visits to communities in the Southwest border region (described below), we interviewed nine Southwest border banks that operate in or near the communities we visited—Nogales, Arizona; San Ysidro, California; and McAllen, Texas. We selected banks in these communities based on the following criteria: (1) the number of branches the bank operates in the Southwest border region, focusing on banks that operate only a few branches in the region; (2) the size of the bank based on assets; and (3) the bank’s primary federal regulator. We focused our selection on banks that operate fewer branches in the region because we interviewed four of the five largest banks in the region that operate many branches in the region. To the extent that a bank was located in the community and willing to speak with us, we interviewed at least one bank that was regulated by each federal banking regulator (Federal Reserve, FDIC, and OCC). Third, we interviewed six additional Southwest border banks as part of the development of our bank survey (described in more detail below) and also asked them questions related to their efforts to comply with BSA/AML requirements. We selected these banks using the same criteria we used for the selection of banks in our site visit communities: the bank’s primary federal regulator, size of the bank (based on assets), and number of branches. 
For the interviews, we used a semistructured interview protocol, and responses from bank officials were open-ended to allow for a wide variety of perspectives and responses. Responses from these banks are not generalizable to all Southwest border banks. In addition to the interviews with banks, we also interviewed officials from FDIC, Federal Reserve, and OCC, as well as BSA/AML examination specialists from each federal banking regulator to gain their perspectives on the risks faced by banks in the Southwest border region. To determine the extent to which banks are terminating accounts in the Southwest border region and the reasons for the terminations, we administered a web-based survey to a nationally representative sample of banks to obtain information on bank account terminations for reasons related to BSA/AML risk. In the survey, we asked banks about limitations and terminations of accounts related to BSA/AML risk, the types of customer categories being limited or terminated, and the reasons for these decisions. We administered the survey from July 2017 to September 2017, and collected information for the 3-year time period of January 1, 2014, to December 31, 2016. Appendix II contains information on the survey results. To identify the universe of banks, we used data from FDIC’s Statistics on Depository Institutions database. Our initial population list, downloaded from this database as of December 31, 2016, contained 5,922 banks. We stratified the population into five sampling strata and used a stratified random sample. First, banks that did not operate in the Southwest border region (non-Southwest border banks) were stratified into four asset sizes (small, medium, large, and extra-large). Second, to identify the universe of Southwest border banks, we used FDIC’s Summary of Deposits database as of June 30, 2016. This is a hybrid stratification scheme. 
Our initial sample size allocation was designed to achieve a stratum-level margin of error no greater than plus or minus 10 percentage points for an attribute level at the 95 percent level of confidence. Based upon prior surveys of financial institutions, we assumed a response rate of 75 percent to determine the sample size for the asset size strata. Because there are only 17 extra-large banks in the population, we included all of them in the sample. We also included the entire population of 115 Southwest border banks as a separate certainty stratum. We reviewed the initial population list of banks in order to identify nontraditional banks not eligible for this survey. We treated nontraditional banks as out-of-scope. We also reviewed the initial population list to determine whether subsidiaries of the same holding company should be included separately in the sample. In addition, during the administration of our survey, we identified six banks that had been acquired by another bank, as well as one additional bank that was nontraditional and, therefore, not eligible for this survey. We treated these sample cases as out-of-scope; this adjusted our population of banks to 5,805 and reduced our sample size to 406. We obtained a weighted survey response rate of 46.5 percent. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (for example, plus or minus 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Confidence intervals are provided along with each sample estimate in the report. 
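The sample-size logic described above can be sketched as follows. This is one common formulation (a proportion estimate with a finite-population correction, inflated for expected nonresponse), offered as an illustration rather than the exact calculation we performed; the stratum size in the example is hypothetical.

```python
import math

Z_95 = 1.96  # two-sided critical value for 95 percent confidence

def required_sample(stratum_size, margin=0.10, p=0.5, response_rate=0.75):
    """Banks to sample so an attribute estimate falls within +/- margin
    at 95 percent confidence, given an assumed response rate."""
    n0 = (Z_95 ** 2) * p * (1 - p) / margin ** 2   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / stratum_size)         # finite-population correction
    return math.ceil(n / response_rate)            # inflate for expected nonresponse

# Hypothetical stratum of 1,500 banks
print(required_sample(1500))
```

Setting p to 0.5 is the conservative convention: it maximizes p(1 - p), so the resulting sample size holds for any attribute proportion in the stratum.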
All survey results presented in the body of this report are generalizable to the estimated population of 5,805 in-scope depository institutions, except where otherwise noted. The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question or differences in the sources of information available to respondents can introduce unwanted variability into the survey results. We took steps in developing the questionnaire, collecting the data, and analyzing the results to minimize such nonsampling error. To inform our methodological approach and our survey development, we conducted interviews with representatives from seven selected Southwest border banks. From these interviews, we gathered information on the type and amount of data banks keep on account terminations for reasons related to BSA/AML risk. The selection process used to identify these banks is described above. We conducted pretests of the survey with four banks. We selected these banks to achieve variation in geographic location (within and outside the Southwest border region) and asset size (small, large, and extra-large). The pretests of the survey were conducted to ensure that survey questions were clear, to obtain any suggestions for clarification, and to determine whether representatives would be able to provide responses to questions with minimal burden. We also interviewed the federal banking regulators; federal, state, and local law enforcement officials; and bank industry associations to obtain their perspectives on banks’ experience with account terminations. To determine the extent to which banks have closed branches in the Southwest border region and the reasons for the closures, we analyzed data from a variety of sources and interviewed bank officials. To assess trends in bank branch closures, we analyzed data from FDIC’s Summary of Deposits database on the size and location of bank branches. 
Our measure of bank branches includes both full-service and limited-service branches. Limited-service branches provide some conveniences to bank customers but generally offer a reduced set of bank services. As of 2016, limited-service branches were about 2.5 percent of branches in the Southwest border region. We compared growth rates for all branches in the Southwest border region and only full-service branches, for 2013 through 2016, and found that they were almost identical (-5.92 percent and -5.93 percent, respectively). We combined the Summary of Deposits data on the size and location of bank branches with demographic, economic, and money laundering-related risk data from the U.S. Census Bureau, U.S. Department of Commerce’s Bureau of Economic Analysis, and FinCEN, among other sources. We then utilized the merged dataset to conduct an econometric analysis of the potential drivers of branch closures (see app. III for information on the econometric analysis). We also compared trends in branch closures in the Southwest border region to national trends, as well as trends in counties in Southwest border states that were not in the Southwest border region, and trends in HIFCA and HIDTA counties not in Southwest border states. We also interviewed representatives from banks that operate in the Southwest border region about the time and resources required to file SARs and how they approached the decision to close a branch. To describe what Southwest border banking customers and others told us about any effects of account terminations and branch closures in Southwest border communities, we conducted site visits to communities in three of the four Southwest border states (Nogales, Arizona; San Ysidro, California; and McAllen, Texas). 
We selected these communities to achieve a sample of locations that collectively satisfied the following criteria: (1) counties with different classifications of how rural or urban they are, based on their Rural-Urban Continuum Code (RUCC) classification; (2) counties that experienced different rates of branch closures from 2013 through 2016; and (3) counties that had received different designations by the federal banking regulators as distressed or underserved as of June 1, 2016. Perspectives gathered from our visits to the selected cities cannot be generalized to all locations in Southwest border counties. During our site visits, we conducted a total of five discussion groups and summarized participants’ responses about how they were affected by account terminations and branch closures in their communities. Discussion groups included a range of 2 to 10 participants with varied experiences related to access to banking services in their area, including customers whose accounts were terminated or whose branch was closed. Participants were selected using a convenience sampling method, whereby we coordinated with local city government and chamber of commerce officials who agreed to help us recruit participants and identify facilities where the discussion groups were held. Local officials disseminated discussion group invitations and gathered demographic data on potential participants. Three of the five discussion group sessions included business banking customers—persons representing businesses that utilize banking services (such as banking accounts or business loans). The other two sessions included nonbusiness retail banking customers—persons with individual experience with banking services (such as a personal checking or savings account)—and were conducted in Spanish. Each session was digitally recorded, translated (if necessary), and transcribed by an outside vendor, and we used the transcripts to summarize participant responses. 
An initial coder assigned a code that best summarized the statements from discussion group participants and provided an explanation of the types of discussion group participant statements that should be assigned to a particular code. A separate individual reviewed and verified the accuracy of the initial coding. The initial coder and reviewer discussed orally and in writing any disagreements about code assignments and documented consensus on the final analysis results. Discussion groups are intended to generate in-depth information about the reasons for the participants’ views on specific topics. The opinions expressed by the participants represent their points of view and may not represent the views of all residents in the Southwest border region. We also interviewed various border stakeholders, including economic development specialists; industry and trade organizations that focus on border trade and commerce; and chamber of commerce and municipal officials representing border communities. We reviewed recent articles on the effects of account terminations and branch closures on communities, as well as research organization, industry, and government reports. Finally, we reviewed academic studies on the effects of branch closings on communities. In particular, we focused our review on one recent paper that used detailed geographic and lending data to estimate the impact of branch closings on employment growth and small business lending, among other outcomes. We identified the census tracts of all branch closures in our three site visit communities from 2013 through 2016 and applied impact estimates from this research to the level of small business lending and employment in these communities, based on data from Community Reinvestment Act reporting (small business lending) and the U.S. Census American Community Survey (employment). These results are intended to illustrate an approximate magnitude of effects, not to produce precise estimates of local impacts. 
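The two-coder verification step described above is sometimes summarized with an agreement statistic. The sketch below, using hypothetical codes of our own invention, computes Cohen's kappa, a chance-corrected measure of intercoder agreement:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders' label assignments:
    (observed agreement - expected agreement) / (1 - expected agreement)."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to ten participant statements by two coders
coder1 = ["access", "cost", "access", "travel", "access",
          "cost", "travel", "access", "cost", "access"]
coder2 = ["access", "cost", "access", "travel", "cost",
          "cost", "travel", "access", "cost", "access"]
print(round(cohens_kappa(coder1, coder2), 3))
```

Here nine of ten statements are coded identically, and chance agreement given the code frequencies is 0.36, giving a kappa of about 0.84; values near 1 indicate strong agreement.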
To evaluate how FinCEN and the federal banking regulators have assessed and responded to concerns about derisking and the effectiveness of those efforts, we reviewed guidance the agencies issued to banks related to derisking, related agency memorandums and documents, and an OCC internal analysis on derisking. We also reviewed guidance from FATF on AML and terrorist financing measures and financial inclusion. In addition, we reviewed various executive orders that require most executive branch agencies, and encourage independent agencies, to develop a plan to conduct retrospective analyses, and Office of Management and Budget guidance implementing those executive orders. We reviewed Treasury documentation on BSA regulatory reviews and the BSA-related components of the 2007 and 2017 Economic Growth and Regulatory Paperwork Reduction Act reports issued by the Federal Financial Institutions Examination Council (FFIEC). We also reviewed federal internal control standards related to risk assessment. Finally, we interviewed officials from FinCEN and the federal banking regulators about the actions they have taken related to derisking, as well as retrospective reviews they had conducted on BSA regulations. We utilized multiple data sources throughout our review and took steps to assess the reliability of each one. First, to assess the reliability of data in FDIC’s Summary of Deposits database we discussed the appropriateness of the database for our purposes with FDIC officials, reviewed related documentation, and conducted electronic testing for missing data, outliers, or any obvious errors. Second, to assess the reliability of FinCEN’s data on SAR and CTR filings, we interviewed knowledgeable agency officials on the appropriateness of the data for our purposes, any limitations associated with the data, and the methods they used to gather the data for us. 
We also reviewed related documentation and conducted electronic testing to identify missing data, outliers, and any obvious errors. Third, we assessed the reliability of the HIFCA and HIDTA county designations by interviewing officials from FinCEN, the Office of National Drug Control Policy, and the National HIDTA Assistance Center on changes to county designations over time and reviewed related documentation. Fourth, to assess the reliability of FDIC’s Statistics on Depository Institutions database, we reviewed related documentation and conducted electronic testing of the data for missing data, outliers, or any obvious errors. Fifth, we interviewed officials from FDIC, the Federal Reserve, and OCC on the data the agencies collect related to BSA/AML bank exam violations and also asked them questions related to methods they used to gather the data for us and any limitations associated with the data. We also manually reviewed the data for any obvious errors and followed up with agency officials, as needed. Finally, for data we obtained from the U.S. Census Bureau (American Community Survey data on population and age and the Residential Building Permits Survey), the Bureau of Economic Analysis (Local Area Personal Income), and Department of Agriculture (Rural-Urban Continuum Codes), we reviewed related documentation, interviewed knowledgeable officials about the data, when necessary, and conducted electronic testing of the data for missing data, outliers, or any obvious errors. We concluded that all applicable data were sufficiently reliable for the purposes of describing BSA/AML risks and compliance challenges for Southwest border banks; identifying banks to survey on account terminations and limitations; evaluating branch closure trends in the Southwest border region and elsewhere, and the factors driving those closures; and describing the effects for Southwest border communities experiencing branch closures and account terminations. 
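The electronic testing described above (scanning a dataset for missing data, outliers, or any obvious errors) can be sketched as a simple screening routine. The records and thresholds here are hypothetical illustrations of ours, not GAO's actual tests:

```python
import statistics

def screen_records(records, field, outlier_k=2.0):
    """Flag records with missing values and gross outliers (more than k
    standard deviations from the mean) in one numeric field."""
    missing = [r for r in records if r.get(field) is None]
    values = [r[field] for r in records if r.get(field) is not None]
    mean, sd = statistics.mean(values), statistics.stdev(values)
    outliers = [r for r in records if r.get(field) is not None
                and abs(r[field] - mean) > outlier_k * sd]
    return missing, outliers

# Hypothetical branch-deposit records (dollars in thousands)
branches = [{"id": i, "deposits": d}
            for i, d in enumerate([120, 135, 110, 150, None, 125, 9000, 140])]
missing, outliers = screen_records(branches, "deposits")
print(len(missing), [r["id"] for r in outliers])
```

Flagged records would then be followed up with the source agency, as the appendix describes, rather than silently dropped.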
We conducted this performance audit from March 2016 to February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. From July 2017 to September 2017, we administered a web-based survey to a nationally representative sample of banks. In the survey, we asked banks about the number of account terminations for reasons related to Bank Secrecy Act/anti-money laundering (BSA/AML) risk; whether banks are terminating, limiting, or not offering accounts to certain types of customer categories; and the factors influencing these decisions. We collected information for the 3-year time period of January 1, 2014, to December 31, 2016. All survey results presented in this appendix are generalizable to the population of banks, except where otherwise noted. We obtained a weighted survey response rate of 46.5 percent. Because our estimates are from a generalizable sample, we express our confidence in the precision of our particular estimates as 95 percent confidence intervals. Responses to selected questions we asked in our survey that were directly applicable to the research objectives in this report are shown below. Survey results presented in this appendix are categorized into three groups: (1) all banks nationwide, (2) Southwest border banks, and (3) non-Southwest border banks, unless otherwise noted. Our survey consisted of closed- and open-ended questions. In this appendix, we do not provide information on responses provided to the open-ended questions. For a more detailed discussion of our survey methodology, see appendix I. 
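The 95 percent confidence intervals described above can be approximated, for a simple random sample, with the normal approximation for a proportion. This sketch is illustrative only; the report's actual estimates reflect the stratified design and survey weights:

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Normal-approximation 95 percent confidence interval for a proportion
    estimated from a simple random sample of size n (ignores stratification)."""
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

# A hypothetical estimate of 80 percent based on 50 respondents
low, high = proportion_ci(0.80, 50)
print(f"{low:.1%} to {high:.1%}")
```

With 50 respondents, an 80 percent estimate carries a half-width of roughly 11 percentage points; larger samples shrink the interval in proportion to the square root of n.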
Questions 15 through 23 applied only to banks in our sample that had branches domiciled both inside and outside of the Southwest border region, in order to obtain information on their accounts domiciled in the Southwest border region. For several of these questions, none of the percentage estimates are statistically reliable. Between January 1, 2014, and December 31, 2016, did the bank terminate any cash-intensive small business checking, savings, or money market accounts domiciled in the bank’s Southwest border branches for reasons related to BSA/AML risk? (Check one.) (Question 21) None of the percentage estimates for this question are statistically reliable. This technical appendix outlines the development, estimation, results, and limitations of the econometric model we described in the report. We undertook this analysis to better understand factors that may have influenced banks to close branches in recent years. We developed a number of econometric models that included demographic, economic, and risk factors that might have influenced branch closures nationally since 2010. We developed these models based on a small number of relevant studies, our discussions with banks and regulators, and our own prior empirical work on banking. Our models are based on all counties with bank branches in the United States and are designed to predict whether a county will lose a branch the following year based on the characteristics of the county. Because we are modeling a binary outcome (whether or not a county lost a branch), we use a specific functional form for our regression models known as a logistic regression (logit). The demographic factors included in our models are rural-urban continuum codes, age profile (proportion of the population of the county over 45), and the level of per capita income. 
We chose these demographic factors, in particular, because they tend to be associated with the adoption of mobile banking, which may explain the propensity to close branches in a community. The economic factors included in our models—intended to reflect temporary or cyclical economic changes affecting the county—are the growth of per capita income, growth in building permits (a measure of residential housing conditions), and growth of the population. The money laundering-related risk factors are whether a county has been designated a High Intensity Financial Crime Area (HIFCA) or a High Intensity Drug Trafficking Area (HIDTA), and the level of suspicious or possible money laundering-related activity reported by bank branches in the county (known as Suspicious Activity Report (SAR) filings). HIDTA and HIFCA designations in our model could proxy for a number of features of a county, including but not limited to the intensity of criminal activity related to drug trafficking or financial crimes. Bank officials we spoke with generally said that filing SARs was a time- and resource-intensive process, and that the number of SAR filings—to some extent—reflected the level of effort, and overall BSA compliance risk, faced by the bank. That said, the impact of SAR variables in our models could reflect a combination of (1) the extent of BSA/AML compliance effort and risk faced by the bank, as described by bank officials, and (2) the underlying level of suspicious or money laundering-related activity in a county. We constructed variables from the following data sources to estimate our models: net branch closures and the size of deposits in each county, from the Federal Deposit Insurance Corporation’s (FDIC) Summary of Deposits; rural-urban continuum codes, from the U.S. Department of Agriculture; population growth and age profile in each county, from the Census Bureau’s American Community Survey; per capita income, from the Bureau of Economic Analysis’s Local Area Personal Income data; building permits by county, from the Census Bureau; HIFCA and HIDTA county designations, from the Financial Crimes Enforcement Network (FinCEN) and the Office of National Drug Control Policy, respectively; and SAR filings by depository institution branches, from FinCEN. We estimated a large number of econometric models to ensure that our results were generally not sensitive to small changes in our model—in other words, to determine whether our results were “robust.” Our results, as described in the body of the report, were highly consistent across models and were generally both statistically and economically significant—that is, results of this size are unlikely to occur at random if there were no underlying relationship (p-values of interest are almost always less than 0.001), and the estimated impacts on the probability of branch closures are substantively relevant. For our baseline model, we estimated branch closures (dependent variable: 1/0 for whether or not a county lost one or more branches, on net, that year) as a function of the 1-year lagged share of the population over 45 in the county, a rural-urban continuum code, the level of per capita income, population growth, growth in the value of building permits, growth in per capita income, whether or not the county is a HIDTA, and the level of SAR filings per billion dollars of deposits held in the county, including time and state fixed effects. Economic variables were adjusted for inflation (converted to constant 2015 dollars) using appropriate price indices. We generally estimated models with cluster-robust standard errors, clustering at the county level. See the logistic regression equation for our baseline model below, where the c subscript represents the county and the t subscript represents the year. 
Here f is the cumulative logistic function: f(z) = e^z / (1 + e^z). Full-year SAR filings are only available for 2014–2016, which is generally the limiting factor on the time dimension of our panel. Because FinCEN changed reporting requirements as of April 2013, we were able to obtain an additional year of data by calculating SAR filings for four truncated years: April–December 2013, April–December 2014, April–December 2015, and April–December 2016. As we discussed earlier in the report, this variable is an important geographic measure of money laundering-related risk, based on a bank-reported measure of the extent of suspicious or money laundering-related activity associated with branches located in a particular county. After confirming that results were similar for full-year and truncated-year SARs, we continued estimation with truncated-year SARs to benefit from the additional year of data. We report estimates from the version of our baseline model that includes truncated-year SARs. Marginal effects for select coefficients (and associated p-values) are reported in table 20 below, along with the time period, sample size, and goodness of fit (pseudo r-squared). Generally speaking, across our baseline specifications and robustness tests, counties were more likely to lose branches, all else equal, if they were (1) urban, high income, and had a younger population (proportion under 45), or (2) designated HIFCA, HIDTA, or had higher SAR filings. Economic variables were generally not statistically significant. Below is a list of the robustness tests we performed (changing how or which variables influenced branch closures in the model, or the time period covered). Unless specifically noted, the results described above were very similar (i.e., robust) in the models listed below: As an alternative to total SARs as an indicator of money laundering-related risk, we estimated a model with only those SARs that were classified as money laundering or structuring. 
Total SARs include suspicious activity that may be unrelated to money laundering or structuring, including, for example, check fraud. As an alternative to HIDTAs as a county risk designation, we estimated a model with HIFCA county designations. The impact of HIFCAs in the model was smaller in magnitude and less statistically significant. We estimated a model interacting HIDTAs with SARs (the interaction suggests SARs have a larger impact on non-HIDTA counties). We estimated models restricted to only rural counties or only urban counties. SARs and HIDTAs have larger effects in urban counties, and the impacts of the age profile and per capita income are not statistically significant in the model with only rural counties. We estimated models with MSA fixed effects or state-year fixed effects, in addition to state and year fixed effects. We estimated models that assumed that economic conditions from the previous 2 years were relevant, or only economic conditions from 2 years prior. Our baseline model assumed only the prior year’s economic conditions influenced branch closures. We estimated a panel logit with random effects. We estimated a panel logit with county fixed effects. None of the results discussed above are statistically significant when county fixed effects are introduced. This suggests that the model is identified primarily based on cross-sectional variation (differences between counties that persist over time) rather than time-series variation in the relevant variables. The role of county fixed effects here may also indicate the presence of unobserved county characteristics that are omitted from our models, although it is generally not possible to simultaneously estimate the role of highly persistent factors that influence branch closures while including fixed effects. 
We estimated models where we omitted small percentage changes in branches from our indicator dependent variable—for example, we estimated models with indicators equal to one only if branch losses were above 3 percent or 5 percent (omitting smaller branch losses from the dependent variable altogether). Generally speaking, demographic factors have less explanatory power for larger loss levels, although SARs remain statistically significant and at practically meaningful magnitudes. This suggests that higher SARs are relatively better at explaining larger branch losses, while demographic factors are better at explaining smaller branch losses. Despite the robustness of our results and our efforts to control for relevant factors, our results are subject to a number of standard caveats. The variables we use come from a number of datasets, and some of them have sampling error, relied on imputation, or are better thought of as proxy variables that measure underlying factors of interest with some degree of error. As such, our statistical measures, including standard errors, p-values, and goodness-of-fit measures such as pseudo r-squared, should be viewed as approximations. Some of the effects we measure based on these variables may reflect associational rather than causal relationships. Also, our regression models may be subject to omitted variable bias or specification bias—for example, it is unlikely that we have been able to quantify and include all relevant factors in bank branching decisions, and even where we have measured important drivers with sufficient precision, the functional form assumptions embedded in our choice of regression model (e.g., logistic regression) are unlikely to be precisely correct. Should omitted variables be correlated with variables that we include, the associated coefficients may be biased. We interpret our results, including our statistical measures and coefficient values, with appropriate caution. 
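Because the baseline model is a logit, marginal effects like those reported for it follow directly from the cumulative logistic function. The sketch below, using hypothetical coefficient values of our own choosing, computes the function and a marginal effect:

```python
import math

def logistic(z):
    """Cumulative logistic function f(z) = e^z / (1 + e^z)."""
    return math.exp(z) / (1 + math.exp(z))

def marginal_effect(beta, z):
    """Change in the predicted probability for a one-unit change in a
    covariate with coefficient beta, evaluated at index value z:
    beta * f(z) * (1 - f(z))."""
    p = logistic(z)
    return beta * p * (1 - p)

print(logistic(0))              # 0.5: even odds when the index is zero
print(marginal_effect(0.8, 0))  # 0.2: marginal effects peak near p = 0.5
```

Because f(z)(1 - f(z)) shrinks as the predicted probability approaches 0 or 1, the same coefficient implies a smaller marginal effect for counties whose other characteristics already make closure very likely or very unlikely.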
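The omitted-variable caveat above can be illustrated with a small simulation of our own: when a factor correlated with an included regressor is left out of a linear model, the included regressor's coefficient absorbs part of the omitted effect.

```python
import random

def ols_slope(xs, ys):
    """One-regressor OLS slope: cov(x, y) / var(x)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

random.seed(7)
n = 10000
x = [random.gauss(0, 1) for _ in range(n)]
w = [0.8 * xi + random.gauss(0, 0.6) for xi in x]  # omitted factor, correlated with x
y = [1.0 * xi + 2.0 * wi + random.gauss(0, 1) for xi, wi in zip(x, w)]

# Regressing y on x alone lets x absorb w's effect: the estimated slope drifts
# toward 1.0 + 2.0 * 0.8 = 2.6 instead of the true direct effect of 1.0.
print(round(ols_slope(x, y), 2))
```

The direction and size of the bias depend on the sign and strength of the correlation between the included and omitted variables, which is why the appendix cautions against causal readings of the coefficients.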
In addition to the individual named above, Stefanie Jonkman (Assistant Director), Christine Houle (Analyst in Charge), Carl Barden, Timothy Bober, Rebecca Gambler, Toni Gillich, Michael Hansen, Michael Hoffman, Jill Lacey, Patricia Moye, Erica Miles, Marc Molino, Steve Robblee, Tovah Rom, Jerry Sandau, Mona Sehgal, Tyler Spunaugle, and Verginie Tarpinian made key contributions to this report.
Some Southwest border residents and businesses have reported difficulties accessing banking services in the region. GAO was asked to review whether Southwest border residents and businesses were losing access to banking services because of derisking and branch closures. This report (1) describes the types of heightened BSA/AML compliance risks that Southwest border banks may face and the BSA/AML compliance challenges they may experience; (2) determines the extent to which banks have terminated accounts and closed branches in the region and the reasons for any terminations and closures; and (3) evaluates how regulators have assessed and responded to concerns about derisking in the region and elsewhere, and how effective their efforts have been; among other objectives. GAO surveyed a nationally representative sample of 406 banks, which included the 115 banks that operate in the Southwest border region; analyzed Suspicious Activity Report filings; developed an econometric model of the drivers of branch closures; and interviewed banks that operate in the region. “Derisking” is the practice of banks limiting certain services or ending their relationships with customers to, among other things, avoid perceived regulatory concerns about facilitating money laundering. The Southwest border region is a high-risk area for money laundering activity, in part, because of a high volume of cash and cross-border transactions, according to bank representatives and others. These types of transactions may create challenges for Southwest border banks in complying with Bank Secrecy Act/anti-money laundering (BSA/AML) requirements because they can lead to more intensive account monitoring and investigation of suspicious activity. 
GAO found that, in 2016, bank branches in the Southwest border region filed 2-1/2 times as many reports identifying potential money laundering or other suspicious activity (Suspicious Activity Reports), on average, as bank branches in other high-risk counties outside the region (see figure). According to GAO's survey, an estimated 80 percent (+/- 11 percent margin of error) of Southwest border banks terminated accounts for BSA/AML risk reasons. Further, according to the survey, an estimated 80 percent (+/- 11) limited or did not offer accounts to customers that are considered high risk for money laundering because the customers drew heightened regulatory oversight—behavior that could indicate derisking. Counties in the Southwest border region have been losing bank branches since 2012, similar to national and regional trends. Nationally, GAO's econometric analysis generally found that counties that were urban, younger, had higher income or had higher money laundering-related risk were more likely to lose branches. Money laundering-related risks were likely to have been relatively more important drivers of branch closures in the Southwest border region. Regulators have not fully assessed the BSA/AML factors influencing banks to derisk. Executive orders and legislation task the Department of the Treasury's Financial Crimes Enforcement Network (FinCEN) and the federal banking regulators with reviewing existing regulations through retrospective reviews to determine whether they should be retained or amended, among other things. FinCEN and federal banking regulators have conducted retrospective reviews of parts of BSA/AML regulations. The reviews, however, have not evaluated how banks' BSA/AML regulatory concerns may influence them to derisk or close branches. GAO's findings indicate that banks do consider BSA/AML regulatory concerns in providing services. 
Without assessing the full range of BSA/AML factors that may be influencing banks to derisk or close branches, FinCEN, the federal banking regulators, and Congress do not have the information needed to determine if BSA/AML regulations and their implementation can be made more effective or less burdensome. GAO recommends that FinCEN and the federal banking regulators conduct a retrospective review of BSA regulations and their implementation for banks. The review should focus on how banks' regulatory concerns may be influencing their willingness to provide services. The federal banking regulators agreed to the recommendation. FinCEN did not provide written comments.
CMS has four principal programs: Medicare, Medicaid, CHIP, and the health-insurance marketplaces. See table 1 for information about the four programs. As discussed earlier, Medicare and Medicaid are CMS’s largest programs and have been growing steadily (see fig. 1). CBO projects that, in 2026, under current law, Medicare spending will reach $1.3 trillion. Medicaid is also expected to continue to grow—program spending is projected to increase 66 percent to over $950 billion by fiscal year 2025, and more than half of the states have chosen to expand their Medicaid programs by covering certain low-income adults not historically eligible for Medicaid coverage, as authorized under the Patient Protection and Affordable Care Act of 2010 (PPACA). The two programs’ use of managed-care delivery systems to provide care has also increased. For example, the number and percentage of Medicare beneficiaries enrolled in Medicare Part C has grown steadily over the past several years, increasing from 8.7 million (20 percent of all Medicare beneficiaries) in calendar year 2007 to 17.5 million (32 percent of all Medicare beneficiaries) in calendar year 2015. As of July 1, 2015, nearly two-thirds of all Medicaid beneficiaries were enrolled in managed-care plans, and about 40 percent of expenditures in fiscal year 2015 were for health-care services delivered through managed care. CMS receives appropriations to carry out antifraud activities through several funds, including the Health Care Fraud and Abuse Control (HCFAC) program and the Medicaid Integrity Program. The HCFAC program was established under the Health Insurance Portability and Accountability Act of 1996 to coordinate federal, state, and local law-enforcement efforts to address health-care fraud and abuse and to conduct investigations and audits, among other things. In fiscal year 2016, CMS received $560 million through the HCFAC program appropriations. 
The Medicaid Integrity Program, established by the Deficit Reduction Act of 2005, supports contracts to audit and identify overpayments in Medicaid claims, and provides technical assistance for states’ program-integrity efforts. According to CMS, it has received $75 million every year since fiscal year 2009 through the Medicaid Integrity Program appropriations. According to CMS, in fiscal year 2016, total program-integrity obligations to address fraud, waste, and abuse for Medicare and Medicaid were $1.45 billion. As mentioned previously, we designated Medicare and Medicaid as high-risk programs starting in 1990 and 2003, respectively, because their size, scope, and complexity make them vulnerable to fraud, waste, and abuse. Similarly, the Office of Management and Budget (OMB) designated all parts of Medicare as well as Medicaid “high-priority” programs because these programs report $750 million or more in estimated improper payments in a given year. We also highlighted challenges associated with improper payments in Medicare and Medicaid in our annual report on duplication and opportunities for cost savings in federal programs. Improper payments are a significant risk to the Medicare and Medicaid programs and can include payments made as a result of fraud. Improper payments are payments that are either made in an incorrect amount (overpayments and underpayments) or that should not have been made at all. For example, CMS estimated in fiscal year 2016 that the Medicare fee-for-service (FFS) improper payment rate was 11 percent (approximately $41 billion) and the Medicaid improper payment rate was 10.5 percent (approximately $36 billion). Improper payment measurement does not specifically identify or estimate improper payments due to fraud. Health-care fraud can take many forms, and a single case can involve more than one scheme. 
Schemes may include fraudulent billing for services not provided, services provided that were not medically necessary, and services intentionally billed at a higher level than appropriate. These schemes may also involve compensating providers, beneficiaries, or others for participating in the fraud. Fraud can be regionally focused or can target particular service areas, such as home-health services or durable medical equipment such as wheelchairs. Fraud may also have nonfinancial effects. For example, patients may be subjected to harmful or unnecessary services by fraudulent providers. Fraud can be perpetrated by different actors, such as providers, beneficiaries, and health-insurance plans, as well as organized crime. Fraud and “fraud risk” are distinct concepts. Fraud is challenging to detect because of its deceptive nature. Additionally, once suspected fraud is identified, alleged fraud cases may be prosecuted. If the court determines that fraud took place, then fraudulent spending may be recovered. Fraud risk exists when individuals have an opportunity to engage in fraudulent activity, have an incentive or are under pressure to commit fraud, or are able to rationalize committing fraud. When fraud risks can be identified and mitigated, fraud may be less likely to occur. Although the occurrence of one or more cases of health-care fraud indicates there is a fraud risk, a fraud risk can exist even if fraud has not yet been identified or occurred. Suspicious billing patterns, certain types of health-care providers, or complexities in program design may indicate a risk of fraud. Information to help identify potential fraud risks may come from various sources, including whistleblowers, agency officials, contractors, law-enforcement agencies, beneficiaries, or providers. According to federal standards and guidance, executive-branch agency managers are responsible for managing fraud risks and implementing practices for combating those risks. 
Federal internal control standards call for agency management officials to assess the internal and external risks their entities face as they seek to achieve their objectives. The standards state that as part of this overall assessment, management should consider the potential for fraud when identifying, analyzing, and responding to risks. Risk management is a formal and disciplined practice for addressing risk and reducing it to an acceptable level. In July 2015, GAO issued the Fraud Risk Framework, which provides a comprehensive set of key components and leading practices that serve as a guide for agency managers to use when developing efforts to combat fraud in a strategic, risk-based way. The Fraud Risk Framework describes leading practices in four components: commit, assess, design and implement, and evaluate and adapt, as depicted in figure 2. The Fraud Reduction and Data Analytics Act of 2015, enacted in June 2016, requires OMB to establish guidelines for federal agencies to create controls to identify and assess fraud risks and design and implement antifraud control activities. The act further requires OMB to incorporate the leading practices from the Fraud Risk Framework in the guidelines. In July 2016, OMB published guidance about enterprise risk management and internal controls in federal executive departments and agencies. Among other things, this guidance affirms that managers should adhere to the leading practices identified in the Fraud Risk Framework. Further, the act requires federal agencies to submit to Congress a progress report each year for 3 consecutive years on the implementation of the controls established under OMB guidelines, among other things. CMS’s antifraud efforts for its four principal programs are part of the agency’s broader program-integrity approach to address fraud, waste, and abuse. CMS’s Center for Program Integrity (CPI) is the agency’s focal point for program integrity across the programs. 
According to CMS, its approach to program integrity allows it to “address the whole spectrum of fraud, waste, and abuse.” For example, CMS describes its program-integrity activities as addressing unintentional errors resulting from providers being unaware of recent policy changes on one end of the spectrum, through somewhat more-serious patterns of abuse such as billing for a more-expensive service than was performed (known as upcoding), and finally up to serious fraudulent activities, such as billing for services that were not provided. CMS then aims to target its corrective actions to fit the risk. See figure 3 for CMS’s description of the spectrum of fraud, waste, and abuse that its program-integrity activities aim to address. Within its program-integrity activities, CMS has established several control activities that are specific to managing fraud risks, while others serve broader program-integrity purposes. According to CMS officials, the agency’s antifraud control activities mainly focus on providers in Medicare FFS. Officials told us that when CPI began operating, its primary focus was developing program integrity for Medicare FFS and, as a result, it is the most “mature” of all of CPI’s programs. CMS’s specific fraud control activities include, for example, the Fraud Prevention System (FPS), a predictive-analytics system that helps identify potentially fraudulent payments in Medicare FFS, and the Unified Program Integrity Contractors (UPIC), which detect and investigate aberrant provider behavior and potential fraud in Medicare and Medicaid. Other control activities serve broader program-integrity purposes, such as reducing improper payments resulting from error, waste, and abuse, in addition to preventing or detecting potential fraud. For example, CMS provides education and outreach to Medicare providers and beneficiaries on issues identified through data analyses in order to reduce improper payments and to increase their awareness of fraud. 
HHS and CMS department- and agency-wide strategic plans guide CMS’s program-integrity activities—including antifraud activities. The program-integrity goals identified in the HHS strategic plan primarily focus on improper payments and are driven by statutory requirements. For example, the HHS strategic plan for fiscal years 2014–2018 includes performance goals of reducing the percentage of improper payments made under Medicare FFS and Medicare Parts C and D. One antifraud-focused goal in the HHS strategic plan is to increase the percentage of Medicare providers and suppliers identified as high risk that receive administrative actions, such as suspending payments to providers or revoking providers’ billing privileges. HHS and CMS department- and agency-wide strategic plans also include an emphasis on fraud prevention and early detection—a leading practice in the Fraud Risk Framework—and moving away from a “pay-and-chase” model. For example, the HHS strategic plan calls for “fostering early detection and prevention of improper payments by focusing on preventing bad actors from enrolling or remaining in Medicare and Medicaid” and to “use public-private partnerships to prevent and detect fraud across the health care industry by sharing fraud-related information and data between the public and private sectors.” As a part of this emphasis on prevention, CMS developed FPS in response to the Small Business Jobs Act of 2010, which required CMS to implement predictive-analytics technologies. Also, PPACA included provisions to strengthen Medicare and Medicaid’s provider enrollment standards and procedures, among other program-integrity provisions. CMS works with an extensive and complex network of stakeholders to manage fraud risks in its four principal programs. In Medicaid and CHIP, CMS partners with and oversees the 50 states and the District of Columbia. 
Until the Deficit Reduction Act of 2005 expanded CMS’s role in Medicaid program integrity to provide effective federal support and assistance to states’ efforts to combat fraud, waste, and abuse, states were primarily responsible for Medicaid program integrity. Each state has its own Medicaid program-integrity unit, Medicaid Fraud Control Unit (MFCU), and state audit organization. CMS also uses numerous contractors to conduct the majority of its program-integrity activities. Since the enactment of Medicare in 1965, contractors have played an integral role in the administration of the program. The original Medicare program was designed so that the federal government contracted with health insurers or similar organizations experienced in handling physician and hospital claims to pay Medicare claims. Later, the Health Insurance Portability and Accountability Act of 1996 required the Secretary of Health and Human Services to enter into contracts to promote the integrity of the Medicare program. According to CMS officials, in fiscal year 2016 contractors received 92 percent of CMS’s program-integrity funding. Medicare and Medicaid program-integrity contractors play a variety of roles: (1) processing and reviewing claims, (2) conducting site visits of providers enrolling in Medicare, (3) auditing claims and recovering overpayments, (4) performing data analysis, and (5) investigating aberrant claims and provider behaviors, among other things. States also use contractors in many of these roles for managing program integrity. Additionally, multiple private health-insurance plans in Medicare Parts C and D and over 200 health-insurance plans in Medicaid managed care also carry out program-integrity activities. For the health-insurance marketplaces, CMS is responsible for operating the federally facilitated marketplace and overseeing the state-based marketplaces. 
CMS also developed the Federal Data Services Hub, which acts as a portal for exchanging information between state-based marketplaces, the federally facilitated marketplace, and state Medicaid agencies, among other entities, as well as other external partners, including other federal agencies, such as the Internal Revenue Service. Finally, law-enforcement groups, including the joint Department of Justice (DOJ) and HHS OIG Medicare Fraud Strike Force Teams, identify, investigate, and prosecute instances of fraud in CMS programs. See figure 4 for a depiction of CMS’s stakeholder network for managing fraud risks. This figure illustrates approximate numbers of stakeholders (through the concentration of dots), but not the extent of individual stakeholder roles. CMS provides oversight to, or partners with, these stakeholders to manage fraud risks. For oversight, CMS creates policies and guidance to direct stakeholders’ antifraud efforts, such as Medicare and Medicaid program-integrity manuals and the Medicaid Provider Enrollment Compendium. CMS also provides technical assistance to states in areas such as provider enrollment and data analysis. In areas where CMS does not have a primary role, it acts as a partner by collaborating and coordinating program-integrity and antifraud activities. For example, CMS is directly responsible for Medicare program integrity, but, in Medicaid and CHIP, states are the first line of program-integrity efforts. Similarly, CMS maintains control over Medicare FFS program integrity, but within Medicare managed care, it provides guidance for health-insurance plans to carry out their own program-integrity activities. In the health-insurance marketplaces, CMS reviews state-based marketplaces’ procedures for verifying applicant eligibility for coverage. For example, it conducts annual reviews of the state-based marketplaces, which include a review of states’ fraud, waste, and abuse policies. 
See figure 5 for a further description of CMS’s and various stakeholders’ roles and responsibilities in fraud risk management. CMS also facilitates collaboration among federal, state, and private entities for managing fraud risks. In 2012, CMS created the Healthcare Fraud Prevention Partnership (HFPP) to share information with public and private stakeholders and to conduct studies related to health-care fraud, waste, and abuse. According to CMS, as of October 2017, the HFPP included 89 public and private partners, including Medicare- and Medicaid-related federal and state agencies, law-enforcement agencies, private health-insurance plans (payers), and antifraud and other health-care organizations. The HFPP has conducted studies that pool and analyze multiple payers’ claims data to identify providers with patterns of suspect billing across payers. In a recent report, participants separately told us that the HFPP’s studies helped them to identify and take action against potentially fraudulent providers and payment vulnerabilities of which they might not otherwise have been aware, and fostered both formal and informal information sharing. CMS’s relationships with stakeholders were varied in terms of maturity and extent of information sharing, according to stakeholders we interviewed. While some relationships between CMS and stakeholders have been long-standing, some are developing, and others exist on an ad hoc basis. For example, CMS has had a long-standing relationship with state Medicaid program-integrity units, collaborating through monthly meetings of the Medicaid Fraud and Abuse Technical Advisory Group, sending fraud alerts, and offering courses through the Medicaid Integrity Institute. 
However, in our interviews with state program-integrity units, and as we recently reported, some state Medicaid agencies shared concerns about the communication, level of policy guidance, and technical support provided by and received from CMS for managing fraud risks in Medicaid. This concern was echoed by state audit officials, with whom CMS recently initiated coordination to build relationships that would facilitate state auditing of Medicaid programs. CMS also has varying relationships with its law-enforcement partners. For example, the relationship between CMS and DOJ’s Health Care Fraud unit, which leads the DOJ and HHS OIG Medicare Fraud Strike Force Teams, has been ad hoc. According to CMS and DOJ officials, the interactions between the agencies have been based on specific fraud cases, such as coordination of national takedowns, when DOJ provided CMS with the names of providers committing fraud so that CMS could suspend them consistent with the timing of the enforcement efforts. According to CMS officials, they coordinate more with HHS OIG, working together on payment suspensions and revocations for OIG cases, or working with it to take administrative actions against large providers. CMS’s antifraud efforts partially align with the Fraud Risk Framework. Consistent with the framework, CMS has demonstrated commitment to combating fraud by creating a dedicated entity to lead antifraud efforts. It has also taken steps to establish a culture conducive to fraud risk management, although it could expand its antifraud training to include all employees. CMS has taken some steps to identify fraud risks in Medicare and Medicaid; however, it has not conducted a fraud risk assessment or developed a risk-based antifraud strategy for Medicare and Medicaid as defined in the Fraud Risk Framework. 
CMS has established monitoring and evaluation mechanisms for its program-integrity control activities that, if aligned with a risk-based antifraud strategy, could enhance the effectiveness of fraud risk management in Medicare and Medicaid. The commit component of the Fraud Risk Framework calls for an agency to commit to combating fraud by creating an organizational culture and structure conducive to fraud risk management. This component includes establishing a dedicated entity to lead fraud risk management activities. Within CMS, CPI serves as the dedicated entity for fraud, waste, and abuse issues in Medicare and Medicaid, which is consistent with the Fraud Risk Framework. CPI was established in 2010, in response to a November 2009 Executive Order on reducing improper payments and eliminating waste in federal programs. This formalized role, according to CMS officials, elevated the status of program-integrity efforts, which previously were carried out by other parts of CMS. As an executive-level Center—on the same level with five other executive-level Centers at CMS, such as the Center for Medicare and the Center for Medicaid and CHIP Services—CPI has a direct reporting line to executive-level management at CMS. The Fraud Risk Framework identifies a direct reporting line to senior-level managers within the agency as a leading practice. According to CMS officials, this elevated organizational status offers CPI heightened visibility across CMS, attention by CMS executive leadership, and involvement in executive-level conversations. Additionally, in 2014, CMS established a Program Integrity Board that has brought together senior officials across CMS Centers on a monthly basis to coordinate on fraud and program-integrity vulnerabilities. According to CPI officials, the board is one of the mechanisms through which CPI engages other executive-level offices at CMS. 
CPI chairs the meetings and typically develops meeting agendas to solicit information from and disseminate information to other CMS units or stakeholders. Further, the board may establish small working groups, known as integrated project teams, to address specific vulnerabilities. For example, according to CMS officials, in 2016 the board established a Marketplace integrated project team to resolve potential fraud eligibility and enrollment issues in the federally facilitated marketplace using the Fraud Risk Framework. CPI has further demonstrated commitment to addressing fraud, waste, and abuse through several organizational changes with the goal of improving coordination and communication of program-integrity activities across Medicare and Medicaid. Most recently, in 2014, CPI reorganized its structure to align functional areas across Medicare and Medicaid, where possible. Previously, separate units within CPI administered their own program-integrity activities for Medicare and Medicaid programs. For example, CPI established a Provider Enrollment and Oversight Group, responsible for provider screening and enrollment functions in both Medicare and Medicaid. According to CMS officials, if CPI employees identify an issue in provider enrollment in Medicare, the same CPI employees also consider how this issue applies to Medicaid. According to CMS officials, the reorganization has helped CPI to look at vulnerabilities in a crosscutting way and to facilitate communication across programs. Similarly, since 2016, CPI has been shifting contracting functions from separate Medicare and Medicaid regional contractors that identify and investigate cases of potential fraud and conduct audits to five regional UPICs responsible for a range of program-integrity and fraud-specific activities in both Medicare FFS and Medicaid. 
According to CMS, the purpose of the UPICs is to coordinate provider investigations across Medicare and Medicaid, improve collaboration with states by providing a mutually beneficial service, and increase contractor accountability through coordinated oversight. CMS officials told us that UPIC integration is a cornerstone of CMS’s contract management strategy and would help to ensure communication and coordination across Medicare and Medicaid program-integrity efforts. CMS plans to award all the UPIC contracts by the end of 2017, ultimately phasing out the Zone Program Integrity Contractors (ZPIC) and Medicaid Integrity Contractors. The commit component of the Fraud Risk Framework also includes creating an organizational culture to combat fraud at all levels of the agency. Consistent with the Fraud Risk Framework, CMS has promoted an antifraud culture by demonstrating a senior-level commitment to combating fraud through public statements, increased resource levels, and internal and external coordination. In addition to HHS and CMS strategic documents discussed earlier, CMS and CPI leaders have testified publicly about CMS’s commitment to preventing fraud and protecting taxpayers and beneficiaries. For example, CPI’s former Director testified in May 2016 before the House Committee on Energy and Commerce’s Subcommittee on Oversight and Investigations that “CMS is deeply committed to our efforts to prevent waste, fraud and abuse in Medicare and Medicaid programs, protecting both taxpayers and the beneficiaries that we serve.” More recently, CMS’s new Administrator testified in her February 2017 confirmation hearing regarding her intent to prioritize efforts around preventing fraud and abuse. CPI’s budget and resources have increased over time to support its ongoing program-integrity mission. According to CMS, program-integrity obligations for Medicare and Medicaid increased from about $1.02 billion in fiscal year 2010 to $1.45 billion in fiscal year 2016. 
According to CMS officials, the HCFAC account, one of the primary sources of CPI funding, has never received a funding reduction. Additionally, in 2015, CPI received additional funding based on a discretionary cap adjustment to HCFAC. Similarly, CPI staff resources have increased over time. According to CMS, CPI’s full-time equivalent positions increased from 177 in 2011 to 419 in 2017. Consistent with leading practices in the Fraud Risk Framework to involve all levels of the agency in setting an antifraud tone, CPI has also worked collaboratively with other CMS Centers. In addition to engaging executive-level officials of other CMS Centers through the Program Integrity Board, CPI has partnered with other Centers within CMS to incorporate antifraud features into new program design or policy development and established regular communication at the staff level. For example: Center for Medicare and Medicaid Innovation (CMMI). When developing the Medicare Diabetes Prevention Program, CMMI officials told us they worked with CPI’s Provider Enrollment and Oversight Group and Governance Management Group to develop risk-based screening procedures for entities that would enroll in Medicare to provide diabetes-prevention services, among other activities. The program was expanded nationally in 2016, and CMS determined that an entity may enroll in Medicare as a program supplier if it satisfies enrollment requirements, including that the supplier must pass existing high categorical risk-level screening requirements. Center for Medicaid and CHIP Services (CMCS). CMCS officials told us they worked closely with CPI to issue Medicaid guidance and best practices to states on home and community-based services that incorporate program-integrity provisions. 
A senior CMCS official told us that, to address fraud, CMS has requested that states include provider information on claims to determine whether providers are meeting eligibility criteria. Center for Medicare (CM). In addition to building safeguards into programs and developing policies, CM officials told us that there are several standing meetings, on monthly, biweekly, and weekly bases, between groups within CM and CPI that discuss issues related to provider enrollment, FFS operations, and contractor management. A senior CM official also told us that there are ad hoc meetings taking place between CM and CPI: “We interact multiple times daily at different levels of the organization. Working closely is just a regular part of our business.” CMS has also demonstrated its commitment to addressing fraud, waste, and abuse to its stakeholders. Representatives of CMS’s extensive stakeholder network whom we interviewed—state officials, contractors, and officials from public and private entities—generally recognized the agency’s commitment to combating fraud. In our interviews with stakeholders, officials observed CMS’s increased commitment over time to address fraud, waste, and abuse and cited examples of specific CMS actions. State officials, for example, told us that the Medicaid Integrity Institute, a training center coordinated jointly by CMS and DOJ, has been a helpful resource for states to build capacity to address fraud and program integrity. CMS contractors told us that CMS’s commitment to combating fraud is incorporated into contractual requirements, such as requiring (1) data analysis for potential fraud leads and (2) fraud-awareness training for providers. 
Officials from entities that are members of the HFPP, specifically, a health-insurance plan and the National Health Care Anti-Fraud Association, added that CMS’s effort to establish the HFPP and its ongoing collaboration and information sharing reflect CMS’s commitment to combat fraud in Medicare and Medicaid. The Fraud Risk Framework identifies training as one way of demonstrating an agency’s commitment to combating fraud. Training and education intended to increase fraud awareness among stakeholders, managers, and employees serves as a preventive measure to help create a culture of integrity and compliance within the agency. The Fraud Risk Framework discusses requiring all employees to attend training upon hiring and on an ongoing basis thereafter. To increase awareness of fraud risks in Medicare and Medicaid, CMS offers and requires training for stakeholder groups such as providers, beneficiaries, and health-insurance plans. Specifically, through its National Training Program and Medicare Learning Network, CMS makes available training materials on combating Medicare and Medicaid fraud, waste, and abuse. These materials help users identify and report fraud, waste, and abuse in CMS programs and are geared toward providers and beneficiaries, as well as trainers and other stakeholders. Separately, CMS requires health-insurance plans working with CMS to provide annual fraud, waste, and abuse training to their employees. However, CMS does not offer or require similar fraud-awareness training for the majority of its workforce. For a relatively small portion of its overall workforce—specifically, contracting officer representatives who are responsible for certain aspects of the acquisition function—CMS requires completion of fraud and abuse prevention training every 2 years. According to CMS, 638 of its contracting officer representatives (or about 10 percent of its overall workforce) completed such training in 2016 and 2017. 
Although CMS offers fraud-awareness training to others, the agency does not require fraud-awareness training for new hires or on a regular basis for all employees because the agency has focused on providing process-based internal controls training for its employees. While fraud-awareness training for contracting officer representatives is an important step in helping to promote fraud risk management, fraud-awareness training specific to CMS programs would be beneficial for all employees. Such training would not only be consistent with what CMS offers to or requires of its stakeholders and some of its employees, but would also help to keep the agency’s entire workforce continuously aware of fraud risks and examples of known fraud schemes, such as those identified in successful OIG investigations. Such training would also keep employees informed as they administer CMS programs or develop agency policies and procedures. Considering the vulnerability of Medicare and Medicaid programs to fraud, waste, and abuse, without regular required training CMS cannot be assured that its workforce of over 6,000 employees is continuously aware of risks facing its programs. Although CMS has shown commitment to combating fraud, at times CPI’s efforts to combat fraud compete with other mission priorities, such as (1) ensuring beneficiary access to health-care services and (2) limiting provider burden. CPI leadership has been aware of this inherent challenge. For example, at a congressional hearing in May 2016, CPI’s Director stated that “our efforts strike an important balance: protecting beneficiary access to necessary health care services and reducing the administrative burden on legitimate providers and suppliers, while ensuring that taxpayer dollars are not lost to fraud, waste, and abuse.” Beneficiary access to care. In accordance with its mission statement, providing and improving beneficiaries’ access to health care is a CMS priority. 
CMS’s commitment to providing access to high-quality care and coverage is reflected in the agency’s mission statement and is one of its four strategic goals. As a result, before taking administrative actions against a Medicare Part A provider, such as a hospice, or providers in rural areas, CMS officials told us that they first look at whether there is a sufficient number of providers in an area by searching for providers in the provider’s county and adjacent counties and considering how heavily populated an area is with Medicare beneficiaries. According to these officials, rather than taking an administrative action against a provider that would limit beneficiaries’ access to services, the agency may enter into a corrective action plan with the provider. CMS officials told us that revoking a provider’s enrollment in Medicare, an option available to CMS in cases of provider noncompliance or misconduct, is rare. Administrative burden on providers. According to CMS documents and officials, concern over placing undue burden on providers—the majority of whom are presumed to be honest—provides a counterforce to implementing program-integrity control activities. CMS’s web page entitled Reducing Provider Burden states: “CMS is committed to reducing improper payments but must be mindful of provider burden because medical review is a resource-intensive process for both the healthcare provider and the Medicare review contractor.” Two CMS contractors told us that they scaled back or did not pursue audits of providers’ documentation because of provider burden or sensitivity considerations. One contractor removed providers from audit samples after some providers opposed having to supply multiple medical records. CPI officials told us that they want to reduce provider burden in a logical manner. 
For example, according to CMS officials, in the Medicare FFS Recovery Audit Program, CMS established limits on Additional Documentation Requests, which are requests for medical documentation supporting a claim being reviewed. CMS adjusts these limits to align with a provider’s claim denial rate. Providers with low denial rates will have lower documentation requirements, while providers with high denial rates will have higher documentation requirements, thus adjusting provider burden based on demonstrated compliance. The assess component of the Fraud Risk Framework calls for federal managers to plan regular fraud risk assessments and to assess risks to determine a fraud risk profile. Identifying fraud risks is one of the steps included in the Fraud Risk Framework for assessing risks to determine a fraud risk profile. CMS has taken steps to identify some fraud risks through several control activities that target areas the agency has designated as higher risk within Medicare and Medicaid, including specific provider types, such as home health agencies, and specific geographic locations. As discussed earlier, CMS officials told us that CPI initially focused on developing control activities for Medicare FFS and considers these activities to be the most mature of all CPI efforts to address fraud risks. CMS has identified fraud risks in the following selected examples, which are not an exhaustive list of its control activities. Data analytics to assist investigations in Medicare FFS. In 2011, CMS implemented FPS, a data-analytic system that screens all Medicare FFS claims to identify health-care providers with suspect billing patterns for further investigation. Medicare FFS contractors—ZPICs and UPICs—have used FPS to identify and prioritize leads for investigations of potential fraud by high-risk Medicare FFS providers. Contractors told us that FPS allows them to quickly identify and triage leads. 
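CMS's actual FPS models are not described in this report, but the general idea—screening claims data for providers whose billing departs sharply from peer norms, then surfacing them as investigation leads—can be illustrated with a deliberately simple sketch. The function name, data shape, and z-score threshold below are all hypothetical, not CMS's predictive-analytics rules:

```python
from statistics import mean, stdev

def flag_suspect_providers(claims, z_threshold=3.0):
    """Toy FPS-style screen: aggregate billed amounts per provider
    and flag extreme outliers relative to peers as leads for manual
    investigation. All thresholds and fields are illustrative."""
    totals = {}
    for provider_id, amount in claims:
        totals[provider_id] = totals.get(provider_id, 0.0) + amount
    values = list(totals.values())
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    # Providers more than z_threshold standard deviations above the
    # peer mean become prioritized leads, mirroring the triage role
    # contractors described.
    return sorted(p for p, t in totals.items()
                  if sigma > 0 and (t - mu) / sigma > z_threshold)
```

A real system would draw on many more features (service mix, beneficiary overlap, geography) and model-based risk scores rather than a single z-score, but the triage pattern—score, rank, investigate—is the same.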
CMS’s guidance requires contractors to prioritize investigations with the greatest program impact or urgency and identifies required criteria for prioritizing investigations, such as patient abuse or harm, multistate fraud, and high dollar amount of potential overpayments. One contractor we interviewed developed a risk-prioritization model that incorporated CMS’s required criteria, such as patient harm, as well as additional criteria, such as provider spikes in billing, into a tool that automatically creates a provider risk score to help the contractor focus and prioritize investigative resources. Prior authorization for Medicare FFS services or supplies. CMS published a final rule in December 2015 that identifies a master list of durable medical equipment, prosthetics, orthotics, and supplies for which CMS can require prior authorization before suppliers submit a Medicare FFS claim. In this rule, CMS identified 135 items that are frequently subject to unnecessary utilization and stated that the agency expects the final rule to result in savings in the form of reduced unnecessary utilization, fraud, waste, and abuse. Under this program, prior authorization is a condition of payment for claims. CMS can choose which items on the master list to subject to prior authorization. For example, in March 2017, it began requiring prior authorization for selected power wheelchairs in four states and expanded the prior authorization program for these items to all states in July 2017. CMS also began to test the use of prior authorization on a voluntary basis through a series of fixed-length demonstrations for items and services that have been associated with high levels of improper payments, including high incidences of fraud in some cases, and unnecessary utilization in certain geographic areas. 
For example, CMS began implementing a voluntary prior authorization demonstration in September 2012 for other power mobility devices, such as power scooters, in seven states where historically there has been extensive evidence of fraud and improper payments. CMS expanded the demonstration to an additional 12 states in October 2014, for a total of 19 states. According to the initial Federal Register notice, CMS planned to use the demonstration to develop improved methods for investigation and prosecution of fraud to protect federal funds from fraudulent actions and the resulting improper payments. Under the demonstration, providers and suppliers are encouraged—but not required—to submit a request for prior authorization for certain items before they provide the item to the beneficiary and submit a claim for payment. Revised provider screening and enrollment processes for Medicare FFS and Medicaid FFS. In response to PPACA, in 2011 CMS implemented a revised screening process for providers and suppliers who enroll in Medicare and Medicaid based on identified provider risk categories. CMS placed all Medicare provider and supplier types into one of three risk categories—limited, moderate, or high—based on its assessment of the potential risk of fraud, waste, and abuse each provider and supplier type poses. For example, CMS designated prospective (newly enrolling) home health agencies and prospective suppliers of durable medical equipment, prosthetics, orthotics, and supplies in the high-risk category. According to the final rule and our interviews with CMS officials, CMS developed these risk-based categories based on its review and synthesis of various information sources about the fraud risks posed by each provider and supplier type, including (1) the agency’s experience with claims data used to identify potentially fraudulent billing practices, (2) expertise of contractors responsible for investigating and identifying Medicare fraud, and (3) GAO and OIG reports. 
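The risk-based screening design described above can be sketched as a simple lookup. The high-risk designations for newly enrolling home health agencies and DMEPOS suppliers follow the final rule as described in this report; the other provider types and the exact activity lists for each tier are illustrative assumptions, not CMS’s published requirements.

```python
# Risk-category assignment under the 2011 revised screening process.
# "High" entries follow the final rule as described above; the rest
# of this mapping is an illustrative assumption.
RISK_CATEGORY = {
    "physician": "limited",
    "home_health_agency_new": "high",
    "dmepos_supplier_new": "high",
}

# Screening activities accumulate as risk increases; the exact lists
# here are illustrative.
SCREENING = {
    "limited": ["licensure verification", "database checks"],
    "moderate": ["licensure verification", "database checks",
                 "preenrollment site visit"],
    "high": ["licensure verification", "database checks",
             "preenrollment site visit",
             "fingerprint-based criminal-background check"],
}

def required_screening(provider_type):
    """Look up the screening activities for a provider type."""
    return SCREENING[RISK_CATEGORY.get(provider_type, "limited")]

print(required_screening("home_health_agency_new")[-1])
# fingerprint-based criminal-background check
```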
CMS designated specific screening activities for each risk category, with increased requirements for moderate- and high-risk provider and supplier types. For example, moderate- and high-risk providers and suppliers must receive preenrollment site visits, and high-risk providers and suppliers also are subject to fingerprint-based criminal-background checks. As part of the revised screening process, beginning in September 2011, CMS also undertook its first program-wide effort to rescreen, or revalidate, the enrollment records of about 1.5 million existing Medicare FFS providers and suppliers, to determine whether they remain eligible to bill Medicare. Temporary provider enrollment moratoriums for certain providers and geographic areas for Medicare FFS and Medicaid FFS. CMS identified certain provider types and geographic areas as high risk for fraud and used its authority under PPACA to implement temporary moratoriums to suspend enrollment of such Medicare and Medicaid providers in those areas. For example, in July 2016, CMS extended temporary moratoriums on the enrollment of new Medicare Part B nonemergency ambulance suppliers and Medicare home health agencies statewide in six states, as applicable. The statewide moratoriums also apply to Medicaid. According to the Federal Register notice, CMS imposed the temporary moratoriums based on qualitative and quantitative factors suggesting a high risk of fraud, waste, or abuse, such as law-enforcement expertise with emerging fraud trends and investigations. CMS’s data analysis also confirmed the agency’s determination of a high risk of fraud, waste, and abuse for these provider and supplier types within certain geographic areas, according to the notice. Medicaid state program integrity reviews and desk reviews. CMS tailored state Medicaid program-integrity reviews to areas it identified as high risk for improper payments, such as personal care services, which may also be at high risk for fraud. 
In March 2017, we reported that, from fiscal years 2014 through 2016, CMS conducted focused reviews of state program-integrity efforts in 31 states, reviewing 10 or 11 states annually. For each state, CMS tailored its focused reviews to the state’s managed care plans and other relevant high-risk areas, including provider enrollment and screening, nonemergency medical transportation, and personal care services. CMS and state officials we spoke with as part of that work told us that the tailored oversight had been beneficial and helped identify areas for improvement. CMS has also initiated desk reviews of state program-integrity efforts. According to CMS, these desk reviews allow the agency to provide states with customized program-integrity oversight. Vulnerability tracking system for Medicare. CPI recently initiated an effort to centralize and formalize a vulnerability tracking process for Medicare, which could support identification of specific fraud risks, both in Medicare and possibly Medicaid. As described by CPI officials, the process aims to collect information on fraud-related vulnerabilities from CMS employees, contractors, and other sources, such as GAO and HHS OIG reports. The assess component of the Fraud Risk Framework calls for federal managers to plan regular fraud risk assessments and assess risks to determine a fraud risk profile. Furthermore, federal internal control standards call for agency management to assess the internal and external risks their entities face as they seek to achieve their objectives. The standards state that, as part of this overall assessment, management should consider the potential for fraud when identifying, analyzing, and responding to risks. 
The Fraud Risk Framework states that, in planning the fraud risk assessment, effective managers tailor the fraud risk assessment to the program by, among other things, identifying appropriate tools, methods, and sources for gathering information about fraud risks and involving relevant stakeholders in the assessment process. Fraud risk assessments that align with the Fraud Risk Framework involve (1) identifying inherent fraud risks affecting the program, (2) assessing the likelihood and impact of those fraud risks, (3) determining fraud risk tolerance, (4) examining the suitability of existing fraud controls and prioritizing residual fraud risks, and (5) documenting the results. (See fig. 6.) Although, as discussed earlier, CMS has identified some fraud risks posed by providers in Medicare FFS and, to a lesser degree, Medicaid FFS, the agency has not conducted a fraud risk assessment for either the Medicare or Medicaid program. Such a risk assessment would provide the detailed information and insights needed to create a fraud risk profile, which, in turn, is the basis for creating an antifraud strategy. According to CMS officials, CMS has not conducted a fraud risk assessment for Medicare or Medicaid because, within CPI’s broader approach of preventing and eliminating improper payments, its focus has been on addressing specific vulnerabilities among provider groups that have shown themselves particularly prone to fraud, waste, and abuse. With this approach, however, it is unlikely that CMS will be able to design and implement the most-appropriate control activities to respond to the full portfolio of fraud risks. A fraud risk assessment consists of discrete activities that build upon each other. Specifically: Identifying inherent fraud risks affecting the program. As discussed earlier, CMS has taken steps to identify fraud risks. 
However, CMS has not used a process to identify inherent fraud risks from the universe of potential vulnerabilities facing Medicare and Medicaid programs, including threats from various sources. According to CPI officials, most of the agency’s fraud control activities are focused on fraud risks posed by providers. The Fraud Risk Framework discusses fully considering inherent fraud risks from internal and external sources in light of fraud risk factors such as incentives, opportunities, and rationalization to commit fraud. For example, according to CMS officials, the inherent design of the Medicare Part C program may pose fraud risks that are challenging to detect. A fraud risk assessment would help CMS identify all sources of fraudulent behaviors, beyond threats posed by providers, such as those posed by health-insurance plans, contractors, or employees. Assessing the likelihood and impact of fraud risks and determining fraud risk tolerance. CMS has taken steps to prioritize fraud risks in some areas, but it has not assessed the likelihood or impact of fraud risks or determined fraud risk tolerance across all parts of Medicare and Medicaid. Assessing the likelihood and impact of inherent fraud risks would involve consideration of the impact of fraud risks on program finances, reputation, and compliance. Without assessing the likelihood and impact of risks in Medicare or Medicaid or internally determining which fraud risks may fall under the tolerance threshold, CMS cannot be certain that it is aware of the most-significant fraud risks facing these programs and what risks it is willing to tolerate based on the programs’ size and complexity. Examining the suitability of existing fraud controls and prioritizing residual fraud risks. CMS has not assessed existing control activities or prioritized residual fraud risks. 
According to the Fraud Risk Framework, managers may consider the extent to which existing control activities—whether focused on prevention, detection, or response—mitigate the likelihood and impact of inherent risks and whether the remaining risks exceed managers’ tolerance. This analysis would help CMS to prioritize residual risks and to determine mitigation approaches. For example, CMS has not established preventive fraud control activities in Medicare Part C. Using a fraud risk assessment for Medicare Part C and closely examining existing fraud control activities and residual risks, CMS could be better positioned to address fraud risks facing this growing program and develop preventive control activities. Further, without assessing existing fraud control activities and prioritizing residual fraud risks, CMS cannot be assured that its current control activities are addressing the most-significant risks. Such analysis would also help CMS determine whether additional, preferably preventive, fraud controls are needed to mitigate residual risks, make adjustments to existing control activities, and potentially scale back or remove control activities that are addressing tolerable fraud risks. Documenting the risk-assessment results in a fraud risk profile. CMS has not developed a fraud risk profile that documents key findings and conclusions of the fraud risk assessment. According to the Fraud Risk Framework, the risk profile can also help agencies decide how to allocate resources to respond to residual fraud risks. Given the large size and complexity of Medicare and Medicaid, a documented fraud risk profile could support CMS’s resource-allocation decisions as well as facilitate the transfer of knowledge and continuity across CMS staff and changing administrations. Senior CPI officials told us that the agency plans to start a fraud risk assessment for Medicare and Medicaid after it completes a separate fraud risk assessment of the federally facilitated marketplace. 
This fraud risk assessment for the federally facilitated marketplace eligibility and enrollment process is being conducted in response to a recommendation we made in February 2016. In April 2017, CPI officials told us that this fraud risk assessment was largely completed, although in September 2017 CPI officials told us that the assessment was undergoing agency review. CPI officials told us that they have informed CM and CMCS officials that there will be future fraud risk assessments for Medicare and Medicaid; however, they could not provide estimated timelines or plans for conducting such assessments, such as the order or programmatic scope of the assessments. Once the federally facilitated marketplace fraud risk assessment is completed, CMS could apply any lessons learned when planning for and designing fraud risk assessments for Medicare and Medicaid. According to the Fraud Risk Framework, factors such as size, resources, maturity of the agency or program, and experience in managing risks can influence how the entity plans the fraud risk assessment. Additionally, effective managers tailor the fraud risk assessment to the program when planning for it. The large scale and complexity of Medicare and Medicaid, as well as the time and resources involved in conducting a fraud risk assessment, underscore the importance of a well-planned and tailored approach to identifying the assessment’s programmatic scope. Planning and tailoring may involve decisions to conduct a fraud risk assessment for the Medicare and Medicaid programs as a whole or divided into several subassessments to reflect their various component parts (e.g., Medicare FFS, Medicaid managed care) as well as determining the timing and order of assessments (e.g., concurrently or consecutively for Medicare and Medicaid). CMS’s existing fraud risk identification efforts as well as communication channels with stakeholders could serve as a foundation for developing a fraud risk assessment for Medicare and Medicaid. 
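The five assessment steps shown in figure 6 can be illustrated as a toy pipeline: score inherent risks by likelihood and impact, discount by existing control effectiveness, compare against tolerance, and document the result as a profile. Every risk name, score, and threshold here is a hypothetical placeholder, not an actual Medicare or Medicaid risk.

```python
# Step 1: identify inherent fraud risks (hypothetical names and scores).
inherent = {
    "phantom_billing": {"likelihood": 0.7, "impact": 9},
    "identity_theft":  {"likelihood": 0.4, "impact": 6},
}
# Step 4 input: how well existing controls mitigate each risk (0-1).
control_effect = {"phantom_billing": 0.6, "identity_theft": 0.2}
TOLERANCE = 2.0  # step 3: hypothetical residual-risk tolerance

def residual(name, risk):
    score = risk["likelihood"] * risk["impact"]   # step 2: likelihood x impact
    return score * (1 - control_effect[name])     # step 4: discount by controls

# Step 5: document the results as a fraud risk profile.
profile = {name: round(residual(name, r), 2) for name, r in inherent.items()}
priority = [n for n, s in sorted(profile.items(), key=lambda kv: -kv[1])
            if s > TOLERANCE]
print(priority)  # ['phantom_billing']
```

In this toy profile, identity theft falls under the tolerance threshold after controls are applied, so only phantom billing remains a prioritized residual risk.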
The leading practices identified in the Fraud Risk Framework discuss the importance of identifying appropriate tools, methods, and sources for gathering information about fraud risks and involving relevant stakeholders in the assessment process. CMS’s fraud risk identification efforts discussed earlier could provide key information about fraud risks and their likelihood and impact. Further, existing relationships and communication channels across CMS and its extensive network of stakeholders could support building a comprehensive understanding of known and potential fraud risks for the purposes of a fraud risk assessment. For example, the fraud vulnerabilities identified through data analysis and information sharing with states, health-insurance plans, law-enforcement organizations, and contractors through the HFPP could inform a fraud risk assessment. CPI’s Command Center missions—facilitated collaboration sessions that bring together experts from various disciplines to improve the processes for fraud prevention in Medicare and Medicaid—could convene these experts to identify potential or emerging fraud vulnerabilities or to brainstorm approaches to mitigate residual fraud risks. As CMS makes plans to move forward with a fraud risk assessment for Medicare and Medicaid, it will be important to consider the frequency with which the fraud risk assessment would need to be updated. While, according to the Fraud Risk Framework, the time intervals between updates can vary based on the programmatic and operating environment, assessing fraud risks on an ongoing basis is important to ensure that control activities are continuously addressing fraud risks. 
The constantly evolving fraud schemes, the size of the programs in terms of beneficiaries and expenditures, as well as continual changes in the Medicare and Medicaid programs—such as development of innovative payment models and increasing managed-care enrollment—call for constant vigilance and regular updates to the fraud risk assessment. The design and implement component of the Fraud Risk Framework calls for federal managers to design and implement a strategy with specific control activities to mitigate assessed fraud risks and collaborate to help ensure effective implementation. According to the Fraud Risk Framework, effective managers develop and document an antifraud strategy that describes the program’s approach for addressing the prioritized fraud risks identified during the fraud risk assessment, also referred to as a risk-based antifraud strategy. A risk-based antifraud strategy describes existing fraud control activities as well as any new fraud control activities a program may adopt to address residual fraud risks. In developing a strategy and antifraud control activities, effective managers focus on fraud prevention over detection, develop a plan for responding to identified instances of fraud, establish collaborative relationships with stakeholders, and create incentives to help effectively implement the strategy. Additionally, as part of a documented strategy, management identifies the roles and responsibilities of those involved in fraud risk management activities; describes control activities as well as plans for monitoring and evaluation; creates timelines; and communicates the antifraud strategy to employees and stakeholders, among other things. As discussed earlier, CMS has some control activities in place to identify fraud risk in Medicare and Medicaid, particularly in the FFS program. 
However, CMS has not developed and documented a risk-based antifraud strategy to guide its design and implementation of new antifraud activities and to better align and coordinate its existing activities to ensure it is targeting and mitigating the most-significant fraud risks. Antifraud strategy. CMS officials told us that CPI does not have a documented risk-based antifraud strategy. Although CMS has developed several documents that describe efforts to address fraud, the agency has not developed a risk-based antifraud strategy for Medicare and Medicaid because, as discussed earlier, it has not conducted a fraud risk assessment that would serve as a foundation for such a strategy. In 2016, CPI identified five strategic objectives for program integrity, which include antifraud elements and an emphasis on prevention. However, according to CMS officials, these objectives were identified from discussions with CMS leadership and various stakeholders and not through a fraud risk assessment process to identify inherent fraud risks from the universe of potential vulnerabilities, as described earlier and called for in the leading practices. These strategic objectives were presented at an antifraud conference in 2016 but were not announced publicly until the release of the Annual Report to Congress on the Medicare and Medicaid Integrity Programs for Fiscal Year 2015 in June 2017. Stakeholder relationships and communication. CMS has established relationships and communicated with stakeholders, but, without an antifraud strategy, stakeholders we spoke with lacked a common understanding of CMS’s strategic approach. Prior work on practices that can help federal agencies collaborate effectively calls for a strategy that is shared with stakeholders to promote trust and understanding. Once an antifraud strategy is developed, the Fraud Risk Framework calls for managers to collaborate to ensure effective implementation. 
Although some CMS stakeholders were able to describe various CMS program-integrity priorities and activities, such as home health being a fraud risk priority, the stakeholders could not articulate a common CMS strategic approach to addressing fraud risks in its programs. Incentives. The Fraud Risk Framework discusses creating incentives to help ensure effective implementation of the antifraud strategy once it is developed. Currently, some incentives within stakeholder relationships may complicate CMS’s antifraud efforts. As discussed earlier, CMS is a partner in, and provides oversight of, states’ program-integrity functions. Officials from one state told us that they were reluctant to share their program vulnerabilities because CMS would use this information to later audit the state. Among contractors, CMS encourages information sharing through conferences and workshops; however, competition for CMS business among contractors can be a disincentive to information sharing. CMS officials acknowledged this concern and said that they expect contractors to share information related to fraud schemes, outcomes of investigations, and tips for addressing fraud, but not proprietary information such as algorithms to risk-score providers. Without developing and documenting an antifraud strategy based on a fraud risk assessment, as called for in the design and implement component of the Fraud Risk Framework, CMS cannot ensure that it has a coordinated approach to address the range of fraud risks and to appropriately target and allocate resources for the most-significant risks. Considering the fraud risks to which the Medicare and Medicaid programs are most vulnerable, in light of the malicious intent of those who aim to exploit the programs, would help CMS to examine its current control activities and potentially design new ones with recognition of the fraudulent behavior it aims to prevent. 
This focus on fraud is distinct from a broader view of program integrity and improper payments because it considers the intentions and incentives of those who aim to deceive rather than well-intentioned providers who make mistakes. Also, continued growth of the programs, such as growth of Medicare Part C and Medicaid managed care, calls for consideration of preventive fraud control activities across the entire network of entities involved. Further, considering the large size and complexity of Medicare and Medicaid and the extensive stakeholder network involved in managing fraud in the programs, a strategic approach to managing fraud risks within the programs is essential to ensure that the many existing control activities and numerous stakeholder relationships and incentives are aligned to produce desired results. Once developed, an antifraud strategy that is clearly articulated to various CMS stakeholders would help CMS to address fraud risks in a more coordinated and deliberate fashion. Thinking strategically about existing control activities, resources, tools, and information systems could help CMS to leverage resources while continuing to integrate Medicare and Medicaid program-integrity efforts along functional lines. A strategic approach grounded in a comprehensive assessment of fraud risks could also help CMS to identify future enhancements for existing control activities, such as new preventive capabilities for FPS or additional fraud factors in provider enrollment and revalidation, such as provider risk scoring, to stay in step with evolving fraud risks. The evaluate and adapt component of the Fraud Risk Framework calls for federal managers to evaluate outcomes using a risk-based approach and adapt activities to improve fraud risk management. 
Furthermore, according to federal internal control standards, managers should establish and operate monitoring activities to monitor the internal control system and evaluate the results, which may be compared against an established baseline. Ongoing monitoring and periodic evaluations provide assurances to managers that they are effectively preventing, detecting, and responding to potential fraud. CMS has established monitoring and evaluation mechanisms for its program-integrity activities that it could incorporate into an antifraud strategy. In Medicare, CMS has taken steps to measure the rate of fraud in a particular service area. We have previously reported that agencies may face challenges measuring outcomes of fraud risk management activities in a reliable way. These challenges include the difficulty of measuring the extent of deterred fraud, isolating potential fraud from legitimate activity or other forms of improper payments, and determining the amount of undetected fraud. Despite these challenges, CMS has taken steps to estimate a fraud baseline—meaning the rate of probable fraud—in the home health benefit. In fiscal year 2016, CMS conducted a pretest in the Miami-Dade area of Florida to evaluate its potential measurement approach that could later be used in a nationwide study of probable fraud among home health agencies. The pretest was not a random sample and was not intended to produce a rate of fraud, but instead was intended to test the interview instruments and data-collection methodology CMS might use in a study nationwide. CMS and its contractor collected information from home health agencies, the attending providers, and Medicare beneficiaries in the Miami-Dade area in order to test these interview instruments. CMS completed this pretest, but, according to CMS officials, the agency does not yet have plans to roll out a nationwide study that would estimate a probable fraud rate for the Medicare FFS home health benefit. 
In its 2015 annual report to Congress, CMS stated that “documenting the baseline amount of fraud in Medicare is of critical importance, as it allows officials to evaluate the success of ongoing fraud prevention activities.” CMS officials working on the pilot told us that having an estimate of the rate of fraud in home health benefits would allow CMS to reliably assess its efforts at eliminating or reducing fraud. Without a baseline, officials said, the agency cannot know whether its antifraud efforts are as effective as they could be. We previously reported that the lack of a baseline for the amount of health-care fraud that exists limits CMS’s ability to determine whether its activities are effectively reducing health-care fraud and abuse. A baseline estimate could provide an understanding of the extent of fraud and, with additional information on program activities, could help to inform decision making related to allocation of resources to combat health-care fraud. As described in the Fraud Risk Framework, in the absence of a fraud baseline, agencies can gather additional information on the short-term or intermediate outcomes of some antifraud initiatives, which may be more readily measured. For example, CMS has developed some performance measures to provide a basis for monitoring its progress toward meeting the program-integrity goals set in the HHS Strategic Plan and Annual Performance Plan. Specifically, CMS measures whether it is meeting its goal of “increasing the percentage of Medicare FFS providers and suppliers identified as high risk that receive an administrative action.” CMS does not set specific antifraud goals for other parts of Medicare or Medicaid; other CMS performance measures relate to measuring or reducing improper payments in CHIP, Medicaid, and the various parts of Medicare. CMS uses return-on-investment and savings estimates to measure the effectiveness of its Medicare program-integrity activities and FPS. 
For example, CMS uses return-on-investment to measure the effectiveness of FPS and, in response to a recommendation we made in 2012, CMS developed outcome-based performance targets and milestones for FPS. CMS has also conducted individual evaluations of its program-integrity activities, such as an interim evaluation of the prior-authorization demonstration for power mobility devices that began in 2012 and is currently implemented in 19 states. Commensurate with the greater maturity of control activities in Medicare FFS compared with other parts of Medicare and Medicaid, monitoring and evaluation activities for Medicare Parts C and D and Medicaid are more limited. For example, CMS calculates savings for its program-integrity activities in Medicare Parts C and D, but not a full return-on-investment. CMS officials told us that calculating costs for specific activities is challenging because of overlapping activities among contractors. CMS officials said they continue to refine methods and develop new savings estimates for additional program-integrity activities. According to the Fraud Risk Framework, effective managers develop a strategy and evaluate outcomes using a risk-based approach. In developing an effective strategy and antifraud activities, managers consider the benefits and costs of control activities. Ongoing monitoring and periodic evaluations provide reasonable assurance to managers that they are effectively preventing, detecting, and responding to potential fraud. Monitoring and evaluation activities can also support managers’ decisions about allocating resources and help them to demonstrate their continued commitment to effectively managing fraud risks. As CMS takes steps to develop an antifraud strategy, it could include plans for refining and building on existing methods, such as return-on-investment or savings measures, and setting appropriate targets to evaluate the effectiveness of all of CMS’s antifraud efforts. 
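A return-on-investment measure of the kind described above is, at its simplest, savings (or recoveries) divided by the cost of the activity. The sketch below uses that simple ratio with made-up figures; CMS’s actual methodology is more involved.

```python
def roi(savings, cost):
    """Dollars saved or recovered per dollar spent on a
    program-integrity activity (simple ratio; illustrative only)."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return savings / cost

# Hypothetical activity: $12.5M in savings at a cost of $2.5M.
print(f"{roi(12_500_000, 2_500_000):.1f}:1")  # 5.0:1
```

A savings-only measure, like the one CMS reports for Medicare Parts C and D, drops the denominator, which sidesteps the difficulty officials cited of attributing costs across overlapping contractor activities.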
Such a strategy would help CMS to efficiently allocate program-integrity resources and to ensure that the agency is effectively preventing, detecting, and responding to potential fraud. For example, while doing so would involve challenges, CMS’s strategy could detail plans to advance efforts to measure a potential fraud rate through baseline and periodic measures. Fraud rate measurement efforts could also inform risk assessment activities, identify currently unknown fraud risks, align resources to priority risks, and help develop effective outcome metrics for antifraud controls. Such a strategy would also help CMS ensure that it has effective performance measures in place to assess its antifraud efforts beyond those related to providers in Medicare FFS and establish appropriate targets to measure the agency’s progress in addressing fraud risks. As CMS makes plans to move forward with a strategy and to further develop evaluation and monitoring mechanisms, it will be important to share its efforts with stakeholders. The Fraud Risk Framework states that effective managers communicate lessons learned from fraud risk management activities to stakeholders. For example, CMS could be a leader to states in measuring the effectiveness of program-integrity efforts. Officials in three of the four states we spoke with expressed interest in receiving CMS guidance on how to measure the effectiveness of their Medicaid program-integrity efforts, such as by providing models for how to calculate return-on-investment. Medicare and Medicaid provide health insurance to over 129 million Americans, and the size—in terms of number of beneficiaries and amount of expenditures—and complexity of these programs make them inherently susceptible to fraud and improper payments. 
CMS currently manages these risks across its programs as part of a broader approach to identifying and controlling for multiple sources of improper payments and by developing relationships with an extensive network of stakeholders. In Medicare and Medicaid specifically, we note that CMS has taken many important steps toward implementing a strategic approach for managing fraud. However, the agency could benefit by more fully aligning its efforts with the four components of the Fraud Risk Framework. CMS is well positioned to leverage its fraud risk management efforts—such as demonstrated leadership for combating fraud, existing control activities, and stakeholder relationships—to provide additional antifraud training, as well as to develop an antifraud strategy based on fraud risk assessments for Medicare and Medicaid. We recognize that the effort may be challenging, given the size and complexity of Medicare and Medicaid, and the need to balance antifraud activities with CMS's other mission priorities. However, by not employing the actions identified in the Fraud Risk Framework and incorporating them in its approach to managing fraud risks, CMS is missing a significant opportunity to better ensure employee vigilance against fraud, and to organize and focus its many antifraud and program-integrity activities and related resources into a comprehensive strategy. Such a strategy would (1) provide reasonable assurance that CMS is targeting the most-significant fraud risks in its programs and (2) help protect the government's substantial and growing investments in these programs.

We are making the following three recommendations to CMS:

The Administrator of CMS should provide fraud-awareness training relevant to risks facing CMS programs and require new hires to undergo such training and all employees to undergo training on a recurring basis. (Recommendation 1)

The Administrator of CMS should conduct fraud risk assessments for Medicare and Medicaid to include respective fraud risk profiles and plans for regularly updating the assessments and profiles. (Recommendation 2)

The Administrator of CMS should, using the results of the fraud risk assessments for Medicare and Medicaid, create, document, implement, and communicate an antifraud strategy that is aligned with and responsive to regularly assessed fraud risks. This strategy should include an approach for monitoring and evaluation. (Recommendation 3)

We provided a draft of this report to HHS and DOJ for comment. HHS provided written comments, which are reprinted in appendix I. DOJ did not have comments. HHS and DOJ also provided technical comments, which we incorporated as appropriate. In commenting on this report, HHS agreed with our three recommendations. Specifically, in response to our first recommendation to provide required fraud-awareness training to all employees, HHS stated that it will develop and implement a fraud-awareness training plan to ensure all CMS employees receive training. Regarding our second recommendation to conduct fraud risk assessments for Medicare and Medicaid, HHS stated that it is currently conducting a fraud risk assessment on the federally facilitated marketplace and, when this assessment is complete, will apply the lessons learned in assessing this program to fraud risk assessments of Medicare and Medicaid. In response to our third recommendation to create, document, implement, and communicate an antifraud strategy that is aligned with and responsive to regularly assessed fraud risks, HHS stated that it will develop respective risk-based antifraud strategies after completing fraud risk assessments for Medicare and Medicaid.
We are sending copies of this report to the Acting Secretary of Health and Human Services, the Administrator of CMS, the Assistant Attorney General for Administration at DOJ, as well as appropriate congressional committees and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6722 or bagdoyans@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix II. In addition to the contact named above, Tonita Gillich (Assistant Director), Irina Carnevale (Analyst-in-Charge), Michael Duane, Laura Sutton Elsberg, and Catrin Jones made key contributions to this report. Also contributing to the report were Lori Achman, James Ashley, Colin Fallon, Leslie V. Gordon, Maria McMullen, Sabrina Streagle, and Shana Wallace.
CMS, an agency within the Department of Health and Human Services (HHS), provides health coverage for over 145 million Americans through its four principal programs, with annual outlays of about $1.1 trillion. GAO has designated the two largest programs, Medicare and Medicaid, as high risk partly due to their vulnerability to fraud, waste, and abuse. In fiscal year 2016, improper payment estimates for these programs totaled about $95 billion. GAO's Fraud Risk Framework and the subsequent enactment of the Fraud Reduction and Data Analytics Act of 2015 have called attention to the importance of federal agencies' antifraud efforts. This report examines (1) CMS's approach for managing fraud risks across its four principal programs, and (2) how CMS's efforts managing fraud risks in Medicare and Medicaid align with the Fraud Risk Framework. GAO reviewed laws and regulations and HHS and CMS documents, such as program-integrity manuals. It also interviewed CMS officials and a sample of CMS stakeholders, including state officials and contractors. GAO selected states based on fraud risk and other factors, such as geographic diversity. GAO selected contractors based on a mix of companies and geographic areas served. The approach that the Centers for Medicare & Medicaid Services (CMS) has taken for managing fraud risks across its four principal programs—Medicare, Medicaid, the Children's Health Insurance Program (CHIP), and the health-insurance marketplaces—is incorporated into its broader program-integrity approach. According to CMS officials, this broader program-integrity approach can help the agency develop control activities to address multiple sources of improper payments, including fraud. As the figure below shows, CMS views fraud as part of a spectrum of actions that may result in improper payments. CMS's efforts managing fraud risks in Medicare and Medicaid partially align with GAO's 2015 A Framework for Managing Fraud Risks in Federal Programs (Fraud Risk Framework). 
This framework describes leading practices in four components: commit, assess, design and implement, and evaluate and adapt. CMS has shown commitment to combating fraud in part by establishing a dedicated entity—the Center for Program Integrity—to lead antifraud efforts. Furthermore, CMS is offering and requiring antifraud training for stakeholder groups such as providers, beneficiaries, and health-insurance plans. However, CMS does not require fraud-awareness training on a regular basis for employees, a practice that the framework identifies as a way agencies can help create a culture of integrity and compliance. Regarding the assess and design and implement components, CMS has taken steps to identify fraud risks, such as by designating specific provider types as high risk and developing associated control activities. However, it has not conducted a fraud risk assessment for Medicare or Medicaid, and has not designed and implemented a risk-based antifraud strategy. A fraud risk assessment allows managers to fully consider fraud risks to their programs, analyze their likelihood and impact, and prioritize risks. Managers can then design and implement a strategy with specific control activities to mitigate these fraud risks, as well as an appropriate evaluation approach consistent with the evaluate and adapt component. By developing a fraud risk assessment and using that assessment to create an antifraud strategy and evaluation approach, CMS could better ensure that it is addressing the full portfolio of risks and strategically targeting the most-significant fraud risks facing Medicare and Medicaid. GAO recommends that CMS (1) provide and require fraud-awareness training to its employees, (2) conduct fraud risk assessments, and (3) create an antifraud strategy for Medicare and Medicaid, including an approach for evaluation. HHS concurred with GAO's recommendations.
The design and development of information systems can be complex undertakings, involving a multitude of pieces of equipment, software products, and service providers. Each of the components of an information system may rely on one or more supply chains—that is, the set of organizations, people, activities, information, and resources that create and move a product or service from suppliers to an organization's customers. Obtaining a full understanding of the sources of a given information system can also be extremely complex. According to the Software Engineering Institute, the identity of each product or service provider may not be visible to others in the supply chain. Typically, an acquirer, such as a federal agency, may only know about the participants to which it is directly connected in the supply chain. Further, the complexity of corporate structures, in which a parent company (or its subsidiaries) may own or control companies that conduct business under different names in multiple countries, presents additional challenges to fully understanding the sources of an information system. As a result, the acquirer may have little visibility into the supply chains of its suppliers. Federal procurement law and policies promote the acquisition of commercial products when they meet the government's needs. Commercial providers of IT use a global supply chain to design, develop, manufacture, and distribute hardware and software products throughout the world. Consequently, the federal government relies heavily on IT equipment manufactured in foreign nations. Federal information and communications systems can include a multitude of IT equipment, products, and services, each of which may rely on one or more supply chains. These supply chains can be long, complex, and globally distributed and can consist of multiple tiers of outsourcing.
As a result, agencies may have little visibility into, understanding of, or control over how the technology that they acquire is developed, integrated, and deployed, as well as the processes, procedures, and practices used to ensure the integrity, security, resilience, and quality of the products and services. Table 1 highlights possible manufacturing locations of typical components of a computer or information systems network. Moreover, many of the manufacturing inputs required for these components—whether physical materials or knowledge—are acquired from various sources around the globe. Figure 1 depicts the potential countries of origin of common suppliers of various components in a commercially available laptop computer. The Federal Information Security Modernization Act (FISMA) of 2014 requires federal agencies to develop, document, and implement an agency-wide information security program to provide information security for the information systems and information that support the operations and assets of the agency. The act also requires that agencies ensure that information security is addressed throughout the life cycle of each agency information system. FISMA assigns NIST the responsibility for providing standards and guidelines on information security to agencies. In addition, the act authorizes DHS to develop and issue binding operational directives to agencies, including directives that specify requirements for the mitigation of exigent risks to information systems. NIST has issued several special publications (SP) that provide guidelines to federal agencies on controls and activities relevant to managing supply chain risk. For example, NIST SP 800-39 provides an approach to organization-wide management of information security risk; it states that organizations should monitor risk on an ongoing basis as part of a comprehensive risk management program.
NIST SP 800-53 (Revision 4) provides a catalogue of controls from which agencies are to select controls for their information systems. It also specifies several control activities that organizations could use to provide additional supply chain protections, such as conducting due diligence reviews of suppliers, developing acquisition policies, and implementing procedures that help protect against supply chain threats throughout the system development life cycle. NIST SP 800-161 provides guidance to federal agencies on identifying, assessing, selecting, and implementing risk management processes and mitigating controls throughout their organizations to help manage information and communications technology supply chain risks. In addition, as of June 2018, DHS had issued one binding operational directive related to an IT supply chain-related threat. Specifically, in September 2017, DHS issued a directive to all federal executive branch departments and agencies to remove and discontinue present and future use of Kaspersky-branded products on all federal information systems. In consultation with interagency partners, DHS determined that the risks presented by these products justified their removal. Beyond these guidelines and requirements, the Ike Skelton National Defense Authorization Act for Fiscal Year 2011 also included provisions related to supply chain security. Specifically, Section 806 authorizes the Secretaries of Defense, the Army, the Navy, and the Air Force to exclude a contractor from specific types of procurements on the basis of a determination of significant supply chain risk to a covered system. Section 806 also establishes requirements for limiting disclosure of the basis of such procurement action. In several reports issued since 2012, we have pointed out that the reliance on complex, global IT supply chains introduces multiple risks to federal information and telecommunications systems.
This includes the risk of these systems being manipulated or damaged by leading foreign cyber-threat nations such as Russia, China, Iran, and North Korea. Threats and vulnerabilities created by these cyber-threat nations, by vendors or suppliers closely linked to cyber-threat nations, and by other malicious actors can be sophisticated and difficult to detect and, thus, pose a significant risk to organizations and federal agencies. As we reported in March 2012, supply chain threats are present at various phases of a system's development life cycle. Key threats that could create an unacceptable risk to federal agencies include the following:

- Installation of hardware or software containing malicious logic, which is hardware, firmware, or software that is intentionally included or inserted in a system for a harmful purpose. Malicious logic can cause significant damage by allowing attackers to take control of entire systems and, thereby, read, modify, or delete sensitive information; disrupt operations; launch attacks against other organizations' systems; or destroy systems.

- Installation of counterfeit hardware or software, which is hardware or software containing non-genuine component parts or code. According to the Defense Department's Information Assurance Technology Analysis Center, counterfeit IT threatens the integrity, trustworthiness, and reliability of information systems for several reasons, including that (1) counterfeits are usually less reliable and, therefore, may fail more often and more quickly than genuine parts; and (2) counterfeiting presents an opportunity for the counterfeiter to insert malicious logic or backdoors into replicas or copies, which would be far more difficult in more secure manufacturing facilities.

- Failure or disruption in the production or distribution of critical products. Both man-made causes (e.g., disruptions caused by labor, trade, or political disputes) and natural causes (e.g., earthquakes, fires, floods, or hurricanes) could decrease the availability of material needed to develop systems or disrupt the supply of IT products critical to the operations of federal agencies.

- Reliance on a malicious or unqualified service provider for the performance of technical services. By virtue of their position, contractors and other service providers may have access to federal data and systems. Service providers could attempt to use their access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks.

- Installation of hardware or software that contains unintentional vulnerabilities, such as defects in code that can be exploited. Cyber attackers may focus their efforts on, among other things, finding and exploiting existing defects in software code. Such defects are usually the result of unintentional coding errors or misconfigurations and can facilitate attempts by attackers to gain unauthorized access to an agency's information systems and data, or to disrupt service.

We noted in the March 2012 report that threat actors can introduce these threats into federal information systems by exploiting vulnerabilities that could exist at multiple points in the global supply chain. In addition, supply chain vulnerabilities can include weaknesses in agency acquisition or security procedures, controls, or implementation related to an information system. Examples of the types of vulnerabilities that could be exploited include acquisition of IT products or parts from sources other than the original manufacturer or authorized reseller, such as independent distributors, brokers, or the gray market; lack of adequate testing for software updates and patches; and incomplete information on IT suppliers.
If a threat actor exploits an existing vulnerability, it could lead to the loss of the confidentiality, integrity, or availability of the system and associated information. This, in turn, can adversely affect an agency's ability to carry out its mission. In March 2012, we reported that the four national security-related agencies (i.e., Defense, Justice, Energy, and DHS) had acknowledged the risks presented by supply chain vulnerabilities. However, the agencies varied in the extent to which they had addressed these risks by (1) defining supply chain protection measures for department information systems, (2) developing implementing procedures for these measures, and (3) establishing capabilities for monitoring compliance with, and the effectiveness of, such measures. Of the four agencies, the Department of Defense had made the most progress addressing the risks. Specifically, the department's supply chain risk management efforts began in 2003 and included:

- a policy requiring supply chain risk to be addressed early and across a system's entire life cycle and calling for an incremental implementation of supply chain risk management through a series of pilot projects;

- a requirement that every acquisition program submit and update a "program protection plan" that was to, among other things, help manage risks from supply chain exploits or design vulnerabilities;

- procedures for implementing supply chain protection measures, such as an implementation guide describing 32 specific measures for enhancing supply chain protection and procedures for program protection plans identifying ways in which programs should manage supply chain risk; and

- a monitoring mechanism to determine the status and effectiveness of supply chain protection pilot projects, as well as monitoring compliance with, and effectiveness of, program protection policies and procedures for several acquisition programs.
Conversely, our report noted that the other three agencies had made limited progress in addressing supply chain risks for their information systems. For example:

- The Department of Justice had defined specific security measures for protecting against supply chain threats through the use of provisions in vendor contracts and agreements. Officials identified (1) a citizenship and residency requirement and (2) a national security risk questionnaire as two provisions that addressed supply chain risk. However, Justice had not developed procedures for ensuring the effective implementation of these protection measures or a mechanism for verifying compliance with, and the effectiveness of, these measures. We stressed that, without such procedures, Justice would have limited assurance that its departmental information systems were being adequately protected against supply chain threats.

- In May 2011, the Department of Energy revised its information security program, which required Energy components to implement provisions based on NIST and Committee on National Security Systems guidance. However, the department was unable to provide details on implementation progress, milestones for completion, or how supply chain protection measures would be defined. Because it had not defined these measures or associated implementing procedures, we reported that the department was not in a position to monitor compliance or effectiveness.

- Although its information security guidance mentioned the NIST control related to supply chain protection, DHS had not defined the supply chain protection control activities that system owners should employ. The department's information security policy manager stated that DHS was in the process of developing policy that would address supply chain protection, but did not provide details on when it would be completed. In the absence of such a policy, DHS was not in a position to develop implementation procedures or to monitor compliance or effectiveness.
To assist Justice, Energy, and DHS in better addressing IT supply chain-related security risks for their departmental information systems, we made eight recommendations to these three agencies in our 2012 report. Specifically, we recommended that Energy and DHS develop and document departmental policy that defines which security measures should be employed to protect against supply chain threats. We also recommended that Justice, Energy, and DHS develop, document, and disseminate procedures to implement the supply chain protection security measures defined in departmental policy, and develop and implement a monitoring capability to verify compliance with, and assess the effectiveness of, supply chain protection measures. The three agencies generally agreed with our recommendations and, subsequently, implemented seven of the eight recommendations. Specifically, we verified that Justice and Energy had implemented each of the recommendations we made to them by 2016. We also confirmed that DHS had implemented two of the three recommendations we made to that agency by 2015. However, as of fiscal year 2016, DHS had not fully implemented our recommendation to develop and implement a monitoring capability to verify compliance with, and assess the effectiveness of, supply chain protections. Although the department had developed a policy and approach for monitoring supply chain risk management activities, it could not provide evidence that its components had actually implemented the policy. Thus, we were not able to close the recommendation as implemented. Nevertheless, the implementation of the seven recommendations and partial implementation of the eighth recommendation better positioned the three agencies to monitor and mitigate their IT supply chain risks.
In addition, we reported in March 2012 that the four national security-related agencies had participated in interagency efforts to address supply chain security, including participation in the Comprehensive National Cybersecurity Initiative, development of technical and policy tools, and collaboration with the intelligence community. In support of the cybersecurity initiative, Defense and DHS jointly led an interagency initiative on supply chain risk management to address issues of globalization affecting the federal government's IT. Also, DHS had developed a comprehensive portfolio of technical and policy-based product offerings for federal civilian departments and agencies, including technical assessment capabilities, acquisition support, and incident response capabilities. The efforts of the four agencies could benefit all federal agencies in addressing their IT supply chain risks. In summary, the global IT supply chain introduces a myriad of security risks to federal information systems that, if realized, could jeopardize the confidentiality, integrity, and availability of those systems. Thus, the potential exists for serious adverse impact on an agency's operations, assets, and employees. These factors highlight the importance and urgency of federal agencies appropriately assessing, managing, and monitoring IT supply chain risk as part of their agency-wide information security programs. Chairmen King and Perry, Ranking Members Rice and Correa, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to answer your questions. If you have any questions regarding this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov. Other key contributors to this statement include Jeffrey Knott (Assistant Director), Christopher Businsky, Nancy Glover, and Rosanna Guerrero. This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
IT systems are essential to the operations of the federal government. The supply chain—the set of organizations, people, activities, and resources that create and move a product from suppliers to end users—for IT systems is complex and global in scope. The exploitation of vulnerabilities in the IT supply chain is a continuing threat. Federal security guidelines provide for managing the risks to the supply chain. This testimony statement highlights information security risks associated with the supply chains used by federal agencies to procure IT systems. The statement also summarizes GAO's 2012 report that assessed the extent to which four national security-related agencies had addressed such risks. To develop this statement, GAO relied on its previous reports, as well as information provided by the national security-related agencies on their actions in response to GAO's previous recommendations. GAO also reviewed federal information security guidelines and directives. Reliance on a global supply chain introduces multiple risks to federal information systems. Supply chain threats are present during the various phases of an information system's development life cycle and could create an unacceptable risk to federal agencies. Information technology (IT) supply chain-related threats are varied and can include: installation of intentionally harmful hardware or software (i.e., containing "malicious logic"); installation of counterfeit hardware or software; failure or disruption in the production or distribution of critical products; reliance on malicious or unqualified service providers for the performance of technical services; and installation of hardware or software containing unintentional vulnerabilities, such as defective code. These threats can have a range of impacts, including allowing adversaries to take control of systems or decreasing the availability of materials needed to develop systems.
These threats can be introduced by exploiting vulnerabilities that could exist at multiple points in the supply chain. Examples of such vulnerabilities include the acquisition of products or parts from unauthorized distributors; inadequate testing of software updates and patches; and incomplete information on IT suppliers. Malicious actors could exploit these vulnerabilities, leading to the loss of the confidentiality, integrity, or availability of federal systems and the information they contain. GAO reported in 2012 that the four national security-related agencies in its review—the Departments of Defense, Justice, Energy, Homeland Security (DHS)—varied in the extent to which they had addressed supply chain risks. Of the four agencies, Defense had made the most progress addressing the risks. It had defined and implemented supply chain protection controls, and initiated efforts to monitor the effectiveness of the controls. Conversely, Energy and DHS had not developed or documented policies and procedures that defined security measures for protecting against IT supply chain threats and had not developed capabilities for monitoring the implementation and effectiveness of the measures. Although Justice had defined supply chain protection measures, it also had not developed or documented procedures for implementing or monitoring the measures. Energy and Justice fully implemented the recommendations that GAO made in its 2012 report and resolved the deficiencies that GAO had identified with their supply chain risk management efforts by 2016. DHS also fully implemented two recommendations to document policies and procedures for defining and implementing security measures to protect against supply chain threats by 2015, but could not demonstrate that it had fully implemented the recommendation to develop and implement a monitoring capability to assess the effectiveness of the security measures. 
In its 2012 report, GAO recommended that Justice, Energy, and DHS take eight actions, as needed, to develop and document policies, procedures, and monitoring capabilities that address IT supply chain risk. The departments generally concurred with the recommendations and subsequently implemented seven recommendations and partially implemented the eighth recommendation.
In 2016, commercial trucks transported about 70 percent of all U.S. freight, and over 250,000 heavy trucks were sold in the same year. These trucks operate within a diverse industry that can be distinguished in several ways: Long-haul vs. local-haul. Long-haul trucking operations are so named because the drivers frequently drive hundreds of miles for a single route and can be on the road for days or weeks at a time. For these operations, freight is usually shipped from a single customer and may fill an entire trailer by either space or weight. Long-haul trucking also includes “less-than-truckload” freight shipments, or freight combined from multiple customers. In comparison, local-haul trucking operations may involve delivering packages and shipments between a customer and a freight company’s drop-off point, where they are combined with other shipments in preparation to move them over longer distances. This type of operation also includes local cement trucks, as well as moving shipping containers at ports and moving freight a short distance from a train that has transported it long-distance to near its destination. For-hire vs. private (in-house). Different types of companies—or carriers—engage in long-haul and local trucking and are known either as “for-hire” (those that transport goods for others) or “private” (those that transport their own goods in their own trucks). For instance, J.B. Hunt is a for-hire carrier that transports goods for clients, while Walmart is a private carrier that uses its in-house fleet of trucks to transport its own goods between its distribution centers and its stores. Carrier size. In addition, carriers vary in size, with fleets ranging from one truck to tens of thousands of trucks. For example, a person might own and drive one for-hire truck; these are known as “owner- operators.” By contrast, the largest for-hire trucking companies in the country can have fleets of over 20,000 tractors and even more trailers. Operating costs. 
Driver compensation and fuel are the two largest cost components for truck carriers (which is larger depends on the price of fuel), and each typically accounts for about one-third of total operating costs. Other operating costs include purchasing truck tractors and trailers, as well as repair and maintenance of the trucks and trailers, and insurance. BLS data indicate that in 2017, the United States had nearly 1.9 million truck drivers categorized as “heavy and tractor-trailer truck drivers,” who operate trucks over 26,000 pounds. This category includes many different kinds of drivers, including long-haul and local-haul, along with cement or garbage truck drivers and drivers of specialty loads, such as trucks transporting cars, logs, or livestock. The number of heavy and tractor-trailer truck drivers has increased over the last 5 years, from fewer than 1.6 million in 2012, and is projected to increase to about 2 million drivers by 2026. The trucking industry has also had high annual driver turnover, according to industry reports—approaching 100 percent for large truckload carriers, though it can be lower for smaller truckload carriers. This turnover includes drivers who move to other carriers and others who leave the field altogether or retire. Some companies that experience lower turnover rates are able to provide drivers with predictable schedules and coordinate around the various obligations the drivers may have. Firms must balance the costs of scheduling drivers to return home more frequently with the costs of high turnover rates. Industry reports have noted that companies find it difficult to hire and retain sufficient numbers of long-haul drivers, even with wages reportedly rising for many drivers. Heavy and tractor-trailer truck drivers make more on average—$44,500 in 2017—than other types of drivers, according to BLS data. Many drivers, including most drivers working in long-haul trucking, are compensated on a per-mile basis rather than a per-hour basis.
The per-mile rate varies from employer to employer and may depend on the type of cargo and the experience of the driver. Some long-haul truck drivers are paid a share of the revenue from shipping. In order to operate certain commercial vehicles, including heavy trucks and tractor-trailers, drivers must obtain a state-issued commercial driver’s license (CDL). DOT administers the federal CDL program through the Federal Motor Carrier Safety Administration by setting federal standards for knowledge and driving skills tests, among other requirements. CDL applicants must have a state motor vehicle driver’s license and must be at least 21 years old to operate in interstate commerce. Prior to receiving a CDL, applicants must first pass the knowledge test and meet other federal requirements, after which they are eligible to pursue a commercial learner’s permit. After receiving the learner’s permit, applicants must wait at least 14 days before taking the skills test. During this period, applicants may train on their own with a CDL holder, with a truck driver training school—a private school or public program run through a community college, for example—or with a motor carrier to prepare for the skills test. Applicants must pass all three parts of the skills test—pre-trip inspection, basic control skills, and an on-the-road driving test—in the type of vehicle they intend to operate with their license. Apart from the CDL requirements, some truck driving jobs (such as those that involve handling hazardous materials) require additional endorsements, and some employers require on-the-job training. DOL and other federal agencies administer programs that can be used to provide training for truck drivers. For example, DOL administers federal employment and training programs, such as those funded through the Workforce Innovation and Opportunity Act (WIOA), which provide training dollars that can be used by prospective truck drivers, among others.
Likewise, the Department of Education provides federal student aid funds that can be used at eligible accredited trucking schools, and DOT and the Department of Veterans Affairs both operate programs that can assist veterans interested in becoming truck drivers. Federal regulation of trucking is focused primarily on interstate trucking activity; states can have separate regulations related to intrastate motor carriers. DOT is the lead federal agency responsible for overall vehicle safety, including commercial truck safety. The agency also regulates other aspects of commercial trucking, such as the maximum number of hours truck drivers are allowed to drive. For example, under current hours of service regulations, a truck driver may drive a maximum of 11 total hours within a 14-hour window after coming on duty. In addition, DOT regulates CDL standards and the maximum weight of trucks allowed on the Interstate Highway System, among other things. Until recently, DOT’s National Highway Traffic Safety Administration led automated vehicles policy with a focus on passenger vehicles. However, DOT’s October 2018 federal automated vehicles policy was developed by the Office of the Secretary of Transportation and includes several different modes of transportation, including automated commercial trucks. Automated vehicles can perform certain driving tasks without human input. They encompass diverse automated technologies ranging from relatively simple driver assistance systems to self-driving vehicles. Certain automated features, like adaptive cruise control, can adjust vehicle speed in relation to other objects on the road and are currently available on various truck models. DOT has adopted a framework for automated driving developed by the Society of Automotive Engineers International, which categorizes driving automation into 6 levels (see fig. 1). 
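The six levels of the Society of Automotive Engineers International framework, as described in this section, can be summarized as a simple lookup table. The sketch below is purely illustrative, not an official taxonomy API; the Level 2 description is not detailed in this report and is filled in from the SAE framework generally, and the helper function name is hypothetical.

```python
# Illustrative sketch of the SAE driving automation levels referenced above.
# Descriptions are paraphrased; Level 2 is taken from the SAE framework
# generally rather than from this report.
SAE_LEVELS = {
    0: "No automation: human performs all driving; systems may only warn",
    1: "Driver assistance: automation controls one function, e.g., speed or steering",
    2: "Partial automation: combined control; human monitors at all times",
    3: "Conditional automation: system drives in certain conditions; human stays aware",
    4: "High automation: system drives fully in defined conditions, e.g., highways",
    5: "Full automation: system drives in all conditions; no human needed",
}

def human_attention_required(level: int) -> bool:
    """Per the framework described above, a human must maintain situational
    awareness through Level 3; at Levels 4 and 5 the system handles the
    driving task within its design domain."""
    if level not in SAE_LEVELS:
        raise ValueError(f"unknown SAE level: {level}")
    return level <= 3

print(human_attention_required(3))  # True
print(human_attention_required(4))  # False
```

The boundary between Levels 3 and 4 is the distinction this section emphasizes: only at Level 4 and above would a driver not be required to take over in the system's operating conditions.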
Commercial trucks with Level 0 and 1 technologies, as outlined in figure 1, are already available for private ownership and are currently used on public roadways. Level 0 encompasses conventional trucks where a human driver controls all aspects of driving and technologies can warn drivers of safety hazards, such as lane departure warning, but do not take control away from the driver and are not considered automated. Level 1 technologies incorporate automatic control over one major driving function, such as steering or speed, and examples include adaptive cruise control and automatic emergency braking. The Society of Automotive Engineers International categorizes vehicles with Level 3, 4, and 5 technologies as Automated Driving Systems. At Level 3, the system can take full control of the vehicle in certain conditions. However, a human driver must maintain situational awareness at all times to ensure the vehicle is functioning safely. At Level 4, automation controls all aspects of driving in certain driving conditions and environments, such as on highways in good weather. In these particular driving conditions and environments, a human driver would not be required to take over the driving task from the automated vehicle and the system would ensure the vehicle is functioning safely. At Level 5, the vehicle can operate fully, in any condition or environment, without a human driver or occupant. There are various automated vehicle technologies that could help guide a vehicle capable of driving itself, including cameras and other sensors (see fig. 2). According to stakeholders we spoke with and literature we reviewed, automated trucks, including self-driving trucks, are being developed, generally for long-haul trucking. Specifically, we found there could be various types of automation for long-haul trucks, including platooning, self-driving for part of a route, and self-driving for an entire route. Platooning. 
Technology developers and researchers told us there is ongoing development and testing of truck platoons, which involve one or more trucks following closely behind a lead truck, linked by wireless—or vehicle-to-vehicle—communication (see fig. 3). In a platoon, the driver in the lead truck controls the braking and acceleration for all of the connected trucks in the platoon, while the driver in each following truck controls its own steering. Several stakeholders we interviewed and three studies we reviewed identified potential benefits from platooning, including fuel savings and increased safety, for example, due to the trucks’ faster reaction times for braking. Self-driving for part of a route. Most of the technology developers we spoke with said they were developing automated trucks that will be self-driving for part of a long-haul route, such as exit-to-exit on highways (see fig. 4). Representatives from one developer explained that their truck uses self- driving software installed on the truck. The software instructs the truck what to do, such as to steer or brake. In addition, cameras and other sensors on the truck’s exterior provide the self-driving software with a view of the truck’s surroundings to inform the software’s instructions. For example, Light Detection and Ranging (LIDAR) sensors use lasers to map a truck’s surroundings (see fig. 5). Such trucks would operate with no driver intervention under favorable conditions, such as on highways in good weather. Two developers said that in their business models a driver would be in the truck for the first and last portions of the route to assist with picking up and dropping off trailers at hubs outside urban areas. Alternatively, one developer said a remote driver—one not in the truck but operating controls from another location—would drive the first and last portions of a route. 
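One safety rationale stakeholders cited for platooning is the trucks' faster braking reaction over the vehicle-to-vehicle link. A rough headway calculation illustrates the effect; the speed and latency figures below are assumptions chosen for illustration, not values from this report.

```python
def reaction_gap_m(speed_mph: float, reaction_s: float) -> float:
    """Distance (in meters) a following truck travels between the moment
    the lead truck brakes and the moment the follower begins braking."""
    speed_ms = speed_mph * 0.44704  # convert mph to m/s
    return speed_ms * reaction_s

speed = 65.0        # assumed highway speed, mph
human_delay = 1.5   # assumed human perception-reaction time, seconds
v2v_delay = 0.1     # assumed vehicle-to-vehicle signaling delay, seconds

# A shorter reaction gap is what lets platooned trucks follow closely,
# which in turn produces the drafting-related fuel savings noted above.
print(round(reaction_gap_m(speed, human_delay), 1))  # 43.6 (meters)
print(round(reaction_gap_m(speed, v2v_delay), 1))    # 2.9 (meters)
```

Under these assumed numbers, V2V braking cuts the reaction gap by more than an order of magnitude, which is the intuition behind both the safety and fuel-economy claims for platooning.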
Stakeholders identified potential benefits of self-driving for part of a route, such as increased safety, labor cost savings, and addressing what they said is a truck driver shortage. Research funded by industry also suggests that an automated truck could improve productivity by, for example, continuing to drive to a destination while a human in the truck conducts other work or rests. In addition, one study noted that the most likely scenario for widespread adoption of automated trucks is the one in which trucks are capable of self-driving from exit-to-exit. Self-driving for an entire route. None of the technology developers we interviewed told us they are planning to develop automated trucks that are self-driving for an entire route (see fig. 6). Such trucks would be able to drive under all weather and environmental conditions. A person would not be expected to operate these trucks at any time. The potential benefits of these kinds of trucks are similar to those of trucks that are self-driving for part of a route, with higher potential labor savings because a person would not need to drive the first and last portions of a route. Stakeholders we spoke with generally indicated that it will be years to decades before the widespread deployment of automated commercial trucks (see text box). However, many stakeholders also noted the uncertainty of predicting a specific timeframe for particular technologies. Platooning. Many stakeholders said that platooning will likely deploy within the next 5 years and will be the first automated trucking technology to be widely available. Notably, one company that is developing platooning technology said it could begin deployment in 2019. In addition, DOT officials told us that truck platoons are currently being tested, but that it would be difficult to estimate when there might be widespread adoption of platooning technology. Self-driving for part of a route. 
Automated trucks that are self-driving for part of a route may become available for commercial use within the next 5 to 10 years, according to several stakeholders, including technology developers. While such trucks may begin appearing on roads in that timeframe, other stakeholders, including two researchers, said widespread deployment may take more than 10 years. DOT officials noted that multiple variables make it difficult to develop a precise estimate for the deployment and widespread adoption of trucks that are self-driving for part of a route. Self-driving for an entire route. Although none of the technology developers told us they are developing trucks that would be self-driving for an entire route, other stakeholders we spoke with said such trucks could become available in more than a decade. However, most stakeholders either did not provide a timeframe for, or said they did not know, when such trucks might become available. Similarly, at a listening session in August 2018, DOT officials told attendees that it will be decades before large trucking operations replace their fleets of conventional trucks with trucks that self-drive for an entire route.

One Stakeholder’s Description of Anticipated Timeframes for Overall Automated Truck Adoption
One researcher described an anticipated timeframe for automated truck adoption in which there is an initial, long period of development and testing, which would include making technological adjustments. This period would then be followed by a period of automated truck adoption—i.e., when such trucks replace human drivers. At that point, technology developers and truck manufacturers would also encounter scenarios in which it may not be desirable to use an automated truck, such as for the transport of hazardous materials, according to the researcher. Such scenarios would limit the extent to which automated trucks could replace human drivers.
Stakeholders we interviewed and the literature we examined identified technological, operational, infrastructure, legal, and other factors that may affect automated truck development and deployment. Stakeholders and literature identified several technology-related limitations that may affect the timing of automated truck deployment. Specifically, several stakeholders and a study noted that automated trucks may require simpler operating environments, such as highways, in the near term because they are less complex for the technology to navigate than roads in an urban setting, for example. Even so, a highway presents its own challenges, several stakeholders said. For instance, a developer, a manufacturer, and a researcher we spoke with told us that Light Detection and Ranging (LIDAR)—a costly and complex technology—may not be as useful at higher speeds due to its limited range and its inability to process information about the surrounding environment as quickly as needed at these speeds. Further, one manufacturer told us that LIDAR is not as durable as it needs to be for commercial trucking—for example, able to withstand dirt and debris. Stakeholders also discussed the need to have backup systems built into trucks’ automated systems in case of technology failures, including the ability to guide the truck to a safe stop. Stakeholders identified several operational factors that may pose challenges for the deployment of automated trucks. For example, several stakeholders said that there may be challenges with self-driving trucks with no person inside when responding to a tire blowout or other mechanical problems. Likewise, several stakeholders said there must be ways for a self-driving truck to respond to required safety inspections and communicate with inspectors. Representatives from a safety organization noted that a truck could potentially communicate a unique identification number through an electronic device. 
This number would give the inspector information about the truck, such as safety information from the sensors on automated trucks. Additionally, several stakeholders said platooning may not be practical for logistical reasons, for instance, if trucks are not traveling on the same routes or if cargo is not ready to depart at the same time. In addition, according to stakeholders we spoke with and literature we reviewed, the lead truck in a platoon will save less on fuel than the following trucks. If trucking fleets adopt platooning systems that work on commercial trucks across different companies—i.e., systems that are interoperable—distributing fuel savings in a manner agreeable to all parties involved may be challenging. Representatives from two fleet owners and one industry association we spoke with raised concerns about platooning across different companies, including that companies might not partner with other fleets to platoon trucks because they would be primarily concerned with their own fuel savings, not with saving fuel for their competitors. In addition to these operational factors, stakeholders noted that automated trucks may be prohibitively expensive for some smaller fleet owners, including owner-operators, particularly when these trucks are first deployed. Several stakeholders and relevant literature noted that certain infrastructure factors may affect the development, testing, and deployment of automated trucks. For example, a few stakeholders said if one truck picks up or drops off trailers for another truck at a location near highways, land acquisition near these highways may be an issue. Representatives from a developer that planned to acquire land for its business model said the land acquisition could take 5 to 10 years. The representatives explained that they found enabling direct access to freeways is more difficult than simply acquiring vacant land. 
They planned to partner with states to create hubs on under-utilized land with existing freeway access by, for example, repurposing abandoned rest stops. In addition to land acquisition, two technology developers and a study identified the need for widely available data connectivity and the related ability to use connected vehicle technologies as an infrastructure challenge. Connected technologies allow vehicles to communicate with other vehicles (vehicle-to-vehicle), roadway infrastructure (vehicle-to-infrastructure), and personal communication devices. Connectivity has potential implications for, among other things, the maps self-driving trucks use to navigate routes and obstacles, as well as the ability for trucks in a platoon to communicate with one another effectively. However, because the ability for vehicles to communicate with infrastructure is not ubiquitous, two of the developers we spoke with are not taking into account connected infrastructure as they develop and test their automated trucks. Two stakeholders also expressed concern about platooning trucks and the stress they could place on bridges, for example, that were not designed to hold the weight of two or more heavy trucks at once. In addition, stakeholders noted that automated trucks may encounter difficulties with things like road work or construction zones. This may be because the truck relies on pre-built maps, in addition to sensors, that would potentially be outdated or might not reflect current road conditions, including any recent or temporary changes. Several legal factors may affect the timing of development, testing, and deployment for automated trucks, according to our stakeholder interviews and literature review. Many stakeholders expressed concern about the possibility of a “patchwork” of state laws related to automated trucks that could affect interstate trucking, with some saying they would like to see a shared national framework.
For example, one technology developer said that this emerging patchwork can make it difficult for an automated truck to travel across the country without a driver, because some states specifically prohibit self-driving vehicles, including trucks. However, this same developer said that some states are less restrictive regarding the need for a driver in a self-driving truck, and that others have ambiguous regulations. Several stakeholders we spoke with and two studies we reviewed noted that liability issues may arise and become more complex for automated trucks. This may be because, for example, more parties may become involved. One of these stakeholders—a fleet owner—said that these parties could include the software developer, the truck manufacturer, the owner of the truck, and, if applicable, the truck driver. These issues could be addressed under the current liability system, and courts would decide the various liability issues on a case-by-case basis. In addition, several stakeholders have requested that DOT clarify whether existing regulations require that human drivers always be present in automated trucks, particularly those capable of Level 4 and 5 driving automation, in which at least some of the driving is done by the automated truck. Two technology developers have requested that DOT confirm that regulations that apply to human drivers do not apply to automated trucks, and one of these developers also requested confirmation that a truck capable of at least Level 4 automation is allowed to operate without a human on board, which could permit testing without a person in the truck. In Preparing for the Future of Transportation: Automated Vehicles 3.0, DOT’s automated vehicles voluntary guidance, the agency laid out its approach to its automated vehicles policy. 
DOT’s guidance stated that, going forward, DOT will interpret and, consistent with all applicable notice and comment requirements, adapt the definitions of “driver” and “operator” to recognize that such terms do not refer exclusively to a human, but may include an automated system. In the same guidance document, DOT also noted that regulations will no longer assume that the driver of a commercial truck is always human or that a human is necessarily present inside of a truck during its operation. A few stakeholders also said that DOT may have to clarify the hours of service rules if a human driver is in an automated truck that is self-driving for part or all of a route. This is because under current hours of service regulations, a human driver may drive a maximum of 11 total hours within a 14-hour window after coming on duty. However, if a truck self-drives for at least part of a route, it is unclear if a human driver would need to comply with the existing hours of service requirements and, if not, how the driver would account for worked time. For example, if the human driver is not actively engaged in the driving task, whether monitoring the automated driving system or even sleeping, there could be a question about whether that time would be counted toward “driving,” according to the requirements. For a list of potential legal factors identified by stakeholders or in literature that may affect timing for the development and deployment of automated commercial trucks, and related DOT information, see appendix II. Stakeholders and relevant literature identified several other factors, such as public perception and cybersecurity, that could affect timing for the development and deployment of automated trucks. Several stakeholders we interviewed and a study we reviewed noted that public acceptance concerning the safety of platooning and self-driving trucks may pose a challenge to the deployment of these trucks. 
One researcher we spoke with said interactions between truck platoons and cars may be problematic, because drivers may need to speed in order to change lanes around the platoons of trucks following each other closely. Similarly, other stakeholders told us that it may be difficult for the public to accept large automated commercial trucks. Two of these stakeholders said this is particularly true for a heavy truck without a human driver on board— implying that vehicle size and weight play roles in the public’s acceptance of these types of automated vehicles. Several stakeholders also expressed concerns about cybersecurity and automated trucks’ reliance on wireless communication and self-driving software. They said connectivity could leave automated trucks vulnerable to cyberattacks. Predicting workforce changes in light of future automated trucking is inherently challenging, as it is based on uncertainties about how the trucking industry will respond to new technologies that face operational, regulatory, and other factors that could affect deployment. Many of the stakeholders we interviewed declined to predict various possible workforce effects, because they said to do so was too speculative. However, stakeholders we spoke with and literature we reviewed presented two main scenarios for the future trucking workforce: one in which trucks would be self-driving for part of a route, without a driver or operator, and the other in which trucks would require a driver or operator in the truck for the entire route. An operator would monitor truck operations and may not always function as a traditional driver. Because most stakeholders agreed that the prospect of using fully self-driving trucks for an entire route is either unlikely or at least several decades into the future—and no developer we spoke with was planning to develop a fully self-driving truck—we do not discuss the workforce effects of that scenario in this report. 
Technology developers we spoke with generally envisioned trucks that are self-driving for part of a route, which they said would potentially lead to significant workforce changes. Several technology developers and researchers, along with two studies, said trucks that are self-driving for part of a route could decrease the number of long-haul drivers, and perhaps decrease wages and affect retention as well. Additionally, any displaced drivers may need new skills if they change jobs, according to several stakeholders we spoke with and studies we reviewed. Employment levels: Technology developers we interviewed generally predicted the number of long-haul jobs would decrease with the adoption of trucks that are self-driving for part of a route. Drivers constitute a significant operational cost, so part of the reported economic rationale for self-driving trucks is to employ fewer drivers, allowing companies to transport the same amount of freight—or more—at lower labor costs. Several studies have analyzed the potential number of driving jobs that might be eliminated in this scenario, but the studies specifically noted the speculative, long-term nature of those estimates and the inability to identify the number of current long-haul truck drivers whose jobs could be lost sometime in the future. Estimates in the studies we reviewed ranged from under 300,000 driver jobs lost to over 900,000 jobs lost—out of a total of nearly 1.9 million heavy and tractor-trailer truck driver jobs, according to BLS data—and in each case over periods of 10 to 20 years or more. Although long-haul jobs would decrease in this scenario, local-haul jobs could increase and offset those losses, according to a study and several stakeholders, including two technology developers. The study, for example, said that automated trucking would drive long-haul trucking costs down, leading more companies to use trucking to ship goods. 
As a result, demand for trucking could increase, leading to an increased demand for local-haul truck drivers on either end of the long-haul routes, two studies noted. Several stakeholders we spoke with agreed that any decrease in long-haul jobs would likely not affect many current drivers because most will have voluntarily left driving for a different job or retired by the time self-driving trucks are widely deployed. According to the Census Bureau’s American Community Survey data, the average age of truck and sales delivery drivers from 2012 through 2016 was 46. Many stakeholders also said that trucking fleets are currently having difficulty hiring and retaining qualified drivers, and two technology developers said automation could help move goods in an environment in which it is difficult to find workers. Technology developers also told us they are focusing the initial development of automated trucking technology in the southwest United States because of its good weather and long highways. As a result, any future job losses could first occur there. Additionally, BLS data show that the estimated concentration of truck driving jobs varies in different areas of the country (see fig. 7). One study noted that trucking job losses in more regionally concentrated occupations are likely to pose more challenges for workers, because more workers with similar skills in the same labor markets will be out of work at the same time, and thus the whole local economy will be more likely to suffer. Wages: If the truck is self-driving for parts of a route, wages for long-haul drivers could decrease because there would be lower demand for—or greater supply of—such drivers, according to several stakeholders. Moreover, one study noted that average long-haul wages could decrease because the jobs most likely to be automated include those that tend to be unionized and have higher wages and benefits, such as jobs at parcel delivery companies and some private carriers.
Similarly, drivers changing occupations might face significant wage reductions in new occupations that do not require retraining, according to a researcher and one study. Wages for local-haul drivers—generally lower than for long-haul drivers—could decrease as well, because transitioning long-haul drivers could increase competition for those jobs, according to two studies. One technology developer presented a different perspective, saying that wages for local-haul drivers could increase from current levels due to increased overall demand for trucking. Retention: Overall, retention of truck drivers could improve if the long-haul portion of the route becomes self-driving, lessening time drivers spend away from home—a key reason long-haul drivers leave the profession, according to many stakeholders. However, retention may depend on several factors, including wages, time at home, and other working conditions, making it more difficult to predict self-driving trucks’ effect on retention. Skills: Long-haul drivers have skills that would transfer to local-haul routes, so additional training may not be needed for those who move to local-haul routes. However, displaced long-haul drivers seeking to move to a different occupation or industry may need additional training, according to several stakeholders and two studies. From 2012 through 2016, the highest level of educational attainment for almost 65 percent of truck and sales delivery drivers was high school or its equivalent. Most officials from truck driver training schools, organizations representing truck drivers, and workforce development boards envisioned automated trucks as continuing to need either a driver or some kind of operator in the truck, with several noting that drivers may need to do non-driving tasks. Automated trucking with an operator in the truck would have a more limited effect on the numbers of truck drivers, but would still result in workforce changes, according to several stakeholders.
As with the driverless scenario, many stakeholders said future developments were so uncertain that they could not predict how automated trucking would affect various aspects of the workforce, such as wages or retention. Employment levels: Under this scenario, automated trucking would have a more limited effect on employment levels. Several stakeholders noted, for example, that a person would still be needed in the truck to manage emergencies, repair flat tires, and secure cargo, among other duties. (See text box.) For example, one study noted that even for trucking jobs identified as the most likely to be automated, driving may represent only about half of drivers’ total work time. Additionally, particular kinds of long-haul trucking may present different non-driving tasks that could make automating those driving jobs more difficult. Wages: If the truck has an operator, several stakeholders said that wages might increase if increased skills are needed to operate more sophisticated equipment. However, several other stakeholders said wages might not change significantly or could decrease with fewer driving tasks. Two studies noted that wage changes were difficult to predict and could be affected by specific policy interventions.

Truck Drivers: Responsible for More than Just Driving
Truck drivers have many responsibilities other than driving a truck. Non-driving tasks for heavy and tractor-trailer truck drivers can include: checking vehicles to ensure that mechanical, safety, and emergency equipment is in good working order; loading or unloading trucks, including checking contents for any damage; inspecting loads to ensure that cargo is secure; and performing basic vehicle maintenance tasks, such as adding fuel or radiator fluid, performing minor repairs, or removing debris from loaded trailers.
Retention: Many stakeholders said new technology could help the trucking industry bring in and retain more people—such as women and younger workers—if it could, for example, make truck driving safer, less stressful, and less physically demanding. Others cautioned that automated technology may not decrease truck operators’ time away from home, because they would still have to be in the truck for the entirety of long-haul routes. One stakeholder, who was also a truck driver, said that many truck drivers enjoy driving, so automating aspects of that task would not necessarily entice those drivers to stay in the job. Two other stakeholders noted that some drivers may not want to learn how the new technology works and could leave the field rather than drive automated trucks. Skills: Future truck operators may need new skills to work with automated technology that assists rather than replaces them, many stakeholders noted. For example, operators may need to adapt to technology that takes over a number of the standard driving functions, such as braking, staying in a designated lane, and keeping a safe distance from other vehicles. Operators may also need to understand how to monitor software and hardware used to automate the driving function and how to make appropriate use of advanced safety systems. Furthermore, officials from many truck driver training schools and workforce development boards said additional certification beyond the standard CDL may be needed in order to demonstrate an understanding of how to operate the technology in automated trucks. In some instances, the skills needed may vary across trucking companies and trucks, requiring further on-the-job training. Regardless of their vision for how automated trucking might materialize, many stakeholders said there could be new trucking-related occupations, such as specialized technicians, mechanics, and engineers, which will accompany the deployment of automated trucks. 
For example, one study noted that these jobs could include producing the technology used by automated trucks, in addition to jobs created as a result of potential greater spending on other consumer goods and services, in the event that automated trucking decreases overall industry transportation costs. Another study noted that autonomous trucks, e-commerce, and economic growth are together poised to create many new trucking jobs. However, new jobs may be located in different geographical areas than any jobs lost, and as noted above, may require different skills than the prior jobs. One study noted this development could potentially leave lower-skilled workers competing for jobs that pay little and have few opportunities for advancement. While many stakeholders we spoke with and several studies we reviewed stated that the potential workforce effects of automated trucking were difficult to predict, they generally agreed that any effect would not occur for at least 5 to 10 years. Several stakeholders and two studies said this time horizon provides an opportunity for federal agencies and workers to prepare for potential workforce changes. One of these studies noted that trucking policy is complex; any changes could take a long time to fully materialize. That same study suggested that now is the appropriate time for policy research and debate. The other study and several stakeholders stated that potential workforce effects are not set in stone, and that public policy could influence specific workforce outcomes. That study said that with advance planning, the federal government and other stakeholders could realize the possible benefits of automated trucks and other vehicles while mitigating potential workforce effects and other costs. DOT and DOL have both taken some steps to prepare for the potential workforce effects of automated trucking. 
DOT has held events to obtain stakeholder perspectives on automated vehicles policy, including how it affects commercial long-haul trucks. For example, DOT held public listening sessions in 2017 and 2018 to solicit information on the design, development, testing, and integration of Automated Driving Systems, and issued requests for comment to inform potential rulemaking efforts for the Federal Motor Carrier Safety Regulations. DOT officials said their role during these discussions was to hear stakeholder concerns. They also said that their ongoing goal is to identify barriers in their regulations to the safe deployment of automated driving technology. Stakeholders have raised concerns about the potential workforce effects of automated trucks at DOT’s listening sessions. For example, after participants questioned potential job losses at a listening session in August 2018, DOT officials said that automation may eventually change the role of a truck driver from driver to technician and that any changes would probably not be immediate. DOL officials said they have participated in some of DOT’s listening sessions. For its part, DOL has taken steps to study how automated trucking may affect the near-term demand for truck drivers as part of its standard, biennial employment projections for all occupations. DOL officials said they consulted experts and economic studies prior to publishing their most recent projections, covering 2016 to 2026, and included information on possible effects of automation in projections for heavy and tractor-trailer truck drivers. The projections state that the demand for these drivers is expected to grow by 5.8 percent between 2016 and 2026, with an average of over 200,000 job openings each year, of which 10,000 are projected to be new jobs. DOL’s analysis anticipated that automation will not reduce the number of drivers by 2026. DOL officials said that they expect automation to assist drivers rather than displace them in the near term.
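These projection figures hang together arithmetically, as the rough check below shows. It is illustrative only: the base employment level of roughly 1.9 million drivers is approximate, drawn from the report's separate employment statistics rather than from the projections themselves.

```python
# Rough consistency check on the 2016-2026 projections cited above
# (illustrative only; the base employment figure is approximate).
base_2016 = 1_900_000    # approximate heavy and tractor-trailer drivers, 2016
growth_rate = 0.058      # projected employment growth, 2016 to 2026

new_jobs_decade = base_2016 * growth_rate    # about 110,000 new jobs in total
new_jobs_per_year = new_jobs_decade / 10     # about 11,000 per year

# Most of the roughly 200,000 projected annual openings are therefore
# replacement needs (retirements and occupational transfers), not new jobs.
replacement_per_year = 200_000 - new_jobs_per_year
```

The arithmetic makes clear that even under these growth projections, the large majority of annual openings reflect turnover in the existing workforce rather than newly created positions.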
Unlike estimates developed by other researchers, these numbers do not include potential job losses after 2026, though DOL officials noted that the agency’s next projections, for 2018 to 2028, will incorporate information on how automated trucking technology has evolved since the 2016-2026 projections. Additionally, officials said the agency is transitioning to annual updates of projections to more quickly incorporate developing information. Congress has directed DOT to consult with DOL to study the workforce impacts of automated trucking technology. Specifically, the Explanatory Statement accompanying the Consolidated Appropriations Act, 2018 instructs the Secretary of Transportation to consult with the Secretary of Labor to conduct a comprehensive analysis of the effect of advanced driver-assistance systems and highly automated vehicle technology on drivers and operators of commercial vehicles, including commercial trucks. Congress directed DOT to include stakeholder outreach in its analysis and provide information on workers who may be displaced as a result of such technology, as well as minimum and recommended training requirements for operating vehicles with these systems. DOL officials told us that they have begun collaborating with DOT on this study by consulting with organized labor and other stakeholders. In October 2018, DOT issued a request for information to solicit comments on the scope of this analysis and detailed several potential research questions, including which commercial drivers are likely to be affected and what skills might be needed to operate new vehicles or transition to new jobs. DOT also announced that it is planning to coordinate with the Departments of Commerce and Health and Human Services, in addition to consulting with DOL to conduct this analysis. The Explanatory Statement directs DOT to conduct this analysis by March 23, 2019, and DOT officials told us they expect to meet this deadline and report on the analysis by that date. 
DOL and DOT have taken some steps to convene stakeholders to inform DOT’s analysis of automated trucking in advance of March 2019. However, DOL and DOT have not made plans to continue collaborating to convene key groups of stakeholders as the technology evolves to gather information about potential workforce effects of automated trucking. Insofar as automated trucking technology is still evolving, convening stakeholders solely to inform the March 2019 analysis will not provide agency officials with sufficient information about important developments that may occur after the analysis is completed. This analysis will be an important step; however, it will be completed before potential workforce effects can be more fully predicted. After its completion, developers will likely continue to test their technologies, and issues related to operational and other factors that will affect the deployment of automated trucks may change or be resolved. For the agencies to more fully understand these developments and clarify the range of associated workforce effects, they would need to collaborate and continue to gather information in the future, for example by continuing to convene key groups of stakeholders as the technology evolves. The majority of stakeholders we spoke with, including representatives from local workforce development boards, truck driver training schools, technology developers, and groups representing truck drivers, told us it would be helpful for federal agencies to play a convening role so that DOL and DOT can better anticipate and understand any potential workforce changes. Several stakeholders also said that convening stakeholders would enable DOL and DOT to surface different parties’ concerns.
Additionally, our recent report on emerging technologies found that federal agencies can play an important role in convening stakeholders to gather information in areas where technology is still under development, including information on the research plans of industry stakeholders and ways to address national needs. Continuing to convene stakeholders could also help agencies to identify any information or data gaps that may need to be addressed to understand the potential workforce effects of automated trucking. DOL officials said that because the technology is still advancing, the related workforce effects, including the magnitude of any job losses, are uncertain. They also said they do not have information to identify the number of long-haul truck drivers, whose jobs may be the most likely to be affected by automation. Specifically, the occupational code DOL uses to classify heavy and tractor-trailer truck drivers captures drivers who operate any type of heavy truck. Along with long-haul drivers, this code includes other drivers whose jobs may be harder to automate, such as tow truck operators. Experts who participated in the National Science Foundation-sponsored workshop on the potential workforce effects of automated trucking also identified information gaps. They noted that more information is needed in several areas, including a better understanding of current truck drivers’ skills beyond driving, how those skills might translate to other occupational areas, and new jobs and skills that will be required with the deployment of automated trucks. DOL officials said that the agency provides information on knowledge, skills, and abilities for various driver occupations, as well as detailed work activities, on its Occupational Information Network (O*NET). However, that information is based on surveys of current workers and therefore does not include the skills future drivers may need as automated technology evolves.
DOL officials told us they do not typically convene stakeholders on an industry-specific basis. They also said that state and local workforce development boards are best positioned to identify and respond to changes in their local economy and employment needs, because these boards include members from the local business community who know which industries are growing in their local labor markets. However, there are close to 1.9 million heavy and tractor-trailer truck drivers across the country, making the trucking industry an important segment of the national workforce. In addition, one of DOL’s objectives in its fiscal year 2018-2022 strategic plan is to provide timely, accurate, and relevant information on labor market activity, working conditions, and price changes. While DOL officials said they consider the agency’s national labor statistics as the primary tool in understanding macroeconomic changes, they acknowledged that gathering information from local boards and other stakeholders may complement those statistics. DOL officials said they may consider continuing to convene stakeholders to learn more about automated trucking if they find that their current efforts with DOT provide fruitful information, but they currently do not have plans to do so. If DOL waits until the effects of automated trucking on the workforce are widespread enough to affect multiple local economies, the agency will have missed the opportunity to proactively gather information that could help it anticipate large-scale workforce changes in this important industry before they take effect. DOT officials told us they have likewise not made plans to work with DOL to convene stakeholders on an ongoing basis to gather information. Rather, they said they have concentrated on developing the analysis described by the Explanatory Statement accompanying the Consolidated Appropriations Act, 2018 and they do not plan to update that analysis after it is completed. 
Nonetheless, one of the objectives outlined by DOT in its fiscal year 2018-2022 strategic plan is to promote economic competitiveness by supporting the development of appropriately skilled transportation workers (including truck drivers who transport freight) and strategies to meet emerging workforce challenges. Working with DOL to gather and analyze information from stakeholders as technology continues to develop could assist DOT in meeting this goal. DOT has previously collaborated with DOL on transportation workforce issues. For example, in 2015, DOT and DOL worked with the Department of Education on a blueprint for aligning investments in transportation, including trucking, with career pathways. The report highlighted potential future growth areas in the transportation industry and identified potential jobs that may be in demand through 2022. Unless DOL and DOT continue to gather information from stakeholders as automated trucking technology evolves, they may be unable to fully anticipate the emerging workforce challenges that may result. DOT’s prior efforts to convene stakeholders to address automated vehicles could serve as a model for gathering information from stakeholders about automated trucking. For example, DOT held a series of meetings across the country to gather information, identify key issues, and support the transportation community to integrate automated vehicles onto roads for its National Dialogue on Highway Automation. Further, analyzing information from ongoing meetings with stakeholders could help DOT as it considers potential workforce-related regulatory changes that might be affected by automated truck technologies, such as the requirements to obtain a commercial driver’s license or the maximum number of hours commercial truck drivers are permitted to work. 
DOL has not provided information to stakeholders about the potential workforce effects of automated trucking technology, including how the skills needed to operate a truck may change in the future. DOL officials told us they have not done so, in part, because they do not yet know how skills and training needed to be a truck driver might change, if at all. Representatives from all of the truck driver training schools and training associations we interviewed said they expect drivers to need new skills to operate or maintain automated trucks, and that future truck drivers may need an additional certification or endorsement to their commercial driver’s license. However, in the absence of specific information about future skill changes, they all said they did not know what specific adjustments would be needed to their curriculum. Additionally, nearly all stakeholders we spoke with—including representatives of technology developers, truck driver training schools, and local workforce development boards—told us that federal agencies can help prepare the future workforce by sharing information with stakeholders about impending workforce changes. In particular, some workforce officials we spoke with said they would benefit from information about technology developers’ plans that would affect future demand or skills for truck drivers. Furthermore, DOL officials told us that heavy and tractor-trailer truck driving was the most common type of occupational training funded through the WIOA Adult and Dislocated Worker programs between April 2017 and March 2018, the most recent period for which data are available. Specifically, local workforce development boards provided funding from these programs to roughly 17,000 individuals for heavy and tractor-trailer truck driver training during that year, or about 15 percent of all individuals who received training services that began within that timeframe. 
This was more than twice as many individuals as those who received funding for nursing assistant training, the second most frequently funded type of training through these programs. As previously noted, one of DOL’s strategic objectives is to provide timely and accurate labor market information. In addition, according to Standards for Internal Control in the Federal Government, an agency’s management should externally communicate the necessary quality information to achieve the entity’s objective. This includes communicating quality information so that external parties can help the entity address related risks. Additionally, our work has shown that federal agencies can play an important role in sharing information. We have noted that such information sharing is important to help maintain U.S. competitiveness. DOT’s strategic plan highlights the agency’s concern that the lack of credentialed workers, combined with projected retirements, threatens to cause significant worker shortages, and that the introduction of innovations and new technologies adds additional complexity for workforce development. Consulting with DOT to provide stakeholders with information about how automated technology could affect the number of trucking jobs and the skills needed to drive or operate commercial trucks would better position local workforce development boards, truck driver training schools, and others to adequately prepare the workforce for future needs. DOL officials said that existing employment and training programs administered by the agency, usually through grants, are generally designed to respond to economic changes that may result in job losses, including any that may result from automated trucking. In addition, DOL officials said that the agency has several resources to support state and local workforce areas to respond to mass layoffs and help workers upgrade their skills.
For example, Rapid Response, which is carried out by states and local workforce development agencies, can provide services to employees after a layoff, including career counseling, job search assistance, and information about unemployment insurance and training opportunities. Additionally, under WIOA, local workforce development boards can use up to 20 percent of their Adult and Dislocated Worker allocations to help fund the cost of providing incumbent worker training designed to help avert potential layoffs or increase the skill levels of employees. While these programs may help mitigate any future job losses due to automated trucking, DOL would be better positioned to help local economies leverage them effectively if the agency continued to convene stakeholders, building on its efforts to gather and share good information on when and how those workforce effects are likely to materialize as technology evolves. Automated and self-driving technology for commercial trucks could make the industry safer and more efficient, but it also introduces significant uncertainties for the trucking workforce that DOL and DOT, in consultation with other federal agencies and stakeholders, can help navigate. For example, there is uncertainty about the widespread deployment of self-driving trucks as well as what the resulting effects will be on employment levels, wages, and needed skills. Although technology companies generally envision self-driving trucks being used for long-haul routes—which could result in fewer long-haul trucking jobs—other stakeholders argued that a truck will always need a driver or operator. Stakeholders we interviewed also lacked consensus about what automated trucking might mean for wages and what new skills will be needed to drive or operate automated trucks. Federal agencies have an opportunity to prepare truck drivers for the possible workforce effects of automated trucking. 
Many stakeholders noted that the effects would be gradual, giving the government time to act, but studies note the effects could eventually be significant, possibly affecting hundreds of thousands of truck driving jobs. DOT is taking an important step toward learning about these workforce effects by consulting with DOL and other stakeholders to inform DOT’s analysis of these developments. However, these agencies have not made plans to continue to convene stakeholders to gather information on an ongoing basis or update their analysis as the technology evolves and the effects become more apparent. Doing so could allow DOL and DOT the foresight to consider whether additional policy changes are needed to prepare for any possible future workforce effects. Similarly, DOL’s publication of routine employment projections and current driver skills and tasks provide useful information. However, DOL has not shared information on what skills drivers might require in the future with other key stakeholders, including technology developers, industry experts, truck driver representatives, training schools, local workforce development boards, and other relevant federal agencies. As a result, those stakeholders may miss an opportunity to better anticipate and plan for changes that may arise from automated trucking technology, including potential labor displacement, wage changes, and the need for new skills. We are making the following four recommendations, including two for the Department of Labor and two for the Department of Transportation: 1. The Secretary of Labor should collaborate with the Secretary of Transportation to continue to convene key groups of stakeholders to gather information on potential workforce changes that may result from automated trucking as the technology evolves, including analyzing needed skills and identifying any information or data gaps, to allow the agencies to fully consider how to respond to any changes. 
These stakeholders could include, for example, representatives of other relevant federal agencies, technology developers, the trucking industry, organizations that represent truck drivers, truck driver training schools, state workforce agencies, and local workforce development boards. (Recommendation 1) 2. The Secretary of Transportation should collaborate with the Secretary of Labor to continue to convene key groups of stakeholders to gather information on potential workforce changes that may result from automated trucking as the technology evolves, including analyzing needed skills and identifying any information or data gaps, to allow the agencies to fully consider how to respond to any changes. These stakeholders could include, for example, representatives of other relevant federal agencies, technology developers, the trucking industry, organizations that represent truck drivers, truck driver training schools, state workforce agencies, and local workforce development boards. (Recommendation 2) 3. The Secretary of Transportation should consult with the Secretary of Labor to further analyze the potential effects of automated trucking technology on drivers to inform potential workforce-related regulatory changes, such as the requirements to obtain a commercial driver’s license or hours of service requirements (e.g., the maximum hours commercial truck drivers are permitted to work). This could include leveraging the analysis described by the Explanatory Statement accompanying the Consolidated Appropriations Act, 2018 once it is complete, as well as information the department obtains from stakeholders as the technology evolves. (Recommendation 3) 4. The Secretary of Labor should consult with the Secretary of Transportation to share information with key stakeholders on the potential effects of automated trucking on the workforce as the technology evolves. 
These stakeholders could include, for example, representatives of other relevant federal agencies, technology developers, the trucking industry, organizations that represent truck drivers, truck driver training schools, state workforce agencies, and local workforce development boards. (Recommendation 4) We provided a draft of this report for review and comment to the Departments of Education, Labor (DOL), Transportation (DOT), and Veterans Affairs. We received formal written comments from DOL and DOT, which are reproduced in appendices III and IV, respectively. In addition, DOL and DOT provided technical comments, which we have incorporated as appropriate. The Departments of Education and Veterans Affairs did not have comments on our report. In its written comments, DOL agreed with our recommendations and noted several efforts that it said will help the agency assess and provide information on the potential workforce effects of evolving technologies, such as automated trucking. For example, DOL noted that the agency’s employment projections incorporate expert interviews and other information to identify shifts in industry employment. DOL is also currently consulting with DOT to study these workforce effects, and agreed to consider what other information and stakeholder meetings remain necessary after that study—due in March 2019—is completed. Likewise, DOL agreed to share related information as the technology evolves, and the agency noted it currently publishes employment projections and other occupational information. While useful, these efforts alone will not allow DOL to sufficiently anticipate the future workforce effects of automated trucking. For instance, the broad employment projections do not provide estimates specifically for the long-haul truck drivers who could be affected by automated trucking first. 
Further, DOL’s occupational information is based on surveys of current workers, so it does not include the skills future drivers will need as automated trucking evolves. Therefore, we continue to believe that convening stakeholders and sharing information about potential workforce effects in the future will position DOL to better understand and inform key stakeholders of these changes. In its written comments, DOT agreed with our recommendations. DOT noted two of its current efforts related to automated trucking technology, namely its October 2018 automated vehicles voluntary guidance, Preparing for the Future of Transportation: Automated Vehicles 3.0, and its forthcoming Congressionally-directed research on the impact of automated vehicle technologies on the workforce. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Education, Labor, Transportation, and Veterans Affairs; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact us at (202) 512-7215 or brownbarnesc@gao.gov or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Our objectives were to examine: (1) what is known about how and when automated vehicle technologies could affect commercial trucks; (2) what is known about how the adoption of automated trucks could affect the commercial trucking workforce; and (3) the extent to which the Department of Transportation (DOT) and Department of Labor (DOL) are preparing to assist drivers whose jobs may be affected by automated trucking. For all the objectives, we reviewed relevant federal laws and regulations, as well as documentation from DOT and DOL.
To determine the extent to which federal agencies are preparing to assist current and future drivers, we compared DOT and DOL’s efforts against their strategic plans as well as Standards for Internal Control in the Federal Government. Additionally, we: Conducted Interviews: We interviewed officials from several federal agencies to obtain relevant information about our objectives, including the Departments of Education, Labor, Transportation, and Veterans Affairs, as well as the National Science Foundation. To obtain information about all of our objectives, we also interviewed other selected stakeholders. We used our initial research and interviews to develop a list of stakeholder categories that would provide informed perspectives, which, when taken as a whole, provided a balanced perspective to answer our objectives. We selected stakeholders who had a range of perspectives regarding the timing for adoption of automated trucking technology and how this adoption could affect the truck driving workforce. We used the following criteria to select interviewees: 1. authored a report, article, book, or paper regarding automated trucking technology or its potential workforce effects; 2. participated in panels, hearings, or roundtables regarding automated trucking or its potential workforce effects; or 3. was recommended by at least one of our interviewees. We interviewed organized labor representatives; researchers; and representatives from three truck manufacturers and three companies operating their own trucking fleets; two national industry organizations; one national safety organization; four truck driver training schools; an association of state and local workforce organizations; and four local workforce development boards. We selected the schools in part based on recommendations from an association of truck driver training schools, and included two accredited and two non-accredited schools in our selection.
We selected three of the workforce development boards due to the prevalence of trucking jobs in their areas and the other board because it was in an area that several stakeholders suggested could be early to adopt automated trucking technology. Additionally, we visited California, where we interviewed representatives of four automated truck technology developers and a manufacturer, and viewed demonstrations of automated trucking technology. We selected California because it had the largest number of technology developers that we identified through our research efforts. We asked all of these stakeholders a core set of questions, as well as tailored questions based on their expertise. Some of the questions we asked stakeholders varied, and some stakeholders chose not to answer every question we asked because they either did not think they had sufficient knowledge about the specific question or did not want to make predictions about future industry developments. Therefore, we generally did not report the specific number of stakeholder responses in this report. The views of the stakeholders we interviewed are illustrative examples and may not be generalizable. For a full list of stakeholders we interviewed, see table 1. Analyzed federal data. To examine how the adoption of automated trucks could affect the current and future trucking workforce, we analyzed relevant data from the Bureau of Labor Statistics (BLS) and the Census Bureau on the current trucking workforce. Specifically, we examined BLS’s Occupational Employment Statistics to obtain employment level and wage data for heavy and tractor-trailer truck drivers (Standard Occupational Classification code 53-3032). The Occupational Employment Statistics survey is a federal-state cooperative program between the Bureau of Labor Statistics and State Workforce Agencies. 
The survey provides estimates regarding occupational employment and wage rates for the nation as a whole, by state, by metropolitan or nonmetropolitan area, and by industry or ownership. Data from self-employed persons are not included in the estimates. For our analysis of the geographic concentration of heavy and tractor-trailer truck driving jobs, we carried out a one-sided test at the 0.05 level of significance of the null hypothesis that a region’s concentration is equal to or less than twice the national concentration versus the alternative hypothesis that the region’s concentration is greater than twice the national concentration. We classified the results, excluding any areas with unreliable estimates (i.e., areas where the margin of error at the 95 percent confidence level for the estimated number of truck drivers was larger than 30 percent of the estimate itself). We used Poisson tests because they are better suited to counts of event occurrences in smaller populations or in a small number of cases. In addition, we analyzed data from the Census Bureau’s American Community Survey regarding the education level, sex, and age of current truck drivers and other drivers. The American Community Survey is an ongoing survey that collects information about the U.S. population, such as jobs and occupations, educational attainment, and income and earnings, among other topics. According to the Census Bureau’s description of the American Community Survey, this survey uses a series of monthly samples to produce annually updated estimates for the same small areas (census tracts and block groups) formerly surveyed via the decennial census long-form sample. Based on our review of related documents and interviews with knowledgeable agency officials, we found the data to be reliable for our purposes. Synthesized literature.
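The one-sided Poisson concentration test described in the geographic-concentration analysis above can be sketched in a few lines. The sketch below is illustrative only, not GAO's actual analysis code: the function names and the employment figures in the usage note are hypothetical, and it hand-rolls the exact Poisson tail probability rather than using a statistical package.

```python
import math

def poisson_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu), via the complement of the CDF.

    Sums pmf(0..k-1) iteratively; adequate for the moderate counts used here.
    """
    pmf = math.exp(-mu)          # pmf(0)
    cdf = pmf
    for i in range(1, k):
        pmf *= mu / i            # pmf(i) = pmf(i-1) * mu / i
        cdf += pmf
    return 1.0 - cdf

def concentration_test(observed_drivers, region_jobs, national_share, alpha=0.05):
    """One-sided exact Poisson test of H0: regional concentration <= 2x national.

    Under H0, the expected number of truck-driving jobs in the region is at
    most twice the national share of such jobs times the region's total
    employment; H0 is rejected when P(X >= observed) falls below alpha.
    """
    mu0 = 2 * national_share * region_jobs        # expected count under H0
    p_value = poisson_sf(observed_drivers, mu0)   # right-tail probability
    return p_value, p_value < alpha
```

For a hypothetical area with 2,000 total jobs and a national truck-driver share of 2 percent (so 80 expected drivers under twice the national concentration), observing 110 drivers yields a p-value below 0.05, so the area would be classified as having a concentration significantly greater than twice the national rate; observing 85 would not. In the actual analysis, areas failing the 30 percent margin-of-error reliability screen would be excluded before testing.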
To explore how and when automated vehicle technologies could affect the current fleet of commercial trucks and gather information about the possible employment effects of this technology, we conducted a review of key research related to automated vehicle technologies for commercial trucks. We searched bibliographic databases for articles that were published between January 1, 2014, and May 22, 2018, and included key terms such as “autonomous”, “automated”, “driverless”, and “truck platoon” to describe the trucking technology. We also asked the researchers we interviewed to identify any studies that may be relevant to our work. Our search initially resulted in over 250 articles with potential relevance to our objectives. Two analysts reviewed the abstracts of these articles to determine whether they were germane to our objectives. We excluded any articles that were not relevant to our objectives or did not meet our standards for empirical analysis. We included articles that were published in peer-reviewed journals, by industry, or by government agencies, as well as articles that were recommended by researchers we interviewed. We identified a final list of 12 studies that met our criteria. Although we reviewed each study’s methodological approach, we did not independently assess the evidence in the articles or verify the analysis of the evidence that was used to come to the conclusions these studies reached. Cindy Brown Barnes or Susan Fleming, (202) 512-7215 or brownbarnesc@gao.gov or flemings@gao.gov. GAO staff who made major contributions to this report include Brandon Haller (Assistant Director), Rebecca Woiwode (Assistant Director), Drew Nelson (Analyst-in-Charge), MacKenzie Cooper, Marcia Fernandez, and Hedieh Fusfield.
Additional assistance was provided by Susan Aschoff, David Ballard, James Bennett, Melinda Cordero, Patricia Donahue, Philip Farah, Camilo Flores Monckeberg, David Hooper, Angie Jacobs, Michael Kniss, Terence Lam, Ethan Levy, Sheila R. McCoy, Madhav Panwar, James Rebbe, Benjamin Sinoff, Pamela Snedden, Almeta Spencer, John Stambaugh, Walter Vance, Sonya Vartivarian, and Stephen C. Yoder.
Automated vehicle technology may eventually make commercial trucking more efficient and safer, but also has the potential to change the employment landscape for nearly 1.9 million heavy and tractor-trailer truck drivers, among others. GAO was asked to examine the potential workforce effects of automated trucking. This report addresses (1) what is known about how and when automated vehicle technologies could affect commercial trucks; (2) what is known about how the adoption of automated trucks could affect the commercial trucking workforce; and (3) the extent to which DOT and DOL are preparing to assist drivers whose jobs may be affected. GAO reviewed research since 2014 on automated trucking technology, viewed demonstrations of this technology, and analyzed federal data on the truck driver workforce. GAO also interviewed officials from DOT and DOL, as well as a range of stakeholders, including technology developers, companies operating their own trucking fleets, truck driver training schools, truck driver associations, and workforce development boards. Automated trucks, including self-driving trucks, are being developed for long-haul trucking operations, but widespread commercial deployment is likely years or decades away, according to stakeholders. Most technology developers said they were developing trucks that can travel without drivers for part of a route, and some stakeholders said such trucks may become available within 5 to 10 years. Various technologies, including sensors and cameras, could help guide a truck capable of driving itself (see figure). However, the adoption of this technology depends on factors such as technological limitations and public acceptance. Stakeholders GAO interviewed predicted two main scenarios for how the adoption of automated trucks could affect the trucking workforce, which varied depending on the future role of drivers or operators. 
Technology developers, among others, described one scenario in which self-driving trucks are used on highway portions of long-haul trips. Stakeholders noted this scenario would likely reduce the number of long-haul truck drivers needed and could decrease wages because of lower demand for such drivers. In contrast, groups representing truck drivers, among others, predicted a scenario in which a truck would have an operator at all times for complex driving and other non-driving tasks, and the number of drivers or operators would not change as significantly. However, stakeholders lacked consensus on the potential effect this scenario might have on wages and driver retention. Most stakeholders said automated trucking could create new jobs, and that any workforce effects would take time—providing an opportunity for a federal response, such as any needed policy changes. The Department of Transportation (DOT) is consulting with the Department of Labor (DOL) to conduct a congressionally directed analysis of the workforce impacts of automated trucking by March 2019. As part of this analysis, DOT and DOL have coordinated to conduct stakeholder outreach. However, they do not currently plan to convene stakeholders on a regular basis to gather information because they have focused on completing this analysis first. Continuing to convene stakeholders could provide the agencies with foresight about policy changes that may be needed to prepare for any workforce effects as this technology evolves. GAO is making four recommendations, including that both DOT and DOL should continue to convene key stakeholders as the automated trucking technology evolves to help the agencies analyze and respond to potential workforce changes that may result. DOT and DOL agreed with the recommendations.
EM oversees a nationwide complex of 16 sites. A majority of the sites were created during World War II and the Cold War to research, produce, and test nuclear weapons (see figure 1). Much of the complex is no longer in productive use but still contains vast quantities of radioactive and hazardous materials related to the production of nuclear weapons. In 1989, EM began carrying out activities around the complex to clean up, contain, safely store, and dispose of these materials. According to DOE documents, starting at about the same time, EM and state and federal regulators entered into numerous cleanup agreements that defined the scope of cleanup work and established dates for coming into compliance with applicable environmental laws. EM has spent more than $170 billion since it began its cleanup program, but its most challenging and costly cleanup work remains, according to EM documents. The processes that govern the cleanup at EM’s nuclear waste sites are complicated, involving multiple laws, agencies, and administrative steps. EM’s cleanup responsibilities derive from different laws, including CERCLA, RCRA, the Atomic Energy Act, and state hazardous waste laws. Federal facility agreements, compliance orders, and other compliance agreements also govern this cleanup. Federal facility agreements are generally enforceable agreements that DOE enters into with EPA and affected states under CERCLA and applicable state laws. For each federal facility listed on the National Priorities List, EPA’s list of seriously contaminated sites, section 120 of CERCLA requires the relevant federal agency to enter into an interagency agreement with EPA for the completion of all necessary cleanup actions at the facility. The interagency agreement must include, among other things, the selection of the cleanup action and a schedule for its completion.
Interagency agreement provisions can be renegotiated, as necessary, to incorporate new information, adjust schedules, and address changing conditions. States generally issue federal facility compliance orders to DOE under RCRA and the Federal Facilities Compliance Act. RCRA prohibits the treatment, storage, or disposal of hazardous waste without a permit from EPA or a state that EPA has authorized to implement and enforce a hazardous waste management program. Under the Federal Facilities Compliance Act, federal agencies are subject to state hazardous waste laws and state enforcement actions, including compliance orders. RCRA regulations establish detailed and often waste-specific requirements for the management and disposal of hazardous wastes, including the hazardous waste component of mixed waste. Tri-party agreements among DOE, EPA, and the relevant state often serve as both a federal facility agreement and a compliance order. In addition to federal facility agreements, other types of agreements governing cleanup at specific sites may also be in place, including administrative compliance orders, court-ordered agreements, and settlement agreements. Administrative compliance orders are orders from state agencies enforcing state hazardous waste management laws. Court-ordered agreements result from lawsuits initiated primarily by states. Settlement agreements are agreements between parties that end a legal dispute. These agreements may include milestones—dates by which DOE commits to plan and carry out its cleanup work at the sites. DOE has identified two different types of milestones: enforceable and planning milestones. Generally, an enforceable milestone has a fixed, mandatory due date, subject to the availability of appropriated funds, whereas a planning milestone is not enforceable and usually represents a placeholder or shorter-term work.
In this report, we are examining any enforceable milestone that derives from either federal facility agreements or other compliance agreements. EM manages its cleanup program based on internal guidance, on milestone commitments to regulators, and in consultation with a variety of stakeholders. First, according to EM officials, EM manages cleanup activities based on requirements listed in a cleanup policy that it issued in July 2017 along with guidance listed in standard operating policies and procedures associated with this policy. The 2017 cleanup policy states that EM will apply DOE’s project management principles described in Order 413.3B to its operations activities in a tailored way. Second, EM’s budget requests are explicit regarding the role the milestones play in the cleanup effort. For example, in its fiscal year 2019 request to Congress, EM stated that the request addresses cleanup “governed through enforceable regulatory milestones.” Third, in addition to the milestone commitments to EPA and state environmental agencies, other stakeholders involved include county and local governmental agencies, citizen groups, and other organizations. These stakeholders advocate their views through various public involvement processes, including site-specific advisory boards. At EM’s 16 cleanup sites, cleanup is governed by 72 agreements and hundreds of cleanup milestones. These agreements include federal facility agreements generally negotiated between DOE, the state, and EPA, and compliance orders from state regulators. These agreements may impose penalties for missing milestones and may amend or modify earlier agreements, including extending or eliminating milestone dates. Within the agreements, hundreds of milestones outline deadlines for specific actions to be taken by EM as it carries out its cleanup work.
However, because EM lacks a standard definition of milestones, some sites track milestones differently than EM headquarters does, limiting EM’s ability to monitor performance. In total, DOE has entered into 72 cleanup agreements at EM’s 16 cleanup sites. The agreements were initially signed between 1985 and 2009 (see table 1). With the exception of the Moab Uranium Mill Tailings Remedial Action Project in Utah and the Waste Isolation Pilot Plant in New Mexico, each site is governed by at least one cleanup agreement. Twelve are governed by multiple agreements (as many as 17 at the Savannah River Site, for example). Twelve sites are governed by federal facility agreements, generally with the relevant state and EPA. These agreements generally set out a sequence for accomplishing the work, tend to cover a relatively large number of cleanup activities, and include milestones that DOE must meet. All of the 12 sites with federal facility agreements are also governed by additional compliance agreements that have been negotiated at each site subsequent to the initial federal facility agreement or other agreement with the state. These agreements may impose penalties for missing milestones and may amend or modify earlier agreements, including extending or eliminating milestone dates. For example, the Hanford Site is subject to three consent decrees that resulted from litigation in which the state of Washington sued DOE for failing to meet certain cleanup milestones. EM headquarters and cleanup site officials provided us with different totals on the number of milestones in place at the four sites we selected for further review. Both federal facility agreements and other compliance agreements contain milestones with which EM must comply and, according to EM officials and our review of the agreements, these agreements collectively contain hundreds of milestones. However, milestone information that EM headquarters and site officials shared with us was not consistent.
For example, for milestones due in fiscal years 2018 through 2020, officials at EM headquarters identified 135 enforceable cleanup milestones at the four selected sites, which was less than half of the number of such milestones officials at those sites reported to us (see table 2). These discrepancies result from how headquarters and selected sites define and track milestones. Milestone definitions. EM headquarters officials said that they are primarily concerned with milestones related to on-the-ground cleanup; that is, cleanup activities that actually result in waste being removed, treated, or disposed of. EM officials said they consider these to be major milestones. However, not all sites make the same distinction between major and non-major milestones and, as a result, are not consistently reporting the same types of milestones to EM headquarters. For example, officials at the Savannah River Site track milestones in a federal facility agreement that lists 79 milestones due in fiscal years 2018 through 2020. This agreement makes no distinction between major and non-major milestones and includes administrative activities, such as revisions to cleanup reports, in its milestone totals. EM headquarters officials, on the other hand, do not include these activities as major milestones and list only 43 milestones due in the same time frame. Similarly, Hanford officials do not distinguish between major or other milestones in their internal tracking. As a result, Hanford officials are tracking 178 milestones due in fiscal years 2018 through 2020, whereas EM headquarters officials are tracking 57 for the same time frame at Hanford. Requirements for updating milestones. Sites do not consistently provide EM headquarters with the most up-to-date information on the status of milestones at each site. 
This is because EM requirements governing the submission of milestone information do not specify when or how often sites are to update this information, so sites have the discretion to choose when to send updated milestone data to headquarters. As a result, the information on the list of milestones used to track cleanup performance by EM headquarters may differ from the more up-to-date information kept by the sites. For example, officials at each of the four sites we examined stated that they try to send updated information on the status of milestones to headquarters on an annual basis, though they sometimes send it less frequently. Officials at EM headquarters acknowledged that their list of milestones is not always up-to-date because of the lag between when a milestone changes at the site and when sites update that information in the EM headquarters’ database. In addition to inconsistencies in tracking and defining milestones, lists of milestones maintained by EM headquarters and the four selected sites may not include all cleanup milestones governing the cleanup work at the site. We found two cases in which permits at two sites included milestones that neither EM headquarters nor site officials included in their list of sites’ cleanup milestones. For example, milestones related to a major construction project at one of the selected sites we reviewed—Savannah River—are not listed in either EM headquarters’ or the Savannah River Site’s list of enforceable milestones. According to South Carolina state environmental officials, milestones associated with this project are part of a separate permit and dispute resolution agreement not connected to the federal facility agreement or one of the sites’ compliance agreements. Recently, DOE acknowledged in its fiscal year 2019 budget request that this project has faced technical challenges, and officials noted that the previously agreed-upon start date for operating this project would be delayed.
However, this milestone and its delay are not included in either EM headquarters’ or Savannah River’s list of milestones. Similarly, officials at the Hanford Site said that some milestones governing Hanford’s cleanup are part of the site-wide RCRA permit issued by the state, which is separate from its federal facility agreement, and, as a result, officials do not track this information in the same Hanford milestone tracking system and do not report it to EM headquarters. EM does not have a standard definition of milestones for either sites or headquarters to use for reporting and monitoring cleanup milestones or guidance on how often sites should update the status of milestones. EM headquarters officials cited guidance that sites can refer to when entering their milestone data into the headquarters-managed database. This guidance addresses how to submit milestone data but does not include a definition of milestones or specify how often sites should update the information. EM headquarters officials noted that sites have the discretion to input milestones as they choose. EM’s lack of a standard definition of milestones limits management’s ability to use milestones to manage EM’s cleanup mission and monitor its progress. We have previously found that poorly defined, incomplete, or missing requirements make it difficult to hold projects accountable, can result in programs or projects that do not meet user needs, and can lead to cost and schedule growth. In addition, according to Standards for Internal Control in the Federal Government, information and communication are vital for an entity to achieve its objectives. According to these standards, the first principle of information and communication is that management should define the information requirements at the relevant level and the requisite specificity for appropriate personnel. Without this, EM’s ability to use milestones for managing and measuring the performance of its cleanup program is limited.
EM relies on cleanup milestones, among other metrics, to measure the overall performance of its operations activities. However, sites regularly renegotiate milestones they are at risk of missing, and EM does not track data on the history of postponed milestones. As a result, EM cannot accurately track the progress of cleanup activities to meet these milestones. Additionally, EM has not consistently reported required information to Congress, and the information it has reported is incomplete. For example, in its report to Congress on the status of the enforceable milestones, EM includes the latest (meaning the most recently renegotiated) milestone dates with no indication of whether or how often those milestones have been missed or postponed. Site officials typically renegotiate enforceable milestones they are at risk of missing with their regulators, in accordance with the modification procedures established in federal facility agreements. EM officials said that sites have the ability to renegotiate milestones before they are missed. For example, the Hanford Site Federal Facility Agreement allows DOE to request an extension of any milestone; the request must include, among other things, DOE’s explanation of the good cause for the extension. As long as there is consensus among EM and its regulators, the milestone is changed. Similarly, the Los Alamos Federal Facility Agreement requires site officials to negotiate cleanup milestones each fiscal year. Because renegotiated milestones are not technically missed, EM avoids any fines or penalties associated with missed milestones. Site officials we interviewed at the four selected sites stated that it is common for regulators and sites to renegotiate milestones before sites miss them. For example, at the Savannah River Site, both DOE and South Carolina officials said they could not recall any missed milestones among the thousands of milestones completed since the cleanup began. 
Similarly, Hanford officials told us that since the beginning of the cleanup effort in 1989, more than 1,300 milestones had been completed and only 62 had actually been missed because, in most cases, whenever milestones were at risk of being missed, they were renegotiated. However, officials at these sites could not provide us with the exact number of times milestones had been renegotiated. This is because once milestones are changed, sites are not required to maintain or track the original milestones. As a result, the renegotiated milestones become the new agreed-upon time frame, essentially resetting the deadline. Because EM does not track the original baseline schedule for renegotiated milestone dates, milestones do not provide a reliable measure of program performance. According to best practices identified in GAO’s schedule assessment guide, agencies should formally establish a baseline schedule against which performance can be measured. In particular, we have previously found that management does not have the ability to identify and mitigate the effects of unfavorable performance without a formally established baseline schedule against which it can measure performance. We have also found that, without a documented and consistently applied schedule change control process, program staff may continually revise the schedule to match performance, hindering management’s insight into the true performance of the project. In addition, DOE’s internal project management policies call for steps to maintain a change control process, including setting a baseline schedule for completing certain activities and maintaining a record of any subsequent deviations from that baseline. EM uses milestones as one of its metrics for measuring the performance of its cleanup efforts, since the milestones are effectively schedule targets.
However, since neither EM headquarters nor the sites track renegotiated milestones and their baseline dates at the sites, EM cannot accurately use milestones for managing and measuring the performance of its cleanup program. EM has not consistently reported required information to Congress on the status of its milestones. The National Defense Authorization Act for Fiscal Year 2011 established a requirement for EM to annually provide Congress with a future-years defense environmental cleanup plan. This plan is to contain, among other things, information on the current dates for enforceable milestones at specified cleanup sites, including whether each milestone will be met and, if not, an explanation as to why and when it will be met. However, since 2011, EM has only provided Congress with the required annual plan in 2 years—2012 and 2017—and EM officials told us in September 2018 that they were unsure when EM would release the next future-years plan. EM officials said that, instead of the annual plan, they have provided oral briefings to Congressional staff during the 4 years when a formal report was not produced. In addition, our analysis of the 2012 and 2017 plans EM submitted to Congress identified three ways in which the plans provide inaccurate or incomplete information on EM’s enforceable milestones. No historical record. First, the plans contain no indication of whether each milestone date reported is the original date for that milestone or whether or how many times the milestones listed have been missed or postponed. Instead, the plans report the latest (and most recently renegotiated) dates for the milestones without listing the original dates or acknowledging that some of the milestones have been delayed, in some cases by several years, beyond their original agreed-upon completion dates. 
For example, we found that at least 14 milestones from the 2012 plan were repeated in the 2017 plan with new forecasted completion dates, but the 2017 plan gave no indication that these milestones had been postponed (see table 3); their due dates had been pushed back by as many as 6 years. As noted above, EM headquarters does not track changes to milestones, and EM officials at both headquarters and the sites said that they have not historically kept a record of the original baseline dates for milestones they renegotiate. As a result, EM officials could not readily provide information on whether the other milestones listed in the 2012 report met their listed due dates or were postponed. Headquarters officials stated that to gather this information they would need to survey officials at each site. Inaccurate forecast. Second, the forecast completion dates for milestones listed in the 2012 and 2017 plans may not present an accurate picture of the status of the milestones and EM’s cleanup efforts. For example, in the 2012 plan, DOE reported that four out of 218 milestones were at risk of missing their planned completion date, while the rest were on schedule. As discussed above, we found 14 of the milestones in the 2012 plan had been postponed and listed again in the 2017 plan. Similarly, the 2017 plan listed only one milestone out of 154 as forecasted to miss its due date. However, because EM does not have a historical record of the changes made to the milestones, it is unclear how many of these milestones represented their original due dates. Incomplete list. Third, the plans did not include milestones from all of the 10 DOE cleanup sites that EM is required to report on. In 2012, EM did not report milestone information for two of the 10 sites that were required to be included in the plan. In the 2017 plan, information was missing for one of the 10 required sites.
EM headquarters officials said that this could be because some sites did not update their milestone information or some sites may still be renegotiating new milestones. However, neither report indicated that data were missing for these sites. As a result of these issues, DOE’s future-years defense environmental cleanup plans provide only a partial picture of the milestones and overall cleanup progress made across the cleanup complex, and actual progress made in cleanup is not transparent to Congress. The absence of reliable and complete information on the progress of EM’s cleanup mission limits EM’s ability to manage its mission and complicates Congress’s ability to oversee the cleanup work. Best practices and DOE requirements for project management call for a root cause analysis when problems lead to schedule delays, but EM officials at both headquarters and selected sites have not analyzed reasons why milestones are missed or postponed. According to best practices identified in GAO’s cost estimating guide, agencies should identify root causes of problems that lead to schedule delays and renegotiated milestones. Specifically, when risks materialize (i.e., when milestones are missed or delayed), risk management should provide a structure for identifying and analyzing root causes. The benefits of doing so include developing a better understanding of the factors that caused milestones to be missed and providing agencies with information to more effectively address those factors in the future. In addition, DOE has recently emphasized the importance of doing this kind of analysis. In 2015, DOE issued a directive requiring sites to do a root cause analysis when the project team, program office, or independent oversight offices determine that a project has breached its cost or schedule thresholds. 
This directive, which applies to all programs and projects within DOE, calls for “an independent and objective root cause analysis to determine the underlying contributing causes of cost overruns, schedule delays, and performance shortcomings,” such as missed or postponed milestones. However, EM has not done a complex-wide analysis of the reasons for missed or postponed milestones. Similarly, officials we interviewed at the four selected sites said that they were not aware of any site-wide review of why milestones were missed or postponed. According to headquarters officials, this analysis has not been done because EM has determined that DOE requirements governing this type of analysis apply only to contract schedules, not regulatory milestones, and that missed or postponed milestones are not necessarily an indication of cleanup performance shortcomings. However, as previously noted in this report, missing or postponing milestones is a systemic problem across the cleanup complex that makes it difficult for DOE to accurately identify cleanup performance shortcomings. Because EM has not analyzed why it has missed or postponed milestones, EM cannot address these systemic problems and consider those problems when renegotiating milestones with regulators. Without such analysis, EM and its cleanup regulators lack information to set more realistic and achievable milestones and, as a result, future milestones are likely to continue to be pushed back, further delaying the cleanup work. As we have reported previously, these delays lead to increases in the overall cost of the cleanup. The federal government faces a large and growing future environmental liability, the vast majority of which is related to the cleanup of radioactive and hazardous waste at DOE’s 16 sites around the country. EM has responsibility for addressing the human health and environmental risks presented by this contamination in the most cost-effective way. 
However, most of EM’s largest projects are significantly delayed and over budget, and state regulators for nearly all of EM’s cleanup sites have responded by initiating enforcement actions. These actions have often led to additional agreements, such as administrative orders and court settlements, on top of the initial federal facility agreements intended to ensure those risks are addressed. EM relies on cleanup milestones, among other metrics, to measure the overall performance of its operations activities, and EM reports that very few of its cleanup milestones over the past 2 decades have been missed. However, EM’s self-reported performance in achieving milestones does not provide an accurate view of actual progress in cleaning up sites. EM has not established clear definitions for tracking and reporting milestones and does not have any requirements governing the way sites are to update milestone information. As a result, EM’s internal tracking of these milestones has inconsistencies. Additionally, since the requirement to annually report on the status of milestones was set in 2011, EM has produced only two reports to Congress, and these were inaccurate and incomplete. Without a clear and consistent approach to collecting and reporting these data, including the history of milestone changes, EM cannot accurately use milestones for managing and measuring the performance of its cleanup program. The absence of reliable and complete information on the progress of EM’s cleanup mission also limits EM’s and Congress’s ability to oversee the cleanup work. In addition, without a root cause analysis of why milestones are missed or postponed, EM and its cleanup regulators lack information to set more realistic and achievable milestones. As a result, future milestones are likely to continue to be pushed back, further delaying the cleanup work, which will likely increase cleanup costs and risks to human health and the environment. 
We are making the following four recommendations to DOE:

The Assistant Secretary of DOE’s Office of Environmental Management should update EM’s policies and procedures to establish a standard definition of milestones and specify requirements for both including and updating information on milestones across the complex. (Recommendation 1)

The Assistant Secretary of DOE’s Office of Environmental Management should track original milestone dates as well as changes to its cleanup milestones. (Recommendation 2)

The Assistant Secretary of DOE’s Office of Environmental Management should comply with the requirements in the National Defense Authorization Act by reporting annually to Congress on the status of its cleanup milestones and including a complete list of cleanup milestones for all sites required by the act. The annual reports should also include, for each milestone, the original date along with the currently negotiated date. (Recommendation 3)

The Assistant Secretary of DOE’s Office of Environmental Management should conduct root cause analyses of missed or postponed milestones. (Recommendation 4)

We provided a draft of this report to DOE for review and comment. DOE provided written comments, which are reproduced in appendix II; the agency also provided technical comments that we incorporated in the report as appropriate. Of the four recommendations in the report, DOE agreed with three, and partially agreed with one. Regarding the recommendation that DOE update EM’s policies and procedures to establish a standard definition of milestones and specify requirements for both including and updating information on milestones across the complex, the agency agreed with the recommendation. DOE stated that these policy-driven reforms can improve the efficiency of milestone tracking. Regarding the recommendation that DOE track changes to cleanup milestones, the agency agreed with the recommendation. 
DOE stated that EM currently monitors milestone status, including changes as the need for changes is identified and as part of its ongoing communication with field offices, and therefore DOE considers the recommendation to be closed. However, as we noted in the report, neither EM headquarters nor the sites track the original baseline schedule for renegotiated milestone dates. We adjusted the language of the recommendation to make clear that the EM Assistant Secretary should track original milestone dates as well as changes to cleanup milestones. DOE stated in its written comments that EM does not believe that tracking original and changed milestones will strengthen EM's ability to use milestones to manage and measure the performance of its cleanup program. However, as we noted in this report, according to best practices identified in GAO's schedule assessment guide, agencies should formally establish a baseline schedule against which performance can be measured. We have found that, without a documented and consistently applied schedule change control process, program staff may continually revise the schedule to match performance, hindering management's insight into the true performance of the project. In addition, DOE's internal project management policies call for steps to maintain a change control process, including setting a baseline schedule for completing certain activities and maintaining a record of any subsequent deviations from that baseline. Regarding our recommendation that DOE comply with the requirements in the National Defense Authorization Act by reporting annually to Congress on the status of its cleanup milestones and including a complete list of cleanup milestones for all sites required by the act, the agency partially agreed with the recommendation. DOE stated that additional budget and clarification of purpose and scope would be required to fulfill this recommendation. 
As we point out in our report, DOE has not fully complied with requirements established by the act, including not submitting all required annual reports and, even when DOE did submit these reports, its reporting omitted information about some sites. DOE stated that EM is reviewing options to address this recommendation. Regarding our recommendation that DOE conduct root cause analyses of performance shortcomings that lead to missed or postponed milestones, the agency agreed with the recommendation and stated that EM is evaluating options to implement it. However, DOE stated that there may be multiple reasons why milestones are changed, and not all of the changes are due to DOE performance. To acknowledge the uncertainty in the causes of missed or postponed milestones, we adjusted the language of the recommendation to clarify that the EM Assistant Secretary should conduct root cause analyses of missed or postponed milestones. In addition, in its written comments, DOE disagreed with the draft report's description of the process and authorities related to renegotiating compliance milestones, stating that EM cannot and does not unilaterally delay/postpone milestones and that EPA and state regulator approval of milestone changes is required. We agree, and the report states that it is common for regulators and sites to renegotiate milestones before sites miss them. DOE also disagreed with the draft report’s characterization of the coordination between EM sites and headquarters in tracking milestones. In particular, DOE’s written comments state that site-specific databases include all regulatory compliance milestones drawn from applicable agreements, while the headquarters database tracks major enforceable milestones. However, as our report notes, because not all sites make the same distinction between major and non-major milestones, sites are not consistently reporting the same types of milestones to EM headquarters. 
In addition, DOE’s written comments state that EM sites and headquarters routinely collaborate and discuss the status of milestones via meetings and EM periodically requests that sites verify the data in the EM headquarters database. Nevertheless, as our report notes, EM requirements governing the submission of milestone information do not specify when or how often sites are to update this information. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix III. The Brookhaven National Laboratory was established in 1947 by the Atomic Energy Commission. Formerly Camp Upton, a U.S. Army installation site, Brookhaven is located on a 5,263-acre site on Long Island in Upton, NY, approximately 60 miles east of New York City. Historically, Brookhaven was involved in the construction of accelerators and research reactors such as the Cosmotron, the High Flux Beam Reactor, and the Brookhaven Graphite Research Reactor. These accelerators and reactors led the way in high-energy physics experiments and subsequent discoveries but also resulted in radioactive waste. To complete the cleanup mission, DOE is working to build and operate groundwater treatment plants, decontaminate and decommission the High Flux Beam Reactor and the Brookhaven Graphite Research Reactor, and dispose of some wastes off-site. The Energy Technology Engineering Center occupies 90 acres within the 290-acre Santa Susana Field Laboratory, 30 miles north of Los Angeles, California. 
The area was primarily used for DOE research and development activities. In the mid-1950s, part of the area was set aside for nuclear reactor development and testing, primarily related to the development of nuclear power plants and space power systems, using sodium and potassium as coolants. In the mid-1960s, the Energy Technology Engineering Center was established as a DOE laboratory for the development of liquid metal heat transfer systems to support the Office of Nuclear Energy Liquid Metal Fast Breeder Reactor program. DOE is now involved in the deactivation, decommissioning, and dismantlement of contaminated facilities on the site. DOE is responsible for one of the world’s largest environmental cleanup projects: the treatment and disposal of millions of gallons of radioactive and hazardous waste at its 586-square-mile Hanford Site in southeastern Washington State. Hanford facilities produced more than 20 million pieces of uranium metal fuel for nine nuclear reactors along the Columbia River. Five plants in the center of the Hanford Site processed 110,000 tons of fuel from the reactors, discharging an estimated 450 billion gallons of liquids to soil disposal sites and 53 million gallons of radioactive waste to 177 large underground tanks. Plutonium production ended in the late 1980s. Hanford cleanup began in 1989 and now involves (1) groundwater monitoring and treatment, (2) deactivation and decommissioning of contaminated facilities, and (3) the construction of the waste treatment and immobilization plant intended, when complete, to treat the waste in the underground tanks. DOE’s Idaho Site is an 890-square-mile federal reserve, situated in the Arco Desert over the Snake River Plain Aquifer in central Idaho. 
The Idaho Cleanup Project involves the environmental cleanup of the Idaho Site, contaminated with legacy wastes generated from World War II-era conventional weapons testing, government-owned research and defense reactors, spent nuclear fuel reprocessing, laboratory research, and defense missions at other DOE sites. The 1-square-mile Lawrence Livermore National Laboratory site is an active, multi-program DOE research laboratory about 45 miles east of San Francisco. A number of research and support operations at Lawrence Livermore handle, generate, or manage hazardous materials that include radioactive wastes. The site first was used as a Naval Air Station in the 1940s. In 1951, it was transferred to the U.S. Atomic Energy Commission and was established as a nuclear weapons and magnetic fusion energy research facility. Over the past several years, Lawrence Livermore constructed several treatment plants for groundwater pumping and treatment and for soil vapor extraction. These systems will continue to operate until cleanup standards are achieved. Los Alamos National Laboratory is located in Los Alamos County in north central New Mexico. The laboratory, founded in 1943 during World War II, served as a secret facility for research and development of the first nuclear weapon. The site was chosen because the area provided controlled access, steep canyons for testing high explosives, and existing infrastructure. The Manhattan Project’s research and development efforts that were previously spread throughout the nation became centralized at Los Alamos and left a legacy of contamination. Today, the Los Alamos National Laboratory Cleanup Project is responsible for the treatment, storage, and disposition of a variety of radioactive and hazardous waste streams; removal and disposition of buried waste; protection of the regional aquifer; and removal or deactivation of unneeded facilities. The Moab Site is located about 3 miles northwest of the city of Moab in Grand County, Utah. 
The former mill site encompasses approximately 435 acres, of which about 130 acres is covered by the uranium mill tailings pile. Uranium concentrate (called yellowcake), the milling product, was sold to the U.S. Atomic Energy Commission through December 1970 for use in national defense programs. After 1970, production was primarily for commercial sales to nuclear power plants. During its years of operation, the mill processed an average of about 1,400 tons of ore a day. The milling operations created process-related wastes and tailings, a radioactive sand-like material. The tailings were pumped to an unlined impoundment in the western portion of the Moab Site property that accumulated over time, forming a pile more than 80 feet thick. The tailings, particularly in the center of the pile, have a high water content. Excess water in the pile drains into underlying soils, contaminating the groundwater. In 1950, President Truman established what is now known as the Nevada National Security Site in Mercury, Nevada, to perform nuclear weapons testing activities. In support of national defense initiatives, a total of 928 atmospheric and underground nuclear weapons tests were conducted at the site between 1951 and 1992, when a moratorium on nuclear testing went into effect. Today, the site is a large, geographically diverse research, evaluation, and development complex that supports homeland security, national defense, and nuclear nonproliferation. In Nevada, DOE activities focus on groundwater, soil, and on-site facilities; radioactive, hazardous, and sanitary waste management and disposal; and environmental planning. DOE’s Oak Ridge Reservation is located on approximately 33,500 acres in eastern Tennessee. The reservation was established in the early 1940s by the Manhattan Engineer District of the U.S. Army Corps of Engineers and played a role in the production of enriched uranium during the Manhattan Project and the Cold War. 
DOE is now working to address excess and contaminated facilities, remove soil and groundwater contamination, and enable modernization that allows the National Nuclear Security Administration to continue its national security and nuclear nonproliferation responsibilities and the Oak Ridge National Laboratory to continue its mission for advancing technology and science. The Paducah Gaseous Diffusion Plant, located within an approximately 650-acre fenced security area in McCracken County in western Kentucky, opened in 1952 and played a role in the production of enriched uranium during and after the Cold War until ceasing production for commercial reactor fuel purposes in 2013. Decades of uranium enrichment and support activities required the use of a number of typical and special industrial chemicals and materials. Plant operations generated hazardous, radioactive, mixed (both hazardous and radioactive), and nonchemical (sanitary) wastes. Past operations also resulted in soil, groundwater, and surface water contamination at several sites located within plant boundaries. The Portsmouth Gaseous Diffusion Plant is located in Pike County in south-central Ohio, approximately 20 miles north of the city of Portsmouth. Like the Paducah Plant, this facility was also initially constructed to produce enriched uranium to support the nation’s nuclear weapons program and was later used by commercial nuclear reactors. Cleanup activities here are similar to those at the Paducah Plant. The Sandia National Laboratories comprises 2,820 acres within the boundaries of the 118-square-mile Kirtland Air Force Base and is located about 6 miles east of downtown Albuquerque, New Mexico. It is managed by the National Nuclear Security Administration. Sandia National Laboratories was established in 1945 for nuclear weapons development, testing, and assembly for the Manhattan Engineering District. 
Beginning in 1980, the mission shifted toward research and development for nonnuclear components of nuclear weapons. Subsequently, the mission was expanded to research and development on nuclear safeguards and security and multiple areas in science and technology. The Savannah River Site complex covers 198,344 acres, or 310 square miles, encompassing parts of Aiken, Barnwell, and Allendale counties in South Carolina, bordering the Savannah River. The site is a key DOE industrial complex responsible for environmental stewardship, environmental cleanup, waste management, and disposition of nuclear materials. During the early 1950s, the site began to produce materials used in nuclear weapons, primarily tritium and plutonium-239. Five reactors were built to produce nuclear materials and resulted in unusable by-products, such as radioactive waste. About 35 million gallons of radioactive liquid waste are stored in 43 underground tanks. The Defense Waste Processing Facility is processing the high-activity waste, encapsulating radioactive elements in borosilicate glass, a stable storage form. Since the facility began operations in March 1996, it has produced more than 4,000 canisters (more than 16 million pounds) of radioactive glass. The Separations Process Research Unit is an inactive facility located at the Knolls Atomic Power Laboratory in Niskayuna, New York, near Schenectady. The Mohawk River forms the northern boundary of this site. The unit was built in the late 1940s to research the chemical process for extracting plutonium from irradiated materials. Equipment was flushed and drained, and bulk waste was removed following the shutdown of the facilities in 1953. Today, process vessels and piping have been removed from all the research unit’s facilities. In 2010, cleanup of radioactivity and chemical contamination in the Lower Level Railroad Staging Area, Lower Level Parking Lot, and North Field areas was completed. 
The Waste Isolation Pilot Plant is an underground repository located near Carlsbad, New Mexico, that is used for disposing of defense transuranic waste. The plant is managed by DOE’s Office of Environmental Management and is the only deep geological repository for the permanent disposal of defense generated transuranic waste. The West Valley Demonstration Project occupies approximately 200 acres within the 3,345 acres of land called the Western New York Nuclear Service Center. The project is located approximately 40 miles south of Buffalo, New York. The West Valley Demonstration Project Act of 1980 established the project. The act directed DOE to solidify and dispose of the high-level waste and decontaminate and decommission the facilities used in the process. The land and facilities are not owned by DOE. Rather, the project premises are the property of the New York State Energy Research and Development Authority. DOE does not have access to the entire 3,345 acres of property. In addition to the contact named above, Nico Sloss (Assistant Director), Jeffrey T. Larson (Analyst in Charge), Natalie M. Block, Antoinette C. Capaccio, R. Scott Fletcher, Cindy K. Gilbert, Richard P. Johnson, Jeffrey R. Rueckhaus, Ilga Semeiks, Sheryl E. Stein, and Joshua G. Wiener made key contributions to this report.
EM manages DOE's radioactive and hazardous waste cleanup program using compliance agreements negotiated between DOE and other federal and state agencies. Within the agreements, milestones outline cleanup work to be accomplished by specific deadlines. EM's cleanup program faces nearly $500 billion in future environmental liability, which has grown substantially. GAO was asked to review DOE's cleanup agreements. This report examines the extent to which EM (1) tracks the milestones in cleanup agreements for EM's cleanup sites; (2) has met, missed, or postponed cleanup-related milestones at selected sites and how EM reports information; and (3) has analyzed why milestones are missed or postponed and how EM considers those reasons when renegotiating milestones. GAO reviewed agreements and milestones at EM's 16 cleanup sites and compared information tracked by EM headquarters and these sites; interviewed officials from four selected sites (chosen for variation in location and scope of cleanup, among other factors); and reviewed EM guidance related to milestone negotiations. The cleanup process at the 16 sites overseen by the Department of Energy's (DOE) Office of Environmental Management (EM) is governed by 72 agreements and hundreds of milestones specifying actions EM is to take as it carries out its cleanup work. However, EM headquarters and site officials do not consistently track data on the milestones. EM headquarters and site officials provided GAO with different totals on the number of milestones in place at the four sites GAO selected for review. These discrepancies result from how headquarters and selected sites define and track milestones. First, not all sites make the same distinction between major (i.e., related to on-the-ground cleanup) and non-major milestones and, as a result, are not consistently reporting the same milestones to EM headquarters. 
Second, sites do not consistently provide EM headquarters with the most up-to-date information on the status of milestones at each site. These inconsistencies limit EM's ability to use milestones to manage the cleanup mission and monitor its progress. EM does not accurately track met, missed, or postponed cleanup-related milestones at the four selected sites, and EM's milestone reporting to Congress is incomplete. EM sites renegotiate milestone dates before they are missed, and EM does not track the history of these changes. This is because once milestones change, sites are not required to maintain or track the original milestone dates. GAO has previously found that without a documented and consistently applied schedule change control process, program staff may continually revise the schedule to match performance, hindering management's insight into the true performance of the project. Further, since 2011, EM has not consistently reported to Congress on the status of the milestones each year, as required, and the information it has reported is incomplete. EM reports the most recently renegotiated milestone dates with no indication of whether or how often those milestones have been missed or postponed. Since neither EM headquarters nor the sites track renegotiated milestones and their original baseline dates, milestones do not provide a reliable measure of program performance. EM officials at headquarters and selected sites have not conducted root cause analyses on missed or postponed milestones; thus, such analyses are not part of milestone negotiations. Specifically, EM has not done a complex-wide analysis of the reasons for missed or postponed milestones. Similarly, officials GAO interviewed at the four selected sites said that they were not aware of any site-wide review of why milestones were missed or postponed. 
Best practices for project and program management outlined in GAO's Cost Estimating and Assessment Guide note the importance of identifying root causes of problems that lead to schedule delays. Additionally, in a 2015 directive, DOE emphasized the importance of conducting such analysis. Analyzing the root causes of missed or postponed milestones would better position EM to address systemic problems and consider those problems when renegotiating milestones with regulators. Without such analysis, EM and its cleanup regulators lack information to set more realistic and achievable milestones and, as a result, future milestones are likely to continue to be pushed back, further delaying the cleanup work. As GAO has reported previously, these delays lead to increases in the overall cost of the cleanup. GAO is making four recommendations, including that EM establish a standard definition of milestones across the cleanup sites, track and report original and renegotiated milestone dates, and identify the root causes of why milestones are missed or postponed. In commenting on a draft of this report, DOE agreed with three of the recommendations and partially agreed with a fourth.
VA serves veterans of the U.S. armed forces and provides health, pension, burial, and other benefits. The department’s three operational administrations—VHA, Veterans Benefits Administration, and National Cemetery Administration—operate largely independently from one another. Each has its own contracting authority, though all three also work with national contracting organizations under the Office of Acquisition, Logistics, and Construction for certain types of purchases, such as medical equipment and information technology. VHA, which provides medical care to about 7 million veterans at 170 medical centers, is by far the largest of the three administrations. These medical centers are organized into 18 VISNs, organizations that manage medical centers and associated clinics across a given geographic area. Each VISN is served by a corresponding Network Contracting Office. Figure 1 shows the organizational structure of the procurement function at VA. For over a decade, each of VA’s 170 medical centers used VHA’s legacy MSPV program to order medical supplies, such as bandages and scalpels. Many of those items were purchased using the Federal Supply Schedules, which provided medical centers with a great deal of flexibility. As we reported in 2016, this legacy program, however, prevented VHA from standardizing items used across its medical centers and affected its ability to leverage its buying power to achieve greater cost avoidance. Standardization is a process of narrowing the range of items purchased to meet a given need in order to improve buying power, simplify supply chain management, and provide clinical consistency. For example, a hospital network might find that it purchases 100 varieties of bandages, but might ultimately determine—with input from clinicians—that it can narrow those choices down to 10 varieties to fill most needs, which would provide greater consistency and allow the hospital to negotiate lower prices. 
In part because the legacy MSPV program limited standardization, VHA decided to transition to a new iteration, called MSPV-NG. VHA launched the MSPV-NG program in December 2016 but allowed a 4-month transition period. After April 2017, medical centers could no longer use the legacy program. MSPV-NG now restricts ordering to a narrow “formulary”—a list of specific items that medical centers are allowed to purchase. VA has had a formulary in place for pharmaceuticals since 1997, and many leading hospital networks rely on a similar formulary approach when it comes to purchasing their own medical supplies. VHA policy requires medical centers to use MSPV-NG—as opposed to other means such as open market purchase card transactions—when purchasing items that are available in the formulary. Figure 2 illustrates the program structure and key participants involved in the transition to MSPV-NG. VA’s primary MSPV-NG program goals are to:

Standardize requirements for supply items for greater clinical consistency.

Achieve cost avoidance by leveraging VA’s substantial buying power when making competitive awards; VA set a goal of achieving $150 million in cost avoidance in 2016 through a supply chain transformation effort, of which MSPV-NG is a primary part.

Achieve greater efficiency in ordering and supply chain management, including a metric of ordering 40 percent of medical centers’ supplies from the MSPV-NG formulary.

Involve clinicians in requirements development to ensure uniform clinical review of medical supplies.

VHA gave responsibility for developing and implementing MSPV-NG to its Healthcare Commodity Program Executive Office (program office), an organization within VHA’s Procurement and Logistics Office. According to documentation, the program office and SAC, a VA-wide contracting organization, identified several steps to allow for a successful transition to MSPV-NG. These steps included the following: 1. 
Identify and develop requirements – Determine which types of medical supplies should be made available to medical centers via the MSPV-NG formulary and their key characteristics. The program office was responsible for this aspect of the transition.

2. Award contracts and establish agreements – SAC was responsible for awarding distribution contracts to a select number of prime vendors within certain geographic areas to deliver supplies to medical centers. SAC was also responsible for awarding contracts and establishing agreements with suppliers that provide the products themselves, which set prices for individual items.

3. Implement MSPV-NG at medical centers – MSPV-NG orders are placed by ordering officers—members of the logistics staff at each medical center who are delegated authority by SAC contracting officers to place orders for medical supplies.

Each medical center’s most frequently purchased items—referred to as its core list—vary based on the type of care provided, local preferences, and other factors. We have previously reported that organizational transformations (such as MSPV-NG) require careful planning and implementation to be successful. For instance, one leading practice is for leadership to set clear implementation goals and a timeline to achieve them. Likewise, communicating a strategy and progress to stakeholders—as well as seeking feedback—is a hallmark of successful organizational transformations. We have reported that at the center of any serious change management initiative are the people. Thus, a key to success is to recognize the “people” element and implement strategies to help individuals maximize their full potential in the organization, while simultaneously managing the risk of reduced productivity and effectiveness that often occurs as a result of the changes. 
Building on the lessons learned from the experiences of large private and public sector organizations, the key practices and implementation steps that we identified in our prior work can help agencies transform their cultures so that they can be more results oriented, customer focused, and collaborative in nature. Standards for Internal Control in the Federal Government also identify related principles, such as the importance of the tone from the top and ensuring that data used in decision-making are reliable. Leading hospital networks we spoke with have similar goals to VA in managing their supply chains, including clinical standardization and reduced costs. In managing their supply chain efforts, the leading hospital networks we identified take consistent approaches to drive change and achieve savings. These hospitals reported they analyze their spending to identify items purchased most frequently, and which ones would be the best candidates to standardize first to yield cost savings. These hospitals also acknowledge that this is an iterative process and do not attempt to standardize all categories of medical supplies at a single time, but instead prioritize categories of supplies based on the potential for standardization. The hospitals’ supply chain managers establish consensus with clinicians through early and frequent collaboration on supply chain standardization. These hospitals also continually involve clinicians in determining key supply characteristics and evaluating potential items, understanding that clinician involvement is critical to the success of any effort to standardize their medical supply chain. For example, a supply chain official from one large hospital we spoke with stated that selecting an item that does not meet clinician needs could damage clinician buy-in for future efforts, so they take great care to be thorough in taking clinician input into account. 
Supply chain officials from these leading hospitals have reported positive results from these efforts, such as increased cost savings and the potential for improved patient care. By tackling a few specific categories at a time and communicating with clinicians on an ongoing basis about the outcomes of these processes and the decisions taken, these hospitals are able to achieve efficiencies, including significant cost savings in some cases, while maintaining buy-in from their clinicians. Figure 3 depicts the key steps that selected hospitals’ supply chain managers reported following when standardizing their medical supply chains, including the critical role of clinicians throughout the process. The Federal Acquisition Regulation (FAR) generally requires agencies to contract using full and open competition, but permits contracting without full and open competition in specified circumstances, such as when the agency’s need for supplies or services is of unusual and compelling urgency. The VHA Procurement Manual describes an emergency as a situation—such as response to fires or floods—where delay in award of a contract would result in financial or physical injury to the VA or a veteran. The manual also states that neither a lack of advance planning nor concerns about a need to obligate funds before the end of the fiscal year are valid justifications for an urgent or emergency procurement request. For needs that cannot be met through MSPV-NG, medical centers submit purchase requests to their local VHA contracting office—the Network Contracting Office. The contracting office provides medical centers with expected lead times for various types of procurements, which can be from days to months, depending on the complexity of the requested item. However, if a medical center has an urgent need that must be met more quickly than the expected lead times, the customer submitting the request can identify it as an emergency. 
The purchase request is entered into two VA data systems, the Integrated Funds Distribution Control Point Activity, Accounting and Procurement and VA's Electronic Contract Management System (eCMS). The medical center designates the priority level of the request as:

1. Emergency: life threatening cases, emergency physical plant repair, and requires acquisition action within 24 hours;
2. Special: urgent, non-life threatening, and requires acquisition action within 72 hours; and
3. Standard: all other cases and requires acquisition action within 40 days.

Incoming requests are screened by Network Contracting Office managers and assigned to individual contracting officers, who must prioritize emergency requests over other pending contract actions. Figure 4 illustrates the typical process for submitting and awarding an emergency procurement. VHA's implementation of the MSPV-NG program—from its initial work to identify a list of supply requirements in 2015, through its roll-out of the formulary to medical centers in December 2016—was not executed in line with leading practices. Despite changes aimed at improving implementation, the agency continues to face challenges that have precluded achievement of program goals. Specifically, VHA lacked a documented program strategy, leadership stability, and workforce capacity for the transition that—if in place—could have facilitated buy-in for the change throughout the organization. Furthermore, the initial requirements development process and tight time frames contributed to ineffective contracting processes. As a result, VHA developed an initial formulary that did not meet the needs of the medical centers. VA made some changes in the second phase of requirements development to address deficiencies identified in the initial roll out, namely by increasing the level of clinical involvement. However, VHA has not yet achieved its goals for utilization and cost avoidance.
VA did not document a clear overall strategy for the MSPV-NG program at the start and has not done so to date. According to program office and SAC officials responsible for developing and executing the program, no document existed at the outset of the MSPV-NG program that outlined the overall strategy. About 6 months after our initial requests for a strategy or plan, an official provided us with an October 2015 plan focusing on the mechanics of establishing the MSPV-NG formulary. However, this plan was used only within the VHA Procurement and Logistics Office and had not been approved by VHA or VA leadership. Leading practices for organizational transformation state that agencies must have well-documented plans and strategies for major initiatives (such as MSPV-NG) and communicate them clearly and consistently to all involved—which included VHA headquarters, the SAC, and all 170 medical centers. Without such a strategy, VA could not ensure that all stakeholders understood VHA’s approach for MSPV-NG and worked together in a coordinated manner to achieve program goals. This is also in contrast to the practices of several leading hospital networks we met with, which placed an emphasis on designing and communicating a strategy and governance structure for their medical supply standardization efforts before making any changes to purchasing. If VA continues to move forward with MSPV-NG without an overarching strategy that it communicates to all stakeholders to ensure they understand VHA’s approach for MSPV-NG, VA will continue to face challenges in meeting program goals. Leadership instability and workforce challenges also made it difficult for VA to execute its transition to MSPV-NG. Due to a combination of budget and hiring constraints, and lack of prioritization within VA, the program office, which has primary responsibility for implementing MSPV-NG, has never been fully staffed and has experienced instability in leadership. 
As of January 2017, 24 of the office’s 40 positions were filled, and program office officials stated that this lack of staff affected their ability to implement certain aspects of the program within the planned time frames. Our work has shown that leadership buy-in is necessary to ensure that major programs like MSPV-NG have the resources and support they need to execute their missions. We have also previously found that leadership must set a tone at the top and demonstrate strong commitment to improve and address key issues. However, leadership of VHA’s Procurement and Logistics Office changed frequently during the implementation of MSPV-NG, and two of its leaders, the Chief Procurement and Logistics Officer and the Deputy Chief Logistics Officer, were serving in an acting capacity. A similar instability in leadership affected the program office itself. Since the inception of MSPV-NG, the program office has had four directors, two of whom were acting and two of whom were fulfilling the director position while performing other collateral duties. For instance, one of the acting MSPV-NG program office directors was on detail from a VISN office to fulfill the position but had to abruptly leave and return to her VISN position due to a federal hiring freeze. Without prioritizing the hiring of the program director position on a permanent basis, this lack of stability could continue to affect execution of MSPV-NG. Moreover, VA’s Chief Acquisition Officer (CAO), whose responsibilities include oversight of VA acquisition programs such as MSPV-NG, is serving in an acting capacity and is not a “non-career employee.” By statute, VA is required to appoint or designate a non-career employee as the agency’s CAO. VA provided information to show that since 2009, VA has designated career employees as “acting” CAOs rather than appointing or designating non-career employees to the CAO position. 
As we reported in 2012, clear, strong, and effective leadership, including a CAO, is key to an effective acquisition function that can execute complicated procurements like MSPV-NG. By appointing a CAO in a non-acting capacity, VA could improve the effectiveness of its acquisition function. During our 2012 review, VA indicated that it sought to establish an Assistant Secretary for Acquisition, Logistics, and Construction, who would serve as VA's CAO. In connection with the current review, VA's Office of General Counsel cited a statutory limitation on the number of assistant secretaries that may be established within VA as the reason it has not established that additional assistant secretary position. VA's Office of General Counsel indicated that the agency was considering requesting, in the reform plan that VA was required to submit to the Office of Management and Budget in September 2017, a change to the statute that limits the number of VA assistant secretaries. However, subsequently, VA's Office of General Counsel indicated that the plan will not include such a request. By not appointing or designating a non-career employee as CAO, VA will continue to be noncompliant with the statute. Figure 5 summarizes the history of leadership changes in these positions, which are all currently filled in an acting capacity. Further, according to officials, leadership vacancies at medical centers and competing demands on logistics staff time made implementation of MSPV-NG more challenging at the selected VISNs and medical centers we visited. For instance, longstanding vacancies in the Chief Supply Chain Officer positions existed at one of the VISNs and its medical center that we visited. The VISN-level position was vacant for about 4 years, with Chief Supply Chain Officers from individual medical centers filling in for periods of time, according to the current Chief Supply Chain Officer, who took the position in January 2017.
In one medical center within that VISN, the local position was also vacant for several years, according to the current Chief Supply Chain Officer, who took the position in 2016. He stated that he found that the staffing of the office had suffered in the absence of a leader, leaving it poorly-equipped to execute the transition to MSPV-NG. Medical center logistics staff also had several other major transformation efforts to manage alongside the MSPV-NG transition, such as implementing a new system for managing equipment. Several Chief Supply Chain Officers we interviewed stated that these additional demands made it challenging for their staff to implement the MSPV-NG program. The MSPV-NG program office initially developed requirements for medical and surgical supply categories—identifying items to include in the formulary—based almost exclusively on prior supply purchases, with limited clinician involvement. The program office concluded in its October 2015 formulary plan that relying on data on previous clinician purchases would be sufficient and that clinician input would not be required for identifying which items to include in the initial formulary. Further, rather than standardizing purchases of specific categories of supplies—such as bandages or scalpels—program officials told us they identified medical and surgical items on which VA had spent $16,000 or more annually and ordered at least 12 times per year, and made this the basis for the formulary. Officials said this analysis initially yielded a list of about 18,000 items, which the program office further refined to about 6,000 items by removing duplicate items or those that were not considered consumable commodities, such as medical equipment. In 2015, the program office also took the lead in developing requirements for these 6,000 items. 
In documentation, and as confirmed by agency officials, we found that the program office did not solicit input from clinicians for most items and did not prioritize categories of supplies. Instead, the program office relied on historical purchase data to set requirements across medical and surgical categories because officials said they thought this would provide a good representation of medical centers' needs. This approach to requirement development stood in sharp contrast to those of the leading hospital networks we met with, which relied heavily on clinicians to help drive the standardization process and focused on individual categories of supplies rather than addressing all categories simultaneously. Based on the requirements developed by the program office, SAC began to issue solicitations for the 6,000 items on the initial formulary in June 2015. From June 2015 to January 2016, medical supply companies responded to only about 30 percent of the solicitations. As a result, SAC officials conducted outreach, and some of these companies told SAC that VHA's requirements did not appear to be based on clinical input but instead consisted of manufacturer-specific requirements that favored particular products rather than broader descriptions. Furthermore, SAC did not solicit large groups of related items, but rather issued separate solicitations for small groups of supply items—each consisting of 3 or fewer items. This is contrary to industry practices of soliciting large groups of related supplies together. Therefore, according to SAC officials, some medical supply companies told them that submitting responses to SAC's solicitations required more time and resources than they were willing to commit. By its April 2016 deadline for having 6,000 items on the formulary, SAC had been working on the effort for over a year and had competitively awarded contracts for about 200 items, representing about 3 percent of the items.
Without contracts for the items on the formulary in place, VA delayed the launch of the MSPV-NG program until December 2016. To continue the legacy MSPV program through the new launch date, SAC awarded bridge contracts—short-term sole-source contracts—to its legacy prime vendor contractors for a second year. We previously reported that bridge contracts had resulted in higher costs to the government. In part because of these costs, SAC officials stated that VA leadership did not view a third set of bridge contracts for the legacy MSPV program as a viable option. As a result of the pressure not to miss the revised December 2016 deadline, which VA documents we reviewed stated would have been "catastrophic," SAC abandoned its original goal of using competitive procedures and relied instead on a non-competitive strategy for placing most of the items on the MSPV-NG initial formulary. Starting in August 2016, SAC established 175 limited source blanket purchase agreements with Federal Supply Schedule vendors to complete the initial Phase 1 formulary. While this approach enabled the MSPV-NG program office to establish the formulary more quickly, it did so at the expense of one of the primary goals of the MSPV-NG program—leveraging VA's buying power to obtain cost avoidance through competition. We previously reported that a senior VA procurement official said VA could save 30 percent, on average, on the prices available under the Federal Supply Schedules when awarding competitive contracts that leveraged VA's buying power under the legacy MSPV program. The discounts VA obtained from these limited source agreements were generally much less. We reviewed a non-generalizable sample of 10 randomly selected limited source blanket purchase agreements and found that most items (332 of the 376 items covered by these agreements) were discounted 5 percent or less.
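The discount finding above reduces to a simple threshold count. A minimal sketch follows; only the counts (332 of 376 items) come from the report, while the individual per-item discount values are invented for illustration:

```python
# Hypothetical sketch of the sampled-discount analysis described above.
# Only the counts (332 of 376 items) mirror the report; the individual
# discount values are invented.

def items_at_or_below(discounts, threshold=0.05):
    """Count items whose discount off the Schedule price is <= threshold."""
    return sum(1 for d in discounts if d <= threshold)

# Invented sample mirroring the reported counts: 376 items total
sample = [0.02] * 300 + [0.05] * 32 + [0.12] * 44

count = items_at_or_below(sample)
share = count / len(sample)
print(f"{count} of {len(sample)} items discounted 5% or less ({share:.0%})")
# → 332 of 376 items discounted 5% or less (88%)
```

The roughly 88 percent share contrasts with the 30 percent average savings the report cites for competitively awarded legacy MSPV contracts.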
Competition is the cornerstone of the acquisition system; its benefits are well established, including saving the taxpayer money. As shown in figure 6, the non-competitive agreements awarded in the last few months before the launch of MSPV-NG accounted for approximately 79 percent of the items on the January 2017 version of the formulary. Once VA's MSPV-NG initial formulary was established in December 2016, each medical center was charged with implementing it. Previously, medical centers had hundreds of thousands of items they could obtain through the legacy MSPV program. In order to transition to the new formulary—consisting of around 6,000 items at launch—the program office directed medical centers to determine if items they had ordered in the past could be fulfilled by the formulary. To do this, each medical center's Chief Supply Chain Officer—the head of the logistics office—was to review their center's core list of previously ordered items to try to identify matches on the MSPV-NG formulary in three different categories:

1. Direct matches – For some items, the exact same item a medical center had been purchasing was available in the formulary. Identifying these matches was not necessarily simple, as the names and identification numbers were not typically the same.
2. Potential clinical equivalents – Many items that were no longer available in exactly the same form under the MSPV-NG formulary had close matches on it. However, because these were not exactly the same, work was required to ensure that they were clinically equivalent—in nearly all cases, this required clinician input. Clinical Product Review Committees at each medical center, which are composed of clinicians and others, are responsible for approving new supplies before they are introduced to a medical center.
3. Items without matches – Finally, there were some items that medical centers had been purchasing for which logistics staff were not able to identify a clinical equivalent in the MSPV-NG formulary.
In these cases, logistics staff sought non-MSPV methods of obtaining the same items they had previously purchased—usually via purchase card transactions and, in a few cases, via requests to their local contracting office to award new contracts for the items. Figure 7 shows the typical process for identifying MSPV-NG matches for core list items at individual VA medical centers, as described by logistics officials at the selected medical centers. According to logistics officials we spoke with, the MSPV-NG formulary matching process was challenging for the selected medical centers, and they had varying levels of success, in part, due to incomplete guidance from the program office. The MSPV-NG program office provided some guidance, including a tool for identifying direct matches, but three of the Chief Supply Chain Officers at the selected medical centers stated that they did not find it very helpful, in part, because it only included matches for the highest-volume items. Based on our discussions with the MSPV-NG program office and selected medical centers, as well as our review of communications provided to medical centers, the program office provided various emails and held conference calls, but did not provide complete guidance to summarize the steps medical centers should take to execute the matching process. Without complete guidance, each selected VISN and medical center approached the process somewhat differently. One medical center devoted a great deal of effort to matching items early on, had completed its review, and determined its purchasing strategy for nearly all core list items before the transition period was complete. Others devoted less attention to this and planned instead to rely on purchase cards to continue buying the same items they had purchased under the legacy MSPV program, which works against VA's goal of leveraging buying power through MSPV-NG.
The amount of clinician input on the matching process varied among medical centers in our review, in part, because the various communications from the program office did not provide complete information on how to involve clinicians and Clinical Product Review Committees at medical centers. While the program office asked medical centers to involve clinicians, it did not specify a process for how to do so, and centers were left to develop their own approaches. For example, in one selected VISN, the Deputy Chief Medical Officer became involved with the logistics office coordination effort and obtained active participation from clinicians at each medical center, who formed working groups to review potential clinical equivalent matches. In other VISNs and medical centers, there was little concerted effort to involve clinicians at this stage of the process, and only a few clinical equivalent items were reviewed and matched with clinical input. Without effective matching to the formulary, VA cannot achieve the MSPV-NG utilization rates it needs to meet the program's goals. Without complete guidance, these centers may be unable to effectively match their core lists to the MSPV-NG formulary and, thus, increase their utilization of it. The MSPV-NG formulary also continued to change while the medical centers were working to match their core list items, which made the process more challenging. Several clinicians and logistics staff at the medical centers we visited expressed frustration about the frequency with which items were being added to and deleted from the formulary and the impact it had on their purchasing strategies. Our analysis found that in April 2017, 690 items were added to the formulary, but, in June, 628 items were deleted. These medical center officials also noted that they had not received any communications from the program office or SAC regarding why items were being added and deleted, and were unsure why the changes were taking place.
SAC and MSPV-NG program office officials stated that these continuing changes stemmed from several factors, including elimination of duplicate items from multiple vendors and addition of other items identified as necessary by VHA or medical centers. In some cases, medical center officials told us that they were less willing to expend effort on the matching process because the formulary was a moving target. Without visibility into the criteria the program office uses for adding or removing items on the formulary, medical centers will likely continue to face challenges in matching their items to the formulary. See Table 1 for the number of items added and deleted from the formulary from January to July 2017. Many medical centers were unable to find direct matches or substitutes for a substantial number of items on their core lists, which negatively impacted utilization rates for the initial formulary. In October 2015, the program office estimated that the items on the initial formulary would meet 80 percent or more of the medical centers' needs. However, according to SAC, as of June 2017, only about a third of the items on the initial version of the formulary were being ordered in any significant quantity by medical centers, indicating that many items on the formulary may not be those that are needed by medical centers. Senior VHA acquisition officials attributed this mismatch to shortcomings in their initial requirements development process and in VA's purchase data. VA set out a target that medical centers would order 40 percent of their supplies from the MSPV-NG formulary, but utilization rates are below this target, with a nationwide average utilization rate across medical centers of about 24 percent as of May 2017.
Instead of fully using MSPV-NG, the selected medical centers are purchasing many items through other means, such as purchase cards or new contracts awarded by their local contracting office, in part, because they said the formulary does not meet their needs. These approaches run counter to the goals of the MSPV-NG program and result in VA not making the best use of taxpayer dollars. Specifically, Chief Supply Chain Officers—who are responsible for managing the ordering and stocking of medical supplies at the six selected medical centers—told us that many items they needed were not included in the MSPV-NG formulary. As discussed above, the difficult transition process also created a lack of clinician desire to find substitutes on the formulary. As such, we found that these six medical centers generally fell below VA's stated utilization target that medical centers order 40 percent of their items from the MSPV-NG formulary. As shown in figure 8, among the six selected medical centers we reviewed, one met the target, while the remaining five were below 25 percent utilization. The one facility that met the target, Hampton VA Medical Center, is categorized by VA as a smaller, less complex facility, and had fewer items to match, which could contribute to its higher utilization. The utilization rate is VA's primary metric for the success of MSPV-NG—broad usage of the formulary is necessary for VA to meet its goals of more efficient supply purchasing, standardization, and cost avoidance. Utilization is calculated by dividing the purchases made via MSPV-NG by the total purchases under the medical supply budget category. This is the same metric used under the legacy MSPV program, and most medical centers were meeting the 40 percent target prior to the transition to MSPV-NG.
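The utilization metric described above is a simple ratio of MSPV-NG spending to total medical-supply spending. A minimal sketch follows; the dollar amounts are invented for illustration, since the report gives only the resulting percentages:

```python
# Minimal sketch of VA's utilization metric as described in the report:
# MSPV-NG purchases divided by total purchases under the medical supply
# budget category. Dollar amounts below are invented for illustration.

TARGET = 0.40  # VA's target: 40% of supplies ordered via MSPV-NG

def utilization_rate(mspv_ng_spend: float, total_supply_spend: float) -> float:
    if total_supply_spend <= 0:
        raise ValueError("total supply spending must be positive")
    return mspv_ng_spend / total_supply_spend

# Example: a center routing $2.4M of a $10M supply budget through MSPV-NG,
# matching the roughly 24% nationwide average the report cites
rate = utilization_rate(2_400_000, 10_000_000)
print(f"utilization: {rate:.0%}, meets target: {rate >= TARGET}")
# → utilization: 24%, meets target: False
```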
Officials stated that the current metric does not provide enough information and, as a result, VHA is in the process of preparing a new metric to more precisely assess MSPV-NG use and effectiveness, and has begun conducting routine surveys of its medical centers to obtain their feedback on MSPV-NG. Greater utilization of MSPV-NG is essential to VA achieving the cost avoidance goal of $150 million for its supply chain transformation effort. Under the legacy MSPV program, the National Acquisition Center tracked cost avoidance achieved by comparing prices for competitively awarded MSPV supply contracts with prices available elsewhere. However, VHA officials stated that they are not currently tracking cost avoidance related specifically to MSPV-NG. VHA officials told us they plan to use a new cost avoidance metric that compares total supply spending for VHA as a whole across fiscal years. This new metric, however, does not measure whether cost savings are being achieved specifically through MSPV-NG. Officials stated the broader metric was more useful than measuring cost avoidance specific to MSPV-NG. VA's practices are in contrast with those of the leading hospitals we met with, which maintain detailed, item-level data on cost avoidance and use them to inform future supply requirements and contracting. The hospitals we interviewed reported substantial cost savings from their standardization efforts. For example, the director of supply chain management at one leading hospital network stated that it achieved a goal of $100 million in cost savings on medical supplies in the first 2 years of their standardization effort, and an additional $35 million annually in the several years since. This hospital achieved these results despite its purchasing power being less than VA's.
Without calculating cost avoidance attributable to MSPV-NG, VHA cannot assess whether the program is meeting its goals, nor can it use cost avoidance data to guide future MSPV-NG requirement development and contracting strategy efforts. In Phase 2 of MSPV-NG, the program office has taken some steps to incorporate greater clinical involvement in subsequent requirement development, but both its requirements development and SAC’s contracting efforts have been hampered by staffing and schedule constraints. Work on Phase 2 began while medical centers were implementing Phase 1 and beginning to order from the MSPV-NG formulary. Figure 9 shows key dates in the concurrent requirements development, contracting, and implementation processes for Phases 1 and 2. In the fall of 2016, the program office began to establish panels of clinicians—including physicians, surgeons, and nurses working in the medical centers—to serve on MSPV-NG integrated product teams (IPT) assigned to the task of developing updated requirements for the second phase of the formulary. The IPTs were to review categories of medical supplies such as operating room surgical supplies and patient exam room instruments and supplies. According to VA officials and our analysis, this revised approach was based on a recognition that more robust mechanisms were needed for incorporating clinician input, in part, because VA had sought information on best practices from leading hospital networks, and because of shortcomings with the Phase 1 requirements that became apparent in the contracting process. Similar to the analysis performed in support of the initial formulary, the MSPV-NG program office analyzed updated data on medical center supply purchases to generate a list that had grown from the 6,000 items established for the initial formulary to a new total of about 9,900 items for these new IPTs to review. 
The program office set a March 2017 deadline to complete this second, IPT-based phase of requirements development—VHA ultimately met this compressed timeline, but in a rushed manner that limited the impact of the clinical involvement. Program officials said they had difficulty recruiting clinicians to participate, and the program office’s first IPTs were not established until the fall of 2016. In December 2016, slightly more than half (20 of the 38) of the IPTs had begun their work to review items and develop updated requirements. Many of the remaining IPTs were still looking for additional clinicians to participate. Program officials said they received assistance from the Assistant Deputy Under Secretary for Health for Administrative Operations in December 2016. According to program officials, this involvement proved critical in successfully recruiting staff to participate in some of the remaining IPTs, which were then able to make progress in reviewing each item in the formulary. However, the program office did not provide training for the IPTs on how to carry out their work until late January 2017, about 2 months before the IPTs were scheduled to complete the development of all medical and surgical requirements. Further, staff on the IPTs had to complete their responsibilities while simultaneously managing their regular workload as physicians, surgeons, or nurses. By early March 2017, the IPTs still had about 4,200 of the 9,900 items to review. Faced with meeting this unrealistic time frame, the MSPV-NG program office had 9 IPT members travel to one location—with an additional 10 members participating virtually—to meet for 5 days to review the remaining items. Members told us that this time pressure limited the extent to which they were able to pursue the goal of standardizing supplies, and that their review ended up being more of a data validation exercise than a standardization review. 
In addition, the program office attempted to pursue standardization across all supply categories rather than focusing on those with the greatest potential for standardization and cost avoidance, and it continues to lack a strategy for prioritizing categories going forward. Standards for Internal Control in the Federal Government state that management should define what is to be achieved, who is to achieve it, how it will be achieved, and the time frames for achievement. In addition, this approach runs counter to how leading hospitals standardize their supply chains by tackling individual categories one at a time and obtaining deep clinician involvement. Without a strategy for how best to prioritize these items by category for future phases of the requirement development process, these IPTs will be limited in fully contributing to VHA's goals of more efficient supply purchasing, standardization, and cost avoidance. SAC's ongoing Phase 2 contracting effort also faces an unrealistic schedule. The SAC plans to replace the existing Phase 1 limited source agreements with competitive awards based on the Phase 2 requirements generated by the IPTs, but it may not be able to keep up with expiring agreements. Because they were made on a non-competitive basis, the Phase 1 limited source blanket purchase agreements were established for a period of one year. In order to keep the full formulary available, the SAC director said his staff must award several hundred contracts before the Phase 1 limited source agreements expire later this year. However, the SAC director stated that doing so will be difficult because his staff must award between 200 and 250 contracts in a 3-month period from the end of September 2017 through December 2017. To adhere to this ambitious schedule, each of the 15 contracting staff on the MSPV-NG team would need to award between 13 and 17 contracts within 3 months, equaling one contract per staff member every 5 to 6 days, which is significantly faster than SAC's typical pace.
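The workload arithmetic above can be checked with a quick back-of-the-envelope calculation. The staff count and award totals come from the report; the 90-day window is an assumed approximation of the end-of-September through December period:

```python
# Back-of-the-envelope check of the Phase 2 award pace described above.
# The 15-person team and 200-250 award totals come from the report; the
# 90-day window is an assumption approximating the 3-month period.

STAFF = 15
WINDOW_DAYS = 90

for total_awards in (200, 250):
    per_officer = total_awards / STAFF          # awards per contracting officer
    days_per_award = WINDOW_DAYS / per_officer  # calendar days per award
    print(f"{total_awards} awards: ~{per_officer:.0f} per officer, "
          f"one every ~{days_per_award:.1f} days")
```

The result is roughly 13 to 17 awards per officer, one every 5 to 7 calendar days, consistent with the pace the SAC director described as significantly faster than typical.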
SAC officials acknowledged that it is unlikely that they will be able to award the 200 to 250 contracts by the time the existing limited source agreements expire. According to SAC officials, they are in the process of hiring more staff to deal with the increased workload. Further, the SAC division director told us that they cancelled all outstanding Phase 2 solicitations in September 2017 due to low response rates, protests from service-disabled veteran-owned small businesses, and changes in overall MSPV-NG strategy. SAC is still assessing alternative approaches, which poses additional challenges for replacing expiring agreements by December 2017. For cases where limited source agreements expire without new contracts in place, SAC officials said they intend to use a different type of agreement called a distribution and pricing agreement as a stopgap. They stated that the use of these agreements with suppliers who have existing limited source agreements would prevent items from falling off the formulary. However, like blanket purchase agreements, these agreements are not contracts—the supplier informally agrees to continue to sell its products to VA at the same price and terms. SAC officials stated that VA has not used these types of agreements previously, and they pose a risk in that the supplier is not required to perform and VA has no remedy if the supplier opts to end the agreement or raise the price. These agreements also do not allow VA to achieve its goal of greater cost avoidance through supply standardization and competitive contracts. Despite the unrealistic time frames and the risks of the stopgap approach, VA has not developed a plan for how to mitigate these risks, established an achievable schedule for making the competitive Phase 2 contract awards, or prioritized the various categories of supplies. Establishing such a plan would help ensure that VA is better positioned to mitigate risks and prioritize supply categories that are most likely to yield cost avoidance.
VA is currently revising its approach to MSPV-NG requirement development to adopt a model that focuses on clinician-driven sourcing, a key step that leading hospital networks reported following in standardizing their medical supply chains. The MSPV-NG program office continues to refine its strategy for requirement development and is seeking greater clinician involvement in future requirement development efforts, which it refers to as clinician-driven sourcing. For example, program officials said they plan to involve VHA’s national clinical program offices—groups of clinicians at VHA that provide national policy and leadership within their clinical specialty—to obtain greater buy-in from senior clinical leaders and to implement a more structured approach for identifying clinicians willing to serve on integrated product teams. This approach, if implemented effectively, could mitigate some of the prior challenges in recruiting clinicians to participate. However, these efforts are in their early stages, and the MSPV-NG program office has not outlined whether or how it will use input from these clinical groups to prioritize its requirements development and standardization efforts. Without input from these national clinical program offices, VA will continue to be challenged to focus on supply categories that offer the best opportunity for standardization and cost avoidance. Senior VHA and MSPV-NG program officials also told us each VA medical center was expected to use a standing committee, known as the Clinical Product Review Committee, to review new items to include on the formulary and to evaluate opportunities to streamline the formulary through standardization. This approach will likely require additional effort on the part of the MSPV-NG program office to implement, as some centers’ clinicians said the Clinical Product Review Committees were not operating as intended.

VA is also exploring major changes in its contracting strategy for MSPV-NG.
Specifically, MSPV-NG program office and SAC officials plan to replace the current contract and formulary process with a new contract under which the vendor would not only provide distribution services, but also develop the formulary. In October 2017, VA sought information from industry on their capabilities to support such a program. VA stated that its target completion date for this new MSPV-NG contracting strategy is December 2018. To date, VA has provided only limited details on this potential new approach; thus, we cannot assess whether it has the potential to address the shortcomings with the current MSPV-NG approach described in this report.

Some emergencies are to be expected, as VHA operates one of the largest health care systems in the country. However, VHA designated a substantial number of its procurements in fiscal year 2016 as emergencies, and we found that it frequently uses emergency procurements to buy routine supplies and ongoing services that do not warrant the emergency designation defined in VHA guidance. Among the 18 contract actions we reviewed from three VISNs, we found instances of emergency procurements caused by shortcomings in planning, funding, and communication. These emergency procurements strain the capacity of VA’s acquisition workforce and put the government at risk of paying more than it should for goods and services. Based on our analysis of VA data, we found that emergency procurements accounted for approximately 20 percent of VHA’s overall contract actions in fiscal year 2016, with obligations totaling about $1.9 billion. As shown in figure 10, we found that the percentage of requests designated as emergencies varied across the 19 VISNs. We selected a non-generalizable sample of 18 contract actions designated by customers as emergencies. Most of these contracts were not awarded on a competitive basis, and half cited the unusual and compelling urgency exception to full and open competition.
Table 2 shows instances in which the 18 contract actions were awarded without competition, those that cited unusual and compelling urgency as the basis for use of non-competitive procedures, and our observations on the main contributing factor to designating these procurements as emergencies. Additional information on each of the contributing factors follows. VHA guidance specifies that neither a lack of acquisition advance planning nor concerns about a need to obligate funds before the end of the fiscal year are valid justifications for an urgent or emergency procurement request. However, among our selected contract actions, lack of planning by customers was a principal contributing cause for 7 of the 18 contract actions being procured as emergencies, in some cases resulting in non-competitive awards to the incumbent vendor for the same requirement. For instance, one medical center procured medical gas on an emergency basis through consecutive non-competitive contracts. The initial contract was terminated because the company was not licensed by the state where services were being provided, which led to a 3-month emergency contract being awarded to a different vendor. This was followed by a series of non-competitive bridge contracts to that incumbent vendor over a 3-year period. In another case, a medical center routinely procured custom surgical packs through consecutive emergency sole-source purchase orders. The contracting officer’s representative told us the medical center may be paying more for custom surgical packs ordered on an emergency basis than it would under a competitive, long-term contract. Funding uncertainty also contributed to three awards being designated as emergencies. For example, one medical center submitted an emergency request to outsource patient laundry due to funding uncertainties for repairs of on-site, VA-owned and operated laundry equipment.
The contracting officer’s representative stated that the VISN could not provide funds to repair the equipment, leading to a series of last-minute emergency requests, a few months at a time, for contracted patient laundry services to prevent a gap in service. At another VISN, a large amount of funding became available late in the fiscal year, which led to an emergency request to purchase postage to ensure the funding was spent before it expired at the end of that fiscal year. The contracting office issued an order for $890,000 worth of metered mail postage, which medical center staff told us would cover 1 to 2 years of usage. We found that shortcomings in communication between customers and contracting offices also contributed to eight awards made on an emergency basis for routine items. For one of the contracts in our review, a medical center resubmitted a request in January 2016 to purchase equipment for a new operating room that had previously been submitted as a standard request months earlier. However, the contracting officer’s representative at the medical center told us that no action was taken by the contracting office, and he did not receive a response for 6 months. The medical center then upgraded the request to an emergency since the operating room was scheduled to open in June 2016. The contracting officer’s representative noted that if the equipment was not procured by the scheduled opening date, the opening of the new operating room would be delayed, possibly resulting in the rescheduling or cancelling of procedures, affecting patient care. After the order was upgraded to an emergency, the equipment was ultimately delivered before the operating room was opened. In another case, an inventory manager routinely submitted emergency purchase requests for cardiac catheters as a strategy to manage stock levels. The reason he cited was that he was uncertain how long it would take the contracting office to fulfill standard requests.
He stated that the contracting office’s time frames for standard orders are unpredictable, and more consistent communication about the expected delivery date of any given order would reduce his need to place emergency orders. He noted that being able to plan around delivery dates was important for maintaining stock at designated levels for the various types of catheters used in the cardiology department. Figure 11 shows a medical center stock room and designated stock levels for one type of catheter. The “L” indicates the standard stock level, and “R” indicates the level of stock at which refill is needed. Ordering officers use these levels to determine when to place orders.

In addition to being contrary to VHA guidance, overuse of emergency procurement requests has negative effects on the overall operation of VA’s procurement system. In reviewing the 18 selected contracts, we identified two primary effects—the potential for increased costs and an increased burden on the contracting workforce that could take resources away from other important efforts. As noted above, half of the contract actions we reviewed (9 of 18) cited unusual and compelling urgency as the basis for the use of non-competitive procedures. When unusual and compelling urgency exists, an agency may limit competition to the firms it reasonably believes can perform the work in the time available. In all nine cases, however, there was no competition at all, which puts the government at risk of paying more than it should for goods and services. Promoting competition—even in a limited form—increases the potential for quality goods and services at a lower price. We have previously reported that competition in contracting is a critical tool for achieving the best return on investment and that it can improve contractor performance and promote accountability for results.
Emergency procurement requests must be processed quickly, and contracting officers have limited ability to question the validity of an emergency request. Nevertheless, many of the contracting officials we spoke with who had responsibility for our 18 selected contracts told us they generally communicate directly with the requestor to clarify the requirement and assess the nature of the request. As stated in the VHA procurement manual, contracting officers generally must process emergencies within 5 days. However, the manual acknowledges that different Network Contracting Offices assign different time frames to priority categories. For instance, officials from all three selected Network Contracting Offices told us they generally process emergencies immediately. Several contracting officials we interviewed stated that, because they do not have clinical expertise, they rarely question medical center customers about whether a request is truly an emergency. Even if they work with customers to reach a compromise, such as purchasing a smaller quantity to fill just the immediate need, emergencies still require immediate attention and result in deprioritizing other tasks. The impact on contracting officer workloads can be exacerbated by low staffing levels. For example, none of the three Network Contracting Offices we visited was staffed to its authorized level. Table 3 shows the number of emergency actions processed by each selected Network Contracting Office in fiscal year 2016, along with staff levels. We have previously reported that when contracting officers process frequent and emergency small-dollar transactions, it reduces their ability to plan ahead and take a strategic view of procurement needs. Several of the VA contracting officials we spoke with noted that regularly processing emergency contracts and extensions affects their ability to work on bigger-picture efforts, some of which would reduce workload.
For instance, one contracting officer stated that awarding emergency contract extensions has prevented him from competitively awarding more than 40 lab contracts. In these cases, the contracting officer stated that he instead extended the period of performance of the non-competitive contracts with the incumbent vendors. In addition, emergency contracts are generally awarded for short periods of time—often 1 year or less—while competitive contracts often have terms of 5 years. According to some contracting officers we spoke with, this can result in contracting officers spending much of their time tending to a large number of short-term contracts, instead of a smaller number of fully competed contracts with longer periods of performance. We found that greater planning and coordination between medical center and contracting staff can help to leverage VA’s buying power by employing principles of strategic sourcing—a process that moves away from numerous individual procurements to a broader aggregate approach—and thereby reduce the need for emergencies. For example, inventory managers responsible for two of the selected cardiac catheter contracts in our sample stated that managing catheter inventory was difficult because of the unpredictability of demand, the high cost of the items, and the long turnaround times from their respective contracting offices. As a result, they had to place frequent emergency orders to keep stock at safe levels. One inventory manager noted, however, that there is no longer a need to place emergency orders for catheters because the SAC has since put in place a purchasing agreement that enables her to place orders directly, without requiring involvement from the contracting office. The supply technician said that, in addition to reducing contracting office workload, this agreement greatly reduced the amount of work required to place an order and allowed her to more effectively maintain her inventory with short and predictable turnaround times.
She also stated that the agreement protected against the frequent price increases she experienced when purchasing the catheters on the open market through the contracting office. The agreement also reduced workload for the local VISN contracting office. In analyzing eCMS data on awards from fiscal years 2014 through 2016, we identified several types of goods and services that were repeatedly purchased on an emergency basis through stand-alone contract actions. This suggests there may be additional opportunities, at both the VISN and national levels, to reduce emergencies by making supplies and services available through more efficient, competitively-awarded contract vehicles. In addition to reducing burden on logistics and contracting staff, reviewing existing spending to find opportunities to leverage buying power is also in line with strategic sourcing best practices. MSPV-NG is one such contracting mechanism for procuring routine supplies, and a more strategic approach to developing requirements for the formulary could help avoid some emergency procurements. Our analysis of VA eCMS data found that many awards designated as emergencies were for medical-surgical items, some of which could likely be purchased through MSPV-NG. Figure 12 shows the number of medical-surgical procurements designated as emergencies within each VISN in fiscal year 2016. Within our sample of 18 contract actions, we found several instances of recurring emergency procurements for medical-surgical supplies, such as custom surgical packs and catheters. Procuring routine supplies on an emergency basis defeats the objectives of MSPV-NG to leverage VA’s large buying power and make the process of ordering supplies more efficient and transparent.
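The kind of data review described above, flagging goods bought repeatedly on an emergency basis in contract-action data, can be sketched in a few lines. The item names, record layout, and threshold below are illustrative assumptions, not the actual eCMS schema:

```python
from collections import Counter

# Illustrative contract actions (item, emergency flag); a real review would
# pull these fields from eCMS records. Items and counts are hypothetical.
actions = [
    ("custom surgical pack", True), ("cardiac catheter", True),
    ("custom surgical pack", True), ("lab reagent", False),
    ("cardiac catheter", True), ("custom surgical pack", True),
]

# Items bought on an emergency basis more than once are candidates for
# the MSPV-NG formulary or other competitively awarded contract vehicles.
emergency_counts = Counter(item for item, is_emergency in actions if is_emergency)
candidates = sorted(item for item, n in emergency_counts.items() if n >= 2)
print(candidates)  # ['cardiac catheter', 'custom surgical pack']
```

A review along these lines, run over actual eCMS records, is what would let VHA or a VISN contracting office spot the recurring emergency purchases discussed in this section.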
However, while data on emergency procurements are available, VHA’s Procurement and Logistics Office does not currently analyze these data to identify items frequently purchased on an emergency basis to determine whether such items could be referred to SAC to be added to the MSPV-NG formulary. In addition, local VISN Network Contracting Offices have also not used available data on emergency purchases to identify items frequently purchased on an emergency basis. Steps by VHA’s Procurement and Logistics Office and individual VISN contracting offices to review such data and identify opportunities for leveraging MSPV-NG or other national contracts could help VA achieve greater efficiency. Purchasing medical supplies through individual emergency contract actions is much less efficient than using MSPV-NG; moreover, by making numerous individual procurements at the local level and not leveraging its aggregate buying power, VA is paying more for items than it needs to.

Any major organizational change requires a solid strategic plan that is communicated to stakeholders, stable leadership, and stakeholder involvement and buy-in. VHA was missing all of these elements when it rolled out the MSPV-NG program, which presented obstacles to effective implementation and buy-in and affected the program’s ability to meet its goals. Moving forward, without an overall strategy that is communicated to all stakeholders and enhanced leadership stability, VHA will likely continue to face these challenges. In addition, in the initial requirements development process, VHA relied on prior purchase data—rather than clinician input—and did not prioritize categories of medical supplies, both of which veered from practices employed by leading hospital networks.
Once the initial formulary was established, medical centers faced challenges matching supply items to the formulary and took varying approaches, in part, due to incomplete guidance on key aspects of the process and frequent changes in the items on the formulary. Providing complete guidance and communicating the criteria and processes for adding or removing items from the formulary would help centers more effectively match items to the formulary, thereby increasing utilization, which as of May 2017 was below VA’s established target. Further, because it does not calculate cost avoidance attributable to MSPV-NG, VA cannot accurately measure the extent to which the program is contributing to its overall cost avoidance goal. VA made changes during the second phase of requirements development, in particular to encourage greater clinician involvement. However, the program faces an unrealistic contracting schedule and has not yet developed a plan for how to manage or mitigate the associated risks. Establishing such a plan is essential for risk mitigation, and supply category prioritization could help VA target those categories most likely to yield cost avoidance. In addition, while the program is planning to involve national clinical program offices to obtain greater clinician buy-in, it has not outlined whether or how it will use input from these groups to prioritize its requirements development efforts. Without such input, VA will continue to face challenges focusing on those supply categories that offer the best opportunity for standardization and cost avoidance. Further, VA is considering another major change in its MSPV program in which the prime vendor may absorb some of the work currently conducted by SAC. 
However, VA may face challenges in this new approach until it addresses the existing shortcomings in the MSPV-NG program, such as the absence of a documented overall strategy, insufficient clinician involvement in the requirements development process, and lack of medical center buy-in. Meanwhile, among the 18 contract actions we reviewed, we found shortcomings in planning and communication that led to medical centers’ overreliance on emergency procurements to obtain routine goods and services—some of which could be made available via MSPV-NG—bypassing effective contracting practices like competition. These emergency procurements can be a particular drain on resources, especially those of contracting officers who must respond immediately to fulfill emergency orders. Identifying opportunities to more strategically purchase frequently needed goods and services—both at the local level and nationwide through the MSPV-NG program—could help address these workforce challenges and minimize costs.

We are making 10 recommendations to VA.

The Director of the MSPV-NG program office should, with input from the Strategic Acquisition Center (SAC), develop, document, and communicate to stakeholders an overarching strategy for the program, including how the program office will prioritize categories of supplies for future phases of requirement development and contracting. (Recommendation 1)

The VHA Chief Procurement and Logistics Officer should take steps to prioritize the hiring of the MSPV-NG program office’s director position on a permanent basis. (Recommendation 2)

The Secretary of Veterans Affairs should assign the role of Chief Acquisition Officer to a non-career employee, in line with statute. (Recommendation 3)

The Director of the MSPV-NG program office should provide complete guidance to medical centers for matching equivalent supply items, which could include defining the roles of clinicians and local Clinical Product Review Committees.
(Recommendation 4)

The Director of the MSPV-NG program office should, with input from SAC, communicate to medical centers the criteria and processes for adding or removing items from the formulary. (Recommendation 5)

The VHA Chief Procurement and Logistics Officer, in coordination with SAC, should calculate cost avoidance achieved by MSPV-NG on an ongoing basis. (Recommendation 6)

The MSPV-NG program office and SAC should establish a plan for how to mitigate the potential risk of gaps in contract coverage while SAC is still working to make competitive Phase 2 awards, which could include prioritizing supply categories that are most likely to yield cost avoidance. (Recommendation 7)

The VHA Chief Procurement and Logistics Officer should use input from national clinical program offices to prioritize its MSPV-NG requirements development and standardization efforts beyond Phase 2 to focus on supply categories that offer the best opportunity for standardization and cost avoidance. (Recommendation 8)

The VHA Chief Procurement and Logistics Officer should direct VISN Network Contracting Offices to work with medical centers to identify any opportunities to more strategically purchase goods and services frequently purchased on an emergency basis, for example, by analyzing existing data. (Recommendation 9)

The VHA Chief Procurement and Logistics Officer should analyze data on items that are frequently purchased on an emergency basis, determine whether such items are suitable to be added to the MSPV-NG formulary, and work with SAC to make any suitable items available via MSPV-NG. (Recommendation 10)

We provided a draft of this report to the Department of Veterans Affairs for review and comment. VA provided written comments on a draft of this report. In its written comments, reprinted in appendix II, VA concurred with all 10 of our recommendations.
In its response to our recommendation that VA assign the role of Chief Acquisition Officer to a non-career employee, as required by statute, VA stated that it is unable to implement the recommendation without congressional action and requested closure of the recommendation. We asked VA officials what congressional action they believe is necessary to follow the recommendation. The officials told us they believe the CAO position should be assigned to an assistant secretary, but that the number of assistant secretaries within VA is limited by statute. We decline to close this recommendation. VA should assign the role of CAO to a non-career employee, as required by statute. If VA maintains its view that it cannot meet this requirement without congressional action, then VA should request the specific congressional action that VA believes is necessary. VA provided technical comments on the draft report, which we incorporated as appropriate.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by email at oakleys@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

You requested that we examine the Department of Veterans Affairs’ (VA) transition to the Medical Surgical Prime Vendor-Next Generation (MSPV-NG) program and the extent to which the department contracts for goods and services on an emergency basis.
This report addresses the extent to which: (1) VA’s implementation of MSPV-NG was effective in meeting program goals, and (2) the Veterans Health Administration (VHA) awards contracts on an emergency basis for routine supplies and ongoing services, and what impact, if any, these awards have on VHA’s acquisition function. To review the extent to which implementation of MSPV-NG was effective, we reviewed policy and guidance related to the program. We obtained and analyzed the MSPV-NG program’s formulary development plan, which explained the program’s rationale for pursuing its initial requirements development approach. We also obtained and reviewed additional program documentation, including communications to medical centers and other stakeholders, briefings, and training and tools provided to medical centers. We interviewed leaders in the VHA Procurement and Logistics Office and Healthcare Commodity Program Executive Office (the program office for MSPV-NG), as well as other staff involved in planning and executing aspects of MSPV-NG. We also interviewed VA’s Chief Acquisition Officer during the development of MSPV-NG, cognizant Office of General Counsel staff, and others regarding the program. We also interviewed supply chain managers from four leading hospital networks regarding their medical supply management practices. We selected the hospital networks because they were identified by an industry study as having leading supply chain practices. During interviews, we asked each of these supply chain managers a standard set of questions about the processes they followed to standardize their hospital networks’ supply chains. VA had used the same industry study and had also identified two of these hospital networks as having leading supply chain practices.
We used this information from the leading hospital networks to compare the key steps—identified by each of the four hospital networks—followed in standardizing their medical supply chains to those steps that VA followed when implementing the MSPV-NG program. We also confirmed these key steps with the leading hospital networks. We conducted site visits at a non-generalizable selection of three Veterans Integrated Service Networks (VISNs), and two medical centers within each selected VISN:

VISN 6: Durham, North Carolina
- Durham, North Carolina VA Medical Center
- Hampton, Virginia VA Medical Center

VISN 8: St. Petersburg, Florida
- Tampa, Florida VA Medical Center
- Gainesville, Florida VA Medical Center

VISN 22: Long Beach, California
- Long Beach, California VA Medical Center
- San Diego, California VA Medical Center

The VISNs were selected primarily based on highest total contract obligations in fiscal years 2014 through 2016 and representation of multiple geographic areas and prime vendor contractors. The first site visit to VISN 22 was also chosen based on the rollout schedule for the graphical user interface, an IT system related to MSPV-NG. The final site visit to VISN 6 was also chosen as the VISN with the highest percentage of contract actions designated as emergencies over the fiscal year 2014 through 2016 period. The selected medical centers in each VISN were chosen based on our review of VA Electronic Contract Management System (eCMS) data on emergency procurements within each VISN (see below) and geographic proximity to the VISN office location. At each selected VISN, we interviewed the Chief Supply Chain Officer and other members of leadership. At medical centers in each selected VISN, we met with the Chief Supply Chain Officer, ordering officers, other logistics staff, clinicians involved in the MSPV-NG transition, and on-site representatives of the prime vendor contractors.
We evaluated MSPV-NG program office status briefings and integrated product team training briefings, which documented the planned role of clinicians in the Phase 2 requirements development process. We interviewed VHA Procurement and Logistics Office leadership, other MSPV-NG program office staff, and integrated product team managers and clinicians about the evolution of the program office’s requirements development approach, including the role of clinicians in preparing item descriptions and evaluating items. Three integrated product teams were selected for interviews based on the number of items they covered, as well as for diversity of types of medical supplies. We also met with members of additional integrated product teams during site visits to the selected medical centers. We obtained and analyzed the Strategic Acquisition Center’s acquisition strategy for MSPV-NG supply contracts and discussed its evolution with the Center’s acquisition staff. We analyzed the MSPV-NG formulary as of January 2017 to determine what acquisition instrument was used to add a particular item to the formulary, how the cumulative total of items by award type changed from fiscal year 2014 to fiscal year 2017, and when certain MSPV-NG items would be removed from the formulary because the underlying acquisition instrument had expired. We also analyzed the contents of the formulary monthly from January to July 2017 to determine the number of items added and deleted each month. We determined that the MSPV-NG formulary data were sufficiently reliable for the purposes of our reporting objectives. For the formulary data, we corroborated the supplier’s name, award number, award type, and the award’s effective and expiration dates with data in the Federal Procurement Data System-Next Generation. We were also able to corroborate the total number of items on the January 2017 MSPV-NG formulary through other documentation, such as program briefings.
To determine the level of discounts obtained by the MSPV-NG program office, we randomly selected 10 limited source blanket purchase agreements. We reviewed each agreement and compared the price for each item on the supplier’s price list with the item’s Federal Supply Schedule price. We obtained and analyzed the current MSPV-NG indefinite delivery, indefinite quantity solicitations and the Defense Logistics Agency’s documentation on distribution and pricing agreements. We also reviewed related prior GAO reports and relevant parts of the Federal Acquisition Regulation. We obtained information on the metrics used by VA to assess the performance of MSPV-NG, primarily the utilization metric, which is calculated by VA based on budget object code spending data from the financial system and MSPV-NG spending data. We obtained data on the performance of the six selected medical centers for May 2017 and July 2017. We also interviewed officials responsible for maintaining this data to gather information on processes, accuracy, and completeness, as well as on planned changes in the metric. We found the utilization metric data to be sufficiently reliable for our purposes. To assess the extent to which VA has awarded contracts on an emergency basis for routine supplies and ongoing services, and the effect on VA’s acquisition function, we obtained and analyzed VA and VHA policy and guidance documents, reviewed relevant parts of the Federal Acquisition Regulation, and reviewed prior GAO reports. We obtained eCMS data for fiscal years 2014 through 2016, and analyzed these data to determine the number of actions designated by customers as emergencies, the percentage of actions designated as emergencies in each VISN, and the total obligations attributed to these actions. We also calculated the number and value of all actions designated as emergencies in selected Product and Service Codes related to medical supplies and services for fiscal year 2016. 
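The discount comparison described above is per-item arithmetic: for each item on a supplier's blanket purchase agreement (BPA) price list, the discount is the difference from the Federal Supply Schedule (FSS) price, expressed as a percentage of the FSS price. A sketch of that calculation (illustrative only; the item names and prices are invented):

```python
def discount_pct(fss_price: float, bpa_price: float) -> float:
    """Percentage discount of the BPA price relative to the FSS price."""
    return round(100.0 * (fss_price - bpa_price) / fss_price, 1)

# Hypothetical prices, in dollars, for three items appearing on both lists.
fss = {"exam gloves": 10.00, "gauze pads": 4.00, "syringes": 25.00}
bpa = {"exam gloves": 9.50, "gauze pads": 4.00, "syringes": 22.50}

discounts = {item: discount_pct(fss[item], bpa[item]) for item in bpa}
print(discounts)  # a 0.0 entry means the agreement obtained no discount off FSS
```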
We determined that these eCMS data were sufficiently reliable for the purposes of determining the extent of emergency procurements by reviewing information on system controls and conducting validation of data, including tracing selected information to source documents for the contracts that we selected. We selected a non-generalizable sample of 18 contracts from the three selected VISNs. The selection was based primarily on: contracts designated by the customer as emergencies in eCMS data; use of the term “emergency” or “urgent” in the description field; high dollar value; and Product and Service Codes for services and medical supplies. We obtained and reviewed the contract files for each of the selected contracts, which are also stored in eCMS, including signed awards, limited competition justifications, work statements, and other documents. We compared key information, such as extent of competition, against data reported in eCMS. We interviewed the requesters—in most cases the contracting officer’s representative—for all selected contracts. We also visited Network Contracting Offices for each of the three selected VISNs and interviewed leadership at each location, as well as the contracting officials responsible for each selected contract. Finally, we met with a Strategic Acquisition Center contracting officer to discuss a related contract award. We conducted this performance audit from November 2016 to November 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Lisa Gardner, Assistant Director; Emily Bond; Matthew T. 
Crosby; Lorraine Ettaro; Michael Grogan; Jeff Hartnett; Katherine Lenane; Teague Lyons; Roxanna Sun; and Colleen Taylor made key contributions to this report.
|
VA medical centers spend hundreds of millions of dollars annually on medical supplies and services. In December 2016, VA instituted a major change in how it purchases medical supplies—the MSPV-NG program—to gain effectiveness and efficiencies. GAO was asked to examine VA's transition to the MSPV-NG program and its use of emergency procurements. This report assesses the extent to which (1) VA's implementation of MSPV-NG was effective in meeting program goals, and (2) VA awards contracts on an emergency basis. GAO analyzed VA's MSPV-NG requirements development and contracting processes, and identified key supply chain practices cited by four leading hospital networks. GAO also reviewed a non-generalizable sample of 18 contracts designated in VA's database as emergency procurements with high dollar values, and met with contracting, logistics, and clinical officials at 6 medical centers, selected based on high dollar contract obligations in fiscal years 2014-2016 and geographic representation. The Department of Veterans Affairs (VA) established the Medical Surgical Prime Vendor-Next Generation (MSPV-NG) program to provide an efficient, cost-effective way for its facilities to order supplies, but its initial implementation was flawed: it lacked an overarching strategy, stable leadership, and a sufficient workforce that could have facilitated medical center buy-in. VA developed requirements for a broad range of MSPV-NG items with limited clinical input. As a result, the program has not met medical centers' needs, and usage remains far below VA's 40 percent target. VA also established cost avoidance as a goal for MSPV-NG, but currently has a metric in place only to measure broader supply chain cost avoidance, not savings specific to MSPV-NG. Also, starting in June 2015, VA planned to award competitive contracts for MSPV-NG items, but instead, 79 percent were added using non-competitive agreements. (See figure.)
This was done primarily to meet VA's December 2016 deadline to establish the formulary, the list of items available for purchase through MSPV-NG. The roll-out of MSPV-NG ran counter to practices of leading hospitals that GAO spoke with, which highlighted key steps, such as prioritizing supply categories and obtaining continuing clinician input to guide decision-making. VA has taken steps to address some deficiencies identified in the first phase of implementation and is considering a new approach for this program. However, until VA addresses the existing shortcomings in the MSPV-NG program, such as the lack of medical center buy-in, it will face challenges in meeting its goals. Medical centers often rely on emergency procurements to obtain routine goods and services—some of which could be made available at lower cost via MSPV-NG. Sixteen of the 18 contracts in GAO's sample were not competed, which puts the government at risk of paying more. For instance, one medical center procured medical gas on an emergency basis through consecutive non-competitive contracts over a 3-year period. VA policy clearly defines emergency actions; however, inefficiencies in planning, funding, and communication at the medical centers contributed to emergency procurements, resulting in the contracting officers quickly awarding contracts with no competition. GAO is making 10 recommendations, including that VA expand clinician input in requirements development, calculate MSPV-NG cost avoidance, establish a plan for awarding future competitive contracts, and identify opportunities to strategically procure supplies and services frequently purchased on an emergency basis. VA agreed with GAO's recommendations.
|
DOD is the largest U.S. federal department and one of the most complex organizations in the world. In support of its military operations, the department manages many interdependent business functions, including logistics management, procurement, health care management, and financial management. DOD relies extensively on IT to support its business functions. According to its IT investment data, the department has 2,097 business system investments. The department’s fiscal year 2018 IT budget request states that DOD plans to spend about $8.7 billion in fiscal year 2018 on its defense business systems. The IT budget organizes investments by mission areas. The four mission areas are enterprise information environment, business, warfighting, and defense intelligence. Figure 1 shows the amount of DOD’s requested fiscal year 2018 IT budget that the department plans to spend on each mission area. The department further organizes its IT budget by segments. For example, the business mission area includes segments such as financial management, health, and human resource management. Figure 2 shows the department’s projected fiscal year 2018 spending for each segment in the business mission area. GAO designated the department’s business systems modernization efforts as high risk in 1995 and has continued to do so in the years since. DOD currently bears responsibility, in whole or in part, for half of the programs (17 of 34 programs) across the federal government that we have designated as high risk. Seven of these areas are specific to the department, and 10 other high-risk areas are shared with other federal agencies. Collectively, these high-risk areas are linked to the department’s ability to perform its overall mission and affect the readiness and capabilities of U.S. military forces. DOD’s business systems modernization is one of the department’s specific high-risk areas and is essential for addressing many of the department’s other high-risk areas.
For example, modernized business systems are integral to the department’s efforts to address its financial and supply chain high-risk areas. Since 2005, we have issued 11 reports in response to mandates directing GAO to assess DOD’s actions to respond to business system modernization provisions contained in Section 2222 of Title 10, United States Code. These reports contained 23 recommendations to help strengthen the department’s management of its business systems. As of September 2017, the department had implemented 13 of the recommendations and 2 had been closed as not implemented. The other 8 recommendations remain open. The 11 reports are listed in appendix II. The NDAA for Fiscal Year 2016 included provisions requiring DOD to perform certain activities aimed at ensuring that its business system investments are managed efficiently and effectively. Specifically, the act established requirements for the department related to issuing policy and guidance for managing defense business systems; developing and maintaining a defense business enterprise architecture; establishing a Defense Business Council to provide advice to the Secretary on managing defense business systems; and obtaining approvals before systems proceed into development (or if no development is required, into production or fielding) and related annual reviews. According to the Joint Explanatory Statement accompanying the NDAA for Fiscal Year 2016, the act revised Section 2222 of Title 10, United States Code, to streamline requirements and clarify the responsibilities of senior officials related to acquiring and managing business systems. Key revisions pertain to: Covered defense business systems. The code previously defined a covered defense business system as a system having a total cost of over $1 million over the period of the future-years defense program. 
As revised, the code now defines a covered defense business system as a system that is expected to have a total amount of budget authority over the period of the current future-years defense program of over $50 million. Priority defense business systems. The act established a new category of system, called a priority defense business system. This is a system that is (1) expected to have a total amount of budget authority of over $250 million over the period of the current future-years defense program, or (2) designated by the DCMO as a priority defense business system based on specific program analyses of factors including complexity, scope, and technical risk, and after notification to Congress of such designation. Thresholds and officials responsible for review and certification of defense business systems. The code previously stated that the DCMO had responsibility for reviewing and certifying all defense business system investments over $1 million over the future-years defense program. The revised code states that, unless otherwise assigned by the Secretary of Defense, military department Chief Management Officers (CMO) are to have approval authority for their covered defense business system investments below $250 million over the future-years defense program. The DCMO is to have approval authority for defense business systems owned by DOD components other than the military departments, systems that will support the business process of more than one military department or other component, and priority defense business systems. Certification requirements. The code previously required that a defense business system program be reviewed and certified, at least annually, on the basis of its compliance with the business enterprise architecture and appropriate business process reengineering.
In addition to these requirements, the revised code requires that the business system program be reviewed and certified on the basis of having valid, achievable requirements and a viable plan for implementing the requirements; having an acquisition strategy designed to eliminate or reduce the need to tailor commercial off-the-shelf systems; and being in compliance with the department’s auditability requirements. DOD Instruction 5000.75: Business Systems Requirements and Acquisition assigns roles and responsibilities for managing defense business system investments. Table 1 identifies the key entities and their responsibilities for managing defense business system investments. DOD has taken steps to address provisions of the NDAA for Fiscal Year 2016 related to defense business system investments. Specifically, as called for in the act, the department has established guidance that addresses most legislative requirements for managing its defense business systems; however, the military departments are still developing guidance to fully address certification requirements for their systems. Further, DOD has developed a business enterprise architecture and is in the process of updating the architecture to improve its content. The department also has a plan to improve the usefulness of the business enterprise architecture; however, the department has not delivered the plan’s intended capabilities. In addition, the department is in the process of updating its IT enterprise architecture; however, it does not have a plan for improving the department’s IT and computing infrastructure for each of the major business processes. Further, the department has not yet demonstrated that the business enterprise architecture and the IT enterprise architecture are integrated. The department fully addressed the act’s requirement related to defense business system oversight.
Specifically, the department’s governance board, called the Defense Business Council, addressed legislative provisions to provide advice to the Secretary of Defense. Lastly, DOD and the military departments did not apply new legislative requirements when certifying business systems for fiscal year 2017. Instead, the DOD DCMO certified the systems in our sample in accordance with the previous fiscal year’s (fiscal year 2016) certification requirements. The NDAA for Fiscal Year 2016 required the Secretary of Defense to issue guidance by December 31, 2016, to provide for the coordination of, and decision making for, the planning, programming, and control of investments in covered defense business systems. The act required this guidance to address six elements:

- Policy to ensure DOD business processes are continuously reviewed and revised to implement the most streamlined and efficient business processes practicable and eliminate or reduce the need to tailor commercial off-the-shelf systems to meet or incorporate requirements or interfaces that are unique to the department.
- A process to establish requirements for covered defense business systems.
- Mechanisms for planning and controlling investments in covered defense business systems, including a process for the collection and review of programming and budgeting information for covered defense business systems.
- Policy requiring the periodic review of covered defense business systems that have been fully deployed, by portfolio, to ensure that investments in such portfolios are appropriate.
- Policy to ensure full consideration of sustainability and technological refreshment requirements, and the appropriate use of open architectures.
- Policy to ensure that best acquisition and systems engineering practices are used in the procurement and deployment of commercial systems, modified commercial systems, and defense-unique systems to meet DOD missions.
Of these six elements called for by the act, the department has issued guidance that fully addresses four elements and partially addresses two elements. Table 2 summarizes our assessment of DOD’s guidance relative to the act’s requirements. DOD fully addressed the element requiring policy to ensure that the business processes of the department are continuously reviewed and revised. For example, DOD Instruction 5000.75 requires the functional sponsor of a defense business system to engage in continuous process improvement throughout all phases of the business capability acquisition cycle. The department also fully addressed the element to provide a process for establishing requirements for covered defense business systems with DOD Instruction 5000.75, which introduces the business capability acquisition cycle for business system requirements and acquisition. In addition, DOD fully addressed the element to provide mechanisms for planning and controlling investments in covered defense business systems. Specifically, the department’s Financial Management Regulation; Directive 7045.14 on its planning, programming, budgeting, and execution process; and the April 2017 Defense Business System Investment Management Guidance provide such mechanisms. For example, the April 2017 investment management guidance includes a process, called the integrated business framework, which the department is to follow for selecting, managing, and evaluating the results of investments in defense business systems. In addition, the directive assigns the DOD CIO responsibility for participating in the department’s annual resource allocation process and for advising the Secretary and Deputy Secretary of Defense on IT resource allocations and investment decisions. Further, DOD fully addressed the requirement for a policy requiring the periodic review of covered business systems that have been fully deployed, by portfolio, to ensure that investments in such portfolios are appropriate.
Specifically, the department’s April 2017 Defense Business System Investment Management Guidance requires the department to annually review an organization’s plan for managing its portfolio of defense business systems over the period of the current future-years defense program (e.g., Army’s plan for its financial management systems) to ensure, among other things, that the portfolio is aligned with applicable functional strategies (e.g., DOD’s strategy for its financial management functional area). DOD partially addressed the element requiring policy to ensure full consideration of sustainability and technological refreshment requirements, and the appropriate use of open architectures. Specifically, the department established policy requiring consideration of open architectures, but it has not established policy requiring consideration of sustainability and technological refreshment requirements. The Office of the DCMO stated that future guidance is expected to provide a policy to ensure full consideration of sustainability and technological refreshment requirements. However, the department could not provide a time frame for when the guidance will be developed and issued. Without a policy requiring full consideration of sustainability and technological refreshment requirements for its defense business system investments, the department may not be able to ensure that it has a full understanding of the costs associated with these requirements. As a result, the department may not be able to effectively manage spending on these systems. DOD has also partially addressed the element requiring policy to ensure that best acquisition and systems engineering practices are used in the procurement and deployment of commercial, modified-commercial, and defense-unique systems. 
Specifically, the department has established policy requiring the acquisition of business systems to be aligned with commercial best practices and to minimize the need for customization of commercial products to the maximum extent possible. On the other hand, the department has not established policy to ensure the use of best systems engineering practices. With regard to this finding, officials in the Office of the DCMO asserted that DOD Instruction 5000.75 addresses the requirement. However, while the instruction requires the system acquisition strategy to include a description of how the program plans to leverage systems engineering, it does not require the use of best systems engineering practices. Without a policy requiring the use of best systems engineering practices in the procurement and deployment of commercial, modified, and defense-unique systems, the department may be limited in its ability to effectively balance meeting system cost and performance objectives. In addition to guidance for addressing the aforementioned legislative requirements for business systems management, the NDAA for Fiscal Year 2016 requires the Secretary to direct the DCMO and the CMO of each of the military departments to issue and maintain supporting guidance, as appropriate and within their respective areas of responsibility. In this regard, one of the key areas for which the DCMO and military department CMOs are to provide supporting guidance is the review and certification of defense business systems in accordance with specific requirements.
Specifically, the act requires that, for any fiscal year in which funds are expended for development or sustainment pursuant to a covered defense business system program, the appropriate approval official is to review the system to determine if the system:

- has been, or is being, reengineered to be as streamlined and efficient as practicable, and whether the implementation of the system will maximize the elimination of unique software requirements and unique interfaces;
- is in compliance with the business enterprise architecture or will be in compliance as a result of planned modifications;
- has valid, achievable requirements, and a viable plan for implementing those requirements (including, as appropriate, market research, business process reengineering, and prototyping activities);
- has an acquisition strategy designed to eliminate or reduce the need to tailor commercial off-the-shelf systems to meet unique requirements, incorporate unique requirements, or incorporate unique interfaces to the maximum extent practicable; and
- is in compliance with the department’s auditability requirements.

The act and DOD Instruction 5000.75 define the systems that the DOD DCMO is responsible for certifying and the systems that military department CMOs are responsible for certifying. Consistent with the act, in April 2017, the DCMO issued guidance for certifying officials that addresses the certification requirements. Table 3 provides our rating and assessment of the DCMO’s guidance for implementing defense business system certification requirements. By establishing guidance requiring that defense business systems be certified on the basis of the legislative requirements, the department is better positioned to ensure that a covered system does not proceed into development (or, if no development is required, into production or fielding) without the appropriate due diligence.
Further, the department has taken steps which should help ensure that funds are limited to systems in development or sustainment that meet these requirements. The military departments have made mixed progress in developing supporting guidance to assist in making certification decisions regarding systems within their respective areas of responsibility. More specifically, the Air Force has issued supporting guidance that addresses three of the act’s five certification requirements, but does not address the remaining two requirements. Navy has issued guidance that addresses two of the certification requirements, partially addresses one requirement, and does not address two requirements. The Army has not yet issued guidance on any of the five certification requirements. Table 4 provides an overview of our assessment of the Air Force’s, Navy’s, and Army’s guidance relative to the NDAA for Fiscal Year 2016 certification requirements. Each department’s efforts are further discussed following the table. Air Force. In April 2017, the Department of the Air Force issued guidance for certifying business systems for fiscal year 2018. The guidance addresses the requirements that a system be certified on the basis of sufficient business process reengineering, business enterprise architecture compliance, and valid requirements and a viable plan to implement them. Specifically, the guidance requires Air Force core defense business systems to comply with the business process reengineering guidance prescribed in the DCMO’s February 2015 Defense Business Systems Investment Management Process Guidance and to assert compliance with the architecture through DCMO’s Integrated Business Framework—Data Alignment Portal. In addition, the guidance states that the department must follow DOD Instruction 5000.75, which requires that certifying officials determine that business requirements are valid and capability efforts have feasible implementation plans.
However, the Air Force guidance does not address the remaining two certification requirements. Officials in the office of the Air Force DCMO acknowledged that the Air Force’s business system certification guidance does not address determining whether the acquisition strategy is designed to eliminate or reduce, to the maximum extent practicable, the need to tailor commercial off-the-shelf systems to meet unique requirements, incorporate unique requirements, or incorporate unique interfaces, or whether the system is in compliance with DOD’s auditability requirements. In May 2017, Air Force DCMO officials stated that the department was in the process of developing guidance. However, as of December 2017, the Air Force had not described specific plans to update its business system certification guidance. Navy. The Department of the Navy issued guidance in May 2016. This guidance addresses the requirements that a system be certified on the basis of sufficient business process reengineering and business enterprise architecture compliance. In this regard, the guidance provides guidelines for documenting business process reengineering and requires verification that business process reengineering is complete. The guidance also specifies that defense business systems are to map alignment with the business enterprise architecture in DCMO’s Integrated Business Framework–Data Alignment Portal. Navy’s guidance partially addresses the certification requirement for determining if a defense business system has valid requirements and a viable plan to implement them. Specifically, the guidance includes information on validating requirements; however, it does not include information on determining if a system has a viable plan to implement the requirements.
In addition, Navy’s guidance does not address the remaining two certification requirements, which are to determine that the covered defense business system has an acquisition strategy that eliminates or reduces the need to tailor commercial off-the-shelf systems, and that the system is in compliance with DOD’s auditability requirements. In August 2017, officials in the Office of the Under Secretary of the Navy (Management) stated that the office was in the process of updating its May 2016 Defense Business System Investment Certification Manual. The officials stated that the goal is to issue interim investment certification guidance by May 2018. As of November 2017, however, Navy had not established a plan for when it expects to publish finalized certification guidance. Army. The Department of the Army has not issued guidance that addresses any of the act’s certification requirements. The Army issued a template that was to be used to develop fiscal year 2018 portfolio review submissions. However, the template does not address any of the certification requirements. Officials in the Army’s Office of Business Transformation explained that the Army used DOD DCMO’s 2014 guidance to certify its business systems for fiscal year 2017. In May 2017, they stated that the Army was in the process of developing guidance to implement DOD’s new instruction. In November 2017, an official in the Army’s Office of Business Transformation stated that the office was in the process of completing the guidance and aimed to provide it to the Deputy Under Secretary’s office for signature in January 2018. However, the department has not committed to a specific time frame for when the new guidance is expected to be issued.
Without guidance for the certification authority to determine that defense business systems have addressed each of the act’s certification requirements, the Air Force, Navy, and Army risk allowing systems to proceed into development or production that do not meet these requirements. In particular, the military departments risk wasting funds on developing and maintaining systems that do not have valid requirements and a viable plan to implement the requirements, introduce unnecessary complexity, or that do not adequately support the Department of Defense’s efforts to meet its auditability requirements. According to the NDAA for Fiscal Year 2016, DOD is to develop and maintain a defense business enterprise architecture to guide the development of integrated business processes within the department. In addition, the act states that the business architecture must be consistent with the policies and procedures established by the Director of the Office of Management and Budget. Among other things, OMB policy calls for agencies to develop an enterprise architecture that describes the current architecture, target architecture, and a transition plan to get to the target architecture. The act also calls for the business architecture to contain specific content, including policies, procedures, business data standards, business information requirements, and business performance measures that are to apply uniformly throughout the department. DOD has developed a business enterprise architecture that is intended to help guide the development of its defense business systems. The department issued version 10 of the business architecture, which is currently being used to support system certification decisions, in February 2013. The business architecture and related documentation include content describing aspects of the current architecture, target architecture, and a transition plan to get to the target architecture. 
In addition, the business architecture includes content that addresses the act’s requirements. Table 5 provides examples of required content in DOD’s business enterprise architecture. Nevertheless, some content included in version 10 of the business architecture is outdated and incomplete. For example, version 10 of the business architecture’s repository of laws, regulations, and policies was last updated in February 2013, and officials in the Office of the DOD CIO and Office of the DCMO confirmed that they are not current. Further, the department’s March 2017 business architecture compliance guidance stated that not all relevant business data standards are identified in the business architecture. In addition, based on our review, information about performance measures documented in the architecture is incomplete. For example, target values for performance measures associated with acquisition and logistics initiatives are not identified. According to officials in the Office of the DCMO, the department is working to update the business architecture. Specifically, the department has developed version 11 of the business architecture to, in part, replace outdated architecture content. According to the officials, version 11 of the architecture is currently available online, but version 10 remains the official version of the business enterprise architecture used for system certification decisions. The officials stated that the department continues to add content to version 11, and they expect that it will be used as the basis of system certification decisions for fiscal year 2019. In addition, DOD has ongoing work to address a key recommendation we made in July 2015 associated with improving the usefulness of its business architecture. In particular, we reported that the majority of military department portfolio managers that we surveyed believed that the business architecture had not been effective in meeting intended outcomes. 
For example, only 25 percent of the survey respondents reported that the business architecture effectively enabled DOD to routinely produce timely, accurate, and reliable business and financial information for management purposes. In addition, only 38 percent reported that the business architecture effectively guided the implementation of interoperable defense business systems. As a result, we reported that the architecture had produced limited value and recommended that the department use the results of our survey to determine additional actions that can improve the department’s management of its business enterprise architecture activities. In response to our recommendation, DOD identified opportunities to address our survey findings and developed a plan for improving its ability to achieve architecture-related outcomes. DOD’s business enterprise architecture improvement plan was signed by the Assistant DCMO in January 2017. However, the department has not yet demonstrated that it has delivered the capabilities described by the plan; thus, we will continue to monitor DOD’s progress to fully address this recommendation. In addition to the business enterprise architecture, according to the act, the DOD CIO is to develop an IT enterprise architecture. This architecture is to describe a plan for improving the IT and computing infrastructure of the department, including for each of the major business processes. Officials in the Office of the DOD CIO stated that the department considers its information enterprise architecture to be its IT enterprise architecture. The DOD CIO approved version 2.0 of its information enterprise architecture in August 2012. 
According to DOD documentation, this architecture describes the department’s current information enterprise (i.e., information resources, assets, and processes used to share information across the department and with its mission partners) and includes a vision for the target information enterprise; documents required capabilities, and the activities, rules, and services needed to provide them; and includes information for applying and complying with the architecture. Nevertheless, while the architecture includes content describing the department’s current and target information enterprise, which is consistent with OMB guidance, it does not include a transition plan that provides a road map for improving the department’s IT and computing infrastructure. Related to this finding, DCMO officials did not agree with our assessment concerning the department’s IT enterprise architecture transition plan. In this regard, officials in the Office of the DCMO stated that the department’s DOD IT Portfolio Repository includes information for managing efforts to improve IT and computing infrastructure at the system level. According to the repository’s data dictionary, this information can include system life cycle start and end dates, as well as information that supports planning for a target environment. However, documentation describing DOD’s information enterprise architecture does not identify the DOD IT Portfolio Repository as being part of the architecture. Moreover, it does not include a plan for improving the department’s IT and computing infrastructure for each of the major business processes. Officials in the Office of the CIO acknowledged that the architecture does not include such plans. According to the officials, the department is currently developing version 3.0 of its information enterprise architecture (i.e., its IT enterprise architecture). 
The officials stated that the department does not currently intend for the architecture to include a plan for improving the department’s IT and computing infrastructure that addresses each of the major business processes. They added, however, that there is an effort to ensure that functional areas, such as human resources management, are included. DCMO officials stated that the department has not defined how the DOD IT enterprise architecture needs to be segmented for each major business process because the infrastructure requirements seem to be similar for each of the processes. Without an architecture that includes a plan for improving its IT and computing infrastructure, including for each of the major business processes, DOD risks not ensuring that stakeholders across the department have a consistent understanding of the steps needed to achieve the department’s future vision, agency priorities, potential dependencies among investments, and emerging and available technological opportunities. According to the act, the DOD business enterprise architecture is to be integrated into the DOD IT enterprise architecture. The department’s business architecture compliance guide also recognizes that the business architecture is to be integrated with the IT enterprise architecture. However, the department has not demonstrated that it has integrated the business enterprise architecture into the information enterprise architecture. Specifically, the department did not provide documentation associated with either architecture that describes how the two are, or are to be, integrated. The business enterprise architecture compliance guide states that DOD Directive 8000.01 implements the requirement that the two architectures are to be integrated. However, the directive does not address how they are, or are to be, integrated. 
Officials in the Offices of the CIO and the DCMO described steps they were taking to coordinate the development of the next versions of the information enterprise architecture (i.e., IT enterprise architecture) and business enterprise architecture. However, these steps were not sufficient to help ensure integration of the two architectures. Specifically, in June 2017, officials in the Office of the DOD CIO stated they were participating in the development of the next version of the business architecture and that the DOD CIO is represented on the Business Enterprise Architecture Configuration Control Board. Officials in the Office of the DCMO confirmed that DOD CIO officials participate on the board. However, officials from the Office of the DCMO said that, until it met in June 2017, the board had not met since 2014. Moreover, documentation of the June 2017 meeting, and a subsequent November 2017 meeting, did not indicate that the board members had discussed integration of the department’s business and information enterprise architectures. In addition, officials in the Office of the DCMO reported that the office has not actively participated in the information enterprise architecture working group. Further, our review of meeting minutes from this working group did not identify participation by officials in the Office of the DCMO, or that integration of the architectures was discussed. The Office of the DCMO described other mechanisms for sharing information about architectures with the Office of the DOD CIO. For example, the Office of the DCMO stated that it participates in DOD CIO bodies governing version 3.0 development. Nevertheless, the Office of the DCMO reiterated that technical integration of the architectures has not been designed.
Until DOD ensures that its business architecture is integrated into its IT enterprise architecture, the department may not be able to ensure that its business strategies capitalize on technologies and that its IT infrastructure will support DOD’s business priorities and related business strategies. The NDAA for Fiscal Year 2016 requires the Secretary to establish a Defense Business Council, chaired by the DCMO and the DOD CIO, to provide advice to the Secretary on: developing the business enterprise architecture, reengineering the department’s business processes, developing and deploying business systems, and developing requirements for business systems. DOD established the department’s Defense Business Council in October 2012, prior to the act. According to its current charter, dated December 2014, the Council is co-chaired by the DCMO and the DOD CIO. In addition, the Council is to serve as the principal governance body for vetting issues related to managing and improving defense business operations. Among other things, it serves as the investment review board for defense business system investments. The Defense Business Council charter also states that the Council was established as a principal supporting tier of governance to the Deputy’s Management Action Group. The Deputy’s Management Action Group was established by an October 2011 memorandum issued by the Deputy Secretary of Defense. According to information published on DCMO’s website, the group was established to be the primary civilian-military management forum that supports the Secretary of Defense, and is to address top department issues that have resource, management, and broad strategic and/or policy implications. The group’s primary mission is to produce advice for the Deputy Secretary of Defense in a collaborative environment and to ensure that the group’s execution aligns with the Secretary of Defense’s priorities. 
According to the Office of the DCMO, the Defense Business Council determines whether or not to elevate a topic to the Deputy’s Management Action Group to address on behalf of the Secretary. Based on our review of meeting documentation for 27 meetings that the Defense Business Council held between January 2016 and August 2017, the Council discussed the four topics on which the NDAA for Fiscal Year 2016 requires it to provide advice to the Secretary. According to the Office of the DCMO, during the discussions of these topics, the Council did not identify any issues related to the topics that needed to be elevated to the Deputy’s Management Action Group. Table 6 identifies the number of meetings in which the Council discussed each topic during this time period. By ensuring that the required business system topics are discussed during Defense Business Council meetings, the department should be positioned to raise issues to the Deputy’s Management Action Group, and ultimately, to advise the Secretary of Defense on matters associated with these topics. The NDAA for Fiscal Year 2016 requires that, for any fiscal year in which funds are expended for development or sustainment pursuant to a covered defense business system program, the Secretary of Defense is to ensure that a covered business system not proceed into development (or, if no development is required, into production or fielding) unless the appropriate approval official reviews the system to determine if the system meets five key requirements, as previously discussed in this report. In addition, the act requires that the appropriate approval official certify, certify with conditions, or decline to certify that the system satisfies these five requirements. The department issued DOD Instruction 5000.75, which established business system categories and assigned certifying officials, consistent with the act. 
Table 7 describes the business system categories and the assigned certifying officials, as defined in DOD Instruction 5000.75. The DOD DCMO certified the five systems in our sample (which included the military departments’ systems) for fiscal year 2017. However, these certifications were issued in accordance with the previous fiscal year’s (fiscal year 2016) certification requirements. Those requirements had stipulated that a defense business system program was to be reviewed and certified on the basis of the system’s compliance with the business enterprise architecture and appropriate business process reengineering, rather than on the basis of having met all five requirements identified in the NDAA for Fiscal Year 2016. Specifically, DCMO certified the systems on the basis of determining that the systems were in compliance with the business enterprise architecture and had been sufficiently reengineered. However, none of the systems were certified on the basis of a determination that they had valid, achievable requirements and a viable plan for implementing them; had an acquisition strategy to reduce or eliminate the need to tailor commercial off-the-shelf systems; or were in compliance with the department’s auditability requirements. Officials in the Offices of the DOD DCMO, the Air Force DCMO, the Under Secretary of the Navy (Management), and Army Business Transformation told us that the systems were not certified relative to three of the requirements because the department did not issue guidance to reflect changes made by the NDAA for Fiscal Year 2016 in time for the fiscal year 2017 certification process. Prior to the NDAA for Fiscal Year 2016, relevant legislation and DOD guidance only called for annual determinations to be made regarding whether a system complied with the business enterprise architecture and whether appropriate business process reengineering had been conducted. 
In January 2016, the DCMO issued a memorandum stating that the department planned to issue new guidance and policy to implement the new legislation by the end of February 2016. However, the department did not issue additional guidance addressing the new certification requirements until April 2017. The system certifications, which were required by the act to be completed before systems could spend fiscal year 2017 funds, occurred in August and September 2016. In explaining the delay in issuing new guidance on the certification requirements, officials in the Office of the DCMO stated that the statutory deadline for issuing guidance was December 31, 2016. They added that, given this statutory deadline, and the start of fiscal year 2017 on October 1, 2016, it was their determination that Congress did not intend for the NDAA for Fiscal Year 2016’s certification requirements to be fully implemented before fiscal year 2017 started. DCMO officials stated that they intend for the department to use the certification requirements established by the NDAA for Fiscal Year 2016 for future system certifications. While it was reasonable for the department to use the earlier guidance for its fiscal year 2017 certifications, given that the new guidance had not yet been issued, it will be important going forward that the department certifies business systems on the basis of the certification requirements established in the NDAA for Fiscal Year 2016 and its related guidance addressing these requirements. Certifying systems on the basis of the act’s requirements should help ensure that funds are not wasted on developing and maintaining systems that do not have valid requirements and a viable plan to implement the requirements, that introduce unnecessary complexity, or that impede the Department of Defense’s efforts to meet its auditability requirements. 
Since the NDAA for Fiscal Year 2016 was signed in November 2015, DOD has issued guidance that addresses most provisions of the NDAA for Fiscal Year 2016 related to managing defense business system investments. However, the department has not established policies requiring consideration of sustainability and technology requirements and the use of best systems engineering practices in the procurement and deployment of its systems. Having these policies would better enable the department to ensure it is efficiently and effectively procuring and deploying its business systems. In addition, the Air Force, the Army, and the Navy have made mixed progress in issuing guidance to assist in making certification decisions regarding systems within their respective areas of responsibility. Specifically, the Air Force and the Navy issued guidance on the certification of business systems that does not fully address new certification requirements, while the Army has not issued any updated guidance for its certifications. As a result, the Air Force, Navy, and Army risk wasting funds on developing and maintaining systems that do not have valid requirements and a viable plan to implement the requirements, that introduce unnecessary complexity, or that do not adequately support the Department of Defense’s efforts to meet its auditability requirements. Also, DOD has developed an IT architecture, but this architecture does not address the act’s requirement that it include a plan for improving the department’s IT and computing infrastructure, including for each business process. In addition, DOD’s plans for updating its IT architecture do not address how the department intends to integrate its business and IT architectures, as called for by the act. As a result, DOD risks not having a consistent understanding of what is needed to achieve the department’s future vision, agency priorities, potential dependencies among investments, and emerging and available technological opportunities.
We are making six recommendations, including three to the Secretary of Defense and one to each of the Secretaries of the Air Force, the Navy, and the Army:

The Secretary of Defense should define a specific time frame for finalizing, and ensure the issuance of (1) policy requiring full consideration of sustainability and technological refreshment requirements for its defense business system investments; and (2) policy requiring that best systems engineering practices are used in the procurement and deployment of commercial systems, modified commercial systems, and defense-unique systems to meet DOD missions. (Recommendation 1)

The Secretary of the Air Force should define a specific time frame for finalizing, and ensure the issuance of guidance for certifying the department’s business systems on the basis of (1) having an acquisition strategy designed to eliminate or reduce the need to tailor commercial off-the-shelf systems to meet unique requirements, incorporate unique requirements, or incorporate unique interfaces to the maximum extent practicable; and (2) being in compliance with DOD’s auditability requirements. (Recommendation 2)

The Secretary of the Navy should define a specific time frame for finalizing, and ensure the issuance of guidance for certifying the department’s business systems on the basis of (1) having a viable plan to implement the system’s requirements; (2) having an acquisition strategy designed to eliminate or reduce the need to tailor commercial off-the-shelf systems to meet unique requirements, incorporate unique requirements, or incorporate unique interfaces to the maximum extent practicable; and (3) being in compliance with DOD’s auditability requirements.
(Recommendation 3)

The Secretary of the Army should define a specific time frame for finalizing, and ensure the issuance of guidance for certifying the department’s business systems on the basis of (1) being reengineered to be as streamlined and efficient as practicable, and determining that implementation of the system will maximize the elimination of unique software requirements and unique interfaces; (2) being in compliance with the business enterprise architecture; (3) having valid, achievable requirements and a viable plan to implement the requirements; (4) having an acquisition strategy designed to eliminate or reduce the need to tailor commercial off-the-shelf systems to meet unique requirements, incorporate unique requirements, or incorporate unique interfaces to the maximum extent practicable; and (5) being in compliance with DOD’s auditability requirements. (Recommendation 4)

The Secretary of Defense should ensure that the DOD CIO develops an IT enterprise architecture which includes a transition plan that provides a road map for improving the department’s IT and computing infrastructure, including for each of its business processes. (Recommendation 5)

The Secretary of Defense should ensure that the DOD CIO and Chief Management Officer work together to define a specific time frame for when the department plans to integrate its business and IT architectures and ensure that the architectures are integrated. (Recommendation 6)

DOD provided written comments on a draft of this report, which are reprinted in appendix III. In the comments, the department stated that it concurred with three of the recommendations and partially concurred with three of the recommendations. DOD also provided evidence that it has fully addressed one of the recommendations. In addition, DOD provided technical comments that we incorporated in the report, as appropriate.
DOD stated that it concurred with our first recommendation, which called for it to define a specific time frame for finalizing, and ensure the issuance of, policies that fully address provisions in the NDAA for Fiscal Year 2016. Furthermore, the department stated that it had complied with the recommendation. Specifically, the department stated that it had published its defense business systems investment management guidance in April 2017. This guidance identifies DOD’s Financial Management Regulation, Volume 2B, Chapter 18 “Information Technology” and supporting IT budget policy and guidance as well as DOD Instruction 5000.75 and supporting acquisition policy and guidance. The department stated that the Financial Management Regulation specifically addresses sustainability and technological refreshment requirements for its defense business system investments. While DOD reported taking this action, we do not agree that the department has complied with our recommendation. In reviewing the department’s guidance, we found that none of the cited management documents includes a policy requiring consideration of sustainability and technological refreshment requirements for DOD’s defense business systems. Further, none of these documents includes a policy requiring that best systems engineering practices be used in the procurement and deployment of commercial, modified commercial, and defense-unique systems. Without a policy requiring full consideration of sustainability and technological refreshment requirements for its defense business system investments, the department may not be able to ensure that it has a full understanding of the costs associated with these requirements. Further, without a policy requiring the use of best systems engineering practices in systems procurement and deployment, the department may be limited in its ability to effectively balance meeting system cost and performance objectives.
Accordingly, we continue to believe that our recommendation is valid. The department concurred with our second recommendation, that the Secretary of the Air Force define a specific time frame for finalizing, and ensure the issuance of, guidance that fully addresses certification requirements, in accordance with the NDAA for Fiscal Year 2016. Moreover, the department stated that the Air Force has complied with the recommendation. Specifically, DOD stated that Air Force Manual 63-144 details the consideration of using existing commercial solutions without modification or tailoring. However, while the manual provides a foundation on which the Air Force can build, it is not sufficient to fully address our recommendation because it does not include guidance on certifying business systems on the basis of having an acquisition strategy that eliminates or reduces the need to tailor commercial-off-the-shelf systems. In addition, the department did not demonstrate that the Air Force has issued guidance for certifying business systems on the basis of being in compliance with DOD’s auditability requirements. Rather, the Air Force stated that it has pending guidance that addresses the acquisition strategy and auditability requirements. We plan to evaluate the guidance to determine the extent to which it addresses our recommendation after it is issued. The department partially agreed with our third recommendation, that the Secretary of the Navy define a specific time frame for finalizing, and ensure the issuance of, guidance that fully addresses certification requirements. Specifically, DOD stated that Navy agreed to issue guidance. Subsequently, on March 8, 2018, Navy issued its updated guidance. However, Navy disagreed with the recommendation, as written, and suggested that GAO revise the recommendation to state that “The Secretary of the Navy should ensure guidance is issued according to established timeline for certifying the department’s business systems. . 
.” According to Navy, this change would support alignment with the timeline for certifying the department’s business systems driven by the Chief Management Officer investment review timeline. Based on our analysis, we found the guidance that Navy issued to be consistent with our recommendation. Thus, we plan to close the recommendation as fully implemented. We have also annotated this report, where appropriate, to explain that the Navy issued guidance while the draft of this report was at the department for comment. On the other hand, we did not revise the wording of our recommendation, as we believe it appropriately reflected the importance of Navy taking action to ensure the issuance of its guidance. The department stated that it concurred with our fourth recommendation, which called for the Secretary of the Army to define a specific time frame for finalizing, and ensure the issuance of, guidance for certifying the department’s business systems on the basis of the certification requirements. Furthermore, on March 23, 2018, the Army issued its guidance. However, because of the timing of this report relative to when the Army provided its guidance to us (on March 27, 2018), we have not yet completed an assessment of the guidance. We have annotated this report, where appropriate, to reflect the Army’s action on our recommendation. The department stated that it partially concurred with our fifth recommendation. This recommendation called for the DOD CIO to develop an IT enterprise architecture which includes a transition plan that provides a road map for improving the department’s IT and computing infrastructure, including for each of its business processes. Toward this end, the department agreed that the DOD CIO should develop an architecture that enables improving the department’s IT and computing infrastructure for each of its business processes. 
However, the department also stated that the recommendation is not needed because the goal is already being accomplished by a set of processes, organizations, protocols, and architecture data. For example, the department described processes and relationships between the Office of the DOD CIO and the Office of the Chief Management Officer and the boards that support the department’s business and IT enterprise architectures. In particular, the department stated that information enterprise architecture data relevant to the business enterprise are accessed via the DOD Information Enterprise Architecture Data Selection Wizard and imported into the business enterprise architecture. The department further stated that, if the business capability acquisition cycle process indicates a need to improve the IT or computing infrastructure, the Office of the Chief Management Officer has a protocol to initiate a proposal to change the information enterprise architecture. We agree that the department’s processes, organizations, protocols, and architecture data are keys to successful IT management. However, during the course of our audit, we found that documentation describing DOD’s IT architecture did not include a plan for improving the department’s IT and computing infrastructure for each of the major business processes. Moreover, officials in the Office of the CIO acknowledged that the architecture did not include such a plan. Without a transition plan that provides a road map for improving the department’s IT and computing infrastructure, including for each of its business processes, it will be difficult for the department’s personnel to manage and direct, in a timely and proactive manner, modernization efforts of the magnitude that DOD is undertaking.
Further, without such a plan, DOD risks not being able to ensure that stakeholders across the department have a consistent understanding of the steps needed to achieve the department’s future vision, agency priorities, potential dependencies among investments, and emerging and available technological opportunities. Thus, we maintain that the department should fully implement our recommendation. The department stated that it partially concurred with our sixth recommendation, that the DOD CIO and DCMO work together to define a specific time frame for when the department plans to integrate its business and IT architectures. In particular, the department stated that it agrees that the DOD CIO and Chief Management Officer should work together to establish a time frame and ensure coordination and consistency of the IT and business architectures. However, the department disagreed with the use and intent of the term “integrate,” as stated in the recommendation, although it did not explain the reason for this disagreement. Instead, it proposed that we change our recommendation to read “The GAO recommends the Secretary of Defense ensure the DoD CIO and CMO work together to define a specific timeline for coordinating its business and IT architectures to achieve better enterprise alignment among the architectures.” We agree that it is important to achieve coordination and consistency between the business and IT architectures. However, the department did not provide documentation associated with either architecture that describes how the two are, or are to be, integrated, as called for by the NDAA for Fiscal Year 2016 and DOD guidance. Integrating the architectures would help ensure that business strategies better capitalize on existing and planned technologies and that IT solutions and infrastructure support business priorities and related business strategies. Thus, we continue to believe that our recommendation is valid. 
However, we have updated the recommendation to state that the DOD CIO and the Chief Management Officer should work together. We made this change because, effective February 1, 2018, the Secretary of Defense eliminated the DCMO position and expanded the role of the Chief Management Officer, in accordance with the National Defense Authorization Act for Fiscal Year 2018. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; and the Director of the Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-4456 or harriscc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix IV.

Our objective was to determine the actions taken by the Department of Defense (DOD) to comply with provisions included in the National Defense Authorization Act for Fiscal Year 2016 (NDAA). These provisions require DOD to perform certain activities aimed at ensuring that its business system investments are managed efficiently and effectively. Specifically, we determined to what extent DOD has
1. established guidance for effectively managing its defense business system investments;
2. developed and maintained a defense business enterprise architecture and information technology (IT) enterprise architecture, in accordance with relevant laws and Office of Management and Budget (OMB) policies and guidance;
3. used the Defense Business Council to provide advice to the Secretary on developing the business enterprise architecture, reengineering the department’s business processes, developing and deploying business systems, and developing requirements for business systems; and
4.
ensured that covered business systems are reviewed and certified in accordance with the act.

To address the extent to which DOD has established guidance for effectively managing defense business system investments, we obtained and analyzed the department’s guidance, as well as the guidance established by the Departments of the Air Force, Army, and Navy, for managing defense business systems relative to the act’s requirements. Specifically, the NDAA for Fiscal Year 2016 required the Secretary of Defense to issue guidance, by December 31, 2016, to provide for the coordination of and decision making for the planning, programming, and control of investments in covered defense business systems. The act required this guidance to include the following six elements:
- Policy to ensure DOD business processes are continuously reviewed and revised to implement the most streamlined and efficient business processes practicable and eliminate or reduce the need to tailor commercial off-the-shelf systems to meet or incorporate requirements or interfaces that are unique to the department.
- Process to establish requirements for covered defense business systems.
- Mechanisms for planning and controlling investments in covered defense business systems, including a process for the collection and review of programming and budgeting information for covered defense business systems.
- Policy requiring the periodic review of covered defense business systems that have been fully deployed, by portfolio, to ensure that investments in such portfolios are appropriate.
- Policy to ensure full consideration of sustainability and technological refreshment requirements, and the appropriate use of open architectures.
- Policy to ensure that best acquisition and systems engineering practices are used in the procurement and deployment of commercial systems, modified commercial systems, and defense-unique systems to meet DOD missions.
We assessed the February 2017 DOD Instruction 5000.75, Business Systems Requirements and Acquisitions, and April 2017 defense business system investment management guidance, which the department issued to address the act’s requirements. In addition, we assessed the department’s Financial Management Regulation and directive on its planning, programming, budgeting, and execution process, which the department stated also address the act’s provisions. We also assessed DOD’s guidance for managing business system investments relative to the act’s business system certification requirements. The act requires that the Secretary of Defense ensure that a covered defense business system not proceed into development (or, if no development is required, into production or fielding) unless the appropriate approval official determines that the system meets five requirements. The act further requires that, for any fiscal year in which funds are expended for development or sustainment under a covered defense business system program, the appropriate approval official review the system to determine whether it:
- has been, or is being, reengineered to be as streamlined and efficient as practicable, and whether the implementation of the system will maximize the elimination of unique software requirements and unique interfaces;
- is in compliance with the business enterprise architecture or will be in compliance as a result of planned modifications;
- has valid, achievable requirements, and a viable plan for implementing those requirements (including, as appropriate, market research, business process reengineering, and prototyping activities);
- has an acquisition strategy designed to eliminate or reduce the need to tailor commercial off-the-shelf systems to meet unique requirements, incorporate unique requirements, or incorporate unique interfaces to the maximum extent practicable; and
- is in compliance with the department’s auditability requirements.
We compared Office of the Deputy Chief Management Office (DCMO) certification guidance with the act’s certification requirements. In addition, we compared the guidance established by the Departments of the Air Force, the Army, and the Navy for certifying their business systems with the act’s certification requirements. We also interviewed cognizant officials responsible for managing defense business system investments at DOD, including the military departments. Specifically, we interviewed officials in the Office of the DCMO, the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, the Office of the Chief Information Officer (CIO), and the Offices of the CMOs in the Departments of the Air Force, Army, and Navy. To determine the extent to which DOD has developed and maintained a defense business enterprise architecture and IT enterprise architecture, in accordance with relevant laws and OMB policy and guidance, we assessed the business enterprise architecture against the relevant laws and OMB policy and guidance; the IT enterprise architecture against the relevant laws and OMB policy and guidance; and the department’s efforts to integrate its business and IT architectures against the act’s requirement. To determine the extent to which the department has developed and maintained a business enterprise architecture in accordance with relevant laws and OMB policy and guidance, we reviewed version 10 of its business enterprise architecture, which was released in February 2013, and related information relative to the act’s requirements; U.S. Code, Title 44, Section 3601, which defines an enterprise architecture; and OMB policy and guidance. We also reviewed version 11 of the architecture to determine the extent to which it differed from version 10. Further, we reviewed the department’s business enterprise architecture improvement plan, which it developed in response to a recommendation we made in July 2015. 
Specifically, we recommended that the department use the results of our portfolio manager survey to determine additional actions that could improve the department’s management of its enterprise architecture activities. In response to our recommendation, the department developed and approved a plan in January 2017. We assessed the extent to which the department had delivered the planned capabilities relative to the plan. We also reviewed the extent to which the delivery dates of the three planned capabilities and associated tasks changed over time relative to the plan. To assess the extent to which the department developed and maintained an IT enterprise architecture in accordance with relevant laws and OMB policy and guidance, we reviewed content from the department’s IT enterprise architecture and compared it with requirements from the act, U.S. Code, Title 44, Section 3601, and OMB policy and guidance. Specifically, we reviewed version 2.0 of the department’s information enterprise architecture, which was released in August 2012, relative to the act’s requirement for the DOD CIO to develop an IT enterprise architecture that is to describe a plan for improving the IT and computing infrastructure of the department, including for each of the major business processes. We reviewed volumes I and II of the information enterprise architecture and the four enterprise-wide reference architectures to determine if the architecture described a plan for improving the IT and computing infrastructure of the department, as called for by the act. We also reviewed whether the architecture included content that described the current and the target environments, and a transition plan to get from the current to the target environment, consistent with OMB policy and guidance. 
To determine the extent to which the department has integrated its business and IT architectures, as required by the act, we reviewed DOD Directive 8000.01, Management of the Department of Defense Information Enterprise. We also reviewed meeting documentation from the information enterprise architecture working group responsible for the development of an updated architecture. In addition, we reviewed meeting documentation from the Business Enterprise Architecture Configuration Control Board to identify any discussions among CIO and DCMO officials regarding integration of the two architectures, as well as the level of participation by both parties. Finally, we interviewed officials in the Office of the DCMO and the Office of the CIO about efforts to develop and maintain a business enterprise architecture, develop an IT enterprise architecture, and integrate the business and IT architectures. To determine the extent to which the department has used the Defense Business Council to provide advice to the Secretary of Defense on developing the business enterprise architecture, reengineering the department’s business processes, developing and deploying business systems, and developing requirements for business systems, in accordance with the act, we analyzed the department’s December 2014 Defense Business Council Charter and April 2017 defense business systems investment management guidance. We compared information in the charter and guidance to the requirement that the Secretary establish the Defense Business Council to advise the Secretary on the required defense business system topics. In addition, we obtained and analyzed meeting summaries and briefings for 27 Defense Business Council meetings that took place from January 2016 through August 2017. Specifically, we assessed the frequency with which the meetings held during this time period addressed the required topics. 
We chose this time period because 2016 was the first calendar year following the enactment of the NDAA for Fiscal Year 2016. Further, we chose August 2017 as our end date because it was the last month’s data that we could reasonably expect to obtain and review within our reporting time frame. We also interviewed officials in the Offices of the DCMO and CIO about the Defense Business Council and the Deputy’s Management Action Group, which is the governance entity to which the Council reports. To determine the extent to which DOD has ensured that covered business systems are reviewed and certified in accordance with the act, we reviewed a nongeneralizable sample of business systems from DOD’s two categories of covered defense business systems that require certification. To select the sample, we considered Category I systems, which were expected to have a total amount of budget authority of more than $250 million over the period of the current future-years defense program, and Category II systems, which were expected to have a total amount of budget authority of between $50 million and $250 million over that period. We further categorized the Category II systems into four groups—those owned by the Air Force, the Army, the Navy, and the remaining DOD components. We selected one system with the highest expected cost over the course of the department’s future-years defense program from each group. This resulted in our selection of five systems: one Category I system, one Category II system from each military department, and one Category II system from the remaining DOD components. We reviewed, respectively, DOD’s Healthcare Management System Modernization Program; Air Force’s Maintenance, Repair and Overhaul initiative; Army’s Reserve Component Automation System; Navy’s Electronic Procurement System; and the Defense Logistics Agency’s Defense Agencies Initiative Increment 2.
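The sample-selection procedure described above amounts to a simple categorize-group-and-take-maximum operation. The sketch below is an illustrative reconstruction of that logic only, not GAO's actual tooling; the system names and cost figures in it are hypothetical.

```python
# Illustrative sketch of the sample-selection logic: Category I systems
# (projected budget authority over $250M across the future-years defense
# program) form one pool; Category II systems ($50M-$250M) are split by
# owner (Air Force, Army, Navy, other DOD components). The highest-cost
# system in each pool is selected. All data here are hypothetical.

def categorize(cost_millions):
    """Return the covered-system category for a projected cost, or None."""
    if cost_millions > 250:
        return "Category I"
    if 50 <= cost_millions <= 250:
        return "Category II"
    return None  # below the certification thresholds described in the act

def select_sample(systems):
    """Pick the highest-cost system from each selection group."""
    groups = {}
    for name, owner, cost in systems:
        category = categorize(cost)
        if category is None:
            continue
        # Category I is a single pool; Category II is grouped by owner.
        key = category if category == "Category I" else (category, owner)
        best = groups.get(key)
        if best is None or cost > best[2]:
            groups[key] = (name, owner, cost)
    return sorted(name for name, _, _ in groups.values())

# Hypothetical inventory: (system name, owner, projected cost in $M)
inventory = [
    ("System A", "Army", 400),       # Category I
    ("System B", "Air Force", 120),  # Category II, Air Force
    ("System C", "Air Force", 90),   # lower cost than System B, not picked
    ("System D", "Army", 200),       # Category II, Army
    ("System E", "Navy", 75),        # Category II, Navy
    ("System F", "DLA", 60),         # Category II, other DOD component
    ("System G", "Navy", 30),        # below threshold, excluded
]

print(select_sample(inventory))
# → ['System A', 'System B', 'System D', 'System E', 'System F']
```

The result is one Category I system plus one Category II system per group, mirroring the five-system sample the report describes.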
We determined that the number of systems we selected was sufficient for our evaluation. For each system, we assessed the extent to which it had been certified on the basis of the five certification requirements in the act. Specifically, we evaluated investment decision memos and certification assertions to determine if each system had been certified according to the act’s requirements, which include ensuring that the system:
- had been, or was being, reengineered to be as streamlined and efficient as practicable, and the implementation of the system would maximize the elimination of unique software requirements and unique interfaces;
- was in compliance with the business enterprise architecture or would be in compliance as a result of planned modifications;
- had valid, achievable requirements, and a viable plan for implementing those requirements;
- had an acquisition strategy designed to eliminate or reduce the need to tailor commercial off-the-shelf systems to meet unique requirements, incorporate unique requirements, or incorporate unique interfaces to the maximum extent practicable; and
- was in compliance with the department’s auditability requirements.
We did not determine whether the certification assertions were valid. For example, we did not evaluate business process reengineering activities to determine if they were sufficient. We also interviewed DOD DCMO and military department officials about the certification of these systems. To determine the reliability of the business system cost data used to select the systems, we reviewed system documentation for the three systems DOD uses to store data, which include the Defense Information Technology Investment Portal, the DOD Information Technology Portfolio Repository, and the Select and Native Programming-Information Technology system. In this regard, we requested and reviewed department responses to questions about the systems and about how the department ensures the quality and reliability of the data.
In addition, we requested and reviewed documentation related to the systems (e.g., data dictionaries, system instructions, and user training manuals) and reviewed the data for obvious issues, including missing or questionable values. We also reviewed available reports on the quality of the inventories (e.g., inspector general reports). We found the data to be sufficiently reliable for our purpose of selecting systems for evaluation. We conducted this performance audit from January 2017 to March 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Since 2005, we have issued 11 reports assessing DOD’s actions to respond to business system modernization provisions contained in U.S. Code, Title 10, Section 2222. The reports are listed below.
DOD Business Systems Modernization: Additional Action Needed to Achieve Intended Outcomes, GAO-15-627 (Washington, D.C.: July 16, 2015).
Defense Business Systems: Further Refinements Needed to Guide the Investment Management Process, GAO-14-486 (Washington, D.C.: May 12, 2014).
DOD Business Systems Modernization: Further Actions Needed to Address Challenges and Improve Accountability, GAO-13-557 (Washington, D.C.: May 17, 2013).
DOD Business Systems Modernization: Governance Mechanisms for Implementing Management Controls Need to Be Improved, GAO-12-685 (Washington, D.C.: June 1, 2012).
Department of Defense: Further Actions Needed to Institutionalize Key Business System Modernization Management Controls, GAO-11-684 (Washington, D.C.: June 29, 2011).
Business Systems Modernization: Scope and Content of DOD’s Congressional Report and Executive Oversight of Investments Need to Improve, GAO-10-663 (Washington, D.C.: May 24, 2010).
DOD Business Systems Modernization: Recent Slowdown in Institutionalizing Key Management Controls Needs to Be Addressed, GAO-09-586 (Washington, D.C.: May 18, 2009).
DOD Business Systems Modernization: Progress in Establishing Corporate Management Controls Needs to Be Replicated Within Military Departments, GAO-08-705 (Washington, D.C.: May 15, 2008).
DOD Business Systems Modernization: Progress Continues to Be Made in Establishing Corporate Management Controls, but Further Steps Are Needed, GAO-07-733 (Washington, D.C.: May 14, 2007).
Business Systems Modernization: DOD Continues to Improve Institutional Approach, but Further Steps Needed, GAO-06-658 (Washington, D.C.: May 15, 2006).
DOD Business Systems Modernization: Important Progress Made in Establishing Foundational Architecture Products and Investment Management Practices, but Much Work Remains, GAO-06-219 (Washington, D.C.: November 23, 2005).
In addition to the contact above, individuals making contributions to this report include Michael Holland (Assistant Director), Cheryl Dottermusch (Analyst in Charge), John Bailey, Chris Businsky, Camille Chaires, Nancy Glover, James Houtz, Anh Le, Tyler Mountjoy, Monica Perez-Nelson, Priscilla Smith, and Adam Vodraska.
DOD spends billions of dollars each year on systems that support its key business areas, such as personnel and logistics. For fiscal year 2018, DOD reported that these business system investments are expected to cost about $8.7 billion. The NDAA for Fiscal Year 2016 requires DOD to perform activities aimed at ensuring that business system investments are managed efficiently and effectively, to include taking steps to limit their complexity and cost. The NDAA also includes a provision for GAO to report every 2 years on the extent to which DOD is complying with the act's provisions on business systems. For this report, GAO assessed, among other things, the department's guidance for managing defense business system investments and its business and IT enterprise architectures (i.e., descriptions of DOD's current and future business and IT environments and plans for transitioning to future environments). To do so, GAO compared the department's system certification guidance and architectures to the act's requirements. GAO also interviewed cognizant DOD officials. The Department of Defense (DOD) has made progress in complying with most legislative provisions for managing its defense business systems, but additional actions are needed. For example, the National Defense Authorization Act (NDAA) for Fiscal Year 2016 required DOD and the military departments to issue guidance to address five requirements for reviewing and certifying the department's business systems. While DOD has issued guidance addressing all of these requirements, as of February 2018, the military departments had shown mixed progress.
[Table of military departments' progress; key:]
● Fully addressed: The department provided evidence that it fully addressed this requirement.
◐ Partially addressed: The department provided evidence that it addressed some, but not all, portions of this requirement.
◌ Not addressed: The department did not provide any evidence that it addressed this requirement.
Source: GAO analysis of Department of Defense documentation.
The military departments' officials described plans to address the gaps in their guidance; however, none had defined when planned actions are to be completed. Without guidance that addresses all five requirements, the military departments risk developing systems that, among other things, are overly complex and costly to maintain. DOD has efforts underway to improve its business enterprise architecture, but its information technology (IT) architecture is not complete. Specifically, DOD's business architecture includes content called for by the act. However, efforts to improve this architecture to enable the department to better achieve outcomes described by the act, such as routinely producing reliable business and financial information for management, continue to be in progress. In addition, DOD is updating its IT enterprise architecture, which describes, among other things, the department's computing infrastructure. However, the architecture lacks a road map for improving the department's IT and computing infrastructure for each of the major business processes. Moreover, the business and IT enterprise architectures have yet to be integrated, and DOD has not established a time frame for when it intends to do so. As a result, DOD lacks assurance that its IT infrastructure will support the department's business priorities and related business strategies. GAO is making six recommendations, including that DOD and the military departments establish time frames for, and issue, required guidance; and that DOD develop a complete IT architecture and integrate its business and IT architectures. DOD concurred with three and partially concurred with three recommendations. GAO continues to believe all of the recommendations are warranted as discussed in this report.
USDA’s APHIS is responsible for implementing the Animal Welfare Act. The act and its implementing regulations govern, among other things, how federal and nonfederal research facilities must treat particular species of warm-blooded animals to ensure their humane treatment when used in research, teaching, testing, or experimentation. The Animal Welfare Act’s definition of “animal” excludes birds, rats of the genus Rattus, and mice of the genus Mus when those animals are bred for use in research. The act also excludes horses not used for research purposes and other farm animals used or intended for use as food or fiber or in certain types of research. The Animal Welfare Act also excludes cold-blooded animals—such as fish, reptiles, or amphibians—and invertebrates. See table 1 for a summary of the animals covered and not covered by the Animal Welfare Act. (Animals covered by the Health Research Extension Act are also included in table 1 and described in the next section.) The Animal Welfare Act and its regulations contain specific standards for research facilities. These include:
Registration. Nonfederal research facilities that conduct activities regulated by the Animal Welfare Act must register with APHIS. The act does not require that federal research facilities register with APHIS. APHIS does, however, assign federal research facilities certificate numbers that it uses to track whether they have submitted their required annual report (see below). As of March 2018, APHIS had assigned such numbers to 157 federal research facilities. Some of these federal research facilities, such as VA, have elected to report information to APHIS on an individual basis, while others, such as HHS’s Centers for Disease Control and Prevention, submit a single report covering research facilities in several states.
Annual report.
Reporting facilities that used or intended to use live animals in research, tests, experiments, or for teaching must submit a retrospective annual report about those animals to APHIS on or before December 1 of each calendar year.
Standards for humane handling, care, treatment, and transportation of animals. The Animal Welfare Act directs research facilities to meet certain standards of care for the animal species that are covered by the act. The standards of care are tailored to particular species of animals or groups of species.
Institutional Animal Care and Use Committees. Research facilities must appoint a committee to, at least semi-annually, review the facility’s program for humane care and use of animals, inspect all facilities, and prepare reports of its evaluation. The committee is responsible for reviewing research proposals to determine whether the proposed activities are in accordance with the act or there is an acceptable justification for a departure from the act.
Federal inspections. APHIS officials have the authority to inspect nonfederal research facilities, records, and animals to enforce the provisions of the act. The Animal Welfare Act does not expressly provide APHIS the authority to inspect federal research facilities, and APHIS will not do so unless invited.
The Animal Welfare Act exempts farm animals, other than horses, from its coverage when they are used or intended for use as food or fiber or in agricultural research that is intended to improve animal nutrition, breeding, management, or production efficiency, or to improve the quality of food or fiber. According to officials with USDA’s Agricultural Research Service (ARS), most of the agency’s research activities fall under this exemption. Nevertheless, in February 2016, APHIS and ARS signed a memorandum of understanding concerning laboratory animal welfare.
The intent of the memorandum of understanding is to maintain and enhance agency effectiveness and avoid duplication by allowing APHIS to use applicable sections of the Animal Welfare Act’s requirements, regulations, and standards to inspect ARS animal research facilities. Among the provisions of the memorandum, ARS agreed to register its animal research facilities with APHIS and submit an annual report to APHIS. As of March 2018, 35 ARS animal research facilities were voluntarily registered with APHIS, and ARS facilities submitted their first annual reports for activities conducted in fiscal year 2016. NIH, within the Department of Health and Human Services, administers the Health Research Extension Act. The act calls for the Director of NIH to establish guidelines that govern how certain research institutions that conduct activities using animals are to consider animal welfare. In particular, the guidelines govern how those research institutions—including federal facilities—that receive funding from Public Health Service agencies are to ensure the humane treatment of all vertebrate animals used in biomedical or behavioral science research. NIH conducts site visits at selected institutions to assess compliance with the act. Whereas the Animal Welfare Act applies to certain warm-blooded animals, the definition of animals used for the purposes of the Health Research Extension Act covers all vertebrates, including mice, rats, and fish species that are commonly used in laboratory research (see table 1). Under the act, research institutions are required to provide certain information to NIH in order to be eligible for Public Health Service funding. In particular, they must provide for NIH approval a document that describes their animal care and use program and that assures that the facility meets applicable standards.
NIH calls for research institutions to provide, among other information, a commitment to comply with all applicable provisions of the Animal Welfare Act and other federal statutes and regulations relating to animals, a description of the facility, and an “average daily inventory” of species housed at the facility. In addition, research institutions approved for Public Health Service funding must annually report changes in their animal use program to NIH. As of September 2017, NIH had approved 111 federal facilities across 8 agencies for funding under the act. As directed by the regulations implementing the Animal Welfare Act, the 10 agencies we reviewed submitted to APHIS the required annual reports on their use of animals covered by the act from fiscal years 2014 through 2016. However, APHIS’s reporting instructions have not ensured consistent and complete reporting because they have been unclear about which animal species, activities, and activity locations are required to be reported for the purposes of the Animal Welfare Act. Federal facilities that conduct activities with animals using Public Health Service funding that we reviewed met NIH requirements to provide assurance documentation about their animal use programs and to provide required annual reports for fiscal years 2014 through 2016. The Animal Welfare Act regulations require federal agencies that use or intend to use live animals in research to report on their use of these animals. As directed by APHIS, these agencies, or their individual research facilities, must submit an annual report to APHIS on or before December 1 of each calendar year. 
APHIS instructs research facilities to submit an annual report that:
- includes information about animals covered by the Animal Welfare Act’s regulations and the number of such animals used, as well as those held for use but not used; and
- provides assurances that the facility has met applicable standards, such as standards for the appropriate use of anesthetic, analgesic, and tranquilizing drugs.
In addition, facilities must report whether the animals fall into one of three categories related to pain or distress and the efforts the facilities took to relieve pain or distress. Facilities must also attach a summary of any activity that did not meet the standards of the act but that was approved by the facility’s Institutional Animal Care and Use Committee. All 10 of the federal agencies we reviewed submitted annual reports to APHIS showing that their facilities had used animals in research in fiscal years 2014 through 2016. APHIS has procedures in place to track which agencies’ facilities have reported and to notify any that have not done so. For example, APHIS has developed schedules for sending reminders to facilities that have not yet reported. APHIS expects federal research facilities to which it has assigned certificate numbers but that did not use any animals in a particular fiscal year to submit a report with that information. APHIS data show that the 10 federal agencies in our review reported that their facilities used more than 210,000 animals covered by the Animal Welfare Act in fiscal years 2014 through 2016. However, in our comparison of federal agencies’ annual reports to APHIS with their responses to our request for information about their activities, we found instances in which agencies did not report activities covered by the act or did not report similar activities consistently across facilities.
These conditions resulted, in part, from APHIS not providing sufficient instructions on the research activities that federal agencies are to include in their annual reports. Additionally, we found that facilities reported species not covered by the act. As a result, the data that research facilities submit to APHIS in their annual reports may not accurately reflect the facilities’ uses of animals covered by the act. We identified three areas in which federal agencies’ annual reports were inconsistent or incomplete: birds, animal use outside the United States, and field studies.
The Animal Welfare Act and birds
Animal Welfare Act: The term animal excludes birds bred for use in research.
APHIS’s 2017 instructions for completing the annual report: “Do NOT report the use of … birds, reptiles, fish or other animals which are exempt from the regulation under the Act.”
In 2002, Congress amended the definition of animal in the Animal Welfare Act to exclude birds that are bred for use in research. However, APHIS instructs facilities to not report any birds in their annual reports, regardless of whether they were bred for research. Five agencies reported to us that their research facilities used birds in fiscal years 2014 through 2016—including some not bred for research and therefore potentially covered by the act—but that they followed APHIS’s instructions to not report them. According to APHIS officials, since Congress amended the definition of animal in the act, the agency has been aware of the need to define which birds are covered by the act and should, among other things, be reported to APHIS by research facilities. The officials said that until the agency has defined birds covered by the act, they do not believe that it is appropriate to require research facilities to report their use of birds. However, as of February 2018, APHIS had not provided us with a schedule or plan for defining birds covered by the act or for developing reporting requirements for those birds.
As a result, it is unclear when, or if, APHIS will require research facilities to report their use and treatment in research of birds that are covered by the Animal Welfare Act. Until APHIS develops such requirements, federal (and other) research facilities will have incomplete information about what information they should include in annual reports submitted to APHIS, and APHIS will not have assurance that annual reports from research facilities fully reflect research activities covered by the act.
The Animal Welfare Act and reporting facilities
Animal Welfare Act regulations: “The reporting facility shall be that segment of the research facility, or that department, agency, or instrumentality of the United States, that uses or intends to use live animals in research, tests, experiments, or for teaching.”
APHIS’s 2017 instructions for completing the annual report: The instructions do not instruct federal research facilities to report activities involving animal use outside the United States.
The Animal Welfare Act regulations define a reporting facility to include a department, agency, or instrumentality of the United States. Officials from USDA’s Office of the General Counsel told us that there is no exclusion in the act or its regulations for federal research facilities that are located outside of the United States. However, APHIS does not instruct federal research facilities to report activities involving animal use outside the United States. Of the 10 agencies with federal research facilities that submitted annual reports to APHIS, we identified three, through our initial contacts and follow-up interviews, that conduct activities outside the United States involving animals that may be covered by the Animal Welfare Act: the Departments of Commerce and Defense and the Smithsonian Institution. We found that officials from the three agencies had a different understanding of their obligation to report those activities to APHIS.
A senior official from the Department of Commerce’s National Marine Fisheries Service said that he knew of no reason to not report on studies conducted outside the United States and that the agency had reported such activities in fiscal year 2017. On the other hand, officials from the Department of Defense and the Smithsonian Institution told us that APHIS officials have instructed them not to report activities conducted outside of the United States. As a result, the Department of Defense and the Smithsonian Institution did not report animal use in their non-domestic facilities in fiscal years 2014 through 2016. With instructions from APHIS that federal research agencies report all activities covered by the Animal Welfare Act, regardless of location, APHIS and the public would have greater assurance that annual reports fully reflect activities covered by the act and that agencies are reporting such activities consistently.
The Animal Welfare Act and field studies
Animal Welfare Act regulations: “Field study means a study conducted on free-living wild animals in their natural habitat. However, this term excludes any study that involves an invasive procedure, harms, or materially alters the behavior of an animal under study.”
APHIS’s 2017 instructions for completing the annual report: APHIS’s instructions do not sufficiently clarify the conditions under which a field study would be invasive, harmful, or materially alter behavior and, therefore, be covered under the act.
APHIS exempts some research involving wild animals from the requirements of the Animal Welfare Act regulations, including annual reporting.
Specifically, in promulgating the current definition of "field studies" in regulation, APHIS stated, "if the research project meets the definition of field studies, the research project would not fall under the regulation." To qualify for this exemption, a study must take place in a free-living, wild animal's natural habitat and must not involve an invasive procedure, harm, or materially alter the behavior of an animal under study. APHIS's instructions for annual reporting note this exemption. However, they do not sufficiently clarify the conditions under which a field study would qualify, nor do they point to any source providing clarifying language. For example, the instructions do not describe criteria research facilities could use to identify activities that are invasive, harmful, or materially alter behavior. We found that agencies have interpreted the field study exemption differently. For example:

Officials from three agencies within the Department of the Interior told us that the agencies did field research with many species in fiscal years 2014 through 2016, but we found that the agencies had different approaches to reporting that research to APHIS. Specifically, the U.S. Geological Survey and the National Park Service reported using dozens of animal species to APHIS, while the Fish and Wildlife Service did not report any. An official with the Fish and Wildlife Service explained to us that the agency did not report the animals to APHIS because they were only held temporarily. Officials from the Fish and Wildlife Service and the U.S. Geological Survey told us that APHIS's guidance on field studies is confusing and causes discrepancies in reporting.

NASA conducts research involving temporary capture, blood sampling, and tagging of animals to study any possible effects of NASA's launch sites on the surrounding ecosystem, but the agency does not include these activities in its annual reports to APHIS.
The National Marine Fisheries Service also conducts field research involving temporary capture, blood sampling, and tagging of marine mammals for various purposes. Some of the service's research facilities have reported these types of activities to APHIS, and according to a service official, the other facilities plan to do so. An official from the service also told us that the agency has received inconsistent guidance from APHIS about what field research to report. The National Marine Fisheries Service's facilities that have reported animal research to APHIS have represented a large portion of the overall number of animals that federal facilities reported in fiscal years 2014 through 2016. For example, in fiscal year 2016, the agency's facilities accounted for nearly 16,000 of about 82,000 animals reported to APHIS by the 10 federal agencies in our review. Therefore, whether these activities are or are not reported will have a large effect on the total number of animals that federal facilities report using for research. APHIS officials told us that they are developing additional clarifying guidance on field studies and will publish the guidance for public comment in the third quarter of fiscal year 2018. However, APHIS has not yet released a draft of this guidance. A draft with criteria for identifying which field studies are covered by the Animal Welfare Act and therefore should be reported—for example, because the studies are considered to be invasive, harmful, or materially alter behavior—would enable APHIS to ensure that the research community's views are incorporated. With clearer instructions that include such criteria, APHIS and the public would have greater assurance that annual reports fully reflect activities covered by the act.

NIH has provided guidance to federal and nonfederal research facilities about what they are required to report on their animal use, and the federal facilities we reviewed met those requirements.
In order to obtain funding from the Public Health Service agencies, research facilities must obtain NIH's approval of their animal welfare assurance statement and must provide annual reports to NIH. To obtain an approved assurance, a research facility must provide NIH with information about its animal care and use program. NIH provides facilities with a sample assurance document that describes the required information, including assurances of compliance with animal welfare standards signed by appropriate officials, a roster of Institutional Animal Care and Use Committee membership, an average daily census of animals, and other information. NIH's approval of an animal care program lasts up to 5 years, and according to NIH officials, the agency typically begins its review of a renewal after 4 years. To help facilities meet the annual report requirement, NIH provides a sample annual-reporting document that directs research facilities to update the animal care and use committee's roster, note any change in accreditation from the private accreditation organization AAALAC International, and describe any significant changes in their animal care program, such as the species or number of animals maintained in housing. NIH officials told us the purpose of the assurances is to ensure that the proper facilities and procedures are in place to properly care for the animals and that NIH does not use them as a public reporting tool.

Health Research Extension Act of 1985 (excerpt): "…animal care committees at each entity which conducts biomedical and behavioral research with funds provided under this Act (including the National Institutes of Health and the national research institutes) to assure compliance with the guidelines established [by the Director of NIH]."

NIH has procedures to ensure that facilities that seek to receive funding from Public Health Service agencies have animal care programs with active assurances.
NIH provided us with its data for tracking which facilities were receiving Public Health Service funding and which facilities had approved programs. As of November 2017, according to NIH data, all of the federal facilities receiving funding from Public Health Service agencies for activities involving animals had an active assurance. Using a sample of 16 assurances from federal facilities, we found that these assurances contained information called for by NIH, including signatures from institutional officials, rosters of Institutional Animal Care and Use Committees, and animal inventories. NIH data show that all assured facilities submitted annual reports in calendar years 2014, 2015, and 2016.

APHIS and NIH publicly report some information about federal agencies' use of research animals. Although the Animal Welfare Act does not require APHIS to share this information, APHIS posts the following on its website:

Annual reports from research facilities. Research facilities' annual reports include data on the species and numbers of animals held and used for research, categorized by the steps taken to minimize pain and distress to the animal. The annual reports also include the facility's explanation of any exceptions to the Animal Welfare Act's standards and regulations during the reporting year. As of April 2018, APHIS's website included research facilities' annual reports from fiscal years 1999 through 2017.

National summaries of the annual reports. APHIS prepares national summaries using the annual reports submitted by research facilities. APHIS's annual national-summary reports include data provided by research facilities on species and numbers of animals, categorized by state and by the steps taken to minimize pain and distress to the animal. As of March 2018, APHIS's website had national summary reports for fiscal years 2008 through 2016. The national summaries do not categorize the data by types of facilities, such as federal or nonfederal research facilities.
Reports of APHIS inspections. The APHIS inspection reports—typically of nonfederal facilities—could contain such information as descriptions of noncompliance, the number of animals involved in noncompliance, a correction deadline and a description of what should be done to correct the problem, and the date of the inspection. As of March 2018, APHIS's website contained reports of inspections at three federal facilities, including a zoo and an aquarium. This number does not include ARS research facilities, which APHIS inspects as part of its 2016 memorandum of understanding with ARS. As of March 2018, APHIS's website contained inspection reports for 19 ARS research facilities.

USDA's Chief Information Officer has provided guidance directing the department's agencies and offices to strive to ensure and maximize, among other things, the objectivity of information disseminated to the public. To ensure objectivity, the guidance directs that USDA agencies and offices ensure that the information they disseminate is presented in an accurate, clear, complete, and unbiased manner. APHIS has not fully implemented this guidance for the animal use data it shares publicly. In particular, APHIS does not explain on its website potential limitations related to the accuracy and completeness of the annual reports that it provides to the public or of the national summaries of the annual reports that APHIS prepares. For example, APHIS does not explain that research facilities' annual reports may contain data on animals used for activities that are not covered by the Animal Welfare Act regulations, such as excluded field studies. Additionally, APHIS does not explain that the annual reports do not include birds not bred for research—and consequently covered by the Animal Welfare Act—because APHIS has instructed facilities not to report any birds. Furthermore, APHIS does not explain that it does not validate the accuracy and completeness of agencies' reporting.
In particular, APHIS officials told us that they have the opportunity to validate reporting when they inspect nonfederal facilities but do not have the authority to inspect federal research facilities unless invited to do so. Some stakeholders responded to our survey that they use the data that APHIS reports on animal use to identify trends and practices within the research community. By fully implementing USDA guidance and explaining what the data represent and possible issues with their quality, APHIS could have more assurance that it is providing these data to users in a manner that is as accurate, clear, complete, and unbiased as possible. Users could then be better equipped to properly analyze or assess the quality of the data, interpret the annual reports, and draw conclusions based on these data.

NIH posts a list of federal and nonfederal facilities with active assurances on its website. The Health Research Extension Act does not require NIH to make such information available through a public website, but NIH policy directs the agency to provide to Public Health Service agencies a list of facilities with such assurances. The list includes facilities that receive Public Health Service funding and facilities that have voluntarily requested NIH's review and approval of their programs. Our review did not identify federal facilities that were missing from or incorrectly included in NIH's posted list of assured facilities. NIH does not regularly post other information from research facilities' assurance documents, such as the facilities' average daily inventory of animals, the date they obtained an assurance, or the date they submitted their most recent annual report. Therefore, we did not review in detail the information that agencies provide to NIH to determine its accuracy. Federal agencies may have additional information about their animal use programs.
However, stakeholders who responded to our survey had differing views about whether federal agencies should proactively and routinely make more information on animal use available to the public, through their websites or other means, beyond the data that APHIS and NIH currently provide. Stakeholders other than animal advocacy organizations—including federal agencies, research organizations, academia, and others—generally expressed the view that federal agencies should not routinely make additional information available to the public, citing reasons including the existence of other methods to obtain this information and administrative burden. In contrast, stakeholders from animal advocacy organizations cited, among other reasons, the need for more transparency and oversight as reasons that federal agencies should make additional information routinely available to the public. (See app. III for more information about stakeholders' responses to our questions.) More specifically, we asked stakeholders to provide their views on whether federal agencies should proactively and routinely report certain types of information to the public. We selected 10 types of information for stakeholders to consider, including some types of information that federal agencies may have for internal purposes and, in some instances, may provide to other agencies or organizations but that neither they nor others are required to proactively share with the public. The types of information we asked stakeholders to consider included data on vertebrate animals that are not covered by the Animal Welfare Act, internal or external inspection reports, and general descriptions of agencies' animal use programs. See table 2 for the complete list of types of information we asked stakeholders to consider.
For stakeholder groups that generally expressed the view that federal agencies should not make additional information available to the public on a proactive and routine basis, one of the most frequently cited reasons was that the public could obtain this information through other publicly available means. For example, several stakeholders said that agencies' reports of noncompliance to APHIS or NIH and data on resource expenditures are already available via FOIA. One federal stakeholder said that it provides the public with information about the nature and extent of field research when it is required by the Marine Mammal Protection Act of 1972 or the Endangered Species Act of 1973 to obtain permits; the permitting processes include public notice and comment. In addition, some stakeholders said that certain types of information, such as the identity of the species used and the purpose and expected benefit of specific research projects, are already published in peer-reviewed journals that are accessible to the public. Several stakeholders also responded that providing additional information would impose an administrative burden on agencies. For example, several stakeholders said that any potential public benefit from the additional information shared with the public would not justify the effort to collect and share the information, and one stakeholder said that providing certain types of information would reduce the time they have to do actual research. In addition, one stakeholder said that a requirement to make additional information available to the public would be in direct conflict with a 2016 law that directed NIH, the Food and Drug Administration, and USDA to look for ways to reduce administrative burdens associated with animal welfare regulations.
Other, less frequently cited reasons that stakeholders gave for believing that agencies should not proactively and routinely share additional information with the public included the following:

Certain information, such as expenditures on animal use, could be difficult to collect from disparate sources. For example, one federal agency said that much of its animal use funding is allocated in different areas of research and that it would need guidance to collect data on expenditures separately from each area.

Disseminating information could jeopardize the security of facilities or personnel or disclose proprietary data. For example, one stakeholder said agency reports contain key details about federal research facilities that opposition groups could use to target personnel in those facilities.

Disseminating information could confuse the public unless appropriate context is provided. One stakeholder said that the passive dissemination of data on animal research on a website, without appropriate context, would potentially increase public confusion and add misplaced scrutiny on animal use in federal research facilities.

For those stakeholder groups that generally expressed the view that federal agencies should make additional information available to the public on a proactive and routine basis, the most frequently cited reasons were the importance of transparency to allow the public to assess and understand animal use in federal research facilities and the need for oversight and accountability of federal agencies' use of animals. For example, some stakeholders responded that sharing additional information with the public would aid their efforts to monitor the reduction, refinement, and replacement of animals used in federal research. One stakeholder also mentioned that sharing additional information could be done easily on a website and would give the public a more complete picture of the use of animals by federal research facilities.
Several stakeholders also expressed the need for greater oversight and accountability of federal agencies' use of animals. For example, two stakeholders said that making additional information available about the degree to which animals experience pain or distress would help them assess whether federal programs' animal use is in compliance with specific provisions related to pain and distress in the Animal Welfare Act. Stakeholder groups less frequently cited other reasons for favoring routine reporting, such as the following:

FOIA requests can take several months, and sometimes years, for agencies to fulfill.

Certain information, such as the number of all vertebrate animals used by each agency—including those not reported under the Animal Welfare Act—should be easy to disseminate because federal agencies already collect or compile it for internal purposes.

Additional reporting would align the federal government with other countries' practices. For example, according to one stakeholder, the European Union categorizes and publicly releases animal use numbers that are more detailed than those reported in the United States.

APHIS and NIH routinely collect information about federal agencies' research with vertebrate animals and provide the public with related information. Having access to this information can help the public observe trends in animal use in research and learn about facilities' compliance with standards of humane care. Federal agencies met NIH's requirements for reporting on their animal use, but the data federal agencies provided to APHIS were not always consistent or complete. This situation resulted in part from APHIS's not providing sufficient instructions to federal research facilities for reporting on their use of animals covered by the Animal Welfare Act. In particular, APHIS instructs facilities not to report any birds in their annual reports, regardless of whether the birds are covered by the act.
Although aware of this limitation, APHIS has not provided a schedule or plan for defining birds covered by the act or for developing reporting requirements for those birds. In addition, APHIS's instructions have not sufficiently clarified two areas of confusion and differing understanding among federal agencies: first, activities that involve animal use outside the United States and, second, the specific conditions under which field studies are or are not covered by the act. APHIS plans to develop clarifying guidance on field studies and to publish the guidance for public comment. By defining the birds that need to be reported, by instructing federal research facilities to report research activities outside the United States, and by working with the research community to develop clear criteria for identifying field studies, APHIS would have greater assurance that the data it receives from research facilities fully reflect the activities covered by the Animal Welfare Act. APHIS has also not fully implemented USDA's information dissemination policy, which calls for the department's agencies to ensure that information is presented in an accurate, clear, complete, and unbiased manner. In particular, APHIS does not explain issues related to the completeness and accuracy of the data it provides to the public, such as inconsistencies in the types of field studies reported by federal agencies. By fully explaining these issues, the agency would improve users' ability to accurately interpret and analyze the data.

We are making the following four recommendations to APHIS:

The Administrator of APHIS should develop a timeline for (1) defining birds that are not bred for research and that are covered by the Animal Welfare Act, and (2) requiring that research facilities report to APHIS their use of birds covered by the act.
(Recommendation 1)

The Administrator of APHIS should instruct federal agencies to report their use of animals covered by the Animal Welfare Act in federal facilities located outside of the United States. (Recommendation 2)

In developing the definition of field studies, the Administrator of APHIS should provide research facilities with clear criteria for identifying field studies that are covered by the Animal Welfare Act's regulations and that facilities should report to APHIS, as well as field studies that facilities should not report. (Recommendation 3)

The Administrator of APHIS should ensure APHIS fully describes on its website how the agency compiles annual report data from research facilities, what the data represent, and any potential limitations to the data's completeness and accuracy. (Recommendation 4)

We provided a draft of this report to Commerce, Defense, HHS, DHS, Interior, USDA, VA, EPA, NASA, and the Smithsonian Institution. USDA and VA provided written comments on the draft, which are presented in appendixes IV and V, respectively. In its written comments, USDA said that APHIS provided planned corrective actions and timeframes for implementing three of our four recommendations; APHIS disagreed with one recommendation. In its written comments, VA said that the report's conclusions were consistent with our findings.

Regarding our first recommendation that the Administrator of APHIS develop a timeline for (1) defining birds that are not bred for research and that are covered by the Animal Welfare Act, and (2) requiring that research facilities report to APHIS their use of birds covered by the act, USDA stated that APHIS will submit a recommendation and timeline by September 30, 2018, to USDA officials regarding the development of a definition for birds.
USDA’s comments did not specifically respond to our recommendation that APHIS also develop a timeline for requiring that research facilities report their use of birds covered by the act; we continue to believe that APHIS should develop such a timeline. USDA’s written comments stated that APHIS disagreed with our second recommendation that the Administrator of APHIS should instruct federal agencies to report their use of animals covered by the Animal Welfare Act in federal facilities located outside of the United States. USDA provided several reasons for the disagreement: USDA stated that the absence of an exclusion to the requirements of the Animal Welfare Act or its regulations for federal research located outside of the United States does not create a requirement to collect information about such facilities’ use of animals. However, the Animal Welfare Act regulations define a reporting facility to include a department, agency, or instrumentality of the United States. In addition, officials from USDA’s Office of the General Counsel told us that there is no exclusion in the act or its regulations for federal research facilities that are located outside the United States. We have no reason to believe that such facilities should be excluded from the requirements of the Animal Welfare Act or its implementing regulations. We also note that in February 2018, APHIS officials told us that if federal agencies’ activities involving animals outside of the United States are in fact covered by the Animal Welfare Act based on the specific facts and circumstances of their activities, they should report those activities to APHIS. USDA’s comments stated that the collection of information related to research activities outside of the United States does not enable or inform its daily administration of the Animal Welfare Act and its charge to ensure the humane treatment of animals. 
Rather, USDA stated that our recommendation would impose an additional regulatory burden on federal research facilities. As stated above, we have no reason to believe that such facilities should be excluded from the requirements of the Animal Welfare Act or its implementing regulations. Without such an exclusion, the regulatory burden already exists; our recommendation would simply have APHIS instruct federal agencies to meet that regulatory requirement. Finally, USDA commented that our recommendation would place APHIS in the position of collecting different information from “reporting facilities,” as defined in the regulations, which in turn, would impact any summary presentation of information involving the use of animals. We understand that, if our recommendation were implemented, APHIS may receive “different” information from federal and nonfederal facilities; that is, federal research facilities might report activities outside of the United States while nonfederal facilities would not. However, as stated above, we have no reason to believe that such facilities should be excluded from the requirements of the Animal Welfare Act or its implementing regulations. Without such an exclusion, activities covered by the Animal Welfare Act in federal facilities located outside of the United States must already be reported. We also note that, as we state in our fourth recommendation, APHIS should inform the public about the nature of its data. That information could include describing any differences in reporting by federal and nonfederal research facilities. For the reasons given above, we continue to believe that the Administrator of APHIS should instruct federal agencies to report their use of animals in activities covered by the Animal Welfare Act in federal facilities located outside of the United States. 
In response to our third recommendation that the Administrator of APHIS take certain steps to clarify the definition of field studies that are covered by the Animal Welfare Act, USDA stated that APHIS agreed to issue a guidance document by December 31, 2018. We appreciate APHIS's commitment to issuing new guidance on field studies but note that USDA's written comments did not directly respond to the language in our draft recommendation that called for the agency to provide research facilities with clear examples of field studies that are covered by the Animal Welfare Act regulations. We also note that the Forest Service stated in technical comments that the extensive number of and variation in wildlife species preclude providing specific examples of activities that meet a prescribed definition of a field study. The Forest Service suggested that we modify our recommendation to call for APHIS to provide criteria for how research facilities should determine which studies qualify as an exempted field study. We agreed with that suggestion and modified our recommendation to call on APHIS to provide research facilities with criteria to help them determine which studies are covered by the Animal Welfare Act. APHIS agreed with our fourth recommendation that the Administrator of APHIS ensure the agency fully describes animal use data on its website. USDA's comments stated that, beginning with the fiscal year 2017 summary of activities, APHIS will describe how it compiles annual report data from research facilities, what the data represent, and any potential limitations to the data's completeness and accuracy. USDA stated that APHIS will update the website with this information by September 30, 2018. In its written comments, VA stated that our overall descriptions of its animal research program were accurate.
The agency also stated that it looks forward to a time when the use of animals in research is no longer needed, but until that time, the agency will use all necessary research strategies to reduce and prevent the suffering of veterans. APHIS, HHS, and DHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Agriculture, the Secretary of Commerce, the Secretary of Defense, the Secretary of Health and Human Services, the Secretary of Homeland Security, the Secretary of the Interior, the Secretary of Veterans Affairs, the Administrator of the Environmental Protection Agency, the Administrator of the National Aeronautics and Space Administration, the Secretary of the Smithsonian Institution, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact us at (202) 512-3841 or morriss@gao.gov or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Federal agencies conduct research with animals for a variety of purposes, including to benefit human or animal populations. We identified 10 agencies that conducted research using vertebrate animals in fiscal years 2014, 2015, or 2016 with their own staff using their own facilities and equipment. Federal agencies also fund activities that use animals, meaning that the research is done by a nonfederal entity. However, we did not include those activities in our review. In the process of identifying federal agencies that conducted research with animals, we also identified the wide range of vertebrate animal species that these agencies used from fiscal years 2014 through 2016. 
In response to our survey of agencies, we learned that some agencies conducted research with a dozen or more animal species, while others conducted activities with hundreds of species. For example, NASA reported to GAO that it used 16 species, while the National Museum of Natural History—one of the four animal research facilities within the Smithsonian Institution that responded to our survey—reported that it conducted research on about 1,400 species. Table 3 shows groups of vertebrate species the 10 agencies reported to GAO that they used in research in fiscal years 2014 through 2016. Some of the species groups shown in table 3 are not covered by the Animal Welfare Act (i.e., amphibians, fish, and reptiles), while some animal species within a group may not be covered by the act. For example, farm animals are not covered by the Animal Welfare Act if researchers use them for agricultural purposes, such as improving animal nutrition, breeding management, or production efficiency, or for improving the quality of food or fiber, but they are covered if researchers use them for human health purposes. Mice and rats are not covered by the Animal Welfare Act if they are of the genus Mus or Rattus and bred for use in research. Similarly, the act does not cover birds bred for use in research. Furthermore, agencies may have used animal species in a field study that is not covered by Animal Welfare Act regulations. Agencies are not required by the Animal Welfare Act to report to the U.S. Department of Agriculture's Animal and Plant Health Inspection Service (APHIS) their use of animals that are not covered by the act. Nevertheless, the agencies are required by other policies and statutes to ensure that they treat those animals humanely.
American Association for Laboratory Animal Science
AAALAC International (formerly known as the Association for Assessment and Accreditation of Laboratory Animal Care International)
As described in this report, GAO conducted a survey of federal agencies and stakeholder groups regarding their opinions on whether federal agencies should proactively, routinely, and publicly share information about their animal use via a website or other means. The graphics in this appendix illustrate the responses to our survey by stakeholder group. The stakeholder groups included 20 federal departments, agencies, and sub-agencies that conduct animal research on vertebrate species; eight animal advocacy organizations that advocate on behalf of animals; six research and science organizations; and five other stakeholders, including individuals in academia and other knowledgeable entities. Stakeholders from federal agencies, research organizations, and academia and other entities, except animal advocacy organizations, generally expressed the view that federal agencies should not make additional information routinely available to the public. (See figs. 1, 2, and 3, respectively.) In contrast, animal advocacy organizations generally expressed the view that federal agencies should make additional information routinely available to the public. (See fig. 4.) Figure 5 provides examples of stakeholders' statements explaining their views on whether federal agencies should or should not provide additional information to the public. GAO also asked stakeholder groups in the survey about their opinion regarding whether the Animal and Plant Health Inspection Service (APHIS) should modify how it collects and posts annual report data under the Animal Welfare Act. Seventeen of 39 stakeholders responded that they would like to see changes to the way APHIS collects and posts annual report data.
Specifically, all stakeholders from animal advocacy organizations and individuals in academia would like to see changes to how APHIS collects and posts annual report data, while some stakeholders from federal agencies and research and science organizations also noted that they would like to see changes. Table 4 provides examples of stakeholders' views and suggestions regarding such changes. In addition to the individuals named above, Mary Denigan-Macauley (Acting Director), Joseph Cook (Assistant Director), Ross Campbell (Analyst-in-Charge), Kevin Bray, Tara Congdon, Hayden Huang, Marc Meyer, Amber Sinclair, and Rajneesh Verma made key contributions to this report.
|
Research facilities, including those managed by federal agencies, use a wide range of animals in research and related activities each year. The Animal Welfare Act and the Health Research Extension Act have varying requirements for federal agencies and others to protect the welfare of and report on the use of different research animals to APHIS and NIH. GAO was asked to review several issues related to animals used in federal research. This report examines (1) the extent to which APHIS and NIH have provided federal facilities with guidance for reporting their animal use programs, (2) the extent to which APHIS and NIH have shared agencies' animal use information with the public, and (3) stakeholder views on federal agencies' sharing additional information. GAO identified federal agencies that used vertebrate animals in research in fiscal years 2014 through 2016, reviewed their reports to APHIS and NIH, and examined publicly available data. GAO also surveyed a nongeneralizable sample of stakeholders from federal agencies and animal advocacy, research and science, and academic organizations. The Department of Health and Human Services' (HHS) National Institutes of Health (NIH) and the U.S. Department of Agriculture's (USDA) Animal and Plant Health Inspection Service (APHIS) have provided guidance to federal research facilities on what they must report about their animal use programs under the Health Research Extension Act and the Animal Welfare Act, respectively. Federal research facilities we reviewed met NIH's reporting instructions. However, APHIS's instructions have not ensured consistent and complete reporting in three areas: research with birds, activities outside the United States, and field studies outside a typical laboratory. By clarifying its instructions, APHIS could improve the quality of animal use data it receives from agencies. APHIS and NIH voluntarily share some information about agencies' animal research with the public. 
In particular, APHIS posts to its website data on agencies' annual use of animals covered by the Animal Welfare Act, and NIH publicly posts a list of research facilities with approved animal use programs. However, APHIS does not describe potential limitations related to the accuracy and completeness of the data it shares, as called for by USDA guidance. For example, APHIS does not explain that the data do not include birds used for activities that are covered by the Animal Welfare Act and may include field studies that are not covered by the act. APHIS could increase the data's usefulness to the public by making such disclosures. Federal agencies may have additional information about their animal use programs, including data on vertebrate species used but not reported to APHIS; the purpose of research activities; and internal inspection reports. However, stakeholders GAO surveyed had different views on agencies' sharing such data with the public. Some stakeholders, particularly animal advocacy organizations, cited the need for more transparency and oversight, while others, including federal agencies and research and science organizations, raised concerns about the additional administrative burden on agencies. Source: GAO analysis of the Animal Welfare Act and the Office of Laboratory Animal Welfare's Public Health Service Policy on Humane Care and Use of Laboratory Animals. | GAO-18-459. aThe act covers research funded by the public health service agencies of the U.S. government. GAO recommends that APHIS clarify its reporting instructions and fully describe the potential limitations of the animal use data it makes available to the public. USDA stated that APHIS will take steps to implement GAO's recommendations, with the exception of clarifying reporting instructions for activities outside the United States. GAO continues to believe that APHIS needs to ensure complete reporting of such activities by federal facilities.
|
In our prior work, we identified a range of factors that can affect permitting timeliness and efficiency. For the purposes of this statement, we have grouped the factors into five broad categories: 1) coordination and communication, 2) human capital, 3) collecting and analyzing accurate milestone information, 4) incomplete applications, and 5) significant policy changes. Effective coordination and communication between agencies and applicants is a critical factor in an efficient and timely permitting process. Standards for internal control in the federal government call for management to externally communicate the necessary quality information to achieve the entity's objectives, including by communicating with and obtaining quality information from external parties. We found that better coordination between agencies and applicants could result in more efficient permitting. For example, in our February 2013 review of natural gas pipeline permitting, we reported that virtually all applications for pipeline projects require some level of coordination with one or more federal agencies, as well as others, to satisfy requirements for environmental review. For instance, the Department of the Interior's Bureau of Indian Affairs (BIA) is responsible for, among other things, approving rights of way across lands held in trust for an Indian or Indian tribe and must consult and coordinate with any affected tribe. We have reported on coordination practices that agencies use to streamline the permitting process, including the following. We have found that having a lead agency coordinate efforts of federal, state, and local stakeholders is beneficial to permitting processes. For example, in our February 2013 review on natural gas pipeline permitting, industry representatives and public interest groups told us that the interstate process was more efficient than the intrastate process because in the interstate process the Federal Energy Regulatory Commission (FERC) was designated the lead agency for the environmental review.
Other agencies may also designate lead entities for coordination. For example, in a November 2016 report, we described how BIA had taken steps to form an Indian Energy Service Center that was intended to, among other things, help expedite the permitting process associated with Indian energy development. We recommended that BIA involve other key regulatory agencies in the service center so that it could more effectively act as a lead agency. Establishing coordinating agreements among agencies can streamline the permitting process and reduce time required by routine processes. For example, in our February 2013 review of natural gas pipeline permitting, we reported that FERC and nine other agencies signed an interagency agreement for early coordination of required environmental and historic preservation reviews to encourage the timely development of pipeline projects. Agencies can also use mechanisms to streamline reviews of projects that are routine or less environmentally risky. For example, under NEPA, agencies may categorically exclude actions that an agency has found—in NEPA procedures adopted by the agency—do not individually or cumulatively have a significant effect on the human environment and for which, therefore, neither an environmental assessment nor an environmental impact statement is required. Also under NEPA, agencies may rely on “tiering,” in which broader, earlier NEPA reviews are incorporated into subsequent site-specific analyses. Tiering is used to avoid duplication of analysis as a proposed activity moves through the NEPA process, from a broad assessment to a site-specific analysis. Such a mechanism can reduce the number of required agency reviews and shorten the permitting process. Agency and industry representatives cited human capital factors as affecting the length of permitting reviews. Such factors include having a sufficient number of experts to review applications. 
Some examples include: In June 2015 and in November 2016, we reported concerns associated with BIA's long-standing workforce challenges, such as inadequate staff resources and staff at some offices without the skills needed to effectively review energy-related documents. In November 2016 we recommended that Interior direct BIA to incorporate effective workforce planning standards by assessing critical skills and competencies needed to fulfill BIA's responsibilities related to energy development. For a September 2014 report, representatives of companies applying for permits to construct liquefied natural gas (LNG) export facilities told us that staff shortages at the Pipeline and Hazardous Materials Safety Administration delayed spill modeling necessary for LNG facility reviews. In an August 2013 review of Interior's Bureau of Land Management (BLM) and oil and gas development, industry representatives told us that BLM offices process applications for permits to drill at different rates, and inadequate BLM staffing in offices with large application workloads is one of the reasons for these different rates. Agencies have taken some actions to mitigate human capital issues. For example, we reported in August 2013 that BLM had created special response teams of 10 to 12 oil and gas staff from across BLM field offices to help process applications for permits to drill in locations that were experiencing dramatic increases in submitted applications. In July 2012, we recommended that Interior instruct two of its bureaus to develop human capital plans to help manage and prepare for human capital issues, such as gaps in critical skills and competencies. Our work has shown that a factor that hinders efficiency and timeliness is that agencies often do not track when permitting milestones are achieved, such as the date a project application is submitted or receives final agency approval, to determine if they are achieving planned or expected results.
In addition, our work has shown that agencies often do not collect accurate information, which prevents them from analyzing their processes in order to improve and streamline them. The following are examples of reports in which we discussed the importance of collecting accurate milestone information: In December 2017, we found that the National Marine Fisheries Service and the U.S. Fish and Wildlife Service were not recording accurate permit milestone dates, so it was not possible to determine whether agencies met statutory review time frames. We recommended that these agencies clarify how and when staff should record review dates so that the agencies could assess the timeliness of reviews. We found in June 2015 that BIA did not have a documented process or the data needed to track its review and response times; to improve the efficiency and transparency of BIA’s review process, we recommended that the agency develop a process to track its review and response times and improve efforts to collect accurate review and response time information. We found in an August 2013 report that BLM did not have complete data on applications for permits to drill, and without accurate data on the time it took to process applications, BLM did not have the information it needed to improve its operations. We recommended that BLM ensure that all key dates associated with the processing of applications for permits to drill are completely and accurately entered into its system to improve the efficiency of the review process. Standards for internal control in the federal government call for management to design control activities to achieve objectives and respond to risks, including by comparing actual performance with planned or expected results and analyzing significant differences. Without tracking performance over time, agencies cannot do so. 
The standards also call for agency management to use quality information to achieve agency objectives; such information is appropriate, current, complete, accurate, accessible, and provided on a timely basis. As we have found, having quality information on permitting milestones can help agencies identify the duration of the permitting process, analyze process deficiencies, and implement improvements. According to agency officials we spoke with and agency documents we reviewed, incomplete applications are a factor that can affect the duration of reviews. For example, in a 2014 BLM budget document, BLM reported that—due to personnel turnover in the oil and gas industry—operators were submitting inconsistent and incomplete applications for permits to drill, which was delaying the approval of permits. In a February 2013 report, officials we spoke with from Army Corps of Engineers district offices said that incomplete applications may delay their review because applicants are given time to revise their application information. Deficiencies within agency IT systems may also result in incomplete applications. As we noted in a July 2012 report, Interior officials told us that their review of oil and gas exploration and development plans was hindered by limitations in its IT system that allowed operators to submit inaccurate or incomplete plans, after which plans were returned to operators for revision or completion. Agencies can reduce the possibility of incomplete applications by encouraging early coordination between the prospective applicant and the permitting agency. According to agency and industry officials we spoke with, early coordination can make the permitting process more efficient. One example of early coordination is FERC’s pre-filing process, in which an applicant may communicate with FERC staff to ensure an application is complete before formally submitting it to the commission. Changes in U.S. 
policy unrelated to permitting are a factor that can also affect the duration of federal permitting reviews. For example, in September 2014, we reported that the Department of Energy did not approve liquefied natural gas exports to countries without free-trade agreements with the United States for a period of 16 months. We found that the Department stopped approving applications while it conducted a study of the effect of liquefied natural gas exports on the U.S. economy and the national interest. Exporting liquefied natural gas was an economic reversal from the previous decade in which the United States was expected to become an importer of liquefied natural gas. Policy changes can result from unforeseen events. After the Deepwater Horizon incident and oil spill in 2010, Interior strengthened many of its safety requirements and policies to prevent another offshore incident. For example, Interior put new safety requirements in place related to well control, well casing and cementing, and blowout preventers, among other things. In a July 2012 report, we found that after the new safety requirements went into effect, review times for offshore oil and gas drilling permits increased, as did the number of times that Interior returned a permit to an operator. In conclusion, our past reports have identified varied factors that affect the timeliness and efficiencies of federal energy infrastructure permitting reviews. Federal agencies have implemented a number of our recommendations and taken steps to implement more efficient permitting, but several of our recommendations remain open, presenting opportunities to continue to improve permitting processes. Chairmen Palmer and Gianforte, Ranking Members Raskin and Plaskett, and Members of the Subcommittees, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. 
If you or your staff members have any questions concerning this testimony, please contact Frank Rusco, Director, Natural Resources and Environment, who may be reached at (202) 512-3841 or RuscoF@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Christine Kehr (Assistant Director), Dave Messman (Analyst-in-Charge), Patrick Bernard, Marissa Dondoe, Quindi Franco, William Gerard, Rich Johnson, Gwen Kirby, Rebecca Makar, Tahra Nichols, Holly Sasso, and Kiki Theodoropoulos. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Congress recognizes the harmful effects of permitting delays on infrastructure projects and has passed legislation to streamline project reviews and hold agencies accountable. For example, in 2015 Congress passed the Fixing America's Surface Transportation Act, which included provisions streamlining the permitting process. Federal agencies, including the Department of the Interior and the Federal Energy Regulatory Commission (FERC), play a critical role by reviewing energy infrastructure projects to ensure they comply with federal statutes and regulations. This testimony discusses factors GAO found that can affect energy infrastructure permitting timeliness and efficiency. To do this work, GAO drew on reports issued from July 2012 to December 2017. GAO reviewed relevant federal laws, regulations, and policies; reviewed and analyzed federal data; and interviewed tribal, federal, state, and industry officials, among others. GAO's prior work has found that the timeliness and efficiency of permit reviews may be affected by a range of factors. For the purposes of this testimony, GAO grouped these factors into five categories. Coordination and Communication. GAO found that better coordination between agencies and applicants is a factor that could result in more efficient permitting. Coordination practices that agencies can use to streamline the permitting process include the following: Designating a Lead Coordinating Agency. GAO found having a lead agency to coordinate the efforts of federal, state, and local stakeholders is beneficial to permitting processes. For example, in a February 2013 report on natural gas pipeline permitting, industry representatives and public interest groups told GAO that the interstate process was more efficient than the intrastate process because in the interstate process FERC was the lead agency for the environmental review. Establishing Coordinating Agreements among Agencies.
In the February 2013 report, GAO reported that FERC and nine other agencies signed an interagency agreement for early coordination of required environmental and historic preservation reviews to encourage the timely development of pipeline projects. Human Capital. Agency and industry representatives cited human capital factors as affecting the length of permitting reviews. Such factors include having a sufficient number of experts to review applications. GAO reported in November 2016 on long-standing workforce challenges at the Department of the Interior's Bureau of Indian Affairs (BIA), such as inadequate staff resources and staff at some offices without the skills to effectively conduct such reviews. GAO recommended that Interior incorporate effective workforce planning standards by assessing critical skills and competencies needed to fulfill its responsibilities related to energy development. Interior agreed with this recommendation, and BIA stated that its goal is to develop such standards by the end of fiscal year 2018. Collecting and Analyzing Accurate Milestone Information. GAO's work has shown that a factor that hinders efficiency and timeliness is that agencies often do not track when permitting milestones are achieved, such as the date a project application is submitted or receives final agency approval. Having quality information on permitting milestones can help agencies better analyze process deficiencies and implement improvements. Incomplete Applications. Agency officials and agency documents cited incomplete applications as affecting the duration of reviews. For example, in a 2014 budget document, the Bureau of Land Management (BLM) reported that—due to personnel turnover in the oil and gas industry—operators were submitting inconsistent and incomplete applications for drilling permits, delaying permit approvals. Significant Policy Changes. Policy changes unrelated to permitting can affect permitting time frames.
For example, after the 2010 Deepwater Horizon incident and oil spill, Interior issued new safety requirements for offshore drilling. GAO found that review times for offshore oil and gas drilling permits increased after these safety requirements were implemented. GAO has made numerous recommendations about ways to improve energy infrastructure permitting processes. Federal agencies have implemented a number of GAO's recommendations and taken steps to implement more efficient permitting, but several of GAO's recommendations remain open, presenting opportunities to continue to improve permitting processes.
|
Children enter foster care when they have been removed from their parents or guardians and placed under the responsibility of a child welfare agency. Reasons for a child’s removal can vary, though 61 percent of nearly 275,000 removals during fiscal year 2016 involved neglect and 34 percent involved drug abuse by the parent(s), according to the most recent available HHS data. Child welfare agencies most commonly place children with unrelated foster parents, with relatives, or in congregate care settings. Coordinating placement and support services for these children, such as physical and mental health services, education, child care, and transportation, is typically the responsibility of child welfare agency caseworkers. Caseworkers may also coordinate placements for children exiting foster care, which most commonly include reunifications with the child’s parents or permanent placements through adoption, legal guardianship, or other living arrangements with a relative. Children who age out of the foster care system without a permanent placement with a family may receive transitional supports, such as housing and job search services. Children placed in foster families—including unrelated foster parents, relatives, and fictive kin (e.g., close family friends who are not relatives)— live in the family’s home and are typically incorporated into an existing family structure. For example, these families may include biological children and other children in foster care. Families may receive a payment from the child welfare agency to help cover the costs of a child’s care, as determined by each state. Families who are trained to provide therapeutic foster care services are supervised and supported by qualified program staff to care for children who need a higher level of care. Therapeutic foster care families may have fewer or no other children in the home, and parents in these families may be required to provide a higher level of care and supervision for the child. 
In addition, the payment provided to these families may be higher. States are primarily responsible for administering their child welfare programs, consistent with applicable federal laws and regulations. Their responsibilities include recruiting and retaining foster families and finding other appropriate placements for children. In recruiting foster families, states generally require that families undergo a licensing process that includes a home study to assess the suitability of the prospective parents, including their health, finances, and criminal history, and take pre-service training on topics such as the effects of trauma on a child’s behavior. In retaining foster families, states may provide support to families, such as through ongoing training classes and regular visits from child welfare agency caseworkers if a child is placed in their home. State and county child welfare agencies may work with private foster care providers, commonly through contracts, to help them administer child welfare services. Private providers can include non-profit and for-profit organizations that provide a range of public and private services in addition to foster care, such as residential treatment, mental health, and adoption services. For foster care, private providers may be responsible for recruiting foster families, which may involve identifying prospective foster parents, providing information on and helping with the licensing process, and conducting home studies and training. If the child welfare agency places a child with a foster family working with a private provider, the private provider may also be responsible for activities that can help retain foster families, such as conducting regular visits with the family (in addition to visits from child welfare agency caseworkers) and helping them access needed services. Child welfare agencies may pay these providers based on the number of children placed. 
This payment may include an administrative payment to the private provider, as well as a payment that the private provider passes on to the foster family to help cover the costs of a child’s care. Child welfare agencies and private providers may also work with other entities to recruit and retain foster families. For example, they may collaborate with community partners, such as faith-based organizations and schools, to share information about foster care and recruit families. Child welfare agencies and private providers may also work with direct service providers, such as hospitals and community-based mental health clinics, to obtain services to support children in foster care and their foster families, which can help retain these families. HHS’s Administration for Children and Families (ACF) administers several federal funding sources that states can use to recruit and retain foster families, in addition to state, local, and other funds. For example, funding appropriated for title IV-E of the Social Security Act makes up the large majority of federal funding provided for child welfare, comprising about 89 percent of federal child welfare appropriations in fiscal year 2017 (approximately $7 billion of nearly $7.9 billion), according to ACF. These funds are available to states to help cover the costs of operating their foster care, adoption, and guardianship assistance programs. For example, in their foster care programs, states may use these funds for payments to foster families to help cover the costs of care for eligible children (e.g., food, clothing, and shelter) and for certain administrative expenses, including recruiting and training prospective foster parents. Title IV-E funds appropriated specifically for foster care programs totaled about $4.3 billion in fiscal year 2017, comprising about 61 percent of title IV-E funding, according to ACF. 
In addition, title IV-B of the Social Security Act is the primary source of federal child welfare funding available for child welfare services. States may use these funds for family support and family preservation services to help keep families together and reduce the need to recruit and retain foster families. Such services can include crisis intervention, family counseling, parent support groups, and mentoring. States may also use title IV-B funds to support activities to recruit and retain foster families. Federal appropriations for title IV-B comprised about 8 percent of federal child welfare appropriations (approximately $650 million of nearly $7.9 billion) in fiscal year 2017, according to ACF. ACF is responsible for monitoring states’ implementation of these programs. For example, ACF monitors state compliance with title IV-B plan requirements through its review of states’ 5-year Child and Family Services Plans and Annual Progress and Services Reports. Child and Family Services Plans set forth a state’s vision, goals, and objectives to strengthen its child welfare system, and Annual Progress and Services Reports provide annual updates on the progress made by states toward those goals and objectives. Child and Family Services Plans are required for a state to receive federal funding under title IV-B, and document the state’s compliance with federal program requirements. One requirement is that states must describe in their plans how they will “provide for the diligent recruitment of potential foster and adoptive families that reflect the ethnic and racial diversity of children in the State for whom foster and adoptive homes are needed.” In addition, ACF conducts Child and Family Services Reviews, generally every 5 years, to assess states’ conformity with requirements under these federal programs. 
These reviews involve case file reviews and stakeholder interviews, and are structured to help states identify strengths and areas needing improvement within their agencies and programs. States found not to be in substantial conformity with federal requirements must develop a program improvement plan and undergo more frequent review. In addition to the diligent recruitment requirements under title IV-B of the Social Security Act, states receiving federal foster care funds under title IV-E are generally required to search for relatives when a child enters foster care. In the three selected states—California, Georgia, and Indiana—child welfare officials said their first priority is to recruit relatives or fictive kin to care for children entering foster care, when appropriate. Officials in California and Georgia discussed recent initiatives to expand the search for relatives and fictive kin for children already in foster care. For example, county child welfare officials in California said they contracted with a private provider, which they also use to recruit and retain foster families, to conduct these searches. This particular private provider told us that they can access the child welfare agency's case management system to review information about each child to determine which relatives or fictive kin have already been contacted. The private provider said they may contact these relatives or fictive kin to see whether circumstances have changed such that they would now be able to care for the child. In addition, the private provider said they may use existing contacts, social media, and an identity search program to locate additional relatives or fictive kin for a child. This private provider reported that from July to September 2017, their searches yielded 36 additional relatives or fictive kin, on average, for each of the 23 children in one county for whom the private provider conducted a search.
In addition, officials in Georgia said they initiated pilot projects in two regional offices to train staff on how to search for relatives and fictive kin. Community outreach to a broad population of prospective foster families is a moderately or very useful recruitment strategy, according to 36 states that responded to our survey. In addition, child welfare officials and 11 of the 14 private providers in the three selected states said they engage in community outreach events to recruit prospective foster families. For example, they said they attend local events (e.g., state fairs) or visit local organizations (e.g., faith-based organizations or schools) to provide information about becoming a foster parent. One private provider said they attend local markets and summer festivals to talk with prospective families and provide them with informational materials. Another private provider said they hold meetings for prospective foster parents to answer questions and provide additional information about foster care and the role of the private provider. In addition, 20 states reported in our survey that marketing campaigns, such as mailings and media advertisements, are a moderately or very useful recruitment strategy. In the three selected states, child welfare officials and 12 of the 14 private providers said they use different forms of media, such as newspapers, radio, television, billboards, social media, or printed advertisements, to solicit foster families. Child welfare officials we interviewed in Georgia and Indiana said they have implemented statewide media campaigns that incorporate both traditional and digital media. Officials in Georgia told us the campaigns have successfully increased inquiries through the agency’s website and toll-free phone line. A private provider in one county said they worked with a marketing firm to create advertisements that were shown in movie theaters, which also resulted in additional inquiries from prospective families. 
With regard to therapeutic foster care services, private providers we spoke with in both of our discussion groups said they use strategies such as yard signs, television commercials, and social media to recruit therapeutic foster care families. In our survey, nearly all states reported having targeted recruitment strategies as part of their recruitment plans or practices, such as strategies that focus on certain populations of prospective foster parents (e.g., those in faith-based communities or of a certain race), families for certain populations of children in foster care (e.g., teenagers and sibling groups), and families living in specific geographic locations. To help inform their recruitment strategies, 39 states reported in our survey that they collect and use information on children awaiting placement, such as their backgrounds and service needs, and 31 states reported that they collect and use information on available foster families, such as their preferences for placements and where they are located. In the three selected states, child welfare officials and 8 of the 14 private providers we interviewed said they use targeted recruitment to identify prospective foster families. In addition, child welfare officials and five private providers said they collect or use demographic data on children needing placement and available foster families to inform their efforts. For example, child welfare officials in one county said they use data to target recruitment efforts in the neighborhoods where children entered foster care. Similarly, one private provider told us they use data on the demographics of successful foster families to target recruitment efforts toward those types of families, such as social workers and parents whose children have grown up and left home (i.e., “empty nesters”). 
Targeted recruitment can be a particularly useful strategy to identify families who can provide therapeutic foster care services for children who need a higher level of care, such as those who have severe mental health conditions or who are medically fragile. In the three selected states, child welfare officials and four private providers said they use targeted recruitment strategies to search for families who can provide therapeutic foster care services. For example, child welfare officials in one state said they focus on recruiting individuals with specific skillsets, such as doctors and nurses who have experience working with children who need more care. Private providers in both of our discussion groups also said they use targeted recruitment strategies for these purposes. When asked in our survey about the usefulness of various recruitment strategies, states most often cited referrals from current foster families as a moderately or very useful recruitment strategy. In the three selected states, child welfare officials and all 14 private providers said they use referrals from current foster families to recruit new families, and the majority of these officials and private providers said such referrals are the most effective recruitment strategy. One private provider emphasized that current foster families are better recruiters than private providers because these families can speak from first-hand experience about the potential benefits and difficulties of caring for a child in foster care. Another private provider said that referrals occur through regular interactions in the community or through information meetings and events facilitated by private providers, such as movie nights. To encourage referrals, 6 of the 14 private providers in the three states said they offer financial incentives to current foster families who help recruit new families. For example, three of these private providers said they offer incentives ranging from $100 to $500. 
In regard to therapeutic foster care services, private providers in both of our discussion groups said referrals are the most effective recruitment strategy. Private providers in one group said they offer financial incentives ranging from $200 to $300, which generally are paid after a new family becomes licensed to provide therapeutic foster care services and a child has been placed in their home. Eight of the 14 private providers in the three selected states said they try, in general, to employ multiple types of recruitment strategies. Further, many of these private providers explained that prospective foster parents typically hear about foster care through multiple mediums before applying to become a parent. For example, a prospective parent might hear a radio advertisement, then see a billboard, and later talk to a private provider at a state fair before deciding to apply. Foster parents we spoke with in the three states, as well as in discussion groups on therapeutic foster care services, discussed a number of reasons why they became foster parents, including knowing others who had provided foster care, having the desire to give back, and wanting to expand their family by fostering with the intention to adopt a child (see text box). In our survey, 49 states reported using private providers to recruit foster families, including 44 that use private providers to recruit families who can provide therapeutic foster care services for children who need a higher level of care. Specifically, 30 states reported that they use private providers to recruit both traditional and therapeutic foster care families, 14 reported that they use private providers to recruit therapeutic foster care families exclusively, and the remaining 5 reported that they use private providers to recruit traditional foster families exclusively. 
In the three selected states, child welfare officials said they initially developed agreements with private providers to recruit families who can provide therapeutic foster care services. However, as state caseloads have risen, these officials said they have also referred children who do not need therapeutic foster care services to private providers. Child welfare officials and private providers in the three selected states said that private providers in their states are responsible for both recruiting and retaining foster families. They said responsibilities of private providers can include helping families become licensed, suggesting possible matches between children and available families, and providing support to help families access services needed to care for children in foster care (see fig. 1). Child welfare officials and private providers in the three selected states described ways they have collaborated to recruit foster families, and discussed the benefits of using private providers to recruit and retain these families. For example, child welfare officials in one county said they collaborated with private providers to create common marketing materials that included information about the child welfare agency and each private provider, which helps prospective foster families decide which entity they want to work with. Officials and private providers in this county said collaborative recruitment efforts are an efficient use of resources and reduce competition in recruiting from the same pool of prospective foster families. Nearly all of the 14 private providers we interviewed in the three selected states said they can help child welfare agencies support foster families, particularly those who care for children who need more care than others, because they can maintain lower caseloads and be more accessible to families than child welfare agencies. 
These private providers explained that they accept placements for children only when they have available foster families and staff, whereas child welfare agencies cannot choose how many children they have in their caseloads. Specifically, four private providers noted that private providers typically maintain small caseloads, such as 10 children per private provider caseworker. In contrast, seven private providers said child welfare agencies manage larger caseloads—as high as 40 children per caseworker—which can strain their ability to support foster families. In addition, eight private providers said families can contact them 24 hours a day, which may not be the case with child welfare agency caseworkers. All of the 49 states that reported using private providers in our survey also reported having various oversight mechanisms to monitor them. These mechanisms include periodic audits and site visits, regular calls for information sharing, periodic check-ins with foster families working with private providers, and requirements for providers to develop recruitment plans. Child welfare officials in the three selected states provided detail on a range of oversight activities. For example, child welfare officials in Georgia said their agency conducts comprehensive audits of private providers annually, which include an examination of the facility, case file reviews, and staff interviews. In addition, county child welfare officials in California said their agency requires private providers to attend monthly meetings with agency staff and submit quarterly outcome reports. In response to our survey, 34 states reported that limited resources to focus on foster family recruitment made their recruitment efforts moderately or very challenging. In the three selected states, child welfare officials raised concerns about their ability to prioritize foster family recruitment efforts, given large increases in their foster care caseloads and other demands for resources. 
Nationwide, caseloads increased by over 10 percent from fiscal years 2012 through 2016, according to HHS data. In addition, 8 of the 14 private providers in the three states told us that a lack of dedicated funding for recruitment from child welfare agencies made recruitment efforts challenging. One private provider said they have recently put recruitment efforts on hold to focus on serving children in existing placements. States also reported in our survey that eligibility requirements for federal foster care funding have affected their ability to prioritize resources for recruitment. Specifically, of the 34 states that provided a response on this issue, almost half reported that requirements that tie eligibility for receiving federal funds under title IV-E of the Social Security Act to income eligibility standards under the discontinued Aid to Families with Dependent Children program have affected their recruitment efforts to a moderate or great extent. States may use title IV-E funds to assist with the costs of operating their foster care programs, and are generally entitled to receive these funds based on the number of eligible children they have in their programs. To be eligible for title IV-E foster care funds, a child must have been removed from a home that meets income eligibility standards under the Aid to Families with Dependent Children program as of July 1996, among other criteria. The Aid to Families with Dependent Children program was replaced by the Temporary Assistance for Needy Families program beginning in 1996, and the income eligibility standards for title IV-E foster care funding have not been changed since then. We reported in 2013 that a family of four had to have an annual income below $15,911 to meet the income eligibility threshold in 1996. If adjusted for inflation, the threshold would have been $23,550 in 2013. 
Due, in part, to fewer families meeting these income eligibility standards, we found that the number of children who currently meet title IV-E eligibility requirements has declined. As a result, we reported that states have received less federal funding under title IV-E and have paid an increasingly larger share of funds for their foster care programs. The percentage of children eligible for title IV-E foster care funds decreased from about 54 percent in fiscal year 1996 to nearly 39 percent in fiscal year 2015, according to data published by the Congressional Research Service (see fig. 2). Given fiscal constraints, child welfare agencies, like other state agencies, may need to make difficult choices about how to allocate their limited resources. The process for licensing foster families can help ensure that children are placed in safe and stable environments that meet their needs. However, 35 states reported in our survey that lengthy licensing processes made it moderately or very challenging to recruit new foster families. In the three selected states, child welfare officials and 7 of the 14 private providers discussed extensive state licensing processes that may discourage prospective foster families, including delays in getting fingerprints, completing background checks, or reviewing applications. Some private providers said delays are likely caused by competing priorities at state licensing agencies or limited staff in child welfare agencies. One private provider told us that families may wait several months for approval after completing an application. Another private provider told us that in the past year, approval time frames for licenses have, in some cases, increased from 1 to 2 weeks to 3 to 6 months. In regard to therapeutic foster care services, private providers in both discussion groups raised similar concerns (see text box). 
Child welfare officials in California told us they are in the process of restructuring their licensing process to improve efficiencies and reduce burden for foster families. In addition, county child welfare officials in the state told us they are offering families additional support to help them through the licensing process, such as assigning staff to prospective foster families as soon as they initiate the licensing process to help them complete required paperwork and schedule pre-service training. In response to our survey, states reported difficulties finding families who can meet the needs of children, particularly for therapeutic foster care services. Specifically, 37 states reported that the needs of children entering foster care have increased, and 35 reported that there are not enough foster families willing to care for the types of children needing placement. For example, nearly all states cited difficulties finding families for children with aggressive behaviors and severe mental health needs, as well as for teenagers and sibling groups. Consequently, 36 states reported difficulties appropriately matching children with families, and 30 reported having moderately or significantly too few therapeutic foster care families (see text box). In the three selected states, child welfare officials and 7 of 14 private providers discussed similar challenges finding appropriate families for children needing placement. For example, officials in one state said the increased demand for both traditional and therapeutic foster care families has caused them to place children in the first available home rather than match them with families based on the family’s preferences and ability to provide care. One private provider told us that due to the increasing number of referrals for placements, they are not able to be as selective during the matching process as they have been in the past. 
Another private provider said child welfare agencies may be so pressed to find placements for children that they may call foster families working with the private provider directly, which can put pressure on the family to agree to the placement even when the family does not believe the child is a good fit. One private provider told us that a foster family accepted a child who had been sleeping in the child welfare agency caseworker’s office, but the placement was not a good fit and was eventually disrupted, which was traumatic for both the child and the foster family. Private providers in both of our discussion groups said finding families willing to provide therapeutic foster care services to children can be difficult. They noted that parents may be required to take on more documentation and supervision responsibilities for a child who requires a higher level of care and complete more intensive training, which may be difficult for working parents. In addition to challenges finding appropriate families for children, 34 states reported in our survey that a negative perception of foster care made it moderately or very challenging to recruit new families. Child welfare officials in two states and 5 of the 14 private providers we interviewed raised similar concerns. For example, child welfare officials in one county told us that they recruit foster families in an environment where media reports have highlighted challenges with overburdened caseworkers and turnover of agency directors. These officials also said foster parents may share negative experiences with family and friends, leading to an unfavorable impression of child welfare agencies within the community. In addition, child welfare officials in one state and four private providers said some families who provide foster care services have faced false allegations of child abuse and subsequent investigations. 
Some private providers said these investigations can be emotionally draining or disruptive to the family, and some said that fear of such allegations and investigations may deter prospective families from becoming a foster family. Other recruitment challenges cited by several child welfare officials, private providers, and foster parents we interviewed included concerns by prospective foster families about caring for children who have high needs or who are certain ages, or that providing foster care will disrupt their nuclear family. While many child welfare officials and private providers we spoke with acknowledged these negative perceptions and fears, parents in all eight foster parent groups we interviewed in the three states also discussed how being a foster family can be a positive experience. For example, several foster parents said providing foster care to different types of children has enhanced their family. Private providers and foster parents also said it is important to share personal experiences to bring understanding about what it is like to be a foster family. For example, one foster parent told us about a blog she writes to describe normal family activities that include children in foster care, such as taking family trips. In response to our survey, 29 states reported that inadequate support for foster families from the child welfare agency made it moderately or very challenging to retain these families. In the three selected states, all 14 private providers we interviewed and foster parents in all eight of the foster parent groups we spoke with emphasized the importance of supporting families in order to retain them. All 14 private providers discussed concerns about communication with child welfare agencies, which they said can affect the quality of services they provide to foster families. 
For example, 10 of the private providers said they have difficulty contacting or receiving a response from child welfare agency caseworkers when they try to obtain information needed to comply with child welfare agency requirements. One private provider explained that they are required to develop a service plan for each child they place with a family, and the plan must be signed by the child welfare agency caseworker within 5 days of placement. However, this private provider said they often cannot reach the caseworker to have plans reviewed and approved within the required time frame. Seven private providers told us that there often is confusion on the part of child welfare agency caseworkers about the role of private providers. For example, these private providers said child welfare agency caseworkers may not know which tasks the private providers are responsible for or may be unfamiliar with the paperwork they need to give to the private provider. Similarly, foster parents in five groups expressed dissatisfaction with the level of support they have received from child welfare agency caseworkers. These foster parents described instances in which they were unable to reach their caseworker during emergencies, such as when they needed permission to administer medications to their foster child. One foster parent told us she had waited approximately 8 weeks for her caseworker to approve her child’s medication. This parent said she worked with her private provider to email the child welfare agency caseworker on a daily basis, but received no response. Foster parents in our discussion group raised similar concerns (see text box). Reasons why child welfare agency caseworkers may be limited in their ability to support foster families can include high caseloads and caseworker turnover. For example, 33 states reported in our survey that having too few staff and inadequate funding made it moderately or very challenging to retain foster families. 
In the three selected states, child welfare officials, 9 of 14 private providers, and foster parents in five of the eight foster parent groups noted that high caseloads contribute to a lack of support for foster families. Child welfare officials in one state said that although their regulations stipulate a maximum caseload of 12 to 17 cases, many caseworkers have caseloads that exceed those levels. In addition, a private provider in this state told us that child welfare agency caseworkers typically carry about 35 cases. Other private providers explained that the demands on child welfare caseworkers to meet basic paperwork and case planning requirements and conduct visits for a large caseload may prevent them from responding to requests or returning phone calls in a timely manner. Child welfare officials in two states, 11 private providers, and foster parents in three foster parent groups also explained that frequent caseworker turnover can affect the level of support foster families receive, particularly when new caseworkers are unfamiliar with a child’s history and needs. One foster parent told us that she had worked with eight different child welfare agency caseworkers in a 19-month period. Another foster parent said she maintains all of her foster children’s records, since in the past, documents have been lost in transfers between child welfare agency caseworkers. Child welfare officials in the three selected states acknowledged difficulties supporting foster families due to high caseloads or caseworker turnover. Officials in one state said they recently requested additional state funds to add 500 caseworker positions, and officials in another state said they have made efforts to revisit staffing levels following reductions during the economic recession in 2008. In addition, many private providers and foster parents we interviewed noted limitations with other supports for foster families. 
For example, 10 of 14 private providers and foster parents in three of the eight foster parent groups in the three states discussed their concerns about low payment rates for foster families, which some said may not adequately cover the costs of caring for a child. A 2012 study on payment rates for foster families found that basic payment rates (e.g., for traditional foster care services) in the majority of states fell below estimated costs of caring for a child, based on data from the U.S. Department of Agriculture. Five private providers and foster parents in five foster parent groups also discussed a lack of access to respite care services or a lack of “voice” for foster parents in contributing to decisions regarding children in their care. These private providers and foster parents said these circumstances can be frustrating and cause parents to leave the system. In response to our survey, 31 states reported that inadequate access to services, such as child care and transportation, made it moderately or very challenging to retain foster families. In the three selected states, child welfare officials, 9 of 14 private providers, and foster parents in six of eight foster parent groups discussed similar difficulties. For example, they discussed difficulties accessing child care services, which some said are particularly needed because of the increasing number of opioid-affected infants coming into care. Some officials, private providers, and foster parents said their state may offer child care subsidies, but waitlists can be long, and foster families may have difficulties finding an approved child care center, particularly for children who need a higher level of care. Further, child welfare officials, private providers, and foster parents discussed challenges accessing transportation services. 
For example, child welfare officials said children are sometimes moved to homes outside their original community due to a lack of available homes, which places a burden on foster families to transport children to physical and mental health appointments, regular visits with their biological families, and school. A private provider we interviewed said many parents who provide transportation to these various appointments also must go through a burdensome process to claim mileage reimbursement from the child welfare agency, so many do not submit a claim. In addition, child welfare officials, private providers, and foster parents discussed challenges accessing mental health services. For example, one private provider said they have been unable to find a qualified mental health provider who accepts Medicaid to deliver needed services to an autistic child. Further, child welfare officials we interviewed in one county discussed difficulties connecting children with therapists who have an understanding of childhood trauma. In addition to these challenges, child welfare officials and private providers we interviewed said many foster families leave the foster care system due to family or life changes, including adoptions of children in their care, retirements, health issues, and relocation to a different state. HHS’s Administration for Children and Families (ACF) provides a number of supports to help state child welfare agencies in their efforts to recruit and retain foster families, according to ACF officials we interviewed and agency documents we reviewed. These supports include technical assistance, guidance and information, and funding. Technical assistance. ACF provided technical assistance through its National Resource Center for Diligent Recruitment (the Center), and subsequently, the Child Welfare Capacity Building Collaborative. 
The Center provided several types of technical assistance to achieve its aim of helping states develop and implement diligent recruitment programs to achieve outcomes such as improving permanency and placement stability for children in foster care. The Center provided on- and off-site coaching to states in a number of areas, such as developing a mix of general and targeted recruitment strategies, using existing data to target recruitment efforts, and developing a recruitment plan. Staff who worked at the Center reported providing direct technical assistance and training to 30 states. The Center also provided toolkits that guide states through the process of developing a comprehensive diligent recruitment plan to meet federal requirements. For example, the toolkits include discussion questions about the goals states have for their plans, suggestions on which stakeholders to include, and worksheets to help states analyze existing data. ACF officials told us that they also review states’ diligent recruitment plans and may provide feedback to states. In addition, ACF provides technical assistance to states through its Child and Family Services Reviews. These reviews are generally conducted every 5 years and examine a number of factors in states’ foster care programs to assess conformity with federal requirements, including factors related to recruiting and retaining foster families. In its reviews of 24 states in fiscal years 2015 and 2016, ACF reported deficiencies for 18. ACF officials said these deficiencies included a lack of adequate state recruitment plans and data used for recruitment efforts. In addition, they said they will be working with states to address identified deficiencies in subsequent program improvement plans, which are to be developed in consultation with ACF. Guidance and information. ACF provides a wide range of guidance and information to states to support their recruitment and retention efforts. 
For example, the Center distributed free monthly electronic newsletters that provided information on new tools, resources, and webinars related to foster family recruitment and retention. The Center also developed or provided links to publications on topics such as using data to inform recruitment efforts, taking a customer service approach in working with current and prospective foster families, and lessons learned from related projects funded by ACF. The Center facilitated information sharing among states by holding webinars, such as one on the benefits of implementing a comprehensive diligent recruitment program, and peer-to-peer networking events on topics such as recruiting, developing, and supporting therapeutic foster care families. In addition, ACF’s Child Welfare Information Gateway is a website that provides access to a broad array of electronic publications, websites, databases, and online learning tools for improving child welfare practice. For example, its resources related to recruiting and retaining foster families include publications on strategies and tools, as well as examples from state and local child welfare agencies on promising practices. Funding. HHS administers a number of federal funding sources that states said they used for their foster family recruitment and retention efforts. For example, in our survey, states most often cited using child welfare funds under title IV-E and IV-B of the Social Security Act for these purposes in fiscal year 2016 (see fig. 3). ACF also provided a number of discretionary grants to support state efforts to recruit and retain foster families through the Adoption Opportunities program, which funds projects designed to eliminate barriers to adoption and help find permanent families for children, particularly older children, minority children, and those with special needs. 
Specifically, ACF awarded cooperative agreements to 22 states, localities, and non-profit organizations in fiscal years 2008 through 2013 for 5-year projects that aim to enhance recruitment efforts and improve permanency outcomes for children, among other things. For example, ACF awarded a cooperative agreement in 2010 to the county child welfare agency in Los Angeles, California, to launch a project that targeted recruitment efforts to prospective foster families in African American, Latino, LGBT, and deaf communities to increase permanency outcomes for their foster care population. In addition, it awarded a cooperative agreement in 2013 to Oregon’s state child welfare agency to implement a project that focused on developing customer service concepts in working with foster families, increasing community partnerships, and using data to inform recruitment efforts and outcome measures. ACF also awarded two cooperative agreements to Spaulding for Children to develop training for prospective and current foster and adoptive families. The first, awarded in fiscal year 2016, was for a 3-year project to develop a foster and adoptive parent training program to prepare families who can care for children who have high needs, such as children needing therapeutic foster care services. The second, awarded in fiscal year 2017, was for a 5-year project to develop a foster and adoptive parent training program for all individuals interested in becoming a foster family or adopting a child from foster care or internationally. In response to our survey, many states reported that they found these federal supports helpful to their recruitment and retention efforts. For example, guidance and information, such as the electronic newsletters, publications, and webinars provided by the Center, were cited most often by states as being moderately or very helpful (31 states). 
Over half the states reported that networking opportunities, such as peer-to-peer networking events facilitated by the Center, and technical assistance provided by the Center were moderately or very helpful to their efforts (28 and 27 states, respectively). However, all 14 private providers in the three selected states raised concerns about communication with child welfare agencies, and several told us they have not received guidance or information from these agencies about recruiting and retaining foster families; most were also unaware of some of the supports provided by ACF. Specifically, 11 of the 14 private providers said they were unaware of the National Resource Center for Diligent Recruitment, and 7 told us that the information offered by the Center would have been useful to their recruitment efforts had they known about it. For example, one private provider told us they have been trying to use data to more effectively recruit foster families, and the Center’s resources on recruitment strategies and tools would have been helpful in these efforts. Another private provider said each private provider in their area conducts recruitment activities based on its own ideas and experiences, and the Center’s resources would have been helpful in ensuring that they use the most effective strategies. ACF officials said they encourage states to involve all relevant stakeholders in their efforts to recruit and retain foster families. They acknowledged that ACF has not provided specific guidance and information to states on working with private providers, but noted that some supports, such as online publications and webinars, are available to private providers working in the public sector. ACF officials explained that their efforts have focused on child welfare agencies because these are the entities that receive federal funds. 
However, federal internal control standards state that agencies should communicate necessary information, both internally and externally, to achieve their objectives. The mission statement for ACF’s Children’s Bureau is to partner with federal, state, tribal, and local agencies to improve the overall health and well-being of the nation’s children and families. According to its website, the Children’s Bureau carries out a variety of projects to achieve its goals, such as providing guidance on federal law, policy, and program regulations, offering training and technical assistance to improve child welfare service delivery, and sharing research to help child welfare professionals improve their services. Given that almost all states use private providers to help them recruit foster families, and that private providers may be responsible for providing supports to help retain these families, it is important for HHS to determine whether additional information on working more effectively with private providers would be useful to states. This could help HHS better achieve its goals in supporting states’ efforts to recruit and retain foster families. States face challenges recruiting and retaining foster care families and almost all states rely on private providers to help them meet the demand for appropriate foster families, particularly those who can provide therapeutic foster care services. However, private providers used by child welfare agencies in the three states where we conducted interviews raised concerns about the level of communication they have with these agencies. Such communication issues can affect the quality of services provided to support foster families, as well as the level of guidance and information private providers receive from child welfare agencies. 
Although HHS has provided various supports that states have found useful in their efforts to recruit and retain foster families, many of the private providers we spoke with were unaware of some supports that they said could have helped them. Given the important role private providers play in recruiting and retaining foster families, state feedback to HHS on whether child welfare agencies could benefit from information on how to work more effectively with private providers could help HHS determine whether it needs to take action to better support states’ use of private providers. GAO recommends that the Secretary of Health and Human Services seek feedback from states on whether information on effective ways to work with private providers to recruit and retain foster families would be useful and if so, provide such information. For example, HHS can seek feedback from states through technical assistance and peer-to-peer networking activities. If states determine that information would be useful, examples of HHS actions could include facilitating information sharing among states on successful partnerships between states and private providers and encouraging states to share existing federal guidance and information. (Recommendation 1) We provided a draft of this report to the Secretary of HHS for review and comment. HHS agreed with our recommendation and said it will explore with states whether additional materials specific to private providers would be useful. While HHS noted that it has no authority over private providers, it provided examples of ways the agency has supported states’ efforts to recruit and retain foster families and encouraged them to involve private providers in these efforts. We believe that seeking feedback from states on whether they would like information on effective ways to work with private providers would be a useful first step. 
With that information, HHS could then determine if additional supports are needed to help states meet the demand for appropriate foster families. A letter conveying HHS’s formal comments is reproduced in appendix II. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Health and Human Services, and other interested parties. The report will also be available at no charge on the GAO website at www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or larink@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report examines (1) how state child welfare agencies recruit foster families, including those who provide therapeutic foster care services, (2) challenges, if any, to recruiting and retaining families, and (3) the extent to which the U.S. Department of Health and Human Services (HHS) provides support to child welfare agencies in their efforts to recruit and retain foster families. To address our objectives, we administered a web-based survey of state child welfare agencies in the 50 states and the District of Columbia to obtain national information. To obtain more in-depth information, we interviewed child welfare officials, private providers, and foster parents in three selected states (California, Georgia, and Indiana). To obtain perspectives on providing therapeutic foster care services specifically, we conducted three discussion groups with private providers and foster parents at a national foster care conference. To develop our methodologies, we conducted a literature search related to foster care recruitment and retention, including for therapeutic foster care services, and we interviewed experts with a range of related research, policy, and direct service experience. 
To examine how HHS supports child welfare agencies in their efforts to recruit and retain foster families, we interviewed officials from HHS’s Administration for Children and Families (ACF), Centers for Medicare & Medicaid Services, Office of the Assistant Secretary for Planning and Evaluation, and Substance Abuse and Mental Health Services Administration. We reviewed relevant documents obtained in these interviews and other information available on HHS’s website, such as from the National Resource Center for Diligent Recruitment and the Child Welfare Information Gateway. We focused on HHS efforts from fiscal years 2012 through 2016. We also reviewed relevant federal laws, regulations, and HHS policies, as well as federal internal control standards. To obtain nationwide information on our objectives, we surveyed officials from state child welfare agencies in the 50 states and the District of Columbia. The survey was administered in September 2017, and we obtained a 100 percent response rate. The survey used a self-administered, web-based questionnaire, and state respondents received unique usernames and passwords. To develop the survey, we performed a number of steps to ensure the accuracy and completeness of the information collected, including an internal peer review by an independent GAO survey expert, a review by an external foster care expert, and pre-testing of the survey instrument. Pre-tests were conducted over the phone with child welfare officials in four states to check the clarity of the question and answer options, as well as the flow and layout of the survey. The states that participated in pre-testing were selected based on recommendations from foster care experts and variation in child welfare administration systems (i.e., state- versus county-administered) and use of private providers. We revised the survey based on the reviews and pre-tests. 
The survey was designed to gather information from state child welfare agencies rather than county-level child welfare agencies or private providers. As such, we included questions in the survey to ensure that respondents were knowledgeable about foster family recruitment and retention efforts if the state child welfare agency was not directly involved. Our survey included a range of fixed-choice and open-ended questions related to recruiting and retaining foster families, including those who provide therapeutic foster care services. These questions were grouped into six subsections that covered (1) the states’ administrative structure for recruiting and retaining foster families, including the use of private providers; (2) information on states’ recruitment and retention plans and the usefulness of various strategies in recruiting and retaining foster families; (3) challenges states face in their efforts; (4) perspectives on various federal supports in this area and any additional supports needed; (5) data collected and used in recruitment and retention efforts; and (6) oversight of county child welfare agencies and private providers, if applicable. To obtain our 100 percent response rate, we made multiple follow-up contacts by email and phone in September 2017 with child welfare officials who had not yet completed the survey. While all surveyed officials affirmatively checked “completed” at the end of the web-based survey, not all state child welfare agencies responded to every question or the sub-parts of every question. We conducted additional follow-up with a small number of state child welfare agencies to verify key responses. Because this was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as non-sampling errors. 
For example, unwanted variability can result from differences in how a particular question is interpreted, the sources of information available to respondents, or how data from respondents are processed and analyzed. We tried to minimize these factors through our reviews, pre-tests, and follow-up efforts. In addition, the web-based survey allowed state child welfare agencies to enter their responses directly into an electronic instrument, which created an automatic record for each state in a data file. By using the electronic instrument, we eliminated the errors associated with a manual data entry process. Lastly, data processing and programming for the analysis of survey results was independently verified to avoid any processing errors and to ensure the accuracy of this work. To gather more in-depth information representing a variety of perspectives on our objectives, we interviewed officials from three state and three county child welfare agencies, representatives from 14 private foster care providers working with these agencies, and foster parents working with 8 of these private providers in the three selected states (California, Georgia, and Indiana). The states were selected based on factors such as recent changes in foster care and congregate care caseloads, opioid abuse rates estimated by HHS in June 2016, variation in child welfare administration systems (i.e., state- versus county-administered), and geographic location. Interviews were conducted during in-person site visits in California and Indiana and via phone in Georgia. We used semi-structured interview protocols for child welfare agencies, private providers, and foster parents that included open-ended questions on the strategies and challenges in recruiting and retaining foster families and federal supports in this area, among other topics. We interviewed officials from state-level child welfare agencies in each of these states. 
In California, the only selected state with a county-administered child welfare system, we selected three counties—Los Angeles, Sacramento, and Sonoma—and conducted interviews with officials from the respective county-level child welfare agency. These counties were selected based on factors similar to those mentioned above as well as variation in population density (i.e., rural versus urban). In addition, we interviewed 14 private providers in the three selected states, including 3 private providers in California (1 in each county we visited), 4 in Georgia, and 7 in Indiana. Private providers were chosen for interviews from a list of all private providers working with state child welfare agencies to recruit foster families. This list was provided by child welfare officials from each selected state. We considered factors such as the number of foster families private providers worked with, their involvement in recruiting families who provide therapeutic foster care services, and geographic location. We interviewed foster parents working with 8 of the private providers mentioned above, including 2 groups of foster parents in California, 1 group in Georgia, and 5 groups in Indiana. Each of these groups included between one and three sets of foster parents (e.g., one foster parent or a couple). Due to the sensitivity of the topics discussed, we worked with private providers to identify foster parents who were able and willing to participate in interviews. We discussed several considerations for selecting foster parents, such as gathering parents with a range of experience providing foster care services to children in both traditional and therapeutic foster care settings. Because foster parents we interviewed self-selected to participate and were all working with private providers we interviewed, their views do not represent the views of all foster parents, such as those working directly with child welfare agencies. 
We also reviewed relevant documents that corroborated the information obtained in our interviews with child welfare agencies and private providers, such as recruitment plans, marketing materials, and child placement reports. Because we conducted interviews with a non-generalizable sample of child welfare officials, private providers, and foster parents, the information gathered in the three selected states is not generalizable. Although not generalizable, our selection methodologies provide illustrative examples to support our findings. To obtain information specifically about efforts to recruit and retain families who provide therapeutic foster care services, we conducted three discussion groups at a conference hosted by the Family Focused Treatment Association, a non-profit organization that aims to develop, promote, and support therapeutic foster care services. The conference was held in July 2017 in Chicago, Illinois. We held two discussion groups with representatives from 17 private providers and one discussion group with eight sets of foster parents. To solicit participants, we used email to invite all individuals who registered for the conference to participate in our discussion groups. These emails explained our objectives and potential discussion topics related to recruiting and retaining therapeutic foster care families. Participants who volunteered were sorted into the three groups. Discussion groups for private providers and foster parents were guided by a GAO moderator using semi-structured interview protocols. These protocols included open-ended questions that encouraged participants to share their thoughts and experiences on recruiting and retaining therapeutic foster care families, including strategies and challenges in these efforts, as well as differences in providing therapeutic versus traditional foster care services. 
Discussion groups are not designed to (1) demonstrate the extent of a problem or to generalize results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates. Instead, they are intended to generate in-depth information about the reasons for participants’ attitudes on specific topics and to offer insights into their concerns about and support for an issue. For these reasons, and because discussion group participants were self-selected volunteers, the results of our discussion groups are not generalizable. We conducted this performance audit from January 2017 to May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact name above, the following staff members made key contributions to this report: Elizabeth Morrison (Assistant Director); Nhi Nguyen (Analyst-in-Charge); Luqman Abdullah; Laura Gibbons; and Elizabeth Hartjes. Also contributing to this report were Sarah Cornetto; Tiffany Johnson Lapuebla; Cheryl Jones; Kirsten Lauber; Serena Lo; Hannah Locke; Mimi Nguyen; Samuel Portnow; Ronni Schwartz; Almeta Spencer; and Kathleen van Gelder.
|
Foster care caseloads have increased in recent years due, in part, to the national opioid epidemic. States have struggled to find foster families for children who can no longer live with their parents, including those who need TFC services. States may use private providers, such as non-profit and for-profit organizations, to help recruit and retain foster families. States may also use federal funds provided by HHS for these efforts. GAO was asked to review states' efforts to recruit and retain foster families. This report examines: (1) how state child welfare agencies recruit foster families, including those who provide TFC services, (2) any challenges in recruiting and retaining foster families, and (3) the extent to which HHS provides support to child welfare agencies in these efforts. GAO reviewed relevant federal laws, regulations, and guidance; interviewed HHS officials; surveyed child welfare agencies in all states and the District of Columbia; held discussion groups with private providers and foster parents who provide TFC services; and conducted interviews with officials in California, Georgia, and Indiana, which were selected for factors such as changes in foster care caseloads, opioid abuse rates, and geographic location. States employ a range of strategies to recruit foster families and nearly all use private providers to recruit, particularly for therapeutic foster care (TFC) services, in which parents receive training and support to care for children who need a higher level of care. Recruitment strategies include searching for relatives, conducting outreach to the community, targeting certain populations, and obtaining referrals from current foster families. In response to GAO's national survey, 49 states reported using private providers to recruit foster families. 
In the three selected states where GAO conducted interviews, private providers were responsible for both recruiting and retaining foster families, such as helping families become licensed and providing them with support (see fig.). States reported various challenges with recruiting and retaining foster families in response to GAO's survey. In recruiting families, over two-thirds of states reported challenges such as limited funding and staff, which can make prioritizing recruitment efforts difficult; extensive licensing processes; and difficulties finding families willing to care for certain children, such as those with high needs. In retaining families, 29 states reported concerns about inadequate support for foster families, which can include difficulties contacting child welfare agency caseworkers. In addition, 31 states reported limited access to services needed to care for children, such as child care. The U.S. Department of Health and Human Services (HHS) provides a number of supports to help states recruit and retain foster families, including technical assistance with their recruitment programs, guidance and information, and funding. Most states GAO surveyed found HHS's supports moderately or very helpful. However, several private providers GAO interviewed in three selected states said they have not received guidance or information from child welfare agencies about recruiting and retaining foster families. In addition, 11 of the 14 providers said they were unaware of related HHS supports and all of them described concerns about communication with child welfare agencies. HHS officials said they encourage states to involve all relevant stakeholders in their efforts, though HHS has focused on supporting child welfare agencies. 
Consistent with internal control standards on communication, determining whether information on working with private providers would be useful to states could help HHS better support states' use of private providers in efforts to recruit and retain foster families. GAO recommends HHS seek feedback from states on whether information on effective ways to work with private providers to recruit and retain foster families would be useful and if so, provide such information. HHS agreed with GAO's recommendation.
|
With the passage of the NDAA in December 2016, PLCY is to be led by an Under Secretary for Strategy, Policy, and Plans, who is appointed by the President with the advice and consent of the Senate. The Under Secretary is to report directly to the Secretary of Homeland Security. Prior to the NDAA, the office was headed by an assistant secretary. Since the passage of the act, the Under Secretary position has been vacant, and as of June 5, 2018, the President had not nominated an individual to fill the position. According to PLCY officials, elevating the head of the office to an under secretary was important because it equalizes PLCY with other DHS management offices and DHS headquarters components. The NDAA further authorizes, but does not require, the Secretary to establish a position of deputy under secretary within PLCY. If the position is established, the NDAA provides that the Secretary may appoint a career employee to the position (i.e., not a political appointee). In March 2018, the Secretary named a Deputy Under Secretary, who has been performing the duties of the Deputy Under Secretary and the Under Secretary since then. As shown in figure 1, PLCY is divided into five sub-offices, each with a different focus area. As of June 5, 2018, the top position in these sub-offices was an assistant secretary, and two of the five positions were vacant. As of June 5, 2018, 6 of PLCY’s 12 deputy assistant secretary positions were vacant or filled by acting staff temporarily performing the duties in the absence of permanent staff placement. The NDAA codified many of the functions and responsibilities that PLCY had been carrying out prior to the act’s enactment; with a few exceptions discussed later in this report, these were largely consistent with the duties the office was already pursuing. 
According to the act and PLCY officials, one of the office’s fundamental responsibilities is to lead, conduct, and coordinate departmentwide policy development and implementation, and strategic planning. According to PLCY officials, there are four categories of policy and strategy efforts that PLCY leads, conducts, or coordinates:

Statutory responsibilities: among others, the Homeland Security Act, as amended by the NDAA, includes such responsibilities as establishing standards of validity and reliability for statistical data collected by the department, conducting or overseeing analysis and reporting of such data, and maintaining all immigration statistical information of U.S. Customs and Border Protection, U.S. Immigration and Customs Enforcement, and U.S. Citizenship and Immigration Services; the Immigration and Nationality Act includes such responsibilities as providing for a system for collection and dissemination to Congress and the public of information useful in evaluating the social, economic, environmental, and demographic impact of immigration laws, and reporting annually on trends in lawful immigration flows, naturalizations, and enforcement actions.

Representing DHS in interagency efforts: coordinating or representing departmental policy and strategy positions for larger interagency efforts (e.g., interagency policy committees convened by the White House).

Secretary’s priorities: leading or coordinating efforts that correspond to the Secretary of Homeland Security’s priorities (e.g., certain immigration or law enforcement-related issues).

Self-initiated activities: opportunities to better harmonize policy and strategy or create additional efficiencies given PLCY’s ability to see across the department.

For example, PLCY officials said that DHS observed an increase in e-commerce and small businesses shipping items via carriers other than the U.S. Postal Service, thus exploiting a gap in DHS monitoring, which covers the U.S. 
Postal Service and other traditional shipping entities. PLCY officials noted that DHS’s interest in addressing e-commerce issues occurred just before opioids and other controlled substances were being mailed through small businesses and the U.S. Postal Service. As a result, PLCY developed an e-commerce strategy for, among other things, the shipping of illegal items and how to provide information to U.S. Customs and Border Protection before parcels are shipped to the United States from abroad. In accordance with the NDAA, as PLCY leads, conducts, and coordinates policy and strategy, it is to do so in a manner that promotes and ensures quality, consistency, and integration across DHS and applies risk-based analysis and planning to departmentwide strategic planning efforts. The NDAA further provides that all component heads are to coordinate with PLCY when establishing or modifying policies or strategic planning guidance to ensure consistency with DHS’s policy priorities. In addition to the roles PLCY plays that are directly related to leading, conducting, and coordinating policy and strategy, the office is responsible for select operational functions. For example, PLCY is charged with operating the REAL ID and Visa Waiver Programs. The NDAA also conferred responsibilities to PLCY that had not been responsibilities of the DHS Office of Policy prior to the NDAA’s enactment. Among other things, the NDAA charged PLCY with responsibility for establishing standards of reliability and validity for statistical data collected and analyzed by the department, and ensuring the accuracy of metrics and statistical data provided to Congress. In conferring this responsibility, the act also transferred to PLCY the maintenance of all immigration statistical information of the U.S. Customs and Border Protection, U.S. Immigration and Customs Enforcement, and U.S. Citizenship and Immigration Services. 
PLCY has established five performance goals:

build departmental policy-making capacity and coordination, and foster the Unity of Effort;

mature the office as a mission-oriented, component-focused organization that is responsive to DHS leadership;

effectively engage and leverage stakeholders;

enhance productivity and effectiveness of policy personnel through appropriate alignment of knowledge, skills, and abilities; and

accountability, transparency, and leadership.

PLCY officials stated that the office established the performance goals in fiscal year 2015 and that they were still in effect as of fiscal year 2018. As previously discussed, DHS has eight operational components. DHS also has six support components. Although each one has a distinct role to play in helping to secure the homeland, there are operational and support functions that cut across mission areas. For example, nearly every operational component has, as part of its security operations, a need for screening, vetting, and credentialing procedures and risk-targeting mechanisms. Likewise, nearly all operational components have some form of international engagement, deploying staff abroad to help secure the homeland before threats reach U.S. borders. Finally, as shown in figure 2, different aspects of broad mission areas fall under the purview of more than one DHS operational component. PLCY is responsible for coordinating three key DHS strategic efforts: the QHSR, the DHS Strategic Plan, and the Resource Planning Guidance. The QHSR is a comprehensive examination of the homeland security strategy of the nation that is to occur every 4 years and include recommendations regarding the long-term strategy and priorities for homeland security of the nation and guidance on the programs, assets, capabilities, budget, policies, and authorities of DHS. 
The QHSR is to be conducted in consultation with the heads of other federal agencies, key DHS officials (including the Under Secretary, PLCY), and key officials from other relevant governmental and nongovernmental entities. The DHS Strategic Plan describes how DHS can accomplish the missions it identifies in the QHSR report, identifies high-priority mission areas within DHS, and lays the foundation for DHS to accomplish its Unity of Effort Initiative as well as various cross-agency priority goals in the strategic plan, such as cybersecurity. The Resource Planning Guidance describes DHS’s annual resource allocation process in order to execute the missions and goals of the QHSR and DHS Strategic Plan. The Resource Planning Guidance contains guidance over a 5-year period and informs several forward-looking reports to Congress, including the annual fiscal year Congressional Budget Justification as well as the Future Years Homeland Security Program Report. Although PLCY has effectively carried out key coordination functions at the senior level related to strategy, PLCY’s ability to lead and coordinate policy has been limited due to ambiguous roles and responsibilities and a lack of predictable, accountable, and repeatable procedures. According to our analysis and interviews with operational components, PLCY’s efforts to lead and coordinate departmentwide and crosscutting strategies—a key organizational objective—have been effective in providing opportunities for all relevant stakeholders to learn about and contribute to departmentwide or crosscutting strategy development. In this role, PLCY routinely serves as the executive agent for the Deputies Management Action Group and the Senior Leaders Council, which involve analytical and coordination support. PLCY also provides support for deputy- and principal-level decision making. 
For example, the Strategy and Policy Executive Steering Committee (S&P ESC) meetings have been used to discuss components’ implementation plans for crosscutting strategies, PLCY’s requests for information from components for an upcoming strategy, and updates on departmentwide strategic planning initiatives. According to PLCY and operational component officials, PLCY also provides leadership for the Resource Planning Guidance and Winter Studies, both of which help inform departmentwide resource decision-making. For example, officials from one operational component stated that PLCY’s leadership of the Resource Planning Guidance is a helpful practice for coordination and collaboration on departmentwide or crosscutting strategies. The officials stated that PLCY reaches out to ensure that the component is covering the Secretary’s priorities and this helps the component to ensure that its budget includes them. Furthermore, PLCY develops and coordinates policy options and opinions for the Secretary to present at the National Security Council and other White House-level meetings. For example, PLCY officials told us that, in light of allegations of Russian involvement in using poisonous nerve agents on two civilians in Great Britain, PLCY coordinated the collection of information to develop a policy recommendation for the Secretary to present at a National Security Council meeting. PLCY has encountered challenges leading and coordinating efforts to develop, update, or harmonize policy—also a key organizational objective—because it does not have clearly-defined roles, responsibilities, and mechanisms to implement these responsibilities in a predictable, repeatable, and accountable way. Standards for Internal Control in the Federal Government states that management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity’s objectives. 
As such, an organization’s management should develop an organizational structure with an understanding of the overall responsibilities and assign these responsibilities to discrete units to enable the organization to operate in an efficient and effective manner. An organization’s management should also implement control activities through policies. It is important that an organization’s management document and define policies and communicate those policies and procedures to personnel, so they can implement control activities for their assigned responsibilities. In addition, leading collaboration practices we have identified in our prior work include defining and articulating a common outcome, clarifying roles and responsibilities, and establishing mutually-reinforcing or joint strategies to enhance and sustain collaboration, such as the work that PLCY and the components need to do together to ensure that departmentwide and crosscutting policy is effective for all relevant parties. According to PLCY officials, in general, PLCY is responsible for leading the development of a policy when it crosses multiple components or if there is a national implication, including White House interest in the policy. However, PLCY officials acknowledged that this practice does not always make them the lead and there are no established criteria that define the circumstances under which PLCY (or another organizational unit) should lead development of policies that cut across organizational boundaries. PLCY officials said the lead entity for a policy is often announced in an email from the Secretary’s office, on a case-by-case basis. According to PLCY officials, once components have been assigned responsibility for a policy, they have generally tended to retain it, and PLCY may not have oversight for crosscutting policies that are maintained by operational components. 
Therefore, there is no established, coordinated system of oversight to periodically monitor the need for policy harmonization, revision, or rescission. In the absence of clear roles and responsibilities, and processes and procedures to support them, PLCY and officials in 5 of the 8 components have encountered challenges in coordinating with each other. Although PLCY and most component officials we interviewed described overall positive experiences in coordinating with each other, we identified multiple instances of (1) confusion about which parties should lead and engage in policy efforts, (2) not engaging components at the right times, (3) incompatible expectations around timelines, and (4) uncertainty about PLCY’s role and the extent to which it can and should identify and drive policy in support of a more cohesive DHS. Confusion about who should lead and engage. Officials from one operational component told us that they were tasked with leading a departmentwide policy development effort they believed was outside their area of responsibility and expertise. Officials in another operational component stated that components sometimes end up coordinating among themselves, but that policy development could be more effective and efficient if PLCY took the role of convener and facilitator to ensure the departmentwide perspective is present and all relevant stakeholders participate. Officials from a third component stated that they spent significant time and resources to develop a policy directly related to their component’s mission. As the component got ready to implement the policy, PLCY became aware of it and asked the component to stop working on the policy, so PLCY could develop a departmentwide policy. According to component officials, while they were supportive of a departmentwide policy, PLCY’s timing delayed implementation of the policy the component had developed and wasted the resources it had invested. 
Moreover, officials from four operational components told us that sometimes counselors from outside PLCY, such as the Secretary’s office, have led policy efforts that seem like they should be PLCY’s responsibility, which created more confusion about what PLCY’s ongoing role should be. PLCY officials agreed that, at times, it has been challenging to define PLCY’s role relative to counselors for the Secretary, and acknowledged that clear guidance to define who is leading which types of policy development and coordination would be helpful. Not engaging components at the right times. Officials from 5 of 8 operational components told us that they had not always been engaged at the right times by PLCY in departmentwide or crosscutting policies that affected their missions. For example, officials from an operational component described a crosscutting policy that had significant implications for some of its key operational resources, but the component was not made aware of the policy until it was about to be presented at the White House. Officials from another component stated that they learned of a new policy after it was in place and had to find significant training and software resources to implement it even though they viewed the policy as unnecessary for their mission. PLCY officials stated that, while they intend to identify all components that should be involved in a policy, there are times when PLCY is unaware a component is developing a policy that affects other components. PLCY officials said they will involve other components when PLCY becomes aware that a component is developing such a policy. PLCY officials stated that it would be helpful to have a process and procedures for cross-component coordination on policies to help guide engagement regardless of who is developing the policy. Incompatible expectations around timelines. 
Officials at 4 of 8 operational components stated that short timelines from PLCY to provide input and feedback can prevent PLCY from obtaining thoughtful and complete information from components. For example, officials from one component stated that PLCY asked them to perform an analysis that would inform major departmental decision-making and to provide the analysis quickly. Component officials told us that they did not understand why PLCY needed the analysis on such an accelerated timeline, which seemed inappropriate given the level of importance and purpose of the analysis. Officials from another component told us that PLCY had not always provided enough time to provide thoughtful feedback; therefore, component officials were not sure if PLCY really wanted their feedback. Officials from a third component stated that sometimes PLCY did not provide sufficient time for thoughtful input or feedback that had cleared the component’s legal review, so component officials elected to miss PLCY’s deadline and provide late feedback. PLCY officials told us that, frequently, timelines are not within their control, a situation that some component officials also noted during our interviews with them. However, PLCY officials agreed that a documented, predictable, and repeatable process and procedures for policies may help ensure PLCY provides sufficient comment time when in its control and may provide a basis to help negotiate timelines with DHS leadership in other situations. PLCY officials stated that, even with a documented process and procedures, there would still be circumstances when short timelines are unavoidable. Uncertainty about PLCY’s role in driving policy harmonization. 
Policy officials at 6 of 8 operational components told us that they were unsure or not aware of PLCY’s role in harmonizing policy across the department, and stated a desire for PLCY to be more involved in harmonizing or enhancing departmentwide and crosscutting policy or for greater clarity about PLCY’s responsibility to play this role. As previously discussed, PLCY’s policy and strategy efforts fall into four categories—statutory responsibilities, interagency efforts, Secretary’s priorities, and self-initiated activities; these activities include efforts to better harmonize policies and strategies. According to PLCY officials, the category with the lowest priority is self-initiated activities. PLCY officials stated that PLCY makes tradeoffs and rarely chooses to work on self-initiated projects over its other three categories of effort. According to the officials, PLCY’s work on the other three higher-priority categories is sufficient to ensure that the office is effectively leading, conducting, and coordinating strategy and policy across the department. Given its organizational position and strategic priorities, PLCY is uniquely situated to identify opportunities to better harmonize or enhance departmentwide and crosscutting policy, a role that is in line with its strategic priority to build departmental policymaking capacity and foster Unity of Effort. In the absence of clear articulation of the department’s expectations for PLCY in this role, it is difficult for PLCY and DHS leadership to make completely informed and deliberate decisions about the tradeoffs they make across any available resources. In addition to statutory authority that PLCY received in the NDAA, PLCY officials stated that a separate, clear delegation of authority—a mechanism by which the Secretary delegates responsibilities to other organizational units within DHS—is needed to help confront the ambiguous roles it has experienced in the past. 
PLCY officials stated that past efforts to finalize a delegation of authority have stalled during leadership changes and that the initiative has been a lower priority, in part, due to where PLCY is in its maturation process and DHS is in its evolution into a more cohesive department under the Unity of Effort. As of May 2018, the effort had been revived, but it is not clear whether and when DHS will finalize it. According to a senior official in the Office of the Under Secretary for Management, a delegation of authority is important for PLCY. He described the creation of a delegation of authority as a process that does more than simply delegate the Secretary’s authority. He noted that defining PLCY’s roles and responsibilities in relation to other organizational units presents an opportunity to engage all relevant components and agree on appropriate roles. He said that, earlier in the organizational life of the Office of the Under Secretary for Management, it went through a process like this, which has been vital to its ability to carry out its mission. He said that now that PLCY has a deputy under secretary in place, this is a good time to restart the process to develop the delegation of authority. Until the delegation or a similar process clearly and fully articulates PLCY’s roles and responsibilities, PLCY and the operational components are likely to continue to experience limitations in collaboration on crosscutting and departmentwide policy. PLCY determines its workforce needs through the annual budget process, but systematic identification of workforce demand, capacity gaps, and strategies to address them could help ensure that PLCY’s workforce aligns with its and DHS’s priorities and goals. 
To determine its workforce needs each year, PLCY officials told us that, as part of the annual budget cycle, they work with PLCY staff and operational components to determine the scope of activities required for each PLCY area of responsibility and the associated staffing needs. PLCY officials said there are three skill sets needed to carry out the office’s responsibilities: policy analysis, social science analysis, and regional affairs analysis. PLCY officials explained that the office’s priorities can change rapidly as events occur and the Secretary’s and administration’s priorities shift. Therefore, according to PLCY officials, their staffing model must be flexible. They said that, rather than a defined system of full-time equivalents with set position types and levels, PLCY officials start with their budget allotment and consider current and potential emerging needs to set position types and levels, which may fluctuate significantly from year to year. In addition, PLCY officials stated that PLCY staff are primarily generalists and, given the versatility in skill sets of their workforce, PLCY has a lot of flexibility to move staff around if there is an emerging need. For example, if there is an emerging law enforcement issue that affects all law enforcement agencies, PLCY may be tasked with developing a policy to ensure the issue is addressed quickly and that the resulting policy is harmonized across the department and with other law enforcement agencies, such as the Department of Justice. While PLCY completes some workforce planning activities as part of its annual budgeting process, PLCY does not systematically address several aspects of the DHS Workforce Planning Guide that may create more efficient operations and greater alignment with DHS priorities. 
According to the DHS Workforce Planning Guide, workforce planning is a process that ensures the right number of people with the right skills are in the right jobs at the right time for DHS to achieve the mission. This process provides a framework to: (1) align workforce planning to the department’s mission and goals; (2) predict, then assess, how evolving missions, new processes, or environmental conditions may impact the way that work will be performed at DHS in the future; (3) identify gaps in capacity; (4) develop and implement strategies and action plans to address capacity and capability gaps; and (5) continuously monitor the effectiveness of action plans and modify them, as necessary. The DHS Workforce Planning Guide stipulates that an organization’s management should not only lead and show support during the workforce planning process, but ensure alignment with the strategic direction of the agency. Moreover, Standards for Internal Control in the Federal Government states that management should use quality information to achieve the entity’s objectives. For example, management uses an entity’s operational processes to make informed decisions and evaluate the entity’s performance in achieving key agency objectives. According to PLCY officials, the current staffing paradigm involves shifting the office’s staff when new and urgent issues arise from the Secretary or White House, and adding these unexpected tasks to staff’s existing responsibilities. However, this means that tradeoffs are made, resulting in some priority items taking longer to address or not getting attention at all. PLCY officials stated that they have been caught off-guard at times by changes in demands placed on PLCY and had to scramble to address the new needs. Additionally, PLCY officials said they have a number of vacancies, which hamper the office’s ability to meet certain aspects of its mission. For example, PLCY’s Office of Cyber, Infrastructure, and Resilience was created in 2015. 
According to PLCY officials, PLCY has had some resources to address cyber issues; however, there has not been funding to staff this office and an assistant secretary has not been appointed to lead it. Therefore, PLCY officials stated that PLCY has not been able to address its responsibilities for infrastructure resilience. Similarly, PLCY has limited capacity for risk analysis. A provision of the NDAA provides that PLCY is to: develop and coordinate strategic plans and long-term goals of the department with risk-based analysis and planning to improve operational mission effectiveness, including consultation with the Secretary regarding the quadrennial homeland security review under section 707 [6 U.S.C. § 347]. However, PLCY officials acknowledged that their focus on identifying needs for risk analyses and conducting them has been limited, in part, because DHS disbanded the risk management office. Officials from one component told us that they contribute to a report that PLCY coordinates, called Homeland Security National Risk Characteristics, which is prepared as a precursor to the DHS Strategic Plan. PLCY officials stated that, outside of these foundational documents and some risk-based analyses completed as part of specific policy development efforts, PLCY does not have the capacity to complete any additional risk analysis activities. Although PLCY officials said they conduct some analysis of potential demands as a starting point for how to allocate PLCY’s annual staffing budget, these efforts are largely informal and internal and have not resulted in a systematic analysis that provides PLCY and DHS management with the information they need to understand the effects of resource tradeoffs. Also, PLCY officials said they track accomplishments toward PLCY’s strategic priorities as part of a weekly meeting and report; however, officials acknowledged they do not analyze what role workforce decisions have played in achieving or not achieving strategic priorities. 
Moreover, although PLCY officials stated that they have intermittent, in-person, informal communication about resource use, they have not used the principles outlined in the DHS Workforce Planning Guide to systematically identify and communicate workforce demands, capacity gaps, and strategies to address workforce issues. According to PLCY officials, they have not conducted such analysis, in part, because the Secretary’s office has not requested it of them or the other DHS offices that are funded in the same part of the DHS budget. Regardless of whether the Secretary expects workforce analysis as part of the budgeting process, the DHS Workforce Planning Guide could be used within and outside of the budgeting process to help inform resource decision making throughout the year. PLCY officials stated that, at the PLCY Deputy Under Secretary’s initiative, they recently began a review of all relevant statutory authorities, which they will map against the current organizational structure and day-to-day operations. The Deputy Under Secretary plans to use the results of the review to enhance PLCY’s efficiency and effectiveness, and the results could serve as a foundation for a more holistic and systematic analysis of workforce demand, any capacity gaps, and strategies to address them. Employing workforce planning principles—in particular, systematic identification of workforce demand, capacity gaps, and strategies to address them—consistent with the DHS Workforce Planning Guide could better position PLCY to use its workforce as effectively as possible under uncertain conditions. Moreover, using the DHS guide would help PLCY to systematically communicate information about any workforce gaps to DHS leadership, so there is transparency about how workforce tradeoffs affect PLCY’s ability to support DHS goals. 
As discussed earlier, officials from PLCY and DHS operational components praised existing mechanisms to coordinate and communicate at the senior level, especially about strategy. However, component officials identified opportunities for PLCY to better connect at the staff level to identify and respond to emerging policy and strategy needs. Leading practices for collaboration that we have identified in our prior work state that it is important to ensure that all relevant participants have been included in a collaborative effort, and positive working relationships among participants from different agencies or offices can bridge organizational cultures. These relationships build trust and foster communication, which facilitate collaboration. Also, as previously stated, PLCY has mechanisms like the S&P ESC to communicate and coordinate with operational components and other DHS stakeholders at the senior level (e.g., Senior Executive Service officials). However, PLCY does not have a mechanism to effectively engage in routine communication and collaboration at the staff level (e.g., program and policy specialists working at operational components to oversee or implement policy and strategy functions). Specifically, officials with responsibility for policy and strategy at 6 of 8 operational components told us that they did not have regular contact with or know who to contact at PLCY for questions about policies or strategies, or that the reason they knew who to contact was because of existing working relationships, not because of efforts PLCY had undertaken to facilitate such contacts. In addition, some component officials noted that, when they tried to use the PLCY website to coordinate, they found it to be out of date and lacking sufficient information. PLCY officials acknowledged that the website needs improvement. They stated that the office has developed improved content for the website, but does not have the necessary staff to update the website. 
According to the officials, the needed staff should be hired soon and improved content should be on the website by the end of summer 2018. Although officials at 5 of the 8 operational components we interviewed stated that the quality of PLCY’s coordination and collaboration has improved in the past 2 years or so, component officials offered several suggestions to enhance PLCY’s coordination and collaboration, especially at the staff level. Among these were: (1) conduct routine information sharing meetings with staff-level officials who have policy and strategy responsibilities at each operational component; (2) clearly articulate points of contact, their contact information, and their portfolios at PLCY as well as at other policy and strategy stakeholders; (3) ensure the PLCY website is up-to-date with contact information for PLCY and components that work in strategy and policy areas, and with relevant information about crosscutting strategy and policy initiatives underway; (4) host a forum—such as an annual conference—to bring together policy and strategy officials from PLCY and DHS components to share ideas and make contacts; and (5) prepare a standard briefing for component officials with strategy and policy responsibilities to help ensure that staff at all levels understand what PLCY does, how it works, and opportunities for engagement on emerging policy and strategy needs or identified harmonization opportunities. For example, officials from one component told us that they would like PLCY officials to have in-person meetings with component staff to discuss what PLCY does, who to contact in PLCY, where to find information about policies and strategies, and other relevant information to ensure a smooth working relationship between the component and PLCY. According to PLCY officials, the office recognizes the value of creating mechanisms to connect staff who work on policy and strategy at all levels in DHS. 
PLCY officials said they have historically done a better job in coordinating at the senior level, but are interested in expanding opportunities to connect other staff with policy and strategy responsibilities. PLCY officials stated that they are considering creating a working group structure that mirrors existing organizational mechanisms to coordinate at the senior level, but have not taken steps to do so. Routine collaboration among PLCY, operational components, and other DHS offices at the staff level is important to ensure that PLCY is able to carry out its functions under the NDAA, including the effective coordination of policies and strategies. A positive working relationship among these stakeholders can build trust, foster communication, and facilitate collaboration. Such enhanced communication and collaboration across PLCY and among component officials with policy and strategy responsibility could help the department more quickly and completely identify emerging, crosscutting strategy and policy needs and opportunities to enhance policy harmonization. PLCY’s efforts to lead, conduct, and coordinate departmentwide and crosscutting policies have sometimes been hampered by the lack of clearly-defined roles and responsibilities. In addition, PLCY does not have a consistent process and procedures for its strategy development and policymaking efforts. Without a delegation of authority or similar documentation from DHS leadership clearly articulating PLCY’s missions, roles, and responsibilities—along with defined processes and procedures to carry them out in a predictable and repeatable manner—there is continuing risk that confusion and uncertainty about PLCY’s authority, missions, roles, and responsibilities will limit its effectiveness. PLCY employs some workforce planning, but does not systematically apply key principles of the DHS Workforce Planning Guide to help predict workforce demand, and identify any workforce gaps and design strategies to address them. 
Without this analysis, PLCY faces limitations in ensuring that its workforce is aligned with its and DHS’s priorities and goals. Moreover, the results of this analysis would better position PLCY to communicate to DHS leadership any potential tradeoffs in workforce allocation that would affect PLCY’s ability to meet priorities and goals. PLCY could enhance its use of mechanisms for collaboration and communication with DHS stakeholders at the staff level. Implementation of additional mechanisms at the staff level for regular communication and coordination, including providing up-to-date information to stakeholders about the office, could help PLCY and operational components to better connect in order to identify and address emerging policy and strategy needs. We are making the following four recommendations to DHS: The Secretary of Homeland Security should finalize a delegation of authority or similar document that clearly defines PLCY’s mission, roles, and responsibilities relative to DHS’s operational and support components. (Recommendation 1) The Secretary of Homeland Security should create corresponding processes and procedures to help implement the mission, roles, and responsibilities defined in the delegation of authority or similar document to help ensure predictability, repeatability, and accountability in departmentwide and crosscutting strategy and policy efforts. (Recommendation 2) The Under Secretary for Strategy, Policy, and Plans should use the DHS Workforce Planning Guide to help identify and analyze any gaps in PLCY’s workforce, design strategies to address any gaps, and communicate this information to DHS leadership. (Recommendation 3) The Under Secretary for Strategy, Policy, and Plans should enhance the use of collaboration and communication mechanisms to connect with staff in the components with responsibilities for policy and strategy to better identify and address emerging needs. 
(Recommendation 4) We provided a draft of this report for review and comment to DHS. DHS provided written comments, which are reproduced in appendix I. DHS also provided technical comments, which we incorporated, as appropriate. DHS concurred with three of our recommendations and described actions planned to address them. DHS did not concur with one recommendation. Specifically, DHS did not concur with our recommendation that PLCY should use the DHS Workforce Planning Guide to help identify and analyze any gaps in PLCY’s workforce, design strategies to address any gaps, and communicate this information to DHS leadership. The letter described a number of actions, including actions that are also described in the report, which PLCY takes to help ensure alignment of its staff with organizational needs. In the letter, PLCY officials pointed to the workforce activities PLCY undertakes as part of the annual budgeting cycle. We acknowledge that the actions described to predict upcoming priorities and resource needs as part of the annual budgeting cycle are in line with the DHS workforce planning principles. However, as we noted, there are opportunities to apply the workforce planning principles outside the annual budgeting cycle to provide greater visibility and awareness of resource tradeoffs to management inside PLCY and in the Secretary’s office. In the letter, PLCY officials made note of the dynamic and changing nature of its operational environment, stating that it often required them to shift resources and priorities on a more frequent or ad hoc basis than many organizations. We acknowledged in the report that PLCY’s operating environment requires it to maintain flexibility in its staffing approach. 
However, PLCY has a number of important duties, including helping foster Unity of Effort throughout the department and helping to ensure the availability of risk information for departmental decision making, that require longer-term, sustained attention and strategic management. During interviews, PLCY officials acknowledged that striking a balance between these needs has been difficult and at times they have faced significant struggles. The report describes some areas where, during the time we were conducting our work, it was clear that some tasks and functions, such as risk analyses, lacked the resources or focus necessary to ensure they received sustained institutional attention. It is because of PLCY’s dynamic operating environment, coupled with the need for sustained institutional attention to other key responsibilities, that we recommended PLCY undertake workforce planning activities that would help generate better information for PLCY and DHS management to have full visibility and awareness of gaps and resource tradeoffs. Finally, the letter stated that because PLCY is a very small and flat organization, it is able to identify capacity gaps and develop action plans without obtaining all of the data collected through each recommended element, worksheet, form, and template of the model proposed in the DHS Workforce Planning Guide. We acknowledge that it would be counterproductive for PLCY to engage in data collection and analysis that are significantly more elaborate than its planning needs. Nevertheless, we continue to believe that PLCY could use the principles more robustly, outside the annual budgeting process, to help ensure that it identifies and communicates the effect that resource tradeoffs have on its ability to accomplish its multifaceted mission. We are sending copies of this report to the appropriate congressional committees and the Secretary of Homeland Security. 
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (404) 679-1875 or CurrieC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in Appendix II. In addition to the contact named above, Kathryn Godfrey (Assistant Director), Joseph E. Dewechter (Analyst-in-Charge), Michelle Loutoo Wilson, Ricki Gaber, Dominick Dale, Thomas Lombardi, Ned Malone, David Alexander, Sarah Veale, and Michael Hansen made key contributions to this report.
GAO has designated DHS management as high risk because of challenges in building a cohesive department. PLCY supports cohesiveness by, among other things, coordinating departmentwide policy and strategy. In the past, however, questions have been raised about PLCY's efficacy. In December 2016, the NDAA codified PLCY's organizational structure, roles, and responsibilities. GAO was asked to evaluate PLCY's effectiveness. This report addresses the extent to which (1) DHS established an organizational structure and processes and procedures that position PLCY to be effective, (2) DHS and PLCY have ensured alignment of workforce with priorities, and (3) PLCY has engaged relevant component staff to help identify and respond to emerging needs. GAO analyzed the NDAA, documents describing specific responsibilities, and departmentwide policies and strategies. GAO also interviewed officials in PLCY and all eight operational components. According to our analysis and interviews with operational components, the Department of Homeland Security's (DHS) Office of Strategy, Policy, and Plans' (PLCY) organizational structure and efforts to lead and coordinate departmentwide and crosscutting strategies—a key organizational objective—have been effective. For example, PLCY's coordination efforts for a strategy and policy executive steering committee have been successful, particularly for strategies. However, PLCY has encountered challenges leading and coordinating efforts to develop, update, or harmonize policies that affect multiple DHS components. In large part, these challenges are because DHS does not have clearly defined roles and responsibilities with accompanying processes and procedures to help PLCY lead and coordinate policy in a predictable, repeatable, and accountable manner. 
Until PLCY's roles and responsibilities for policy are more clearly defined and corresponding processes and procedures are in place, situations where the lack of clarity hampers PLCY's effectiveness in driving policy are likely to continue. Development of a delegation of authority, which involves reaching agreement about PLCY's roles and responsibilities and clearly documenting them, had been underway. However, it stalled due to changes in department leadership. As of May 2018, the effort had been revived, but it is not clear whether and when DHS will finalize it. PLCY does some workforce planning as part of its annual budgeting process, but does not systematically apply key principles of the DHS Workforce Planning Guide to help ensure that PLCY's workforce aligns with its and DHS's priorities and goals. According to PLCY officials, the nature of its mission requires a flexible staffing approach. As such, a portion of the staff functions as generalists who can be assigned to meet the needs of different situations, including unexpected changing priorities due to an emerging need. However, shifting short-term priorities requires tradeoffs, which may divert attention and resources from longer-term priorities. As of June 5, 2018, PLCY also had a number of vacancies in key leadership positions, which further limited attention to certain priorities. According to PLCY officials, PLCY recently began a review to identify the office's authorities in the National Defense Authorization Act for Fiscal Year 2017 (NDAA) and other statutes, compare these authorities to the current organization and operations, and address any workforce capacity gaps. 
Employing workforce planning principles—in particular, systematic identification of workforce demand, capacity gaps, and strategies to address them—consistent with the DHS Workforce Planning Guide could better position PLCY to use its workforce as effectively as possible under uncertain conditions and to communicate effectively with DHS leadership about tradeoffs. Officials from PLCY and DHS operational components praised existing mechanisms to coordinate and communicate at the senior level, especially about strategy, but component officials identified opportunities to better connect PLCY and component staff to improve communication flow about emerging policy and strategy needs. Among the ideas offered by component officials to enhance communication and collaboration were holding routine small-group meetings, creating forums for periodic knowledge sharing, and maintaining accurate and up-to-date contact information for all staff-level stakeholders. GAO is making four recommendations. DHS concurred with three recommendations, including that DHS finalize a delegation of authority defining PLCY's roles and responsibilities and develop corresponding processes and procedures. DHS did not concur with a recommendation to apply the DHS Workforce Planning Guide to identify and communicate workforce needs. GAO believes this recommendation is valid as discussed in the report.
In fiscal year 2016, Medicaid covered an estimated 72.2 million low-income and medically needy individuals in the United States, and estimated Medicaid expenditures totaled over $575.9 billion. The federal government matches most state expenditures for Medicaid services on the basis of a statutory formula. States receive higher federal matching rates for certain services or populations, including an enhanced matching rate for Medicaid expenditures for individuals who became eligible for Medicaid under PPACA. Of the $575.9 billion in estimated expenditures for 2016, the federal share totaled over $363.4 billion and the states’ share totaled $212.5 billion. The Centers for Medicare & Medicaid Services (CMS)—a federal agency within the Department of Health and Human Services (HHS)—and states jointly administer and fund the Medicaid program. States have flexibility within broad federal requirements to design and implement their Medicaid programs. States must submit a state Medicaid plan to CMS for review and approval. A state’s approved Medicaid plan outlines the services provided and the groups of individuals covered. While states must cover certain mandatory populations and benefits, they have the option of covering other categories of individuals and benefits. PPACA permitted states to expand coverage to a new population—non-elderly, non-pregnant adults who are not eligible for Medicare and whose income does not exceed 138 percent of the FPL. This expansion population comprised 20 percent of total Medicaid enrollment in 2017. (See fig. 1.) As of December 2017, 31 states and the District of Columbia had expanded Medicaid eligibility to the new coverage population allowed under PPACA and 19 states had not. Figure 2, an interactive map, illustrates states’ Medicaid expansion status. See appendix II for additional information on figure 2. According to the NHIS estimates, 5.6 million low-income adults were uninsured in 2016. 
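The statutory formula referred to above is the regular federal medical assistance percentage (FMAP). The report does not reproduce it, so the sketch below is context drawn from section 1905(b) of the Social Security Act rather than from this report; the function name and inputs are illustrative:

```python
def fmap(state_pci: float, national_pci: float) -> float:
    """Regular federal matching rate (FMAP) for Medicaid services.

    Statutory formula: FMAP = 1 - 0.45 * (state per capita income /
    national per capita income)^2, subject to a statutory floor of
    50 percent and a ceiling of 83 percent.
    """
    raw = 1.0 - 0.45 * (state_pci / national_pci) ** 2
    return min(max(raw, 0.50), 0.83)
```

A state at exactly the national per capita income receives 55 percent; lower-income states receive more, up to the 83 percent ceiling. The enhanced rate for the PPACA expansion population (100 percent through 2016, phasing down to 90 percent) is set separately in statute.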
Of these 5.6 million, an estimated 1.9 million uninsured, low-income adults resided in expansion states, compared with an estimated 3.7 million in non-expansion states. Estimates of uninsured, low-income adults comprised less than 1 percent of the total population for all expansion states and 3 percent of the total population for all non-expansion states. NHIS estimates also showed that over half of uninsured, low-income adults were male, over half were employed, and over half had incomes less than 100 percent of the FPL. For some demographic characteristics, there were statistically significant differences between uninsured, low-income adults in expansion states and these adults in non-expansion states. For example, expansion states had significantly larger percentages of uninsured, low-income males than non-expansion states. (See table 1.) See table 6 in appendix III for additional demographic characteristics of uninsured, low-income adults. Estimates from the 2016 NHIS showed some statistically significant differences in the health status of uninsured, low-income adults in expansion and non-expansion states. In particular, expansion states had a larger percentage of these adults who reported that their health was “good” and a smaller percentage who reported their health as “fair or poor” than those in non-expansion states. However, the percentages of uninsured, low-income adults with responses of “excellent or very good” in both expansion and non-expansion states were large—47 percent or larger—and the differences between the two groups of states were not statistically significant. (See fig. 3.) See table 7 in appendix III for additional information about the health status of uninsured, low-income adults. 
The 2016 NHIS estimates showed that smaller percentages of low-income adults in expansion states reported having any unmet medical needs compared with those in non-expansion states, and smaller percentages of those who were insured reported having any unmet medical needs compared with those who were uninsured, regardless of where they lived. For example: Low-income adults in expansion and non-expansion states.

Access to Health Care: Measuring Any Unmet Medical Needs. The National Center for Health Statistics, the federal agency that conducts the National Health Interview Survey (NHIS), developed a composite measure on any unmet medical needs, which was based on six survey questions on respondents’ ability to afford different types of needed health care services. These questions asked whether in the past 12 months respondents could not afford medical care at any time; delayed seeking medical care due to worries about costs; or could not afford needed prescription drugs, mental health or counseling, dental care, or eyeglasses.

percent or less of the low-income adults who had Medicaid or private health insurance in expansion or non-expansion states reported having any unmet medical needs, compared with 50 percent or more of those who were uninsured in expansion or non-expansion states. Further, among the uninsured, 50 percent of low-income adults living in expansion states reported any unmet medical needs, compared with 63 percent of those in non-expansion states. (See fig. 4.) See tables 8 and 9 in appendix IV for estimates of the composite measure we reviewed on any unmet medical needs. 
The 2016 NHIS estimates showed that smaller percentages of low-income adults in expansion states reported financial barriers to needed health care compared with those in non-expansion states, and smaller percentages of those who were insured reported financial barriers to needed health care compared with those who were uninsured, regardless of where they lived. For example: Low-income adults in expansion and non-expansion states. Nine percent of low-income adults in expansion states reported that they could not afford needed medical care, compared with 20 percent of low-income adults in non-expansion states. Low-income adults who were insured and uninsured. Twelve percent or less of low-income adults who had Medicaid or private health insurance in expansion or non-expansion states reported financial barriers to needed medical care, compared with 27 percent or more of those who were uninsured in expansion or non-expansion states. In addition, among low-income adults who were uninsured, a smaller percentage of those who lived in expansion states reported financial barriers to two of the six needed health care services compared with those who lived in non-expansion states. (See fig. 5.) See tables 10 through 13 in appendix V for estimates of all survey questions we reviewed on financial barriers to health care. The 2016 NHIS also collected information on non-financial barriers to health care. Specifically, the survey asked whether respondents had delayed health care due to non-financial reasons, such as lacking transportation, being unable to get through on the phone, being unable to get a timely appointment, experiencing long wait times at the doctor’s office, or not being able to get to a clinic or doctor’s office when it was open. The 2016 NHIS showed that the same or similar percentages of low-income adults in expansion and non-expansion states reported delaying care due to a lack of transportation or other non-financial reasons. 
Further, generally similar or larger percentages of low-income adults with insurance reported delaying care due to non-financial reasons, compared with those who were uninsured. See tables 14 and 15 in appendix V for estimates of low-income adults in expansion and non-expansion states and by insurance status on non-financial barriers to health care. The 2016 NHIS estimates showed that a larger percentage of low-income adults in expansion states reported having a usual place of care compared with those in non-expansion states, and larger percentages of those who were insured reported having a usual place of care compared with those who were uninsured, regardless of where they lived. For example: Low-income adults in expansion and non-expansion states. Eighty-two percent of the low-income adults in expansion states reported having a usual place of care when they were sick or needed advice about their health, compared with 68 percent of those in non-expansion states.

Access to Health Care: Having a Usual Place of Care. The 2016 National Health Interview Survey (NHIS) asked respondents about whether they had a place they usually go when sick or need advice about their health.

Low-income adults who were insured and uninsured. Seventy-eight percent or more of those who had Medicaid or private health insurance in expansion or non-expansion states reported having a usual place of care, compared with 46 percent or less of those who were uninsured in expansion or non-expansion states. Among the uninsured, similar percentages of low-income adults in expansion and non-expansion states reported having a usual place of care. (See fig. 6.) See tables 16 through 19 in appendix VI for estimates of all survey questions we reviewed on having a usual place of care. 
The 2016 estimates showed that larger percentages of low-income adults in expansion states reported receiving selected health care services, such as a flu vaccine, compared with those in non-expansion states, and larger percentages of those with insurance reported receiving selected health care services compared with those who were uninsured, regardless of where they lived. For example: Low-income adults in expansion and non-expansion states. Thirty-one percent of low-income adults in expansion states reported receiving flu vaccinations, compared with 24 percent of those in non-expansion states. The selected services also included having their blood cholesterol checked and having their blood pressure checked by a doctor, nurse, or other health professional, and visiting a hospital emergency department. percent or more of low-income adults who had Medicaid or private health insurance in expansion or non-expansion states reported receiving blood cholesterol checks, compared with 28 percent or less of low-income adults who were uninsured in expansion or non-expansion states. Among the uninsured, generally similar percentages of low-income adults in expansion and non-expansion states reported blood cholesterol checks, flu vaccines, and other selected services. (See fig. 7.) See tables 20 and 21 in appendix VI for estimates of all survey questions we reviewed on selected health care services. The 2016 NHIS also asked respondents whether they visited or had spoken to a health care professional about their health, including: a general doctor, such as a general practitioner or family doctor; a nurse practitioner, physician’s assistant, or midwife; and a doctor who specializes in a particular disease, with the exception of obstetricians, gynecologists, psychiatrists, and ophthalmologists. See tables 22 and 23 in appendix VI for estimates of low-income adults in expansion and non-expansion states and by insurance status on contacting health care professionals. 
We provided a draft of this report to HHS for comment. HHS provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the appropriate congressional committee, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix VII. To describe national survey estimates of (1) the number and demographic characteristics of uninsured, low-income adults in expansion and non-expansion states; (2) unmet medical needs for low-income adults in expansion and non-expansion states and by insurance status; (3) barriers to health care for low-income adults in expansion and non-expansion states and by insurance status; and (4) having a usual place of care and receiving selected health care services for low-income adults in expansion and non-expansion states and by insurance status, we used data from the 2016 National Health Interview Survey (NHIS). The 2016 NHIS data were the most recent available when we conducted our analyses. This appendix describes the data source, study population, analyses conducted, study limitations, and data reliability assessment. The NHIS collects demographic, health status, health insurance, health care access, and health care service use data for the civilian, noninstitutionalized U.S. population. It is an annual, nationally representative, cross-sectional household interview survey. 
NHIS interviews are conducted continuously throughout the year for the National Center for Health Statistics (NCHS), which is a federal agency within the Department of Health and Human Services that compiles statistical information to help guide health policy decisions. Interviews are conducted in respondents’ homes, and interviewers may conduct follow-up interviews over the telephone to complete an interview. Information about some NHIS respondents, such as information about their health status, may be obtained through an interview with another family member on behalf of the respondent. NHIS data are organized into several data files. Estimates used for our study are based on data from the Family and Sample Adult Core components of the 2016 NHIS. Sociodemographic, insurance, and select health care access and utilization variables were defined using data collected in the Family Core component of the survey, which includes data on every household member for the families participating in NHIS. Other measures of health care access and utilization examined in this study are based on data collected in the Sample Adult Core component. In this component, the respondent (i.e., the sample adult) is randomly selected from among all adults aged ≥18 years in the family. A proxy respondent might respond for the sample adult if, because of health reasons, the sample adult is physically or mentally unable to respond. The 2016 imputed income files were used to define poverty thresholds, which are based on reported and imputed family income. The NHIS publicly released data files for 2016 include data for 40,220 households containing 97,169 persons, and the total household response rate was 67.9 percent. For this study we asked NCHS to provide estimates of low-income, non-elderly adults, which we defined as individuals ages 19 to 64, with family incomes that did not exceed 138 percent of the federal poverty level (FPL). 
We also requested that estimates be provided separately for respondents based on whether they resided in an expansion or non-expansion state, and whether they were covered by private health insurance, Medicaid, or had no insurance. We gave NCHS specifications for the definition of low-income, non-elderly adults; the states that should be classified as expansion or non-expansion states in calendar year 2016; and the respondents who should be classified as having private health insurance, Medicaid, or no insurance. We asked NCHS to exclude respondents who were noncitizens, were covered by Medicare, only received health care services through military health care or through the Indian Health Service, or had Supplemental Social Security Income. We also excluded adult females from the Sample Adult file who responded they were pregnant at the time of the interview. In addition, we asked NCHS to exclude individuals for which information was missing—not recorded or not provided during the interview—on health insurance coverage (Medicaid, private health insurance, Indian Health Service, military health care, or no health insurance), receipt of Supplemental Social Security Income, and U.S. citizenship. We classified individuals in our study population as residing in an expansion or non-expansion state based on their state of residence when they were interviewed for the 2016 NHIS. We classified the 30 states and the District of Columbia that expanded their Medicaid eligibility before July 1, 2016, as expansion states. The remaining 20 states were classified as non-expansion states. Louisiana expanded Medicaid coverage on July 1, 2016; therefore, we classified it as a non-expansion state. We decided not to classify Louisiana as an expansion state because we allowed a 6-month period for the effects of expansion to appear. Therefore, for Louisiana we only included NHIS respondents interviewed from January through June 2016, when Louisiana was a non-expansion state. 
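The interview-month screens described above (including the rules for Alaska and Montana described in the next paragraph) can be sketched as a small filter. The function and data layout are ours; the states and month ranges come from the text:

```python
# Expansion states whose respondents are counted only if interviewed
# at least 6 months after expansion took effect (per the text).
LATE_EXPANSION_MONTHS = {
    "Alaska": range(3, 13),   # include March-December 2016
    "Montana": range(7, 13),  # include July-December 2016
}
LOUISIANA_MONTHS = range(1, 7)  # include January-June 2016 only

def classify(state: str, interview_month: int,
             expanded_before_july_2016: bool):
    """Return 'expansion', 'non-expansion', or None (excluded)."""
    if state == "Louisiana":
        # Expanded July 1, 2016: treated as non-expansion for
        # January-June interviews and excluded thereafter.
        return ("non-expansion"
                if interview_month in LOUISIANA_MONTHS else None)
    if state in LATE_EXPANSION_MONTHS:
        return ("expansion"
                if interview_month in LATE_EXPANSION_MONTHS[state]
                else None)
    return "expansion" if expanded_before_july_2016 else "non-expansion"
```

For example, an Alaska respondent interviewed in February 2016 is excluded, while one interviewed in April 2016 counts toward the expansion group.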
Similarly, for two expansion states—Alaska and Montana—we only included individuals who were interviewed March through December 2016 and July through December 2016, respectively, after the state expanded Medicaid, to allow for a 6-month time period for the effect of expansion to take place. (See table 2.) Table 3 below illustrates the sample size and population estimates of low-income sample adults by expansion state, non-expansion state, and national total. We classified NHIS respondents as having private health insurance, Medicaid, or no insurance based on the health insurance classification approach used by NCHS for NHIS. NCHS assigned NHIS respondents’ health insurance classification based on a hierarchy of mutually exclusive categories in the following order: private health insurance, Medicaid, other coverage, and uninsured. Low-income adults with more than one coverage type were assigned to the first appropriate category in the hierarchy. Respondents were classified as having private health insurance if they reported that they were covered by any comprehensive private health insurance plan (including health maintenance and preferred provider organizations). Private coverage excluded plans that pay for one type of service, such as accidents or dental care. Respondents were classified as having Medicaid if they reported they were covered by Medicaid or by a state-sponsored health plan with no premiums or for which it was not known whether a premium was charged. Respondents were classified as being uninsured if they did not report having any private health insurance, Medicare, Medicaid, Children’s Health Insurance Program, state-sponsored or other government-sponsored health plan, or military health plan. Respondents were also classified as being uninsured if they only had insurance coverage with a private plan that paid for one type of service, such as accidents or dental care. 
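The NCHS hierarchy described above assigns each respondent to the first matching category in a fixed order. A minimal sketch (the function name and boolean inputs are ours):

```python
def insurance_category(has_private: bool, has_medicaid: bool,
                       has_other: bool) -> str:
    """Assign one mutually exclusive coverage category following the
    NCHS hierarchy: private > Medicaid > other coverage > uninsured.
    A respondent with several coverage types gets the first match.
    """
    if has_private:
        return "private"
    if has_medicaid:
        return "Medicaid"
    if has_other:
        return "other"
    return "uninsured"
```

For example, a respondent reporting both private coverage and Medicaid is counted as privately insured.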
We gave NCHS officials specifications to calculate estimates from the 2016 NHIS for demographic characteristics and access to care, as well as composite measures of access to health care based on selected survey questions. Composite measures are NCHS-developed measures based on responses to NHIS questions covering related topics. The analysis included two composite measures: 1. any unmet medical needs, which is based on responses to six underlying survey questions that asked respondents about whether during the past 12 months they needed medical care but did not get it because they could not afford it; delayed seeking medical care because of worry about the cost; or did not get prescription medicines, mental health care or counseling, eyeglasses, or dental care due to cost; and 2. any non-financial barriers to health care, which is based on five underlying questions that asked respondents whether they delayed care in the past 12 months for any of the following reasons: could not get through on the telephone; could not get an appointment soon enough; waited too long to see the doctor after arriving at the doctor’s office; the clinic/doctor’s office was not open when the respondent could get there; and did not have transportation. NCHS officials calculated our requested estimates of groups within our study population based on whether respondents resided in an expansion or non-expansion state and whether they had private health insurance, Medicaid, or were uninsured at the time of the interview. For each comparison—such as comparisons of access to health care for respondents in expansion versus non-expansion states—we asked NCHS to test for statistically significant differences. We identified a statistically significant difference when the p-value from a t-test of the difference in the estimated proportions between two study subgroups had a value of less than 0.05. 
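The significance test described above can be sketched as a large-sample approximation. NCHS's actual computation uses design-based standard errors and degrees of freedom from the complex survey design, so this normal-approximation version is illustrative only; the function name is ours:

```python
import math

def proportions_differ(p1: float, se1: float,
                       p2: float, se2: float,
                       alpha: float = 0.05):
    """Two-sided test of H0: p1 == p2, given design-based standard
    errors for each estimated proportion. Returns (p_value, reject).
    """
    z = (p1 - p2) / math.hypot(se1, se2)          # sqrt(se1^2 + se2^2)
    p_value = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided normal tail
    return p_value, p_value < alpha
```

With hypothetical estimates of 50 percent (standard error 2 points) versus 63 percent (standard error 2 points), the function flags the difference as significant at the 0.05 level.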
To describe the number and demographic characteristics of uninsured, low-income adults, we compared estimates of selected demographic characteristics (race and ethnicity, gender, poverty status, and employment status) and reported health status for this group in expansion and non-expansion states. These and other estimates of demographic characteristics and reported health status from the 2016 NHIS for uninsured, low-income adults by expansion states, non-expansion states, and all states are provided in tables 6 and 7 in appendix III. To describe unmet medical needs, barriers to health care, and having a usual place of care and receiving selected services for all low-income adults in expansion and non-expansion states and by insurance status, we asked NCHS to calculate estimates based on responses to selected NHIS questions and NCHS composite measures. We selected these survey questions and composite measures from the Family and Adult Access to Health Care and Utilization and Adult Health Behaviors sections of the 2016 NHIS. To summarize estimates of low-income adults in expansion and non-expansion states and by insurance status, responses to selected survey questions and composite measures were calculated as an estimated percentage of the relevant group’s total population for eight groups of low-income adults: (1) those in expansion states, (2) those in non-expansion states, (3) those who had Medicaid in expansion states, (4) those who had Medicaid in non-expansion states, (5) those who had private health insurance in expansion states, (6) those who had private health insurance in non-expansion states, (7) those who were uninsured in expansion states, and (8) those who were uninsured in non-expansion states. We asked NCHS to test for statistically significant differences for the estimates of access to care between selected groups of low-income adults. (See table 4.) 
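The eight analysis groups enumerated above are simply the two all-adult state groups plus the cross of coverage status with state category; a one-line sketch (variable names are ours):

```python
from itertools import product

STATE_GROUPS = ("expansion", "non-expansion")
COVERAGE = ("Medicaid", "private", "uninsured")

# Groups (1)-(2): all low-income adults by state group;
# groups (3)-(8): each coverage type crossed with state group.
groups = [("all", s) for s in STATE_GROUPS] + [
    (c, s) for c, s in product(COVERAGE, STATE_GROUPS)
]
```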
The results of the tests for statistically significant differences for these comparison groups are in appendixes IV through VI. Our study has some limitations. First, our study did not examine whether statistically significant differences in estimates of access to health care between respondents in expansion and non-expansion states were associated with the choice to expand Medicaid. Second, NHIS data are respondent-reported, which may be subject to potential biases, such as inaccurate recall of participants’ use of health services, and may be less accurate than administrative or clinical data. Third, we could not report estimates of access to health care that did not meet NCHS’s standards of reliability or precision. We assessed the reliability of NHIS data by reviewing NHIS data documentation; interviewing knowledgeable NCHS officials and academic researchers; and examining the data for logical errors, missing values, and values outside of expected ranges. We determined that the data were sufficiently reliable for the purposes of these analyses. Under the Patient Protection and Affordable Care Act (PPACA), states may opt to expand their Medicaid programs’ eligibility to cover certain low-income adults beginning in January 2014. As of December 2017, 31 states and the District of Columbia had expanded their Medicaid programs as permitted under PPACA and 19 states had not. Table 5 lists the states that expanded Medicaid eligibility and those that did not. It also includes state population and other Medicaid data, which are presented in the roll-over information in interactive figure 2. This appendix provides additional 2016 National Health Interview Survey (NHIS) estimates we obtained from the National Center for Health Statistics (NCHS). Table 6 presents estimates of selected demographic characteristics for low-income adults who were uninsured at the time of the survey interview. 
The table provides estimates for these adults based on whether they resided in states that expanded Medicaid eligibility as permitted under the Patient Protection and Affordable Care Act (PPACA) (referred to as expansion states) or states that did not (referred to as non-expansion states). We report statistically significant differences when comparing the responses of uninsured, low-income adults in expansion and non-expansion states. Table 7 shows estimates of the reported health status of uninsured, low-income adults based on whether they resided in an expansion or non-expansion state. The table provides the number and percent of these adults who reported that at the time of the interview their health status was excellent or very good; good; or fair or poor. The table also shows the extent to which these adults reported whether their health status was different at the time of the interview compared to the previous year. We report statistically significant differences when comparing the responses of uninsured, low-income adults in expansion and non-expansion states. This appendix provides estimates of any unmet medical needs for low-income adults—individuals ages 19 to 64, with family incomes that did not exceed 138 percent of the federal poverty level (FPL)—from the 2016 National Health Interview Survey (NHIS), which were produced by the National Center for Health Statistics (NCHS). Estimates are based on a composite measure of any unmet medical needs. Table 8 shows estimates of all low-income adults in expansion and non-expansion states. We also report statistically significant differences between low-income adults in expansion and non-expansion states. 
Table 9 shows estimates of six groups of low-income adults: (1) low-income adults who were uninsured in expansion states; (2) low-income adults who were uninsured in non-expansion states; (3) low-income adults who had Medicaid in expansion states; (4) low-income adults who had Medicaid in non-expansion states; (5) low-income adults who had private health insurance in expansion states; and (6) low-income adults who had private health insurance in non-expansion states. We also report any statistically significant differences when comparing the six groups of low-income adults, specifically: low-income adults who were uninsured in expansion states compared with each of the four groups of low-income adults who were insured—low-income adults who had Medicaid in expansion states, low-income adults who had Medicaid in non-expansion states, low-income adults who had private health insurance in expansion states, and low-income adults who had private insurance in non-expansion states; low-income adults who were uninsured in non-expansion states compared with each of the four groups of low-income adults who were insured; low-income adults who were uninsured in expansion states compared with low-income adults who were uninsured in non-expansion states; low-income adults who had Medicaid in expansion states compared with low-income adults who had Medicaid in non-expansion states; and low-income adults who had private health insurance in expansion states compared with low-income adults who had private health insurance in non-expansion states. This appendix provides estimates of barriers to health care for low-income adults—individuals ages 19 to 64, with family incomes that did not exceed 138 percent of the federal poverty level (FPL)—from the 2016 National Health Interview Survey (NHIS), which we obtained from the National Center for Health Statistics (NCHS). 
Estimates of financial barriers to needed medical, specialty, and other types of health care and prescription drugs are based on selected survey questions. Estimates of non-financial barriers to health care are based on responses to selected survey questions and a composite measure. Estimates are reported for: All low-income adults in expansion and non-expansion states. We also report statistically significant differences between low-income adults in expansion and non-expansion states. Six groups of low-income adults: (1) low-income adults who were uninsured in expansion states; (2) low-income adults who were uninsured in non-expansion states; (3) low-income adults who had Medicaid in expansion states; (4) low-income adults who had Medicaid in non-expansion states; (5) low-income adults who had private health insurance in expansion states; and (6) low-income adults who had private health insurance in non-expansion states. We also report any statistically significant differences when comparing the six groups of low-income adults, specifically: low-income adults who were uninsured in expansion states compared with each of the four groups of low-income adults who were insured—low-income adults who had Medicaid in expansion states, low-income adults who had Medicaid in non-expansion states, low-income adults who had private health insurance in expansion states, and low-income adults who had private insurance in non-expansion states; low-income adults who were uninsured in non-expansion states compared with each of the four groups of low-income adults who were insured; low-income adults who were uninsured in expansion states compared with low-income adults who were uninsured in non-expansion states; low-income adults who had Medicaid in expansion states compared with low-income adults who had Medicaid in non-expansion states; and low-income adults who had private health insurance in expansion states compared with low-income adults who had private health insurance in
non-expansion states. Financial barriers to medical, specialty, and other types of health care. Tables 10 and 11 present estimates and differences in estimates of responses to a survey question that asked whether respondents did not obtain different types of needed health care services in the past 12 months because they could not afford them. Financial barriers to prescription drugs. Tables 12 and 13 present estimates and differences in estimates of a survey question that asked respondents who had been prescribed medications whether they had taken actions during the past 12 months to save money on medications. Non-financial barriers to health care. Tables 14 and 15 present estimates and differences in estimates of the NCHS composite measure on any non-financial barriers to health care, which was based on responses to five survey questions on whether respondents delayed care in the past 12 months due to long wait times, a lack of transportation, and other non-financial reasons. Additionally, these tables present estimates and differences in estimates of responses to the composite measure’s five underlying survey questions. This appendix provides estimates on having a usual place of care and receiving selected health care services for low-income adults—individuals ages 19 to 64, with family incomes that did not exceed 138 percent of the federal poverty level (FPL)—from the 2016 National Health Interview Survey (NHIS), which we obtained from the National Center for Health Statistics (NCHS). Estimates are based on responses to selected survey questions on having a usual place of care, receiving selected health care services, and contacting health care professionals. Estimates are reported for: All low-income adults in expansion and non-expansion states. We also report statistically significant differences between low-income adults in expansion and non-expansion states.
Six groups of low-income adults: (1) low-income adults who were uninsured in expansion states; (2) low-income adults who were uninsured in non-expansion states; (3) low-income adults who had Medicaid in expansion states; (4) low-income adults who had Medicaid in non-expansion states; (5) low-income adults who had private health insurance in expansion states; and (6) low-income adults who had private health insurance in non-expansion states. We also report any statistically significant differences when comparing the six groups of low-income adults, specifically: low-income adults who were uninsured in expansion states compared with each of the four groups of low-income adults who were insured—low-income adults who had Medicaid in expansion states, low-income adults who had Medicaid in non-expansion states, low-income adults who had private health insurance in expansion states, and low-income adults who had private insurance in non-expansion states; low-income adults who were uninsured in non-expansion states compared with each of the four groups of low-income adults who were insured; low-income adults who were uninsured in expansion states compared with low-income adults who were uninsured in non-expansion states; low-income adults who had Medicaid in expansion states compared with low-income adults who had Medicaid in non-expansion states; and low-income adults who had private health insurance in expansion states compared with low-income adults who had private health insurance in non-expansion states. Having a usual place of care. Tables 16 through 19 present estimates and differences in estimates of survey questions that asked respondents about the place of care they usually go to when sick or in need of advice about their health and the type of place that respondents most often went. Receiving selected health care services.
Tables 20 and 21 present estimates and differences in estimates of survey questions that asked respondents whether they had received a blood cholesterol check, flu vaccine, or other selected services. Contacting health care professionals. Tables 22 and 23 present estimates and differences in estimates of survey questions that asked respondents whether they had visited or spoken to a general doctor, specialist, or other health care professionals about their health in the past 12 months. In addition to the contact named above, Katherine M. Iritani (Director), Tim Bushfield (Assistant Director), Deitra H. Lee (Analyst-in-Charge), Kristin Ekelund, Laurie Pachter, Vikki Porter, Merrile Sing, and Emily Wilson made key contributions to this report.
Under PPACA, states could choose to expand Medicaid coverage to certain uninsured, low-income adults. As of December 2017, 31 states and the District of Columbia chose to expand Medicaid to cover these adults, and 19 states did not. GAO was asked to provide information about the demographic characteristics of and access to health care services for low-income adults—those with household incomes less than or equal to 138 percent of the federal poverty level—in expansion and non-expansion states. This report describes 2016 national survey estimates of (1) the number and demographic characteristics of low-income adults who were uninsured in expansion and non-expansion states, (2) unmet medical needs for low-income adults in expansion and non-expansion states and by insurance status, (3) barriers to health care for low-income adults in expansion and non-expansion states and by insurance status, and (4) having a usual place of care and receiving selected health care services for low-income adults in expansion and non-expansion states and by insurance status. GAO obtained 2016 NHIS estimates from the National Center for Health Statistics (NCHS), the federal agency within the Department of Health and Human Services that maintains these survey data. NHIS is a household interview survey designed to be a nationally representative sample of the civilian, non-institutionalized population residing in the United States. Estimates were calculated for demographic characteristics for uninsured, low-income adults. In addition, estimates were calculated for unmet medical needs, barriers to health care, and having a usual place of care and receiving selected health services for low-income adults in expansion and non-expansion states and by insurance status. The estimates were based on responses to selected survey questions. GAO selected these survey questions from the Family and Adult Access to Health Care and Utilization and another section of the 2016 NHIS.
GAO took steps to assess the reliability of the 2016 NHIS estimates, including interviewing NCHS officials and examining the data for logical errors. GAO determined that the data were sufficiently reliable for the purposes of its analyses. The Department of Health and Human Services provided technical comments on a draft of this report, which GAO incorporated as appropriate. According to the 2016 National Health Interview Survey (NHIS), an estimated 5.6 million uninsured, low-income adults—those ages 19 through 64—had incomes at or below the income threshold for expanded Medicaid eligibility as allowed under the Patient Protection and Affordable Care Act (PPACA). Estimates from this nationally representative survey showed that about 1.9 million of the 5.6 million uninsured, low-income adults lived in states that chose to expand Medicaid under PPACA, while the remaining 3.7 million lived in non-expansion states—those that did not choose to expand Medicaid. In 2016, over half of uninsured, low-income adults were male, over half were employed, and over half had incomes less than 100 percent of the federal poverty level in both expansion and non-expansion states. The 2016 NHIS estimates showed that low-income adults in expansion states were less likely to report having any unmet medical needs compared with those in non-expansion states, and low-income adults who were insured were less likely to report having unmet medical needs compared with those who were uninsured. Among the low-income adults who were uninsured, those in expansion states were less likely to report having any unmet medical needs compared with those in non-expansion states. 
The 2016 NHIS estimates also showed that low-income adults in expansion states were less likely to report financial barriers to needed medical care and other types of health care, such as specialty care, compared with those in non-expansion states, and low-income adults who were insured were less likely to report financial barriers to needed medical care compared with those who were uninsured. Among low-income adults who were uninsured, those in expansion states were less likely to report financial barriers to needed medical care compared with those in non-expansion states. Finally, the 2016 NHIS estimates showed that low-income adults in expansion states were more likely to report having a usual place of care to go when sick or needing advice about their health and receiving selected health care services compared with those in non-expansion states. The estimates also showed that low-income adults who were insured were generally more likely to report having a usual place of care and receiving selected health care services compared with those who were uninsured. Among the uninsured, relatively similar percentages of low-income adults in expansion and non-expansion states reported having a usual place of care. Similarly, estimates showed that relatively similar percentages of low-income adults who were uninsured in expansion and non-expansion states reported receiving selected health care services, such as receiving a flu vaccine or a blood pressure check.
Federal agencies conduct a variety of procurements that are reserved for small business participation through small business set-asides. The set-asides can be for small businesses in general, or they can be specific to small businesses that meet additional eligibility requirements in the Service-Disabled Veteran-Owned Small Business (SDVOSB), Historically Underutilized Business Zone (HUBZone), 8(a) Business Development (8(a)), and WOSB programs. The WOSB program enables federal contracting officers to identify and establish a sheltered market, or set-aside, for competition among WOSBs and EDWOSBs in certain industries. To determine the industries eligible under the WOSB program, SBA is required to conduct a study identifying which NAICS codes are eligible under the program and to report on such studies every 5 years. WOSBs can receive set-asides in industries in which SBA has determined that women-owned small businesses are substantially underrepresented. EDWOSBs can receive set-asides in WOSB-eligible industries as well as in an additional set of industries in which SBA has determined that women-owned small businesses are underrepresented but not substantially so. As of February 2019, there were a total of 113 four-digit NAICS codes (representing NAICS industry groups) eligible under the WOSB program—92 eligible NAICS codes for WOSBs and 21 for EDWOSBs. Additionally, businesses must be at least 51 percent owned and controlled by one or more women who are U.S. citizens to participate in the WOSB program. The owner must provide documents demonstrating that the business meets program requirements, including a document in which the owner attests to the business’s status as a WOSB or EDWOSB. EDWOSBs are WOSBs that are controlled by one or more women who are citizens and who are economically disadvantaged in accordance with SBA regulations.
According to SBA, as of early October 2018, there were 13,224 WOSBs and 4,488 EDWOSBs registered in SBA’s online certification database. SBA’s Office of Government Contracting administers the WOSB program by promulgating regulations, conducting eligibility examinations of businesses that receive contracts under a WOSB or EDWOSB set-aside, deciding protests related to eligibility for a WOSB set-aside, conducting studies to determine eligible industries, and working with other federal agencies in assisting WOSBs and EDWOSBs. According to SBA officials, the Office of Government Contracting also works at the regional and local levels with SBA’s Small Business Development Centers and district offices, and with other organizations (such as Procurement Technical Assistance Centers), to help WOSBs and EDWOSBs obtain contracts with federal agencies. The services SBA coordinates include training, counseling, mentoring, facilitating access to information about federal contracting opportunities, and business financing. According to SBA, as of October 2018, there were two full-time staff within the Office of Government Contracting whose primary responsibility was the WOSB program. Initially, the program’s statutory authority allowed WOSBs to be self-certified by the business owner or certified by an approved third-party national certifying entity as eligible for the program. Self-certification is free, but some third-party certification options require businesses to pay a fee. Each certification process requires businesses to provide signed representations attesting to their WOSB or EDWOSB eligibility. Businesses must provide documents supporting their status before submitting an offer to perform the requirements of a WOSB set-aside contract. In August 2016, SBA launched certify.sba.gov, which is an online portal that allows firms to upload required documents and track their submission and also enables contracting officers to review firms’ eligibility documentation.
According to the Federal Acquisition Regulation (FAR), contracting officers are required to verify that all required documentation is present in the online portal when selecting a business for an award. In addition, businesses must register and attest to being a WOSB in the System for Award Management, the primary database of vendors doing business with the federal government. In 2011, SBA approved four organizations to act as third-party certifiers: El Paso Hispanic Chamber of Commerce, NWBOC (previously known as the National Women Business Owners Corporation), U.S. Women’s Chamber of Commerce, and Women’s Business Enterprise National Council. These organizations have been the WOSB program’s third-party certifiers since 2011. According to SBA data, the Women’s Business Enterprise National Council was the most active third-party certifier in fiscal year 2017—performing 2,638 WOSB certification examinations. The other three certifiers—the U.S. Women’s Chamber of Commerce, NWBOC, and El Paso Hispanic Chamber of Commerce—completed 644, 105, and 12 certifications, respectively. As discussed previously, in 2014 we reviewed the WOSB program and found a number of deficiencies in SBA’s oversight of the four SBA-approved third-party certifiers and in SBA’s eligibility examination processes, and we made related recommendations for SBA. In addition, in 2015 and 2018 the SBA OIG reviewed the WOSB program and also found oversight deficiencies, including evidence of WOSB contracts set aside for ineligible firms. In both reports, the SBA OIG also made recommendations for SBA. Further, in July 2015, we issued GAO’s fraud risk framework, which provides a comprehensive set of key components and leading practices that serve as a guide for agency managers to use when developing efforts to combat fraud in a strategic, risk-based way. In July 2016, the Office of Management and Budget issued guidelines requiring executive agencies to create controls to identify and respond to fraud risks.
These guidelines also affirm that managers should adhere to the leading practices identified in GAO’s fraud risk framework. As of February 2019, SBA had implemented one of the three changes that the 2015 NDAA made to the WOSB program—sole-source authority. The two other changes—authorizing SBA to implement its own certification process for WOSBs and requiring SBA to eliminate the WOSB self-certification option—have not been implemented. The 2015 NDAA did not require a specific time frame for SBA to update its regulations. SBA officials have stated that they will not eliminate self-certification until the new certification process for the WOSB program is in place, which they expect to be completed by January 1, 2020. In September 2015, SBA published a final rule to implement sole-source authority for the WOSB program (effective October 2015). Among other things, the rule authorized contracting officers to award a contract to a WOSB or EDWOSB without competition, provided that the contracting officer’s market research cannot identify two or more WOSBs or EDWOSBs in eligible industries that can perform the requirements of the contract at a fair and reasonable price. In the final rule, SBA explained that it promulgated the sole-source rule before the WOSB certification requirements for two reasons. First, the sole-source rule could be accomplished by simply incorporating the statutory language into the regulations, whereas the WOSB certification requirements would instead require a prolonged rulemaking process. Second, SBA said that addressing all three regulatory changes at the same time would delay the implementation of sole-source authority. SBA described the sole-source mechanism as an additional tool for federal agencies to ensure that women-owned small businesses have an equal opportunity to participate in federal contracting and to ensure consistency among SBA’s socioeconomic small business procurement programs.
According to SBA, most of the 495 comments submitted about the sole-source rule supported the agency’s decision to implement the authority quickly. However, the SBA OIG’s June 2018 audit report cautioned that allowing sole-source contracting authority while firms can still self-certify exposes the WOSB program to unnecessary risk of fraud and abuse, and the report recommended that SBA implement a new certification process for the WOSB program per the 2015 NDAA. In addition, our previous report identified risks of program participation by ineligible firms associated with deficiencies in SBA’s oversight structure. As we discuss in detail later, SBA has still not addressed these risks, which may be exacerbated by the implementation of sole-source authority without addressing the other changes made by the 2015 NDAA, including eliminating the self-certification option. As of February 2019, SBA had not published a proposed rule for public comment to establish a new certification process for the WOSB program. Previously, in October 2017, an SBA official stated that SBA was about 1–2 months away from publishing a proposed rule. However, in June 2018, SBA officials stated that a cost analysis would be necessary before the draft could be sent to the Office of Management and Budget for review. Certain stages of the rulemaking process have mandated time periods, such as the required interagency review process for certain rules. In June 2017, we reported that SBA officials said that an increase in the number of statutorily mandated rules in recent years had contributed to delays in the agency’s ability to promulgate rules in a more timely fashion. As of February 2019, SBA had not provided documentation or time frames for issuing a proposed rule or completing the rulemaking process.
However, in response to the SBA OIG recommendation that SBA implement the new certification process, SBA stated that it would fulfill the recommendation (meaning implement a new certification process) by January 1, 2020. In December 2015, SBA published an advance notice of proposed rulemaking to solicit public comments to assist the agency with drafting a proposed rule to implement a new WOSB certification program. In the notice, SBA stated that it intends to address the 2015 NDAA changes, including eliminating the self-certification option, through drafting regulations to implement a new certification process. Previously, in its September 2015 final rule implementing sole-source authority, SBA stated that there was no evidence that Congress intended that the existing WOSB program, including self-certification, be halted before establishing the infrastructure and new regulations for a new certification program. The advance notice requested comments on various topics, such as how well the current certification processes were working, which of the certification options were feasible and should be pursued, whether there should be a grace period for self-certified WOSB firms to complete the new certification process, and what documentation should be required. Three third-party certifiers submitted comments in response to the advance notice of proposed rulemaking, and none supported the option of SBA acting as a WOSB certifier. One third-party certifier commented that such an arrangement is a conflict of interest given that SBA is also responsible for oversight of the WOSB program, and two certifiers commented that SBA lacked the required resources. The three third-party certifiers also asserted in their comments that no other federal agency should be allowed to become an authorized WOSB certifier, with one commenting that federal agencies should instead focus on providing contracting opportunities for women-owned businesses. 
All three certifiers also proposed ways to improve the current system of third-party certification—for example, by strengthening oversight of certifiers or expanding their number. The three certifiers also suggested that SBA move to a process that better leverages existing programs with certification requirements similar to those of the WOSB program, such as the 8(a) program. In the advance notice, SBA asked for comments on alternative certification options, such as SBA acting as a certifier or limiting WOSB program certifications to the 8(a) program and otherwise relying on state or third-party certifiers. Further, in June 2018, SBA officials told us that they were evaluating the potential costs of a new certification program as part of their development of the new certification rule. SBA has not fully addressed deficiencies in its oversight of third-party certifiers that we identified in our October 2014 report. We reported that SBA did not have formal policies for reviewing the performance of its four approved third-party certifiers, including their compliance with their agreements with SBA. Further, we found that SBA had not developed formal policies and procedures for, among other things, reviewing the monthly reports that certifiers submit to SBA. As a result, we recommended that SBA establish comprehensive procedures to monitor and assess the performance of the third-party certifiers in accordance with their agreements with SBA and program regulations. While SBA has taken some steps to address the recommendation, as of February 2019 it remained open. In response to our October 2014 recommendation, in 2016 SBA conducted compliance reviews of the four SBA-approved third-party certifiers. According to SBA, the purpose of the compliance reviews was to ensure the certifiers’ compliance with regulations, their signed third-party certifier certification form (or agreement) with SBA, and other program requirements.
The compliance reviews included an assessment of the third-party certifiers’ internal certification procedures and processes, an examination of a sample of applications from businesses that the certifiers deemed eligible and ineligible for certification, and an interview with management staff. SBA officials said that SBA’s review team did not identify significant deficiencies in any of the four certifiers’ processes and found that all were generally complying with their agreements. However, one compliance review report described “grave concerns” that a third-party certifier had arbitrarily established eligibility requirements that did not align with WOSB program regulations and used them to decline firms’ applications. SBA noted in the report that if the third-party certifier failed to correct this practice SBA could terminate the agreement. As directed by SBA, the third-party certifier submitted a letter to SBA outlining actions it had taken to address this issue, among others. The final compliance review reports for the other third-party certifiers also recommended areas for improvement, including providing staff with additional training on how to conduct eligibility examinations and reviewing certification files to ensure they contain complete documentation. In addition, two of the three compliance review reports with recommendations (including the compliance review report for the certifier discussed above) required the certifier to provide a written response within 30 days outlining plans to correct the areas. SBA officials said that they reviewed the written responses and determined that no further action was required. In January 2017, SBA’s Office of Government Contracting updated its written Standard Operating Procedures (SOP) to include policies and procedures for the WOSB program, in part to address our October 2014 recommendation. 
The 2017 SOP discusses what a third-party-certifier compliance review entails, how often the reviews are to be conducted, and how findings are to be reported. The 2017 SOP notes that SBA may initiate a compliance review “at any time and as frequently as the agency determines is necessary.” In September 2018, SBA officials told us that they were again updating the SOP, in part to address deficiencies we identified in our prior work and during this review. However, as of February 2019, SBA had not provided an updated SOP. In addition, in April 2018, SBA finalized a WOSB Program Desk Guide that, according to SBA, is designed to provide program staff with detailed guidance for conducting oversight procedures, including compliance reviews of third-party certifiers. For example, the Desk Guide discusses how staff should prepare for a compliance review of a third-party certifier, review certification documents, and prepare a final report. However, the Desk Guide does not describe specific activities designed to oversee third-party certifiers on an ongoing basis. In November 2017, SBA officials told us that they planned to conduct additional compliance reviews of the third-party certifiers. However, in June 2018, officials said there were no plans to conduct further compliance reviews until the final rule implementing the new certification process was completed. Further, SBA officials said that the 2016 certifier compliance reviews did not result in significant deficiencies. However, as noted previously, one of the compliance review reports described a potential violation of the third-party certifier’s agreement with SBA. 
Per written agreements with SBA, third-party certifiers are required to submit monthly reports that include the number of WOSB and EDWOSB applications received, approved, and denied; identifying information for each certified business, such as the business name; concerns about fraud, waste, and abuse; and a description of any changes to the procedures the organizations used to certify businesses as WOSBs or EDWOSBs. In our October 2014 report, we noted that SBA had not followed up on issues raised in the monthly reports and had not developed written procedures for reviewing them. At that time, SBA officials said that they were unaware of the issues identified in the certifiers’ reports and that the agency was developing procedures for reviewing the monthly reports but could not estimate a completion date. In our interviews for this report, SBA officials stated that SBA still does not use the third-party certifiers’ monthly reports to regularly monitor the program. Specifically, SBA does not review the reports to identify any trends in certification deficiencies that could inform program oversight. Officials said the reports generally do not contain information that SBA considers helpful for overseeing the WOSB program, although staff sometimes use the reports to obtain firms’ contact information. SBA officials also said that staff very rarely receive information about potentially fraudulent WOSB firms from the third-party certifiers—maybe three firms per year—and that this information is generally received via email and not as part of the monthly reports. SBA officials said that when they receive information about potentially fraudulent firms, WOSB program staff conduct an examination to determine the firm’s eligibility and report the results back to the certifier. However, a third-party certifier told us it has regularly reported firms it suspected of submitting potentially fraudulent applications in its monthly reports and that SBA has not followed up with them. 
In addition, two third-party certifiers said that if SBA is not cross-checking the list of firms included in their monthly reports, a firm deemed ineligible by one certifier may submit an application to another certifier and obtain approval. The three third-party certifiers we spoke with said that SBA generally had not communicated with them about their implementation of the program since the 2016 compliance reviews. However, SBA officials noted that three of the four third-party certifiers attended an SBA roundtable in March 2017 to discuss comments on the proposed rulemaking. In addition, SBA officials said that the third-party certifiers may contact them with questions about implementing the WOSB program, but SBA generally does not reach out to them. Although SBA has taken steps to enhance its written policies and procedures for oversight of third-party certifiers, it does not have plans to conduct further compliance reviews of the certifiers and does not intend to review certifiers’ monthly reports on a regular basis. SBA officials said that third-party certifier oversight procedures would be updated, if necessary, after certification options have been clarified in the final WOSB certification rule. However, ongoing oversight activities, such as regular compliance reviews, could help SBA better understand the steps certifiers have taken in response to previous compliance review findings and whether those steps have been effective. In addition, leading fraud risk management practices include identifying specific tools, methods, and sources for gathering information about fraud risks, including data on fraud schemes and trends from monitoring and detection activities, as well as involving relevant stakeholders in the risk assessment process. 
Without procedures to regularly monitor and oversee third-party certifiers, SBA cannot provide reasonable assurance that certifiers are complying with program requirements and cannot improve its efforts to identify ineligible firms or potential fraud. Further, it is unclear when SBA’s final rule will be implemented. As a result, we maintain that our previous recommendation should be addressed—that is, that the Administrator of SBA should establish and implement comprehensive procedures to monitor and assess the performance of certifiers in accordance with the requirements of the third-party certifier agreement and program regulations. SBA also has not fully addressed deficiencies found in our 2014 review related specifically to eligibility examinations. We found that SBA lacked formalized guidance for its eligibility examination processes and that the examinations continued to identify high rates of potentially ineligible businesses. As a result, we recommended that SBA enhance its examination of businesses that register for the WOSB program to ensure that only eligible businesses obtain WOSB set-asides. Specifically, we suggested that SBA consider (1) completing the development of procedures to conduct annual eligibility examinations and implementing such procedures; (2) analyzing examination results and individual businesses found to be ineligible to better understand the cause of the high rate of ineligibility in annual reviews and determine what actions are needed to address the causes; and (3) implementing ongoing reviews of a sample of all businesses that have represented their eligibility to participate in the program. SBA has taken some steps to implement our recommendation—such as by completing its 2017 SOP and its Desk Guide, both of which include written policies and procedures for WOSB program eligibility examinations. 
The 2017 SOP includes a brief description of the activities entailed in the examinations, the staff responsible for conducting them, and how firms are selected. In addition, as noted previously, SBA officials told us in September 2018 that a forthcoming update to the SOP would address deficiencies we identified regarding WOSB eligibility examinations. However, as of February 2019, SBA had not provided an updated SOP. The Desk Guide contains more detailed information on eligibility examinations. It notes that a sample of firms is to be examined annually, and it provides selection criteria, which can include whether the agency has received information challenging the firm’s eligibility for the program. The Desk Guide also provides specific instructions on how to determine whether a firm meets the WOSB program’s ownership, control, and financial requirements and what documentation should be consulted or requested. SBA does not collect reliable information on the results of its annual eligibility examinations. According to SBA officials, SBA has conducted eligibility examinations of a sample of businesses that received WOSB program set-aside contracts each year since fiscal year 2012. However, SBA officials told us that the results of annual eligibility examinations—such as the number of businesses found eligible or ineligible—are generally not documented. As a result, we obtained conflicting data from SBA on the number of examinations completed and the percentage of businesses found to be ineligible in fiscal years 2012 through 2018. For example, based on previous information provided by SBA, we reported in October 2014 that in fiscal year 2012, 113 eligibility examinations were conducted and 42 percent of businesses were found to be ineligible for the WOSB program. However, during this review, we received information from SBA that 78 eligibility examinations were conducted and 37 percent of businesses were found ineligible in fiscal year 2012. 
We found similar disparities when we compared fiscal year 2016 data provided by SBA for this report with a performance memorandum summarizing that fiscal year’s statistics. Regardless of the disparity between the data sources, the rate of ineligible businesses has remained significant. For example, according to documentation SBA provided during this review, in fiscal year 2017, SBA found that about 40 percent of the businesses in its sample were not eligible. In addition, SBA continues to have no mechanism for evaluating examination results in aggregate to inform the WOSB program. In 2014, we reported that SBA officials told us that most businesses that were deemed ineligible did not understand the documentation requirements for establishing eligibility. However, we also reported that SBA officials could not explain how they knew a lack of understanding was the cause of ineligibility among businesses and had not made efforts to confirm that this was the cause. In June 2018, SBA officials told us they did not analyze the annual examinations in aggregate for common eligibility issues because the examination results are unique to each WOSB firm. They noted that this was not necessary as WOSB program staff are familiar with common eligibility issues through the annual eligibility examinations. As we noted in 2014, by not analyzing aggregate examination results, the agency is missing opportunities to obtain meaningful insights into the program, such as the reasons many businesses are deemed ineligible. Also, SBA still conducts eligibility examinations only of firms that have already received a WOSB award. In 2014, we concluded that this sampling practice restricts SBA’s ability to identify potentially ineligible businesses prior to a contract award. Similarly, during this review, SBA officials said that while some aspects of the sample characteristics have changed since 2012, the samples still generally consist only of firms that have been awarded a WOSB set-aside. 
In addition, officials said that the sample size of the eligibility examinations has varied over time and is largely based on the workload of WOSB program staff. Restricting the samples in this way limits SBA’s ability to better understand the eligibility of businesses before they apply for and are awarded contracts, as well as its ability to detect and prevent potential fraud. SBA officials said that their other means of reducing participation by ineligible firms and mitigating potential fraud is through WOSB or EDWOSB status protests—that is, allegations that a business receiving an award does not meet program eligibility requirements. A federal contractor can file a status protest against any firm receiving an award that represents itself as a WOSB in the System for Award Management on grounds that include failure to provide all required supporting documentation. The penalties for misrepresenting a firm’s status, per regulation, include debarment or suspension. However, one third-party certifier expressed in its comments to the advance notice of proposed rulemaking on certification that status protests alone are not a viable option for protecting the integrity of the WOSB program. The certifier questioned how a firm could have sufficient information about a competitor firm to raise questions about its eligibility. According to SBA officials, 11 status protests were filed under the WOSB program in fiscal year 2018. Of these, four firms were deemed ineligible for the WOSB program, four were deemed eligible, and three status protests were dismissed. In fiscal year 2017, nine status protests were filed; of these, three firms were found ineligible, two were found eligible, and four status protests were dismissed. We recognize that SBA has made some effort to address our previous recommendation by documenting procedures for conducting annual eligibility examinations of WOSB firms. 
However, leading fraud risk management practices state that federal program managers should design control activities that focus on fraud prevention over detection and response, to the extent possible. Without maintaining reliable information on the results of eligibility examinations, developing procedures for analyzing results, and expanding the sample of businesses to be examined to include those that did not receive contracts, SBA limits the value of its eligibility examinations and its ability to reduce ineligibility among businesses registered to participate in the WOSB program. These deficiencies also limit SBA’s ability to identify potential fraud risks and develop any additional control activities needed to address these risks. As a result, the program may continue to be exposed to the risk of ineligible businesses receiving set-aside contracts. In addition, in light of these continued oversight deficiencies, the implementation of sole-source authority without addressing the other changes made by the 2015 NDAA could increase program risk. For these reasons, we maintain that our previous recommendation that SBA enhance its WOSB eligibility examination procedures should be addressed. In 2015 and 2018, the SBA OIG reported instances in which WOSB set-asides were awarded using NAICS codes that were not eligible under the WOSB program, and our analysis indicates that this problem persists. In 2015, the SBA OIG reported on its analysis of a sample of 34 WOSB set-aside awards and found that 10 awards were set aside using an ineligible NAICS code. The SBA OIG concluded that this may have been due to contracting officers’ uncertainty about NAICS code requirements under the program and recommended that SBA provide additional, updated training and outreach to federal agencies’ contracting officers on the program’s NAICS code requirements. 
In response, SBA updated WOSB program training and outreach documents in March 2016 to include information about the program’s NAICS code requirements. In 2018, the SBA OIG issued another report evaluating the WOSB program, with a focus on the use of the program’s sole-source contract authority. Here, the SBA OIG identified additional instances of contracting officers using inaccurate NAICS codes to set aside WOSB contracts. Specifically, the SBA OIG reviewed a sample of 56 awards and found that 4 were awarded under ineligible NAICS codes. The report included two recommendations for SBA aimed at preventing and correcting improper NAICS code data in FPDS-NG: (1) conduct quarterly reviews of FPDS-NG data to ensure contracting officers used the appropriate NAICS codes and (2) in coordination with the Office of Federal Procurement Policy and GSA, strengthen controls in FPDS-NG to prevent contracting officers from using ineligible NAICS codes. SBA disagreed with both of these recommendations. In its response to the first recommendation, SBA stated that it is not responsible for the oversight of other agencies’ contracting officers and therefore is not in a position to implement the corrective actions. With respect to the second recommendation, SBA stated that adding such controls to FPDS-NG would further complicate the WOSB program and increase contracting officers’ reluctance to use it. SBA also stated its preference for focusing its efforts on ensuring that contracting officers select the appropriate NAICS code at the beginning of the award process. In our review, we also found several issues with WOSB program set-asides being awarded under ineligible NAICS codes. 
Our analysis of FPDS-NG data on all obligations to WOSB program set-asides from the third quarter of fiscal year 2011 through the third quarter of fiscal year 2018 found the following:

- 3.5 percent (or about $76 million) of WOSB program obligations were awarded under NAICS codes that were never eligible for the WOSB program;

- 10.5 percent (or about $232 million) of WOSB program obligations made under an EDWOSB NAICS code went to women-owned businesses that were not eligible to receive awards in EDWOSB-eligible industries; and

- 17 of the 47 federal agencies that obligated dollars to WOSB program set-asides during the period used inaccurate NAICS codes in at least 5 percent of their WOSB set-asides (representing about $25 million).

According to SBA officials we spoke with during this review, WOSB program set-asides may be awarded under ineligible NAICS codes because of human error when contracting officers are inputting data in FPDS-NG or because a small business contract was misclassified as a WOSB program set-aside. They characterized the extent of the issue as “small” relative to the size of the FPDS-NG database and said that such issues do not affect the program’s purpose. Rather than review FPDS-NG data that are inputted after the contract is awarded, SBA officials said that they have discussed options for working with GSA to add controls defining eligible NAICS codes for WOSB program set-aside opportunities on FedBizOpps.gov—the website that contracting officers use to post announcements about available federal contracting opportunities. Adding controls to this system, officials said, would help contracting officers realize as they are writing the contract requirements that they should not set aside contracts under the WOSB program without reviewing the proper NAICS codes. However, SBA officials said that the feasibility of this option was still being discussed and that the issue was not a high priority. 
For these reasons, according to officials, SBA’s updated oversight procedures described in the 2017 SOP and the Desk Guide do not include a process for reviewing WOSB program set-aside data in FPDS-NG to determine whether they were awarded under the appropriate NAICS codes. Further, as of November 2018, the WOSB program did not have targeted outreach or training that focused on specific agencies’ use of NAICS codes. As noted previously, in March 2016, SBA updated its WOSB program training materials to address NAICS code requirements in response to a 2015 SBA OIG recommendation. In fiscal year 2018, SBA conducted three WOSB program training sessions for federal contracting officers, including (1) a virtual learning session, (2) a session conducted during WOSB Industry Day at the Department of Housing and Urban Development, and (3) a session conducted during a Department of Defense Small Business Training Conference. However, with the exception of the virtual learning session, these training sessions were requested by the agencies. SBA officials did not identify any targeted outreach or training provided to specific agencies to improve understanding of WOSB NAICS code requirements (or other issues related to the WOSB program). Congress authorized SBA to develop a contract set-aside program specifically for WOSBs and EDWOSBs to address the underrepresentation of such businesses in specific industries. In addition, federal standards for internal control state that management should design control activities to achieve objectives and respond to risks and to establish and operate monitoring activities to monitor and evaluate the results. Because SBA does not review whether contracts are being awarded under the appropriate NAICS codes, it cannot provide reasonable assurance that WOSB program requirements are being met or identify agencies that may require targeted outreach or additional training on eligible NAICS codes. 
As a result, WOSB contracts may continue to be awarded to groups other than those intended, which can undermine the goals of and confidence in the program. Federal dollars obligated for contracts to all women-owned small businesses increased from $18.2 billion in fiscal year 2012 to $21.4 billion in fiscal year 2017. These figures include contracts for any type of good or service awarded under the WOSB program, under other federal programs, or through full and open competition. Contracts awarded to all women-owned small businesses within WOSB-program-eligible industries also increased during this period—from about $15 billion to $18.8 billion, as shown in figure 1. However, obligations under the WOSB program represented only a small share of this increase. In fiscal year 2012, WOSB program contract obligations were 0.5 percent of contract obligations to all women-owned small businesses for WOSB-program-eligible goods or services (about $73.5 million), and in fiscal year 2017 this percentage had grown to 3.8 percent (about $713.3 million) (see fig. 1). From fiscal years 2012 through 2017, 98 percent of total dollars obligated for contracts to all women-owned small businesses in WOSB-program-eligible industries were not awarded under the WOSB program. Instead, these contracts were awarded without a set-aside or under other, longer-established socioeconomic contracting programs, such as HUBZone, SDVOSB, and 8(a). For example, during this period, dollars obligated to contracts awarded to women-owned small businesses without a set-aside represented about 34 percent of dollars obligated for contracts to all women-owned small businesses in these industries (see fig. 2). 
As shown in table 1, six federal agencies—DOD, DHS, Department of Commerce, Department of Agriculture, Department of Health and Human Services, and GSA—collectively accounted for nearly 83 percent of the obligations awarded under the WOSB program from the third quarter of fiscal year 2011 through the third quarter of fiscal year 2018, with DOD accounting for about 49 percent of the total. Contracting officers’ use of sole-source authority was relatively limited, representing about 12 percent of WOSB program obligations from January 2016 through June 2018. In fiscal year 2017—the only full fiscal year for which we have data on sole-source authority—about $77 million was obligated using sole-source authority. The share of sole-source awards as a percentage of total WOSB program set-asides also varied considerably by quarter—from as low as 5 percent in the third quarter of 2016 to as high as 21 percent in the first quarter of 2017 (see fig. 3). We spoke with 14 stakeholder groups to obtain their views on usage of the WOSB program. These groups consisted of staff within three federal agencies (DHS, DOD, and GSA), eight contracting offices within these agencies, and three third-party certifiers. Issues stakeholders discussed included the impact of sole-source authority and program-specific NAICS codes on program usage. Stakeholders also noted the potential effect of other program requirements on contracting officers’ willingness to use the program, and some suggested that SBA provide additional guidance and training to contracting officers. Sole-source authority. Participants in 12 of the 14 stakeholder groups commented on the effect of sole-source authority on WOSB program usage. Staff from 4 of the 12 stakeholder groups—including three contracting offices—said that sole-source authority generally had no effect on the use of the WOSB program. 
One of these stakeholders believed contracting officers seldom use the authority because they lack an understanding of how and when to use it; therefore, in this stakeholder’s opinion, use of the WOSB program has not generally changed since the authority was implemented. However, staff from two contracting offices and one third-party certifier said that sole-source authority was a positive addition because, for example, it can significantly reduce the lead time before a contracting officer can offer a contract award to a firm. Staff from one of these two contracting offices stated that the award process can take between 60 and 90 days using sole-source authority, compared to 6 to 12 months using a competitive WOSB program set-aside. These staff also said that negotiating the terms of a sole-source contract is easier, from a contracting officer’s perspective, because they can communicate directly with the firm. As discussed previously, SBA officials we interviewed said that adding sole-source authority to the WOSB program made the program more consistent with other existing socioeconomic set-aside programs, such as 8(a) and HUBZone. The remaining five stakeholder groups that discussed the effects of WOSB sole-source authority described difficulties with implementing it. Specifically, representatives from DHS, DOD, and one third-party certifier said that executing sole-source authority under the WOSB program is difficult for contracting officers because rules for sole-source authority under WOSB are different from those under other SBA programs, such as 8(a) and HUBZone. For example, the FAR’s requirement that contracting officers justify, in writing, why they do not expect other WOSBs or EDWOSBs to submit offers on a contract is stricter under the WOSB program than it is for the 8(a) program. Further, staff from one contracting office noted that justifications for WOSB set-asides must then be published on a federal website. 
In contrast, contracting officers generally do not need to prepare and publish a justification under the 8(a) program. According to staff from another contracting office, it may be difficult to find more than one firm qualified to do the work under some WOSB-eligible NAICS codes, but contracting officers would still have to conduct market research and explain why they do not expect additional offers in order to set the contract aside for a WOSB. Program-specific NAICS codes. Participants in 13 of the 14 stakeholder groups we interviewed commented on the requirement that WOSB program set-asides be awarded within certain industries, represented by NAICS codes. For example, two third-party certifiers we interviewed recommended that the NAICS codes be expanded or eliminated to provide greater opportunities for WOSBs to win contracts under the program. Another third-party certifier said that some of its members focus their businesses’ marketing efforts on industries specific to the WOSB program to help them compete for such contracts. Representatives from GSA and DHS made comments about limitations with respect to the WOSB program’s NAICS code requirement. Staff we interviewed from three contracting offices made similar statements, adding that the NAICS codes limit opportunities to award a contract to a WOSB or EDWOSB because they are sufficient in some industry areas but not others. All five of these stakeholder groups suggested that NAICS codes be removed from the program’s requirements to increase opportunities for WOSBs. Conversely, staff from five other contracting offices we interviewed generally expressed positive views about the WOSB program’s NAICS code requirements and stated that eligible codes line up well with the services for which they generally contract. Finally, SBA officials noted that there are no plans to reassess the NAICS codes until about 2020. 
However, SBA officials also stated that the NAICS code requirements complicate the WOSB program and add confusion for contracting officers who use the program, as compared to other socioeconomic programs that do not have such requirements, such as HUBZone or 8(a). Requirement to verify eligibility documentation. Staff from 7 of the 14 stakeholder groups we interviewed discussed the requirement for the contracting officer to review program eligibility documentation and how this requirement affects their decision to use the program. For example, staff from one contracting office said that using the 8(a) or HUBZone programs is easier because 8(a) and HUBZone applicants are already certified by SBA; therefore, the additional step to verify documentation for eligibility is not needed. GSA officials noted that eliminating the need for contracting officers to take additional steps to review eligibility documentation for WOSB-program set-asides—in addition to checking the System for Award Management—could create more opportunities for WOSBs by reducing burden on contracting officers. However, staff from two contracting offices said it is not more difficult to award contracts under the WOSB program versus other socioeconomic programs. WOSB program guidance. Staff from 13 of the 14 stakeholder groups we interviewed discussed guidance available to contracting officers under the WOSB program. Most generally said that the program requirements outlined in the FAR are fairly detailed and help contracting officers implement the program. According to SBA officials, SBA provides training on WOSB program requirements to contracting officers in federal agencies by request, through outreach events, and through an annual webinar. SBA officials also said that the training materials include all the regulatory issues that contracting officers must address. 
However, representatives from two third-party certifiers described feedback received from their members about the need to provide additional training and guidance for contracting officers to better understand and implement the WOSB program. Staff from two contracting offices also expressed the need for SBA to provide additional training and guidance. Staff from one of these contracting offices said that the last time they received training on the WOSB program was in 2011, when the program was first implemented. Staff in the other contracting office added that the most recent version of a WOSB compliance guide they could locate online was at least 6 years old. SBA officials estimated that the WOSB compliance guide was removed from their public website in March 2016 because it was difficult to keep the document current and officials did not want to risk publishing a guide that was out-of-date. SBA officials also said that there are no plans to issue an updated guide as the FAR is sufficient. The stakeholder groups also identified positive aspects of the WOSB program. Specifically, staff from seven stakeholder groups believed that the program provided greater opportunities for women-owned small businesses to obtain contracts in industries in which they are underrepresented. In addition, staff from three stakeholder groups mentioned that SBA-led initiatives, such as the Small Business Procurement Advisory Council and SBA’s co-sponsorship of the ChallengeHER program, help improve collaboration between federal agencies and the small business community and overall government contracting opportunities for women-owned small businesses. The WOSB program aims to enhance federal contracting opportunities for women-owned small businesses. However, weaknesses in SBA’s management of the program continue to hinder its effectiveness. 
As of February 2019, SBA had not fully implemented comprehensive procedures to monitor the performance of the WOSB program’s third-party certifiers and had not taken steps to provide reasonable assurance that only eligible businesses obtain WOSB set-aside contracts, as recommended in our 2014 report. Without ongoing monitoring and reviews of third-party certifier reports, SBA cannot ensure that the certifiers are fulfilling the requirements of their agreements with SBA, and it is missing opportunities to gain information that could help improve the program’s processes. Further, limitations in SBA’s procedures for conducting, documenting, and analyzing eligibility examinations inhibit its ability to better understand the eligibility of businesses before they apply for and potentially receive contracts, which exposes the program to unnecessary risk of fraud. In addition, given that SBA does not expect to finish implementing the changes in the 2015 NDAA until January 1, 2020, these continued oversight deficiencies increase program risk. As a result, we maintain that our previous recommendations should be addressed. In addition, SBA has not addressed deficiencies that the SBA OIG identified previously—and that we also identified during this review—related to WOSB set-asides being awarded under ineligible industry codes. Although SBA has updated its training and outreach materials for the WOSB program to address NAICS code requirements, it has not developed plans to review FPDS-NG data or provide targeted outreach or training to agencies that may be using ineligible codes. As a result, SBA is not aware of the extent to which individual agencies are following program requirements and which agencies may require targeted outreach or additional training. Reviewing FPDS-NG data would allow SBA to identify those agencies (and contracting offices within them) that could benefit from such training. 
Without taking these additional steps, SBA cannot provide reasonable assurance that WOSB program requirements are being met. The SBA Administrator or her designee should (1) develop a process for periodically reviewing FPDS-NG data to determine the extent to which agencies are awarding WOSB program set-asides under ineligible NAICS codes and (2) take steps to address any issues identified, such as providing targeted outreach or training to agencies making awards under ineligible codes. (Recommendation 1) We provided a draft of this report to DHS, DOD, GSA, and SBA for review and comment. DHS, DOD, and GSA indicated that they did not have comments. SBA provided a written response, reproduced in appendix II, in which it agreed with our recommendation. SBA stated that it will implement a process to review WOSB program data extracted from FPDS-NG and certified by each agency. Specifically, through the government-wide Small Business Procurement Advisory Council, SBA plans to provide quarterly presentations to contracting agencies’ staff that would include training and an analysis and review of the data. The response also reiterated that SBA has contacted GSA to implement a system change to FedBizOpps.gov that would prevent contracting officers from entering an invalid NAICS code for a WOSB program set-aside. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time we will send copies of this report to appropriate congressional committees and members, the Acting Secretary of DOD, the Secretary of DHS, the Administrator of GSA, the Administrator of SBA, and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or shearw@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This report examines (1) the extent to which the Small Business Administration (SBA) has implemented changes to the Women-Owned Small Business Program (WOSB program) made by the 2015 National Defense Authorization Act (2015 NDAA); (2) the extent to which SBA has implemented changes to address previously identified oversight deficiencies; and (3) changes in WOSB program use since 2011 and stakeholder views on its use, including since the 2015 implementation of sole-source authority. To describe the extent to which SBA has implemented changes to the WOSB program made by the 2015 NDAA, we reviewed relevant legislation, including the 2015 NDAA; related proposed regulations; and SBA documentation. We reviewed comment letters on the advance notice of proposed rulemaking for the new WOSB program certification process from three of the four SBA-approved third-party certifiers: the El Paso Hispanic Chamber of Commerce, the U.S. Women’s Chamber of Commerce, and the Women’s Business Enterprise National Council. To ensure the accuracy of our characterization of the comment letters, one staff member independently summarized the third-party certifiers’ comments on the advance notice, and a second staff member then reviewed the results. We also interviewed SBA officials, including officials from SBA’s Office of Government Contracting and Business Development. To respond to the second and third objectives, we conducted interviews on SBA’s implementation and oversight of the WOSB program and its use with SBA officials, three of the WOSB program’s four third-party certifiers, three selected agencies (and three agency components within two of the agencies), and a total of eight selected contracting offices within six selected agencies or components. 
Using data from the Federal Procurement Data System-Next Generation (FPDS-NG), we judgmentally selected the three federal agencies and three components (for a total of six federal agencies and components) because their WOSB program dollar obligations (including competed and sole-source) were among the largest or because we had interviewed them for our prior work. Specifically, we selected the following six agencies or agency components: the Department of Homeland Security (DHS) and, within DHS, the Coast Guard; the Department of Defense (DOD) and, within DOD, the U.S. Army and U.S. Navy; and the General Services Administration (GSA). Within the components and GSA, we judgmentally selected eight contracting offices (two each from Coast Guard, U.S. Army, U.S. Navy, and GSA) based on whether they had a relatively large amount of obligations and had used multiple types of WOSB program set-asides (competed or sole-source) to WOSBs or economically disadvantaged women-owned small businesses (EDWOSB). To address our second objective, we reviewed the findings and recommendations in our October 2014 report and in audit reports issued by the SBA Office of Inspector General (OIG) in May 2015 and June 2018. We also reviewed SBA documentation on the WOSB program, including SBA’s 2017 Standard Operating Procedures and 2018 WOSB Program Desk Guide, results from 2016 compliance reviews of the four third-party certifiers, and SBA eligibility examinations from fiscal years 2012 through 2018. In addition, we analyzed FPDS-NG data on contract obligations to WOSB program set-asides from the third quarter of fiscal year 2011 through the third quarter of fiscal year 2018 to determine whether set-asides were made using eligible program-specific North American Industry Classification System (NAICS) codes. To conduct this analysis, we compared contract obligations in FPDS-NG with the NAICS codes eligible under the WOSB program at the time of the award for the time frame under review.
The WOSB program’s eligible NAICS codes have changed three times since the program was implemented in 2011, but the eligible industries have changed only once. SBA commissioned the RAND Corporation to conduct the first study to assist SBA in determining eligible NAICS codes under the WOSB program. Based on the results of the RAND study, SBA identified 45 four-digit WOSB NAICS codes and 38 four-digit EDWOSB NAICS codes, for a total of 83 four-digit NAICS codes. WOSB and EDWOSB NAICS codes are different and do not overlap. In December 2015, the Department of Commerce issued the next study, which increased the total number of NAICS codes under the program to 113 four-digit codes, with 92 WOSB NAICS codes and 21 EDWOSB NAICS codes (which became effective March 2016). Often, there is a time lag between the effective date of NAICS codes and when they are entered in FPDS-NG. Therefore, we did not classify a contract as having an ineligible NAICS code if the code eventually became eligible under the WOSB program. We also excluded actions in FPDS-NG coded other than as a small business. These actions represented a small amount of contract obligations—approximately $125,000. We compared SBA information on its oversight activities and responses to previously identified deficiencies against federal internal control standards and GAO’s fraud risk framework. We assessed the reliability of FPDS-NG data by considering their known strengths and weaknesses, based on our past work and through electronic testing for missing data, outliers, and inconsistent coding in the data elements we used for our analysis. We also reviewed FPDS-NG documentation, including the FPDS-NG data dictionary, FPDS-NG data validation rules, FPDS-NG user manual, prior GAO reliability assessments, and relevant SBA OIG audit reports. Based on these steps, we concluded that the data were sufficiently reliable for the purposes of reporting on trends in the WOSB program and the use of sole-source authority under the program.
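The time-aware eligibility comparison described above can be sketched in code. This is an illustrative sketch only: the NAICS prefixes, dates, and the `is_eligible` helper below are hypothetical stand-ins for the analysis logic, not the program's actual code lists or GAO's actual tooling.

```python
from datetime import date

# Illustrative (not actual) eligibility windows: each four-digit NAICS prefix
# maps to the date range during which it was eligible for WOSB set-asides.
# A `None` end date means the code is still eligible.
ELIGIBLE_WINDOWS = {
    "2213": (date(2011, 4, 1), None),   # hypothetical: eligible since program start
    "5413": (date(2016, 3, 1), None),   # hypothetical: added effective March 2016
}

def is_eligible(naics_code, award_date):
    """Return True if the code's 4-digit prefix was, or eventually became,
    eligible under the program.

    A code that only became eligible after the award date is still treated
    as eligible, mirroring the report's handling of the lag between a code's
    effective date and its entry in FPDS-NG."""
    window = ELIGIBLE_WINDOWS.get(naics_code[:4])
    if window is None:
        return False              # never eligible under the program
    start, end = window
    if end is not None and award_date > end:
        return False              # code had been removed by the award date
    return True                   # eligible now or eventually (entry-lag rule)

# A 2015 award under a code that became eligible in March 2016 is not flagged
# as ineligible, per the time-lag rule; a never-eligible code is flagged.
print(is_eligible("541330", date(2015, 6, 1)))  # True
print(is_eligible("999999", date(2015, 6, 1)))  # False
```

In a full analysis, each FPDS-NG contract action would be run through a check like this, with obligations under never-eligible codes summed separately, as in the report's exclusion of approximately $76.3 million.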
To describe how participation in the WOSB program has changed since 2011, including since the 2015 implementation of sole-source authority, we analyzed FPDS-NG data from the third quarter of fiscal year 2011 through the third quarter of fiscal year 2018. We identified any trends in WOSB program participation using total obligation dollars set aside for competitive and sole-source contracts awarded to WOSBs and EDWOSBs under the program. We also compared data on obligations for set-asides under the WOSB program with federal contract obligations for WOSB-program-eligible goods and services to all women-owned small businesses, including those made under different set-aside programs or with no set-asides, to determine the relative usage of the WOSB program. In our analysis, we excluded from WOSB program set-aside data actions in FPDS-NG coded other than as a small business (representing approximately $125,000) or coded under ineligible NAICS codes that were never eligible under the WOSB program (representing approximately $76.3 million). To describe stakeholder views on WOSB program use, we conducted semistructured interviews to gather responses from 14 stakeholder groups. These groups consisted of staff within three federal agencies (DHS, DOD, and GSA), eight contracting offices within these agencies, and three third-party certifiers (selection criteria described above). One person summarized the results of the interviews, and another person reviewed the summary of the interviews to ensure an accurate depiction of the comments. In addition, a third person then reviewed the summarized results. We conducted this performance audit from October 2017 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Allison Abrams (Assistant Director), Tiffani Humble (Analyst-in-Charge), Pamela Davidson, Jonathan Harmatz, Julia Kennon, Jennifer Schwartz, Rebecca Shea, Jena Sinkfield, Tyler Spunaugle, and Tatiana Winger made key contributions to this report.
|
In 2000, Congress authorized the WOSB program, allowing contracting officers to set aside procurements for women-owned small businesses in industries in which they are substantially underrepresented. To be eligible to participate in the WOSB program, firms have the option to self-certify or be certified by a third-party certifier. However, the 2015 NDAA changed the WOSB program by (1) authorizing SBA to implement sole-source authority, (2) eliminating the option for firms to self-certify as being eligible for the program, and (3) allowing SBA to implement a new certification process. GAO was asked to review the WOSB program. This report discusses (1) the extent to which SBA has addressed the 2015 NDAA changes, (2) SBA's efforts to address previously identified deficiencies, and (3) use of the WOSB program. GAO reviewed relevant laws, regulations, and program documents; analyzed federal contracting data from April 2011 through June 2018; and interviewed SBA officials, officials from contracting agencies selected to obtain a range of experience with the WOSB program, and three of the four private third-party certifiers. The Small Business Administration (SBA) has implemented one of the three changes to the Women-Owned Small Business (WOSB) program authorized in the National Defense Authorization Act of 2015 (2015 NDAA). Specifically, in September 2015 SBA published a final rule to implement sole-source authority, effective October 2015. As of February 2019, SBA had not eliminated the option for program participants to self-certify that they are eligible to participate, as required by the 2015 NDAA. SBA officials stated that this requirement would be addressed as part of the new certification process for the WOSB program, which they expect to implement by January 1, 2020. SBA has not addressed WOSB program oversight deficiencies identified in GAO's 2014 review (GAO-15-54).
For example, GAO previously recommended that SBA establish procedures to assess the performance of four third-party certifiers—private entities approved by SBA to certify the eligibility of WOSB firms. While SBA conducted a compliance review of the certifiers in 2016, it has no plans to regularly monitor them. By not improving its oversight of the WOSB program, SBA is limiting its ability to ensure that third-party certifiers are following program requirements. In addition, the implementation of sole-source authority in light of these continued oversight deficiencies can increase program risk. Consequently, GAO maintains that its prior recommendations should be addressed. In addition, similar to previous findings from SBA's Office of Inspector General, GAO found that about 3.5 percent of contracts using a WOSB set-aside were awarded for ineligible goods or services from April 2011 through June 2018. SBA does not review contracting data that could identify this problem and indicate which agencies making awards may need targeted outreach or training. As a result, SBA cannot provide reasonable assurance that WOSB program requirements are being met and that the program is meeting its goals. While federal contract obligations to all women-owned small businesses and WOSB program set-asides have increased since fiscal year 2012, WOSB program set-asides remain a small percentage (see figure). GAO recommends that SBA develop a process for periodically reviewing the extent to which WOSB program set-asides are awarded for ineligible goods or services and use the results to address identified issues, such as through targeted outreach or training on the WOSB program. SBA agreed with the recommendation.
|
The cost of the census has been escalating over the last several decennials. The 2010 decennial was the costliest U.S. Census in history at about $12.3 billion, and was about 31 percent more costly than the $9.4 billion 2000 Census (in 2020 dollars). The average cost for counting a housing unit increased from about $16 in 1970 to around $92 in 2010 (in 2020 dollars). According to the Department of Commerce (Department), the total cost of the 2020 Census is now estimated to be approximately $15.6 billion, more than $3 billion higher than previously reported by the Bureau. Meanwhile, the return of census questionnaires by mail (the primary mode of data collection) declined over this period from 78 percent in 1970 to 63 percent in 2010 (see figure 1). Declining mail response rates—a key indicator in determining the cost-effectiveness of the census—are significant and lead to higher costs. This is because the Bureau sends temporary workers to each non-responding household to obtain census data. As a result, non-response follow-up is the Bureau’s largest and most costly field operation. In many ways, the Bureau has had to invest substantially more resources each decade to conduct the enumeration. Achieving a complete and accurate census is becoming an increasingly daunting task, in part because the nation’s population is growing larger, more diverse, and more reluctant to participate. When the census misses a person who should have been included, it results in an undercount; conversely, an overcount occurs when an individual is counted more than once. Such errors are particularly problematic because of their impact on various subgroups. Minorities, renters, and children, for example, are more likely to be undercounted by the census. The challenges to an accurate count can be seen, for example, in the difficulties associated with counting people residing in unconventional and hidden housing units, such as converted basements and attics.
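The decade-over-decade figures cited above can be checked with quick arithmetic. The sketch below simply recomputes the percentages from the report's 2020-dollar amounts; it contains no data beyond those figures.

```python
# Census costs from the report, in 2020 dollars.
cost_2000 = 9.4e9       # 2000 Census
cost_2010 = 12.3e9      # 2010 Census
growth = (cost_2010 - cost_2000) / cost_2000
print(f"2000 -> 2010 cost growth: {growth:.0%}")        # about 31 percent

# Average cost per housing unit, 1970 vs. 2010 (2020 dollars).
per_unit_1970, per_unit_2010 = 16, 92
multiple = per_unit_2010 / per_unit_1970
print(f"Per-unit cost multiple since 1970: {multiple:.2f}x")
```

The roughly 31 percent growth matches the report's comparison of the 2000 and 2010 totals, and the per-housing-unit cost is close to six times its 1970 level.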
In figure 2, what appears to be a small, single-family house could contain an apartment, as suggested by its two doorbells. If an address is not in the Bureau’s address file, its residents are less likely to be included in the census. The Bureau plans to rely heavily on both new and legacy IT systems and infrastructure to support the 2018 End-to-End Test and the 2020 Census operations. For example, the Bureau plans to deploy and use 43 systems in the 2018 End-to-End Test. Eleven of these systems are being developed or modified as part of an enterprise-wide initiative called Census Enterprise Data Collection and Processing (CEDCaP), which is managed within the Bureau’s IT Directorate. This initiative is a large and complex modernization program intended to deliver a system-of-systems to support all of the Bureau’s survey data collection and processing functions, rather than continuing to rely on unique, survey-specific systems with redundant capabilities. According to Bureau officials, the remaining 32 IT systems are being developed or modified by the 2020 Census Directorate or other Bureau divisions. To support the 2018 End-to-End Test, the Bureau plans to incrementally deploy and use the 43 systems for nine operations from December 2016 through the end of the test in April 2019. These nine operations are: (1) in-office address canvassing, (2) recruiting staff for address canvassing, (3) training for address canvassing, (4) in-field address canvassing, (5) recruiting staff for field enumeration, (6) training for field enumeration, (7) self-response (i.e., Internet, phone, or paper), (8) field enumeration, and (9) tabulation and dissemination. 
We added the 2020 Census to our list of high-risk programs in February 2017 because (1) innovations never before used in prior enumerations will not be fully tested; (2) the Bureau continues to face challenges in implementing and securing IT systems; and (3) the Bureau needs to control any further cost growth and develop reliable cost estimates. Each of these key risks is discussed in greater detail below; if not sufficiently addressed, these risks could adversely impact the cost and/or quality of the enumeration. Moreover, they compound the inherent challenges of conducting a successful census, such as the nation’s increasingly diverse population and concerns over personal privacy. The basic design of the enumeration—mail out and mail back of the census questionnaire with in-person follow-up for non-respondents—has been in use since 1970. However, a key lesson learned from the 2010 Census and earlier enumerations is that this “traditional” design is no longer capable of cost-effectively counting the population. In response to its own assessments, our recommendations, and studies by other organizations, the Bureau has fundamentally re-examined its approach for conducting the 2020 Census. Specifically, its plan for 2020 includes four broad innovation areas: re-engineering field operations, using administrative records, verifying addresses in-office, and developing an Internet self-response option (see table 1). The Bureau initially estimated that, if they function as planned, these innovations could result in savings of over $5 billion (in 2020 dollars) when compared to its estimates of the cost for conducting the census with traditional methods. However, in June 2016, we reported that the Bureau’s life-cycle cost estimate of $12.5 billion, developed in October 2015, was not reliable and did not adequately account for risk. As discussed earlier in this statement, the Department has recently updated this figure and now estimates a life-cycle cost of $15.6 billion.
At this higher level, the cost savings would be reduced to around $1.9 billion. While the planned innovations could help control costs, they also introduce new risks, in part because they include new procedures and technology that have not been used extensively in earlier decennials, if at all. Our prior work has shown the importance of the Bureau conducting a robust testing program, including the 2018 End-to-End Test. Rigorous testing is a critical risk mitigation strategy because it provides information on the feasibility and performance of individual census-taking activities, their potential for achieving desired results, and the extent to which they are able to function together under full operational conditions. To address some of these challenges, we have made several recommendations aimed at improving reengineered field operations, using administrative records, verifying the accuracy of the address list, and securing census responses via the Internet. The Bureau has held a series of operational tests since 2012 but, according to the Bureau, has scaled back recent tests because of funding uncertainties. For example, the Bureau canceled the field components of the 2017 Census Test, including non-response follow-up, a key census operation. In November 2016, we reported that the cancelation of the 2017 field test was a lost opportunity to test, refine, and integrate operations and systems, and that it put more pressure on the 2018 End-to-End Test to demonstrate that enumeration activities will function under census-like conditions as needed for 2020. However, in May 2017, the Bureau scaled back the operational scope of the 2018 End-to-End Test; of the three planned test sites, only the Rhode Island site would fully implement the test. The Washington and West Virginia state test sites would test just one field operation, address canvassing.
In addition, due to budgetary concerns, the Bureau decided to remove three coverage measurement operations (and the technology that supports them) from the scope of the test. Without sufficient testing, operational problems can go undiscovered and the opportunity to improve operations will be lost, in part because the 2018 End-to-End Test is the last opportunity to demonstrate census technology and procedures across a range of geographic locations, housing types, and demographic groups. On August 28, 2017, temporary census employees known as address listers began implementing the in-field component of address canvassing for the 2018 End-to-End Test. Listers walked the streets of designated census blocks at all three test sites to verify addresses and geographic locations. The operation ended on September 27, 2017. As part of our ongoing work, we visited all three test sites and observed 18 listers conduct address canvassing. Generally, we found that listers were able to conduct address canvassing as planned. However, we also noted several challenges, and we shared the following preliminary observations from our site visits with the Bureau:

Internet connectivity was problematic at the West Virginia test site. We spoke to four census field supervisors who described certain areas as dead spots where Internet and cell phone service were not available. Those same supervisors also told us that only certain cell service providers worked in certain areas. To access the Internet or cell service in those areas, census workers sometimes needed to drive several miles.

The allocation of lister assignments was not always optimal. Listers were supposed to be given assignments close to where they live in order to draw on their local knowledge and to limit the number of miles driven to and from their assignment areas. Bureau officials told us this was a challenge at all three test sites. Moreover, at one site the area census manager told us that some listers were being assigned work in another county even though blocks closer to where they resided were still unassigned. Relying on local knowledge and limiting miles driven can increase both the efficiency and effectiveness of address canvassing.

The assignment of some large blocks early in the operation did not occur as planned. At all three 2018 End-to-End Test sites, Bureau managers had to manually assign some large blocks (some with hundreds of housing units). Assigning large blocks early is important because leaving them to be canvassed until the end of the operation could jeopardize the timely completion of address canvassing.

Some completed address and map updates did not properly transmit. According to Bureau officials, this happened at all three test sites during the test and involved data on 11 laptops covering 25 blocks. The address and map information on seven of the laptops was permanently deleted; data on the other four laptops were still available, and the Bureau is examining those laptops to determine what prevented the data from being transmitted. In Providence, Rhode Island, where the full test will take place, the Bureau recanvassed the blocks where data were lost to ensure that the address and map information going forward was correct. It will be important for the Bureau to understand what happened and to ensure that all address and map data are properly transmitted during the 2020 Census.

We have discussed these challenges with Bureau officials, who stated that overall they are satisfied with the implementation of address canvassing but agreed that resolving the challenges discovered during the operation, some of which can affect its efficiency and effectiveness, will be important before the 2020 Census. We plan to issue a report early in 2018 on address canvassing at the three test sites.
We have previously reported that the Bureau faced challenges in managing and overseeing IT programs, systems, and contractors supporting the 2020 Census. Specifically, it has been challenged in managing schedules, costs, contracts, governance and internal coordination, and security for its IT systems. As a result of these challenges, the Bureau is at risk of being unable to fully implement key IT systems necessary to support the 2020 Census and conduct a cost-effective enumeration. We have previously recommended that the Bureau take action to improve its implementation and management of IT in areas such as governance and internal coordination. We also have ongoing work reviewing each of these areas. Our ongoing work has indicated that the Bureau faces significant challenges in managing the schedule for developing and testing systems for the 2018 End-to-End Test that began in August 2017. In this regard, the Bureau still has significant development and testing work that remains to be completed. As of August 2017, of the 43 systems in the test, the Bureau reported that 4 systems had completed development and integration testing, while the remaining 39 systems had not completed these activities. Of these 39 systems, the Bureau reported that it had deployed a portion of the functionality for 21 systems to support address canvassing for the 2018 End-to-End Test; however, it had not yet deployed any functionality for the remaining 18 systems for the test. Figure 3 summarizes the development and testing status for the 43 systems planned for the 2018 End-to-End Test. Moreover, due to challenges experienced during systems development, the Bureau has delayed key IT milestone dates (e.g., dates to begin integration testing) by several months for several of the systems in the 2018 End-to-End Test. Figure 4 depicts the delays to the deployment dates for the operations in the 2018 End-to-End Test, as of August 2017.
Our ongoing work also indicates that the Bureau is at risk of not meeting the updated milestone dates. For example, in June 2017 the Bureau reported that at least two of the systems expected to be used in the self-response operation (the Internet self-response system and the call center system) are at risk of not meeting the delayed milestone dates. In addition, in September 2017 the Bureau reported that at least two of the systems expected to be used in the field enumeration operation (the enumeration system and the operational control system) are at risk of not meeting their delayed dates. Combined, these delays reduce the time available to conduct the security reviews and approvals for the systems being used in the 2018 End-to-End Test. We previously testified in May 2017 that the Bureau faced similar challenges leading up to the 2017 Census Test, including experiencing delays in system development that led to compressed time frames for security reviews and approvals. Specifically, we noted that the Bureau did not have time to thoroughly assess the low-impact components of one system and complete penetration testing for another system prior to the test, but accepted the security risks and uncertainty due to compressed time frames. We concluded that, for the 2018 End-to-End Test, it will be important that these security assessments are completed in a timely manner and that risks are at an acceptable level before the systems are deployed. The Bureau noted that, if it continues to be behind schedule, key field operations for the 2018 End-to-End Test (such as non-response follow-up) could be delayed or canceled, which may affect the Bureau’s ability to meet the test’s objectives. As we stated earlier, without sufficient testing, operational problems can go undiscovered and the opportunity to improve operations will be lost.
Bureau officials are evaluating options to decrease the impact of these delays on integration testing and security review activities by, for example, utilizing additional staff. We have ongoing work reviewing the Bureau’s development and testing delays and the impacts of these delays on systems readiness for the 2018 End-to-End Test. The Bureau faces challenges in reporting and controlling IT cost growth. In April 2017, the Bureau briefed us on its efforts to estimate the costs for the 2020 Census, during which it presented IT costs of about $2.4 billion from fiscal years 2018 through 2021. Based on this information and other corroborating IT contract information provided by the Bureau, we testified in May 2017 that the Bureau had identified at least $2 billion in IT costs. However, in June 2017, Bureau officials in the 2020 Census Directorate told us that the data they provided in April 2017 did not reflect all IT costs for the 2020 program. The officials provided us with an analysis of the Bureau’s October 2015 cost estimate that identified $3.4 billion in total IT costs from fiscal years 2012 through 2023. These costs included, among other things, those associated with system engineering, test and evaluation, and infrastructure, as well as a portion of the costs for the CEDCaP program. Yet, our ongoing work determined the Bureau’s $3.4 billion cost estimate from October 2015 did not reflect its current plans for acquiring IT to be used during the 2020 Census and that the related costs are likely to increase: In August 2016, the Bureau awarded a technical integration contract for about $886 million, a cost that was not reflected in the $3.4 billion expected IT costs. More recently, in May 2017, we testified that the scope of work for this contract had increased since the contract was awarded; thus, the corresponding contract costs were likely to rise above $886 million, as well. 
In March 2017, the Bureau reported that the contract associated with the call center and IT system to support the collection of census data over the phone was projected to overrun its initial estimated cost by at least $40 million. In May 2017, the Bureau reported that the CEDCaP program’s cost estimate was increasing by more than $400 million—from its original estimate of $548 million in 2013 to a revised estimate of $965 million in May 2017. In June 2017, the Bureau awarded a contract for mobile devices and associated services for about $283 million, an amount that is about $137 million higher than the cost for these devices and services identified in its October 2015 estimate. As a result of these factors, the Bureau’s $3.4 billion estimate of IT costs is likely to be at least $1.4 billion higher, thus increasing the total costs to at least $4.8 billion. Figure 5 identifies the Bureau estimate of total IT costs associated with the 2020 program as of October 2015, as well as anticipated cost increases as of August 2017. IT cost information that is accurately reported and clearly communicated is necessary so that Congress and the public have confidence that taxpayer funds are being spent in an appropriate manner. However, changes in the Bureau’s reporting of these total costs, combined with cost growth since the October 2015 estimate, raise questions as to whether the Bureau has a complete understanding of the IT costs associated with the 2020 program. In early October 2017, the Secretary of Commerce testified that he expected the total IT costs for the 2020 Census to be about $4.96 billion. This estimate of IT costs is approximately $1.6 billion higher than the Bureau’s October 2015 estimate and further confirms our analysis of expected IT cost increases discussed above. As of late October 2017, the Bureau and Department were still finalizing the documentation used to develop the new cost estimate. 
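The individual increases just described account for the jump from the October 2015 estimate to the revised floor. A quick arithmetic sketch, using only the dollar figures reported above:

```python
base_estimate = 3.4e9   # October 2015 estimate of total IT costs

# Increases identified in the report (lower-bound figures).
increases = {
    "technical integration contract (absent from 2015 estimate)": 886e6,
    "call center / phone data-collection overrun (at least)": 40e6,
    "CEDCaP growth ($548M to $965M)": 965e6 - 548e6,
    "mobile devices above the 2015 estimate": 137e6,
}

total_increase = sum(increases.values())
revised_floor = base_estimate + total_increase
print(f"Identified increases: ${total_increase / 1e9:.2f} billion")   # ~$1.48 billion
print(f"Revised IT cost floor: ${revised_floor / 1e9:.2f} billion")   # ~$4.88 billion
```

The roughly $1.48 billion sum is consistent with the statement that the $3.4 billion estimate is "likely to be at least $1.4 billion higher," putting total IT costs at roughly $4.8 billion or more.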
After these documents are complete and made available for inspection, as part of our ongoing work, we plan to evaluate whether this updated IT cost estimate includes the cost increases, discussed above, that were not included in the October 2015 estimate. Our ongoing work also determined that the Bureau faces challenges in managing its significant contractor support. The Bureau is relying on contractor support in many key areas of the 2020 Census. For example, it is relying on contractors to develop a number of key systems and components of the IT infrastructure. These activities include (1) developing the IT platform that is intended to be used to collect data from those responding via the Internet, telephone, and non-response follow-up activities; (2) procuring the mobile devices and cellular service to be used for non-response follow-up; and (3) developing the infrastructure in the field offices. According to Bureau officials, contractors are also providing support in areas such as fraud detection, cloud computing services, and disaster recovery. In addition to the development of key technology, the Bureau is relying on contractor support for integrating all of the key systems and infrastructure. The Bureau awarded a contract to integrate the 2020 Census systems and infrastructure in August 2016. The contractor’s work was to include evaluating the systems and infrastructure and acquiring the infrastructure (e.g., cloud or data center) to meet the Bureau’s scalability and performance needs. It was also to include integrating all of the systems, supporting technical testing activities, and developing plans for ensuring the continuity of operations. Since the contract was awarded, the Bureau has modified the scope to also include assisting with operational testing activities, conducting performance testing for two Internet self-response systems, and technical support for the implementation of the paper data capture system. 
However, our ongoing work has indicated that the Bureau is facing staffing challenges that could impact its ability to manage and oversee the technical integration contractor. Specifically, the Bureau is managing the integration contractor through a government program management office, but this office is still filling vacancies. As of October 2017, the Bureau reported that 35 of the office’s 58 federal employee positions were vacant. As a result, this program management office may not be able to provide adequate oversight of contractor cost, schedule, and performance. The delays during the 2017 Test and preparations for the 2018 End-to-End Test raise concerns regarding the Bureau’s ability to effectively perform contractor management. As we reported in November 2016, a greater reliance on contractors for these key components of the 2020 Census requires the Bureau to focus on sound management and oversight of the key contracts, projects, and systems. As part of our ongoing work, we plan to monitor the Bureau’s progress in managing its contractor support. Effective IT governance can drive change, provide oversight, and ensure accountability for results. Further, effective IT governance was envisioned in the provisions referred to as the 2014 Federal Information Technology Acquisition Reform Act (FITARA), which strengthened and reinforced the role of the departmental CIO. The component CIO also plays a role in effective IT governance as subject to the oversight and policies of the parent department or agency implementing FITARA. To ensure executive-level oversight of the key systems and technology, the Bureau’s CIO (or a representative) is a member of the governance boards that oversee all of the operations and technology for the 2020 Census. However, in August 2016 we reported on challenges the Bureau has had with IT governance and internal coordination, including weaknesses in its ability to monitor and control IT project costs, schedules, and performance.
We made several recommendations to the Department of Commerce to direct the Bureau to, among other things, better ensure that risks are adequately identified and schedules are aligned. The Department agreed with our recommendations. However, as of October 2017, the Bureau had only fully implemented one recommendation and had taken initial steps toward implementing others. Further, given the schedule delays and cost increases previously mentioned, and the vast amount of development, testing, and security assessments left to be completed, we remain concerned about executive-level oversight of systems and security. Moving forward, it will be important that the CIO and other Bureau executives continue to use a collaborative governance approach to effectively manage risks and ensure that the IT solutions meet the needs of the agency within cost and schedule. As part of our ongoing work, we plan to monitor the steps the Bureau is taking to effectively oversee and manage the development and acquisition of its IT systems. In November 2016, we described the significant challenges that the Bureau faced in securing systems and data for the 2020 Census, and we noted that tight time frames could exacerbate these challenges. Two such challenges were (1) ensuring that individuals gain only limited and appropriate access to the 2020 Census data, including personally identifiable information (PII) (e.g., name, personal address, and date of birth), and (2) making certain that security assessments were completed in a timely manner and that risks were at an acceptable level. Protecting PII, for example, is especially important because a majority of the 43 systems to be used in the 2018 End-to-End Test contain PII, as reflected in figure 6. To address these and other challenges, federal law and guidance specify requirements for protecting federal information and information systems, such as those to be used in the 2020 Census.
Specifically, the Federal Information Security Management Act of 2002 and the Federal Information Security Modernization Act of 2014 (FISMA) require executive branch agencies to develop, document, and implement an agency-wide program to provide security for the information and information systems that support operations and assets of the agency. Accordingly, the National Institute of Standards and Technology (NIST) developed risk management framework guidance for agencies to follow in developing information security programs. Additionally, the Office of Management and Budget’s (OMB) revised Circular A-130 on managing federal information resources required agencies to implement the NIST risk management framework to integrate information security and risk management activities into the system development life cycle. In accordance with FISMA, NIST guidance, and OMB guidance, the Office of the CIO established a risk management framework. This framework requires that system developers ensure that each of the systems undergoes a full security assessment, and that system developers remediate critical deficiencies. In addition, according to the Bureau’s framework, system developers must ensure that each component of a system has its own system security plan, which documents how the Bureau plans to implement security controls. As a result, system developers for a single system might develop multiple system security plans which all have to be approved as part of the system’s complete security documentation. We have ongoing work that is reviewing the extent to which the Bureau’s framework meets the specific requirements of the NIST guidance. According to the Bureau’s framework, each of the 43 systems in the 2018 End-to-End Test will need to have complete security documentation (such as system security plans) and an approved authorization to operate prior to their use in the 2018 End-to-End Test. 
However, our ongoing work indicates that, while the Bureau is completing these steps for the 43 systems to be used in the 2018 End-to-End Test, significant work remains. Specifically, as we reported in October 2017:

None of the 43 systems are fully authorized to operate through the completion of the 2018 End-to-End Test. Bureau officials from the CIO’s Office of Information Security stated that these systems will need to be reauthorized because, among other things, they have additional development work planned; are being moved to a different infrastructure environment (e.g., from a data center to a cloud-based environment); or have a current authorization that expires before the completion of the 2018 End-to-End Test. The amount of work remaining is concerning because the test has already begun, and the delays experienced in system development and testing mentioned earlier reduce the time available for performing the security assessments needed to fully authorize these systems before the completion of the 2018 End-to-End Test.

Thirty-seven systems have a current authorization to operate, but the Bureau will need to reauthorize these systems before the completion of the 2018 End-to-End Test. This is due to the reasons mentioned previously, such as additional development work planned and changes to the infrastructure environments.

Two systems have not yet obtained an authorization to operate.

For the remaining four systems, the Bureau has not yet provided us with documentation about the current authorization status.

Figure 7 depicts the authorization to operate status for the systems being used in the 2018 End-to-End Test, as reported by the Bureau.
Because many of the systems that will be a part of the 2018 End-to-End Test are not yet fully developed, the Bureau has not finalized all of the security controls to be implemented; assessed those controls; developed plans to remediate control weaknesses; and determined whether there is time to fully remediate any deficiencies before the systems are needed for the test. In addition, as discussed earlier, the Bureau is facing system development challenges that are delaying the completion of milestones and compressing the time available for security testing activities. While the large-scale technological changes (such as Internet self-response) increase the likelihood of efficiency and effectiveness gains, they also introduce many information security challenges. The 2018 End-to-End Test also involves collecting PII on hundreds of thousands of households across the country, which further increases the need to properly secure these systems. Thus, it will be important that the Bureau provides adequate time to perform these security assessments, completes them in a timely manner, and ensures that risks are at an acceptable level before the systems are deployed. We plan to continue monitoring the Bureau’s progress in securing its IT systems and data as part of our ongoing work. Earlier this month, the Department announced that it had updated the October 2015 life-cycle cost estimate and now projects the life-cycle cost of the 2020 Census will be $15.6 billion, more than a $3 billion (27 percent) increase over the Bureau’s earlier estimate. The higher estimated life-cycle cost is due, in part, as we reported in June 2016, to the Bureau’s failure to meet best practices for a quality cost estimate.
Specifically, we reported that, although the Bureau had taken steps to improve its capacity to carry out an effective cost estimate, such as establishing an independent cost estimation office, its October 2015 version of the estimate for the 2020 Census only partially met the characteristics of two best practices (comprehensive and accurate) and minimally met the other two (well-documented and credible). We also reported that risks were not properly accounted for in the cost estimate. We recommended that the Bureau take action to ensure its 2020 Census cost estimate meets all four characteristics of a reliable cost estimate, as well as properly account for risk to ensure there are appropriate levels for budgeted contingencies. The Bureau agreed with our recommendations. In response, the Department of Commerce reported that in May 2017, a multidisciplinary team was created to evaluate the 2020 Census program and to produce an independent cost estimate. Factors driving the increased cost estimate include changes to assumptions relating to self-response rates and wage levels for temporary census workers, as well as the fact that major contracts and IT scale-up plans and procedures were not effectively planned, managed, and executed. The new estimate also includes a contingency of 10 percent of estimated costs per year as insurance against “unknown-unknowns,” such as a major cybersecurity event. The Bureau and Department are still finalizing the documentation used to develop the $15.6 billion cost estimate. Until these documents are complete and made available for inspection, we cannot determine the reliability of the estimate. We will review the documentation when it is available. In order for the estimate to be deemed high quality, and thus the basis for any 2020 Census annual budgetary figures, the new cost estimate will need to address the following four best practices, and do so as quickly as possible given the expected ramp-up in spending:

Comprehensive. To be comprehensive, an estimate should have enough detail to ensure that cost elements are neither omitted nor double-counted, and all cost-influencing assumptions are detailed in the estimate’s documentation, among other things, according to best practices. In June 2016, we reported that, while Bureau officials were able to provide us with several documents that included projections and assumptions that were used in the cost estimate, we found the estimate to be partially comprehensive because it was unclear if all life-cycle costs were included in the estimate or if the cost estimate completely defined the program.

Accurate. Accurate estimates are unbiased and contain few mathematical mistakes. We reported in June 2016 that the estimate partially met best practices for this characteristic, in part because we could not independently verify the calculations the Bureau used within its cost model, which the Bureau had not documented or explained.

Well-documented. Cost estimates are considered valid if they are well-documented to the point they can be easily repeated or updated and can be traced to original sources through auditing, according to best practices. In June 2016, we reported that, while the Bureau provided some documentation of supporting data, it did not describe how the source data were incorporated.

Credible. Credible cost estimates must clearly identify limitations due to uncertainty or bias surrounding the data or assumptions, according to best practices. In June 2016, we reported that the estimate minimally met best practices for this characteristic, in part because the Bureau carried out its risk and uncertainty analysis for only about $4.6 billion (37 percent) of the $12.5 billion total estimated life-cycle cost, excluding, for example, consideration of uncertainty over the decennial census’s estimated share of the total cost of CEDCaP.
The difficulties facing the Bureau’s preparation for the decennial in such areas as planning and testing; managing and overseeing IT programs, systems, and contractors supporting the enumeration; developing reliable cost estimates; prioritizing decisions; managing schedules; and other challenges, are symptomatic of deeper organizational issues. Following the 2010 Census, a key lesson we identified for 2020 was ensuring that the Bureau’s organizational culture and structure, as well as its approach to strategic planning, human capital management, internal collaboration, knowledge sharing, capital decision-making, risk and change management, and other internal functions, are aligned toward delivering more cost-effective outcomes. The Bureau has made improvements over the last decade, and continued progress will depend in part on sustaining efforts to strengthen risk management activities, enhancing systems testing, bringing in experienced personnel to key positions, implementing our recommendations, and meeting regularly with officials from its parent agency, the Department of Commerce. Going forward, our experience has shown that the key elements needed to make progress in high-risk areas are top-level attention by the administration and agency officials to (1) leadership commitment, (2) ensuring capacity, (3) developing a corrective action plan, (4) regular monitoring, and (5) demonstrated progress. Although important steps have been taken in at least some of these areas, overall, far more work is needed. On the one hand, the Secretary of Commerce has taken several actions towards demonstrating leadership commitment. For example, the previously noted multidisciplinary review team included members with Bureau leadership experience, as well as members with private sector technology management experience.
Additional program evaluation and the independent cost estimate were produced by a team from the Commerce Secretary’s Office of Acquisition Management that included a member detailed from OMB. Commerce also reports senior officials are now actively involved in the management and oversight of the decennial. Likewise, with respect to monitoring, the Commerce Secretary reports having weekly 2020 Census oversight reviews with senior Bureau staff and will require metric tracking and program execution status on a real-time basis. On the other hand, demonstrating the capacity to address high risk concerns remains a challenge. For example, our ongoing work has indicated that the Bureau is facing staffing challenges that could impact its ability to manage and oversee the technical integration contractor. Specifically, the Bureau is managing the integration contractor through a government program management office, but this office is still filling vacancies. As of October 2017, the Bureau reported that 35 of 58, or 60 percent, of the office’s federal employee positions were vacant. As a result, this program management office may not be able to provide adequate oversight of contractor cost, schedule, and performance. In the months ahead, we will continue to monitor the Bureau’s progress in addressing each of the 5 elements essential for reducing the risk to a cost-effective enumeration. At a time when strong Bureau management is needed, vacancies in the agency’s two top positions—Director and Deputy Director—are not helpful for keeping 2020 preparations on track. These vacancies are due to the previous director’s retirement on June 30, 2017, and the previous deputy director’s appointment to be the Chief Statistician of the United States within the Office of Management and Budget in January 2017.
Although interim leadership has since been named, in our prior work we have noted how openings in the Bureau’s top position make it difficult to ensure accountability and continuity, as well as to develop and sustain efforts that foster change, produce results, mitigate risks, and control costs over the long term. The census director is appointed by the President, by and with the advice and consent of the Senate, without regard to political affiliation. The director serves a fixed 5-year term of office, which runs in 5-year increments. An individual may be reappointed and serve 2 full terms as director. The director’s position was first filled this way beginning on January 1, 2012, and the term cycles every fifth year thereafter. Because the new term began on January 1, 2017, the time that elapses until a new director is confirmed counts against the 5-year term of office. As a result, the next director’s tenure will be less than 5 years. Going forward, filling these top two slots should be an important priority. On the basis of our prior work, key attributes of a census director, in addition to the obvious ones of technical expertise and the ability to lead large, long-term, and high-risk programs, could include abilities in the following areas:

Strategic Vision. The Director needs to build a long-term vision for the Bureau that extends beyond the current decennial census. Strategic planning, human-capital succession planning, and life-cycle cost estimates for the Bureau all span the decade.

Sustaining Stakeholder Relationships. The Director needs to continually expand and develop working relationships and partnerships with governmental, political, and other professional officials in both the public and private sectors to obtain their input, support, and participation in the Bureau’s activities.

Accountability. The life-cycle cost for a decennial census spans a decade, and decisions made early in the decade about the next decennial census guide the research, investments, and tests carried out throughout the decade. Institutionalizing accountability over an extended period may help long-term decennial initiatives provide meaningful and sustainable results.

Over the past several years we have issued numerous reports underscoring that if the Bureau is to successfully meet its cost savings goal for the 2020 Census, it needs to take significant actions to improve its research, testing, planning, scheduling, cost estimation, system development, and IT security practices. Over the past decade, we have made 84 recommendations specific to the 2020 Census to help address these and other issues. The Bureau has generally agreed with those recommendations; however, 36 of them had not been implemented as of October 2017. We have designated 20 of these recommendations as a priority for the Department of Commerce, of which 5 have been implemented. In August 2017, we sent the Secretary of Commerce a letter that identified our open priority recommendations at the Department, 15 of which concern the 2020 Census. We believe that attention to these recommendations is essential for a cost-effective enumeration. The recommendations included implementing reliable cost estimation and scheduling practices in order to establish better control over program costs, as well as taking steps to better position the Bureau to develop an Internet response option for the 2020 Census. Appendix I summarizes our priority recommendations related to the 2020 Census and the actions the Department has taken to address them. On October 3, 2017, in response to our August 2017 letter, the Commerce Secretary noted that he shared our concerns about the 2020 Census and acknowledged that some of the programs had not worked as planned and are not delivering the savings that were promised.
The Commerce Secretary also stated that he intends to improve the timeliness for implementing our recommendations. We meet quarterly with Bureau officials to discuss the progress and status of open recommendations related to the 2020 Census. We are encouraged by the actions taken by the Department and the Bureau in addressing our recommendations. Implementing our recommendations in a complete and timely manner is important because it would improve the management of the 2020 Census and help to mitigate continued risks. In conclusion, while the Bureau has made progress in revamping its approach to the census, it faces considerable challenges and uncertainties in (1) implementing key cost-saving innovations and ensuring they function under operational conditions; (2) managing the development and security of key IT systems; and (3) developing a quality cost estimate for the 2020 Census and preventing further cost increases. Without timely and appropriate actions, these challenges could adversely affect the cost, accuracy, and schedule of the enumeration. For these reasons, the 2020 Census is a GAO high risk area. Going forward, continued management and Congressional attention—such as hearings like this one—will be vital for ensuring risks are managed, preparations stay on-track, and the Bureau is held accountable for implementing the enumeration as planned. We will continue to assess the Bureau’s efforts to conduct a cost-effective enumeration and look forward to keeping Congress informed of the Bureau’s progress. Chairman Johnson, Ranking Member McCaskill, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have. If you have any questions about this statement, please contact Robert Goldenkoff at (202) 512-2757 or by e-mail at goldenkoffr@gao.gov or David A. Powner at (202) 512-9286 or by e-mail at pownerd@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other key contributors to this testimony include Lisa Pearson (Assistant Director); Jon Ticehurst (Assistant Director); Katherine Wulff (Analyst in Charge); Mark Abraham; Brian Bothwell; Jeffrey DeMarco; Hoyt Lacy; Jason Lee; Ty Mitchell; LaSonya Roberts; Kate Sharkey; Andrea Starosciak; Umesh Thakkar; and Timothy Wexler. The Department of Commerce and Census Bureau have taken some actions to address our recommendations related to implementation of the 2020 Census; however, a large number of recommendations remain open. Since just prior to the 2010 Census, we have made 84 recommendations in 23 reports to the Department of Commerce and Census Bureau aimed at helping the Bureau prepare for and implement a successful 2020 Census (table 1). Of those 84, the Department of Commerce and the Census Bureau have implemented 48 recommendations. Thirty-six recommendations require additional action. Of these 84 recommendations, we have designated 20 as priorities for Commerce to address. The Census Bureau has taken some action on our priority recommendations, implementing 5 of the 20 priority recommendations we have made. The following table presents each of the 20 priority recommendations along with a summary of actions taken to address it.
One of the Bureau's most important functions is to conduct a complete and accurate decennial census of the U.S. population. The decennial census is mandated by the Constitution and provides vital data for the nation. A complete count of the nation's population is an enormous undertaking as the Bureau seeks to control the cost of the census, implement operational innovations, and use new and modified IT systems. In recent years, GAO has identified challenges that raise serious concerns about the Bureau's ability to conduct a cost-effective count. For these reasons, GAO added the 2020 Census to its High-Risk list in February 2017. In light of these challenges, GAO was asked to testify about the reasons the 2020 Census was placed on the High-Risk List. To do so, GAO summarized its prior work regarding the Bureau's planning efforts for the 2020 Census. GAO also included observations from its ongoing work on the 2018 End-to-End Test. This information is related to, among other things, recent decisions on preparations for the 2020 Census; progress on key systems to be used for the 2018 End-to-End Test, including the status of IT security assessments; execution of the address canvassing operation at the test sites; and efforts to update the life-cycle cost estimate. GAO added the 2020 Census to its high-risk list because of challenges associated with (1) developing and testing key innovations; (2) implementing and securing IT systems; and (3) controlling any further cost growth and preparing reliable cost estimates. The Census Bureau (Bureau) is planning several innovations for the 2020 Decennial Census, including re-engineering field operations by relying on automation, using administrative records to supplement census data, verifying addresses in-office using on-screen imagery, and allowing the public to respond using the Internet. 
These innovations show promise for controlling costs, but they also introduce new risks, in part because they have not been used extensively in earlier enumerations, if at all. As a result, robust testing is needed to ensure that key systems and operations will function as planned. However, citing budgetary uncertainties, the Bureau canceled its 2017 field test and then scaled back its 2018 End-to-End Test. Without sufficient testing, operational problems can go undiscovered and the opportunity to improve operations will be lost, as key census-taking activities will not be tested across a range of geographic locations, housing types, and demographic groups. The Bureau continues to face challenges in managing and overseeing the information technology (IT) programs, systems, and contracts supporting the 2020 Census. For example, GAO's ongoing work indicates that the system development schedule leading up to the 2018 End-to-End Test has experienced several delays. Further, the Bureau has not addressed several security risks and challenges to secure its systems and data, including making certain that security assessments are completed in a timely manner and that risks are at an acceptable level. Given that certain operations for the 2018 End-to-End Test began in August 2017, it is important that the Bureau quickly address these challenges. GAO plans to monitor the Bureau's progress as part of its ongoing work. In addition, the Bureau needs to control any further cost growth and develop cost estimates that reflect best practices. Earlier this month, the Department of Commerce (Department) announced that it had updated the October 2015 life-cycle cost estimate and now projects the life-cycle cost of the 2020 Census will be $15.6 billion, more than a $3 billion (27 percent) increase over its earlier estimate. The higher estimated life-cycle cost is due, in part, to the Bureau's failure to meet best practices for a quality cost estimate.
The Bureau and Department are still finalizing the documentation used to develop the $15.6 billion cost estimate. Until these documents are complete and made available for inspection, GAO cannot determine the reliability of the estimate. Over the past decade, GAO has made 84 recommendations specific to the 2020 Census to address the issues raised in this testimony and others. The Bureau generally has agreed with these recommendations. As of October 2017, 36 recommendations had not been implemented.
Collectively, the ongoing GPS acquisition effort aims to (1) modernize and sustain the existing GPS capability and (2) enhance the current GPS system by adding an anti-jam, anti-spoof cybersecure M-code capability. Figure 1 below shows how GPS satellites, ground control, and user equipment—in the form of receiver cards embedded in systems—function together as an operational system. Modernizing and sustaining the current GPS broadcast capability requires launching new satellites to replace the existing satellites that are near the end of their intended operational life as well as developing a ground control system that can launch and control both existing and new satellites. Sustaining the current GPS broadcast capability is necessary to ensure the quality and availability of the existing broadcast signals for civilian and military GPS receivers. The ongoing modernization of GPS began with three programs: (1) GPS III satellites; (2) OCX to control the satellites; and (3) MGUE increment 1 (which develops initial receiver test cards for military ships, ground vehicles, or aircraft). Table 1 describes these programs. Delays to OCX of more than 5 years led the Air Force to create two additional programs in 2016 and 2017 to modify the current GPS ground system to control GPS III satellites and provide a limited M-code broadcast. As a result, there are currently five total GPS modernization programs. Table 2 provides a description of the two new programs. All of the original GPS modernization programs—GPS III, OCX, and MGUE—have experienced significant schedule growth during development. Table 3 outlines several schedule challenges in the modernized GPS programs. We found in 2015 that unrealistic cost and schedule estimates of the new ground control system and receiver card development delays could pose significant risks to sustaining the GPS constellation and delivering M-code.
At that time, we also made five recommendations so that DOD would have the information necessary to make decisions on how best to improve GPS modernization and to mitigate risks to sustaining the GPS constellation. We made four OCX-specific recommendations targeted to identify underlying problems, establish a high confidence schedule and cost estimate, and improve management and oversight. For MGUE, we recommended the Air Force add a critical design review to allow the military services to fully assess the maturity of the MGUE design before committing test and procurement resources. DOD concurred with the four recommendations on OCX and partially concurred on the MGUE recommendation. Since 2015, our annual assessment of DOD weapon systems has shown that some of the original GPS programs have continued to face cost or schedule challenges, increasing the collective cost to modernize GPS by billions of dollars. Appendix III outlines the cost increases that have resulted. According to our analysis, over the next decade or more, DOD plans to achieve three key GPS modernization points: (1) constellation sustainment, (2) M-code broadcast, and (3) M-code receivers fielded. Figure 2 shows the current sequencing of the three points and the intervals when they are planned to be achieved, if known. Throughout this report, we will use figures based on this one to highlight the separately-managed programs DOD plans to synchronize to achieve each of the three identified modernization points. Some GPS capabilities require the delivery of more than one program, which must compete for limited resources, such as testing simulators. The Air Force coordinates the interdependent activities of the different programs and contractors in order to achieve each modernization point. The satellites in the GPS constellation broadcast encrypted military signals and unencrypted civilian signals and move in six orbital planes approximately 12,500 miles above the earth.
What is a Global Positioning System (GPS) satellite orbital plane and how many are there? An orbital plane is an imaginary flat disc containing an Earth satellite’s orbit; it represents the trajectory a GPS satellite follows as it circles the Earth in space. The GPS constellation has six orbital planes, and each contains at least 4 satellites, which allows the constellation to meet the minimum requirement of 24 satellites.

The GPS constellation availability performance standards commit the U.S. government to at least a 95 percent probability of maintaining a constellation of 24 operational GPS satellites to sustain the positioning services provided to both civilian and military GPS users. Therefore, while the minimum constellation consists of satellites occupying 24 orbital slots—4 slots in each of the six orbital planes—the constellation actually has 31 total satellites, generally with more than four in each plane, to meet the 95 percent probability standard. These additional satellites are needed to provide uninterrupted availability in case a satellite fails. The constellation includes three generations of satellites with varying capabilities and design lives. We found in 2010 and 2015 that GPS satellites have proven more reliable than expected, greatly exceeding their initially predicted life expectancies. Nevertheless, the Air Force must regularly replace satellites to meet the availability standard, since operational satellites have a finite lifespan. Excluding random failures, the operational life of a GPS satellite tends to be limited by the amount of power that its solar arrays can produce. This power level declines over time as the solar arrays degrade in the space environment until eventually they cannot produce enough power to maintain all of the satellite’s subsystems.
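The power-limited lifespan described above can be illustrated with a back-of-the-envelope model: assume the solar arrays lose a fixed fraction of their output each year, and count the years until they can no longer cover the subsystem load. The sketch below is a minimal illustration of that reasoning; the 2,500 W beginning-of-life output, 2 percent annual degradation rate, and 1,900 W load are illustrative assumptions, not figures from this report, and real end-of-life predictions account for additional factors such as battery condition and eclipse seasons.

```python
import math

def years_until_power_shortfall(initial_power_w, annual_degradation, required_power_w):
    """Count whole years until degraded solar-array output drops below the
    power required to keep all satellite subsystems running. Assumes a
    constant fractional power loss per year (illustrative model only)."""
    power, years = float(initial_power_w), 0
    while power >= required_power_w:
        power *= 1.0 - annual_degradation  # array output after another year in orbit
        years += 1
    return years

def years_closed_form(initial_power_w, annual_degradation, required_power_w):
    """Same answer in closed form: smallest n with initial * (1 - d)^n < required."""
    return math.ceil(math.log(required_power_w / initial_power_w)
                     / math.log(1.0 - annual_degradation))
```

With the assumed numbers (2,500 W, 2 percent per year, 1,900 W), both forms give 14 years of power margin, which is the kind of calculation that lets operators schedule replacement satellites well before the availability standard is at risk.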
Consequently, the Air Force monitors the performance of operational satellites in order to calculate when new satellites need to be ready to join the constellation. The 10 GPS III satellites currently under contract and in production with Lockheed Martin will provide a range of performance enhancements over prior GPS satellite generations. The GPS III satellites were designed to provide a longer life than previous generations, greater signal accuracy, and improved signal integrity—meaning that the user has greater assurance that the broadcast signal is correct. When they are eventually controlled through the OCX ground control system, the satellites will also offer a stronger M-code signal than prior GPS satellite generations. They will also include an additional civilian signal known as L1C, which will permit interoperability with European, Japanese, and other global navigation satellite systems for civilian users. Figure 3 describes the evolution of GPS satellite generations, including capabilities and life-span estimates. The current GPS ground control segment, OCS, primarily consists of software deployed at a master control station at Schriever Air Force Base, Colorado, and at an alternate master control station at Vandenberg Air Force Base, California. The ground control software is supported by 6 Air Force and 11 National Geospatial-Intelligence Agency monitoring stations located around the globe, along with 4 ground antennas that communicate with the moving satellites. Information from the monitoring stations is processed at the master control station to determine satellite clock and orbit status. As the three ground control segment programs—COps, MCEU, and OCX—are completed or partially completed, each will introduce new capabilities, eventually culminating in the delivery of the full M-code broadcast planned for January 2022. 
GPS receiver cards determine a user’s position and time by using the navigation signals from four or more satellites to calculate the distance to each satellite and, from those distances, the card’s location. All warfighters currently acquire, train with, and use GPS receivers. Until MGUE receiver cards are developed and available for production, all DOD weapon systems that use GPS will continue to use the current GPS Selective Availability/Anti-Spoofing Module (SAASM) receiver card or an older version. The Ike Skelton National Defense Authorization Act for Fiscal Year 2011 generally prohibits DOD from obligating or expending funds to procure GPS user equipment after fiscal year 2017 unless that equipment is capable of receiving M-code. Under certain circumstances this requirement may be waived or certain exceptions may apply. The increment 1 receiver cards range in size from approximately 2 inches by 3 inches for the ground card up to 6 inches by 6 inches for the aviation/maritime card. Figure 4 below shows an illustration of an MGUE receiver card. DOD has previously transitioned its weapon systems gradually from one generation of GPS receivers to the next. For example, some weapon systems have either upgraded or are still in the process of upgrading to the current SAASM receivers that were introduced in 2003, while others are still equipped with older cards. DOD anticipates that the length of time necessary to transition to MGUE will require users to operate with a mix of receiver cards. Hundreds of different types of weapon systems require GPS receiver cards—including ships, aircraft, ground vehicles, missiles, munitions, and hand-held devices—across all military services. The Air Force funds the MGUE program, providing funding to the military services so they can acquire, integrate, and operationally test the receiver cards on four service-specific lead platforms. 
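The distance-based position fix described at the start of this section can be illustrated with a simplified pseudorange solver. This is a textbook Gauss-Newton sketch of the trilateration principle only, not the algorithm implemented in any fielded receiver card; satellite coordinates and ranges used with it would be synthetic.

```python
import math

def gauss_solve(A, g):
    """Solve the 4x4 linear system A d = g by Gauss-Jordan elimination."""
    n = len(g)
    M = [row[:] + [g[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def solve_position(sats, pseudoranges, iters=10):
    """Estimate receiver position (x, y, z) and clock bias b (in meters)
    from pseudoranges to four or more satellites, where
    pseudorange_i = |sat_i - position| + b, via Gauss-Newton iteration."""
    x = y = z = b = 0.0
    for _ in range(iters):
        H, resid = [], []
        for (sx, sy, sz), rho in zip(sats, pseudoranges):
            dx, dy, dz = x - sx, y - sy, z - sz
            dist = math.sqrt(dx * dx + dy * dy + dz * dz)
            resid.append(rho - (dist + b))                    # measurement misfit
            H.append([dx / dist, dy / dist, dz / dist, 1.0])  # Jacobian row
        # Normal equations of the least-squares update: (H^T H) d = H^T resid
        A = [[sum(H[k][i] * H[k][j] for k in range(len(H))) for j in range(4)]
             for i in range(4)]
        g = [sum(H[k][i] * resid[k] for k in range(len(H))) for i in range(4)]
        d = gauss_solve(A, g)
        x, y, z, b = x + d[0], y + d[1], z + d[2], b + d[3]
    return x, y, z, b
```

With four satellites there are exactly four unknowns (three position coordinates plus the receiver clock bias), which is why the text specifies "four or more" signals; additional satellites overdetermine the system and are absorbed by the least-squares step.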
These platforms are intended to test the card in the military services’ ground, aviation, and maritime environments: (1) Army—Stryker ground combat vehicle; (2) Air Force—B-2 Spirit bomber; (3) Marine Corps—Joint Light Tactical Vehicle (JLTV); and (4) Navy—DDG-51 Arleigh Burke destroyer. Figure 5 depicts selected weapon systems that will need to install M-code capable receiver cards. The Air Force has made some progress toward ensuring continued constellation sustainment since our September 2015 report and should be able to sustain current service because the satellites now in orbit are lasting longer than expected. The current GPS constellation is now projected to meet its availability performance standard (in the absence of operational GPS III satellites) until June 2021—an increase of nearly 2 years over previous projections. This increase will give the Air Force more schedule buffer in the event of any additional delays to the GPS III satellite program. However, the Air Force still faces technical risks and schedule pressures in both the short and long term. In the short term, schedule compression with the first GPS III satellite is placing the satellite’s launch and operation at risk of further delays. In the long term, most of the satellites under contract will have been launched before operational testing is completed, limiting the Air Force’s corrective options if issues are discovered. Figure 6 shows the schedule for programs that need to be delivered to modernize and sustain the GPS satellite constellation. The Air Force has made progress since our last report in September 2015 on the three programs (GPS III, OCX, and COps) needed to support GPS constellation sustainment, readying both the ground control system and the satellite for the first GPS III satellite’s launch, testing, and eventual operation. Raytheon delivered OCX block 0, the launch and checkout system for GPS III satellites, in September 2017. 
The Air Force took possession of OCX block 0 in October 2017 and will formally accept it only after OCX block 1 is delivered. Lockheed Martin completed the assembly, integration, and testing of the first GPS III satellite, and in February 2017 the Air Force accepted delivery in advance of the currently scheduled May 2018 launch. As noted earlier, because of delays to OCX block 1, the Air Force initiated the COps program to ensure an interim means to control GPS III satellites. Without COps, no GPS III satellites can join the constellation to sustain it until OCX block 1 is operational in fiscal year 2022. In September 2016, COps formally started development, establishing a cost baseline of approximately $162 million to meet an April 2019 delivery. The COps program began software coding in November 2016, after a design review established that the product design would meet the Air Force’s intended needs. The Air Force continues to struggle with keeping multiple, highly compressed, interdependent, and concurrent program schedules synchronized in order to sustain and modernize the GPS constellation. Figure 7 shows some of the schedule challenges of the three programs needed for constellation sustainment and modernization. Launching and operating the new GPS III satellite is a highly complex effort, since it requires synchronizing the development and testing schedules of the OCX block 0, first GPS III satellite, and COps programs. For the Air Force to achieve its objective of making the first GPS III satellite operational by September 2019, numerous challenges (discussed below) must be addressed in the next 18 months on all three programs. If any of the three programs cannot resolve its challenges, the operation of the first GPS III satellite—and constellation sustainment—may be delayed. 
OCX Block 0 and Pre-Launch Testing Schedules

With the goal of launching the first GPS III satellite in March 2018, the Air Force restructured its pre-launch integrated satellite and ground system testing in the summer of 2016, compressing the overall testing timeframe from 52 weeks to 42 weeks. Additional OCX block 0 delays in early fiscal year 2017 complicated Air Force test plans, resulting in changes to the sequence and timing of events, the introduction of concurrency at various points throughout the testing, the use of incomplete software in early testing, and an increase in the likelihood of discovering issues later in pre-launch integrated testing. Air Force officials stated that some pre-launch testing revisions streamlined the overall test plan, since the merging of certain test events allowed multiple objectives to be met by the same event. However, if issues requiring corrective work are discovered during subsequent integrated testing, the GPS III launch schedule may be delayed further, since there is minimal schedule margin on OCX block 0 for correcting any additional problems that may be found.

First GPS III Satellite Capacitors

There are hundreds of capacitors—devices used to store energy and release it as electrical power—installed in each GPS III satellite. In 2016, while investigating capacitor failures, the Air Force discovered that the subcontractor, then known as Exelis (now Harris Corporation), had not conducted required qualification testing for the capacitor’s operational use in GPS III satellites. The Air Force conducted a review of the components over many months, delaying program progress while a subcontractor qualified the capacitor design as suitable for use on the GPS III satellite. 
However, the Air Force concluded that Harris Corporation failed to properly conduct a separate reliability test of the particular production lot from which the questionable capacitors originated. The Air Force directed the contractor to remove and replace the capacitors from that production lot on the second and third GPS III satellites. After weighing the technical data and cost and schedule considerations, the Air Force decided to accept the first satellite and launch it “as is” with the questionable capacitors installed. The COps program is also pursuing a compressed and concurrent development and testing schedule to be operational as planned in September 2019. The COps acquisition strategy document acknowledges that the program’s timeline is aggressive. DOT&E has highlighted the compressed COps schedule as a risk, since the limited time between developmental and operational testing permits little time for the evaluation of test results and the resolution of any deficiencies found. The COps program has already begun drawing from its 60-day schedule margin, with a quarter of this margin used within the first 5 months after development started. According to Air Force officials, this margin use was the result of unplanned delays in certifying a software coding lab. Additionally, the program schedule has concurrent development and testing, which in our previous work we have noted is often a high-risk approach but is sometimes appropriate for software development. COps faces further schedule risk from its need for shared test assets, particularly the GPS III satellite simulator, a hardware- and software-based ground system that simulates GPS III functions, which is also required by the GPS III and OCX programs. According to a DOT&E official, the OCX program receives priority over COps for the use of the GPS III satellite simulator, since the testing asset is heavily needed in the development of the ground control system. 
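The COps margin consumption noted above (a quarter of a 60-day margin in 5 months) can be projected forward with straight-line arithmetic. A minimal sketch, assuming the burn rate stays constant, which real schedule margin rarely does:

```python
def margin_exhaustion_month(total_margin_days, used_days, months_elapsed):
    """Straight-line projection of the month, measured from development
    start, in which schedule margin runs out at the observed burn rate."""
    burn_rate = used_days / months_elapsed        # margin days used per month
    remaining = total_margin_days - used_days
    return months_elapsed + remaining / burn_rate

# COps: a quarter of the 60-day margin (15 days) used in the first 5 months.
print(margin_exhaustion_month(60, 15, 5))  # → 20.0
```

At that rate the margin would be gone 20 months into development, well before the planned September 2019 operational date; the point of the sketch is only that early burn rates are a leading indicator worth watching.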
Because of the competing demands for this resource, which Air Force and DOT&E officials maintain requires lengthy and complex software reconfigurations to repurpose the simulator from one test event to the next, the Air Force is using a less realistic, purely software-based simulator for the testing of COps, where possible. Recent data show that the current satellites in the GPS constellation are expected to remain operational longer than previously projected, creating an additional, nearly 2-year schedule buffer before the first GPS III satellite needs to be operational to sustain the current GPS constellation capability. The Air Force projected that the first GPS III satellite needed to be operational by September 2019 based on 2014 satellite performance data. However, our analysis of the Air Force’s more recent May 2016 GPS constellation performance data indicates that, in order to continue meeting the constellation availability performance standard without interruption, the operational need date for the first GPS III satellite is now June 2021. This projection incorporates updated Air Force data from the current satellites that take into account an increase in solar array longevity expected for IIR and IIR-M satellites, according to Air Force officials. The Air Force is likely to meet the constellation’s June 2021 operational requirement because seven GPS III satellites are planned to be launched by June 2021. Figure 8 shows the events leading to the launch and operation of the first GPS III satellite, achieving constellation sustainment once the first GPS III is operational, and subsequent GPS III launches that continue to support sustainment. The nearly 2-year buffer between planned operation and actual need for the first GPS III satellite permits the Air Force additional time to resolve any development issues. 
Because of this additional 2-year schedule buffer, we are not making a recommendation at this time to address the short-term challenges we have identified but will continue to assess the progress of each of the programs and risks to constellation sustainment in our future work. The Air Force risks additional cost increases, schedule delays, and performance shortfalls because operational testing to confirm that GPS III satellites work as intended with OCX blocks 1 and 2 will not be completed until after the planned launch of 8 of the 10 GPS III satellites currently under contract. Due to delays to the OCX final delivery, the new ground control system will not be completed in time to control the GPS III satellites for the first few years they are in orbit (approximately 3.5 years). Consequently, GPS III operational testing will now occur in three phases:

1. in late fiscal year 2019, to confirm the satellites can perform similarly to the existing GPS satellites with COps;
2. in fiscal year 2020, to confirm the GPS III satellites can perform some of the new M-code capabilities with MCEU; and
3. in fiscal year 2022, to confirm the GPS III satellites can perform all of the new M-code capabilities with OCX blocks 1 and 2.

The first GPS III satellite is projected to complete operational testing of legacy signal capabilities in September 2019. By that point, the Air Force plans to have launched 3 of the 10 GPS III satellites, the fourth satellite is expected to be delivered, and major integration work will be underway on satellites 5 through 8. Therefore, if satellite shortcomings are discovered during any phase of the operational testing, the Air Force will be limited to addressing such issues through software corrections to satellites already on orbit. If any of the three phases of operational testing reveals issues, the Air Force may face the need for potentially costly contract modifications and delivery delays for satellites not yet launched. 
To offset this risk, the Air Force has obtained performance knowledge of GPS III satellites through ground testing of the first satellite, and findings from this testing have driven modifications to all 10 satellites. Because of the rigor of the ground testing of the first satellite, Air Force officials maintain that the knowledge that might be obtained through on-orbit operational testing of the first GPS III satellite would be minimal. However, a DOT&E official said that ground testing is limited to assessing system responses that are induced through the testing process and therefore may omit phenomena that might be experienced in actual system operation on orbit. We will continue to track the progress of operational testing in our future work. DOD has established high-risk schedules for modernizing the GPS broadcast, or M-code signal, produced by GPS satellites. These risks are manifest in different ways. In the near term, the Air Force plans to provide a limited M-code broadcast—one that does not have all of the capabilities of OCX—through the MCEU program in fiscal year 2020. However, the MCEU schedule is high risk because of its dependency on the timely completion of the COps program, its aggressive schedule, and competition for limited test resources. Further, the full M-code broadcast capability, planned for fiscal year 2022, is at high risk of additional delays because (1) it is dependent on unproven efficiencies in software coding, (2) the program has not yet completed a baseline review, which may identify additional time needed to complete currently contracted work, and (3) known changes to the program must be made that are not included in the proposed schedule. As noted above, the Air Force’s plans for delivering the M-code broadcast involve two separate high-risk programs—MCEU and OCX blocks 1 and 2—delivered at separate times to make an operational M-code signal available to the warfighter. 
Figure 9 highlights the current forecasted operational schedules to deliver limited M-code broadcast capabilities with MCEU and the full M-code broadcast with OCX. The MCEU program, created because of multiple delays to OCX and to partially address that program’s remaining schedule risk, is itself a high-risk program that is dependent on the timely development of COps. Estimated to cost approximately $120 million, MCEU formally entered the acquisition process in January 2017 as a software-specific program to modify OCS. To develop MCEU, Lockheed Martin officials stated they will leverage personnel with expertise maintaining and upgrading OCS as well as utilize the staff working on COps. With a planned December 2019 delivery for testing and a September 2020 target to begin operations, the MCEU program faces several schedule risks. The Air Force’s proposed plan anticipates a compressed software development effort, which the Air Force describes as aggressive. The Air Force has also identified potential risks to the MCEU schedule from competing demands by GPS III, OCX, COps, and MCEU for shared test resources. Air Force officials specifically noted competing demands for the GPS III simulator test resource. If development or testing issues arise in these other programs, those issues could delay the availability of the satellite simulator and thereby disrupt the planned MCEU development effort. According to program officials, the Air Force is working to mitigate this threat to the MCEU program through the use of a software-based simulator, when possible. Additionally, MCEU software development work is dependent on the timely conclusion of the COps effort—which, as previously mentioned, itself has an aggressive schedule and faces competition for a limited test resource. Air Force program officials have said that some Lockheed Martin staff planned to support MCEU will need to transfer from the COps effort. 
However, after reviewing the staffing plans at the MCEU contractor kickoff, Air Force officials said this is no longer viewed as a significant risk.

OCX Blocks 1 and 2

Raytheon has made some progress starting to code OCX block 1 and has taken the first steps toward implementing and demonstrating initial software development efficiencies that may benefit development for OCX blocks 1 and 2. The software efficiencies are built up in seven phases, and each phase must be completed before the development process reaches the corresponding point in order to take full advantage of the efficiencies it will create. Once ready, the efficiencies are inserted at different points in the software development schedule. For example, as of August 2017, the first of seven phases implementing the software development improvements was nearly complete, while the second phase was approximately two-thirds complete. Both must be in place for insertion when the next phase of coding begins. Further, the Air Force proposed a new rebaselined schedule in June 2017 as the final step to getting the program back on track after declaring a critical Nunn-McCurdy unit cost breach in 2016, when the program exceeded the original baseline by more than 50 percent. A Nunn-McCurdy unit cost breach classified as critical is the most serious type of breach and requires a program to be terminated unless the Secretary of Defense submits a written certification to Congress that, among other things, the new estimate of the program’s cost is reasonable, and takes other actions, including restructuring the program. In October 2016, DOD recertified the program with a 24-month schedule extension. Under this newer proposed schedule, Raytheon forecasts delivering blocks 1 and 2 in December 2020 with 6 months of extra schedule—a 30-month schedule extension—to account for unknown technical issues before OCX blocks 1 and 2 are due to the Air Force in June 2021. 
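The critical-breach test described above, exceeding the original baseline by more than 50 percent, reduces to a one-line comparison. The dollar figures below are illustrative, not the OCX program's actual baseline or estimate:

```python
def is_critical_nunn_mccurdy_breach(original_baseline, current_estimate):
    """True when the current unit-cost estimate exceeds the original
    baseline by more than 50 percent (the 'critical' threshold cited in
    the text; other statutory breach thresholds are omitted here)."""
    return (current_estimate - original_baseline) / original_baseline > 0.50

# Illustrative: a $3.5 billion baseline grown to $5.5 billion (~57% growth).
print(is_critical_nunn_mccurdy_breach(3.5, 5.5))  # → True
```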
The Air Force projects operating OCX in fiscal year 2022 after completing 7 months of operational testing post-delivery. Three factors place delivery of OCX blocks 1 and 2 in June 2021 at high risk for additional schedule delays and cost increases. First, the newly proposed June 2017 rebaselined schedule assumes significant improvements in the speed of software coding and testing that have not yet been proven but will be introduced at various periods as software development proceeds. Whether Raytheon can achieve the majority of these efficiencies will not be known until the end of fiscal year 2018. However, the Defense Contract Management Agency, which independently oversees Raytheon’s work developing OCX, noted in July 2017 a number of risks to the schedule, including that some initially assumed efficiencies had not been demonstrated. Specifically, the agency noted that for initial coding on block 1, Raytheon had achieved only 60 percent of the software integration maturity planned to that point in time, along with a greater number of software deficiencies that will require more time than planned to resolve. Second, the proposed rebaselined schedule has not yet undergone an integrated baseline review (IBR) to verify that all of the work that needs to be done is incorporated into that schedule. The IBR is a best practice required by the Office of Management and Budget on programs with earned value management. An IBR ensures a mutual understanding between the government and the contractor of the technical scope, schedule, and resources needed to complete the work. We have found that, too often, programs overrun costs and schedule because estimates fail to account for the full technical definition, unexpected changes, and risks. According to prior plans, the IBR would have taken place in early 2017, but it has been delayed multiple times for a number of reasons. 
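Earned value management, mentioned above in connection with the IBR, works by comparing the budgeted value of work planned, the budgeted value of work actually performed, and the actual cost of that work. A minimal sketch with hypothetical dollar amounts (not OCX program data):

```python
def evm_indices(planned_value, earned_value, actual_cost):
    """Schedule performance index (SPI = EV/PV) and cost performance
    index (CPI = EV/AC); values below 1.0 mean the effort is behind
    schedule or over cost, respectively."""
    return earned_value / planned_value, earned_value / actual_cost

# Hypothetical status: $100M of work planned, $80M earned, $110M spent.
spi, cpi = evm_indices(100.0, 80.0, 110.0)
print(round(spi, 2), round(cpi, 2))  # → 0.8 0.73
```

The IBR matters precisely because these indices are only meaningful against a baseline that captures all of the work: an incomplete baseline inflates both indices and hides the kind of overrun described in the text.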
A significant and recurring root cause of delays on the OCX program has been a lack of mutual understanding of the work between the Air Force and Raytheon. The IBR start was scheduled for November 2017, with completion in February 2018. Once conducted, the review may identify additional work not in the proposed schedule that needs to be completed before delivery. For example, Raytheon is conducting a review of hardware and software obsolescence. If significant additional obsolescence issues are found that need to be resolved before OCX blocks 1 and 2 are delivered, the projected delivery date may need to be delayed further at additional cost. Third, the OCX contract will likely be modified because the Air Force needs to incorporate into its contract with Raytheon a number of changes that are not currently a part of the proposed schedule. According to Air Force and contractor officials, negotiations are under way to determine which of these changes will be incorporated before OCX blocks 1 and 2 are delivered and which may be added after delivery. Air Force officials said that the incorporation of changes should be completed by February 2018. Schedule risk assessments for OCX blocks 1 and 2 delivery vary, making it unclear when the full M-code broadcast will finally be operational. Government assessments of Raytheon’s performance continue to indicate that more schedule delays are likely. Table 4 shows the varying assessments by the Defense Contract Management Agency and the Air Force of potential schedule delays to the proposed June 2021 delivery date and the subsequent operational date that occurs 7 months later. In 2015, we made four recommendations to the Secretary of Defense, including one to use outside experts to help identify all underlying problems on OCX and develop high-confidence cost and schedule estimates, in order to provide the information necessary to make decisions and improve the likelihood of success. 
To date, none of these recommendations have been fully implemented, but DOD has taken steps to address some of them. Further, because the Air Force has undertaken the COps and MCEU programs to provide interim capabilities that mitigate OCX’s delays to the full broadcast capability, we are not making additional recommendations at this time but will continue to monitor progress and risks to the acquisition of OCX. While technology development for the M-code receiver cards is underway, DOD has developed preliminary—but incomplete—plans to fully develop and field M-code receiver cards across the more than 700 weapon systems that will need to make the transition from the current technology. DOD has prepared initial cost and schedule estimates for department-wide fielding for only a fraction of these weapon systems. While the full cost remains unknown, it is likely to be many billions of dollars greater than the $2.5 billion identified through fiscal year 2021 because significant work remains to verify the initial cards work as planned and to develop them further after the MGUE increment 1 program ends. Without greater coordination of integration test results, lessons learned, and design solutions, DOD is at risk of duplicated development work as multiple weapon system programs separately mature and field similar technologies on their own. Further, with the full M-code broadcast available in fiscal year 2022, a gap of unknown duration exists between when M-code is operationally broadcast and when it can be received. Figure 10 highlights the gap between the time the M-code signal will be operational and the undefined time when M-code can be used by the military services. The Air Force program to develop initial M-code receiver test cards has made progress by establishing an acquisition strategy for this effort and maturing receiver test cards. 
In January 2017, DOD approved the MGUE increment 1 program to formally begin development, and it defined the criteria to end the program as (1) verifying technical requirements on all types of final receiver test cards; (2) certifying readiness for operational testing by the Air Force Program Executive Officer; (3) completing operational testing on the four lead platforms for, at a minimum, the first card available; and (4) completing manufacturing readiness assessments for all three contractors. Within the MGUE increment 1 program, contractors are making progress toward delivering final hardware test cards and incremental software capabilities. For example, one contractor has achieved its initial security certification from the Air Force, which is a key step toward making the MGUE increment 1 receiver test card available for continued development and eventual procurement. Further, the MGUE increment 1 program is also conducting risk reduction testing in preparation for formal developmental verification testing, an important step that ensures the receiver cards meet technical requirements. Programs throughout DOD can make risk-based decisions to develop and test the receiver test cards after technical verification of the cards’ hardware and software. According to MGUE program officials, this is significant because it allows non-lead platforms to obtain and work with the cards sooner than the end date of operational testing on lead platforms. Although the Air Force has made progress in maturing receiver test cards, significant development work remains to reach the point where the cards can ultimately be fielded on over 700 different weapon systems. For example, for MGUE increment 1, the Air Force must define additional technical requirements in order for the M-code receiver cards to be compatible and communicate with existing weapon systems. 
The Air Force will also need to conduct operational tests for each of the lead platforms—the Stryker ground combat vehicle; B-2 Spirit bomber; JLTV; and DDG-51 Arleigh Burke destroyer—before the full M-code signal is available with OCX. Because these tests will instead be conducted with the limited signal provided by MCEU, DOD risks discovering issues several years later once full operational testing is conducted. Further, according to military service officials and assessments by DOT&E, this operational testing will be only minimally applicable to other weapon systems because those other weapon systems have different operational requirements and integration challenges than the four lead platforms. As a result, additional development and testing will be necessary on an undetermined number of the remaining weapon systems to ensure the receiver cards address each system’s unique interfaces and requirements. In 2018, DOD will also formally begin development for MGUE increment 2. Increment 2 will provide more compact receiver cards to be used when size, weight, and power must be minimized, such as on handheld receivers, space receivers, and munitions where increment 1 receiver cards are too large to work. The military services are working to mitigate some of these development challenges. For example, Army officials told us the Army does not plan to field MGUE receiver cards on its lead platform, the Stryker, due to ongoing gaps in technical requirements. In addition, there is not a lead platform to demonstrate increment 1 on munitions, since munition requirements were planned to be addressed in increment 2. However, to address its needs, the Army has initiated efforts to modify the MGUE increment 1 receiver card for some munitions that would otherwise need to wait for MGUE increment 2 technologies. Individual munition program offices within other military services have begun to do so as well. 
According to military service officials from the Army, Navy, and Marine Corps, it is essential that user needs are met by increment 2, or the services will have to conduct additional development and testing. The Army previously identified gaps in increment 1 that the Air Force has either addressed in increment 1 or deferred to increment 2, or that will need to be addressed outside of the MGUE increment 1 and 2 programs. Army and Navy officials also stated that they were concerned that any disagreements over requirements for increment 2 could lead to further fielding delays. Finally, the transition from existing GPS receiver cards to M-code receiver cards is likely to take many years. We recently reported that transitioning all DOD platforms to the next generation of receiver cards will likely take more than a decade. A lengthy transition has happened before: previous efforts to modernize GPS to the current receiver cards, begun in 2003, are still underway, and the older receiver cards are still being used. As a result, DOD anticipates that warfighters will have to operate with a mix of older and newer receiver cards. DOD has begun collecting preliminary information on M-code requirements for individual weapon systems. In December 2016, the USD AT&L directed the military services, the Missile Defense Agency (MDA), and Special Operations Command (SOCOM) to submit implementation plans with M-code investment priorities across weapon systems and munitions, including projected costs and schedules. According to DOD, these M-code implementation plans are intended to provide DOD with a management and oversight tool for the fielding effort. In February 2017, each organization submitted its own implementation plan to USD AT&L. These plans were then briefed to the PNT Executive Management Board and the PNT Oversight Council in February and March, respectively. 
However, these implementation plans are preliminary and based on assumptions about the Air Force’s ability to achieve MGUE increment 1 and 2 technical requirements, the timeline required to do so, and the amount of development and test work that will remain for the receiver cards to be ready for production and fielding after the programs end. Since the MGUE increment 2 program has not started development, it has not yet finalized requirements. Once approved, the increment 2 program office will produce an acquisition strategy, schedule, and cost estimate. However, after the MGUE increment 2 program ends, there is no detailed plan for completing development, testing, and fielding of M-code receiver cards for weapon systems across the department. DOD has preliminary cost and schedule estimates for some weapon programs, but lacks a total cost estimate at this point because the department’s estimates do not include all efforts initiated by programs to meet specific needs, including those outside the MGUE increment 1 and 2 programs. The initial M-code implementation plans responded to what was requested, but they do not individually identify the total cost for each organization to develop and field M-code receiver cards, so a DOD-wide total cannot be determined. Because USD AT&L required that the implementation plans include funding and schedule estimates for 2 to 3 years while directing that plans be resubmitted, at a minimum, every 2 years, weapon systems that will need M-code but were not considered an immediate priority were not included in the initial submissions. In addition, the military services, MDA, and SOCOM provided only initial cost estimates. According to military service officials, these estimates were based on the current MGUE increment 1 program schedule and technical development and include risk-based decisions to partially fund specific programs until the MGUE increment 1 program matures. 
According to a USD AT&L official, the plans would both facilitate M-code implementation planning for the department and inform the issuance of waivers. The official stated that as the acquisition programs critical to providing M-code capability mature, future implementation plans should provide more comprehensive estimates of cost and schedule to achieve M-code implementation for the department. Our analysis of the M-code receiver card implementation plans found that initial funding estimates indicate a cost of over $2.5 billion to integrate and procure M-code receiver cards on only a small number of weapon systems out of the hundreds of types that need M-code receiver cards. The full cost will be much larger—likely many billions of dollars—because the majority of the weapon systems that need M-code receiver cards are not funded yet or are only partially funded, according to the M-code implementation plans. Specifically, the military services, MDA, and SOCOM identified 716 types of weapon systems in their February 2017 implementation plans that require almost a million M-code receiver cards. For example, the JLTV fleet—which provides protection for passengers against current and future battlefield threats for multiple military services—is one type of weapon system that will eventually need almost 25,000 receiver cards. Of the 716 types of weapon systems that will need M-code receiver cards, only 28—or less than 4 percent—are fully funded through fiscal year 2021. The remainder have either partially funded M-code development and integration efforts (72 weapon systems) or do not yet have funding planned (616 weapon systems). Additionally, the preliminary estimates to develop and procure M-code receivers on selected weapon systems do not all include funding beyond fiscal year 2021 that will be needed for further development, integration, and procurement. 
This means that DOD and Congress do not have visibility into how much additional funding could be needed to fully fund the remaining 96 percent of all weapon systems that need M-code receivers. Figure 11 shows the M-code development and integration efforts that are funded, partially funded, or unfunded through fiscal year 2021 across DOD weapon systems that will need M-code receiver cards. Because the implementation plans are a first step toward providing DOD leadership insight on this large set of acquisitions and they will be updated at least every 2 years by the different organizations within DOD, we are not making a recommendation at this time. However, we will continue to monitor DOD’s cost and schedule planning. The level of development and procurement effort beyond MGUE increments 1 and 2 is significant and will require close coordination among the military services, MDA, and SOCOM. While Joint Staff officials stated that the DOD Chief Information Officer is working with the military services and Joint Staff to produce a user equipment roadmap to help guide that coordination, they said that these efforts are not yet complete. DOD has designated the Air Force to lead initial development of both larger and smaller test cards that other organizations will need to develop further to meet their individual needs. After the Air Force develops initial cards for both sizes, the breadth and complexity of this acquisition will multiply, as the offices responsible for upgrading hundreds of weapon systems begin their own individual efforts to further develop and test the cards so they work for the unique needs of their specific system. While some common solutions are being developed, Air Force officials said the military services and individual weapon systems will have the freedom to go to the contractors and begin their own development efforts. 
DOD does not yet have a plan in place to help ensure that common design solutions are employed and that DOD avoids duplication of effort as multiple entities separately mature receiver cards. We previously found that duplication occurs when two or more agencies or programs are engaged in the same activities. In this case, because the individual organizations and program offices are likely to be pursuing individual and uncoordinated receiver card programs at different times with different contractors, DOD is at risk for significant duplication of effort. We previously found that establishing formal mechanisms for coordination and information sharing across DOD programs reduces the risk of gaps and results in more efficient and more effective use of resources. Internal control standards also state that establishing clear responsibilities and roles in achieving objectives is key for effective management. Further, DOD previously reported that clear leadership ensures that programs and stakeholders are aligned with common goals. According to MGUE program officials, the MGUE increment 1 program is already capturing all issues observed in receiver test card risk reduction testing and sharing this information through a joint reporting system. However, while non-lead platforms may also report deficiencies in this system, there is no requirement that they do so, nor is there an entity responsible for ensuring that data from testing, design, and development are shared between programs. We previously found that the absence of a formal process for coordination results in the potential for duplication, overlap, and fragmentation. DOD therefore risks paying to repeatedly find design solutions to solve common problems because each program office is likely to undertake its own uncoordinated development effort. Some duplicated effort may already be occurring. 
Air Force officials have expressed concern that work is already being duplicated across the military services in developing embedded GPS systems to be integrated into aircraft. According to multiple DOT&E assessments, the absence of a plan across the wide variety of intended interfaces leaves significant risk in integrating the receiver cards, and therefore cost and schedule risk in fielding them. GPS is a national asset for civilians and the military service members who depend upon it each day. Any disruption to the system would have severe economic and military consequences. In keeping the system sustained and modernizing it with additional capabilities, DOD has spent billions of dollars more than planned developing five interdependent GPS programs. Developing these technologies is complex work, and the collective effort is already years behind initial estimates to provide the warfighter with a means to counter known threats to the current system, such as jamming. It will be many years before M-code receiver cards are fielded at a cost that remains unknown but that will be substantially higher than the estimated $2.5 billion already identified through fiscal year 2021. In the short term, it is unclear when there will be a receiver card ready for production after the end of operational testing, and in the long term DOD risks wasting resources duplicating development efforts on weapon systems with similar requirements. Without better coordination of this effort, DOD risks unnecessary cost increases and schedule delays because there is no established process or place for collecting and sharing development and integration practices and solutions between programs. 
We are making the following recommendation to DOD: The Secretary of Defense should ensure that the Under Secretary of Defense for Acquisition, Technology, and Logistics, as part of M-code receiver card acquisition planning, assign an organization with responsibility for systematically collecting integration test data, lessons learned, and design solutions and making them available to all programs expected to integrate M-code receiver cards. (Recommendation 1) We provided a draft of this report to the Department of Defense for review and comment. In its written comments, reproduced in appendix II, DOD concurred with the recommendation. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Air Force, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by email at chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To determine the extent to which there are acquisition risks to sustaining the Global Positioning System (GPS) satellite constellation, we reviewed the Air Force GPS quarterly reports, program acquisition baselines, integrated master schedules, acquisition strategies, software development plans, test plans, and other documents to the extent they existed for GPS III, Next Generation Operational Control System (OCX), and Contingency Operations (COps) programs. 
We also interviewed officials from the GPS III, OCX, and COps programs; the Air Force Space and Missile Systems Center’s (SMC) GPS Enterprise Integrator office; the prime contractors from all three programs; the Defense Contract Management Agency; the Office of Cost Assessment and Program Evaluation; and the Office of the Director, Operational Test and Evaluation (DOT&E). We also reviewed briefings and other documents from each to evaluate program progress in development. We assessed the status of the currently operational GPS satellite constellation, interviewing officials from the Air Force SMC GPS program office and Air Force Space Command. To assess the risks that a delay in the acquisition and fielding of GPS III satellites could result in the GPS constellation falling below the 24 satellites required by the standard positioning service and precise positioning service performance standards, we employed a methodology very similar to the one we had used to assess constellation performance in 2009, 2010, and 2015. We obtained information dated May 2016 from the Air Force predicting the reliability for 63 GPS satellites—each of the 31 current (on-orbit as of July 2017) and 32 future GPS satellites—as a function of time. Each satellite’s total reliability curve defines the probability that the satellite will still be operational at a given time in the future. It is generated from the product of two reliability curves—a wear-out reliability curve defined by the cumulative normal distribution, and a random reliability curve defined by the cumulative Weibull distribution. For each of the 63 satellites, we obtained the two parameters defining the cumulative normal distribution, and the two parameters defining the cumulative Weibull distribution. For each of the 32 unlaunched satellites we included in our model, we also obtained a parameter defining its probability of successful launch, and its current scheduled launch date. 
The 32 unlaunched satellites include 10 GPS III satellites currently under contract and 22 GPS III satellites planned for contract award in late 2018; launch of the final GPS III satellite we included in our model is scheduled for October 2031. Using this information, we generated overall reliability curves for each of the 63 GPS satellites. We discussed with Air Force and Aerospace Corporation representatives, in general terms, how each satellite’s normal and Weibull parameters were calculated. However, we did not analyze any of the data used to calculate these Air Force-provided parameters. Using the reliability curves for each of the 63 GPS satellites, we developed a Monte Carlo simulation to predict the probability that at least a given number of satellites would be operational as a function of time, based on the GPS launch schedule as of May 2016. We conducted several runs of our simulation—each run consisting of 10,000 trials—and generated “sawtoothed” curves depicting the probability that at least 24 satellites would still be operational as a function of time. We then used our Monte Carlo simulation model to examine the effect of delays to the operational induction of the GPS III satellites into the constellation. We reran the model based on month/year delay scenarios, calculating new probabilities that at least 24 satellites would still be operational as a function of time, and determining, in terms of month/year, the point at which a satellite would be required to enter operations to maintain, without interruption, a 95 percent probability of at least 24 satellites in operation. The Air Force satellite parameters we used for the Monte Carlo simulation pre-dated the Air Force investigation into navigation payload capacitors and the subsequent decision to launch the first satellite “as is” with questionable parts. 
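The reliability model described above lends itself to a compact illustration. The sketch below is not GAO's or the Air Force's actual model: the per-satellite parameters are invented placeholders, and the launch-date and launch-success logic is omitted. It shows only the core mechanics the methodology describes—each satellite's total reliability is the product of a normal-distribution wear-out curve and a Weibull random-failure curve, and a Monte Carlo run of 10,000 trials estimates the probability that at least 24 satellites remain operational at a given time.

```python
import math
import random

def wearout_reliability(t, mu, sigma):
    # Complement of the cumulative normal distribution:
    # probability the satellite has not worn out by time t (years).
    z = (t - mu) / (sigma * math.sqrt(2))
    return 1.0 - 0.5 * (1.0 + math.erf(z))

def random_reliability(t, scale, shape):
    # Complement of the cumulative Weibull distribution:
    # probability of no random failure by time t.
    return math.exp(-((t / scale) ** shape))

def total_reliability(t, mu, sigma, scale, shape):
    # Total reliability is the product of the two curves.
    return wearout_reliability(t, mu, sigma) * random_reliability(t, scale, shape)

def prob_at_least(satellites, t, threshold=24, trials=10_000, seed=1):
    # Monte Carlo: each trial draws every satellite's up/down state from
    # its reliability at time t and counts how many remain operational.
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        up = sum(1 for s in satellites if rng.random() < total_reliability(t, *s))
        if up >= threshold:
            successes += 1
    return successes / trials

# Hypothetical (mu, sigma, scale, shape) parameters for 31 on-orbit
# satellites; the real per-satellite Air Force values are not public here.
fleet = [(12.0, 3.0, 40.0, 1.5)] * 31
print(prob_at_least(fleet, t=5.0))  # high with these invented parameters
```

Sweeping t over a range of dates, and adding each future satellite to the fleet at its scheduled launch date (weighted by launch-success probability), would produce the "sawtoothed" probability curves the methodology describes.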
Therefore, the reliability parameters for the first GPS III satellite were not informed by any possible subsequent Air Force consideration of the decision to launch it “as is” with the questionable parts. To determine the extent to which the Department of Defense (DOD) faces acquisition challenges developing a new ground system to control the broadcast of a modernized GPS signal, we reviewed Air Force program plans and documentation related to cost, schedule, acquisition strategies, technology development, and major challenges to delivering M-code Early Use (MCEU) and OCX blocks 1 and 2. We interviewed officials from the MCEU and OCX program offices, the SMC GPS Enterprise Integrator office, DOT&E, and the prime contractors for the two programs. For OCX, we also reviewed quarterly reviews, monthly program assessments, and slides provided by Raytheon on topics we requested. We also interviewed Office of Performance Assessments and Root Cause Analyses officials regarding root causes of the OCX program’s cost and schedule baseline breach and Defense Contract Management Agency officials charged with oversight of the OCX contractor regarding cost and schedule issues facing the program’s development efforts, major program risks, and technical challenges. To determine the extent to which DOD faces acquisition challenges developing and fielding modernized receiver cards across the department, we reviewed Air Force program plans and documentation related to M-code GPS User Equipment (MGUE) increment 1 cost, schedule, acquisition strategy, and technology development. We interviewed officials at the Air Force SMC GPS program office, the MGUE program office, DOT&E, and the three MGUE increment 1 contractors—L3 Technologies, Raytheon, and Rockwell Collins. 
To identify the military services’ respective development efforts and challenges in integrating MGUE with their lead platforms, we interviewed officials from the lead program offices for the Army’s Defense Advanced GPS Receiver Distributed Device/Stryker, the Air Force’s B-2 aircraft, the Navy’s DDG-51 Arleigh Burke class destroyer, and the Marine Corps’ Joint Light Tactical Vehicle. Additionally, to understand the extent to which DOD has a plan for implementing M-code for the warfighter, we analyzed DOD Positioning, Navigation, and Timing (PNT) plans and other DOD memorandums on GPS receiver cards. We also held discussions with and received information from officials at the Office of the Undersecretary of Defense for Acquisition, Technology, and Logistics; the Joint Staff/J-6 Space Branch; and military service officials from the offices responsible for developing M-code receiver card implementation plans. We conducted this performance audit from February 2016 to December 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, David Best, Assistant Director; Jay Tallon, Assistant Director; Karen Richey, Assistant Director; Pete Anderson; Andrew Berglund; Brandon Booth; Brian Bothwell; Patrick Breiding; Erin Carson; Connor Kincaid; Jonathan Mulcare; Sean Sannwaldt; Alyssa Weir; Robin Wilson; and Marie P. Ahearn made key contributions to this report.
|
GPS provides positioning, navigation, and timing data to civilian and military users who depend on this satellite-based system. Since 2000, DOD—led by the Air Force—has been working to modernize GPS and to keep the current system of satellites—known as the GPS constellation—operational, although these efforts have experienced cost and schedule growth. The National Defense Authorization Act for Fiscal Year 2016 contained a provision that the Air Force provide reports to GAO on GPS acquisition programs and that GAO brief the congressional defense committees. GAO briefed the committees in 2016 and 2017. This report summarizes and expands on information presented in those briefings. This report assesses the extent to which DOD faces acquisition challenges (1) sustaining the GPS constellation; (2) developing a new ground control system; and (3) developing and fielding modernized receivers. GAO analyzed GPS quarterly acquisition reports and data, acquisition strategies, software and test plans, and other documents, and interviewed DOD and contractor officials. The Department of Defense's (DOD) acquisition of the next generation Global Positioning System (GPS) satellites, known as GPS III, faces a number of acquisition challenges, but these challenges do not threaten DOD's ability to continue operating the current GPS system, which DOD refers to as the constellation, in the near term. Projections for how long the current constellation will be fully capable have increased by nearly 2 years to June 2021, affording some buffer to offset any additional satellite delays. While the first GPS III satellite has a known parts problem, six follow-on satellites—which do not—are currently scheduled to be launched by June 2021. DOD is relying on a high-risk acquisition schedule to develop a new ground system, known as OCX, to control the broadcast of a modernized military GPS signal. OCX remains at risk for further delays and cost growth. 
To mitigate continuing delays to the new ground control system, the Air Force has begun a second new program—Military-code (M-code) Early Use—to deliver an interim, limited broadcast encrypted GPS signal for military use by modifying the current ground system. GAO will continue to monitor OCX progress. DOD has made some progress on initial testing of the receiver cards needed to utilize the M-code signal. However, additional development is necessary to make M-code work with over 700 weapon systems that require it. DOD has begun initial planning for some weapon systems, but more remains to be done to understand the cost and schedule needed to transition to M-code receivers. The preliminary estimate for integrating and testing a fraction of the weapon systems that need the receiver cards is over $2.5 billion through fiscal year 2021 with only 28 fully and 72 partially funded (see figure). The cost will increase by billions when as yet unfunded weapon systems are included. The level of development and procurement effort beyond the initial receiver cards is significant and will require close coordination across DOD. After the Air Force develops initial cards, the breadth and complexity of this acquisition will multiply, as the offices responsible for upgrading hundreds of weapon systems begin their own individual efforts to further develop and test the cards. However, DOD does not have an organization assigned to collect test data, lessons learned, and design solutions so that common design solutions are employed to avoid duplication of effort as multiple entities separately mature receiver cards. DOD therefore risks paying to repeatedly find design solutions to solve common problems because each program office is likely to undertake its own uncoordinated development effort. DOD should assign responsibility to an organization to collect test data, lessons learned, and design solutions so they may be shared. DOD concurred with the recommendation.
|
The federal government obligates tens of billions of dollars annually on IT. Prior IT expenditures, however, have too often produced failed projects—that is, projects with multimillion-dollar cost overruns and schedule delays and with questionable mission-related achievements. In our 2017 high-risk series update, we reported that improving the management of IT acquisitions and operations remains a high-risk area because the federal government has spent billions of dollars on failed IT investments. Agencies are generally required to use full and open competition—meaning all responsible sources are permitted to compete—when awarding contracts. However, the Competition in Contracting Act of 1984 recognizes that full and open competition is not feasible in all circumstances and authorizes contracting without full and open competition under certain conditions. In addition, there are competition-related requirements for other types of contract vehicles, including multiple award indefinite-delivery/indefinite-quantity (IDIQ) contracts and the General Services Administration’s (GSA) Federal Supply Schedule (FSS). The rules regarding exceptions to full and open competition and other competition-related requirements are outlined in various parts of the Federal Acquisition Regulation (FAR). For example: Contracting officers may award a contract without providing for full and open competition if one of seven exceptions listed in FAR Subpart 6.3 applies. Examples of allowable exceptions include circumstances when products or services required by the agency are available from only one source, when disclosure of the agency’s need would compromise national security, or when the need for products and services is of such an unusual and compelling urgency that the federal government faces the risk of serious financial or other injury. 
Generally, exceptions to full and open competition under FAR subpart 6.3 must be supported by written justifications that contain sufficient facts and rationale to justify use of the specific exception. Depending on the proposed value of the contract, the justifications require review and approval at successively higher approval levels within the agency. Contracting officers are also authorized to issue orders under multiple award IDIQ contracts noncompetitively. Generally, contracting officers must provide each IDIQ contract holder with a fair opportunity to be considered for each order unless an exception applies. Contracting officers who issue orders over certain thresholds under an exception to fair opportunity are required to provide written justification for doing so. In April 2017, we found that government-wide, more than 85 percent of all order obligations under multiple award IDIQ contracts were competed from fiscal years 2011 through 2015. Orders placed under GSA’s FSS program are also exempt from FAR part 6 requirements. However, ordering procedures require certain FSS orders exceeding the simplified acquisition threshold to be placed on a “competitive basis,” which includes requesting proposals from as many schedule contractors as practicable. If a contracting officer decides not to provide an opportunity to all contract holders when placing an FSS order over the simplified acquisition threshold, that decision must be documented and approved. 
The FAR allows for orders to be placed under these circumstances based on the following justifications: when an urgent and compelling need exists; when only one source is capable of providing the supplies or services because they are unique or highly specialized; when in the interest of economy and efficiency, the new work is a logical follow-on to an original FSS order that was placed on a “competitive basis;” and when an item is “peculiar to one manufacturer.” Agencies may also award contracts on a sole-source basis in coordination with the Small Business Administration (SBA) to eligible 8(a) program participants. While agencies are generally not required to justify these sole-source awards, contracts that exceed a total value of $22 million require a written justification in accordance with FAR Subpart 6.3. In certain situations, it may become evident that services could lapse before a subsequent contract can be awarded. In these cases, because of time constraints, contracting officers generally use one of two options: (1) extend the existing contract or (2) award a short-term stand-alone contract to the incumbent contractor on a sole-source basis to avoid a lapse in services. While no government-wide definition of bridge contracts exists, we developed the following definitions related to bridge contracts that we used for our October 2015 report: Bridge contract. An extension to an existing contract beyond the period of performance (including base and option years), or a new, short-term contract awarded on a sole-source basis to an incumbent contractor to avoid a lapse in service caused by a delay in awarding a follow-on contract. Predecessor contract. The contract in place prior to the award of a bridge contract. Follow-on contract. A longer-term contract that follows a bridge contract for the same or similar services. This contract can be competitively awarded or awarded on a sole-source basis. 
Contracts, orders, and extensions (both competitive and noncompetitive) are included in our definition of a “bridge contract” because the focus of the definition is on the intent of the contract, order, or extension. DOD and some of its components, including the Navy, the Defense Logistics Agency (DLA), and the Defense Information Systems Agency (DISA), have established their own bridge contract definitions and policies. Congress enacted legislation in 2017 that established a definition of “bridge contracts” for DOD and its components. For the purposes of this report, we use the same definition as we used in our October 2015 report to define bridge contracts, unless otherwise specified. We acknowledge that in the absence of a government-wide definition, agencies may have differing views of what constitutes a bridge contract. We discuss these views further in the body of this report. In our October 2015 report on bridge contracts, we found that the agencies included in our review—DOD, HHS, and the Department of Justice—had limited or no insight into their use of bridge contracts. In addition, we found that while bridge contracts are typically envisioned as short term, some bridge contracts included in our review involved one or more bridges that spanned multiple years—potentially undetected by approving officials. The fact that the full length of a bridge contract, or of multiple bridge contracts for the same requirement, is not readily apparent from documents that may require review and approval, such as an individual justification and approval (J&A), presents a challenge for those agency officials responsible for approving the use of bridge contracts. Approving officials signing off on individual J&As may not have insight into the total number of bridge contracts that may be put in place by looking at individual J&As alone. 
In October 2015, we recommended that the Administrator of the Office of Federal Procurement Policy (OFPP) take the following two actions: (1) take appropriate steps to develop a standard definition for bridge contracts and incorporate it as appropriate into relevant FAR sections; and (2) as an interim measure until the FAR is amended, provide guidance to agencies on a definition of bridge contracts, with consideration of contract extensions as well as stand-alone bridge contracts, and on suggestions for agencies to track and manage their use of these contracts, such as identifying a contract as a bridge in a J&A when it meets the definition and listing the history of previous extensions and stand-alone bridge contracts. OFPP concurred with our recommendation to provide guidance to agencies on bridge contracts and stated its intention to work with members of the FAR Council to explore the value of incorporating a definition of bridge contracts in the FAR. As of November 2018, OFPP had not yet implemented our recommendations but had taken steps to develop guidance on bridge contracts. Specifically, OFPP staff told us they had drafted management guidance, which includes a definition of bridge contracts, and had provided it to agencies’ Chief Acquisition Officers and Senior Procurement Executives for review. OFPP staff told us they received many comments on the draft guidance and were in the process of addressing those comments. Federal agencies reported annually obligating from $53 billion in fiscal year 2013 to $59 billion in fiscal year 2017 on IT-related products and services. Of that amount, agencies reported that more than $15 billion each year—or about 30 percent of all obligations for IT products and services—was awarded noncompetitively. 
However, in a generalizable sample of contracts and orders, we found significant errors in certain types of orders, which call into question the reliability of competition data associated with roughly $3 billion per year in obligations. As a result, the actual amount agencies obligated on noncompetitive contract awards for IT products and services is unknown. From fiscal years 2013 through 2017, we found that total IT obligations reported by federal agencies ranged from nearly $53 billion in fiscal year 2013 to $59 billion in fiscal year 2017. The amount obligated on IT products and services generally accounted for about one-tenth of total federal contract spending (see figure 1). For fiscal years 2013 through 2017, the three agencies we reviewed in more depth—DOD, DHS, and HHS—collectively accounted for about two-thirds of federal IT spending (see figure 2). From fiscal years 2013 through 2017, agencies reported in FPDS-NG obligating more than $15 billion—about 30 percent of all annual IT obligations—each year on noncompetitively awarded contracts and orders. We determined, however, that the agencies’ reporting of certain competition data was unreliable (see figure 3). Specifically, we found that contracting officers miscoded 22 out of 41 orders in our sample, of which 21 cited “follow-on action following competitive initial action” or “other statutory authority” as the legal authority for using an exception to fair opportunity. DOD contracting officers had miscoded 11 of the 21 orders, while DHS and HHS contracting officers had miscoded 4 and 6 orders, respectively. This miscoding occurred at such a high rate that it put into question the reliability of the competition data on orders totaling roughly $3 billion per year in annual obligations. In each of these cases, contracting officers identified these orders as being noncompetitively awarded when they were, in fact, competitively awarded. 
Because assessing whether contracts and orders identified as competitively awarded were properly coded was outside the scope of our review, we are not in a position to assess the overall reliability of competition information for IT-related contracts. For these 21 orders, we found that DHS was aware of issues surrounding most of its miscodings and had taken actions to fix the problems, while DOD and HHS generally had limited insight as to why these errors occurred. DHS miscoded 4 orders, 3 of which were awarded under single award contracts. DHS officials told us that orders issued from single award contracts should inherit the competition characteristics of the parent contract. However, as FPDS-NG currently operates, contracting officers have the ability to input a different competition code for these orders. In this case, each of the single award contracts was competitively awarded, and therefore all subsequent orders issued from these contracts should be considered competitively awarded, as there are no additional opportunities for competition. DHS has taken actions to address this issue. DHS officials stated that, in conjunction with DOD, they have asked GSA, which manages the FPDS-NG data system, to modify FPDS-NG to automatically prefill competition codes for orders awarded under single award contracts. DHS officials noted that GSA expects to correct the issue in the first quarter of fiscal year 2019, which should mitigate the risk of agencies miscoding orders issued under single award contracts in the future. DHS officials have also provided training to their contracting personnel emphasizing that single award orders must inherit the characteristics of the parent contract. DOD and HHS officials, on the other hand, had limited insight as to why their orders were miscoded. For example, DOD miscoded a total of 11 orders (5 awarded under single award contracts and 6 awarded under multiple award contracts).
For 8 of these orders, contracting officers did not provide reasons why the errors occurred. For the remaining 3 orders—each of which was issued under a single award contract—contracting officials told us that they had used the “follow-on action following competitive initial action” exception because the underlying contract had been competed. Similarly, at HHS, which miscoded a total of 6 orders (4 awarded under single award contracts and 2 awarded under multiple award contracts), component officials told us that these errors were accidental and could not provide additional insight into why they were made. While GSA’s changes to the FPDS-NG system, when implemented, may help address the miscoding of competition data on orders issued from single award contracts, they will not address coding errors for multiple award orders that cited exceptions to competition even when they were competed. The FAR notes that FPDS-NG data are used in a variety of ways, including assessing the effects of policies and management initiatives, yet we have previously reported on the shortcomings of the FPDS-NG system, including issues with the accuracy of the data. Miscoding of competition requirements may hinder the accomplishment of certain statutory, policy, and regulatory requirements. For example, the FAR requires agency competition advocates, among other duties and responsibilities, to prepare and submit an annual report to their agencies’ senior procurement executive and chief acquisition officer on actions taken to achieve full and open competition in the agency and to recommend goals and plans for increasing competition. Similarly, OMB required agencies to reduce their reliance on noncompetitive contracts, which it categorized as high-risk, because, absent competition, agencies must negotiate contracts without a direct market mechanism to help determine price.
Federal internal control standards state that management should use quality information to achieve an entity’s objectives. Without identifying the reasons why contracting officers are miscoding these orders in FPDS-NG, DOD and HHS are unable to take action to ensure that competition data are accurately recorded, and are at risk of using inaccurate information to assess whether they are achieving their competition objectives. After excluding the $3 billion in annual obligations associated with data we determined were not sufficiently reliable, we found that from fiscal years 2013 through 2017 about 90 percent of noncompetitive IT obligations reported in FPDS-NG were used to buy services, hardware, and software (see figure 4). Services include the maintenance and repair of IT equipment as well as professional technology support. Hardware includes products such as fiber optic cables and computers, and software includes items such as information technology software and maintenance service plans. When noncompetitively awarding IT contracts or orders, the documentation at the three agencies we reviewed generally cited either that only one source could meet their needs or that they were awarding the contract sole-source to an 8(a) small business participant. Specifically, based on our generalizable sample, we estimate that nearly 60 percent of fiscal year 2016 noncompetitive contracts and orders at DOD, DHS, and HHS were awarded because agencies cited that only one contractor could meet the need, and approximately 26 percent of contracts and orders were awarded sole-source to an 8(a) small business participant. We estimate that agencies cited a variety of other reasons for not competing approximately 16 percent of noncompetitive contracts and orders, such as unusual and compelling urgency, international agreement, and national security.
Within our sample of 142 contracts and orders, we analyzed J&As or similar documents to obtain additional detail as to why the contracts and orders were awarded noncompetitively. See table 2 for a breakdown of the overall reasons cited for awarding contracts noncompetitively within our sample. For 79 of the 142 contracts and orders we reviewed, agencies cited that only one source could meet the need. We found that this exception was the most commonly cited reason for a sole-source IT contract or order at DOD and DHS, but not at HHS. At HHS, the most common reason was that the contract or order was awarded on a sole-source basis to an 8(a) participant, which we discuss in more detail later. Agencies justified use of the “only one source” exception on the basis that the contractor owned proprietary technical or data rights; that the contractor had unique qualifications or experience; that there were compatibility issues; or that a brand-name product was needed (see figure 5). The following examples illustrate the reasons cited by the agencies as to why only one contractor could meet their needs: Proprietary data rights issues and compatibility issues. The Navy issued a 9-month, approximately $350,000 order under an IDIQ contract for two data terminal sets. According to Navy officials, the terminal sets have been used by the Navy since the 1990s to exchange radar tracking and other information among airborne, land-based, and ship-board tactical data systems and with certain allies. The Navy’s J&A document noted that the contractor owned the proprietary data rights to the transmitting equipment and software, and the Navy required the equipment to be compatible and interchangeable with systems currently fielded throughout the Navy. Furthermore, the document noted that seeking competition through the development of a new source would result in additional costs that would far exceed any possible cost savings that another source could provide and would cause unacceptable schedule delays.
This example illustrates that the decisions program officials make during the acquisition process about whether to acquire certain rights to technical data can have far-reaching implications for DOD’s ability to sustain and competitively procure parts and services for those systems, as we have previously reported. In our May 2014 report on competition in defense contracting, we found that 7 of 14 justifications we reviewed explained that the awards could not be competed due to a lack of technical data. All 7 of these justifications or supporting documents described situations, ranging from 3 to 30 years in duration, in which DOD was unable to conduct a competition because data rights were not purchased with the initial award. We recommended in May 2014 that DOD ensure that existing acquisition planning guidance promotes early vendor engagement and allows both the government and vendors adequate time to prepare for competition. DOD concurred with our recommendation. In April 2015, DOD updated its acquisition guidance to incorporate new guidelines for creating and maintaining a competitive environment. These guidelines emphasize acquisition planning steps, including obtaining industry feedback on draft solicitations, market research, and requirements development. Unique qualifications and experience. DHS placed four separate orders under an IDIQ contract for data center support totaling approximately $7 million. The requirement was to maintain mission critical services during a data center support pilot, prototype, and transition period starting in fiscal year 2015. Among other things, DHS’s J&A noted that no other contractors had sufficient experience with DHS’s infrastructure and requirements necessary to maintain services at the required level during the transition period. HHS awarded an approximately $4 million contract to buy support services for an IT center for a 12-month ordering period, including options.
HHS’s J&A noted that only the incumbent contractor had the requisite knowledge and experience to operate and maintain the mission and business systems in the IT center during the transition of operations from one location to another. The justification further stated that HHS had no efforts underway to increase competition in the future as this requirement is not anticipated to be a recurring requirement. Program officials stated that they are migrating from legacy IT systems to a new commercial off-the-shelf system. Brand-name products. DOD awarded a 5-month, approximately $500,000 contract for brand name equipment and installation that supported various video-teleconference systems. The J&A stated that this particular brand name product was the only product that would be compatible with current configurations installed in one of its complexes. To increase competition in the future, the J&A stated that technical personnel will continue to evaluate the marketplace for commercially available supplies and installation that can meet DOD’s requirements. For 42 of the 142 contracts and orders we reviewed, we found that agencies awarded a sole-source contract or order to 8(a) small business participants. HHS awarded 13 of its 23 sole-source contracts and orders we reviewed to 8(a) small business participants, DOD awarded 25 of 95, and DHS 4 of 24. We found that all contracts and orders in our review that were awarded on a sole-source basis to 8(a) small business participants were below the applicable competitive thresholds or otherwise below the FAR thresholds that require a written justification. As previously discussed, agencies may award contracts on a sole-source basis to eligible 8(a) participants, either in coordination with SBA or when they are below the competitive threshold. While agencies are generally not required to justify these smaller dollar value sole-source 8(a) awards, contracts that exceed a total value of $22 million require a written justification. 
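The sample fractions behind these reasons can be sketched with simple arithmetic. This is an illustrative sketch using the report’s raw counts; the published figures (nearly 60 percent and about 26 percent) are weighted generalizable estimates, so they differ slightly from these unweighted fractions:

```python
# Raw, unweighted fractions from the 142-contract sample (report figures).
sample_size = 142
only_one_source = 79   # contracts/orders citing that only one source could meet the need
sole_source_8a = 42    # contracts/orders awarded sole-source to 8(a) participants
sole_source_8a_by_agency = {"DOD": 25, "HHS": 13, "DHS": 4}

print(f"only one source:  {only_one_source / sample_size:.0%}")  # weighted estimate: ~60%
print(f"sole-source 8(a): {sole_source_8a / sample_size:.0%}")   # weighted estimate: ~26%

# The agency-level counts account for all 42 sole-source 8(a) awards.
assert sum(sole_source_8a_by_agency.values()) == sole_source_8a
```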
Since none of the 8(a) sole-source contracts and orders in our review required written justifications, the contract files generally did not provide the rationale behind the sole-source award. Policy and contracting officials from all three agencies we reviewed stated they made sole-source awards to 8(a) small business participants to help meet the agency’s small business contracting goals and to save time. HHS officials further stated that they consider their awards to 8(a) small business participants a success because they are supporting small businesses. Officials stated that once a requirement is awarded through the 8(a) program, the FAR requires that the requirement be set aside for an 8(a) contractor unless the requirement has changed or no 8(a) contractor is capable or available to complete the work. For 23 of the 142 contracts and orders we reviewed, we found that agencies cited other reasons for awarding contracts and orders noncompetitively. For example: Urgent and compelling need. DHS’s Coast Guard awarded an approximately 10-month, $6.5 million order (encompassing all options) for critical payroll services in its human resources management system under a GSA federal supply schedule contract. The Coast Guard justified the award based on an urgent and compelling need. A Coast Guard official explained that efforts to competitively award a follow-on contract had been delayed because the Coast Guard had not developed a defined statement of work in a timely manner and because the agency had received a larger number of proposals than initially anticipated; as a result, the evaluation process took longer than expected. In addition, the Coast Guard’s competitive follow-on contract, which was awarded in June 2018, was protested. In October 2018, GAO denied the protest, and the Coast Guard is currently planning to transition to the newly awarded contract. International agreement.
The Army placed an approximately 8-month, $1 million order under an IDIQ contract for radio systems and cited international agreement as the reason for a noncompetitive award. This order was part of a foreign military sales contract with the Government of Denmark. Authorized or required by statute. The Defense Logistics Agency (DLA) cited “authorized or required by statute” when it placed an approximately $1.5 million, 12-month order under an IDIQ contract for sustainment support services for an application that is used for planning and initiating contracting requirements in contingency environments. DLA noted that this model was contracted under the Small Business Innovation Research Program, which supports scientific and technological innovation through the investment of federal research funds into various research projects. National security. The U.S. Special Operations Command (SOCOM) placed an approximately 8-month, $1 million order for radio spare parts and cited national security as the reason for a noncompetitive award. We estimate that about 8 percent of contracts and orders above $150,000 in fiscal year 2016 at DOD, DHS, and HHS were bridge contracts. Consistent with our October 2015 findings, agencies we reviewed face continued challenges with oversight of bridge contracts, based on 15 contracts and orders we reviewed in-depth. For example, we found that in 9 of the 15 cases, bridge contracts were associated with additional bridges not apparent in the documentation related to the contract and order we reviewed, such as a J&A, and corresponded with longer periods of performance and higher contract values than initially apparent. Agency officials cited a variety of reasons for needing bridge contracts, including acquisition planning challenges, source selection challenges, and bid protests.
Based on our generalizable sample, we estimate that about 8 percent of contracts and orders above $150,000 in fiscal year 2016 at DOD, DHS, and HHS were bridge contracts. We verified, using our definition of bridge contracts as criteria, that 13 of 142 contracts and orders in our generalizable sample were bridge contracts, based on reviews of J&As, limited source justifications, or exceptions to fair opportunity, among other documents. We found two additional bridge contracts related to our generalizable sample while conducting our in-depth review, bringing the total number of bridge contracts we identified during this review to 15. We found that the bridge contracts we reviewed were often longer than initially apparent from our review of related documentation, such as a J&A, and sometimes spanned multiple years. Bridge contracts can be a useful tool in certain circumstances to avoid a gap in providing products and services, but they are typically envisioned to be used for short periods of time. When we conducted an in-depth review of the bridge contracts, such as by reviewing the contract files for the predecessor, bridge, and follow-on contracts, we found that in most cases these involved one or more bridges that spanned longer periods and corresponded with higher contract values than initially apparent. Specifically, we found that 9 of the 15 bridge contracts had additional bridges related to the same requirement that were not initially apparent from documents requiring varying levels of approval by agency officials, such as the J&As. Collectively, agencies awarded bridge contracts associated with these 15 contracts and orders with estimated contract values of about $84 million (see table 3).
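The bridge-contract tallies above imply roughly the following. This is a raw, unweighted sketch using the report’s counts; GAO’s 8 percent figure is a weighted generalizable estimate:

```python
# Bridge-contract tallies from the sample (figures from the report).
sample_size = 142
bridges_verified = 13      # verified in the generalizable sample
bridges_found_later = 2    # found during the subsequent in-depth review
bridges_total = bridges_verified + bridges_found_later

hidden_bridges = 9         # of the 15, had related bridges not apparent in the J&A

print(f"raw sample rate: {bridges_verified / sample_size:.1%}")  # weighted estimate: ~8%
print(f"not fully visible in documentation: {hidden_bridges} of {bridges_total}")
```

The gap between the 13 bridges visible in the sample documents and the 15 found in depth is the oversight problem the report describes: individual J&As understate how long a requirement has been bridged.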
The following examples illustrate contracts we reviewed in which the periods of performance were longer than initially apparent: HHS’s Indian Health Service (IHS) awarded a 4-month, approximately $1.6 million bridge order for project management and support services for IHS’s resource and patient management system. We found, however, that the predecessor contract had already been extended by 6 months before the award of the bridge order due to acquisition planning challenges associated with delays in developing the acquisition package for the follow-on contract. Subsequently, the 4-month bridge order was extended for an additional 6 months, in part because the follow-on award—which had been awarded to a new contractor—was protested by the incumbent contractor due to concerns over proposal evaluation criteria. Ultimately, the protest was dismissed. Following the resolution of the bid protest, officials awarded an additional 2-month bridge order for transition activities. In total, the bridge orders and extensions spanned 18 months and had an estimated value of about $4.7 million. Figure 6 depicts the bridge orders and extensions and indicates the 4-month bridge and 6-month extension we had initially identified. The Air Force awarded a 3-month, approximately $630,000 bridge contract to support a logistics system used to monitor weapon system availability and readiness. We found, however, that the Air Force had previously awarded a 3-month bridge contract due to delays resulting from a recent reorganization, which, according to Air Force officials, made it unclear which contracting office would assume responsibility for the requirement. The Air Force subsequently awarded an additional 3-month bridge contract due to acquisition planning challenges, such as planning for the award of the follow-on sole-source contract. The total period of performance for the bridges was 9 months with an estimated value of about $1.9 million (see figure 7).
As of August 2018, 13 of the 15 bridge contracts had follow-ons in place—5 were awarded competitively and 8 were awarded noncompetitively. Two bridge contracts do not currently have follow-on contracts in place for various reasons. For example, in one instance, the Coast Guard’s requirement for human resources and payroll support services has continued to operate under a bridge contract because the Coast Guard’s planned follow-on contract—a strategic sourcing IDIQ—was awarded in June 2018 and subsequently protested, among other delays. Based on our reviews of contract documentation and information provided by agency officials, we found that acquisition planning challenges were the principal cause for needing to use a bridge contract across the 15 bridge contracts we reviewed. In particular, acquisition packages prepared by program offices to begin developing a solicitation were often not prepared in a timely fashion. Acquisition packages include statements of work and independent government cost estimates, among other documents, and are generally prepared by the program office, with the assistance of the contracting office. In addition to acquisition planning challenges, officials cited delays in source selection and bid protests, among others, as additional reasons justifying the need to use a bridge contract (see figure 8). The following examples illustrate reasons officials cited for needing a bridge contract: DOD’s DISA awarded a bridge contract for IT support services due to acquisition planning challenges, and specifically, the late submission of acquisition packages. According to contracting officials, the bridge contract was originally intended to consolidate 3 of the previous contracts associated with this requirement, but a fourth was added much later in the process.
DISA contracting officials said that the program office did not submit acquisition package documentation in a timely manner, and, once submitted, the documentation required numerous revisions. These officials added that they had to award an additional bridge contract to avoid a lapse in service once they received a completed package from the program office because there was not enough time to do a competitive source selection and analysis. DOD’s SOCOM extended an IDIQ contract for radio supplies and services due to source selection delays and acquisition workforce challenges. For example, contracting officials said they extended the IDIQ for 12 months because the contracting office was working on a source selection for the follow-on contract for modernized radios and simply did not have the manpower to award a new sustainment contract for the existing radios at the same time. DHS’s Customs and Border Protection (CBP) awarded an approximately 16-month bridge contract in June 2016 for engineering and operations support of CBP’s Oracle products and services due to bid protests associated with March 2016 orders for this requirement. We found the protests were filed on the basis that CBP had issued the task order on a sole-source basis, which precluded other contractors from competing for the award. GAO dismissed the protest in May 2016 as a result of CBP’s stated intent to terminate the task order and compete the requirement as part of its corrective action plan. According to CBP contracting officials, they awarded the approximately 16-month bridge contract to the incumbent contractor to continue services until GAO issued a decision and the services could be transitioned to the awardee. In September 2017, CBP officials awarded the competitive follow-on contract to a new vendor, but this award was also protested due to alleged organizational conflicts of interest, improperly evaluated technical proposals, and an unreasonable best-value tradeoff determination.
As a result, CBP officials issued a stop-work order effective October 2017. To continue services during the protest, CBP officials extended the existing bridge contract by 3 months and then again by another 6 months. In January 2018, GAO dismissed the protest in its entirety and the stop-work order was lifted. According to a CBP contracting official, CBP did not exercise the final 3 months of options under the 6-month extension. In 2015, we found that the full length of a bridge contract, or of multiple bridge contracts, is not always readily apparent from review of an individual J&A. This presents challenges for approving officials, who may not have insight into the total number of bridges put in place by looking at individual J&As alone. We found a similar situation in our current review: the J&As for the 8 bridge contracts that had them did not include complete information on the periods of performance or estimated values of all related bridge contracts. OFPP has not yet taken action to address the challenges related to the use of bridge contracts that we found in October 2015. At that time, we recommended that OFPP take appropriate steps to develop a standard definition of bridge contracts and incorporate it as appropriate into relevant FAR sections, and provide guidance to federal agencies in the interim. We further recommended that the guidance include (1) a definition of bridge contracts, with consideration of contract extensions as well as stand-alone bridge contracts, and (2) suggestions for agencies to track and manage their use of these contracts, such as identifying a contract as a bridge in a J&A when it meets the definition and listing the history of previous extensions and stand-alone bridge contracts back to the predecessor contract in the J&A. However, as of November 2018, OFPP had not yet done so.
As a result, agencies continue to face challenges similar to those we identified in 2015 with regard to the use of bridge contracts, and there is a lack of government-wide guidance that could help address them. In the absence of a federal government-wide definition, others have taken steps to establish a bridge contracts definition. For example, Congress has established a statutory definition of bridge contracts that is applicable to DOD and its components. Specifically, Section 851 of the National Defense Authorization Act for Fiscal Year 2018 defined a bridge contract as (1) an extension to an existing contract beyond the period of performance to avoid a lapse in service caused by a delay in awarding a subsequent contract; or (2) a new short-term contract awarded on a sole-source basis to avoid a lapse in service caused by a delay in awarding a subsequent contract. Section 851 requires the Secretary of Defense to ensure, by October 1, 2018, that DOD program officials plan appropriately to avoid the use of a bridge contract for services. For instances where bridge contracts are awarded due to poor acquisition planning, the legislation outlines notification requirements with associated monetary thresholds. Acting on this requirement and in response to our prior bridge contracts report, DOD issued a bridge contracts policy memorandum in January 2018. The policy defines bridge contracts as modifications to existing contracts that extend the period of performance, increase the contract ceiling or value, or both, or as new, interim sole-source contracts awarded to the same or a new contractor to cover the timeframe between the end of the existing contract and the award of a follow-on contract. The DOD policy excludes extensions awarded using the option to extend services clause from being considered bridge contracts unless the extension exceeds 6 months.
In addition, DOD’s bridge contract policy directs the military departments and DOD components to develop a plan to reduce bridge contracts and to report their results annually to the Office of the Under Secretary of Defense for Acquisition and Sustainment. As of August 2018, DHS and HHS did not have component- or department-level policies that define or provide guidance on the use of bridge contracts. Differing definitions of bridge contracts can lead to varying perspectives as to what constitutes a bridge contract. For example: Differing views on whether a contract within the 8(a) program can be a bridge. In one instance, we reviewed a 3-month, approximately $1.9 million bridge contract that DLA awarded to the incumbent contractor for a variety of IT contractor support services for DLA’s Information Operations (J6). This bridge contract was awarded to continue services until DLA could award a 12-month, roughly $2.9 million sole-source contract (including all options) to an 8(a) small business participant to consolidate tasks from 20 contracts as part of a reorganization effort within J6. After that contract expired, DLA awarded a second 12-month, about $3 million contract (including all options) to the same 8(a) small business participant to continue these task consolidation efforts. DLA subsequently awarded a 2-month, $122,000 contract extension to continue services until it could award a follow-on order under DLA’s J6 Enterprise Technology Services (JETS) multiple award IDIQ contract, the award of which had also been delayed. Although the 8(a) contracts were not awarded to the incumbent of the initial 3-month bridge, we believe that these contracts could be considered bridge contracts as they were meant to bridge a gap in services until the reorganization efforts were complete and the JETS contract was awarded.
DLA contracting officials, however, told us they do not consider the 8(a) contracts to be bridge contracts as these two contracts and the follow-on task order under JETS were awarded sole-source to 8(a) small business participants. DLA officials added that they plan to keep the requirement in the 8(a) program. Differing views as to whether contract extensions are bridges. DOD’s policy generally does not include contract extensions using the “option to extend services” clause as bridges, unless the option is extended beyond the 6 months allowed by the clause. Navy policy, however, states that using the option to extend services clause is considered a bridge if the option was not priced at contract award. Similarly, HHS officials stated that the department does not consider contract extensions using the “option to extend services” clause to be bridge contract actions if the total amount of the services covered is evaluated in the initial award and the length does not extend beyond the allowable 6 months. The differences among agencies’ views and policies may be due to the extent to which the extensions are considered “competitive”. For the purposes of our definition, if the extension—whether it was competed or not—was used to bridge a gap in service until a follow-on contract could be awarded, then it would be considered a bridge. Without agreement as to what constitutes a bridge contract, agencies’ efforts to improve oversight of and to identify challenges associated with the use of bridge contracts will be hindered. While we are not making any new recommendations in this area, we continue to believe that our October 2015 recommendation to OFPP to establish a government-wide definition and provide guidance to agencies on their use remains valid.
An estimated 7 percent of IT noncompetitive contracts and orders at selected agencies in fiscal year 2016 were in support of legacy IT systems as newly defined in statute, which is considerably fewer than we found when using the previous definition of legacy IT. At the time our review began, OMB’s draft definition for legacy IT systems stated that legacy IT spending was spending dedicated to maintaining the existing IT portfolio, excluding provisioned services such as cloud. Using this definition, and based on our generalizable sample, we estimated that about 80 percent of IT noncompetitive contracts and orders over $150,000 in fiscal year 2016 at DOD, DHS, and HHS were awarded in support of legacy IT systems. In December 2017, however, Congress enacted the Modernizing Government Technology Act (MGT) as part of the National Defense Authorization Act for Fiscal Year 2018. This act defined a legacy IT system as an “outdated or obsolete system of information technology.” Given this new statutory definition of a legacy IT system, we requested that each agency reassess how it would characterize the nature of each IT system using the revised definition provided under the MGT Act. For the 142 contracts and orders we reviewed, we found that when using the new definition, agencies significantly reduced the number of contracts and orders identified as supporting legacy IT systems. For example, using the OMB draft definition, agencies identified 118 of 142 contracts and orders as supporting legacy IT systems. However, when using the more recent MGT Act definition, agencies identified only 10 of 137 contracts and orders as supporting legacy IT systems (see figure 9). Consequently, using the definition provided under the MGT Act, we estimate that about 7 percent of IT noncompetitive contracts and orders over $150,000 in fiscal year 2016 at DOD, DHS, and HHS were awarded in support of outdated or obsolete legacy IT systems.
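The effect of the definition change can be summarized with the report’s raw counts. This is an unweighted sketch; the 80 and 7 percent figures above are weighted generalizable estimates, and the report reassessed 137 (not 142) contracts and orders under the MGT Act definition:

```python
# Legacy-IT classification under the two definitions (report figures).
omb_draft_flagged, omb_draft_total = 118, 142   # OMB draft definition
mgt_act_flagged, mgt_act_total = 10, 137        # MGT Act statutory definition

print(f"OMB draft definition: {omb_draft_flagged / omb_draft_total:.0%}")  # weighted estimate: ~80%
print(f"MGT Act definition:   {mgt_act_flagged / mgt_act_total:.0%}")      # weighted estimate: ~7%
```

The order-of-magnitude drop comes entirely from narrowing "legacy" from all spending on the existing portfolio to systems that are outdated or obsolete.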
Agencies’ program officials said that they are still supporting outdated or obsolete legacy IT systems (as defined by the MGT Act) because the systems are needed for the mission, or because the agencies are in the process of buying new updated systems or modernizing current ones. For example: Army officials awarded a 5-year, roughly $1.2 million contract to install, configure, troubleshoot, and replace Land Mobile Radio equipment at Fort Sill, Oklahoma. An Army official noted that all equipment is older than 12 years and is nearing its end of life. The radio equipment, however, is required to support first responder and emergency service personnel critical communications. An Army official did not indicate any plans to modernize, but noted that a loss of support for this system would significantly affect all of Fort Sill’s land mobile radio communications. The Air Force awarded a $218,000 order to buy repair services for the C-130H aircraft’s radar display unit and electronic flight instrument. An Air Force official noted that legacy hardware that was bought through the order is part of critical systems that are required to safely fly the aircraft. The system, however, is obsolete and the associated hardware is no longer supported by the vendor. The official told us that there is currently a re-engineering effort to modernize the systems that use this hardware. HHS issued a 12-month, nearly $2.5 million order to buy operations and maintenance support for a Food and Drug Administration (FDA) system used to review and approve prescription drug applications. According to an FDA program official, efforts are underway to retire the system by gradually transferring current business processes to a commercial-off-the-shelf solution that can better meet government needs. This official, however, told us that the system currently remains in use because FDA’s Office of New Drugs is still heavily reliant on the system. 
Competition is a cornerstone of the federal acquisition system and a critical tool for achieving the best possible return on investment for taxpayers. In the case of information technology, federal agencies awarded slightly under a third of their contract dollars under some form of noncompetitive contract. Further, our current work was able to quantify that about a tenth of all information technology-related contracts and orders were made under some form of a noncompetitively awarded bridge contract, which provides new context for the issues associated with their use. The challenges themselves, however, remain much the same since we first reported on the issue in 2015. OFPP has yet to issue guidance or promulgate revised regulations to help agencies identify and manage their use of bridge contracts, and our current work finds that the full scope of bridge contracts or the underlying acquisition issues that necessitated their use in the first place may not be readily apparent to agency officials who are approving their use. We continue to believe that our 2015 recommendation would improve the use of bridge contracts, and we encourage OFPP to complete its ongoing efforts in a timely fashion. The frequency of the errors in reporting and their concentration within a specific type of contract action signal the need for more management attention and corrective action. These errors resulted in the potential misreporting of billions of dollars awarded under orders as being noncompetitively awarded when, in fact, they were competed. One agency included in our review—DHS—has taken steps to address the problems that underlie the errors in coding and provided additional training to its staff. DOD and HHS could benefit from additional insight as to the reasons behind the high rates of miscoding to improve the accuracy of this information. We are making a total of two recommendations, one to DOD and one to HHS. 
The Secretary of Defense should direct the Under Secretary of Defense for Acquisition and Sustainment to identify the reasons behind the high rate of miscoding for orders awarded under multiple award contracts and use this information to identify and take action to improve the reliability of the competition data entered into FPDS-NG. (Recommendation 1) The Secretary of Health and Human Services should direct the Associate Deputy Assistant Secretary for Acquisition to identify the reasons behind the high rate of miscoding for orders awarded under multiple award contracts and use this information to identify and take action to improve the reliability of the competition data entered into FPDS-NG. (Recommendation 2) We provided a draft of this report to DOD, DHS, HHS, and OMB for review and comment. DOD and HHS provided written comments and concurred with the recommendation we made to each department. In its written response, reproduced in appendix II, DOD stated it will analyze FPDS-NG data in an effort to identify why the miscoding of orders on multiple award contracts occurs, and use the information to advise the contracting community of actions to improve the reliability of competition data. In its written response, reproduced in appendix III, HHS stated that the Division of Acquisition within HHS’s Office of Grants and Acquisition Policy and Accountability uses a data quality management platform to ensure data accuracy. HHS is currently in the process of performing the annual data validation and verification of the acquisition community’s contract data for fiscal year 2018. Once this process is complete the Division of Acquisition will contact contracting offices that produced records that were flagged as containing errors and provide recommendations that should help improve the fiscal year 2019 accuracy rating. HHS added that it will closely monitor those checks and all others to ensure contract data are accurate. 
However, in its letter, HHS did not specify how its annual data validation and verification process would specifically address the fact that we found a high rate of miscoding of competition data for certain orders. OMB staff informed us that they had no comments on this report. DHS, HHS and the Air Force provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of Homeland Security, the Secretary of Health and Human Services, and the Director of the Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or dinapolit@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our report examines (1) the extent to which agencies used noncompetitive contracts to procure Information Technology (IT) products and services for fiscal years 2013 through 2017; (2) the reasons for using noncompetitive contracts for selected IT procurements; (3) the extent to which IT procurements at selected agencies were bridge contracts; and (4) the extent to which noncompetitive IT procurements at selected agencies were in support of legacy systems. To examine the extent to which agencies used noncompetitive contracts and orders to procure IT products and services, we analyzed government-wide Federal Procurement Data System-Next Generation (FPDS-NG) data on IT obligations from fiscal years 2013 through 2017. To define IT, we used the Office of Management and Budget’s (OMB) Category Management Leadership Council list of IT products and service codes, which identified a total of 79 IT-related codes for IT services and products. 
Data were adjusted for inflation to fiscal year 2017 dollars, using the Fiscal Year Gross Domestic Product Price Index. To assess the reliability of the FPDS-NG data, we electronically tested for missing data, outliers, and inconsistent coding. Based on these steps, we determined that FPDS-NG data were sufficiently reliable for describing general trends in government-wide and IT contract obligations data for fiscal years 2013 through 2017. In addition, as we later describe, we compared data for a generalizable sample of 171 noncompetitive contracts and orders to contract documentation, and we determined that 29 of these had been inaccurately coded in FPDS-NG as noncompetitive. As such, we determined that the data were not reliable for the purposes of reporting the actual amount agencies obligated on noncompetitive contracts and orders for IT products and services. Specifically, we determined that data for IT noncompetitive obligations awarded under multiple award contracts that cited “follow-on action following competitive initial action” or “other statutory authority” as the legal authority for using an exception to fair opportunity for the Departments of Defense (DOD), Homeland Security (DHS), and Health and Human Services (HHS) in fiscal year 2016 were not reliable. Evidence from our review of this sample suggests there was a high rate of miscoding for these orders; thus, we applied these findings to the remaining agencies and fiscal years because we do not have confidence that those data were any more reliable than what we had found. To determine the reasons for using noncompetitive contracts for selected IT procurements, we selected the three agencies with the highest reported obligations on IT noncompetitive contracts for fiscal years 2012 through 2016 (the most recent year of data available at the time we began our review)—DOD, DHS, and HHS. 
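The electronic data-reliability tests described above (checks for missing data, outliers, and inconsistent coding) can be sketched roughly as follows. The field names, the three-standard-deviation outlier rule, and the specific inconsistency check are illustrative assumptions, not actual FPDS-NG data elements or GAO's exact tests.

```python
import statistics

def screen_records(records):
    """Flag records that fail any of three electronic tests:
    missing data, outlier obligation amounts, and inconsistent
    competition coding. Field names are illustrative, not actual
    FPDS-NG data element names."""
    required = ("piid", "agency", "obligated", "extent_competed")
    amounts = [r["obligated"] for r in records
               if isinstance(r.get("obligated"), (int, float))]
    mean = statistics.mean(amounts) if amounts else 0.0
    stdev = statistics.stdev(amounts) if len(amounts) > 1 else 0.0
    flagged = {}
    for r in records:
        reasons = []
        if any(r.get(field) is None for field in required):
            reasons.append("missing data")
        amt = r.get("obligated")
        if stdev and isinstance(amt, (int, float)) and abs(amt - mean) > 3 * stdev:
            reasons.append("outlier amount")
        # a record coded as competed should not also cite a fair-opportunity exception
        if r.get("extent_competed") == "competed" and r.get("fair_opportunity_exception"):
            reasons.append("inconsistent coding")
        if reasons:
            flagged[r.get("piid")] = reasons
    return flagged
```

In practice, each flagged record would still be traced back to contract documentation, as the sampled-record comparison described above was.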
These three agencies collectively accounted for about 70 percent of all noncompetitively awarded contracts for IT during this period. From these agencies, we selected a generalizable stratified random sample of 171 fiscal year 2016 noncompetitive contracts and orders for IT above the simplified acquisition threshold of $150,000. The sample was proportionate to the amount of noncompetitive contracts and orders for IT at each agency. Based on our review of documentation collected for the generalizable sample, we excluded 29 contracts and orders because they were awarded competitively, but had been miscoded as noncompetitive or as having an exception to fair opportunity. As a result, our sample consisted of 142 contracts and orders. See table 4 for a breakdown by agency. To determine the extent to which IT procurements at selected agencies were bridge contracts or in support of legacy systems, agencies provided information as to whether the contracts and orders met GAO’s definition of a bridge contract—which we defined as an extension to an existing contract beyond the period of performance (including base and option years) or a new, short-term contract awarded on a sole-source basis to an incumbent contractor to avoid a lapse in service caused by a delay in awarding a follow-on contract—and whether they met the definitions of legacy IT systems in OMB’s draft IT Modernization Initiative and the Modernizing Government Technology Act (MGT). OMB’s draft IT Modernization Initiative defined legacy systems as spending dedicated to maintaining the existing IT portfolio but excluding provisioned services, such as cloud, while the MGT Act defines them as outdated or obsolete. 
We verified the agencies’ determinations of whether a contract or order was a bridge by reviewing documentation, such as justification and approval and exception to fair opportunity documents, for the contracts and orders in our generalizable sample, and conducting follow-up with agency officials as needed. We verified agencies’ determination of whether or not a contract or order was in support of a legacy system, as defined in OMB’s draft IT Modernization Initiative, by reviewing the agencies’ determination and comparing these determinations to additional documentation, such as the statement of work, and conducting follow-up with program officials about the nature of the requirement where needed. We verified agencies’ determination of whether a contract or order was in support of a legacy system as defined in the MGT Act by reviewing agencies’ rationale for these determinations and following up with agency officials where we identified discrepancies between the determination and rationale. To obtain additional insights into bridge contracts and legacy systems, we selected a nonprobability sample of 26 contracts and orders from our generalizable sample of 142 contracts and orders for in-depth review. We selected these contracts based on factors such as obtaining a mix of bridge contracts and other contracts used in support of legacy IT systems and the location of the contract files. For our in-depth review of contracts and orders, we collected and analyzed contract file documentation for the selected contracts and orders and interviewed contracting and program officials to gain insights into the facts and circumstances surrounding the awards of IT noncompetitive contracts and orders. In cases where we selected a potential bridge contract, we also reviewed the predecessor contract, additional bridge contracts (if any), and the follow-on contract, if one had been awarded at the time of our review. 
For bridge contracts and orders, we asked about the reasons why bridges were needed and the status of follow-on contracts. We verified, using the definition of bridge contracts that we developed for our October 2015 report as criteria, that 13 of 142 contracts and orders in our generalizable sample were bridge contracts based on reviews of justification and approval documents, limited source justifications, or exceptions to fair opportunity, among other documents. We acknowledge, however, that in the absence of a government-wide definition, agencies may have differing views of what constitutes a bridge contract. In addition, we found 2 additional bridge contracts not included in our generalizable sample while conducting our in-depth review. For example, we selected three noncompetitive orders from our generalizable sample for in-depth review that were used to buy accessories and maintenance for the U.S. Special Operations Command (SOCOM) PRC-152 and 117G radios. We found that although the three orders were not bridge contracts, the underlying indefinite delivery/indefinite quantity (IDIQ) contract—which outlines the terms and conditions, including pricing for the orders—had been extended 12 months to continue services until the follow-on IDIQ could be awarded. We also selected an Air Force order for equipment for the Joint Strike Fighter instrumentation pallet for in-depth review. Further analysis revealed that the underlying IDIQ was extended for 5 additional months to continue services until officials could award a follow-on contract for this requirement. Including these 2 additional bridge contracts brings the total number of bridge contracts we identified during this review to 15. For legacy contracts and orders, we asked about the nature of the requirement and plans to move to newer technologies or systems. The selection process for the generalizable sample is described in detail below. 
We selected a generalizable stratified random sample of 171 contracts and orders from a sample frame of 3,671 fiscal year 2016 IT noncompetitive contracts and orders, including orders under multiple award indefinite delivery/indefinite quantity contracts over $150,000, to generate percentage estimates for the population. We excluded contracts and orders with estimated values below the simplified acquisition threshold of $150,000, as these contracts have streamlined acquisition procedures. We stratified the sample frame into nine mutually exclusive strata by agency and type of award, i.e., contract, order, and multiple award order, for each of the three agencies. We computed the minimum sample size needed for a proportion estimate to achieve an overall precision of plus or minus 10 percentage points or better at the 95 percent confidence level. We increased the computed sample size to account for about 10 percent of the population to be out of scope, such as competitive or non-IT contracts or orders. We then proportionally allocated the sample size across the defined strata and increased sample sizes where necessary so that each stratum would contain at least 10 sampled contracts or orders. The stratified sample frame and sizes are described in table 5 below. We selected contracts and orders from the following components: DOD: Air Force, Army, Navy, Defense Information Systems Agency, Defense Logistics Agency, Defense Security Service, Defense Threat Reduction Agency, U.S. Special Operations Command, and Washington Headquarters Services; HHS: Centers for Disease Control, Centers for Medicare and Medicaid Services, Food and Drug Administration, Indian Health Service, National Institutes of Health, and the Office of the Assistant Secretary for Administration; DHS: Federal Emergency Management Agency, Office of Procurement Operations, U.S. Citizenship and Immigration Services, U.S. Coast Guard, U.S. Customs and Border Protection, and the U.S. Secret Service. 
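The sample-size computation and allocation described above can be sketched as follows, assuming the standard proportion formula with a finite population correction (z = 1.96 for 95 percent confidence, p = 0.5 as the most conservative choice); GAO's exact statistical procedure may have differed.

```python
import math

def min_sample_size(population, margin=0.10, z=1.96, p=0.5):
    """Minimum sample size for a proportion estimate at the stated
    precision and confidence level, with a finite population
    correction applied to the initial size n0."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

def allocate_proportional(strata_sizes, total_n, floor=10):
    """Proportionally allocate total_n across strata, then raise any
    stratum below the floor, as the report describes. (Raising small
    strata pushes the final total above total_n.)"""
    population = sum(strata_sizes.values())
    alloc = {name: round(total_n * size / population)
             for name, size in strata_sizes.items()}
    return {name: max(n, floor) for name, n in alloc.items()}
```

Applied to the 3,671-record frame, the formula gives a base size of 94, before inflating for the roughly 10 percent of records expected to be out of scope and raising any stratum below the 10-record minimum; the stratum sizes shown here are hypothetical.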
We excluded 29 contracts and orders as we determined they had been miscoded as noncompetitive or as having an exception to fair opportunity. Based on these exclusions, we estimate the number of noncompetitive contracts and orders in this population was about 3,000 (+/- 6.7 percent). All estimates in this report have a margin of error, at the 95 percent confidence level, of plus or minus 9 percentage points or fewer. We conducted this performance audit from April 2017 to December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Janet McKelvey (Assistant Director), Pete Anderson, James Ashley, Andrew Burton, Aaron Chua, Andrea Evans, Lorraine Ettaro, Julia Kennon, Miranda Riemer, Guisseli Reyes-Turnell, Roxanna Sun, Alyssa Weir, and Kevin Walsh made key contributions to this report.
The federal government spends tens of billions of dollars each year on IT products and services. Competition is a key component to achieving the best return on investment for taxpayers. Federal acquisition regulations allow for noncompetitive contracts in certain circumstances. Some noncompetitive contracts act as “bridge contracts,” which can be a useful tool to avoid a lapse in service but can also increase the risk of the government overpaying. There is currently no government-wide definition of bridge contracts. GAO was asked to review the federal government's use of noncompetitive contracts for IT. This report examines (1) the extent that agencies used noncompetitive contracts for IT, (2) the reasons for using noncompetitive contracts for selected IT procurements, (3) the extent to which IT procurements at selected agencies were bridge contracts, and (4) the extent to which IT procurements were in support of legacy systems. GAO analyzed Federal Procurement Data System-Next Generation (FPDS-NG) data from fiscal years 2013 through 2017 (the most recent and complete data available). GAO developed a generalizable sample of 171 fiscal year 2016 noncompetitive IT contracts and orders awarded by DOD, DHS, and HHS (the agencies with the most spending on IT) to determine the reasons for using noncompetitive contracts and orders, and the extent to which these were bridge contracts or supported legacy systems. From fiscal years 2013 through 2017, federal agencies reported obligating more than $15 billion per year, or about 30 percent, of information technology (IT) contract spending on a noncompetitive basis (see figure). GAO found, however, that Departments of Defense (DOD), Homeland Security (DHS), and Health and Human Services (HHS) contracting officials misreported competition data in FPDS-NG for 22 of the 41 orders GAO reviewed. GAO's findings call into question competition data associated with nearly $3 billion in annual obligations for IT-related orders. 
DHS identified underlying issues resulting in the errors for its orders and took corrective action. DOD and HHS, however, had limited insight into why the errors occurred. Without identifying the issues contributing to the errors, DOD and HHS are unable to take action to ensure that competition data are accurately recorded in the future, and are at risk of using inaccurate information to assess whether they are achieving their competition objectives. GAO found that DOD, DHS, and HHS primarily cited two reasons for awarding a noncompetitive contract or order: (1) only one source could meet the need (for example, the contractor owned proprietary technical or data rights) or (2) the agency awarded the contract to a small business to help meet agency goals. GAO estimates that about 8 percent of 2016 noncompetitive IT contracts and orders at DOD, DHS, and HHS were bridge contracts, awarded in part because of acquisition planning challenges. GAO previously recommended that the Office of Federal Procurement Policy define bridge contracts and provide guidance on their use, but it has not yet done so. GAO believes that addressing this recommendation will help agencies better manage their use of bridge contracts. Additionally, GAO estimates that about 7 percent of noncompetitive IT contracts and orders were used to support outdated or obsolete legacy IT systems. Officials from the agencies GAO reviewed stated these systems are needed for their mission or that they are in the process of modernizing the legacy systems or buying new systems. GAO recommended DOD and HHS identify the reasons why competition data for certain orders in FPDS-NG were misreported and take corrective action. DOD and HHS concurred.
The SBIR program was initiated in 1982 and has four main purposes: (1) use small businesses to meet federal R&D needs, (2) stimulate technological innovation, (3) increase commercialization of innovations derived from federal R&D efforts, and (4) encourage participation in technological innovation by small businesses owned by women and disadvantaged individuals. The STTR program was initiated a decade later, in 1992, and has three main purposes: (1) stimulate technological innovation, (2) foster technology transfer through cooperative R&D between small businesses and research institutions, and (3) increase private-sector commercialization of innovations derived from federal R&D. The SBIR and STTR programs are similar in that participating agencies identify topics for R&D projects and support small businesses, but the STTR program requires the small business to partner with a nonprofit research institution, such as a college or university or a federally funded research and development center. Each participating agency must manage its SBIR and STTR programs in accordance with program laws and regulations and the policy directives issued by SBA. In general, the programs are similar across participating agencies. All of the participating agencies follow the same general process to obtain proposals from and make awards to small businesses for both the SBIR and STTR programs. However, each participating agency has considerable flexibility in designing and managing specific aspects of these programs, such as determining research topics, selecting award recipients, and administering funding agreements. At least once a year, each participating agency issues a solicitation requesting proposals for projects in topic areas determined by the agency. Each participating agency uses its own process to review proposals and determine which proposals should receive awards. The agencies that participate in both SBIR and STTR programs usually use the same process for both programs. 
Also, each participating agency determines whether to provide the funding for awards as grants or contracts. According to the policy directives, SBA maintains a system that records SBIR and STTR award information—using data submitted by the agencies—as well as commercialization information, such as information about patents, sales, and investments reported by small businesses that received these awards. SBA is to use these data to assess small businesses that received awards against the benchmarks and identify any small businesses that did not meet the benchmarks. SBA is to initially assess the small businesses against the benchmarks and then in April of each year notify those that do not meet the benchmarks so that the businesses can review their award data and work with participating agencies to correct the database if necessary. SBA then is to analyze the award data again to identify, on June 1, those small businesses that still do not meet the benchmarks. These small businesses are then ineligible for certain awards from that date through May 31 of the following year. Data challenges have limited SBA’s and the 11 participating agencies’ efforts to fully implement the benchmarks. Since 2014, SBA and the participating agencies have regularly assessed small businesses against the Transition Rate Benchmark, but the assessments have been based on inaccurate or incomplete data. SBA and the participating agencies have assessed small businesses against the Commercialization Benchmark only once, in 2014, because of challenges in collecting and verifying the accuracy of data. In addition, SBA and the participating agencies have provided inconsistent information to small businesses about the consequence of not meeting the benchmarks. Since 2014, SBA and the participating agencies have regularly assessed small businesses against the Transition Rate Benchmark, which, in general, measures the rate at which businesses move projects from phase I to phase II. 
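The Transition Rate Benchmark logic can be sketched as below. The more-than-20 phase I award applicability threshold reflects the assessment described in this report; the 0.25 minimum rate is an assumed illustrative value, as the actual minimum is set in SBA's policy directives.

```python
def transition_rate(phase1_awards, phase2_awards):
    """Rate at which a business's projects move from phase I to phase II."""
    return phase2_awards / phase1_awards if phase1_awards else 0.0

def meets_transition_benchmark(phase1_awards, phase2_awards,
                               applicability_threshold=20, min_rate=0.25):
    """Businesses with no more than applicability_threshold phase I
    awards over the assessment period are not subject to the benchmark.
    min_rate is an assumed illustrative value, not SBA's actual figure."""
    if phase1_awards <= applicability_threshold:
        return True  # not subject to the benchmark
    return transition_rate(phase1_awards, phase2_awards) >= min_rate
```

A business failing this check would, as described above, be notified in April, have a chance to correct its award data, and be reassessed on June 1.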
From 2014 through 2017, SBA determined that 4 to 7 small businesses did not meet the benchmark each year and placed those businesses on a list of businesses ineligible to receive certain additional awards. However, we found instances in which the data used to generate the list were inaccurate or incomplete. For example, we identified an instance in which the data in the awards database changed considerably after SBA’s initial assessment, indicating that the data used for that assessment were inaccurate. SBA’s list of small businesses subject to the benchmark in 2015 showed that a small business received 297 phase I awards during the assessment period. However, data received from SBA officials in August 2017 showed that this small business received only 1 phase I award. Agencies can update their data in the awards database at any time to, for example, submit additional award data or correct previously submitted award data; an SBA official stated that such an update may have caused this change. Because the small business received only 1 award, it would not have been subject to the Transition Rate Benchmark. In this case, the change meant that SBA did not miss identifying a small business that should have been ineligible for an award; however, in other instances, changes to the data may lead SBA to miss identifying a small business that should have been ineligible for awards. In addition, we identified instances in which the publicly available data on awards were incomplete, including data that were missing or otherwise unusable. For example, based on our review of the award data from 2007 through 2016, we identified more than 2,700 small businesses that had multiple records with different spellings of the same business’s name. Furthermore, we identified more than 1,400 instances in which a unique identification number had errors, such as having an incorrect number of digits, all zeros, or hyphens. 
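The identifier error patterns described above (an incorrect number of digits, all zeros, or hyphens) could be screened with a check like the following; the nine-digit expected length is an assumption about the identifier format.

```python
EXPECTED_DIGITS = 9  # assumed length of the unique identification number

def id_problems(uid):
    """Return the error patterns found in the award data: stray
    hyphens, an incorrect number of digits, or all zeros."""
    problems = []
    if "-" in uid:
        problems.append("contains hyphens")
    digits = uid.replace("-", "")
    if not digits.isdigit() or len(digits) != EXPECTED_DIGITS:
        problems.append("incorrect number of digits")
    elif set(digits) == {"0"}:
        problems.append("all zeros")
    return problems
```

Running such a check as records are entered, rather than after the fact, is one way agencies could keep malformed identifiers out of the database in the first place.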
SBA officials told us that the quality of the award information in the database has been an issue, and that accurate information is important because small businesses may avoid being identified as subject to the benchmark if their business names and identification numbers are different across multiple records. For example, if the database contains 18 phase I awards made within the assessment period to a small business with a certain unique identification number but also contains 3 other phase I awards within that period with a different or missing unique identification number, the small business may avoid being identified as subject to the benchmark because the data would suggest it did not meet the threshold of receiving more than 20 phase I awards, even if it did. As a result, it could be difficult to determine which small businesses actually received more than 20 awards and should be subject to the benchmark. Standards for Internal Control in the Federal Government state that management should use quality information to achieve the entity’s objectives, and SBA’s Information Quality Guidelines state that SBA seeks to ensure the quality, utility, and integrity of the information it shares with the public, among other things. SBA’s policy directives for the SBIR and STTR programs state that SBA maintains a system that records SBIR and STTR award information, which is publicly available, and uses this information to calculate small businesses’ performance against the benchmark. SBA officials told us they depend on the accuracy of the data received from the participating agencies to perform SBA’s assessment. These officials also acknowledged that confirming the accuracy of SBA’s annual assessments against the benchmarks has been challenging because agencies can update their data over time. 
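The record-fragmentation problem in the example above can be illustrated with a sketch that groups phase I awards by a normalized business name before applying the more-than-20-award threshold. The record layout and the normalization rule are assumptions.

```python
from collections import defaultdict

AWARD_THRESHOLD = 20  # subject to the benchmark above 20 phase I awards

def normalize(name):
    """Collapse case and whitespace so variant spellings group together."""
    return "".join(name.lower().split())

def subject_to_benchmark(awards):
    """awards: iterable of (business_name, uid, phase) tuples.
    Counts phase I awards per normalized business name so records
    fragmented across spellings and identifiers are still combined."""
    counts = defaultdict(int)
    for name, uid, phase in awards:
        if phase == "I":
            counts[normalize(name)] += 1
    return {name for name, total in counts.items() if total > AWARD_THRESHOLD}
```

In the 18-plus-3 example described above, counting by identifier alone would miss the business, while grouping by normalized name correctly counts 21 awards and flags it as subject to the benchmark.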
SBA officials stated that they have sought to improve the quality of the data after the data are entered into the database, such as by fixing instances in which small businesses’ names were spelled differently across multiple records; however, the officials said that correcting the data already entered in the awards database is an ongoing and time-consuming process. SBA officials told us that there are errors in the database, in part because SBA has not worked with participating agencies to ensure that agencies enter high-quality, accurate data into the database. SBA officials provided us with data-entry guidance that they said is available to agencies, but the errors we found suggest that agencies are not fully using this guidance. As a result, SBA cannot reasonably ensure the quality and reliability of its award data and therefore cannot reasonably ensure that it has correctly assessed small businesses against the Transition Rate Benchmark. The Small Business Act requires agencies to evaluate whether small businesses have met a minimum performance standard for commercializing their technology. SBA and participating agencies do not know the extent to which small businesses are meeting the Commercialization Benchmark because SBA and the agencies have assessed businesses against the benchmark only once, in 2014, when SBA determined that 12 businesses did not meet the benchmark. This is in part because, according to officials from SBA and several agencies, they cannot collect and verify the accuracy of the data needed to implement the benchmark as written. For SBA and participating agencies to assess whether small businesses meet the Commercialization Benchmark, these small businesses must provide data on sales, investments, or patents resulting from the awards. However, agency officials told us about challenges related to obtaining the data they need to implement this benchmark. 
For example, agency officials told us that the needed data are not consistently applicable across agencies or projects. Specifically, these officials said that an agency may purchase the technology developed as a result of the SBIR or STTR award, while another agency may focus on funding technologies that will be sold on the commercial market, leading to different kinds of data on “sales.” Additionally, officials from SBA and several of the participating agencies told us they have been unable to collect and verify the accuracy of the information from small businesses to assess them against the Commercialization Benchmark. In addition, officials from 2 agencies told us that small businesses can easily circumvent the benchmark by submitting incorrect data. The Small Business Act and the policy directives provide agencies flexibility in how they can implement the Commercialization Benchmark. Officials from participating agencies said that they thought the Commercialization Benchmark should be revised, but they provided differing views on how to do it. Officials from SBA and 2 agencies told us that they would consider having individual agencies develop a benchmark or metric tailored to their agency, in part because the definition of successful commercialization could vary across the agencies. However, officials acknowledged that collecting and verifying the accuracy of the data would still be a concern with this approach. Officials from 2 participating agencies told us that collecting and verifying the accuracy of the data is a significant amount of work, and officials from a third agency added that implementing the benchmark independently is impractical because they do not have the capability to track small businesses’ commercialization efforts. 
Officials from 1 agency said they preferred to keep a uniform benchmark across the agencies, in part because having varying benchmarks could lead to a small business being eligible to participate in the programs with one agency but not with another. Although views differed across agencies, working together to find a way to implement the benchmark as designed or revising it so that it can be implemented could allow the agencies to fulfill the requirement in the Small Business Act. Officials from 3 agencies told us they would prefer to consider businesses’ prior commercialization experience as part of their overall evaluation of businesses’ proposals, rather than implement the current Commercialization Benchmark. The SBIR and STTR policy directives currently allow agencies to define the benchmark in terms other than revenue or investment, such as using a commercialization scoring system that rates awardees on their past commercialization success. Defining the benchmark in these terms could help agencies to implement the statutory requirement. Officials from SBA said they see the value of allowing reviewers to use professional judgment in determining the commercialization success of applicants, rather than assessing small businesses against standard criteria. Officials from 1 agency said that such a change could help achieve the goal of the benchmark without the challenges of collecting data from all small businesses participating in the programs. Nine of the 11 participating agencies currently consider prior commercialization experience as part of their evaluation when making award selections (see table 2), which shows that evaluating commercialization experience at individual agencies can be feasible. 
For example, project solicitations from the Department of Agriculture, the Department of Defense, and the National Science Foundation state that these agencies require applicants to provide sales or revenue information for products resulting from SBIR or STTR awards, and the Department of Homeland Security’s solicitation requires applicants to provide a history of previous federal and nonfederal funding and subsequent commercialization of their products. All agencies consider commercialization potential when selecting these awards. The consequence for small businesses not meeting the benchmarks is ineligibility to participate in phase I of the SBIR or STTR program for a year, according to the Small Business Act. SBA officials stated that they and the agencies initially interpreted this to mean that small businesses could not receive awards during the ineligibility period of June 1 through May 31 of the following year, and this is how the consequence is described in the SBIR and STTR policy directives. SBA officials told us that they and the participating agencies sought to change how to implement the consequence of businesses not meeting the benchmarks because of SBA’s and agencies’ difficulties in implementing the benchmarks. Officials from 4 agencies said that they generally evaluate and select awards shortly before SBA releases the list of ineligible companies, leading them to potentially select projects from small businesses that will be on the ineligible list by the time the award period begins. Based on our review of award data from October 2014 to May 2017, we identified 13 phase I awards across 5 small businesses with award start dates during the period that the business was ineligible to receive such awards. According to agency officials, each of these awards was selected before the small business became ineligible to receive the award. 
SBA and the participating agencies agreed to change how the consequence would be implemented, starting in 2017, so that small businesses that do not meet the benchmarks are ineligible to submit proposals, according to SBA officials. As of November 2017, however, the information available about this new way to implement the consequence was inconsistent because some of the agencies had not updated their project solicitations. Specifically, information in the most recent project solicitations available at that time for 2 agencies and one subunit of an agency stated that businesses that do not meet the benchmarks are ineligible to submit certain proposals, consistent with the revised approach for how to implement the consequence. However, the most recent project solicitations available at that time for 7 other agencies and the other subunit of the agency mentioned above instead stated that those businesses that do not meet the benchmarks are ineligible to receive certain awards, consistent with the prior approach for how to implement the consequence. One other agency directed users to SBA’s website in its solicitation. Table 3 shows the information about the consequence of not meeting the benchmarks that each agency included in its most recent project solicitations as of November 2017. As of November 2017, the SBIR and STTR policy directives stated that the consequence for not meeting these benchmarks is ineligibility to receive certain awards. SBA officials told us they are in the process of updating the policy directives to reflect this change in how the consequence is implemented, but these officials said that it is a long process and they could not provide a timeframe for when the update would be complete. As mentioned earlier in this report, SBA’s Information Quality Guidelines state that SBA seeks to ensure the quality, utility, and integrity of the information it shares with the public, among other things. 
Until participating agencies update their project solicitations and SBA updates its policy directives to accurately reflect agreed-upon practices about the consequence for small businesses that do not meet the benchmarks, small businesses may be confused about their eligibility to submit proposals and could invest time developing and submitting proposals when they are not eligible to do so. Under the SBIR and STTR programs, federal agencies have awarded billions of dollars to small businesses to help these businesses develop and commercialize innovative technologies. SBA and the participating agencies have assessed these small businesses against the Transition Rate Benchmark, but those assessments have been based on inaccurate or incomplete data. Without ensuring the reliability of its data, SBA cannot reasonably ensure that it has correctly assessed small businesses against the Transition Rate Benchmark. SBA and the participating agencies developed a Commercialization Benchmark across all the participating agencies but have not fully implemented it, in part because they have been unable to collect information from the small businesses and verify the accuracy of that information. Working together to implement the benchmark as written or revise it so that it can be implemented could allow the agencies to fulfill the requirement in the Small Business Act to evaluate whether small businesses have met a minimum performance standard for commercializing their technology. Lastly, SBA and the participating agencies have provided inconsistent information to small businesses about the consequence of not meeting the benchmarks. Officials from SBA and the participating agencies had agreed to change how the consequence would be implemented, starting in 2017, because of difficulties implementing the benchmarks. However, as of November 2017, seven agencies, and a subunit of one agency, had not updated their project solicitations and SBA had not updated its policy directives. 
Without consistent information on the benchmarks, small businesses may be confused about their eligibility to submit proposals and could invest time developing proposals that they are not eligible to submit. We are making a total of 11 recommendations, including 3 to SBA and 1 each to the Department of Commerce’s National Oceanic and Atmospheric Administration; the Departments of Defense, Education, Energy, Health and Human Services, and Homeland Security; the Environmental Protection Agency; and the National Science Foundation. Specifically:

The Director of the Office of Investment and Innovation within SBA should work with participating agencies to improve the reliability of its SBIR and STTR award data (Recommendation 1).

The Director of the Office of Investment and Innovation within SBA should work with participating agencies to implement the Commercialization Benchmark or, if that is not feasible, revise the benchmark so that it can be implemented (Recommendation 2).

The Director of the Office of Investment and Innovation within SBA should update the SBIR and STTR policy directives to accurately reflect how the consequence of not meeting the benchmarks is to be implemented (Recommendation 3).

The SBIR Program Manager of the Department of Commerce’s National Oceanic and Atmospheric Administration should update the agency’s SBIR project solicitation to accurately reflect how the consequence of not meeting the benchmarks is to be implemented (Recommendation 4).

The SBIR Program Administrator within the Department of Defense should update the agency’s SBIR and STTR project solicitations to accurately reflect how the consequence of not meeting the benchmarks is to be implemented (Recommendation 5).

The SBIR Program Manager within the Department of Education should update the agency’s SBIR project solicitation to accurately reflect how the consequence of not meeting the benchmarks is to be implemented (Recommendation 6).

The SBIR Program Manager within the Department of Energy should update the agency’s combined SBIR and STTR project solicitation to accurately reflect how the consequence of not meeting the benchmarks is to be implemented (Recommendation 7).

The SBIR/STTR Program Coordinator within the Department of Health and Human Services should update the agency’s SBIR and STTR project solicitations to accurately reflect how the consequence of not meeting the benchmarks is to be implemented (Recommendation 8).

The SBIR Program Director within the Department of Homeland Security should update the agency’s SBIR project solicitation to accurately reflect how the consequence of not meeting the benchmarks is to be implemented (Recommendation 9).

The SBIR Program Manager within the Environmental Protection Agency should update the agency’s SBIR project solicitation to accurately reflect how the consequence of not meeting the benchmarks is to be implemented (Recommendation 10).

The SBIR and STTR Program Manager within the National Science Foundation should update the agency’s SBIR and STTR project solicitations to accurately reflect how the consequence of not meeting the benchmarks is to be implemented (Recommendation 11).

We provided a draft of this report to SBA and the 11 participating agencies for review and comment. In written comments, the Department of Commerce’s National Oceanic and Atmospheric Administration; the Departments of Defense, Education, Energy, Health and Human Services, and Homeland Security; the Environmental Protection Agency; and SBA agreed with the respective recommendations directed to their agencies. Agencies’ written comments are reproduced in appendixes I through VIII. An official from one agency—the National Science Foundation—stated in an email that the agency concurred with the recommendation and did not have any further comments.
Two agencies—the Department of Homeland Security and SBA—also provided technical comments, which we incorporated as appropriate. Three agencies—the Departments of Agriculture and Transportation, and the National Aeronautics and Space Administration—as well as the Department of Commerce’s National Institute of Standards and Technology stated via email that they had no technical or written comments. In its comments, SBA stated that it disagreed with a statement in our draft report that SBA had not worked with agencies to enter high-quality and accurate data into the database and provided us documentation of an instruction guide on entering data that SBA officials said was available to agencies. Based on our review of this information, we clarified the text of the report and modified the draft report’s recommendation by removing the suggested example that SBA provide guidance to the agencies to improve SBIR and STTR award data reliability. SBA agreed with the revised recommendation. After we provided a draft of the report to the agencies for comment, the Departments of Education and Homeland Security took action on their respective recommendations. Specifically, in December 2017, the agencies issued new project solicitations that reflected the updated consequence of not meeting the benchmarks. We agree that these agencies fully implemented the recommendations we made to them in this report. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, and Transportation; the Administrators of the Small Business Administration, the Environmental Protection Agency, and the National Aeronautics and Space Administration; the Director of the National Science Foundation; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX.

Appendix VII: Comments from the Department of Homeland Security

Appendix IX: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Hilary Benedict (Assistant Director), John Barrett, Natalie Block, Antoinette Capaccio, Tanya Doriss, Justin Fisher, Ellen Fried, Juan Garay, Cindy Gilbert, Perry Lusk, William Shear, and Elaine Vaurio made key contributions to this report.
|
Through the SBIR and STTR programs, federal agencies have awarded about 162,000 contracts and grants totaling $46 billion to small businesses to help them develop and commercialize new technologies. Eleven federal agencies participate in the SBIR program, and 5 agencies also participate in the STTR program. Each program has three phases, which take projects from initial feasibility studies through commercialization activities. SBA oversees both programs. In response to the 2011 reauthorization of the programs, SBA and the participating agencies developed benchmarks to measure small businesses' progress in developing and commercializing technologies. GAO was asked to review SBA's and the agencies' efforts related to these benchmarks. This report examines the extent to which SBA and the participating agencies have implemented these benchmarks, including assessing businesses against them and establishing the consequence of not meeting them. GAO analyzed award data and interviewed officials from SBA and the 11 participating agencies. Data challenges have limited the Small Business Administration's (SBA) and the 11 participating federal agencies' efforts to assess businesses against two benchmarks—the Transition Rate Benchmark and the Commercialization Benchmark—of the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs. Transition Rate Benchmark. Small businesses that received more than 20 awards for the first phase of the programs in the past 5 fiscal years—excluding the most recent fiscal year—must have received an average of 1 award for the second phase of the programs for every 4 first phase awards. Since 2014, SBA and the agencies participating in the programs have regularly assessed small businesses against this benchmark. From 2014 through 2017, SBA determined that 4 to 7 businesses did not meet the benchmark each year. 
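The Transition Rate Benchmark arithmetic described above can be sketched as a short calculation. This is a hypothetical illustration only; the function name, the parameterized thresholds, and the example award counts are assumptions for demonstration, not SBA's actual implementation.

```python
# Hypothetical sketch of the Transition Rate Benchmark logic; the
# defaults mirror the thresholds in the text (subject only if more
# than 20 phase I awards; must average 1 phase II per 4 phase I).
def meets_transition_benchmark(phase1_awards, phase2_awards,
                               min_phase1=20, required_rate=0.25):
    # Businesses at or below the phase I threshold are not subject
    # to the benchmark.
    if phase1_awards <= min_phase1:
        return True
    # Otherwise, the phase II-to-phase I ratio must reach the
    # required transition rate.
    return phase2_awards / phase1_awards >= required_rate

# 24 phase I awards would require at least 6 phase II awards (24 / 4).
print(meets_transition_benchmark(24, 6))   # True
print(meets_transition_benchmark(24, 5))   # False
```

Expressing the thresholds as parameters reflects that the look-back window and rate are set by the policy directives and could change.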
SBA officials provided GAO guidance on how to enter data into the programs' awards database that they said is available to agencies, but GAO found evidence that suggests agencies are not fully utilizing it. For example, GAO found that the database used to perform the assessments contained inaccurate and incomplete data, such as about 2,700 businesses with multiple records with different spellings of their names and more than 1,400 instances in which a unique identification number had errors, such as an incorrect number of digits, all zeros, or hyphens. Thus, it could be difficult to determine which small businesses should be subject to the benchmark. Commercialization Benchmark. Small businesses that received more than 15 awards for the second phase of the programs in the past 10 fiscal years—excluding the most recent 2 fiscal years—must have received a certain amount of sales, investments, or patents resulting from their efforts. SBA and participating agencies have assessed small businesses against this benchmark only once, in 2014, and identified 12 businesses that did not meet the benchmark. This is, in part, due to challenges in collecting and verifying the accuracy of the data that small businesses report and that are needed to implement the benchmark, according to officials from SBA and several agencies. For example, agency officials told GAO that some needed data, such as for reported sales, are not consistently applicable across agencies or projects. The Small Business Act and policy directives provide flexibility in how the agencies can implement the benchmark. Working together to implement it as designed or revise it so that it can be implemented could allow the agencies to fulfill statutory requirements. SBA and the participating agencies have provided inconsistent information to small businesses about the consequence of not meeting the benchmarks.
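Identifier errors of the kinds GAO found above (incorrect digit counts, all zeros, hyphens) are the sort of problem a simple automated check could flag at data entry. The sketch below is a hypothetical illustration; the 9-digit expected length, the function name, and the sample values are assumptions, not SBA's actual validation rules.

```python
# Illustrative data-quality checks for a unique identification number;
# the expected length is an assumed parameter, not an SBA rule.
def id_errors(uid, expected_digits=9):
    """Return a list of problems found in a unique ID string."""
    problems = []
    if "-" in uid:
        problems.append("contains hyphens")
    digits = uid.replace("-", "")
    if not digits.isdigit():
        problems.append("non-numeric characters")
    elif len(digits) != expected_digits:
        problems.append("incorrect number of digits")
    elif digits == "0" * expected_digits:
        problems.append("all zeros")
    return problems

print(id_errors("123456789"))    # [] -- clean
print(id_errors("000000000"))    # ['all zeros']
print(id_errors("12-3456789"))   # ['contains hyphens']
```

Checks like these, run before a record is accepted, would surface the errors at entry time rather than during a later benchmark assessment.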
SBA and the agencies agreed to change how the consequence of not meeting the benchmarks was to be implemented, starting in 2017, from ineligibility to receive certain awards to ineligibility to submit certain proposals. However, as of November 2017, some agencies had not updated this information in their project solicitations. Furthermore, SBA has not updated this information in its policy directives. Without consistent information, businesses may be confused about their eligibility to submit proposals or receive awards and could invest time developing and submitting proposals when they are not eligible to do so. GAO is making 11 recommendations to SBA and other agencies to take actions to improve implementation of the benchmarks, including improving the reliability of award data; implementing or revising the Commercialization Benchmark; and updating information about the consequence of not meeting the benchmarks. SBA and these agencies agreed with GAO's recommendations.
|
RPA aircrews consist of a pilot and a sensor operator. The Air Force in most cases assigns officers to fly its RPAs. The Air Force relied solely on manned aircraft pilots to fly remotely piloted aircraft until 2010, when it established an RPA pilot career field (designated as Air Force Specialty Code 18X) for officers trained to fly only RPAs. As of December 2013, approximately 42 percent of RPA pilots were temporarily assigned manned aircraft pilots and manned aircraft pilot training graduates. Both of those groups of RPA pilots are temporarily assigned to fly RPAs with the assumption that after their tour they will return to flying their manned aircraft. By comparison, as of September 2018, manned aircraft pilots and manned aircraft pilot training graduates comprised only 17 percent of RPA pilots. Further, permanent RPA pilots increased from 58 percent of all RPA pilots in December 2013 to 83 percent as of September 2018, as shown in figure 1. Additionally, Air Force enlisted personnel operate the RPAs’ sensors, which provide intelligence, surveillance, and reconnaissance capabilities. As crewmembers, RPA sensor operators assist the RPA pilot with all aspects of aircraft use, such as tracking and monitoring airborne, maritime, and ground objects and continuously monitoring the status of the aircraft and weapons systems. The Defense Officer Personnel Management Act, as amended, created a standardized system for managing promotions for the officer corps of each of the military services. Pursuant to the established promotion system, the secretaries of the military departments must establish the maximum number of officers in each competitive category that may be recommended for promotion by competitive promotion boards. Within the Air Force, there are groups of officers with similar education, training, or experience, and these officers compete among themselves for promotion opportunities.
There are several competitive categories, including one that contains the bulk of Air Force officers, called the Line of the Air Force, which includes RPA pilots as well as pilots of manned aircraft and officers in other operations-oriented careers. To determine the best-qualified officers for promotion to positions of increased responsibility and authority, the Air Force appoints senior officers to serve as members of a promotion selection board for each competitive category of officer in the Air Force. Promotion selection boards consist of at least five active-duty officers who are senior in grade to the eligible officers and who reflect the eligible population with respect to minorities and women, as well as career field, aviation skills, and command, in an attempt to provide a balanced perspective. Promotion boards convene at the Air Force Personnel Center headquarters to perform a subjective assessment of each officer’s relative potential to serve in the next higher grade by reviewing the officer’s entire selection folder. This “whole-person concept” involves the assessment of such factors as job performance, professional qualities, leadership, job responsibility, depth and breadth of experience, specific achievements, and academic and professional military education. The Air Force’s developmental education programs expand expertise and knowledge and provide a path that helps ensure that personnel receive the appropriate level of education throughout their careers. Officers have three opportunities to compete for intermediate developmental education programs, which focus on warfighting within the context of operations and leader development, such as at the Air Command and Staff College.
Officers have four opportunities to compete for senior developmental education programs, such as at the Air War College, which are designed to educate senior officers to lead at the strategic level in support of national security and in joint, interagency, intergovernmental, and multinational environments. A subset of developmental education is Professional Military Education, which includes resident and non-resident attendance options open to officers in both the intermediate and senior developmental education programs. Nonresident programs exist to provide individuals who have not completed resident programs an opportunity to complete them via correspondence, seminar, or other approved methods. Prior to 2017, officers who were identified by their promotion board as a developmental education candidate or “selectee” were assured of the opportunity to attend some form of in-residence developmental education program. However, in March 2017, the Air Force announced changes to its nomination process for officer developmental education by separating in-residence school selection status from promotion decisions. Since that time, commanders nominate candidates for in-residence developmental education programs based on individual performance. Officers with aviation expertise, including RPA pilots, may at various points in their careers rotate through both flying and nonflying positions to broaden their career experiences. Operational positions, whether flying or nonflying, include those positions that exist primarily for conducting a military action or carrying out a strategic, tactical, service, training, or administrative military mission. Operational positions include a range of flying positions; for RPA pilots, these involve operating aircraft to gather intelligence or conduct surveillance, reconnaissance, or air strikes against a variety of targets.
Operational positions that are non-flying positions could include assignments as a close-air-support duty officer in an Air Operations Center. Non-operational staff positions are generally non-flying positions and include assignments to headquarters or combatant command positions. Certain non-operational staff positions can be filled only by qualified pilots. Other non-operational positions are more general in nature and are divided among officer communities to help carry out support activities, training functions, and other noncombat-related activities in a military service. These positions could include serving as a recruiter, accident investigator, or advisor to foreign militaries, or filling a policy position at an Air Force major command. The Air Force views non-operational staff positions as a means to develop leaders with the breadth and depth of experience required at the most senior levels inside and outside the Air Force. Various offices within the Air Force have roles and responsibilities for the management of aircrew positions and personnel. The Deputy Chief of Staff for Operations is to establish and oversee policy to organize, train, and equip forces for the Department of the Air Force. This specifically includes the responsibility for all matters pertaining to aircrew management. The Directorate of Operations is responsible for developing and overseeing the implementation of policy and guidance governing aircrew training, readiness, and aircrew requirements. The directorate is the approval authority for aircrew distribution plans, rated allocation oversight, and any other areas that have significant aircrew management implications.
The Operational Training Division produces the official Air Force aircrew personnel requirements projections and, in conjunction with the Military Force Policy Division, develops and publishes the Rated Management Directive, formerly known as the Rated Staff Allocation Plan, as approved by the Chief of Staff of the Air Force, which is designed to meet near-term operational as well as long-term leadership development requirements. The Office of the Deputy Chief of Staff for Manpower, Personnel, and Services has responsibilities that include developing personnel policies, guidance, programs, and other initiatives to meet the Air Force’s strategic objectives, including accessions, assignments, retention, and career development. Within the Directorate of Force Management Policy, the Force Management Division analyzes officer, enlisted, and civilian personnel issues. The division also maintains a variety of computer models and databases to analyze promotion, retention, accession, compensation, and separation policy alternatives. Additionally, it is responsible for providing official aircrew personnel projections for use in various management analyses. The Air Force Personnel Center, one of three field-operating agencies reporting to the Deputy Chief of Staff of the Air Force, Manpower, Personnel and Services, conducts military and civilian personnel operations such as overseeing performance evaluations, promotions, retirements, separations, awards, decorations, and education. The Center also directs the overall management and distribution of both military and civilian personnel. Based on our analysis of Air Force promotion data, the percentage of RPA pilots promoted was generally similar to the promotion rates of pilots in other career fields since 2013.
However, it is important to note that since the population of eligible RPA pilots to be considered for promotion was smaller than the other pilot populations, the promotion of one or two RPA pilots could have a large effect on their promotion rate. For example, the RPA pilot promotion rates were within 10 percentage points of the promotion rates for the other types of pilots in 8 out of 10 promotion boards to major and to lieutenant colonel held during that time frame. RPA pilot promotion rates from captain to major were generally similar to the promotion rates for other pilots from 2014 through 2017, as shown in figure 2. For example, in 2014, 94 percent of eligible RPA pilots (29 of 31), bomber pilots (47 of 50), and fighter pilots (189 of 201), and 91 percent of eligible mobility pilots (355 of 388), were promoted from captain to major. This is an improvement in promotion rates for RPA pilots compared to 2006 through 2012, when RPA pilot promotion rates fell below those for all other pilots in 5 of the 7 promotion boards held. Additionally, the promotion rates for RPA pilots from major to lieutenant colonel relative to other types of pilots in 2013 through 2017 showed a similar improvement compared to 2006 through 2012, as shown in figure 3. For example, in 2017, 75 percent of eligible RPA pilots (15 of 20) were promoted, which is generally similar to the promotion rates for the other pilots—78 percent for bomber pilots (18 of 23), 83 percent for fighter pilots (75 of 90), and 72 percent for mobility pilots (143 of 199). However, in 7 of the 8 promotion boards held from 2006 through 2012, RPA pilot promotion rates from major to lieutenant colonel fell below the promotion rates for all other pilots. The one exception to the promotion rates being generally similar was the rate at which RPA pilots were promoted from lieutenant colonel to colonel.
In this case, the rates for RPA pilots diverged notably from the promotion rates of bomber, fighter, and mobility pilots from 2013 to 2017. For example, in 2016, 1 out of the 5 (20 percent) eligible RPA pilots was promoted to colonel. In contrast, 13 of 21 (62 percent) bomber pilots, 32 of 51 (63 percent) fighter pilots, and 34 of 65 (52 percent) mobility pilots were promoted from lieutenant colonel to colonel. However, the promotion rates of RPA pilots from lieutenant colonel to colonel that we calculated should be considered cautiously, as fewer than 10 RPA pilots were eligible for promotion boards each year throughout this time period. The promotion of one or two officers could have a large effect on the promotion rate due to the small number of eligible RPA pilots. In April 2014, we reported that Air Force officials attributed the low RPA pilot promotion rates from 2006 through 2012 generally to the process that the Air Force used to staff RPA pilot positions at that time. Specifically, they stated that commanders generally transferred less competitive pilots from other pilot career fields to RPA squadrons to address the increased demand. Air Force officials also stated that these officers generally had fewer of the factors in their records that the Air Force Personnel Center identified as positively influencing promotions than did their peers. They said that because the bulk of RPA pilots who competed for promotion during the time of our previous review was transferred using this process, these were the reasons that RPA pilots had been promoted at lower rates than their peers. Air Force officials stated that they believed the trend of increased promotion rates for RPA pilots from 2013 through 2017 mostly reflected the change in the population of eligible pilots who were recruited and specialized as RPA pilots (i.e., the 18X career field).
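The small-cohort caution above can be illustrated with the 2016 figures: with only 5 eligible RPA pilots, a single additional promotion shifts the rate by 20 percentage points, whereas the larger fighter-pilot cohort barely moves. A minimal sketch:

```python
# Promotion rate for a cohort, in percent.
def rate(promoted, eligible):
    return 100 * promoted / eligible

print(rate(1, 5))    # 20.0 -- the 2016 RPA pilot rate (1 of 5)
print(rate(2, 5))    # 40.0 -- one more promotion doubles the rate
print(rate(32, 51))  # ~62.7 -- fighter pilots; one more promotion adds only ~2 points
```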
According to Air Force officials, the creation and establishment of this career field resulted in an increase in the number of skilled and more competitive promotion candidates. Specifically, as of September 2018, permanent RPA pilots outnumbered all other types of pilots serving as RPA pilots combined. RPA pilots were nominated to attend developmental education programs, such as professional military education, at rates similar to the rates for other pilots from academic years 2014 through 2018, according to our analysis of Air Force data. An officer’s attendance at developmental education programs can be a factor taken into consideration when the officer is assessed for promotion. Our analysis showed that, for the academic years 2014 through 2018, nomination rates for RPA pilots to Intermediate and Senior Developmental Education programs combined ranged from a low of 25 percent for academic year 2016 to a high of 31 percent for academic year 2015. In comparison, nomination rates across the same time period for pilots in other career fields ranged from a low of 21 percent for mobility pilots for academic year 2016 to a high of 35 percent for fighter pilots for academic year 2014. Table 1 provides the various nomination rates for each of the different types of pilots that we analyzed. The Air Force promoted enlisted RPA sensor operators at a rate similar to the rates of all enlisted servicemembers, according to our analysis of Air Force promotion data. Specifically, the Air Force promoted an average of 100 RPA sensor operators (or an average of 26 percent) annually for the period from 2013 through 2017. Similarly, the Air Force annually promoted an average of approximately 27,000 enlisted personnel (or an average of 25 percent) for the same period. Our analysis showed that in 2013 through 2017, promotion rates for RPA sensor operators ranged from a low of 18 percent in 2014 to a high of almost 35 percent in 2017.
The promotion rates across the same time period for all other enlisted servicemembers ranged from a low of approximately 19 percent in 2014 to a high of 32 percent in 2017. Table 2 provides the various promotion rates that we analyzed. Air Force enlisted servicemembers in the lowest four levels (grades E1-E4) are selected for promotion based on time in grade and time in service. Selection for promotion to the next two levels, known as the non-commissioned officer levels (grades E5 and E6), is based on the Weighted Airman Promotion System. This system provides weighted points for an individual's performance record and service decorations received, and the results of tests to assess an individual's promotion fitness and job skills and knowledge. Selection for promotion to the senior non-commissioned officer levels (grades E7-E9) is based on the same Weighted Airman Promotion System plus the results from a central board evaluation. Servicemembers eligible for promotion to the non-commissioned ranks are assessed, listed from the highest to lowest scores, and offered promotion if they fall above a specific cutoff score established to meet quotas within each career field and for each rank. While enlisted servicemembers must pass knowledge and skills tests to qualify for promotions, officials explained that the resulting promotion rates essentially reflect requirements and are not indicative of competitiveness across career fields, as officer promotion rates are. Officials stated that enlisted servicemember promotions are based on the service's numeric personnel requirements for each enlisted grade. To consider an enlisted servicemember for promotion from among those who are eligible, a vacancy must first exist at the next higher grade within that servicemember's occupational area, known as their Air Force Specialty Code. 
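The cutoff-score mechanism described above (rank eligible servicemembers from highest to lowest weighted score, then offer promotion down to the cutoff that fills the quota) can be sketched as follows. The names, scores, and quota are hypothetical; the actual Weighted Airman Promotion System components and weights are not modeled.

```python
# Illustrative sketch of cutoff-score selection under a quota. Scores stand in
# for Weighted Airman Promotion System totals.
def select_for_promotion(scores, quota):
    """Return the eligible members at or above the cutoff, highest score first."""
    ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)
    return [name for name, _ in ranked[:quota]]

eligible = {
    "SrA Lee": 330.25,
    "A1C Smith": 312.50,
    "A1C Diaz": 301.75,
    "SrA Jones": 298.00,
}
selected = select_for_promotion(eligible, quota=2)
# selected: ["SrA Lee", "A1C Smith"]
```

Here the quota stands in for the vacancy requirement at the next higher grade: only as many members are promoted as the service requires, regardless of how many pass the qualifying tests.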
For example, in 2017, the Air Force required promotions for 128 RPA sensor operators, and officials promoted that many enlisted servicemembers from the cohort of 370 eligible servicemembers. For each year since 2013, the Air Force has assigned over 75 percent of the non-operational staff positions that require an RPA pilot to the organizations that had requested those positions, according to our analysis of service headquarters data. However, the overall number of non-operational staff positions that require an RPA pilot is about one-tenth of the number of those requiring pilots in other career fields. For example, in fiscal year 2018 the Air Force had 83 non-operational staff positions that required an RPA pilot compared to 330 positions requiring fighter pilots. Air Force officials stated that the number of RPA positions was smaller than for other pilots because the career field is relatively new and still growing. Non-operational staff positions are generally non-flying positions and include assignments to headquarters or combatant command positions. Certain non-operational staff positions can be filled only by qualified pilots. Other non-operational positions are more general in nature and are divided among officer communities in a military service. Officers with aviation expertise, including RPA pilots, may rotate through both flying and non-flying positions at various points in their careers to broaden their career experiences, and Air Force officials stated that staff assignments are essential to the development of officers who will assume greater leadership responsibilities. Headquarters Air Force prepares allocation or "assignment" plans to provide positions requiring aviator expertise to various Air Force commands and other entities. 
Under this process, these organizations identify the number of non-operational staff positions requiring aviator expertise (e.g., pilots) they require, as well as the type of aviator expertise needed to fill those positions (e.g., fighter, bomber, RPA). Headquarters Air Force then determines the extent to which the staff position requirements can be met in accordance with senior leadership priorities designed to equitably manage the shortage of officers with aviation expertise. The results of this process are outlined in the Air Force's annual Rated Management Directive, which reinforces each organization's flexibility in using its entitlements in non-operational staff and other positions. In some instances, the Air Force is able to assign enough positions to an organization to meet nearly all of its non-operational staff position requirements. For the purposes of our analyses, the assignment rate is the number of positions assigned compared to the number of positions the organization required. For example, in fiscal year 2018 the Air Force assigned 99 percent of the non-operational staff positions that require an RPA pilot to the requesting entities. In other instances, the Air Force assignment rate of non-operational staff positions may be much lower because of competing management priorities or shortages of personnel in a career field. As a result, the Air Force's assignment of staff positions can vary across the different career fields. For example, the Air Force fighter pilot career field has had fewer fighter pilots than its authorization number since 2013. Therefore, the Air Force assignment rate for staff positions requiring fighter pilots is significantly lower than the rate for staff positions requiring other types of pilots. 
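The assignment rate described above reduces to positions assigned divided by positions required. A minimal sketch follows; the 83-position figure is the fiscal year 2018 RPA requirement cited in the text, while the assigned count of 82 is a hypothetical value consistent with the reported 99 percent rate, not actual Air Force data.

```python
def assignment_rate(assigned, required):
    """Assignment rate as a whole-number percentage of required positions filled."""
    return round(100 * assigned / required)

# 83 is the fiscal year 2018 RPA requirement cited in the text; 82 assigned is
# a hypothetical count consistent with the reported 99 percent rate.
rpa_rate = assignment_rate(82, 83)
# rpa_rate: 99
```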
For example, in fiscal year 2017, the Air Force assignment rate for staff positions requiring a fighter pilot was 18 percent, which was less than a quarter of the rate for staff positions requiring an RPA pilot, as shown in table 3. The Air Force has not reviewed its oversight process to ensure that it is effectively and efficiently managing its review of non-operational staff positions that require aviator expertise, such as RPA pilots. Air Force officials explained that the oversight process for managing these positions consists of a time-consuming, labor-intensive exchange of emails and spreadsheets with 57 organizations, such as various Air Force major commands like the Air Combat Command, the Air Force Special Operations Command, and the National Guard Bureau. According to these officials, this process consists of the maintenance and exchange of spreadsheets and briefing slides with information about every position found throughout the Air Force and in various other entities that must be reviewed and validated annually. Additionally, this process is maintained by one official within Headquarters Air Force, who must exchange the spreadsheets via email approximately twice a year with officials from each of the organizations that are responsible for annually justifying their continued need for non-operational staff positions requiring aviator expertise. Air Force officials stated that this process does not always produce complete and accurate information in a timely manner, as in some instances the information produced is no longer relevant by the time a complete review of the positions is accomplished. Headquarters Air Force officials familiar with these oversight responsibilities stated that using a different system would more efficiently and effectively support their ability to manipulate, analyze, and share information among the applicable organizations and make informed decisions. 
For example, these officials explained that over the last 10 years, the Air Force drew down the number of squadrons but did not cross-check that reduced number of squadrons against a revised number of staff positions required for support. Therefore, the number of non-operational staff positions was not adjusted, and those numbers are now artificially high in some career fields while other career fields may have fewer non-operational staff positions than needed. These officials added that as the new RPA pilot career field has developed, there has been no timely and widely accessible system of checks and balances to establish an accurate number of non-operational staff positions required to support the career field. Further, they said that using a different system that allows them to have more timely and higher quality information would enhance their ability to manage and make decisions regarding the appropriate mix of expensive pilots and others with aviator expertise between operational line positions and non-operational staff position needs. They said this would better ensure that there is a reasonable range of non-operational staff positions required for each career field, such as the growing RPA pilot career field. An October 2017 memorandum from the Air Force Chief of Staff stated that the number of non-operational staff positions that require aviation expertise must be brought into balance with the Air Force's ability to produce the appropriate number of officers with aviator expertise. The memorandum also stated that organizations were strongly encouraged to change their current requirements to meet available current force levels, including converting chronically unfilled non-operational staff positions requiring aviator expertise to positions specifically designated for RPA pilots. 
As a result of two separate reviews, Air Force officials identified hundreds of these positions that lacked adequate justification or qualifications to support the positions' requirement to be filled by officers with aviator expertise. For example, in August 2018, out of 2,783 non-operational staff positions, the Air Force found that 513 positions lacked adequate justification or mission qualifications to support the need for aviator expertise, and 61 positions were eliminated after further review. Prior to 2010, according to officials, Headquarters Air Force maintained a web-based management oversight system to review and approve the justifications for its non-operational staff positions requiring aviator expertise that allowed for wide access to, and manipulation and timely analyses of, information. Additionally, this former system provided multilevel coordination among Headquarters Air Force and its major commands for reviewing the justifications of all of the positions. According to Headquarters Air Force officials, the use of this management oversight system was discontinued in 2010 due to a decision to no longer fund the contractor maintaining the system. In October 2018, officials from one of the Air Force's major commands confirmed that the current oversight system in use is time-consuming and does not readily support information analysis, and that plans to integrate it with another existing management system had not come to fruition. The Headquarters Air Force official in charge of managing this process told us that he had submitted multiple requests over the last 3 years to integrate the information being managed with spreadsheets and emails into an existing personnel management system to improve the efficiency of the process. However, according to this official, higher priorities and funding issues have precluded the information from being integrated into another existing system. 
In September 2018, another Air Force official told us that the Program Management Office that manages a system into which the information could be integrated was behind schedule in implementing several other system updates. Because of these delays, the official acknowledged that no review has yet been done of what is needed to provide the most efficient management oversight process for the information currently being managed via the spreadsheet process. The official said that before any actions could take place, a review of requirements and priorities would be needed in order to determine what changes could be made. Therefore, he said that there are no decisions or timelines available for reviewing a process that would provide the validation information for non-operational staff positions in a timelier and more widely accessible manner. Air Force instructions state that major commands are required to perform annual aircrew requirements reviews, including review and revalidation of all aircrew positions (except those at the rank of colonel or higher) to ensure aviator expertise is required, and to report the results to the Headquarters Air Force Operations Training Division. Further, the Headquarters Air Force Operations Training Division has the responsibility to ensure a management process is in place to provide efficient and effective oversight of the major commands' annual review and revalidation of the aircrew position requirements process. Additionally, Standards for Internal Control in the Federal Government states that management should identify needed information, obtain the relevant information from reliable sources in a timely manner, and process the information into quality data to make informed decisions and evaluate its performance in achieving key objectives and addressing risks. By reviewing its oversight process, the Air Force may be able to identify a more efficient manner to manage its non-operational staff positions that require aviator expertise. 
A management oversight process that provides timely and widely accessible position justification information may help ensure that the type of aviator expertise needed in these positions is kept up to date. In turn, this could result in a more efficient use of the Air Force's short supply of expensive pilot resources, particularly fighter pilots, and could potentially improve its ability to assign and develop effective leaders, such as those within the growing RPA career field. The Air Force continues to expand the use of RPAs in its varied missions of intelligence gathering, surveillance and reconnaissance, and combat operations. While the overall number of eligible RPA pilots is much smaller compared to other pilots, over the last 5 years RPA pilots have achieved promotions and nominations to attend developmental education programs at rates generally similar to those of pilots in other career fields. Additionally, non-operational staff positions requiring RPA pilots have been assigned to entities at high rates since 2013, but the number of positions available to them is smaller than the number that require fighter, bomber, and mobility pilots because the career field is still growing. Air Force officials have noted problems with the current oversight process that may be hindering its ability to efficiently and effectively manage these non-operational staff positions as required by Air Force policy. For example, the Air Force recently identified a large number of these positions designated as requiring officers with aviator expertise that lacked adequate justification for that requirement. By reviewing the efficiency and effectiveness of its management oversight process so that it provides information in a timelier and more widely accessible manner, the Air Force could better ensure that it makes informed decisions regarding the need for pilots in certain non-operational staff positions and is in compliance with policy. 
It also could help ensure that the Air Force more efficiently uses its short supply of expensive pilot resources. Ultimately, this may positively affect its ability to assign and develop effective leaders, such as those within the growing RPA career field. The Secretary of the Air Force should review its management oversight process that provides information and documents the justifications of the Air Force's non-operational staff positions requiring aviator expertise, including RPA positions, to identify opportunities for increased efficiency and effectiveness and take any necessary actions. (Recommendation 1) In written comments reproduced in appendix II, DOD concurred, with comments, with the recommendation and provided separate technical comments, which we incorporated as appropriate. DOD concurred with the recommendation to review the management oversight process that provides information and documents the justifications of the Air Force's non-operational staff positions requiring aviator expertise, including RPA positions, to identify opportunities for increased efficiency and effectiveness and to take any necessary actions. In its comments, DOD stated that it agrees the current oversight process is time-consuming and could be more efficient. However, it believes this process is effective because the Air Force was able to validate the need for having pilots fill a majority of its non-operational staff positions during a recent congressionally mandated review of these positions. As we reported, this review of all staff positions requiring aviator expertise across the Air Force and other defense entities found that more than 500 of approximately 2,800 positions initially lacked adequate justifications, and 61 positions eventually were eliminated. We believe the Air Force's results from this one-time review are an example of how the current process is not consistently yielding up-to-date validations of positions. 
Further, DOD also stated that while a move to automating the process again has been considered, current funding shortfalls prevent the Air Force from establishing an automated system to increase the process's efficiency. We continue to believe that the Air Force should review its current process in order to identify any viable means to increase its efficiency and effectiveness. Such a review may provide the Air Force with opportunities to more consistently provide the proper type of aviator expertise needed to fill its staff positions, as well as potentially provide more leadership opportunities to those within growing career fields, such as RPA pilots. We provided a draft of this report to DOD for review and comment. We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Defense, and the Secretary of the Air Force. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Since 2014, we have issued three reports assessing the Air Force's management of its remotely piloted aircraft (RPA) workforce. In April 2014, we found that the Air Force had shortages of RPA pilots and faced challenges to recruit, develop, and retain pilots and build their morale. We also found that Air Force RPA pilots experienced potentially challenging working conditions and were promoted at lower rates than officers in other career fields. We made seven recommendations, and the Air Force generally concurred with our recommendations. 
It has fully implemented all but one recommendation, which was to analyze the career field effect of being an RPA pilot to determine whether and how being an RPA pilot is related to promotions. In May 2015, we found that the Air Force faced challenges ensuring that its RPA pilots completed their required training and that the Office of the Deputy Assistant Secretary of Defense for Readiness had not issued a training strategy that addresses if and how the services should coordinate with one another to share information on training pilots who operate unmanned aerial systems. We made one recommendation related to these findings, with which DOD concurred. However, in September 2018, an official from the Office of the Secretary of Defense for Readiness stated that there are compelling reasons why a training strategy is no longer necessary and that no action is planned to implement the recommendation. In January 2017, we found, among other things, that the Air Force had not fully tailored a strategy to address the UAS pilot shortage or evaluated its workforce mix of military, federal civilian, and private-sector contractor personnel to determine the extent to which these personnel sources could be used to fly UAS. We made five recommendations related to these findings, with which the Air Force and DOD generally concurred. As of July 2018, the Air Force has taken some action to address the first three recommendations, and officials from the Office of the Under Secretary of Defense for Personnel and Readiness have fully implemented the other two recommendations. In table 4, we present the recommendations that we made to the Air Force and the Under Secretary of Defense for Personnel and Readiness and summarize the actions taken to address those recommendations as of September 2018. 
In addition to the contact named above, Lori Atkinson (Assistant Director), Rebecca Beale, Amie Lesser, Felicia Lopez, Grant Mallie, Ricardo Marquez, Richard Powelson, Amber Sinclair, and John Van Schaik made key contributions to this report.
An increasing number of Air Force missions use unmanned aerial systems, or RPAs, to provide specialized capabilities in support of combat operations. The demand for crew members for these systems has grown rapidly. For example, RPA pilot requirements increased by 76 percent since fiscal year 2013 while those for fighter pilots stayed about the same. These requirements include pilots who serve in non-operational staff positions, such as trainers. Senate Report 115-125 included a provision that GAO review career advancement for Air Force RPA pilots compared to other pilots. This report, among other things, describes (1) the rates at which RPA and other pilots were promoted; (2) the rates at which non-operational staff positions requiring RPA pilot expertise were assigned to various organizations; and (3) the extent to which the Air Force has reviewed its oversight process to effectively manage non-operational staff positions requiring aviator expertise. Among other things, GAO analyzed Air Force pilot promotion data from 2006-2017. GAO also analyzed non-operational staff position data from fiscal years 2013-2018 and interviewed officials regarding the management and oversight of these positions. The promotion rates for Air Force Remotely Piloted Aircraft (RPA) pilots have been generally similar to those of other pilots since 2013 and have increased over time. See the figure below for promotion rates from major to lieutenant colonel. Air Force officials stated that RPA pilot promotion rates increased because the creation of a dedicated career field resulted in more competitive candidates. Since 2013, over 75 percent of non-operational staff positions requiring RPA pilot expertise were assigned to various organizations within the Air Force, according to GAO's analysis. These positions carry out support and other noncombat-related activities as well as training functions and are essential to the development of officers. 
However, the overall number of these positions that require an RPA pilot is about one-tenth of the combined number of those requiring other pilots. For example, in fiscal year 2018, 83 non-operational staff positions required RPA pilots compared to 330 requiring fighter pilots. Air Force officials stated that the small number of RPA positions is because the career field is new. The Air Force has not reviewed its oversight process to ensure that it is efficiently managing its non-operational staff positions that require aviator expertise. Air Force officials explained that over the last 10 years, the Air Force reduced the number of squadrons but had not reviewed the number of non-operational staff positions. Similarly, the Air Force has had no widely accessible oversight process to monitor whether it had established an accurate number of non-operational staff positions required to support the new RPA career field. In August 2018, the Air Force identified 513 non-operational staff positions (out of 2,783) as needing further review because they lacked adequate justification of the need for aviator expertise. Officials described the process for managing these positions as time and labor intensive, which can cause delays in obtaining reliable information needed to inform decision-making. By reviewing this process, the Air Force may be able to identify opportunities to create efficiencies and more effectively manage its non-operational staff positions requiring aviator expertise. GAO recommends that the Air Force review its oversight process for managing these non-operational staff positions, including those for RPA pilots, to identify opportunities to increase efficiencies. DOD concurred with this recommendation.
Enacted in 1970, the National Environmental Policy Act (NEPA), along with subsequent implementing regulations from the Council on Environmental Quality (CEQ), sets out an environmental review process that has two principal purposes: (1) to ensure that an agency carefully considers information concerning the potential environmental effects of proposed projects; and (2) to ensure that this information is made available to the public. DOT's Federal Highway Administration (FHWA) and Federal Transit Administration are generally the federal agencies responsible for NEPA compliance for federally funded highway and transit projects. Project sponsors—typically state DOTs and local transit agencies—may receive DOT funds, oversee the construction of highway and transit projects, develop the environmental review documents that are approved by federal agencies, and collaborate with federal and state stakeholders. In addition, the Clean Water Act and the Endangered Species Act are two key substantive federal environmental protection laws that may be triggered by a proposed transportation project and that may require the federal resource agencies to issue permit decisions or perform consultations before a project can proceed. Section 404 of the Clean Water Act generally prohibits the discharge of dredged or fill material, such as clay, soil, or construction debris, into the waters of the United States, except as authorized through permits issued by the U.S. Army Corps of Engineers (Corps). Before the Corps can issue a section 404 permit, it must determine that the discharge of material is in compliance with guidelines established by the Environmental Protection Agency. The Corps issues two types of permits: Individual permits: issued as a standard permit for individual projects, following a case-by-case evaluation of a specific project involving the proposed discharge of dredged or fill material and/or work or structures in navigable water. 
General permits: issued for categories of projects the Corps has identified as being similar in nature and causing minimal individual and cumulative adverse environmental impacts. General permits may be issued on a state, regional, or nationwide basis. In fiscal year 2016, the Corps completed approximately 250 individual permits and 10,750 general permits for transportation projects, based on agency data. The Corps is not required to complete its permit reviews within a specified time frame; however, it has performance metrics, including target time frames for issuing permit decisions based on permit type. The purpose of the Endangered Species Act is to conserve threatened and endangered species and the ecosystems upon which they depend. Section 7 of the Act directs federal agencies to consult with the U.S. Fish and Wildlife Service (FWS) or the National Marine Fisheries Service (NMFS) when an action they authorize, fund, or carry out, such as a highway or transit project, could affect listed species or their critical habitat. Section 7 also applies if non-federal entities receive federal funding to carry out actions that may affect listed species. Before authorizing, funding, or carrying out an action, such as a highway or transit project, lead federal agencies must determine whether the action may affect a listed species or its critical habitat. If a lead federal agency determines a proposed action may affect a listed species or its critical habitat, formal consultation is required unless the agency finds, with FWS' or NMFS' written concurrence, that the proposed action is not likely to adversely affect the species. Formal consultation is initiated when FWS or NMFS receives a complete application from the lead agency, which may include a biological assessment and other relevant documentation describing the proposed action and its likely effects. 
The formal consultation usually ends with the issuing of a biological opinion by FWS or NMFS, which generally must be completed within time frames specified in the Endangered Species Act and in its implementing regulations. Specifically, FWS and NMFS have 135 days to complete a formal consultation and provide a biological opinion to the lead federal agency and project sponsor in order for the project to proceed. The consultation period can be extended by mutual agreement of the lead federal agency and FWS or NMFS. In fiscal year 2016, FWS completed 179 formal consultations and NMFS completed 29 formal consultations for federally-funded highway and transit projects, based on agency data. The three most recent transportation reauthorization acts include provisions that are intended to streamline various aspects of the environmental review process for highway and transit projects. We identified 18 statutory provisions from these acts that could potentially affect time frames for the environmental permitting and consulting processes for highway and transit projects. Based on our review, we grouped the provisions into two general categories: Administrative and Coordination Changes and NEPA Assignment. See appendix II for a complete list and descriptions of the 18 provisions that we identified. The 16 Administrative and Coordination Changes provisions are process oriented. These provisions, for example: (1) establish time frames for the environmental review process, (2) encourage the use of planning documents and programmatic agreements, and (3) seek to avoid duplication in the preparation of environmental review documents. The two NEPA Assignment provisions authorize DOT to assign its NEPA responsibility to states. Resource agency and state DOT officials told us they believe that some actions called for by the 18 provisions we identified, such as programmatic agreements, have helped streamline the consulting and permitting processes. 
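The 135-day consultation window described above can be expressed as simple date arithmetic. This sketch computes the deadline for a biological opinion, including any mutually agreed extension; the function name and the example dates are hypothetical, chosen only for illustration.

```python
from datetime import date, timedelta

CONSULTATION_DAYS = 135  # window for completing a formal consultation

def consultation_deadline(received, extension_days=0):
    """Deadline for issuing the biological opinion, counted from the date the
    complete application was received, plus any mutually agreed extension."""
    return received + timedelta(days=CONSULTATION_DAYS + extension_days)

# Hypothetical example: complete application received January 4, 2016.
deadline = consultation_deadline(date(2016, 1, 4))      # May 18, 2016
extended = consultation_deadline(date(2016, 1, 4), 30)  # June 17, 2016
```

Comparing an agency's actual completion date against this deadline is the kind of check that, as discussed below, requires complete and accurate date fields in the agencies' tracking data.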
However, a lack of reliable agency data regarding permitting and consulting time frames hinders a quantitative analysis of the provisions’ impact. Further, limitations in FWS and NMFS data, such as missing or incorrect data and inconsistent data entry, could impair the agencies’ ability to determine whether the agencies are meeting statutory and regulatory requirements, such as the extent to which the agencies complete formal consultations and provide biological opinions within 135 days. FWS and NMFS have limited controls that would help ensure the completeness and accuracy of their data. Resource agency and state DOT officials we interviewed told us they believe that some actions called for by the provisions we identified have helped streamline the consulting and permitting processes. While these officials generally did not quantify or estimate the number of days review times may have been reduced, they did generally explain how the review processes were accelerated, depending upon the action being taken, for example: Programmatic agreements: Officials from 18 of the 23 state DOTs and federal resource agency field offices we spoke with told us that using programmatic agreements has generally helped reduce review times. Programmatic agreements can standardize the consulting and permitting processes for projects that are relatively routine in nature (e.g., repaving an existing highway). For example, one state DOT and an FWS field office have an agreement that establishes a consistent consultation process to address projects, such as pavement marking, that have either a minimal or no effect on certain federally protected species and their critical habitat. Programmatic agreements may contain review time targets that are shorter than those for reviews not subject to the agreements. 
For example, officials from one FWS field office said that they typically met the 60-day time limit that was established in one such agreement, compared to the standard 135-day period for completing formal consultations and issuing biological opinions. DOT has assisted in establishing some of the programmatic agreements affecting consultation and permit review processes. For example, according to DOT, its Every Day Counts initiative has helped create scores of programmatic agreements through efforts such as identifying best practices, performing outreach, developing new approaches, and improving existing ones. In our 2018 report on highway and transit project delivery, 39 of 52 state DOTs in our survey reported that programmatic agreements had sped up project delivery within their states. Federal liaison positions: Officials from 21 of the 23 selected state DOT and federal resource agency field offices told us that liaison positions at resource agency offices, which are positions held by federal employees who work on consultation and permit reviews for state DOTs, have streamlined the consultation and permit review processes. According to almost all of the selected officials, these positions provide benefits, such as dedicating staff to process the state DOTs’ applications for permits and consultations, allowing state DOTs to prioritize projects, and enabling enhanced coordination between agencies to avoid conflicts and delays in the review process. For example, officials from one state DOT said that having a dedicated liaison at an FWS field office gave the state DOT a responsive point of contact, helped address workload concerns at the FWS field office, and enabled FWS office staff to attend interagency coordination meetings. According to DOT, as of November 2017, states had 43 full-time equivalent positions at FWS and 11 at NMFS. Corps officials stated that states had more than 40 full-time equivalent positions at the Corps in fiscal year 2017.
In our 2018 report on highway and transit project delivery, 32 of 52 state DOTs in our survey reported that they had used this provision. We found that 23 of those state DOTs reported that it had sped up project delivery within their states. Early coordination: Officials from 18 of the 23 state DOT and federal resource agency field offices we spoke with told us that early coordination in consultation and permit review processes has generally reduced review times. According to most selected state DOT and resource agency officials, this early coordination can provide benefits, such as improving the quality of applications, avoiding later delays by identifying concerns early in the process, and allowing permitting to be considered in the design phase of projects. For example, officials at one of the Corps’ district offices told us that they routinely hold pre-application meetings with state DOT and resource agency contacts to define what the Corps needs to process the application quickly and to avoid later problems. Similarly, in our 2018 report on highway and transit project delivery, 43 of 52 state DOTs in our survey reported that they had used this provision, and 27 of those reported that the provision had sped up project delivery within their states. Although selected federal resource agency and state DOT officials were able to identify actions called for by the provisions that they believe have helped streamline the consulting and permitting processes, officials from all three resource agencies said that their agencies had not analyzed the impact of the streamlining provisions on permit review or consultation time frames and did not have plans to do so in the future. For two reasons, we were unable to quantify the impact the 18 streamlining provisions had on the three federal resource agencies’ consultation and permit review time frames.
First, factors other than the streamlining provisions may have also affected review times, limiting our ability to discern the extent to which the provisions had an impact. Second, the resource agencies could not provide enough reliable data for us to analyze changes in consultation and permit review durations over time. With respect to the first reason, factors other than the streamlining provisions can influence the durations of permit reviews and consultations, a situation that would make it difficult to establish whether the streamlining provisions in the reauthorization acts had a direct impact. In particular, officials from resource agencies and state DOTs we interviewed informed us that some offices took actions included in some of the various streamlining provisions before the three transportation reauthorizations were enacted. For example, officials at one FWS field office said that the office completed a programmatic agreement in 2004. Officials at one state DOT said that they had funded positions at resource agency offices for two decades. Corps officials said that the Corps implemented early coordination before the provision requiring this action was enacted. DOT officials also said that the provisions generally codified and expanded on existing actions. Further, factors such as staffing shortages at state DOTs and resource agency offices may also affect the length of consultations and permit reviews. Therefore, even if the durations of permit reviews and consultations could be evaluated over time with enough reliable data, it could be difficult to connect changes in the durations to the streamlining provisions with any confidence. Second, none of the three resource agencies could provide enough reliable data to evaluate trends in the duration of consultations and permit reviews after the 15 provisions were introduced in SAFETEA-LU and MAP-21, and the FAST Act was enacted too recently to evaluate any trends following the 3 provisions it introduced. 
To evaluate trends in permit review and consultation durations before and after the provisions were enacted, we would need sufficient data before and after their enactment. The SAFETEA-LU, MAP-21, and FAST Act provisions were enacted in August 2005, July 2012, and December 2015, respectively. Available Corps data could not be used to determine trends in permit review durations before and after the SAFETEA-LU and MAP-21 provisions were enacted. Specifically, Corps officials told us that their data prior to October 2010 should not be used to evaluate trends due to changes in the Corps’ data tracking system and data entry practices. The Corps did not provide more than one full fiscal year of data prior to 2012, and we would need more than one year of data to establish an adequate baseline in order to control for variations that may occur from year to year. Further, FWS and NMFS could not provide reliable data to evaluate trends in the durations of consultations before or after enactment of SAFETEA-LU and MAP-21. FWS and NMFS officials informed us of limitations in their agencies’ consultation data that rendered the data incomplete prior to fiscal year 2009 and calendar year 2012, respectively, a circumstance that would prevent us from evaluating trends following SAFETEA-LU. Specifically, FWS officials told us that use of its data tracking system was not mandatory in all regions for consultation activities prior to fiscal year 2009. NMFS officials told us that data from its tracking system are incomplete prior to 2012, because some prior records did not transfer properly during a migration to a newer version of the database. Further, the weaknesses in more recent FWS and NMFS data that we identify below would also limit an analysis of changes in consultation durations following MAP-21.
Finally, since the three agencies provided data through fiscal year 2016, we had less than one fiscal year of data following the December 2015 enactment of the FAST Act, an amount that was insufficient to evaluate trends in consultation and permit review durations following the Act’s enactment. We identified limitations, such as incorrect or missing data and inconsistent data entry practices, in more recent FWS and NMFS data, and such limitations would hinder future analyses of trends in the duration of consultations. We did not identify similar limitations in Corps data. These limitations could also hinder analyses of the extent to which the agencies meet statutory and regulatory requirements, such as the extent to which the agencies completed formal consultations and issued biological opinions within 135 days. Standards for internal control in the federal government state that agency management should use quality information to achieve the agency’s objectives and should design appropriate controls for information systems that ensure that all transactions are completely and accurately recorded. Information systems should include controls to achieve validity, completeness, and accuracy of data during processing, including input, processing, and output controls. However, we identified errors in consultation data provided by FWS and NMFS officials. For example, FWS’s data included 1,568 unique transportation-related formal consultations that started and concluded within fiscal years 2009 through 2016. Of those records, 27 had formal consultation initiation dates that followed the conclusion date, resulting in a negative duration; 113 lacked an initiation date, precluding a determination of the duration; and 19 had formal consultation initiation dates that preceded the dates on which FWS could begin work.
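Checks of the kind that surfaced these errors amount to straightforward record validation. Below is a minimal sketch of the first two checks (negative durations and missing initiation dates); the field names and records are hypothetical illustrations, not FWS's actual schema:

```python
from datetime import date

def classify(record):
    """Flag the error types described in the report: a missing
    initiation date, or an initiation date that follows the conclusion
    date (which would yield a negative duration)."""
    if record["initiated"] is None:
        return "missing_initiation_date"
    if record["initiated"] > record["concluded"]:
        return "negative_duration"
    return "ok"

# Hypothetical consultation records for illustration.
records = [
    {"id": "A", "initiated": date(2015, 3, 2), "concluded": date(2015, 6, 9)},
    {"id": "B", "initiated": date(2015, 9, 1), "concluded": date(2015, 5, 1)},
    {"id": "C", "initiated": None,             "concluded": date(2015, 7, 1)},
]

flags = {r["id"]: classify(r) for r in records}
print(flags)
# {'A': 'ok', 'B': 'negative_duration', 'C': 'missing_initiation_date'}
```

The third error type the report describes, initiation dates that precede the dates on which FWS could begin work, would require an additional lower-bound date per record and is omitted here for brevity.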
NMFS officials said that records cannot be removed from the database once saved—including duplicate, incomplete, withdrawn, or otherwise erroneous records—and that the database does not always retain corrections after they are made. As a result, data exported from the database are manually reviewed for errors, according to NMFS officials. However, data provided to us after this manual review process still contained errors. Further, FWS and NMFS officials described limited controls to ensure the completeness and accuracy of their data. FWS officials said that they do not currently conduct systematic reviews to examine the accuracy of the data. The officials also said that they do not have procedures for follow-up when errors are found, although regional or headquarters staff may conduct outreach to an affected office if errors are found. FWS officials also acknowledged that the database lacks sufficient electronic safeguards on all fields to prevent errors. Similarly, NMFS officials said that NMFS has not tracked the accuracy of its data and that many fields in NMFS’s database do not have safeguards to limit data entry errors. FWS and NMFS also lack procedures to ensure that they consistently track all data associated with consultation time frames. For example, FWS and NMFS officials could not provide data on whether formal consultations and the issuance of biological opinions that exceeded 135 days obtained extensions, data that officials would need to track the extent to which their agencies comply with the requirement to complete consultations and issue biological opinions within 135 days absent an extension. The officials said that the agencies do not require their staff to enter extension data, and that some staff enter extension dates but others do not. In addition, although hundreds of projects may be reviewed under a single programmatic agreement, FWS and NMFS do not record all projects reviewed under programmatic agreements.
For example, NMFS officials told us that the agency’s system is not designed for staff to enter individual actions reviewed under programmatic agreements. This process prevents comparisons of review time frames for individual projects under programmatic agreements with projects not reviewed under those agreements. FWS’s database also does not require some critical information for determining consultation time frames, such as the initiation dates for formal consultations. Further, FWS headquarters officials acknowledged that differing field office procedures had contributed to varying record-keeping methods, and officials at five of the seven FWS field offices we interviewed told us that FWS’s database is not used consistently among field offices. The quality of FWS’s and NMFS’s consultation data may limit the ability of the agencies to determine whether they are completing consultations within required time frames, as described above, and may also impact other internal and external uses of the data. For example, the quality of the data may limit the agencies’ evaluation and management of their consultation processes. FWS officials said that FWS uses its data internally in calculating annual performance measures and to answer questions from senior leadership, among other purposes. NMFS officials said that NMFS uses its data internally to examine the agency’s Section 7 workload, help set agency funding priorities, and track projects through the consultation process. FWS and NMFS will also have to ensure that their data systems can provide reliable data to comply with an executive order requiring federal agencies to track major infrastructure projects, including the time required to complete the processing of environmental reviews. 
The August 2017 executive order directed the Office of Management and Budget, in coordination with the Federal Permitting Improvement Steering Council, to issue guidance for establishing a system to track agencies’ performance in conducting environmental reviews for certain major infrastructure projects. To meet this directive, this system is to include assessments of the time and costs for each agency to complete environmental reviews and authorizations for those projects, among other things. According to a multi-agency plan, system implementation is planned to begin in the fourth quarter of fiscal year 2018, and publishing of performance indicator data is planned to begin in the first quarter of fiscal year 2019. In addition, FWS has provided consultation data to outside researchers who have publicly reported them in a study and a web portal. NMFS makes some data for completed consultations publicly available through the internet. NMFS and FWS officials we interviewed said that the agencies are developing new versions of their databases, and FWS officials said that they will develop new standard-operating procedures and guidance for data entry. Specifically, FWS officials said that they have discussed the development of a new version of their database that would better track consultations chronologically and ensure greater data accuracy and consistency, but that effort is still in the planning stage. Those officials also said that they have formed a team to explore the development of new standard-operating procedures, training, and guidance for consistent data entry and that they are considering how to include data on whether consultations received extensions in the new system. NMFS officials said that the agency is modernizing its database, including improving data entry, error prevention, maintenance, and tracking of actions under programmatic agreements. 
However, FWS and NMFS officials could not provide specific time frames for implementation or documentation of these efforts. Therefore, it is not clear whether these efforts will include internal controls that address all of the types of issues we identified. Officials at 19 of the 23 federal resource agency field offices and state DOTs we spoke with generally mentioned two additional actions, beyond the 18 provisions we identified, for streamlining the consultation and permitting process: field office assistance to lead federal agencies and project sponsors, including state DOTs, to improve applications for permits and consultations; and electronic systems for environmental screening and document submission. First, officials from some of the 16 federal resource agency field offices we spoke with stated that they provide assistance to lead federal agencies and project sponsors to clarify the information required in permit and consultation applications before they are submitted to the resource agency. Officials from 8 of those 16 offices stated that they provided that assistance in order to improve the quality and completeness of information included in the applications. Resource agency officials stated that the permit or consultation process is delayed when the lead federal agency or project sponsor does not initially provide the quantity or quality of information necessary for resource agencies’ field office staff to complete permits and consultations. These staff must then request additional information from the lead federal agency or project sponsor, extending the permit or consultation reviews. Therefore, officials at 16 of the 23 federal resource agency field offices and state DOTs we spoke with said that field office staff provided training to state DOT staff to specify the information field offices required for initial permit or consultation applications. 
In addition, officials at 6 of the 23 resource agency field offices and state DOTs we spoke with created or were in the process of creating documents, such as application templates or checklists, that specify information required initially by field offices for applications. For example, according to officials at one FWS field office, a staff member created a standardized form letter for consultation applications that includes information for the state DOT to submit with its applications. Second, officials at federal resource agency field offices and state DOTs also identified electronic systems for environmental screening and document submission as helpful streamlining actions. Some state agencies created electronic systems for permitting and consultation applications, according to officials at 6 of the 23 resource agency field offices and state DOTs we spoke with. Some of those state agencies created systems for submitting application documentation, which can include multiple reports and studies related to an endangered species or its critical habitat. In addition, some of those state agencies created electronic tools that screen potential transportation project areas for environmental impacts. For example, in Pennsylvania, state agencies created two electronic systems. The first system allows application materials to be shared with multiple state and federal agencies while the second allows applicants to screen project areas for potential impacts on endangered species. The Pennsylvania Natural Heritage Program, a partnership between four state agencies, created a system that allows lead federal agencies or project sponsors to determine what potential environmental impacts, if any, exist in a proposed project’s geographic area (fig. 1). According to field office officials who use this resource, it saves time and improves agency coordination on transportation projects. 
Officials at two additional offices stated that their state agencies were in the process of establishing such electronic systems. In addition, FWS has piloted additional capabilities for its existing electronic system that screens for species information. According to FWS officials, the current pilot is restricted to specific species included in existing programmatic agreements, but this updated system would guide applicants through the consultation application and allow electronic document submission. The federal resource agencies continue to seek out additional opportunities for their field offices to streamline the permitting and consultation processes, according to officials at 11 of the 16 field offices. Officials at four of those offices stated that they discuss additional streamlining opportunities at regular transportation-related meetings with other federal and state agency offices. However, beyond the streamlining actions and provisions cited above, officials at resource agency field offices and state DOTs did not identify additional opportunities used by multiple field offices to streamline permits and consultations. DOT has a role in streamlining the overall NEPA process for transportation projects. Officials from DOT and its modal administrations, in coordination with federal resource agencies, participate in or support several efforts, including the following, to streamline the NEPA process: Coordination meetings: DOT officials participate in some early or regular coordination efforts, according to officials at some federal resource agency field offices and state DOTs we spoke with. For instance, according to officials at one Corps district office, DOT officials participate in some monthly meetings between federal and state agencies to discuss both specific transportation projects and recurring issues that may present streamlining opportunities. 
Transportation liaisons: As mentioned above, recipients of DOT funds may partially fund the transportation liaison positions at federal resource agency field offices. Officials at some resource agency field offices and state DOTs we spoke with stated that liaisons implemented streamlining actions at those offices. For example, officials at one FWS field office stated that the office’s transportation liaisons are responsible for creating and maintaining programmatic agreements with the state DOT. In addition, DOT currently has interagency agreements to provide national transportation liaisons at resource agencies—including the Corps, FWS, and NMFS—who lead nationwide efforts, such as meetings among field offices where officials can share streamlining actions. Streamlining resource database: DOT maintains an online database of resources created by DOT and transportation liaisons for streamlining the NEPA process. The database, which is part of the Transportation Liaison Community of Practice online portal, includes programmatic agreements, regional streamlining efforts, and liaison-funding agreements, among other resources. The purpose of this database is to provide examples of streamlining actions for transportation liaisons and state DOT officials to use in implementing these actions with state and federal agency offices to streamline NEPA processes. DOT also participates in multi-agency efforts to identify recommendations for streamlining the NEPA process. Those efforts produced two multi-agency reports that have identified best practices for improving streamlining of the NEPA process: Red Book: In 2015, DOT coordinated with multiple federal agencies, including the resource agencies, to update the Red Book, a resource to help both federal and state agencies conduct concurrent environmental review processes and to improve coordination in the NEPA process for major transportation and other infrastructure projects.
For instance, the Red Book recommended electronic information systems, including systems that share geographic information with the agencies involved, as a way to streamline the NEPA process. Annual interagency report: DOT and multiple federal agencies, including the resource agencies, contribute to the Federal Permitting Improvement Steering Council’s annual report on recommended actions for federal agencies. In the reports for fiscal years 2017 and 2018, those recommended steps included actions taken by some resource agency field offices. For example, recommended steps in the 2017 report included the creation of electronic application submission systems and training to improve permit and consultation applications. DOT officials stated that they continue to seek additional streamlining opportunities with federal and state entities, including federal resource agencies and state DOTs, through outreach to those agencies. For example, the officials told us that they had reached out to the resource agencies and provided training to help them identify what basic application information is needed for certain types of projects that are unlikely to be fully designed at that stage. DOT officials also suggested that expanding the current streamlining actions that resource agencies have taken, such as utilizing the transportation liaison positions, would help streamline the process. CEQ oversees NEPA implementation, reviews and approves federal agency NEPA procedures, and issues regulations and guidance documents that govern and guide federal agencies’ interpretation and implementation of NEPA. In addition, CEQ has focused some of its efforts on furthering the goal of streamlining environmental reviews. Those efforts have included publication of various guidance and memorandums on the effective use of programmatic reviews, according to CEQ officials.
For example, CEQ issued regulations that direct agencies, to the fullest extent possible, to integrate the NEPA process into project planning at the earliest possible time to avoid delays and resolve potential issues, and to perform coordinated and concurrent environmental reviews to the extent possible to minimize duplication of effort. CEQ officials also noted that CEQ continues to co-chair the Transportation Rapid Response Team, a working group of federal agencies that facilitates interagency coordination and seeks to improve surface transportation project delivery consistent with environmental guidelines. CEQ periodically reviews and assesses its guidance and regulations to improve the effectiveness and timeliness of NEPA reviews, according to a CEQ official. For example, CEQ reviewed the environmental review processes of selected agencies in 2015 to identify model approaches that simplify the NEPA process and reduce the time and cost involved in preparing NEPA documents. CEQ used this review to identify and recommend changes to modernize NEPA’s implementation, including using information technology, such as a web-based application that identifies environmental data from federal, state, and local sources within a specific location, to improve the efficiency of environmental reviews. On August 15, 2017, the President signed an executive order that directed CEQ to develop a list of actions it will take to enhance and modernize the environmental review and authorization process. In September 2017, CEQ outlined its actions to respond to the executive order in a Federal Register Notice. According to CEQ officials, in response to the executive order, CEQ is in the process of reviewing its existing regulations on the implementation of the provisions of NEPA to identify changes needed to update and clarify its regulations. 
In June 2018, CEQ published an advance notice of proposed rulemaking to solicit public comment on potential revisions to its regulations to ensure a more efficient, timely, and effective NEPA process consistent with the national environmental policy. In addition, CEQ, along with the Office of Management and Budget, issued guidance for federal agencies for processing environmental reviews and authorizations in accordance with the executive order’s goal of reducing the time for completing environmental reviews for major infrastructure projects. Finally, CEQ officials stated that CEQ is leading an interagency working group, which includes representatives from the resource agencies, to review agency regulations and policies to identify impediments to the processing of environmental review and permitting decisions. CEQ anticipates the working group findings will address a number of issues relating to environmental reviews, including the environmental consulting and permitting processes. The federal government has enacted a number of statutory provisions aimed at streamlining the environmental review process for highway and transit projects. However, while Corps, FWS, and NMFS officials believe that these provisions have helped streamline their permit reviews and consultations, the lack of data hinders quantification of any trends in the duration of those reviews. Furthermore, agency and government-wide efforts to track major infrastructure projects, such as the planned Office of Management and Budget performance tracking system, will be hindered without accurate and reliable data. FWS and NMFS do not have adequate internal control procedures in place to ensure accurate and reliable data and cannot accurately assess their ability to meet statutory and regulatory requirements for completing consultations and issuing biological opinions. 
Although FWS and NMFS are in the process of upgrading their data systems, the agencies do not have documented plans or time frames that identify what controls they will use to ensure accurate data on the time taken for consultation reviews. We are making a total of two recommendations, one to the Fish and Wildlife Service and one to the National Marine Fisheries Service. Specifically, we are making the following recommendation to the Fish and Wildlife Service: The Principal Deputy Director of the Fish and Wildlife Service should direct the Fish and Wildlife Service to develop plans and time frames for improving its new consultation tracking system and develop appropriate internal controls, such as electronic safeguards and other data-entry procedures, to ensure accurate data on the time taken for consultations. (Recommendation 1) We are making the following recommendation to the National Marine Fisheries Service: The Assistant Administrator for Fisheries should direct the National Marine Fisheries Service to develop plans and time frames for improving its new consultation tracking system and develop appropriate internal controls, such as electronic safeguards and other data-entry procedures, to ensure accurate data on the time taken for consultations. (Recommendation 2) We provided a draft of the report to the Departments of Transportation, Defense, Commerce, and Interior and the Council on Environmental Quality. The Departments of Commerce and Interior each provided written responses, which are reprinted in appendixes III and IV, respectively. The Departments of Commerce and Interior agreed with our recommendations. In addition, the Departments of Transportation, Defense, Commerce, and Interior and the Council on Environmental Quality provided technical comments, which we incorporated as appropriate. 
We are sending copies of this report to appropriate congressional committees, the Secretary of the Department of Transportation, Secretary of the Department of Defense, Secretary of the Department of the Interior, Secretary of the Department of Commerce, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our work focused on federal-aid highway and transit projects and the provisions included in the past three surface-transportation reauthorizations that are intended to streamline the environmental consulting and permitting processes performed by the three federal resource agencies: Fish and Wildlife Service (FWS), National Marine Fisheries Service (NMFS), and the U.S. Army Corps of Engineers (Corps). This report (1) addresses the extent to which identified streamlining provisions had an impact on the time frames for the environmental consulting and permitting processes; (2) identifies actions taken by the resource agencies to streamline their consulting and permitting reviews and identifies additional streamlining opportunities, if any; and (3) describes the actions taken by the Council on Environmental Quality (CEQ) to accelerate highway and transportation projects. To identify relevant provisions that were aimed at streamlining the consulting and permitting processes for highway and transit projects, we reviewed the last three surface transportation reauthorization acts and relevant federal statutes, regulations, and guidance. 
The three reauthorizations we reviewed are as follows: the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU); the Moving Ahead for Progress in the 21st Century Act (MAP-21); and the Fixing America’s Surface Transportation Act (FAST Act). We identified 18 provisions that are intended to streamline various aspects of the NEPA environmental review process and could potentially affect the permitting and consultation processes of the three federal resource agencies. Provisions were grouped into categories developed in a previous GAO report on project delivery for ease of understanding. In our review we identified relevant statutory provisions as they had been amended by the three surface transportation reauthorization acts. Some of the provisions, as originally enacted, were modified by subsequent legislation. To evaluate the extent to which the streamlining provisions had an impact on the consulting and permitting processes, we requested official responses from each of the three resource agencies on the impact of the 18 provisions we identified on the consulting and permitting processes. We also conducted interviews with resource agency officials in Washington, D.C. and the respective field, district, and regional offices to determine the use and impact of the streamlining provisions from the surface transportation reauthorization acts. To quantify the extent to which the streamlining provisions had an impact on the time frames for completing consultations and permit reviews, we requested data on the time frames of consulting and permitting from FWS, NMFS, and Corps data systems for fiscal years 2005 through 2016 for all federally funded highway and transit projects. 
We requested data from the resource agencies that included, for each record, the start and end dates for each consultation and permit decision; the type of consultation or permit decision; the project sponsor or entity requesting the consultation or permit decision; the project type; a description of the project; and the field, district, or regional office that received and entered each record. The agencies provided the most recently available data, which we analyzed. FWS was unable to provide us reliable data prior to fiscal year 2009; the Corps was unable to provide us reliable data prior to fiscal year 2011; and NMFS was unable to provide us reliable data prior to calendar year 2012. Agency officials stated that data prior to those years were unreliable because of various factors, such as NMFS's performing a data migration to a new system in which some records did not transfer properly and Corps changes to its database in 2011 that made earlier data incomparable to post-2011 permit records. We performed checks to determine the reliability of the agency data and to identify potential limitations, such as missing data fields, errors, and discrepancies in calculations between records. We determined that the data provided by FWS and NMFS were not sufficiently reliable for examining the impact of the streamlining provisions on the time frames for completing consultation reviews. We also determined that the data provided by the Corps were sufficiently reliable for analyzing permitting time frames, but because the Corps was unable to provide reliable data prior to fiscal year 2011, we were unable to examine the impact of streamlining provisions on the time frames for completing permit reviews. Our discussion of resource agency data in this report focuses on these limitations. We reviewed agency policies and procedures for ensuring accurate and reliable data and compared them with federal standards for internal control.
To examine the actions used by resource agencies to streamline consulting and permitting reviews, we interviewed officials in seven FWS field offices, seven Corps district offices, two NMFS regional offices, three transit agencies, and seven state departments of transportation (state DOTs) to discuss leading practices and additional opportunities for streamlining the consulting and permitting processes, as well as the use of the respective agency data systems. We reviewed field office documents and policies used to accelerate consulting and permitting. To select the federal resource agency field and district offices for interviews, we used the consultation and permit data collected from the agencies. We selected the offices based on a number of criteria identified through analysis of federal resource agency data between fiscal years 2009 and 2016, including the most consultations or permit decisions performed; a mix of the average length of time for consultations or permit decisions; a mix of the types of consultations (e.g., formal or programmatic) or permit decisions (e.g., general or individual) performed by office; and a mix of geographic regions. For the selection of state DOTs, we used a number of selection criteria, including the most consultations and permit decisions requested by state; a mix of the average consultation or permit decision time by state; a mix of the types of consultations or permit decisions the states requested; and a mix of geographic regions. To select the transit agencies for interviews, we used a number of selection criteria, including high ridership numbers; substantial federal capital funding between 2005 and 2015; and a mix of geographic regions. We interviewed officials from these offices to identify actions that the offices use to accelerate the consulting and permitting processes, challenges in the processes, and potential actions that could be implemented to further streamline the consulting and permitting processes.
The officials we interviewed from three local transit agencies did not offer any perspectives on the use of streamlining practices or provisions related to environmental consulting and permitting, and are therefore not included in this report. These interviews are not generalizable to all resource agency, state DOT, or transit agency offices. In addition, we met with transportation and environmental advocacy groups to discuss potential additional actions for consulting and permitting. We also reviewed federal reports and recommendations on best practices for streamlining environmental reviews for federal infrastructure projects, including highway and transit projects. These reports included the Department of Transportation's Red Book and the Federal Permitting Improvement Steering Council's annual best practices reports. To describe actions taken by CEQ, we reviewed guidance and regulations issued by CEQ and interviewed CEQ officials on the actions the Council has taken to help streamline the environmental review process for federal transportation projects. We also interviewed officials at the Department of Transportation and the resource agencies to discuss the extent to which CEQ actions helped streamline environmental reviews for transportation projects. We conducted this performance audit from March 2017 to July 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

1. Programmatic approaches: Directs the Department of Transportation (DOT) to allow for programmatic approaches to conducting environmental reviews for an environmental impact statement and, to the extent determined appropriate, other projects.
Requires DOT to seek opportunities with states to enter into programmatic agreements to carry out environmental and other project reviews. MAP-21: §§ 1305(a) and 1318(d) and FAST Act: § 1304(b) (codified at 23 U.S.C. § 139(b)(3) and 23 U.S.C. § 109(note))

2. Identifying participating agencies: Requires the lead agency to identify, no later than 45 days after the date of publication of a notice of intent to prepare an environmental impact statement or the initiation of an environmental assessment, any other federal and non-federal agencies that may have an interest in the project, and to invite those agencies to become participating agencies in the environmental review process for the project. SAFETEA-LU: § 6002(a) as amended by FAST Act: § 1304(d)(1) (codified at 23 U.S.C. § 139(d)(2))

3. Concurrent reviews: Requires that each participating and cooperating agency carry out its obligations under other applicable law concurrently and in conjunction with the review required under the National Environmental Policy Act (NEPA), unless doing so would impair the ability of the agency to conduct needed analysis or otherwise to carry out those obligations, and that each agency implement mechanisms to ensure completion of the environmental review process in a timely, coordinated, and environmentally responsible manner. SAFETEA-LU: § 6002(a) as amended by MAP-21: § 1305(c) (codified at 23 U.S.C. § 139(d)(7))

4. Use of a single NEPA document: Requires, to the maximum extent practicable and consistent with federal law, that the project's lead agency develop a single NEPA document to satisfy the requirements for federal approval or other federal action, including permits. FAST Act: § 1304(d)(2) (codified at 23 U.S.C. § 139(d)(8))

5. Limiting participating agency responsibilities: Requires that participating agencies provide comments, responses, studies, or methodologies on areas within the special expertise or jurisdiction of the agency, and that an agency use the environmental review process to address any environmental issues of concern to the agency. FAST Act: § 1304(d)(2) (codified at 23 U.S.C. § 139(d)(9))

6. Environmental checklist: Requires the development of a checklist by the lead agency, in consultation with participating agencies, as appropriate, to help identify natural, cultural, and historic resources. FAST Act: § 1304(e) (codified at 23 U.S.C. § 139(e)(5))

7. Alternatives analysis: Requires the lead agency to determine the range of alternatives for consideration in any document that the lead agency is responsible for preparing for a project, and requires that those alternatives be used to the extent possible in all reviews and permit processes required for the project, unless the alternatives must be modified to address significant new information or circumstances or for the lead agency or a participating agency to fulfill the agency's responsibilities under NEPA in a timely manner. SAFETEA-LU: § 6002(a) and FAST Act: § 1304(f) (codified at 23 U.S.C. § 139(f)(4))

8. Coordination and scheduling: Requires a coordination plan for public and agency participation in the environmental review process within 90 days of a notice of intent to prepare an EIS or the initiation of an EA, including a schedule for completion of the environmental review process for the project. SAFETEA-LU: § 6002(a) as amended by MAP-21: § 1305(e) and FAST Act: § 1304(g) (codified at 23 U.S.C. § 139(g)(1))

9. Issue resolution process: Establishes procedures to resolve issues between state DOTs and relevant resource agencies, including those issues that could delay or prevent an agency from granting a permit or approval, and describes lead and participating agency responsibilities. SAFETEA-LU: § 6002(a) as amended by MAP-21: § 1306, and FAST Act: § 1304(h) (codified at 23 U.S.C. § 139(h))

10. Financial penalty provisions: Can cause a rescission of funding from the applicable office of the head of an agency, or equivalent office to which the authority for rendering the decision has been delegated by law, if that office fails to make a decision within certain time frames under any federal law relating to a project that requires the preparation of an EIS or EA, including the issuance or denial of a permit, license, or other approval. MAP-21: § 1306 as amended by FAST Act: § 1304(h)(3) (codified at 23 U.S.C. § 139(h)(7))

11. Use of federal highway or transit funds to support agencies participating in the environmental review process: Allows a public entity to use its highway and transit funds to support a federal (including DOT) or state agency or Indian tribe participating in the environmental review process on activities that directly and meaningfully contribute to expediting and improving project planning and delivery. SAFETEA-LU: § 6002(a) as amended by MAP-21: § 1307, and FAST Act: § 1304(i) (codified at 23 U.S.C. § 139(j))

12. 150-day statute of limitations: Bars claims seeking judicial review of a permit, license, or approval issued by a federal agency for highway projects unless they are filed within 150 days after publication of a notice in the Federal Register announcing the final agency action, or unless a shorter time is specified in the federal law under which the judicial review is allowed. SAFETEA-LU: § 6002(a) as amended by MAP-21: § 1308 (codified at 23 U.S.C. § 139(l))

13.
Enhanced technical assistance and accelerated project completion: At the request of a project sponsor or a governor of the state in which the project is located, requires DOT to provide additional technical assistance for a project whose EIS review has taken 2 years, and to establish a schedule for completing the review within 4 years. In providing assistance, DOT shall consult, if appropriate, with resource and participating agencies on all methods available to resolve the outstanding issues and project delays as expeditiously as possible. MAP-21: § 1309 (codified at 23 U.S.C. § 139(m))

14. Early coordination activities in environmental review process: Encourages early cooperation between DOT and other agencies, including states or local planning agencies, in the environmental review process to avoid delay and duplication, and suggests early coordination activities. Early coordination includes establishment of memorandums of agreement with states or local planning agencies. MAP-21: § 1320 (codified at 23 U.S.C. § 139(note))

15. Planning documents used in NEPA review: To the maximum extent practicable and appropriate, authorizes the lead agency for a project and cooperating agencies responsible for environmental permits, approvals, reviews, or studies under federal law to use planning products, such as planning decisions, analyses, or studies, in the environmental review process of the project. MAP-21: § 1310 as amended by FAST Act: § 1305 (codified at 23 U.S.C. § 168(b))

16. Programmatic mitigation plans used in NEPA review: Allows a state DOT or metropolitan planning organization to develop programmatic mitigation plans to address potential environmental impacts of future transportation projects. It also requires that any federal agency responsible for environmental reviews, permits, or approvals for a transportation project give substantial weight to the recommendations in a state or metropolitan programmatic mitigation plan, if one had been developed as part of the transportation planning process, when carrying out responsibilities under NEPA or other environmental law. MAP-21: § 1311 as amended by FAST Act: § 1306 (codified at 23 U.S.C. § 169(f))

17. Categorical exclusion determination authority: Authorizes DOT to assign and a state to assume responsibility for determining whether projects can be categorically excluded from NEPA review, and allows states that have assumed that responsibility to also assume DOT's responsibility for environmental review, consultation, or other actions required under federal law applicable to activities classified as categorical exclusions. SAFETEA-LU: § 6004(a), as amended by MAP-21: § 1312, and FAST Act: § 1307 (codified at 23 U.S.C. § 326)

18. Surface transportation project delivery program: Authorizes DOT to assign and a state to assume many federal environmental review responsibilities for highway, public transportation, and railroad projects, to be administered in accordance with a written agreement between DOT and the participating state. SAFETEA-LU: § 6005(a), as amended by MAP-21: § 1313 and FAST Act: § 1308 (codified at 23 U.S.C. § 327)

In addition to the contact named above, Brandon Haller (Assistant Director), Lauren Friedman, Tobias Gillett, Rich Johnson, Delwen Jones, Hannah Laufe, Jeff Miller, Cheryl Peterson, Malika Rice, Alison Snyder, Kirsten White, and Elizabeth Wood made significant contributions to this report.
|
Since 2005, the federal government has enacted various statutes aimed at accelerating the environmental review process for highway and transit projects. In addition, the Clean Water Act and the Endangered Species Act may require three federal agencies—the Corps, FWS, and NMFS—to issue permits or perform consultations before a project can proceed. GAO is required by statute to assess the extent to which statutory provisions have accelerated and improved environmental permitting and consulting processes for highway and transit projects. This report examines, among other things: (1) the impact of streamlining provisions on consulting and permitting time frames, and (2) additional actions used by federal resource agencies to streamline their reviews. GAO analyzed permitting and consulting data from the 3 federal agencies and interviewed officials from the 3 agencies, 16 agency field offices, and 7 state DOTs for their perspectives on the effect of streamlining provisions and other efforts. GAO selected these offices to include a range of locations and those with a greater number of permits and consultations, among other factors. Federally funded highway and transit projects must be analyzed for their potential environmental effects, as required by the National Environmental Policy Act, and may be subject to other environmental protection laws, including the Clean Water Act and the Endangered Species Act. These laws may require the U.S. Army Corps of Engineers (Corps) to issue permit decisions and the U.S. Fish and Wildlife Service (FWS) and the National Marine Fisheries Service (NMFS) to conduct consultations before a project can proceed. These three agencies are referred to as “resource agencies” for this report.
The three most recent transportation reauthorization acts include provisions that are intended to streamline various aspects of the environmental review process; 18 of these provisions could potentially affect time frames for the environmental permitting and consulting processes for highway and transit projects. While officials GAO interviewed at resource agencies and state departments of transportation (state DOT) noted that some actions called for by the 18 statutory provisions have helped streamline the consultation and permitting processes for highway and transit projects, GAO found that a lack of reliable agency data on permitting and consulting time frames hinders a quantitative analysis of the provisions' impact. Officials said, for example, that a provision that allows federal liaison positions at resource agencies to focus solely on processing applications for state DOT projects has helped avoid delays in permit and consultation reviews. However, none of the three resource agencies could provide enough reliable data to analyze changes in the durations of consultations and permit reviews over time for any of the provisions. Further, GAO identified limitations in FWS and NMFS data, such as negative or missing values and inconsistent data-entry practices. FWS and NMFS have limited controls, such as electronic safeguards and other data-entry procedures, to ensure the accuracy and reliability of their data on the duration of consultations. Left unaddressed, these data quality issues may impair the agencies' ability to accurately determine whether they are meeting their 135-day statutory and regulatory deadlines to complete consultations and provide biological opinions, and could affect their ability to provide accurate time-frame data for Office of Management and Budget efforts to track agencies' performance in conducting environmental reviews.
While FWS and NMFS officials stated that the agencies plan to improve their tracking systems, the agencies do not have documented plans or time frames for the improvements, and it is unclear whether the efforts will include internal controls to improve data reliability. Some federal resource agency and state DOT officials GAO interviewed identified additional actions that have been used to streamline the consultation and permitting processes to avoid delays in agency reviews. For example, 16 of the 23 resource agency and state DOT officials said that field office staff provided training to state DOT staff about the information field offices required for permit or consultation applications. Resource agency and state DOT officials also identified electronic systems that provide environmental data or allow electronic submission of documents as helpful streamlining actions. GAO is making two recommendations, one to FWS and one to NMFS, to develop plans and time frames for improving their tracking systems and to develop internal controls to improve data reliability. The Departments of Commerce and Interior concurred with the recommendations.
|
Treasury established HHF in February 2010 to help stabilize the housing market and assist homeowners facing foreclosure in the states hardest hit by the housing crisis. The HHF program is implemented by Treasury’s Office of Financial Stability. Treasury obligated funds to 18 states and the District of Columbia. Treasury allocated funds to each state’s HFA to help unemployed homeowners and others affected by house price declines. HFAs, in turn, design their own programs under HHF specific to local economic needs and circumstances pursuant to their contracts with Treasury. Treasury allocated $9.6 billion in HHF funding to 19 HFAs in five rounds. As described below, Treasury allocated $7.6 billion to participating HFAs during the first four rounds of funding, all of which occurred in 2010. HFAs were required to disburse these funds by December 2017. Round one: In February 2010, Treasury allocated $1.5 billion to the HFAs in the five states that had experienced the greatest housing price declines—Arizona, California, Florida, Michigan, and Nevada. Round two: In March 2010, Treasury allocated $600 million to the HFAs in five states with a large proportion of their populations living in counties with unemployment rates above 12 percent in 2009—North Carolina, Ohio, Oregon, Rhode Island, and South Carolina. Round three: In August 2010, Treasury allocated $2 billion to the HFAs in nine of the states funded in the previous rounds, along with the HFAs for eight additional states and the District of Columbia, all of which had unemployment rates higher than the national average in 2009. The additional HFAs that received funding were Alabama, the District of Columbia, Georgia, Illinois, Indiana, Kentucky, Mississippi, New Jersey, and Tennessee. Round four: In September 2010, Treasury allocated an additional $3.5 billion to the same 19 HFAs that received HHF funding through the previous rounds. 
In December 2015, the Consolidated Appropriations Act, 2016 authorized Treasury to make an additional $2 billion in unused TARP funds available to existing HHF participants. In early 2016, Treasury announced a fifth round of HHF funding. According to Treasury and HFA officials and other stakeholders, by that time some of the participating HFAs had begun to wind down their programs by letting go of program staff or making other changes after they had disbursed most of their funding from the first four rounds. Treasury allocated this additional $2 billion in two phases. Round five, phase one: In February 2016, Treasury allocated $1 billion to 18 of the HFAs that had previously been awarded HHF funds based on each state’s population and utilization of previous HHF funds. In order to qualify for phase one funding, states had to have drawn at least 50 percent of their previously received funding. Round five, phase two: In April 2016, Treasury allocated an additional $1 billion to 13 HFAs that applied and sufficiently demonstrated to Treasury their states’ ongoing housing market needs and the ability to effectively utilize additional funds. The HFAs that received funding were California, District of Columbia, Illinois, Indiana, Kentucky, Michigan, Mississippi, New Jersey, North Carolina, Ohio, Oregon, Rhode Island, and Tennessee. In conjunction with the fifth round of funding, Treasury extended the deadline for disbursement to December 31, 2021. Treasury also determined that HFAs must finish reviewing and underwriting all applications for final approval to participate in the program no later than December 31, 2020. HFAs that do not disburse HHF funds by the December 31, 2021, deadline will have to return the remainder of the funds to Treasury. See figure 1 for an overview of the allocation amounts and disbursement deadlines. Under HHF, HFAs designed locally tailored programs that address HHF’s goals of preventing foreclosures and stabilizing housing markets. 
These programs had to meet the requirements of the Emergency Economic Stabilization Act of 2008 and be approved by Treasury. Treasury categorizes programs into six types, which are discussed in detail later in this report, including programs that provide monthly mortgage payment assistance and programs that reduce the principal of a mortgage. Programs vary by state in terms of eligibility criteria and other details. HFAs contract with various stakeholders to implement HHF programs, including mortgage servicers and, in some cases, housing counseling agencies and land banks. The types of stakeholders involved vary depending on program design. For example, HFAs with blight elimination programs may choose to provide HHF funding to a local land bank to demolish and green blighted properties in distressed housing markets. Also, HFAs may contract with housing counseling agencies approved by the Department of Housing and Urban Development (HUD) to identify eligible applicants at risk of foreclosure. HFAs are required to report performance information on each of their HHF programs to Treasury on a quarterly basis. This information includes outputs, such as the number of homeowners assisted or properties demolished, as well as outcomes, such as the number of homeowners who are no longer participating in HHF programs. The specific types of performance information that Treasury requires HFAs to report vary depending on the program type and include both intended and unintended consequences of the program. For example, HFAs with mortgage payment assistance programs must report on the number of homeowners who have transitioned out of the program due to specific changes in their circumstances, such as regaining employment. 
HFAs do not have to report on the number of borrowers who transitioned out of the program into foreclosure sales, short sales, or deeds-in-lieu of foreclosure for their down payment assistance programs because the assistance is provided on behalf of a buyer who is purchasing, not selling or otherwise exiting, the home. Treasury provides HFAs with spreadsheet templates, which HFAs are to fill out and submit back to Treasury. The templates include data-reporting guidance in the form of a data dictionary, which describes the data elements HFAs are to report. Participating HFAs' HHF programs are governed by a participation agreement, or contract, with Treasury that outlines the terms and conditions that the HFA, as a recipient of HHF funds, must meet in providing services. Each agreement includes reporting requirements, program deadlines, and descriptions of permitted administrative expenses. Additionally, agreements include detailed descriptions of the HHF programs that Treasury has approved. Program descriptions include details such as eligibility criteria, structure of assistance, and the estimated number of participating homeowners. Participation agreements may be amended with Treasury approval to reflect changes to HHF programs, such as new requirements from Treasury or changes in the amounts HFAs allocate to each program. As an example, in 2015 Treasury added new conditions, called utilization thresholds, to each HFA's participation agreement. The thresholds establish the percentage of allocated funds each HFA was required to draw from its Treasury account by the end of each year from 2016 through 2018. If an HFA did not meet a threshold, Treasury reallocated a portion of the additional funds received during the fifth round to HFAs that did meet the threshold. If an HFA would like to make a change to an HHF program, the HFA must submit a request to Treasury that outlines the proposed change.
Treasury reviews the proposal through an interdisciplinary committee and, if the proposal is approved, amends the participation agreement. As of December 2017, the 19 participating HFAs had each received approval from Treasury and executed between 9 and 21 amendments to their individual participation agreements. Treasury's policies and procedures to monitor HFAs' implementation of the HHF program address 10 leading monitoring practices, including practices related to the collection of periodic performance reports and validation of performance through site visits. However, Treasury's assessment of HFAs' internal control programs, development of performance indicators, documentation of goals and measures, and documentation of HFAs' monitoring could better address leading practices (see fig. 2). Treasury created policies and procedures to guide regular oversight of HFAs' implementation of HHF. According to internal control standards for the federal government, management should design control activities to achieve objectives and implement control activities through policies—such as by periodically reviewing policies, procedures, and related control activities. In addition, management should establish and operate activities to monitor the internal control system and evaluate the results—for example, through ongoing monitoring procedures and separate evaluations. Treasury documented procedures for key areas of its monitoring framework, including providing funds to HFAs, evaluating HFAs' requests to change their programs, collecting financial and performance information from HFAs, conducting site visits, and addressing fraud detection and mitigation for Treasury's staff. Treasury regularly updates the policies and procedures it created and reviews its compliance oversight procedures annually. In addition, Treasury regularly conducts site visits to HFAs, as discussed below. Treasury uses a risk-based approach to selecting HFAs for its regular site visits.
This approach is consistent with leading practices we have developed for managing fraud risk, which state that agencies should employ a risk-based approach to fraud monitoring by taking into account internal and external factors that can influence the control environment. In 2018, Treasury began using a point-based, 29-factor approach to selecting HFAs for site visits for compliance reviews, taking into account factors such as whether prior fraud was detected or reported, observations from HFAs’ compliance reviews, administrative dollars spent compared to program assistance provided, and whether HFAs have documented blight-specific policies and procedures. According to Treasury staff, during site visits Treasury determines its test and sample sizes for a risk-based review of an HFA’s programs. Treasury also uses a risk-based approach to responding to potentially impermissible payments, and according to Treasury staff, its responses depend on the circumstances. If an HFA notifies Treasury of issues related to inappropriate payments involving fraud, waste, or abuse, Treasury staff notify and work with the Office of the Special Inspector General for the Troubled Asset Relief Program (SIGTARP) to provide technical assistance as needed. In 2017, Treasury implemented additional procedures with regard to HFAs’ administrative expenses. If Treasury identifies an administrative expense issue during a site visit, Treasury requires the visited HFA to undertake a multistep review of its administrative expenses, including reviewing additional administrative expenses if similar problems are identified during the initial review. The HFA is required to reimburse HHF for any administrative expenses that were not made in accordance with federal cost principles. Additionally, Treasury may require the HFA to create a plan for corrective action. Treasury collects performance information from participating HFAs on a regular basis, which a compliance team receives and reviews. 
These efforts are consistent with internal control standards, which state that management should use quality information to achieve the entity’s objectives, such as by obtaining relevant data from reliable sources. Treasury tracks its receipt of agencies’ quarterly performance reports and financial statements, as well as HFAs’ annual internal control certifications. Quarterly performance reports include information about homeowners, such as the number of homeowners who receive or are denied assistance. These reports also include program-specific performance data, such as the median assistance amount, and outcomes, such as the number of program participants who still own their home. According to HFAs’ participation agreements, HFAs are required to report performance information through the end of their programs. In addition, Treasury collects informal monthly updates from HFAs on their program performance and is in frequent contact with HFAs by phone to obtain information on HFAs’ performance, including any challenges states are facing, according to Treasury staff and HFAs with whom we met. Treasury also collects reports on the impact of blight elimination programs, which HFAs with these programs are required to submit to Treasury. Treasury regularly analyzes the performance and financial data that it collects through quarterly performance reports, quarterly unaudited financial statements, and annual audited financial statements that HFAs are required to submit. Periodic analysis of these materials is consistent with standards for internal control, which state that management should design control activities to achieve objectives and respond to risks—for example, by establishing activities to monitor performance measures and indicators. Treasury uses information from quarterly performance reports to produce quarterly reports for the public on the number of homeowners who received or were denied assistance, among other things. 
Treasury also includes data on the extent to which states have spent their HHF funding in monthly reports to Congress. Additionally, Treasury analyzes quarterly unaudited and annual audited financial statements to monitor HFAs’ spending of program funds and identify any areas of concern. According to Treasury staff, the agency also uses performance information that HFAs report quarterly, such as the number of homeowners who receive or are denied assistance, to assess whether HFAs are making sufficient progress in effectively utilizing program funds to reach the targets for assisting homeowners. Treasury has procedures to assess the quality of HFAs’ performance data when reviewing quarterly performance reports and conducting site visits. These procedures are consistent with internal control standards, which state that management should use quality information to achieve the entity’s objectives, such as by evaluating data sources for reliability. According to Treasury staff, beginning in the first quarter of 2018, Treasury required all participating HFAs to upload their performance data into a system that does basic data reliability testing, such as ensuring the numbers submitted by HFAs are consistent with data submitted for previous quarters. This system flags outliers or large changes for further review. Prior to this requirement, HFAs could use the system optionally. HFAs are able to upload their data as frequently as they want to check for errors or inconsistencies. After performance information is uploaded into the system, two Treasury staff review any issues flagged by the system and follow up with HFAs to resolve them. According to Treasury staff, as an additional validation step, Treasury staff conduct a reconciliation by checking whether the funds reported in HFAs’ performance reports match the data in the HFAs’ quarterly financial reports. After Treasury reviews each HFA’s performance data, it combines that information to create quarterly reports.
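The kinds of basic reliability checks described above—quarter-over-quarter consistency, flagging of outliers and large changes, and reconciliation of performance figures against financial reports—can be sketched in a few lines. The following is a hypothetical illustration only; the metric names, thresholds, and function structure are simplified assumptions, not Treasury’s actual system.

```python
# Hypothetical sketch of basic data reliability checks of the kind described
# in the report. All names and thresholds are illustrative assumptions.

def flag_quarterly_issues(prev, curr, change_threshold=0.5):
    """Return human-readable flags for a reviewer to follow up on.

    prev/curr: dicts of cumulative metrics for consecutive quarters,
    e.g. {"homeowners_assisted": 1200}.
    """
    flags = []
    for metric, curr_val in curr.items():
        prev_val = prev.get(metric)
        if prev_val is None:
            continue
        # Cumulative counts should never decrease quarter over quarter.
        if curr_val < prev_val:
            flags.append(f"{metric}: decreased from {prev_val} to {curr_val}")
        # Flag unusually large quarter-over-quarter changes for review.
        elif prev_val > 0 and (curr_val - prev_val) / prev_val > change_threshold:
            flags.append(f"{metric}: rose more than {change_threshold:.0%} in one quarter")
    return flags

def reconcile(performance_funds, financial_funds, tolerance=0.01):
    """Check that funds in the performance report match the financial report."""
    return abs(performance_funds - financial_funds) <= tolerance

prev_q = {"homeowners_assisted": 1200, "funds_disbursed": 9_500_000.0}
curr_q = {"homeowners_assisted": 1150, "funds_disbursed": 16_000_000.0}
print(flag_quarterly_issues(prev_q, curr_q))  # both metrics flagged for review
print(reconcile(16_000_000.0, 16_000_000.0))
```

In this sketch, a flagged metric is not treated as an error, only as an item routed to a reviewer, mirroring the report’s description of staff following up with HFAs on flagged issues.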
In addition, Treasury staff told us that they do a detailed review of HFAs’ financial statements during site visits, including but not limited to the timeliness of financial reporting, corrections to reports after the reporting cycle, and supporting documentation for all categories of expenditures sampled during the review. Treasury documents the offices that are responsible for receiving and reviewing monitoring materials, the deadlines for receiving this information, and the responsibilities of staff who execute internal control. This documentation is consistent with internal control standards, which state that management should implement control activities through policies, such as by documenting each unit’s internal control responsibilities. The standards also state that management should remediate identified internal control deficiencies on a timely basis, such as by having personnel report internal control issues through established reporting lines. Treasury’s policies and procedures document which offices are in charge of executing its monitoring procedures, such as collecting required documentation, conducting site visits, and evaluating HHF performance. Treasury informs HFAs of reporting lines to Treasury through phone calls and emails. Treasury and HFA staff also noted that they are in frequent contact with each other regarding administration of the program. Treasury uses regular (at least biennial) site visits, biweekly calls with HFAs, and monthly informal performance updates as means of validating HFAs’ performance. These practices are consistent with OMB guidance, which states that a federal awarding agency may make site visits as warranted by program needs. Treasury uses its site visits to assess HFAs’ program implementation, conduct its own analyses of program results, review HFAs’ use of program funds, and review HFAs’ implementation of internal controls. 
According to Treasury staff, Treasury also uses site visits to corroborate the information HFAs report on their program performance and use of HHF funds. According to HFAs with whom we met, site visits typically last multiple days and include entrance and exit conferences between Treasury and HFA staff. During site visits, Treasury staff review documentation related to homeowners and properties associated with the programs, quality assurance processes, antifraud procedures, information technology and data security, finances, and legal matters. After the site visit, Treasury issues a report documenting its observations. Within 30 days of receiving Treasury’s written report, HFAs are required to provide Treasury with a written response describing how they will address any issues of concern. Treasury included some procedures for project closeout in HFAs’ participation agreements. Creating procedures for project closeout is consistent with OMB guidance, which states that agencies should close out federal awards when they determine that applicable administrative actions and all required work have been completed by the nonfederal entity. Participation agreements describe various procedures for closing out HHF programs, including requirements for the return of unexpended funds to Treasury and final reporting and provisions for reimbursement of expenses. In addition, according to Treasury staff, Treasury is in the process of developing and issuing wind-down guidance for HFAs in stages to address specific areas of program activity. Agency officials also discussed winding down the HHF program during Treasury’s 2018 Annual Hardest Hit Fund Summit. The annual summit is a meeting that HFAs, servicers, and other stakeholders are invited to attend to facilitate information sharing among stakeholders involved in HHF. 
At the 2018 summit, the agency discussed topics that included final compliance and financial reviews, program change requests, operational timelines, and budgeting and staffing as they relate to the wind-down of HHF programs and operations. In addition, as states have begun to close some of their programs, Treasury has issued clarifying guidance to HFAs in order to effectively wind down the HHF program—including on streamlining the process for requesting changes to programs. Treasury staff also performed outreach to each HFA in April 2018 about their wind-down plans and, according to Treasury staff, the agency expects to prepare written guidelines for HFAs on certain other topics related to winding down the program, including reporting requirements, as appropriate. Treasury uses performance information to assess whether HFAs are performing at a satisfactory level. This practice is consistent with internal control standards, which state that management should establish and operate monitoring activities to monitor the internal control system and evaluate results, which can include evaluating and documenting the results of ongoing monitoring and separate evaluations to identify internal control issues. In addition, management should remediate identified internal control deficiencies on a timely basis. This can entail management completing and documenting corrective actions to remediate internal control deficiencies on a timely basis. Treasury staff described the agency’s process of assessing HFAs’ performance as “holistic.” As a part of this process, Treasury staff review the targets HFAs set for assisting households or demolishing blighted properties and monitor HFAs’ utilization rates. According to Treasury staff, if performance and financial data suggest that an HFA is not making sufficient progress toward its performance targets or is drawing funds too slowly, Treasury collaborates with the HFA and the HFA must create a plan to improve its performance. 
If an HFA is not responsive to Treasury’s efforts, Treasury issues a performance memorandum requiring the HFA to create a plan to address its deficiencies. As of October 2018, Treasury had issued performance memorandums to seven HFAs—five in 2012 and two in 2015. Additionally, as mentioned previously, Treasury issues a report to each HFA following each site visit describing any issues of concern Treasury identified. Treasury requires HFAs to provide the agency with a written response to the report within 30 days of the report date describing the HFA’s plan for addressing any deficiencies. Treasury regularly communicates with HFAs, servicers, and other stakeholders interested in HHF, which is consistent with internal control standards that state management should externally communicate the necessary quality information to achieve the entity’s objectives. This can include communicating with, and obtaining quality information from, external parties using established reporting lines. According to Treasury staff, Treasury holds biweekly calls with HFAs and servicers, facilitates issue-specific working groups between HFAs and stakeholders, and holds an annual summit related to HHF. HFA staff said Treasury staff are very responsive to program-related questions. Treasury’s annual summit allows interested parties, such as HFAs, servicers, and other stakeholders, to discuss important issues related to HHF. To assist HFAs in designing their internal control activities, including defining program objectives, Treasury created an optional risk assessment matrix to help HFAs and their auditors identify and assess HFAs’ risks. The matrix includes control objectives and example control activities, and it allows HFAs to determine their risk tolerances for each control objective. 
For example, for the risk of improper use of administrative funds, the matrix includes “ensuring that appropriate documentation exists to support HHF administrative expenses” as a control objective, and it lists routine review of administrative payments by internal auditors as an example control activity. HFAs can identify their risk tolerances as low, medium, or high in the matrix. This matrix is consistent with federal internal control standards, which state that management should define objectives clearly to enable the identification of risks and define risk tolerances. However, Treasury does not systematically collect or evaluate HFAs’ risk assessments. HFAs’ participation agreements require them to submit an annual certification of their internal control programs by an independent auditor to Treasury. According to Treasury staff, independent auditors sometimes choose to include HFAs’ risk assessments with the annual certification, and during site visits Treasury obtains documentation of HFAs’ internal control programs, which sometimes includes their risk assessments. Outside of these instances, Treasury does not routinely collect HFAs’ risk assessments. Further, in those instances when Treasury does collect them, it does not analyze the assessments to evaluate whether the risk levels are appropriate. While Treasury does a more in-depth evaluation of HFAs’ internal controls during site visits, this review does not include evaluating the appropriateness of the risk levels HFAs identified. For example, one of the risk assessment matrixes we reviewed listed the HFA’s administrative expenses as low-risk despite this HFA having a history of alleged improper-payment-related issues with its HHF program, a mismatch that Treasury’s review would not have evaluated. Treasury officials told us that during site visits they may discuss the risk levels that HFAs determine, but Treasury has not asked or required any HFAs to change a risk level.
Failure to collect and evaluate HFAs’ risk assessments is inconsistent with an important practice for preventing fraud we have previously identified—monitoring and evaluating the effectiveness of preventive activities, including fraud risk assessments and the antifraud strategy, as well as controls to detect fraud and response efforts. Further, according to internal control standards, management should identify, analyze, and respond to risks related to achieving the defined objectives, and an oversight body may oversee management’s estimates of significance so that risk tolerances have been properly defined. According to Treasury staff, the risk assessment matrixes are intended for use by HFAs and their independent auditors in preparing for the annual certification. They said that risk tolerances, or levels, are to be assigned by HFAs and their independent auditors, not by Treasury, and that it would be inappropriate for Treasury to interfere with their determination. However, agreed-upon procedures performed by HFAs’ independent auditors do not provide assurance or a conclusion as to whether HFAs’ risk levels are appropriate. For example, in two agreed-upon procedures reports we reviewed, the auditors stated that the procedures performed were based on the HFAs’ risk matrixes, but they did not mention assessing whether the risk levels assigned to different controls were appropriate. Treasury staff also said that Treasury expands its sample size and criteria for specific programs or categories of expenses during a compliance review where repeated or significant observations have been previously found. However, by not collecting and evaluating HFAs’ risk assessments, Treasury limits its ability to monitor the effectiveness of HFAs’ preventive activities, controls to detect fraud, and response efforts. In addition, Treasury is missing an opportunity to help ensure that risk levels are appropriate.
Treasury’s documentation of its efforts to monitor HFAs is consistent with internal control standards, which state that management should establish and operate activities to monitor the internal control system and evaluate results and remediate deficiencies on a timely basis. More specifically, the standards cite as characteristics of these principles that management evaluate and document the results of ongoing monitoring and separate evaluations to identify internal control issues, and determine appropriate corrective actions for internal control deficiencies on a timely basis. Treasury addresses these criteria by documenting its monitoring findings through site visit reports, as previously discussed. Treasury requires HFAs to provide the agency with a plan to address any issue described in the site visit report within 30 days. In addition, Treasury addresses these criteria by documenting HFAs’ responses and assessing whether the issue has been addressed at the next site visit. Furthermore, Treasury sets deadlines for and documents receipt of HFAs’ annual internal control certifications, quarterly financial and performance reports, and annual audited financial statements. When underperforming HFAs are not responsive to Treasury’s attempts to work with them to improve their performance, Treasury documents the issues it has found and requires the HFAs to create and submit a corrective plan. Treasury also directs HFAs to establish and execute their own internal control system, but it does not require HFAs to consistently document which of their staff are responsible for internal control execution. HFAs were required to submit staffing information within 90 days of joining HHF. However, HFAs are not required to regularly update this information. 
Further, Treasury’s written procedures for reviewing HFAs’ internal control programs during site visits do not include reviewing documentation of which HFA staff are responsible for responding to or reporting internal control issues. These practices are inconsistent with standards for internal control, which state that management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity’s objectives. The standards also note that effective documentation can assist management’s design of internal control by establishing the “who, what, when, where, and why” of internal control execution. We asked Treasury if it encouraged HFAs to document which personnel are in charge of executing internal control procedures. Treasury staff referred us to the initial requirement that HFAs submit staffing information within 90 days of joining HHF and stated that there is no requirement that HFAs update this information. Further, Treasury staff said that during site visits they interview key HFA staff who execute internal controls and document these interviews. However, this practice does not ensure that HFAs maintain updated documentation of which of their staff are responsible for internal control execution. Without requiring HFAs to routinely update their documentation, particularly as HFAs are winding down their HHF programs and staff begin to turn over, Treasury cannot be assured that HFAs are keeping their staff updated about who is responsible for monitoring issues and internal control execution. Treasury and HFAs created quantitative output and outcome measures to assess HFAs’ performance. For example, Treasury created utilization thresholds to help ensure HFAs spend their HHF funds in a timely manner. Also, HFAs created performance targets to estimate the number of homeowners they could assist (or blighted properties they could demolish) through HHF.
These activities are consistent with an attribute of successful performance measures—specifically, that measures should have a numerical goal. However, some of Treasury’s performance measures are not clearly stated, and Treasury did not create consistent methodologies for HFAs to use to assess the performance of their HHF programs. In our previous work on attributes of successful measures, we identified that measures should be clearly stated and that the name and definition should be consistent with the methodology used to calculate them. While Treasury provided HFAs with a data dictionary to describe the information HFAs are required to report, Treasury defined the term “unique applicants” in a manner that allows HFAs to count applicants differently, leading to inconsistencies in HFAs’ methodologies for calculating some performance measures. As discussed later in this report, Treasury also allowed and sometimes required HFAs to self-define some data elements. Additionally, performance measures should indicate how well different organizational levels are achieving goals. However, Treasury did not design a consistent methodology for HFAs to use to develop targets for the number of homeowners and properties their HHF programs may assist, and as discussed later in this report, HFAs we interviewed used different methodologies. Because some of Treasury’s performance measures are not clearly stated and because Treasury did not design consistent methodologies for HFAs to use in setting targets, as HFAs close down their HHF programs, Treasury has a limited ability to compare performance across HFAs or aggregate these data to evaluate how well the HHF program as a whole is achieving its goals. Treasury created goals and measures to assess HHF performance, consistent with a practice we previously identified of creating performance goals and measures that address important dimensions of program performance and balance competing priorities. 
Treasury addressed this practice by creating utilization thresholds for HFAs and inserting them in HFAs’ participation agreements. Treasury also addressed this practice by documenting its performance measures, using standardized spreadsheets through which HFAs regularly report on outputs and outcomes related to the services provided to distressed homeowners. However, Treasury has not explicitly documented the relationship between program outputs and the overall goals of the HHF program, and it does not generally require HFAs to establish intermediate goals unless the HFA has not met Treasury’s performance expectations. This is inconsistent with practices we previously identified relating to results-oriented performance goals and measures. Among these practices are including explanatory information on goals and measures in performance plans and using intermediate goals to show progress or contributions toward intended results. The main goals of HHF are to prevent foreclosures and stabilize housing markets. However, Treasury has not documented the relationship between many of the program outputs it tracks and the main goals of the HHF program. According to Treasury, the relationship between its outputs and the goals of HHF can be inferred through various memorandums and materials it issued when HHF was created. However, these documents do not explicitly explain the rationale for the use of these output measures to assess HHF’s ability to stabilize neighborhoods and prevent foreclosures. By not documenting the relationship between HHF’s program outputs and services and the overall goals of the HHF program or requiring all HFAs to set intermediate goals, Treasury missed the opportunity to more proactively articulate a results-oriented focus for the HHF program. As of December 2017, the 19 participating HFAs had 71 active HHF programs.
Active HHF programs fall under one of six Treasury-defined program types: mortgage assistance, reinstatement, transition assistance, principal reduction, down payment assistance, and blight elimination. Participating HFAs may have implemented additional HHF programs, but these programs had either stopped disbursing funds or had not received a total allocation from Treasury at the time of our review. Individual HFAs may implement multiple programs—for example, the Mississippi HFA had two active programs, and the South Carolina HFA had five. The most common type of HHF program as of December 2017 was mortgage assistance, as shown in table 1. All 19 HFAs had active mortgage payment assistance programs as of December 2017. In contrast, 3 HFAs had active transition assistance programs. As of December 2017, we found that the 71 active HHF programs had assisted approximately 400,000 homeowners and demolished almost 24,000 blighted properties. According to Treasury data, the majority of homeowners who received HHF assistance participated in a mortgage payment assistance program. Treasury data also indicate that transition assistance programs assisted the smallest number of homeowners relative to other HHF program types (see table 2). HHF programs of the same program type can vary in a number of ways, including eligibility criteria, length of time implemented, and number of homeowners assisted. Within each program type, HFAs designed programs that sometimes varied based on specific housing needs. For example, while both the Nevada and Florida HFAs had active reinstatement programs as of December 2017, these programs had different eligibility criteria. The Nevada HFA’s reinstatement program targeted low-to-moderate income homeowners who had fallen behind on their mortgages. The Florida HFA offered a similar reinstatement program for delinquent mortgages but also offered a program for senior homeowners who had fallen behind on property taxes and other fees. 
HHF programs also varied by duration and the amounts of assistance provided as of December 2017. For instance, since all HFAs initially launched mortgage payment assistance programs at the beginning of HHF, these programs have been active for an average of 7 years. In contrast, HFAs began implementing down payment assistance programs in 2015. Additionally, the median amount of assistance provided varied by program type. According to analysis of Treasury data from 2010 through 2017, assistance ranged from a median amount of $4,000 per household for transition assistance programs to over $42,000 per household for principal reduction programs. The HHF program is beginning to wind down. As of September 2018, Treasury had disbursed $9.1 billion of the $9.6 billion obligated under HHF. According to Treasury officials, although HFAs may continue issuing new approvals through December 31, 2020, most states have already begun to close down HHF programs or will do so by the end of 2018 as they exhaust their available funds. These include California and Florida, the two largest states in the program. According to Treasury officials, during the fifth round of funding Treasury established new conditions for HFAs, called utilization thresholds, to help maximize the use of the $2 billion in newly available funds. According to documentation from Treasury, if an HFA does not meet its utilization threshold, Treasury will reallocate a portion of the unused funds to HFAs that did. The amount reallocated to each HFA is determined by state population, the percentage of funds drawn by HFAs, and other factors. The utilization thresholds for 2016 through 2018 were structured as follows:

2016. If an HFA did not draw at least 70 percent of its funding from rounds one through four by December 31, 2016, 50 percent of its round five funding would have been reallocated.

2017. If an HFA did not draw at least 95 percent of its funding from rounds one through four by December 31, 2017, 75 percent of its round five funding would have been reallocated.

2018. If an HFA did not draw at least 80 percent of its participation cap by December 31, 2018, an amount equal to the portion of round five funding that had not been drawn from Treasury would have been reallocated.

Most HFAs have met Treasury’s 2016 and 2017 utilization thresholds. More specifically, the 18 HFAs eligible for round five funding met the 2016 utilization threshold. As a result, Treasury did not reallocate any HHF funds for that year. As of December 2017, 17 of the 18 HFAs eligible for round five funding met the 2017 utilization threshold. The Nevada HFA drew 70 percent of its funding for rounds one through four as of December 31, 2017, and therefore did not meet the 2017 utilization threshold. As a result, Treasury reallocated approximately $6.7 million of the Nevada HFA’s unused fifth round HHF funds to the 17 other HFAs. As of September 2018, all HFAs had met the 2018 utilization threshold, and Treasury had disbursed most of the funds obligated under HHF. The targets that HFAs set are of limited use for evaluating the performance of individual programs, program types, HFAs, or the HHF program overall. In their participation agreements, HFAs were required to estimate the number of homeowners they intended to assist and, if they had a blight elimination program, the number of blighted properties they intended to demolish for each of their HHF programs. Treasury refers to these estimates as targets. HFAs that we spoke with used different methodologies to calculate these targets. For instance, one of the HFAs we spoke to calculated targets for the number of homeowners they could assist by dividing the program’s total allocation by the average amount of assistance it anticipated awarding to each homeowner.
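Both mechanisms described above reduce to simple arithmetic: a utilization test comparing funds drawn against a threshold, and a homeowner target derived by dividing an allocation by an assistance amount. The sketch below is a hypothetical illustration only; the percentages mirror the report, but the function names and dollar figures are assumptions (the actual reallocated amounts also depended on state population and other factors).

```python
# Hypothetical illustration of the utilization-threshold test and the
# allocation-based target estimate described in the report. Percentages
# mirror the report; dollar amounts and names are illustrative assumptions.

def round_five_reallocation(drawn, allocation, threshold, realloc_share, round_five_funding):
    """Return how much round five funding would be subject to reallocation.

    For 2016: threshold=0.70, realloc_share=0.50 -- if less than 70% of
    rounds one-four funding was drawn by the deadline, 50% of round five
    funding would have been reallocated. (The actual amount reallocated to
    each HFA also depended on state population and other factors.)
    """
    utilization = drawn / allocation
    return realloc_share * round_five_funding if utilization < threshold else 0.0

def homeowner_target(total_allocation, assistance_per_homeowner):
    """One HFA's approach: total allocation divided by anticipated assistance."""
    return int(total_allocation / assistance_per_homeowner)

# An HFA that drew 70% of rounds one-four funding by the 2017 deadline
# (95% required) would see 75% of its round five funding subject to reallocation.
print(round_five_reallocation(70.0, 100.0, 0.95, 0.75, 10_000_000.0))  # 7500000.0
print(homeowner_target(50_000_000.0, 25_000.0))  # 2000
```

Note that dividing by the average anticipated assistance (as above) yields a higher target than dividing by a program’s maximum assistance amount, which is one reason targets computed under different methodologies are not directly comparable.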
In contrast, another HFA calculated its target for assisting homeowners by dividing that program’s total allocation by the maximum amount of assistance homeowners could be awarded through the program. According to Treasury staff, they did not develop a consistent methodology for HFAs to use in setting these targets because, in their view, HFAs are most familiar with local conditions and should have flexibility in adjusting the program criteria or creating new programs based on these conditions. Internal control standards state that management should define objectives clearly to enable the identification of risks and define risk tolerances. In particular, the standards note the importance of stating measurable objectives in a form that permits reasonably consistent measurement. Further, our guide to designing evaluations states that where federal programs operate through multiple local public or private agencies, it is important that the data agencies collect are sufficiently consistent to permit aggregation nationwide, which allows evaluation of progress toward national goals. Because Treasury did not develop a consistent methodology for HFAs to use when setting performance targets, the targets HFAs developed do not permit consistent measurement of program performance or an evaluation of how well the HHF program as a whole met its goals. However, with the program beginning to wind down, any changes going forward would not improve the consistency of previously collected data or Treasury’s ability to evaluate the program as a whole. Treasury collects quarterly data on outcomes from HFAs that implement four of the six HHF program types: mortgage payment assistance, principal reduction, reinstatement programs, and transition assistance programs. HFAs must track outcomes, both intended and unintended, until a household is no longer involved with an HHF program. 
Intended outcomes include, for example, the number of homeowners who completed or transitioned out of an HHF program as a result of regaining employment. Unintended outcomes include the number of homeowners who transitioned out of an HHF program into a foreclosure sale. The type of outcomes Treasury requires HFAs to track depends on the program type. Treasury did not design outcome measures in a way that would permit it to use these data to evaluate whether HFAs or the overall program are achieving the stated goals. More specifically, Treasury officials told us that the data they collect on outcomes cannot be used to compare the outcomes achieved by different HFAs or through different HHF program types. According to Treasury officials, HFAs have historically had different interpretations of Treasury’s outcome measures. Treasury revised its template for HHF reporting in 2015 and 2017 to clarify certain performance-related terms. However, Treasury officials told us that conclusions drawn from HHF data on some outcomes are of limited use because HFAs interpret Treasury’s guidance on these data differently. Additionally, after it made revisions to guidance on performance reporting in 2015, Treasury allowed—and in some cases required—HFAs to self-define certain data elements. For example, Treasury required HFAs to define how they calculate the median principal forgiveness awarded by an HHF program. As previously discussed, a key attribute of effective performance measurement is clearly stated performance measures with names and definitions that are consistent with the methodology used to calculate the measure. Additionally, we have noted in our guide to designing evaluations that a program’s outcomes signal the ultimate benefits achieved by a program and should be considered when evaluating a program. Further, OMB has set the expectation that agencies should conduct evaluations of federal programs.
However, Treasury did not clarify certain outcome measures until 5 years into the program, and even after clarifying its reporting guidance it did not take steps to ensure that HFAs calculated alternative outcomes consistently. As a result, the alternative outcomes data that Treasury collects are of limited use for evaluating the performance of HFAs, HHF programs by program type, or the HHF program overall. As many programs are closing, further clarification or changes would not capture the full scope of the program and would not improve such evaluations. Treasury requires HFAs with blight elimination and down-payment assistance programs to identify indicators that are intended to track and quantify the HHF program's impact on targeted areas, although HFAs are not required to report outcomes data for these program types in their quarterly performance reports. According to Treasury, blight elimination and down-payment assistance programs are focused on stabilizing housing markets in targeted distressed areas to prevent foreclosures, and therefore Treasury does not require HFAs to report individual-level outcomes for these programs in quarterly performance reports. Treasury officials told us that the impact of these program types upon neighborhoods, such as increases in the values of properties in neighborhoods where down-payment assistance or blight elimination programs were used, may not be observable immediately but may appear over time. As of August 2018, four of eight HFAs with blight elimination programs had submitted impact studies to Treasury. Also, all HFAs with down-payment assistance programs had submitted studies to Treasury. Three blight elimination program impact studies suggest that the programs had positive impacts on targeted areas, although two of the studies have important limitations. Studies on the programs in Michigan and Ohio found that home prices increased in communities where blighted properties were demolished.
For example, the Ohio study found there was about a 4-dollar increase in home values for every dollar spent on the HHF-funded blight elimination program. However, this study examined only 1 of the 18 counties that were served by the Ohio HFA’s blight elimination program. A study on the Illinois program found that certain key economic indicators had improved over a 6-year period in areas targeted by the program. For example, the percentage of negative equity mortgages in 9 of the 10 areas studied declined by an average of 7 percent between 2010 and 2016. However, the findings of this study do not isolate the independent effect of the Illinois HFA’s blight elimination program because other factors, such as local economic conditions, could also affect the performance of key economic indicators. HHF stakeholders with whom we spoke described challenges in implementing HHF programs related to staffing and multiple funding rounds, program implementation, outreach to borrowers, program accessibility, the variety of programs and their status, and external factors. Both Treasury staff with responsibilities for monitoring HFAs’ implementation of HHF and stakeholders told us that these were the types of topics discussed during regular phone calls and annual meetings. Stakeholders included staff from four HFAs that are implementing HHF programs, mortgage servicers and housing counseling agencies that are involved with HHF, and other interested organizations, including those that work with HFAs. Staffing and multiple funding rounds. All four HFAs and various stakeholders with whom we spoke told us that staff turnover at HFAs presents challenges. In some cases, turnover has been related to the way the HHF program has been funded. For example, staff from two HFAs mentioned that either they let staff go or their temporary staff found more permanent positions as the agencies spent down their initial HHF funds. 
When Congress authorized Treasury to make additional TARP funds available to HHF beginning in 2016, these HFAs had to hire and train new staff. Treasury officials told us that many HFAs encountered staffing challenges as a result of the program's fifth funding round. Additionally, staff from two servicers and an organization that advocates for HFAs told us that HFA turnover presents challenges because it takes time for new staff to become familiar with the program and for programs to ramp back up. Program implementation. Staff from most of the HFAs and servicers with whom we spoke, as well as Treasury staff and other stakeholders, told us that implementation of the HHF program was challenging. Specific implementation challenges mentioned by HFAs included creating an in-house information system to manage HHF data; managing refinancing requests from homeowners who have been awarded HHF funds (to help ensure the HFA's place as a lien-holder); and sharing information with servicers. While Treasury helped to develop a system to facilitate the sharing of loan-level information for the HHF program, one HFA and some servicers noted that the system has not always worked smoothly. Additionally, Treasury staff told us that a challenge HFAs are currently facing is the wind-down of the HHF program. They stated that HFAs must determine how they should advertise to the public, internal staff, and external partners that programs are closing; when they should stop accepting applications; and what resources are available for activities related to program closeout. Outreach to homeowners. All four HFAs and an advocacy organization told us that it can be challenging to effectively reach eligible homeowners. As an example, staff from one HFA told us that housing counseling agencies have been an effective tool for making homeowners aware of HHF programs but that there are fewer foreclosure counselors available to homeowners now compared to when the HHF program started in 2010.
Staff from an HFA that closed its HHF programs to new applicants after the initial funding rounds told us that it was challenging to communicate to the public, and therefore to potential clients, that its HHF programs were reopening after it received additional funding. Additionally, a representative of a nonprofit organization that works to address challenges in the mortgage market told us that many people did not know about the HHF program and that program information was hard for consumers to find on many states' websites. Program accessibility. According to academic research and two stakeholders (an advocacy group and a housing counseling agency), the accessibility of an HFA's program can affect program participation. A 2014 study of Ohio's HHF program found that the design of the program hampered accessibility and therefore program participation. The program was designed to require registrants (those who started the application process) to continue the application process by working with a housing counseling agency. The study found that registrants who lived within 5 miles of their assigned housing counseling agency submitted a complete application almost 32 percent of the time, while those who lived over 50 miles away submitted a complete application about 18 percent of the time. Similarly, a representative for an organization that advocates on behalf of low-income homeowners noted that the design of one state HHF program requires applicants to meet with specific housing counseling agencies to complete the application process. However, the housing counseling agencies to which applicants are assigned may not be nearby. The representative stated that in some cases, homeowners are assigned to a housing counseling agency that is located 3 or 4 hours away from where the homeowners live. According to the advocacy group representative, this design is particularly challenging for elderly homeowners who may have trouble applying online and need personal help.
Additionally, representatives for a housing counseling agency told us that their state HFA stopped involving community organizations to guide applicants throughout the application process once the HFA received additional HHF funding in 2016 and instead chose to work with applicants directly. They said this design may hurt homeowners who do not live near the HFA and would benefit from in-person assistance that could be provided close to their homes. A representative from the state's HFA confirmed that the HFA decided to work directly with applicants once it received additional HHF funds in 2016. The representative stated that while homeowners could also apply for HHF assistance online (after the HFA changed the program design in 2016), the HFA's system did not accept electronic signatures. Thus, homeowners without the ability to print and scan documents would need to come to the HFA's office to complete the application process. Variety of programs and their status. Treasury officials noted that the wide variety of programs that HFAs are implementing can create operational challenges for HFAs. As an example, the officials explained that HFAs may encounter challenges when their programs require coordination with local partners. For example, land banks can encounter delays in acquiring properties for demolition, and contractors may not do demolition work properly or may attempt to increase the amounts that they charge for their work after winning a contract. Five mortgage servicers with whom we spoke described similar challenges. For example, representatives from one servicer told us that it was challenging to work with the 19 different HFAs because they all implemented different HHF programs. The representatives added that it was particularly challenging if an HFA had a change in either leadership or points of contact for the HHF program. Another servicer explained that servicers have to review each HFA's participation agreement and subsequent updates.
This servicer noted that updates to agreements can create challenges, as the servicer needs to determine whether it can provide what the HFA is requesting. Representatives from this and a third servicer told us that it would have been helpful for servicers to have an up-to-date list of active HHF programs. Further, one servicer told us that it is challenging to help homeowners understand that each HFA and program has different requirements and guidelines. As previously discussed, Treasury communicates information to stakeholders, such as servicers, through regular conference calls. However, Treasury expects HFAs to keep their servicers abreast of the status of HHF programs because HFAs contract directly with servicers. Representatives from one HFA noted that it was challenging to keep servicers updated on changes to their HHF programs. For example, they reported that when the HFA made changes to its unemployment program, servicers confused the program with another of the agency’s HHF programs. The representatives also stated that they have had to make many phone calls to try to keep servicers up to date. External factors. Treasury officials and other stakeholders noted that external factors such as changing market needs and natural disasters have created challenges for some HFAs. Treasury officials noted that some HFAs have had to change their HHF programs over time to respond to changes in local housing conditions. An organization that advocates for HFAs as well as an HFA similarly noted that changing housing markets present challenges for HFAs, which have to adjust their program offerings in an effort to continue to serve homeowners. As previously discussed, HFAs must obtain Treasury approval to add or revise their HHF programs, and they must document the changes by amending participation agreements. Treasury officials also noted that natural disasters can affect HHF programs because HFAs have to turn their attention to post-disaster housing needs. 
Additionally, Treasury officials stated that after a natural disaster it can become difficult to verify the eligibility of applicants, particularly if key documents have been lost or communication channels with homeowners or servicers are affected. Through its on-site monitoring efforts, Treasury has identified issues that participating HFAs must address for their HHF programs. During on-site reviews in 2016 and 2017, Treasury staff assessed selected HFAs’ efforts in one or more Treasury-identified areas. As previously noted, Treasury’s policy at the time of our review was to conduct on-site reviews of each participating HFA at least once every 2 years. In 2016 Treasury conducted on-site monitoring visits for 14 HFAs and identified issues that the HFAs needed to address to improve their HHF programs. Issues Treasury identified primarily fell into two areas. The first of these was monitoring processes and internal controls—for example, Treasury found that one HFA had not developed documentation of its compliance procedures for a down payment assistance program. The other primary area was homeowner eligibility—for example, Treasury found that an HFA had misclassified the reasons that some homeowners were not admitted into the state’s HHF program. In 2017 Treasury conducted site visits to 15 HFAs. For this period, Treasury’s most common issues related to homeowner eligibility and administrative expenses. According to Treasury officials, the increase in issues related to administrative expenses between 2016 and 2017 was a result of greater agency focus on this topic. Treasury observed, for example, that one HFA lacked sufficient documentation to support some administrative expenses and that another HFA had misclassified some administrative expenses. As previously discussed, HFAs are required to provide Treasury with a written plan describing how they will address issues Treasury identifies and reimburse HHF for any impermissible expenses. 
Through its oversight activities, SIGTARP reported that some participating HFAs have encountered challenges related to appropriate use of administrative expenses, management of their programs, and blight removal. In August 2017, SIGTARP reported that participating HFAs used $3 million in HHF funds for unnecessary expenses. The report maintained that some HFAs were using their administrative funds for expenses that were unnecessary. In a May 2018 hearing, SIGTARP testified that some HFAs were not following federal cost principles related to administrative expenses. Additionally, SIGTARP has issued reports describing mismanagement of the HHF program by specific HFAs, as well as challenges related to blight removal. While Treasury has disagreed with the dollar amount of administrative expenses used inappropriately by HFAs, it has also worked with HFAs and SIGTARP to address SIGTARP’s findings. As HHF programs begin to close and participating HFAs take steps to ensure they spend all of their HHF funds before the program deadline, opportunities exist in two areas for Treasury to manage risk and improve program operation and closeout: By not consistently and routinely collecting HFAs’ risk assessments, Treasury limits its ability to monitor and evaluate the effectiveness of HFAs’ preventive activities, controls to detect fraud, and response efforts. Further, by not evaluating these risk assessments, Treasury is missing an opportunity to help ensure that risk levels are appropriate. As HFAs wind down their HHF programs and HFA staff are relieved of their HHF-related positions, maintaining updated and accurate staffing information can help ensure that HFA staff are informed of who in their own offices is responsible for internal control execution. 
Because Treasury did not implement the HHF program in a manner that is consistent with standards for program evaluation design we previously identified, the performance data that Treasury collects do not provide significant insights into the program's effectiveness. More specifically, Treasury did not clearly state some of its performance measures; lacks documentation of the relationship between program outputs and overall goals; did not design consistent methodologies for HFAs to use in setting performance targets; and did not require participating HFAs to use consistent methodologies to calculate outcomes. As a result, Treasury cannot aggregate key performance data or compare performance data across HFAs or HHF program types to demonstrate the results of the HHF program. As we have previously reported, OMB has set the expectation that agencies should conduct evaluations of federal programs. Moreover, our guide to designing evaluations states that where federal programs operate through multiple local public or private agencies, it is important to ensure the data these agencies collect are sufficiently consistent to permit aggregation nationwide in order to evaluate progress toward national goals. Although HHF programs must stop disbursing funds by December 31, 2021, many of the programs have already ended or are in the process of winding down, making it too late for changes to Treasury's approach to performance measurement to have a meaningful impact. However, we note that if Treasury were to extend the current program, as it did after Congress provided additional funding in 2015, or if Congress were to establish a similar program due to a future housing crisis, it would be useful at that time for Treasury to develop a program evaluation design that would allow the agency to assess overall program performance, as well as assess performance across HFAs and program types.
We are making the following two recommendations to Treasury: The Assistant Secretary for Financial Institutions should annually collect and evaluate HFAs' risk assessments, which include HFAs' risk levels. (Recommendation 1) The Assistant Secretary for Financial Institutions should ensure that the documentation listing the HFA staff responsible for internal control execution is updated routinely. (Recommendation 2) We provided a draft of this report to Treasury for review and comment. In its comments, reproduced in appendix IV, Treasury agreed with our recommendations and stated that it has already taken steps toward addressing them by enhancing the existing review procedures for HFAs' risk assessments and staffing updates. Treasury also provided a technical comment, which we incorporated. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Treasury, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. We will make copies available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or ortiza@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are listed on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. The objectives of this report were to (1) determine the extent to which the Department of the Treasury's (Treasury) monitoring of the Hardest Hit Fund (HHF) addresses leading practices for program oversight, (2) provide information on housing finance agencies' (HFA) active programs and the status of HFAs' progress toward program targets, and (3) describe challenges in implementing HHF programs that HFAs and others identified.
To determine the extent to which Treasury's monitoring of HHF addresses leading practices for program oversight, we used a scorecard methodology to compare Treasury's monitoring policies and procedures, as implemented by 2016, against leading practices for an effective monitoring framework. To create the framework, we reviewed key reports and guidance related to monitoring, oversight, and performance management. In particular we reviewed relevant leading practices from internal control standards; previous GAO work on results-oriented performance goals and measures, key attributes for successful performance measures, characteristics for successful hierarchies of performance measures, and managing fraud risk; and Office of Management and Budget guidance on oversight. Although Treasury is not required to follow all of the guidance that we identified, we determined that the guidance describes practices that are helpful for creating an effective monitoring framework. To select the practices for the scorecard, we focused on practices relevant to the structure of an oversight framework (including fraud risk); performance measures; goal setting; and communication with external parties. We reviewed key reports and guidance and then vetted our selected practices with stakeholders knowledgeable about performance measurement, design methodology, fraud risk, and the law. Based on this review and input, we consolidated identified practices into 14 leading practices to apply to Treasury's monitoring framework. We then assessed Treasury's policies and procedures against the framework. Specifically, we reviewed the agency's documented policies and procedures, reviewed documentation of how Treasury followed its policies and procedures, conducted interviews with Treasury staff responsible for overseeing HHF, and interviewed stakeholders, such as mortgage servicers, about Treasury's monitoring of HHF.
We also interviewed staff from four HFAs about Treasury’s monitoring of their programs; we selected the HFAs based on their mix of HHF programs, proportion of HHF funds disbursed, and geographic diversity. We also took into account whether stakeholders indicated that an HFA’s implementation of the program was particularly successful or challenging. With regard to the documentation Treasury collects as part of its monitoring, we limited our review to its 2016 and 2017 monitoring activities, and we limited our review of Treasury’s written policies and procedures to those implemented from January 2016 to September 2018. Two analysts independently reviewed agency policies and procedures to determine whether the policies were consistent with the 14 identified leading practices. Any disagreements in the determinations were resolved through discussion or with a third party, including the General Counsel’s office. We categorized each practice as follows: Addressed: Treasury’s policies and procedures reflect each component of the leading practice. Partially addressed: Treasury’s policies and procedures reflect some but not all components of the leading practice. Not addressed: Treasury’s policies and procedures do not reflect any of the components of the leading practice To describe active HHF programs and the status of HFAs’ progress toward program goals, we reviewed program documents, administered a data collection instrument, and spoke with officials at four HFAs (selected as previously described) and Treasury. We defined active programs as those that had a total allocation approved by Treasury and were accepting applications and still disbursing funds to households or blight elimination projects as of December 2017. In order to identify which programs were active, we developed, collected, and reviewed a questionnaire in which HFAs provided information on when each of their HHF programs started and stopped disbursing funds. 
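The three-level scorecard categorization described above (addressed, partially addressed, not addressed, based on how many components of a leading practice the agency's policies reflect) can be sketched as a simple decision rule. The component counts below are hypothetical; the report does not specify how many components each practice has.

```python
# Hypothetical sketch of the three-level scorecard rating for one leading
# practice, based on the categorization rules described in the methodology.

def rate_practice(components_met: int, components_total: int) -> str:
    """Categorize a leading practice by how many of its components
    the agency's policies and procedures reflect."""
    if components_total <= 0:
        raise ValueError("a practice must have at least one component")
    if components_met == components_total:
        return "Addressed"          # every component reflected
    if components_met > 0:
        return "Partially addressed"  # some but not all components reflected
    return "Not addressed"          # no components reflected

# Usage with invented component counts:
print(rate_practice(3, 3))  # prints Addressed
print(rate_practice(1, 3))  # prints Partially addressed
print(rate_practice(0, 3))  # prints Not addressed
```

Two analysts applying this rule independently, then reconciling disagreements, mirrors the dual-review process the methodology describes.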
For each of the 71 active programs we identified, we reviewed quarterly performance reports as of December 2017 to compile descriptive information such as program outputs and outcomes. Through the review of program documentation and interviews with knowledgeable officials, we found that Treasury's output data were sufficiently reliable for our description of homeowners assisted and properties demolished. We also found that the data Treasury collected from HFAs on program outcomes were not reliable for the purpose of summarizing alternative outcomes by HFA or by program type. Treasury officials noted that the conclusions that can be drawn from alternative outcome data are inherently limited, particularly for the purpose of making comparisons between HFAs or program types, due to HFAs interpreting certain outcome measures differently, among other factors. Additionally, by comparing Treasury's outcome measures to leading practices, we found that their definitions were not clearly stated. We also identified four studies on the impact of HHF blight elimination programs and assessed whether their methodologies were reliable. We determined that one of the four studies was not reliable for the purpose of assessing the impact of blight programs on targeted areas. Two of the three studies that we determined to be reliable had important limitations. One study examined 1 of the 18 counties that were served by that HFA's blight elimination program. The other study did not isolate the independent effect of the HFA's blight elimination program because other factors, such as local economic conditions, could also affect the performance of key economic indicators. We reviewed each HFA's contract with Treasury as of December 2017 to identify each program's target for assisting homeowners or demolishing blighted properties.
Through comparison with internal control standards, we found that these targets were not reliable for the purpose of describing HFAs’ progress toward program goals because they were not stated in a form that permitted reasonably consistent measurement. To describe the factors Treasury identified as challenges for the HHF program, we analyzed Treasury’s on-site compliance monitoring reports for 2016 and 2017. As a part of our analysis, we identified the HFAs that Treasury visited in 2016 and 2017 and the extent to which Treasury had observations related to five Treasury-identified areas: monitoring processes and internal controls, eligibility, program expenses and income, administrative expenses, and reporting. We also interviewed key stakeholders regarding their views of challenges related to implementation of the HHF program, particularly since 2012. We discussed challenges with Treasury staff with responsibilities for monitoring HFAs’ implementation of the program; staff from four HFAs that are implementing HHF programs; six mortgage servicers that are involved with the HHF program; and two housing counseling agencies that are involved with the HHF program. For two of the HFAs with blight elimination programs, we conducted site visits to observe activities related to blight elimination. Additionally, we discussed challenges with other interested organizations, including an association for HFAs and an organization that brings together housing counselors, mortgage companies, investors, and other mortgage market participants to help address challenges in the mortgage market. Further, we reviewed reports issued by the Special Inspector General for the Troubled Asset Relief Program. We summarized the challenges that stakeholders described. We conducted this performance audit from November 2017 through December 2018 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To determine the extent to which the Department of the Treasury's (Treasury) policies and procedures for monitoring and oversight address leading monitoring practices, we identified factors for an effective monitoring framework based on a review of key reports and guidance and input from stakeholders knowledgeable about performance measurement, design methodology, fraud risk, and the law. To select the practices for the scorecard, we focused on factors relevant to the structure of an oversight framework (including fraud risk); performance measures; goal setting; and communication with external parties. We consolidated identified factors into 14 leading practices to apply to Treasury's oversight and monitoring framework. See table 3 for the 14 leading practices and their underlying factors. As shown in table 4, housing finance agencies (HFA) were implementing from one to seven Hardest Hit Fund (HHF) programs (excluding blight programs) as of the fourth quarter of 2017. We included programs for which HFAs were disbursing funds to homeowners. As of December 2017, individual HFAs had assisted from 807 to 86,220 homeowners. Eight HFAs were implementing active blight elimination programs as of December 2017, as shown in table 5. The number of blighted properties demolished by individual HFAs ranged from 0 to 13,925. The Department of the Treasury's 2017 utilization threshold required that HFAs draw at least 95 percent of their HHF funding from rounds one through four by December 31, 2017 (see table 6). As of December 2017, 17 of 18 HFAs had drawn 95 percent or more of their funding from rounds one through four.
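The 95 percent utilization check described above amounts to a simple ratio test. The HFA names and dollar amounts below are hypothetical, invented for illustration; only the 95 percent threshold comes from the report.

```python
# Illustrative sketch of Treasury's 2017 utilization threshold: at least
# 95 percent of rounds 1-4 funding drawn by the deadline. All HFA names
# and dollar amounts here are hypothetical.

THRESHOLD = 0.95  # required share of rounds 1-4 funding drawn

def utilization(drawn: float, allocated: float) -> float:
    """Fraction of allocated rounds 1-4 funding that an HFA has drawn."""
    return drawn / allocated

# Hypothetical HFAs: (amount drawn, amount allocated for rounds 1-4).
hfas = {
    "HFA A": (190_000_000, 200_000_000),  # 95% drawn
    "HFA B": (140_000_000, 200_000_000),  # 70% drawn
}

for name, (drawn, allocated) in hfas.items():
    share = utilization(drawn, allocated)
    status = "meets" if share >= THRESHOLD else "below"
    print(f"{name}: {share:.0%} drawn ({status} threshold)")
```

Under these invented figures, HFA A meets the threshold while HFA B falls below it, analogous to the one HFA the report notes had drawn only 70 percent of its rounds one through four funding.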
The Nevada HFA had drawn 70 percent of its funding from rounds one through four. In addition to the contact named above, Jill Naamane, Assistant Director; Lisa Moore, Analyst in Charge; Vida Awumey; Farrah Graham; John Karikari; Moira Lenox; Benjamin Licht; Dan Luo; John McGrail; Marc Molino; Jennifer Schwartz; Shannon Smith; Estelle Tsay-Huang; and Erin Villas made key contributions to this report.
|
Treasury established the HHF program in 2010 to help stabilize the housing market and assist homeowners facing foreclosure in the states hardest hit by the housing crisis. Through HHF, Treasury has obligated a total of $9.6 billion in Troubled Asset Relief Program funds to 19 state HFAs. HFAs use funds to implement programs that address foreclosure and help stabilize local housing markets—for example, by demolishing blighted properties. Congress extended HHF in 2015, and HFAs must disburse all HHF funds by December 31, 2021, or return them to Treasury. The Emergency Economic Stabilization Act of 2008 included a provision for GAO to report on Troubled Asset Relief Program activities. This report focuses on the HHF program and examines, among other objectives, (1) the extent to which Treasury's monitoring addresses leading practices for program oversight and (2) HFAs' progress toward program targets. GAO reviewed documentation of Treasury's HHF monitoring practices, interviewed HFAs (selected based on differences in program types implemented) and Treasury officials, and reviewed information on how HFAs developed program targets. For its Housing Finance Agency Innovation Fund for Hardest Hit Markets (HHF), the Department of the Treasury (Treasury) has addressed or partially addressed all 14 leading monitoring practices that GAO identified. For example, Treasury periodically collects performance data from housing finance agencies (HFA) and analyzes and validates these data. However, while Treasury requires HFAs to regularly assess the risks of their programs, it does not systematically collect or analyze these assessments. As a result, Treasury is missing an opportunity to ensure that HFAs are appropriately assessing their risk. Also, Treasury does not require HFAs to consistently document which of their staff are responsible for internal control execution. This documentation could help HFAs wind down their programs, particularly as staff turn over.
Most HFAs met Treasury's goals for drawing down HHF funds, with $9.1 billion disbursed to HFAs as of September 2018. HHF programs have assisted hundreds of thousands of distressed homeowners since 2010. However, the data Treasury has collected are of limited use for determining how well HFAs met their goals for assisting households and demolishing blighted properties, or for evaluating the HHF program overall. For example, Treasury did not develop a consistent methodology for HFAs to use when setting performance targets, which limits Treasury's ability to compare across programs or assess the HHF program as a whole. Further, GAO's guide to designing evaluations states that where federal programs operate through multiple local public or private agencies, it is important that the data these agencies collect are sufficiently consistent to permit aggregation nationwide. Although HFAs have until the end of 2021 to disburse their HHF funds, many programs are beginning to close, making it too late for meaningful changes to Treasury's approach to performance measurement. However, should Congress authorize Treasury to extend the program beyond December 2021 or establish a similar program in the future, it would be useful at that time for Treasury to develop a program evaluation design that would allow the agency to assess overall program performance, as well as performance across HFAs and program types. GAO recommends that Treasury collect and evaluate HFAs' risk assessments and routinely update staffing documentation. Treasury agreed with these recommendations and stated that it has already taken steps toward addressing them.
IT systems supporting federal agencies and our nation’s critical infrastructures are inherently at risk. These systems are highly complex and dynamic, technologically diverse, and often geographically dispersed. This complexity increases the difficulty in identifying, managing, and protecting the numerous operating systems, applications, and devices comprising the systems and networks. Compounding the risk, federal systems and networks are also often interconnected with other internal and external systems and networks, including the Internet. This increases the number of avenues of attack and expands their attack surface. As systems become more integrated, cyber threats will pose an increasing risk to national security, economic well-being, and public health and safety. Advancements in technology, such as data analytics software for searching and collecting information, have also made it easier for individuals and organizations to correlate data (including personally identifiable information (PII)) and track it across large and numerous databases. For example, social media has been used as a mass communication tool where PII can be gathered in vast amounts. In addition, ubiquitous Internet and cellular connectivity makes it easier to track individuals by allowing easy access to information pinpointing their locations. These advances—combined with the increasing sophistication of hackers and others with malicious intent, and the extent to which both federal agencies and private companies collect sensitive information about individuals—have increased the risk of PII being exposed and compromised. Cybersecurity incidents continue to impact entities across various critical infrastructure sectors. For example, in its 2018 annual data breach investigations report, Verizon reported that 53,308 security incidents and 2,216 data breaches were identified across 65 countries in the 12 months since its prior report.
Further, the report noted that cybercriminals can often compromise a system in just a matter of minutes—or even seconds—but that it can take an organization significantly longer to discover the breach. Specifically, the report stated that nearly 90 percent of the reported breaches occurred within minutes, while nearly 70 percent went undiscovered for months. These concerns are further highlighted by the number of information security incidents reported by federal executive branch civilian agencies to DHS’s U.S. Computer Emergency Readiness Team (US-CERT). For fiscal year 2017, the Office of Management and Budget (OMB) reported in its 2018 annual report to Congress, as mandated by the Federal Information Security Modernization Act (FISMA), that agencies had reported 35,277 such incidents. These incidents include, for example, web-based attacks, phishing, and the loss or theft of computing equipment. Different types of incidents merit different response strategies. However, if an agency cannot identify the threat vector (or avenue of attack), it could be difficult for that agency to define more specific handling procedures to respond to the incident and take actions to minimize similar future attacks. In this regard, incidents with a threat vector categorized as “other” (which includes unidentified avenues of attack) made up 31 percent of the incidents reported to US-CERT. Figure 1 shows the percentage of the different types of incidents reported across each of the nine threat vector categories for fiscal year 2017, as reported by OMB. These incidents and others like them can pose a serious challenge to economic, national, and personal privacy and security. The following examples highlight the impact of such incidents: In March 2018, the Mayor of Atlanta, Georgia, reported that the city was victimized by a ransomware cyberattack.
As a result, city government officials stated that customers were not able to access multiple applications that are used to pay bills or access court-related information. In response to the attack, the officials noted that they were working with numerous private and governmental partners, including DHS, to assess what occurred and determine how best to protect the city from future attacks. In March 2018, the Department of Justice reported that it had indicted nine Iranians for conducting a massive cyber theft campaign on behalf of the Islamic Revolutionary Guard Corps. According to the department, the nine Iranians allegedly stole more than 31 terabytes of documents and data from more than 140 American universities, 30 U.S. companies, and five federal government agencies, among other entities. In March 2018, a joint alert from DHS and the Federal Bureau of Investigation (FBI) stated that, since at least March 2016, Russian government actors had targeted the systems of multiple U.S. government entities and critical infrastructure sectors. Specifically, the alert stated that Russian government actors had affected multiple organizations in the energy, nuclear, water, aviation, construction, and critical manufacturing sectors. In July 2017, a breach at Equifax resulted in the loss of PII for an estimated 148 million U.S. consumers. According to Equifax, the hackers accessed people’s names, Social Security numbers (SSN), birth dates, addresses and, in some instances, driver’s license numbers. In April 2017, the Commissioner of the Internal Revenue Service (IRS) testified that the IRS had disabled its data retrieval tool in early March 2017 after becoming concerned about the misuse of taxpayer data. Specifically, the agency suspected that PII obtained outside the agency’s tax system was used to access the agency’s online federal student aid application in an attempt to secure tax information through the data retrieval tool.
In April 2017, the agency began notifying taxpayers who could have been affected by the breach. In June 2015, the Office of Personnel Management (OPM) reported that an intrusion into its systems had affected the personnel records of about 4.2 million current and former federal employees. Then, in July 2015, the agency reported that a separate, but related, incident had compromised its systems and the files related to background investigations for 21.5 million individuals. In total, OPM estimated 22.1 million individuals had some form of PII stolen, with 3.6 million being victims of both breaches. Safeguarding federal IT systems and the systems that support critical infrastructures has been a long-standing concern of GAO. Due to increasing cyber-based threats and the persistent nature of information security vulnerabilities, we have designated information security as a government-wide high-risk area since 1997. In 2003, we expanded the information security high-risk area to include the protection of critical cyber infrastructure. At that time, we highlighted the need to manage critical infrastructure protection activities that enhance the security of the cyber and physical public and private infrastructures that are essential to national security, national economic security, and/or national public health and safety. We further expanded the information security high-risk area in 2015 to include protecting the privacy of PII. Since then, advances in technology have enhanced the ability of government and private sector entities to collect and process extensive amounts of PII, which has posed challenges to ensuring the privacy of such information. In addition, high-profile PII breaches at commercial entities, such as Equifax, heightened concerns that personal privacy is not being adequately protected.
Our experience has shown that the key elements needed to make progress toward being removed from the High-Risk List are top-level attention by the administration and agency leaders grounded in the five criteria for removal, as well as any needed congressional action. The five criteria for removal that we identified in November 2000 are as follows:
- Leadership Commitment. Demonstrated strong commitment and top leadership support.
- Capacity. The agency has the capacity (i.e., people and resources) to resolve the risk(s).
- Action Plan. A corrective action plan exists that defines the root cause and solutions, and provides for substantially completing corrective measures, including steps necessary to implement solutions we recommended.
- Monitoring. A program has been instituted to monitor and independently validate the effectiveness and sustainability of corrective measures.
- Demonstrated Progress. Ability to demonstrate progress in implementing corrective measures and in resolving the high-risk area.
These five criteria form a road map for efforts to improve and ultimately address high-risk issues. Addressing some of the criteria leads to progress, while satisfying all of the criteria is central to removal from the list. Figure 2 shows the five criteria and illustrative actions taken by agencies to address the criteria. Importantly, the actions listed are not “stand alone” efforts taken in isolation from other actions to address high-risk issues. That is, actions taken under one criterion may be important to meeting other criteria as well. For example, top leadership can demonstrate its commitment by establishing a corrective action plan including long-term priorities and goals to address the high-risk issue and using data to gauge progress—actions that are also vital to the monitoring criterion.
As we reported in the February 2017 high-risk report, the federal government’s efforts to address information security deficiencies had fully met one of the five criteria for removal from the High-Risk List—leadership commitment—and partially met the other four, as shown in figure 3. We plan to update our assessment of this high-risk area against the five criteria in February 2019. Based on our prior work, we have identified four major cybersecurity challenges: (1) establishing a comprehensive cybersecurity strategy and performing effective oversight, (2) securing federal systems and information, (3) protecting cyber critical infrastructure, and (4) protecting privacy and sensitive data. To address these challenges, we have identified 10 critical actions that the federal government and other entities need to take (see figure 4). The four challenges and the 10 actions needed to address them are summarized below, and each of the 10 actions is discussed in more detail in appendices II through XI. The federal government has been challenged in establishing a comprehensive cybersecurity strategy and in performing effective oversight as called for by federal law and policy. Specifically, we have previously reported that the federal government has faced challenges in establishing a comprehensive strategy to provide a framework for how the United States will engage both domestically and internationally on cybersecurity-related matters. We have also reported on challenges in performing oversight, including monitoring the global supply chain, ensuring a highly skilled cyber workforce, and addressing risks associated with emerging technologies. The federal government can take four key actions to improve the nation’s strategic approach to, and oversight of, cybersecurity. Develop and execute a more comprehensive federal strategy for national cybersecurity and global cyberspace.
In February 2013, we reported that the government had issued a variety of strategy-related documents that addressed priorities for enhancing cybersecurity within the federal government, as well as for encouraging improvements in the cybersecurity of critical infrastructure within the private sector. However, no overarching cybersecurity strategy had been developed that articulated priority actions, assigned responsibilities for performing them, and set time frames for their completion. In October 2015, in response to our recommendation to develop an overarching federal cybersecurity strategy that included all key elements of the desirable characteristics of a national strategy, the Director of OMB and the Federal Chief Information Officer issued a Cybersecurity Strategy and Implementation Plan for the Federal Civilian Government. The plan directed a series of actions to improve capabilities for identifying and detecting vulnerabilities and threats, enhance protections of government assets and information, and further develop robust response and recovery capabilities to ensure readiness and resilience when incidents inevitably occur. The plan also identified key milestones for major activities, resources needed to accomplish milestones, and specific roles and responsibilities of federal organizations related to the strategy’s milestones. Since that time, the executive branch has made progress toward outlining a federal strategy for confronting cyber threats. For example, a May 2017 presidential executive order required federal agencies to take a variety of actions, including better managing their cybersecurity risks and coordinating to meet reporting requirements related to the cybersecurity of federal networks, critical infrastructure, and the nation.
Additionally, the December 2017 National Security Strategy cites cybersecurity as a national priority and identifies related needed actions, such as identifying and prioritizing risk and building defensible government networks. Further, DHS issued a cybersecurity strategy in May 2018, which articulated seven goals the department plans to accomplish in support of its mission related to managing national cybersecurity risks. The strategy is intended to provide DHS with a framework to execute its cybersecurity responsibilities during the next 5 years to keep pace with the evolving cyber risk landscape by reducing vulnerabilities and building resilience; countering malicious actors in cyberspace; responding to incidents; and making the cyber ecosystem more secure and resilient. These efforts provide a good foundation toward establishing a more comprehensive strategy, but more effort is needed to address all of the desirable characteristics of a national strategy that we have previously recommended. The recently issued executive branch strategy documents did not include key elements of desirable characteristics that can enhance the usefulness of a national strategy as guidance for decision makers in allocating resources, defining policies, and helping to ensure accountability. Specifically, the documents generally did not include milestones and performance measures to gauge results, nor did they describe the resources needed to carry out the goals and objectives. Further, most of the strategy documents lacked clearly defined roles and responsibilities for key agencies, such as DHS, the Department of Defense (DOD), and OMB, which contribute substantially to the nation’s cybersecurity programs. Ultimately, a more clearly defined, coordinated, and comprehensive approach to planning and executing an overall strategy would likely lead to significant progress in furthering strategic goals and lessening persistent weaknesses.
For more information on this action area, see appendix II. Mitigate global supply chain risks. The global, geographically dispersed nature of the producers and suppliers of IT products is a growing concern. We have previously reported on potential issues associated with the IT supply chain and risks originating from foreign-manufactured equipment. For example, in July 2017, we reported that the Department of State had relied on certain device manufacturers, software developers, and contractor support whose suppliers were reported to be headquartered in cyber-threat nations (e.g., China and Russia). We further pointed out that reliance on complex, global IT supply chains introduces multiple risks to federal agencies, including insertion of counterfeits, tampering, or installation of malicious software or hardware. In July 2018, we testified that if such global IT supply chain risks are realized, they could jeopardize the confidentiality, integrity, and availability of federal information systems. Thus, the potential exists for serious adverse impact on an agency’s operations, assets, and employees. These factors highlight the importance and urgency of federal agencies appropriately assessing, managing, and monitoring IT supply chain risk as part of their agency-wide information security programs. For more information on this action area, see appendix III. Address cybersecurity workforce management challenges. The federal government faces challenges in ensuring that the nation’s cybersecurity workforce has the appropriate skills. For example, in June 2018, we reported on federal efforts to implement the requirements of the Federal Cybersecurity Workforce Assessment Act of 2015. We determined that most of the Chief Financial Officers (CFO) Act agencies had not fully implemented all statutory requirements, such as developing procedures for assigning codes to cybersecurity positions.
Further, we have previously reported that DHS and DOD had not addressed cybersecurity workforce management requirements set forth in federal laws. In addition, we have reported in the last 2 years that federal agencies (1) had not identified and closed cybersecurity skills gaps, (2) had been challenged with recruiting and retaining qualified staff, and (3) had difficulty navigating the federal hiring process. A recent executive branch report also discussed challenges associated with the cybersecurity workforce. Specifically, in response to Executive Order 13800, the Department of Commerce and DHS led an interagency working group exploring how to support the growth and sustainment of future cybersecurity employees in the public and private sectors. In May 2018, the departments issued a report that identified key findings, including:
- the U.S. cybersecurity workforce needs immediate and sustained improvements;
- the pool of cybersecurity candidates needs to be expanded through retraining and by increasing the participation of women, minorities, and veterans;
- a shortage exists of cybersecurity teachers at the primary and secondary levels, faculty in higher education, and training instructors; and
- comprehensive and reliable data about cybersecurity workforce position needs and education and training programs are lacking.
The report also included recommendations and proposed actions to address the findings, including that the private and public sectors should (1) align education and training with employers’ cybersecurity workforce needs by applying the National Initiative for Cybersecurity Education Cybersecurity Workforce Framework; (2) develop cybersecurity career model paths; and (3) establish a clearinghouse of information on cybersecurity education, training, and workforce development programs and initiatives.
In addition, in June 2018, the executive branch issued a government reform plan and reorganization recommendations that included, among other things, proposals for solving the federal cybersecurity workforce shortage. In particular, the plan notes that the administration intends to prioritize and accelerate ongoing efforts to reform the way that the federal government recruits, evaluates, selects, pays, and places cyber talent across the enterprise. The plan further states that, by the end of the first quarter of fiscal year 2019, all CFO Act agencies, in coordination with DHS and OMB, are to develop a list of critical vacancies across their organizations. Subsequently, OMB and DHS are to analyze these lists and work with OPM to develop a government-wide approach to identifying or recruiting new employees or reskilling existing employees. Regarding cybersecurity training, the plan notes that OMB is to consult with DHS to standardize training for cybersecurity employees and to develop an enterprise-wide training process for government cybersecurity employees. For more information on this action area, see appendix IV. Ensure the security of emerging technologies. As the devices used in daily life become increasingly integrated with technology, the risk to sensitive data and PII also grows.
Over the last several years, we have reported on weaknesses in addressing vulnerabilities associated with emerging technologies, including:
- Internet of Things (IoT) devices, such as fitness trackers, cameras, and thermostats, that continuously collect and process information and are potentially vulnerable to cyber-attacks;
- IoT devices acquired and used by DOD employees, or that DOD itself acquires (e.g., smartphones), which may increase the security risks to the department;
- vehicles that are potentially susceptible to cyber-attack through technology, such as Bluetooth;
- the unknown impact of artificial intelligence on cybersecurity; and
- advances in cryptocurrencies and blockchain technologies.
Executive branch agencies have also highlighted the challenges associated with ensuring the security of emerging technologies. Specifically, in May 2018, in response to Executive Order 13800, the Department of Commerce and DHS issued a report on the opportunities and challenges in reducing the botnet threat. The opportunities and challenges are centered on six principal themes, including the global nature of automated, distributed attacks; effective tools; and awareness and education. The report also provides recommended actions, including that federal agencies should increase their understanding of what software components have been incorporated into acquired products and establish a public campaign to support awareness of IoT security. For more information on this action area, see appendix V. In our previously discussed reports related to this cybersecurity challenge, we made a total of 50 recommendations to federal agencies to address the weaknesses identified. As of August 2018, 48 recommendations had not been implemented. These outstanding recommendations include 8 priority recommendations, meaning that we believe they warrant priority attention from the heads of key departments and agencies.
These priority recommendations include addressing weaknesses associated with, among other things, agency-specific cybersecurity workforce challenges and agency responsibilities for supporting mitigation of vehicle network attacks. Until our recommendations are fully implemented, federal agencies may be limited in their ability to provide effective oversight of critical government-wide initiatives, address challenges with cybersecurity workforce management, and better ensure the security of emerging technologies. In addition to our prior work related to the federal government’s efforts to establish key strategy documents and implement effective oversight, we also have several ongoing reviews related to this challenge. These include reviews of:
- the CFO Act agencies’ efforts to submit complete and reliable baseline assessment reports of their cybersecurity workforces;
- the extent to which DOD has established training standards for cyber mission force personnel, and efforts the department has made to achieve its goal of a trained cyber mission force; and
- selected agencies’ ability to implement cloud service technologies and the notable benefits this might have for agencies.
The federal government has been challenged in securing federal systems and information. Specifically, we have reported that federal agencies have experienced challenges in implementing government-wide cybersecurity initiatives, addressing weaknesses in their information systems, and responding to cyber incidents on their systems. This is particularly concerning given that the emergence of increasingly sophisticated threats and the continuous reporting of cyber incidents underscore the continuing and urgent need for effective information security. As such, it is important that federal agencies take appropriate steps to better ensure they have effectively implemented programs to protect their information and systems. We have identified three actions that the agencies can take.
Improve implementation of government-wide cybersecurity initiatives. Specifically, in January 2016, we reported that DHS had not ensured that the National Cybersecurity Protection System (NCPS) had fully satisfied all intended system objectives related to intrusion detection and prevention, information sharing, and analytics. In addition, in February 2017, we reported that the DHS National Cybersecurity and Communications Integration Center’s (NCCIC) functions were not being performed in adherence with the principles set forth in federal laws. We noted that, although NCCIC was sharing information about cyber threats as it should, the center did not have metrics to measure whether the information was timely, relevant, and actionable, as prescribed by law. For more information on this action area, see appendix VI. Address weaknesses in federal information security programs. We have previously identified a number of weaknesses in agencies’ protection of their information and information systems.
For example, over the past 2 years, we have reported that:
- most of the 24 agencies covered by the CFO Act had weaknesses in each of the five major categories of information system controls (i.e., access controls, configuration management controls, segregation of duties, contingency planning, and agency-wide security management);
- three agencies—the Securities and Exchange Commission, the Federal Deposit Insurance Corporation, and the Food and Drug Administration—had not effectively implemented aspects of their information security programs, which resulted in weaknesses in these agencies’ security controls;
- information security weaknesses in selected high-impact systems at four agencies—the National Aeronautics and Space Administration, the Nuclear Regulatory Commission, OPM, and the Department of Veterans Affairs—were cited as a key reason that the agencies had not effectively implemented elements of their information security programs;
- DOD’s process for monitoring the implementation of cybersecurity guidance had weaknesses that resulted in the closure of certain tasks (such as completing cyber risk assessments) before they were fully implemented; and
- agencies had not fully defined the role of their Chief Information Security Officers, as required by FISMA.
We also recently testified that, although the government had acted to protect federal information systems, additional work was needed to improve agency security programs and cyber capabilities. In particular, we noted that further efforts were needed by agencies to implement our prior recommendations in order to strengthen their information security programs and technical controls over their computer networks and systems. For more information on this action area, see appendix VII. Enhance the federal response to cyber incidents. We have reported that certain agencies have had weaknesses in responding to cyber incidents.
For example:
- as of August 2017, OPM had not fully implemented controls to address deficiencies identified as a result of its 2015 cyber incidents;
- DOD had not identified the National Guard’s cyber capabilities (e.g., computer network defense teams) or addressed challenges in its exercises;
- as of April 2016, DOD had not identified, clarified, or implemented all components of its support of civil authorities during cyber incidents; and
- as of January 2016, DHS’s NCPS had limited capabilities for detecting and preventing intrusions, conducting analytics, and sharing information.
For more information on this action area, see appendix VIII. In the public versions of the reports previously discussed for this challenge area, we made a total of 101 recommendations to federal agencies to address the weaknesses identified. As of August 2018, 61 recommendations had not been implemented. These outstanding recommendations include 14 priority recommendations to address weaknesses associated with, among other things, the information security programs at the National Aeronautics and Space Administration, OPM, and the Securities and Exchange Commission. Until these recommendations are implemented, these federal agencies will be limited in their ability to ensure the effectiveness of their programs for protecting information and systems. In addition to our prior work, we also have several ongoing reviews related to the federal government’s efforts to protect its information and systems.
These include reviews of:
- Federal Risk and Authorization Management Program (FedRAMP) implementation, including an assessment of the implementation of the program’s authorization process for protecting federal data in cloud environments;
- the Equifax data breach, including an assessment of federal oversight of credit reporting agencies’ collection, use, and protection of consumer PII;
- the Federal Communications Commission’s Electronic Comment Filing System security, including a review of the agency’s detection of and response to a May 2017 incident that reportedly impacted the system;
- DOD’s efforts to improve the cybersecurity of its major weapon systems;
- DOD’s whistleblower program, including an assessment of the policies, procedures, and controls related to the access and storage of sensitive and classified information needed for the program;
- IRS’s efforts to (1) implement security controls and the agency’s information security program, (2) authenticate taxpayers, and (3) secure tax information; and
- the federal approach and strategy to securing agency information systems, including federal intrusion detection and prevention capabilities and the intrusion assessment plan.
The federal government has been challenged in working with the private sector to protect critical infrastructure. This infrastructure includes both public and private systems vital to national security and other efforts, such as providing the essential services that underpin American society. As the cybersecurity threat to these systems continues to grow, so does the risk to the millions of sensitive records that federal agencies must protect. A successful attack on critical infrastructure could have national security implications, and more effort is needed to ensure that this infrastructure is not breached.
To help address this issue, the National Institute of Standards and Technology (NIST) developed the cybersecurity framework—a voluntary set of cybersecurity standards and procedures for industry to adopt as a means of taking a risk-based approach to managing cybersecurity. However, additional action is needed to strengthen the federal role in protecting the critical infrastructure. Specifically, we have reported on other critical infrastructure protection issues that need to be addressed. For example:
- DHS did not track vulnerability reduction from the implementation and verification of planned security measures at the high-risk chemical facilities that engage with the department, as a basis for assessing performance.
- Entities within the 16 critical infrastructure sectors reported encountering four challenges to adopting the cybersecurity framework, such as being limited in their ability to commit necessary resources toward framework adoption and not having the necessary knowledge and skills to effectively implement the framework.
- DOD and the Federal Aviation Administration identified a variety of operations and physical security risks that could adversely affect DOD missions.
- Major challenges existed to securing the electricity grid against cyber threats. These challenges included monitoring implementation of cybersecurity standards, ensuring security features are built into smart grid systems, and establishing metrics for cybersecurity.
- DHS and other agencies needed to enhance cybersecurity in the maritime environment. Specifically, DHS did not include cyber risks in its existing risk assessments, nor did it address cyber risks in guidance for port security plans.
- Sector-specific agencies were not adequately measuring their sectors’ progress in cybersecurity or developing metrics to do so.
For more information on this action area, see appendix IX. We made a total of 21 recommendations to federal agencies to address these weaknesses and others.
These recommendations include, for example, a total of 9 recommendations to 9 sector-specific agencies to develop methods for determining the level and type of cybersecurity framework adoption across their respective sectors. As of August 2018, none of the 21 recommendations had been implemented. Until these recommendations are implemented, the federal government will continue to be challenged in fulfilling its role in protecting the nation’s critical infrastructure. In addition to our prior work related to the federal government’s efforts to protect critical infrastructure, we also have several ongoing reviews focusing on: the physical and cybersecurity risks to pipelines across the country responsible for transmitting oil, natural gas, and other hazardous liquids; the cybersecurity risks to the electric grid; and the privatization of utilities at DOD installations. The federal government has been challenged in protecting privacy and sensitive data. Advances in technology, including powerful search technology and data analytics software, have made it easy to correlate information about individuals across large and numerous databases, which have become very inexpensive to maintain. In addition, ubiquitous Internet connectivity has facilitated sophisticated tracking of individuals and their activities through mobile devices such as smartphones and fitness trackers. Given that access to data is so pervasive, personal privacy hinges on ensuring that databases of PII maintained by government agencies or on their behalf are protected both from inappropriate access (i.e., data breaches) as well as inappropriate use (i.e., for purposes not originally specified when the information was collected). Likewise, the trend in the private sector of collecting extensive and detailed information about individuals needs appropriate limits. 
The vast number of individuals potentially affected by data breaches at federal agencies and private sector entities in recent years increases concerns that PII is not being properly protected. Federal agencies should take two types of actions to address this challenge area. In addition, we have previously proposed two matters for congressional consideration aimed toward better protecting PII. Improve federal efforts to protect privacy and sensitive data. We have issued several reports noting that agencies had deficiencies in protecting privacy and sensitive data that needed to be addressed. For example: The Department of Health and Human Services’ (HHS) Centers for Medicare and Medicaid Services (CMS) and external entities were at risk of compromising Medicare beneficiary data due to a lack of guidance and proper oversight. The Department of Education’s Office of Federal Student Aid had not properly overseen its school partners’ records or information security programs. HHS had not fully addressed key security elements in its guidance for protecting the security and privacy of electronic health information. CMS had not fully protected the privacy of users’ data on state-based marketplaces. Poor planning and ineffective monitoring had resulted in the unsuccessful implementation of government initiatives aimed at eliminating the unnecessary collection, use, and display of SSNs. For more information on this action area, see appendix X. Appropriately limit the collection and use of personal information and ensure that it is obtained with appropriate knowledge or consent. We have issued a series of reports that highlight a number of the key concerns in this area. 
For example: The emergence of IoT devices can facilitate the collection of information about individuals without their knowledge or consent; federal laws addressing smartphone tracking applications have generally not been well enforced; and the FBI has not fully ensured privacy and accuracy related to its use of face recognition technology. For more information on this action area, see appendix XI. We have previously suggested that Congress consider amending laws, such as the Privacy Act of 1974 and the E-Government Act of 2002, because they may not consistently protect PII. Specifically, we found that while these laws and guidance set minimum requirements for agencies, they may not consistently protect PII in all circumstances of its collection and use throughout the federal government and may not fully adhere to key privacy principles. However, revisions to the Privacy Act and the E-Government Act have not yet been enacted. Further, we also suggested that Congress consider strengthening the consumer privacy framework and review issues such as the adequacy of consumers’ ability to access, correct, and control their personal information, and privacy controls related to new technologies such as web tracking and mobile devices. However, these suggested changes have not yet been enacted. We also made a total of 29 recommendations to federal agencies to address the weaknesses identified. As of August 2018, 28 recommendations had not been implemented. These outstanding recommendations include 6 priority recommendations to address weaknesses associated with, among other things, publishing privacy impact assessments and improving the accuracy of the FBI’s face recognition services. Until these recommendations are implemented, federal agencies will be challenged in their ability to protect privacy and sensitive data and to ensure that their collection and use are appropriately limited. 
In addition to our prior work, we have several ongoing reviews related to protecting privacy and sensitive data. These include reviews of: IRS’s taxpayer authentication efforts, including what steps the agency is taking to monitor and improve its authentication methods; the extent to which the Department of Education’s Office of Federal Student Aid’s policies and procedures for overseeing non-school partners’ protection of federal student aid data align with federal requirements and guidance; data security issues related to credit reporting agencies, including a review of the causes and impacts of the August 2017 Equifax data breach; the extent to which Equifax assessed, responded to, and recovered from its August 2017 data breach; federal agencies’ efforts to remove PII from shared cyber threat indicators; and how the federal government has overseen Internet privacy, including the roles of the Federal Communications Commission and the Federal Trade Commission, and strengths and weaknesses of the current oversight authorities. In conclusion, since 2010, we have made over 3,000 recommendations to agencies aimed at addressing the four cybersecurity challenges. Nevertheless, many agencies continue to be challenged in safeguarding their information systems and information, in part because many of these recommendations have not been implemented. Of the roughly 3,000 recommendations made since 2010, nearly 1,000 had not been implemented as of August 2018. We have also designated 35 as priority recommendations, and as of August 2018, 31 had not been implemented. The federal government and the nation’s critical infrastructure are dependent on IT systems and electronic data, which make them highly vulnerable to a wide and evolving array of cyber-based threats. Securing these systems and data is vital to the nation’s security, prosperity, and well-being. 
Nevertheless, the security over these systems and data is inconsistent and urgent actions are needed to address ongoing cybersecurity and privacy challenges. Specifically, the federal government needs to implement a more comprehensive cybersecurity strategy and improve its oversight, including maintaining a qualified cybersecurity workforce; address security weaknesses in federal systems and information and enhance cyber incident response efforts; bolster the protection of cyber critical infrastructure; and prioritize efforts to protect individuals’ privacy and PII. Until our recommendations are addressed and actions are taken to address the four challenges we identified, the federal government, the national critical infrastructure, and the personal information of U.S. citizens will be increasingly susceptible to the multitude of cyber-related threats that exist. We are sending copies of this report to the appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Nick Marinos at (202) 512-9342 or marinosn@gao.gov or Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XII. Critical Infrastructure Protection: DHS Should Take Actions to Measure Reduction in Chemical Facility Vulnerability and Share Information with First Responders. GAO-18-538. Washington, D.C.: August 8, 2018. High-Risk Series: Urgent Actions Are Needed to Address Cybersecurity Challenges Facing the Nation. GAO-18-645T. Washington, D.C.: July 25, 2018. Information Security: Supply Chain Risks Affecting Federal Agencies. GAO-18-667T. Washington, D.C.: July 12, 2018. 
Information Technology: Continued Implementation of High-Risk Recommendations Is Needed to Better Manage Acquisitions, Operations, and Cybersecurity. GAO-18-566T. Washington, D.C.: May 23, 2018. Cybersecurity: DHS Needs to Enhance Efforts to Improve and Promote the Security of Federal and Private-Sector Networks. GAO-18-520T. Washington, D.C.: April 24, 2018. Electronic Health Information: CMS Oversight of Medicare Beneficiary Data Security Needs Improvement. GAO-18-210. Washington, D.C.: March 6, 2018. Technology Assessment: Artificial Intelligence: Emerging Opportunities, Challenges, and Implications. GAO-18-142SP. Washington, D.C.: March 28, 2018. GAO Strategic Plan 2018-2023: Trends Affecting Government and Society. GAO-18-396SP. Washington, D.C.: February 22, 2018. Critical Infrastructure Protection: Additional Actions Are Essential for Assessing Cybersecurity Framework Adoption. GAO-18-211. Washington, D.C.: February 15, 2018. Cybersecurity Workforce: Urgent Need for DHS to Take Actions to Identify Its Position and Critical Skill Requirements. GAO-18-175. Washington, D.C.: February 6, 2018. Homeland Defense: Urgent Need for DOD and FAA to Address Risks and Improve Planning for Technology That Tracks Military Aircraft. GAO-18-177. Washington, D.C.: January 18, 2018. Federal Student Aid: Better Program Management and Oversight of Postsecondary Schools Needed to Protect Student Information. GAO-18-121. Washington, D.C.: December 15, 2017. Defense Civil Support: DOD Needs to Address Cyber Incident Training Requirements. GAO-18-47. Washington, D.C.: November 30, 2017. Federal Information Security: Weaknesses Continue to Indicate Need for Effective Implementation of Policies and Practices. GAO-17-549. Washington, D.C.: September 28, 2017. Information Security: OPM Has Improved Controls, but Further Efforts Are Needed. GAO-17-614. Washington, D.C.: August 3, 2017. 
Defense Cybersecurity: DOD’s Monitoring of Progress in Implementing Cyber Strategies Can Be Strengthened. GAO-17-512. Washington, D.C.: August 1, 2017. State Department Telecommunications: Information on Vendors and Cyber-Threat Nations. GAO-17-688R. Washington, D.C.: July 27, 2017. Internet of Things: Enhanced Assessments and Guidance Are Needed to Address Security Risks in DOD. GAO-17-668. Washington, D.C.: July 27, 2017. Information Security: SEC Improved Control of Financial Systems but Needs to Take Additional Actions. GAO-17-469. Washington, D.C.: July 27, 2017. Information Security: Control Deficiencies Continue to Limit IRS’s Effectiveness in Protecting Sensitive Financial and Taxpayer Data. GAO-17-395. Washington, D.C.: July 26, 2017. Social Security Numbers: OMB Actions Needed to Strengthen Federal Efforts to Limit Identity Theft Risks by Reducing Collection, Use, and Display. GAO-17-553. Washington, D.C.: July 25, 2017. Information Security: FDIC Needs to Improve Controls over Financial Systems and Information. GAO-17-436. Washington, D.C.: May 31, 2017. Technology Assessment: Internet of Things: Status and implications of an increasingly connected world. GAO-17-75. Washington, D.C.: May 15, 2017. Cybersecurity: DHS’s National Integration Center Generally Performs Required Functions but Needs to Evaluate Its Activities More Completely. GAO-17-163. Washington, D.C.: February 1, 2017. High-Risk Series: An Update. GAO-17-317. Washington, D.C.: February 2017. IT Workforce: Key Practices Help Ensure Strong Integrated Program Teams; Selected Departments Need to Assess Skill Gaps. GAO-17-8. Washington, D.C.: November 30, 2016. Electronic Health Information: HHS Needs to Strengthen Security and Privacy Guidance and Oversight. GAO-16-771. Washington, D.C.: September 26, 2016. Defense Civil Support: DOD Needs to Identify National Guard’s Cyber Capabilities and Address Challenges in Its Exercises. GAO-16-574. Washington, D.C.: September 6, 2016. 
Information Security: FDA Needs to Rectify Control Weaknesses That Place Industry and Public Health Data at Risk. GAO-16-513. Washington, D.C.: August 30, 2016. Federal Chief Information Security Officers: Opportunities Exist to Improve Roles and Address Challenges to Authority. GAO-16-686. Washington, D.C.: August 26, 2016. Federal Hiring: OPM Needs to Improve Management and Oversight of Hiring Authorities. GAO-16-521. Washington, D.C.: August 2, 2016. Information Security: Agencies Need to Improve Controls over Selected High-Impact Systems. GAO-16-501. Washington, D.C.: May 18, 2016. Face Recognition Technology: FBI Should Better Ensure Privacy and Accuracy. GAO-16-267. Washington, D.C.: May 16, 2016. Smartphone Data: Information and Issues Regarding Surreptitious Tracking Apps That Can Facilitate Stalking. GAO-16-317. Washington, D.C.: May 9, 2016. Vehicle Cybersecurity: DOT and Industry Have Efforts Under Way, but DOT Needs to Define Its Role in Responding to a Real-world Attack. GAO-16-350. Washington, D.C.: April 25, 2016. Civil Support: DOD Needs to Clarify Its Roles and Responsibilities for Defense Support of Civil Authorities during Cyber Incidents. GAO-16-332. Washington, D.C.: April 4, 2016. Healthcare.gov: Actions Needed to Enhance Information Security and Privacy Controls. GAO-16-265. Washington, D.C.: March 23, 2016. Information Security: DHS Needs to Enhance Capabilities, Improve Planning, and Support Greater Adoption of Its National Cybersecurity Protection System. GAO-16-294. Washington, D.C.: January 28, 2016. Critical Infrastructure Protection: Sector-Specific Agencies Need to Better Measure Cybersecurity Progress. GAO-16-79. Washington, D.C.: November 19, 2015. Critical Infrastructure Protection: Cybersecurity of the Nation’s Electricity Grid Requires Continued Attention. GAO-16-174T. Washington, D.C.: October 21, 2015. Maritime Critical Infrastructure Protection: DHS Needs to Enhance Efforts to Address Port Cybersecurity. GAO-16-116T. 
Washington, D.C.: October 8, 2015. Cybersecurity: National Strategy, Roles, and Responsibilities Need to Be Better Defined and More Effectively Implemented. GAO-13-187. Washington, D.C.: February 14, 2013. Information Resellers: Consumer Privacy Framework Needs to Reflect Changes in Technology and the Marketplace. GAO-13-663. Washington, D.C.: September 25, 2013. Privacy: Alternatives Exist for Enhancing Protection of Personally Identifiable Information. GAO-08-536. Washington, D.C.: May 19, 2008. Federal law and policy call for a risk-based approach to managing cybersecurity within the government, as well as globally. We have previously reported that the federal government has faced challenges in establishing a comprehensive strategy to provide a framework for how the United States will engage both domestically and internationally on cybersecurity related matters. More specifically, in February 2013, we reported that the government had issued a variety of strategy-related documents that addressed priorities for enhancing cybersecurity within the federal government as well as for encouraging improvements in the cybersecurity of critical infrastructure within the private sector; however, no overarching cybersecurity strategy had been developed that articulated priority actions, assigned responsibilities for performing them, and set time frames for their completion. Accordingly, we recommended that the White House Cybersecurity Coordinator in the Executive Office of the President develop an overarching federal cybersecurity strategy that included all key elements of the desirable characteristics of a national strategy including, among other things, milestones and performance measures for major activities to address stated priorities; cost and resources needed to accomplish stated priorities; and specific roles and responsibilities of federal organizations related to the strategy’s stated priorities. 
In response to our recommendation, in October 2015, the Director of OMB and the Federal Chief Information Officer issued a Cybersecurity Strategy and Implementation Plan for the Federal Civilian Government. The plan directed a series of actions to improve capabilities for identifying and detecting vulnerabilities and threats, enhance protections of government assets and information, and further develop robust response and recovery capabilities to ensure readiness and resilience when incidents inevitably occur. The plan also identified key milestones for major activities, resources needed to accomplish milestones, and specific roles and responsibilities of federal organizations related to the strategy’s milestones. Since that time, the executive branch has made progress toward outlining a federal strategy for confronting cyber threats. Table 1 identifies these recent efforts and a description of their related contents. These efforts provide a good foundation toward establishing a more comprehensive strategy, but more work is needed to address all of the desirable characteristics of a national strategy that we recommended. The recently issued executive branch strategy documents did not include key elements of desirable characteristics that can enhance the usefulness of a national strategy as guidance for decision makers in allocating resources, defining policies, and helping to ensure accountability. Specifically: Milestones and performance measures to gauge results were generally not included in strategy documents. For example, although the DHS Cybersecurity Strategy stated that its implementation would be assessed on an annual basis, it did not describe the milestones and performance measures for tracking the effectiveness of the activities intended to meet the stated goals (e.g., protecting critical infrastructure and responding effectively to cyber incidents). 
Without such performance measures, DHS will lack a means to ensure that the goals and objectives discussed in the document are accomplished and that responsible parties are held accountable. According to officials from DHS’s Office of Cybersecurity and Communications, the department is developing a plan for implementing the DHS Cybersecurity Strategy and expects to issue the plan by the end of calendar year 2018. The officials stated that the plan is expected to identify milestones, roles, and responsibilities across DHS to inform the prioritization of future efforts. The strategy documents generally did not include information regarding the resources needed to carry out the goals and objectives. For example, although the DHS Cybersecurity Strategy identified a variety of actions the agency planned to take to perform its cybersecurity mission, it did not articulate the resources needed to carry out these actions and requirements. Without information on the specific resources needed, federal agencies may not be positioned to allocate such resources and investments and, therefore, may be hindered in their ability to meet national priorities. Most of the strategy documents lacked clearly defined roles and responsibilities for key agencies, such as DHS, DOD, and OMB. These agencies contribute substantially to the nation’s cybersecurity programs. For example, although the National Security Strategy discusses multiple priority actions needed to address the nation’s cybersecurity challenges (e.g., building defensible government networks and deterring and disrupting malicious cyber actors), it does not describe the roles, responsibilities, or the expected coordination of any specific federal agencies, including DHS, DOD, or OMB, or other non-federal entities needed to carry out those actions. 
Without this information, the federal government may not be able to foster effective coordination, particularly where there is overlap in responsibilities, or hold agencies accountable for carrying out planned activities. Ultimately, a more clearly defined, coordinated, and comprehensive approach to planning and executing an overall strategy would likely lead to significant progress in furthering strategic goals and lessening persistent weaknesses. The exploitation of information technology (IT) products and services through the supply chain is an emerging threat. IT supply chain-related threats can be introduced in the manufacturing, assembly, and distribution of hardware, software, and services. Moreover, these threats can appear at each phase of the system development life cycle, when an agency initiates, develops, implements, maintains, and disposes of an information system. As a result, the compromise of an agency’s IT supply chain can degrade the confidentiality, integrity, and availability of its critical and sensitive networks, IT-enabled equipment, and data. Federal regulation and guidance issued by the National Institute of Standards and Technology (NIST) set requirements and best practices for mitigating supply chain risks. The Federal Acquisition Regulation codifies uniform policies and procedures for acquisitions by all executive branch agencies. Agencies are required by the Federal Acquisition Regulation to ensure that contracts include quality requirements that are determined necessary to protect the government’s interest. In addition, the NIST guidance on supply chain risk management practices for federal information systems and organizations is intended to assist federal agencies with identifying, assessing, and mitigating information and communications technology supply chain risks at all levels of their organizations. 
We have previously reported on risks to the IT supply chain and risks originating from foreign-manufactured equipment. For example: In July 2018, we testified that if global IT supply chain risks are realized, they could jeopardize the confidentiality, integrity, and availability of federal information systems. Thus, the potential exists for serious adverse impact on an agency’s operations, assets, and employees. We further stated that in 2012 we determined that four national security-related agencies—the Departments of Defense, Justice, Energy, and Homeland Security (DHS)—varied in the extent to which they had addressed supply chain risks. We recommended that three agencies take eight actions, as needed, to develop and document policies, procedures, and monitoring capabilities that address IT supply chain risk. The agencies generally concurred with the recommendations and subsequently implemented seven recommendations and partially implemented the eighth recommendation. In July 2017, we reported that, based on a review of a sample of organizations within the Department of State’s telecommunications supply chain, we were able to identify instances in which device manufacturers, software developers, and contractor support were reported to be headquartered in a leading cyber-threat nation. For example, of the 52 telecommunications device manufacturers and software developers in our sample, we were able to identify 12 that had 1 or more suppliers that were reported to be headquartered in a leading cyber-threat nation. We noted that the reliance on complex, global IT supply chains introduces multiple risks to federal agencies, including insertion of counterfeits, tampering, or installation of malicious software or hardware. Figure 5 illustrates possible manufacturing locations of typical network components. 
Although federal agencies have taken steps to address IT supply chain deficiencies that we previously identified, this area continues to be a potential threat vector for malicious actors to target the federal government. For example, in September 2017, DHS issued a binding operational directive which calls on departments and agencies to identify any use or presence of Kaspersky products on their information systems and to develop detailed plans to remove and discontinue present and future use of the products. DHS expressed concern about the ties between certain Kaspersky officials and Russian intelligence and other government agencies, and requirements under Russian law that allow Russian intelligence agencies to request or compel assistance from Kaspersky and to intercept communications transiting Russian networks. On May 11, 2017, the President issued an executive order on strengthening the cybersecurity of federal networks and critical infrastructure. The order makes it the policy of the United States to support the growth and sustainment of a workforce that is skilled in cybersecurity and related fields as the foundation for achieving our objectives in cyberspace. It directed the Secretaries of Commerce and Homeland Security (DHS), in consultation with other federal agencies, to assess the scope and sufficiency of efforts to educate and train the American cybersecurity workforce of the future, including cybersecurity-related education curricula, training, and apprenticeship programs, from primary through higher education. Nevertheless, the federal government continues to face challenges in addressing the nation’s cybersecurity workforce. Agencies had not effectively conducted baseline assessments of their cybersecurity workforce or fully developed procedures for coding positions. 
In June 2018, we reported that 21 of the 24 agencies covered by the Chief Financial Officers Act had conducted and submitted to Congress a baseline assessment identifying the extent to which their cybersecurity employees held professional certifications, as required by the Federal Cybersecurity Workforce Assessment Act of 2015. However, we found that the results of these assessments may not have been reliable because agencies did not address all of the reportable information and agencies were limited in their ability to obtain complete and consistent information about their cybersecurity employees and the certifications they held. We determined that this was because agencies had not yet fully identified all members of their cybersecurity workforces or did not have a consistent list of appropriate certifications for cybersecurity positions. Further, 23 of the agencies reviewed had established procedures for identifying and assigning the appropriate employment codes to their civilian cybersecurity positions, as called for by the act. However, 6 of the 23 did not address one or more of 7 activities required by OPM in their procedures, such as reviewing all filled and vacant positions and annotating reviewed position descriptions with the appropriate employment code. Accordingly, we made 30 recommendations to 13 agencies to fully implement two of the act’s requirements on baseline assessments and coding procedures. The extent to which these agencies agreed with the recommendations varied. DHS and the Department of Defense (DOD) had not addressed cybersecurity workforce management requirements set forth in federal laws. In February 2018, we reported that, while DHS had taken actions to identify, categorize, and assign employment codes to its cybersecurity positions, as required by the Homeland Security Cybersecurity Workforce Assessment Act of 2014, its actions were not timely and complete. 
For example, DHS did not establish timely and complete procedures to identify, categorize, and code its cybersecurity position vacancies and responsibilities. Further, DHS had not yet completed its efforts to identify all of its cybersecurity positions and accurately assign codes to all filled and vacant cybersecurity positions. Table 2 shows DHS’s progress in implementing the requirements of the Homeland Security Cybersecurity Workforce Assessment Act of 2014, as of December 2017. Accordingly, we recommended that DHS take six actions, including ensuring that its cybersecurity workforce procedures identify position vacancies and responsibilities; reported workforce data are complete and accurate; and plans for reporting on critical needs are developed. DHS agreed with our six recommendations, but had not implemented them as of August 2018. Regarding DOD, in November 2017, we reported that instead of developing a comprehensive plan for U.S. Cyber Command, the department submitted a report consisting of a collection of documents that did not fully address the six required elements set forth in Section 1648 of the National Defense Authorization Act for Fiscal Year 2016. More specifically, DOD’s 1648 report did not address an element related to cyber incident training. In addition to not addressing the training element in the report, DOD had not ensured that staff were trained as required by the Presidential Policy Directive on United States Cyber Incident Coordination or DOD’s Significant Cyber Incident Coordination Procedures. Accordingly, we made two recommendations to DOD to address these issues. DOD agreed with one of the recommendations and partially agreed with the other, citing ongoing activities related to cyber incident coordination training it believed were sufficient. However, we continued to believe the recommendation was warranted. As of August 2018, neither recommendation had been implemented. 
Agencies had not identified and closed cybersecurity skills gaps. In November 2016, we reported that five selected agencies had made mixed progress in assessing their information technology (IT) skill gaps. These agencies had started focusing on identifying cybersecurity staffing gaps, but more work remained in assessing competency gaps and in broadening the focus to include the entire IT community. Accordingly, we made a total of five recommendations to the agencies to address these issues. Four agencies agreed and one, DOD, partially agreed with our recommendations, citing progress made in improving its IT workforce planning. However, we continued to believe our recommendation was warranted. As of August 2018, none of the five recommendations had been implemented. Agencies had been challenged with recruiting and retaining qualified staff. In August 2016, we reported on the authorities of chief information security officers (CISO) at 24 agencies. Among other things, CISOs identified key challenges they faced in fulfilling their responsibilities. Several of these challenges were related to the cybersecurity workforce, such as not having enough personnel to oversee the implementation of the number and scope of security requirements. In addition, CISOs stated that they were not able to offer salaries that were competitive with the private sector for candidates with high-demand technical skills. Furthermore, CISOs stated that certain security personnel lacked the skill sets needed or were not sufficiently trained. To assist CISOs in carrying out their responsibilities and better define their roles, we made a total of 34 recommendations to the Office of Management and Budget (OMB) and 13 agencies in our review. Agency responses to the recommendations varied; as of August 2018, 18 of the 34 recommendations had not been implemented. Agencies have had difficulty navigating the federal hiring process. 
In August 2016, we reported on the extent to which federal hiring authorities were meeting agency needs. Although competitive hiring has been the traditional method of hiring, agencies can use additional hiring authorities to expedite the hiring process or achieve certain public policy goals. Among other things, we noted that agencies rely on a relatively small number of hiring authorities (as established by law, executive order, or regulation) to fill the vast majority of hires into the federal civil service. Further, while OPM collects a variety of data to assess the federal hiring process, neither it nor agencies used this information to assess the effectiveness of hiring authorities. Conducting such assessments would be a critical first step toward making more strategic use of the available hiring authorities to meet agencies’ hiring needs more effectively. Accordingly, we made three recommendations to OPM to work with agencies to strengthen hiring efforts. OPM generally agreed with the recommendations; however, as of August 2018, two of them had not been implemented. The emergence of new technologies can introduce previously unknown security vulnerabilities. As we have previously reported, additional processes and controls will need to be developed to address these new vulnerabilities. While some progress has been made to address the security and privacy issues associated with these technologies, such as the Internet of Things (IoT) and vehicle networks, there is still much work to be done. For example: IoT devices that continuously collect and process information are potentially vulnerable to cyber-attacks. In May 2017, we reported that the IoT has become increasingly used to communicate and process vast amounts of information using “smart” devices (such as fitness trackers, cameras, and thermostats).
However, we noted that this emerging technology also presents new issues in areas such as information security, privacy, and safety. For example, IoT devices, networks, or the cloud servers where they store data can be compromised in a cyberattack. Table 3 provides examples of cyber-attacks that could affect IoT devices and networks. IoT devices may increase the security risks to federal agencies. In July 2017, we reported that IoT devices, such as those acquired and used by Department of Defense (DOD) employees or that DOD itself acquires (e.g., smartphones), may increase the security risks to the department. We noted that these risks can be divided into two categories: risks with the devices themselves, such as limited encryption, and risks with how they are used, such as unauthorized communication of information. The department has also identified notional threat scenarios, based on input from multiple DOD entities, which exemplify how these security risks could adversely impact DOD operations, equipment, or personnel. Figure 6 highlights a few examples of these scenarios. In addition, we reported that DOD had started to examine the security risks of IoT devices, but that the department had not conducted required assessments related to the security of its operations. Further, DOD had issued policies and guidance for these devices, but these did not clearly address all of the risks relating to these devices. To address these issues, we made two recommendations to DOD. The department agreed with our recommendations; however, as of August 2018, they had not yet been implemented. Vehicles are potentially susceptible to cyber-attack through networks, such as Bluetooth. In March 2016, we reported that many stakeholders in the automotive industry acknowledged that in-vehicle networks pose a threat to the safety of the driver, as an external attacker could gain control of critical systems in the car.
Further, these industry stakeholders agreed that critical systems and other vehicle systems, such as a Bluetooth connection, should be on separate in-vehicle networks so they could not communicate or interfere with one another. Figure 7 identifies the key interfaces that could be exploited in a vehicle cyber-attack. To enhance the Department of Transportation’s ability to effectively respond in the event of a real-world vehicle cyberattack, we made one recommendation to the department to better define its roles and responsibilities. The department agreed with the recommendation but, as of August 2018, had not yet taken action to implement it. Artificial intelligence holds substantial promise for improving cybersecurity, but also poses new risks. In March 2018, we reported on the results of a forum we convened to discuss emerging opportunities, challenges, and implications associated with artificial intelligence. At the forum, participants from industry, government, academia, and nonprofit organizations discussed the potential implications of this emerging technology, including assisting with cybersecurity by helping to identify and patch vulnerabilities and defending against attacks; creating safer automated vehicles; improving the criminal justice system’s allocation of resources; and improving how financial services govern investments. However, forum participants also highlighted a number of challenges and risks related to artificial intelligence. For example, if the data used by artificial intelligence are biased or become corrupted by hackers, the results could be biased or cause harm. Moreover, the collection and sharing of data needed to train artificial intelligence systems, a lack of access to computing resources, and shortages of adequate human capital were also challenges facing the development of artificial intelligence. Finally, forum participants noted that the widespread adoption of artificial intelligence raises questions about the adequacy of current laws and regulations.
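Forum participants’ point that automated analysis can help defend against attacks can be illustrated with a deliberately simple sketch: flagging statistically unusual activity in a log. This is a hypothetical example using only a z-score threshold and invented data; real AI-based defenses rely on far richer models than this.

```python
import statistics

def flag_anomalies(counts, threshold=2.0):
    """Return the indices of observations more than `threshold`
    standard deviations above the mean -- a stand-in for the pattern
    recognition that AI-based defenses perform at far greater scale."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    if stdev == 0:
        return []  # perfectly uniform activity: nothing to flag
    return [i for i, c in enumerate(counts)
            if (c - mean) / stdev > threshold]

# Hypothetical hourly login-failure counts; the spike at index 5
# suggests a brute-force attempt worth investigating.
failures = [4, 6, 5, 7, 5, 90, 6, 4]
print(flag_anomalies(failures))  # → [5]
```

The threshold is a tuning choice: set it too low and analysts drown in false positives, too high and real attacks slip through, which is one reason forum participants saw promise in adaptive, learned detection.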
Cryptocurrencies provide an alternative to traditional government-issued currencies, but have security implications. In February 2018, we reported on trends affecting government and society, including the increased use of cryptocurrencies—digital representations of value that are not government-issued—that operate online and verify transactions using a public ledger called blockchain. We highlighted the potential benefits of this technology, such as anonymity and lower transaction costs, as well as drawbacks, including making it harder to detect money laundering and other financial crimes. Because of these capabilities and others, we noted the potential for virtual currencies and blockchain technology to reshape financial services and affect the security of critical financial infrastructures. Lastly, we pointed out that blockchain technology could become more vulnerable as advances in quantum computing, an area of quantum information science, increase available computing power. In January 2008, the President issued National Security Presidential Directive 54/Homeland Security Presidential Directive 23. The directive established the Comprehensive National Cybersecurity Initiative, a set of projects with the objective of safeguarding federal executive branch government information systems by reducing potential vulnerabilities, protecting against intrusion attempts, and anticipating future threats against the federal government’s networks. Under the initiative, the Department of Homeland Security (DHS) was to lead several projects to better secure civilian federal government networks. Specifically, the agency established the National Cybersecurity and Communications Integration Center (NCCIC), which functions as the 24/7 cyber monitoring, incident response, and management center. Figure 8 depicts the Watch Floor, which functions as a national focal point of cyber and communications incident integration.
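The blockchain ledger described above in connection with cryptocurrencies reduces, at its core, to a hash chain: each block stores the hash of its predecessor, so tampering with any earlier transaction is detectable. The sketch below is a hypothetical illustration only; real blockchains add digital signatures, consensus protocols, and proof-of-work on top of this idea.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents, including the previous block's hash,
    # so altering any earlier record invalidates everything after it.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "transactions": transactions}
    block["hash"] = block_hash({"prev": prev, "transactions": transactions})
    chain.append(block)
    return chain

def verify(chain):
    # Valid only if each stored hash matches a recomputation and each
    # block points at its predecessor's hash.
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != expected_prev:
            return False
        recomputed = block_hash({"prev": block["prev"],
                                 "transactions": block["transactions"]})
        if block["hash"] != recomputed:
            return False
    return True

chain = []
add_block(chain, ["alice pays bob 5"])
add_block(chain, ["bob pays carol 2"])
print(verify(chain))                                # True
chain[0]["transactions"][0] = "alice pays bob 500"  # tamper with history
print(verify(chain))                                # False
```

The security of this scheme rests on the hash function being hard to invert, which is why advances in computing power, such as quantum computing, bear on blockchain security.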
The United States Computer Emergency Readiness Team (US-CERT), one of several subcomponents of the NCCIC, is responsible for operating the National Cybersecurity Protection System (NCPS), which provides intrusion detection and prevention capabilities to entities across the federal government. Although DHS is fulfilling its statutorily required mission by establishing the NCCIC and managing the operation of NCPS, we have identified challenges in the agency’s efforts to manage these programs: DHS had not ensured that NCPS fully satisfied all intended system objectives. In January 2016, we reported that NCPS had a limited ability to detect intrusions across all types of network traffic. In addition, we reported that the system’s intrusion prevention capability was limited and its information-sharing capability was not fully developed. Furthermore, we reported that DHS’s current metrics did not comprehensively measure the effectiveness of NCPS. Accordingly, we made nine recommendations to DHS to address these issues and others. The department agreed with our recommendations and has taken action to address one of them. However, as of August 2018, eight of these recommendations had not been implemented. DHS had been challenged in measuring how the NCCIC was performing its functions in accordance with mandated implementing principles. In February 2017, we reported instances where, with certain products and services, NCCIC had implemented its functions in adherence with one or more of its principles, as required by the National Cybersecurity Protection Act of 2014 and the Cybersecurity Act of 2015. For example, consistent with the principle that it seek and receive appropriate consideration from industry sector-specific, academic, and national laboratory expertise, NCCIC coordinated with contacts from industry, academia, and the national laboratories to develop and disseminate vulnerability alerts.
However, we also identified instances where the cybersecurity functions were not performed in adherence with the principles. For example, NCCIC is to provide timely technical assistance, risk management support, and incident response capabilities to federal and nonfederal entities, but it had not established measures or other procedures for ensuring the timeliness of this assistance. Further, we reported that NCCIC faced impediments to performing its cybersecurity functions more efficiently, such as tracking security incidents and working across multiple network platforms. Accordingly, we made nine recommendations to DHS related to implementing the requirements identified in the National Cybersecurity Protection Act of 2014 and the Cybersecurity Act of 2015. The department agreed with our recommendations and has taken action to address two of them. However, as of August 2018, the remaining seven recommendations had not been implemented. The Federal Information Security Modernization Act of 2014 (FISMA) requires federal agencies in the executive branch to develop, document, and implement an information security program and evaluate it for effectiveness. The act retains many of the requirements for federal agencies’ information security programs previously set by the Federal Information Security Management Act of 2002. These agency programs should include periodic risk assessments; information security policies and procedures; plans for protecting the security of networks, facilities, and systems; security awareness training; security control assessments; incident response procedures; a remedial action process; and continuity plans and procedures. In addition, Executive Order 13800 states that the President will hold agency heads accountable for managing cybersecurity risk to their enterprises.
In addition, according to the order, it is the policy of the United States to manage cybersecurity risk as an executive branch enterprise because risk management decisions made by agency heads can affect the risk to the executive branch as a whole, and to national security. Over the past several years, we have performed numerous security control audits to determine how well agencies are managing information security risk to federal information systems and data through the implementation of effective security controls. These audits have resulted in the identification of hundreds of deficiencies related to agencies’ implementation of effective security controls. Accordingly, we provided agencies with limited official use only reports identifying the technical security control deficiencies for their respective agencies. In these reports, we made hundreds of recommendations to address those security control deficiencies. In addition to systems and networks maintained by federal agencies, it is also important that agencies ensure the security of federal information systems operated by third-party providers, including cloud service providers. Cloud computing is a means for delivering computing services via information technology networks. Since 2009, the government has encouraged agencies to use cloud-based services to store and process data as a cost-savings measure. In this regard, the Office of Management and Budget (OMB) established the Federal Risk and Authorization Management Program (FedRAMP) to provide a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. FedRAMP is intended to ensure that cloud computing services have adequate information security, eliminate duplicative efforts, and reduce costs.
Although there are requirements and government-wide programs to assist with ensuring the security of federal information systems maintained by federal agencies and third-party providers, we have identified weaknesses in agencies’ implementation of information security programs. Federal agencies continued to experience weaknesses in protecting their information and information systems due to ineffective implementation of information security policies and practices. In September 2017, we reported that most of the 24 agencies covered by the Chief Financial Officers (CFO) Act had weaknesses in each of the five major categories of information system controls (i.e., access controls, configuration management controls, segregation of duties, contingency planning, and agency-wide security management). Weaknesses in these security controls indicate that agencies did not adequately or effectively implement information security policies and practices during fiscal year 2016. Figure 9 identifies the number of agencies with information security weaknesses in each of the five categories. In addition, we found that several agencies had not effectively implemented some aspects of their information security programs, which resulted in weaknesses in these agencies’ security controls. In July 2017, we reported that the Securities and Exchange Commission did not always keep system security plans complete and accurate or fully implement continuous monitoring, as required by agency policy. We made two recommendations to the Securities and Exchange Commission to effectively manage its information security program. The agency agreed with our recommendations; however, as of August 2018, they had not been implemented.
In another July 2017 report, we noted that the Internal Revenue Service (IRS) did not effectively support a risk-based decision to accept system deficiencies; fully develop, document, or update information security policies and procedures; update system security plans to reflect changes to the operating environment; perform effective tests and evaluations of policies, procedures, and controls; or address shortcomings in the agency’s remedial action process. Accordingly, we made 10 recommendations to IRS to more effectively implement security-related policies and plans. The agency neither agreed nor disagreed with the recommendations; as of August 2018, none of the 10 recommendations had been implemented. In May 2017, we reported that the Federal Deposit Insurance Corporation did not include all necessary information in procedures for granting access to a key financial application; fully address its Inspector General’s findings that security control assessments of outsourced service providers had not been completed in a timely manner; fully address key previously identified weaknesses related to establishing agency-wide configuration baselines and monitoring changes to critical server files; or complete actions to address the Inspector General’s finding that the corporation had not ensured that major security incidents are identified and reported in a timely manner. We made one recommendation to the agency to more fully implement its information security program. The agency agreed with our recommendation and has taken steps to implement it.
In August 2016, we reported that the Food and Drug Administration did not fully implement certain security practices involved with assessing risks to systems; complete or review security policies and procedures in a timely manner; complete and review system security plans annually; always track and fully train users with significant security responsibilities; fully test controls or monitor them; remediate identified security weaknesses in a timely fashion based on risk; or fully implement elements of its incident response program. Accordingly, we issued 15 recommendations to the Food and Drug Administration to fully implement its agency-wide information security program. The agency agreed with our recommendations. As of August 2018, all 15 recommendations had been implemented. In May 2016, we reported that a key reason for the information security weaknesses in selected high-impact systems at four agencies—National Aeronautics and Space Administration, Nuclear Regulatory Commission, the Office of Personnel Management, and Department of Veterans Affairs—was that they had not effectively implemented elements of their information security programs. For example, most of the selected agencies had conducted information security control assessments for systems, but not all assessments were comprehensive. We also reported that remedial action plans developed by the agencies did not include all the required elements, and not all agencies had developed a continuous monitoring strategy. Table 4 identifies the extent to which the selected agencies implemented key aspects of their information security programs. Accordingly, we made 19 recommendations to the four selected agencies to correct these weaknesses. Agency responses to the recommendations varied. Further, as of August 2018, 16 of the 19 recommendations had not been implemented. DOD’s monitoring of progress in implementing cyber strategies varied. 
In August 2017, we reported that DOD’s progress in implementing key strategic cybersecurity guidance—the DOD Cloud Computing Strategy, DOD Cyber Strategy, and DOD Cybersecurity Campaign—had varied. More specifically, we determined that the department had implemented the cybersecurity objectives identified in the DOD Cloud Computing Strategy and had made progress in implementing the DOD Cyber Strategy and DOD Cybersecurity Campaign. However, the department’s process for monitoring implementation of the DOD Cyber Strategy had resulted in tasks being closed as implemented before they were fully implemented. In addition, the DOD Cybersecurity Campaign lacked time frames for completion and a process to monitor progress, which together provide accountability to ensure implementation. We made two recommendations to improve DOD’s process for ensuring its cyber strategies are effectively implemented. The department partially concurred with these recommendations and identified actions it planned to take to address them. We noted that, if implemented, the actions would satisfy the intent of our recommendations. However, as of August 2018, DOD had not yet implemented our recommendations. Agencies had not fully defined the role of their Chief Information Security Officers (CISO), as required by FISMA. In August 2016, we reported that 13 of 24 agencies covered by the CFO Act had not fully defined the role of their CISO. For example, these agencies did not always identify a role for the CISO in ensuring that security controls are periodically tested; procedures are in place for detecting, reporting, and responding to security incidents; or contingency plans and procedures for agency information systems are in place. Thus, we determined that the CISOs’ ability to effectively oversee these agencies’ information security activities can be limited.
To assist CISOs in carrying out their responsibilities and better define their roles, we made a total of 34 recommendations to OMB and 13 agencies in our review. Agency responses to the recommendations varied; as of August 2018, 18 of the 34 recommendations had not been implemented. Presidential Policy Directive-41 sets forth principles governing the federal government’s response to any cyber incident, whether involving government or private sector entities. According to the directive, federal agencies shall undertake three concurrent lines of effort when responding to any cyber incident: threat response; asset response; and intelligence support and related activities. In addition, when a federal agency is an affected entity, it shall undertake a fourth concurrent line of effort to manage the effects of the cyber incident on its operations, customers, and workforce. We have reviewed federal agencies’ preparation and response to cyber incidents and have identified the following weaknesses: The Office of Personnel Management (OPM) had not fully implemented controls to address deficiencies identified as a result of a cyber incident. In August 2017, we reported that OPM did not fully implement the 19 recommendations made by the Department of Homeland Security’s (DHS) United States Computer Emergency Readiness Team (US-CERT) after the data breaches in 2015. Specifically, we noted that, after breaches of personnel and background investigation information were reported, US-CERT worked with the agency to resolve issues and develop a comprehensive mitigation strategy. In doing so, US-CERT made 19 recommendations to OPM to help the agency improve its overall security posture and, thus, improve its ability to protect its systems and information from security breaches. In our August 2017 report, we determined that OPM had fully implemented 11 of the 19 recommendations. For the remaining 8 recommendations, actions for 4 were still in progress. 
For the other 4 recommendations, OPM indicated that it had completed actions to address them, but we noted that further improvements were needed. Further, OPM had not validated actions taken to address the recommendations in a timely manner. As a result of our review, we made five other recommendations to OPM to improve its response to cyber incidents. The agency agreed with four of these and partially concurred with the one related to validating its corrective actions. The agency did not cite a reason for its partial concurrence, and we continued to believe that the recommendation was warranted. As of August 2018, three of the five recommendations had not been implemented. The Department of Defense (DOD) had not identified the National Guard’s cyber capabilities (e.g., computer network defense teams) or addressed challenges in its exercises. In September 2016, we reported that DOD had not identified the National Guard’s cyber capabilities or addressed challenges in its exercises. Specifically, DOD had not identified and did not have full visibility into National Guard cyber capabilities that could support civil authorities during a cyber incident because the department had not maintained a database that identifies National Guard cyber capabilities, as required by the National Defense Authorization Act for Fiscal Year 2007. In addition, we identified three types of challenges with DOD’s cyber exercises that could limit the extent to which DOD is prepared to support civilian authorities in a cyber incident: limited access because of classified exercise environments; limited inclusion of other federal agencies and critical infrastructure owners; and inadequate incorporation of joint physical-cyber scenarios. In our September 2016 report, we noted that DOD had not addressed these challenges.
Furthermore, we stated that DOD had not met its goal of conducting a “tier 1” exercise (i.e., an exercise involving national-level organizations and combatant commanders and staff in highly complex environments), as stated in the DOD Cyber Strategy. Accordingly, we recommended that DOD (1) maintain a database that identifies National Guard cyber capabilities and (2) conduct a tier 1 exercise to prepare its forces in the event of a disaster with cyber effects. The department partially agreed with our recommendations, stating that its current mechanisms and exercises are sufficient to address the issues highlighted in our report. However, we continued to believe the recommendations were valid. As of August 2018, our two recommendations had not been implemented. DOD had not identified, clarified, or implemented all components of its incident response program. In April 2016, we also reported that DOD had not clarified its roles and responsibilities for defense support of civil authorities during cyber incidents. Specifically, we found that DOD’s overarching guidance about how it is to support civil authorities as part of its Defense Support of Civil Authorities mission did not clearly define the roles and responsibilities of key DOD entities, such as DOD components, the supported command, or the dual-status commander, if they are requested to support civil authorities in a cyber incident. Further, we found that, in some cases, DOD guidance provided specific details on other types of Defense Support of Civil Authorities-related responses, such as assigning roles and responsibilities for fire or emergency services support and medical support, but did not provide the same level of detail or assign roles and responsibilities for cyber support. Accordingly, we recommended that DOD issue or update guidance that clarifies DOD roles and responsibilities to support civil authorities in a domestic cyber incident.
DOD concurred with the recommendation and stated that the department will issue or update guidance. However, as of August 2018, the department had not implemented our recommendation. DHS’s NCPS had limited capabilities for detecting and preventing intrusions, conducting analytics, and sharing information. In January 2016, we reported that NCPS had a limited ability to detect intrusions across all types of network traffic. In addition, we reported that the system’s intrusion prevention capability was limited and its information-sharing capability was not fully developed. Furthermore, we reported that DHS’s current metrics did not comprehensively measure the effectiveness of NCPS. Accordingly, we made nine recommendations to DHS to address these issues and others. The department agreed with our recommendations and has taken action to address one of them. However, as of August 2018, eight of these recommendations had not been implemented. The nation’s critical infrastructure includes both public and private systems vital to national security that provide essential services, such as banking, water, and electricity, that underpin American society. The cyber threat to critical infrastructure continues to grow and represents a national security challenge. To address this cyber risk, the President issued Executive Order 13636 in February 2013 to enhance the security and resilience of the nation’s critical infrastructure and maintain a cyber environment that promotes safety, security, and privacy. In accordance with requirements in the executive order, which were enacted into law in 2014, the National Institute of Standards and Technology (NIST) facilitated the development of a set of voluntary standards and procedures for enhancing the cybersecurity of critical infrastructure. This process, which involved stakeholders from the public and private sectors, resulted in NIST’s Framework for Improving Critical Infrastructure Cybersecurity.
The framework is to provide a flexible and risk-based approach for entities within the nation’s 16 critical infrastructure sectors to protect their vital assets from cyber-based threats. Since then, progress has been made to protect the nation’s critical infrastructure, but we have reported that challenges to ensuring the safety and security of that infrastructure remain. The Department of Homeland Security (DHS) had not measured the impact of its efforts to support cyber risk reduction for high-risk chemical sector entities. In August 2018, we reported that DHS had strengthened its processes for identifying high-risk chemical facilities and assigning them to tiers under its Chemical Facility Anti-Terrorism Standards program. However, we found that DHS’s new performance measure methodology did not measure the reduction in vulnerability at a facility resulting from the implementation and verification of planned security measures during the compliance inspection process. We concluded that doing so would provide DHS an opportunity to begin assessing how vulnerability is reduced—and by extension, risk lowered—not only for individual high-risk facilities but for the Chemical Facility Anti-Terrorism Standards program as a whole. We also determined that, although DHS shares some Chemical Facility Anti-Terrorism Standards program information, first responders and emergency planners may not have all of the information they need to minimize the risk of injury or death when responding to incidents at high-risk facilities. This was because first responders at the local level did not have access to, or did not widely use, a secure interface that DHS developed (known as the Infrastructure Protection Gateway) to obtain information about high-risk facilities and the specific chemicals they process.
To address the weaknesses we identified, we recommended that DHS take actions to (1) measure the reduction in vulnerability of high-risk facilities and use that data to assess program performance, and (2) encourage access to and wider use of the Infrastructure Protection Gateway among first responders and emergency planners. DHS concurred with both recommendations and outlined efforts underway or planned to address them. The federal government had identified major challenges to the adoption of the cybersecurity framework. In February 2018, we reported that entities within the critical infrastructure sectors faced four challenges to adopting the cybersecurity framework, including limited resources and competing priorities. We further reported that none of the 16 sector-specific agencies were measuring implementation of the framework by these entities, nor did they have qualitative or quantitative measures of framework adoption. While research had been done to determine the use of the framework in the sectors, these efforts had yielded no real results on sector-wide adoption. We concluded that, until sector-specific agencies understand the use of the framework by the implementing entities, their ability to understand implementation efforts would be limited. Accordingly, we made a total of nine recommendations to nine sector-specific agencies to address these issues. Five agencies agreed with the recommendations, while four others neither agreed nor disagreed; as of August 2018, none of the nine recommendations had been implemented. Agencies had not addressed risks to their systems and the information they maintain. In January 2018, we reported that the Department of Defense (DOD) and Federal Aviation Administration (FAA) identified a variety of operations and physical security risks related to Automatic Dependent Surveillance-Broadcast Out technology that could adversely affect DOD missions.
These risks came from information broadcast by the system itself, as well as from potential vulnerabilities to electronic warfare and cyber-attacks, and from the potential divestment of secondary-surveillance radars. However, DOD and FAA had not approved any solutions to address the risks they identified to the system. Accordingly, we recommended that DOD and FAA, among other things, take action to approve one or more solutions to address Automatic Dependent Surveillance-Broadcast Out-related security risks. DOD and FAA generally agreed with our recommendations; however, as of August 2018, they had not been implemented. Major challenges existed to securing the electricity grid against cyber threats. In October 2015, we testified on the status of the electricity grid’s cybersecurity, reporting that entities associated with the grid have encountered several challenges. We noted that these challenges included monitoring implementation, building security features into smart grid systems, and establishing metrics for cybersecurity. We concluded that continued attention to these issues and to cyber threats in general was required to help mitigate these risks to the electricity grid. DHS and other agencies needed to enhance cybersecurity in the maritime environment. In October 2015, we testified on the status of the cybersecurity of our nation’s ports, concluding that steps needed to be taken to enhance their security. Specifically, we noted that DHS needed to include cyber risks in its existing risk assessments, as well as address cyber risks in guidance for port security plans. We concluded that, until DHS and the other stakeholders take steps to address cybersecurity in the ports, the risk of a cyber-attack with serious consequences is increased. Sector-specific agencies had not established metrics to measure progress in improving cybersecurity.
In November 2015, we reported that sector-specific agencies were not comprehensively addressing cyber risk to the infrastructure, as 11 of the 15 sectors had significant cyber risk. Specifically, we noted that these entities had taken actions to mitigate their cyber risk; however, most had not identified incentives to promote cybersecurity in their sectors. We concluded that while the sector-specific agencies had successfully disseminated the information they possess, there was still work to be done to properly measure cybersecurity implementation progress. Accordingly, we made seven recommendations to six agencies to address these issues. Four of these agencies agreed with our recommendations, while two agencies did not comment on the recommendations. As of August 2018, none of the seven recommendations had been implemented. Advancements in technology, such as new search technology and data analytics software for searching and collecting information, have made it easier for individuals and organizations to correlate data and track it across large and numerous databases. In addition, lower data storage costs have made it less expensive to store vast amounts of data. Also, ubiquitous Internet and cellular connectivity make it easier to track individuals by allowing easy access to information pinpointing their locations. Schools participating in federal student aid programs had uneven records practices. Based on a survey of the schools, the majority had policies in place for records retention, but the way these policies were implemented varied widely for paper and electronic records. We also found that oversight of the schools’ programs was lacking, as Federal Student Aid conducts reviews but does not consider information security as a factor for selecting schools. State-based marketplaces carry out provisions of the Patient Protection and Affordable Care Act.
We made three recommendations to CMS related to defining procedures for overseeing the security of state-based marketplaces and requiring continuous monitoring of state marketplace controls. HHS concurred with our recommendations. As of August 2018, two of the recommendations had not yet been implemented. Poor planning and ineffective monitoring had resulted in the unsuccessful implementation of government initiatives designed to protect federal data. In July 2017, we reported that government initiatives aimed at eliminating the unnecessary collection, use, and display of Social Security numbers (SSN) have had limited success. Specifically, in agencies’ response to our questionnaire on SSN reduction efforts, the 24 agencies covered by the Chief Financial Officers Act reported successfully curtailing the collection, use, and display of SSNs. Nevertheless, all of the agencies continued to rely on SSNs for important government programs and systems, as seen in figure 10. Given that access to data is so pervasive, personal privacy hinges on ensuring that databases of personally identifiable information (PII) maintained by government agencies or on their behalf are protected both from inappropriate access (i.e., data breaches) as well as inappropriate use (i.e., for purposes not originally specified when the information was collected). Likewise, the trend in the private sector of collecting extensive and detailed information about individuals needs appropriate limits. The vast number of individuals potentially affected by data breaches at federal agencies and private sector entities in recent years increases concerns that PII is not being properly protected. The emergence of IoT devices can facilitate the collection of information about individuals without their knowledge or consent. 
In May 2017, we reported that the IoT has become increasingly used to communicate and process vast amounts of information using “smart” devices (such as a fitness tracker connected to a smartphone). However, we noted that this emerging technology also presents new issues in areas such as information security, privacy, and safety. Smartphone tracking apps can present serious safety and privacy risks. In April 2016, we reported on smartphone applications that facilitated the surreptitious tracking of a smartphone’s location and other data. Specifically, we noted that some applications could be used to intercept communications and text messages, essentially facilitating the stalking of others. While it is illegal to use these applications for these purposes, stakeholders differed over whether current federal laws needed to be strengthened to combat stalking. We also noted that stakeholders expressed concerns over what they perceived to be limited enforcement of laws related to tracking apps and stalking. In particular, domestic violence groups stated that additional education of law enforcement officials and consumers about how to protect against, detect, and remove tracking apps is needed. The Federal Bureau of Investigation (FBI) has not ensured privacy and accuracy related to the use of face recognition technology. In May 2016, we reported that the Department of Justice had not been timely in publishing and updating privacy documentation for the FBI’s use of face recognition technology. Publishing such documents in a timely manner would better assure the public that the FBI is evaluating risks to privacy when implementing systems. Also, the FBI had taken limited steps to determine whether the face recognition system it was using was sufficiently accurate. We recommended that the department ensure required privacy-related documents are published and that the FBI test and review face recognition systems to ensure that they are sufficiently accurate. 
Of the six recommendations we made, the Department of Justice agreed with one, partially agreed with two, and disagreed with three. We continued to believe all the recommendations made were valid. As of August 2018, none of the six recommendations had been implemented. In addition to the contacts named above, Jon Ticehurst, Assistant Director; Kush K. Malhotra, Analyst-In-Charge; Chris Businsky; Alan Daigle; Rebecca Eyler; Chaz Hubbard; David Plocher; Bradley Roach; Sukhjoot Singh; Di’Mond Spencer; and Umesh Thakkar made key contributions to this report.
|
Federal agencies and the nation's critical infrastructures—such as energy, transportation systems, communications, and financial services—are dependent on information technology systems to carry out operations. The security of these systems and the data they use is vital to public confidence and national security, prosperity, and well-being. The risks to these systems are increasing as security threats evolve and become more sophisticated. GAO first designated information security as a government-wide high-risk area in 1997. This was expanded to include protecting cyber critical infrastructure in 2003 and protecting the privacy of personally identifiable information in 2015. This report provides an update to the information security high-risk area. To do so, GAO identified the actions the federal government and other entities need to take to address cybersecurity challenges. GAO primarily reviewed prior work issued since the start of fiscal year 2016 related to privacy, critical federal functions, and cybersecurity incidents, among other areas. GAO also reviewed recent cybersecurity policy and strategy documents, as well as information security industry reports of recent cyberattacks and security breaches. GAO has identified four major cybersecurity challenges and 10 critical actions that the federal government and other entities need to take to address them. GAO continues to designate information security as a government-wide high-risk area due to increasing cyber-based threats and the persistent nature of security vulnerabilities. GAO has made over 3,000 recommendations to agencies aimed at addressing cybersecurity shortcomings in each of these action areas, including protecting cyber critical infrastructure, managing the cybersecurity workforce, and responding to cybersecurity incidents. Although many recommendations have been addressed, about 1,000 have not yet been implemented. 
Until these shortcomings are addressed, federal agencies' information and systems will be increasingly susceptible to the multitude of cyber-related threats that exist. GAO has made over 3,000 recommendations to agencies since 2010 aimed at addressing cybersecurity shortcomings. As of August 2018, about 1,000 still needed to be implemented.
|
Human trafficking exploits individuals and often involves transnational criminal organizations, violations of labor and immigration codes, and government corruption. Many forms of trafficking—including sex trafficking and labor trafficking—can take place anywhere in the world and occur without crossing country boundaries. As discussed in State’s annual Trafficking in Persons Report, trafficking victims include, for example, Asian and African women and men who migrate to the Persian Gulf region for domestic labor but then suffer both labor trafficking and sexual abuse in the homes of their employers. Some victims are children. For example, Pakistani children as young as 5 years old are sold or kidnapped into forced labor to work in brick kilns, some of which are owned by government officials. Other victims are subjected to sexual exploitation. In some cases, women and girls have been bought and sold as sex slaves by members of the Islamic State. In other cases, adult men and women have been forced to engage in commercial sex, and children have been induced to do the same. Individuals, including men, are exploited in forced labor in a variety of industries. Burmese men, for example, have been forced to labor 20 hours a day, 7 days a week on fishing boats in Thailand. See figure 1 for examples of victims of trafficking in persons. Among other U.S. agencies involved in counter-trafficking in persons, State, DOL, USAID, DOD, and Treasury have various roles and responsibilities related to international counter-trafficking in persons, including some internationally focused programs and activities that do not involve awards made to implementing partners, as follows: State. State leads the global engagement of the United States and supports the coordination of efforts across the U.S. government in counter-trafficking in persons.
State’s Office to Monitor and Combat Trafficking in Persons (TIP Office), established pursuant to the Trafficking Victims Protection Act of 2000, is responsible for bilateral and multilateral diplomacy, targeted foreign assistance, and public engagement on trafficking in persons. The office also prepares and issues an annual Trafficking in Persons Report that assesses the counter-trafficking efforts of governments and assigns them tier rankings. Furthermore, the TIP Office develops annual regional programming strategies, awards projects to implementing partners and oversees the project award process, and provides technical assistance to implementing partners. Other parts of State, including regional bureaus that cover geographic regions and functional bureaus that cover global issues such as human rights, are also responsible for work related to combating trafficking in persons. DOL. Within DOL, the Bureau of International Labor Affairs’ (ILAB) Office of Child Labor, Forced Labor, and Human Trafficking (OCFT) conducts research, publishes reports, and administers projects awarded to implementing partners on international child labor, forced labor, and trafficking in persons. ILAB’s reports include the annual Findings on the Worst Forms of Child Labor report, which assesses the efforts of approximately 140 countries and territories to eliminate the worst forms of child labor in the areas of laws and regulations, institutional mechanisms for coordination and enforcement, and government policies and programs. ILAB also publishes the List of Goods Produced by Child Labor or Forced Labor, which identifies goods, and their source countries, that ILAB has reason to believe are produced with child labor or forced labor in violation of international standards. USAID.
USAID administers projects awarded to implementing partners that address counter-trafficking in persons, including increasing investments in conflict and crisis areas and integrating such projects into broader development projects. USAID field missions manage the majority of these counter-trafficking activities through projects that address trafficking challenges specific to the field mission’s region or country. USAID’s Center of Excellence on Democracy, Human Rights and Governance (DRG Center) in Washington, D.C. is responsible for oversight of USAID’s counter-trafficking policy. The DRG Center coordinates and reports on USAID-wide counter-trafficking in persons efforts; oversees the implementation of USAID’s counter-trafficking in persons policy in collaboration with regional bureaus and country missions; works with regional bureaus and country missions to gather counter-trafficking best practices and lessons learned; provides technical assistance and training to field and Washington-based staff on designing, managing, and monitoring and evaluating trafficking in persons projects; and conducts and manages research and learning activities related to combating trafficking in persons to collect data to inform the design of field projects. DOD. DOD’s Combating Trafficking in Persons Program Management Office, under the Under Secretary of Defense for Personnel and Readiness in the Defense Human Resources Activity, develops trafficking awareness and training material for all DOD components. On December 16, 2002, the President signed National Security Presidential Directive 22, which declared that the United States had a zero tolerance policy for trafficking in persons. The Combating Trafficking in Persons Program Management Office is responsible for overseeing, developing, and providing the tools necessary for implementing National Security Presidential Directive 22 within DOD.
The office has developed several training programs designed to provide an overview of trafficking in persons (including signs of trafficking, key policies and procedures, and reporting procedures), as well as awareness materials for distribution to DOD components and defense contractors overseas. Treasury. Treasury has activities, but not specific programs, that may support wider U.S. counter-trafficking in persons efforts, according to Treasury officials. Pursuant to its mandate, components of Treasury’s Office of Terrorism and Financial Intelligence (TFI), including the Financial Crimes Enforcement Network (FinCEN), the Office of Terrorist Financing and Financial Crimes (TFFC), and the Office of Foreign Assets Control (OFAC), work to address illicit finance activities, supporting the wider goal of combating global trafficking in persons. Pursuant to the Trafficking Victims Protection Act of 2000, the President established the President’s Interagency Task Force to Monitor and Combat Trafficking in Persons (PITF), a cabinet-level entity that consists of agencies across the federal government and is responsible for coordinating implementation of the Trafficking Victims Protection Act of 2000, among other activities. It is chaired by the Secretary of State; State, DOL, USAID, DOD, and Treasury are all PITF agencies. In addition, the Trafficking Victims Protection Act, as amended in 2003, established the Senior Policy Operating Group, which consists of senior officials designated as representatives of the PITF agencies. State, DOL, and USAID managed 120 counter-trafficking in persons projects carried out by implementing partners during fiscal year 2017, according to information provided by officials with these agencies. These projects, as identified by agency officials, ranged from those focused on counter-trafficking in persons to those in which counter-trafficking in persons was integrated into, but was not the primary goal of, the project.
At these agencies, project officers work with the implementing partner on the administration and technical guidance of the project, such as reviewing progress reports. Table 1 shows a summary of these agencies’ project information; appendix II provides more detailed information on all 120 projects. During fiscal year 2017, State managed 79 counter-trafficking projects, from those focused on individual countries to regional and global ones that covered several countries, with a total award amount of approximately $62 million, according to information provided by State officials. The State TIP Office managed 75 projects with a total award amount of around $57 million. Award amounts per project ranged from approximately $150,000 to $2.55 million. For example, the State TIP Office had 11 global projects totaling about $10 million and 6 regional projects in Africa amounting to about $4 million. The State TIP Office had two projects in Ghana that received the largest awards, approximately $2.5 million for each project. The State TIP Office had four projects in India amounting to around $3 million, and four in Thailand totaling around $2.35 million. In addition to the State TIP Office’s projects, State’s Bureau of Democracy, Human Rights, and Labor (DRL) managed four counter-trafficking projects with a reported total award amount of about $5 million, with two projects in Mauritania making up around 70 percent of DRL’s total awarded amount. DOL’s ILAB/OCFT managed six projects in fiscal year 2017 with a total award amount of approximately $31 million, according to DOL officials. These projects ranged from one scheduled to last for 5 years with an award amount of about $1 million, to one scheduled to last for about 4 years with an award amount of about $14 million. Three of DOL’s projects were global projects, while two others focused on two countries each and one project focused on one country.
USAID’s projects during fiscal year 2017 consisted of 2 regional projects in Asia and 33 individual projects in 22 different countries. Some of these USAID-identified projects were integrated projects with a broader development focus that includes USAID programmatic objectives other than counter-trafficking in persons. According to information provided by USAID officials, the award amount for all counter-trafficking in persons projects active in fiscal year 2017, including all integrated projects and standalone projects with a sole focus on combating trafficking in persons, totaled around $296 million; USAID’s committed funding to these projects’ activities related to counter-trafficking in persons was about $79 million as of September 2018. During fiscal year 2017, USAID focused on a few countries where the agency awarded multiple counter-trafficking projects, such as four projects in Nepal and four projects in Burma. According to officials, State, DOL, and USAID generally design projects to align with the “3Ps approach”—prevention, protection, and prosecution—and to consider trends and recommendations identified in agency reports on foreign governments’ counter-trafficking efforts. According to State’s publicly available information, the “3Ps” approach serves as the fundamental counter-trafficking in persons framework used around the world, and the U.S. government follows this approach to

1. prevent trafficking in persons through public awareness, outreach, education, and advocacy campaigns;

2. protect and assist victims by providing shelters as well as health, psychological, legal, and vocational services; and

3. investigate and prosecute trafficking in persons crimes by providing training and technical assistance for law enforcement officials, such as police, prosecutors, and judges.

State’s publicly available information on the 3Ps noted that prevention, protection, and prosecution efforts are closely intertwined.
Prosecution, for example, can function as a deterrent, potentially preventing the occurrence of human trafficking. Likewise, protection can empower those who have been exploited so that they are not victimized again once they re-enter society. A victim-centered prosecution that enables a survivor to participate in the prosecution is integral to protection efforts. In addition to the “3Ps,” a “4th P”—for partnership—serves as a complementary means to achieve progress across the “3Ps” and enlist all segments of society in the fight against human trafficking, according to State’s publicly available information. Addressing the partnerships element, USAID’s counter-trafficking policy seeks to increase coordination across a broad range of national, regional, and global stakeholders from civil society, government, the private sector, labor unions, media, and faith-based organizations. Monitoring is the collecting of data to determine whether a project is being implemented as intended and the tracking of progress through preselected performance indicators during the life of a project. State, DOL, and USAID use a number of similar tools—according to their current policies, guidance, and agency officials—to monitor the performance of their counter-trafficking in persons projects, including monitoring plans, indicators and targets, periodic progress reports, and final progress reports. The agencies also conduct site visits, but their policies vary on whether site visits are required for every project during implementation. Monitoring plan. The monitoring plan—according to monitoring policies of the three agencies—documents, among other things, all of the indicators and targets for the project as well as data collection frequency for each indicator. 
In addition, according to State TIP Office officials, the monitoring plan’s indicators and targets for TIP Office-managed counter-trafficking in persons projects are to be organized in a logic model, which is a visual representation that shows the linkages among the project’s goals, objectives, activities, outputs, and outcomes (see table 2). The logic model is intended to show relationships between what the project will do and what changes it expects to achieve. Indicators and targets. Performance indicators—according to monitoring policies of the three agencies—are used to monitor progress and measure actual results compared to expected results. Targets are to be set for each performance indicator to indicate the expected results over the course of each period of performance. According to agency officials, the monitoring plan documents the indicators and targets to be tracked and reported on through periodic progress reports to assess whether the project is likely to achieve the desired results. GAO has also found that a key attribute of effective performance measures is having a measurable target. Periodic progress reports. The reporting templates for the three agencies show that periodic progress reports—which are submitted at established intervals during the project’s implementation—compare actual to planned performance and indicate the progress made in accomplishing the goals and objectives of the project, including reporting on progress toward the monitoring plan’s indicator targets. Final progress report. The final progress report—according to monitoring policies of the agencies or agency officials—is a stand-alone report that provides a summary of the progress and achievements made during the life of the project. Site visits. The three agencies’ policies vary on whether site visits are required for every project during implementation.
For example, State’s policy notes that site visits may be conducted to review and evaluate recipient records, accomplishments, organizational procedures, and financial control systems, as well as to conduct interviews and provide technical assistance as necessary. In 2015, the State TIP Office established a goal to conduct at least one site visit during the lifetime of every project. While site visits during a project’s implementation are not required under DOL’s policy, DOL officials explained that they use site visits when deemed necessary to supplement information from other forms of oversight. USAID’s policy requires that a site visit be conducted for every project during implementation to provide activity oversight, inspect implementation progress and deliverables, verify monitoring data, and learn from activity implementation. In addition to these monitoring tools, State, USAID, and DOL officials told us that they rely on frequent communication with implementing partners as part of their monitoring process. Overall, monitoring is intended to help agencies determine whether the project is meeting its goals, update and adjust interventions and activities as needed, and ensure that funds are used responsibly. We found, based on our review of 54 selected counter-trafficking in persons projects (37 State, 3 DOL, and 14 USAID), that DOL and USAID had fully documented their performance monitoring activities, while State did not fully document its activities for 16 of 37 (43 percent) of the projects we reviewed, which had project start dates between fiscal years 2011 and 2016. DOL’s documented monitoring activities included the monitoring plan for each project as well as fiscal year 2017 semi-annual progress reports, including indicators and targets.
USAID’s documented monitoring activities included the monitoring plan for each project; fiscal year 2017 progress reports at the reporting frequency specified in the agreements for each project; the final progress report, including indicators and targets, for the three projects that ended as of December 2017; and evidence that at least one site visit was conducted during each project’s implementation. Overall, the three agencies reported having conducted at least one site visit during the lifetime of the project for 47 of 54 selected projects (87 percent). As shown in table 3, State did not fully document its monitoring activities (monitoring plan; fiscal year 2017 quarterly progress reports; and final progress report, including indicators and targets, for projects that ended as of December 2017) for 16 of the 37 selected projects we reviewed. Specifically, State lacked monitoring plans for nine projects, complete progress reports for five projects, and targets for each indicator in six of seven final progress reports for projects that ended as of December 2017. (See appendix III for detailed information on each of the 37 projects.) For the nine projects for which the monitoring plan was not documented, the State TIP Office indicated that it was unable to locate these documents or that they were not completed because the projects were finalized as the TIP Office was beginning to institute the monitoring plan requirement. Although TIP Office officials told us that the TIP Office piloted and began to phase in the monitoring plan requirement over the course of 2014 and early 2015, eight of the nine projects without monitoring plans started in September or October 2015. We found that each of the nine projects had a logic model used to report progress in the fiscal year 2017 quarterly progress reports we reviewed, which would have provided TIP Office officials a basis for monitoring project performance at that point.
However, federal standards for internal control call for agency management to design monitoring activities so that all transactions are completely and accurately recorded and so that management can evaluate project results. Specifically, internal control standards specify that monitoring should be ongoing throughout the life of the project, which is consistent with State’s current policy that generally requires completion of the monitoring plan prior to award. Without timely documentation of the monitoring plans at the start of the project, TIP Office officials may not be able to ensure that projects are achieving their goals, as intended, from the beginning of project operations. For the three projects for which the quarterly progress report for the first quarter of fiscal year 2017 had been partially completed, the State TIP Office indicated that the implementing partners began to use the TIP Office’s quarterly reporting template for subsequent reports after TIP Office officials instructed them to do so. For the one project for which the quarterly progress report was not completed for the third quarter of fiscal year 2017 and was only partially completed for the fourth quarter of fiscal year 2017, the project officer provided possible reasons why the documents were not in the project’s file, including that the implementing partner lacked the capacity to design a logic model. The project ended December 31, 2017. Federal standards for internal control call for agency management to design monitoring activities, such as performance reporting, so that all transactions are completely and accurately recorded, and project results can be continuously evaluated. As previously discussed, performance progress reports should compare actual to planned performance and indicate the progress made in accomplishing the goals and objectives of the project.
Therefore, the TIP Office may lack information needed to assess project performance if it does not have access to complete monitoring documentation. For the six projects for which targets were not fully documented in the final progress reports, we found that targets were lacking for 110 of 253 indicators (43 percent) across the six final progress reports. Our prior work on performance measurement identified 10 key attributes—such as having a measurable target—that GAO has found are important to successfully measuring a project’s performance. For example, our prior work has shown that numerical targets or other measurable values facilitate future assessments of whether overall goals and objectives are achieved because comparisons can be easily made between projected performance and actual results. State TIP Office officials explained that the final progress reports we reviewed lacked targets because the TIP Office had not required targets for each indicator for the projects we reviewed that started in fiscal years 2011 to 2016. State TIP Office officials also said that project officers may not have set targets due to limited resources in previous years. A lack of actual targets limits the TIP Office’s ability to assess project performance, including effectiveness, and to determine whether implementation is on track or whether any timely corrections or adjustments may be needed to improve project efficiency or effectiveness. According to State TIP Office officials, the TIP Office has taken steps to improve its documentation of monitoring activities, such as instituting a monitoring plan requirement; increasing staff, including hiring a monitoring and evaluation specialist; and developing standard templates for implementing partners to use for reporting.
Moreover, in November 2017, State established a new policy stating that, building on the logic model or project charter, bureaus and independent offices must set targets for each performance indicator to indicate the expected change over the course of each period of performance. It further notes that bureaus and independent offices should maintain documentation of project design, including the logic model. Additionally, State TIP Office officials said that State is developing a department-wide automated information management system (State Assistance Management System - Domestic, or SAMS-D) that officials expect to standardize entry of performance information and that, under the new system, targets must be recorded for each indicator. State TIP Office officials have worked to pilot-test SAMS-D and to provide feedback on the system, including suggestions to improve the completeness of data collection, according to TIP Office officials. Despite these efforts, the TIP Office’s documentation of all monitoring activities, and its implementation of the November 2017 requirement to set targets for all performance indicators, is uncertain. For example, even though the TIP Office informed us that it began to institute a monitoring plan requirement over the course of 2014 and early 2015, as previously noted, eight projects we reviewed that started in September or October 2015 did not have monitoring plans. In addition, according to State officials, in SAMS-D, targets could be recorded as “to be determined” and there are no controls in place to ensure that “to be determined” entries are replaced with actual targets. State officials said that SAMS-D has the capability to implement controls to alert users to update “to be determined” targets, but pilot users of SAMS-D, which include the TIP Office, have not yet provided feedback requesting this capability.
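A control of the kind described above, flagging "to be determined" entries so they are eventually replaced with actual targets, is straightforward to automate. The following sketch is purely illustrative; the field names, record structure, and placeholder values are assumptions for this example, not SAMS-D's actual schema or logic.

```python
# Illustrative check that flags performance indicators whose targets are
# missing or still recorded as a placeholder such as "to be determined".
# Field names and the placeholder list are assumptions for this sketch.

PLACEHOLDERS = {"", "tbd", "to be determined"}

def find_unset_targets(indicators):
    """Return the names of indicators lacking a concrete target value."""
    flagged = []
    for ind in indicators:
        target = ind.get("target")
        if target is None or str(target).strip().lower() in PLACEHOLDERS:
            flagged.append(ind["name"])
    return flagged

# Hypothetical indicator records for illustration.
indicators = [
    {"name": "criminal justice practitioners trained", "target": 120},
    {"name": "reintegration protocols developed", "target": "To Be Determined"},
    {"name": "shelters supported", "target": None},
]

print(find_unset_targets(indicators))
# -> ['reintegration protocols developed', 'shelters supported']
```

A periodic run of such a check over exported indicator data would alert project officers to targets that were never updated, the gap State officials described.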
Furthermore, State TIP Office officials informed us that the TIP Office cannot require all implementing partners to set targets, but that the TIP Office aspires to update relevant targets regularly in the future and would encourage implementing partners to update target values when appropriate. Without controls to ensure full documentation of monitoring activities and established performance targets, State is limited in its ability to assess project performance, including project efficiency or effectiveness. In our review of selected indicators in two State TIP Office and two USAID projects, we found that State and USAID used inconsistent and incomplete performance information to monitor these projects. We found that State TIP Office and USAID do not have sufficient controls in place to ensure that the performance information they use is reliable. In contrast, we found that DOL had consistent and complete performance information in a project we reviewed, and we identified no controls in DOL’s process that were insufficient for assuring the reliability of this information. For selected indicators in two State TIP Office and two USAID projects, we found numerous errors or omissions in progress reports we reviewed, which resulted in inconsistent and incomplete performance information agencies used to monitor these projects. Specifically, we found examples of inconsistent information, which included many instances in which quarterly indicator totals differed from annual or cumulative totals reported separately on the same projects, and numbers reported in narrative information that differed from numbers reported as indicator values. In addition, we found examples of incomplete information, including narrative elements that were missing in whole or in part. Inconsistent Performance Information. We found numerous instances in which quarterly totals differed from annual or cumulative totals reported separately on the same projects. 
When these errors occurred, it was not possible to independently determine project performance based on report information. For example:
For one State TIP Office project, reported cumulative progress overstated quarterly progress for at least 11 indicators (3 of them by 25 percent or more) and understated quarterly progress for at least 5 indicators (1 of them by 25 percent or more). For the indicator "number of standardized reintegration protocols/guidelines/tools developed (case forms, family assessment, etc.)," State's cumulative performance report as of the 4th quarter of fiscal year 2017 indicated that two tools had been developed, whereas quarterly reports showed that only one had been developed.
For one USAID project, the indicator "number of assisted communes allocating and accessing funds for trafficking in persons prevention activities" showed annual results of 60, while quarterly report data combined showed that the number was 6, which USAID officials confirmed was the correct figure.
For another USAID project, the indicator "number of food security private enterprises (for profit), producers organizations, water users associations, women's groups, trade and business associations, and community-based organizations receiving U.S. government assistance" showed an annual result of one, while quarterly totals combined showed a total of three, which USAID officials confirmed was the correct figure.
For the projects we reviewed, implementing partners produced narrative descriptions of progress made to accompany indicator results. We found cases in which numbers reported in narrative information were not consistent with numbers reported as indicator values. For example, for the State TIP Office indicator "number of criminal justice practitioners trained" for one project, indicator results for two quarters differed from results presented in the corresponding narrative during fiscal years 2016 to 2017.
State officials found that the narrative information was correct for one of these inconsistencies and the indicator result was correct for the other. In addition, for one USAID indicator—number of public awareness tools on trafficking in persons developed and disseminated—the narrative report for one quarter described distributions that added up to 21,765 products, while the reported quantitative indicator total was 21,482. USAID officials confirmed that 21,765 was the correct figure. Incomplete Performance Information. Additionally, some quarterly reports had narrative elements that were missing in whole or in part, which made independent interpretation of project performance difficult or impossible. The implementing partner in one State TIP Office project copied and pasted significant portions of narrative information in quarterly reports for 2 years and, according to State TIP Office officials, did not fulfill a request by the State TIP Office to include only current quarterly information in formal quarterly reports because it was focused on other activities. For nearly the entire period, the implementing partner indicated that it was "following up" with government entities in three countries to set up counter-trafficking in persons training for government officials, but the formal quarterly reports gave no indication of the results of any of these follow-up activities. For one State TIP Office project, the indicator "number of children receiving care, whose cases are reported to the police" had no narrative information, or incomplete narrative information, for three of the four quarters in which activity occurred during our period of review (comprising almost 90 percent of reported performance under this indicator).
For a USAID project, the implementing partner reported a combined performance number of approximately 200 from the first through third quarters of fiscal year 2017 for the indicator "number of members of producer organizations and community based organizations receiving U.S. government assistance." However, annual performance for fiscal year 2017 was reported as nearly 1,700 organizations. USAID officials explained that this difference was the result of the implementing partner's misinterpretation of the indicator's definition when producing the quarterly reports, but the annual report narrative did not explain this correction. Additionally, for USAID's indicator on the "number of public awareness tools on trafficking in persons developed and disseminated," no narrative information in the quarterly or annual reports explained how performance in the last quarter of fiscal year 2016 approximately doubled from that of the previous quarter. Narrative information in the annual report described performance for the year only in general terms and did not clarify this significant change. In addition to direct project oversight, State TIP Office and USAID officials stated that performance information from progress reports that the agencies use to monitor counter-trafficking in persons projects is regularly used for internal and external reporting, program decisions, and lessons learned. For example, according to officials, this information is used by senior agency officials to inform their decision-making, in reports such as the Attorney General's Annual Report to Congress and Assessment of U.S. Government Activities to Combat Trafficking in Persons, and to fulfill other requests from Congress.
Neither State TIP Office nor USAID has sufficient controls to ensure consistent and complete performance information, and both face challenges to data reliability stemming from information reported in nonstandard formats, implementing partners with limited capacities to report performance information, and the time-consuming nature of reviewing reported information. Federal internal control standards state that management should obtain data from reliable internal and external sources. According to these standards, reliable internal and external sources should provide data that are reasonably free from error and bias and faithfully represent what they purport to represent, and management should evaluate both internal and external sources of data for reliability. Without implementing additional controls to ensure that performance information is consistent and complete, State and USAID officials may not fully or accurately understand what projects are, or are not, achieving and, therefore, how their efforts could be altered as needed. Further, reports that are prepared or program decisions that are made using the TIP Office monitoring reports could be based on inconsistent or incomplete information that does not accurately present project results. State TIP Office currently receives performance information through documents submitted by implementing partners, although this information is not compiled into a single data system and is not in a standardized format. While State provides suggested templates for reporting information, officials said that they cannot require implementing organizations to use these templates, and we found that implementing partners provided information in varying formats. According to State TIP Office officials, project officers perform manual reviews of quantitative information in monitoring reports but have insufficient time to carry out detailed reviews of data reliability for all indicators.
State TIP Office project officers also stated that the process of comparing narrative information to indicator information was time-consuming and difficult. According to these officials, the quality of the information in progress reports also depends on the priorities and resources—which can be limited—of the implementing partner. In addition to reviewing progress reports, State project officers we spoke to said that they rely on site visits and frequent, less formal communication as part of their oversight process. Project officers for the State TIP Office projects we reviewed stated that they did not always examine performance trends over time or review consistency between reported cumulative totals—which should be the sums of the previous and current quarters' reported results—and quarterly totals, for reasons including the difficulty in assembling quarterly information in this manner and resource limitations. State TIP Office officials noted that they are aware of data quality problems in counter-trafficking in persons monitoring reports. State is developing SAMS-D, a system that officials expect to standardize entry of information from common performance indicators and logic models, according to State officials. These officials stated that if SAMS-D is deployed, State TIP Office could find it easier to analyze and revise logic models that implementing partners submit, as well as examine performance indicator results over time, since standardized data would be available in a centralized location. According to State officials, SAMS-D could be programmed with automatic checks or alerts under conditions defined by the TIP Office and the database programmer. For example, the system could require that fields be filled out in particular formats or provide an alert if performance under a certain indicator has significantly deviated from prior quarters or the indicator's target.
State TIP Office officials said they were uncertain whether SAMS-D would become operational in 2019, as currently planned. According to officials, State TIP Office has participated in planning and pilot activities for SAMS-D, including testing monitoring tools with implementing partners. According to these officials, additional work is needed to develop rules and controls necessary to operationalize SAMS-D to meet the TIP Office's particular needs and ensure improved data. Another challenge to implementation of SAMS-D, according to these officials, is that some implementing partners are unable to maintain consistent internet connections necessary to upload information, impeding full roll-out of the system, and an alternative upload mechanism does not yet exist. According to USAID officials, overseas missions currently set many of their own policies and procedures for data quality oversight. For the two projects we reviewed, USAID relied on implementing partners to manage information, while it reviewed this information in addition to conducting site visits and communicating with implementing partners on a regular basis to monitor the projects. USAID officials attributed errors in the project reports we reviewed to factors including implementing partners' errors in manual computation and misunderstandings of indicator definitions. According to USAID officials, data quality errors due to factors such as transcription errors can also occur in the performance information USAID uses to monitor counter-trafficking in persons projects. USAID project officers for the projects we reviewed said that they regularly conducted manual analysis of information received from implementing partners, but USAID and implementing partners are often pressed for time during the quarterly reporting cycle.
According to these project officers, some of the errors GAO found had already been identified by USAID implementing partners during their annual review process and corrected in the annual reports we reviewed. For example, for the USAID indicator "value of new private sector investments in select value chains," quarterly totals overstated corrected annual results by more than $120,000—approximately $170,000 instead of approximately $50,000. USAID officials said that they and the implementing partner had identified that the implementing partner was incorrectly including additional, unrelated data when producing its quarterly totals and, while the annual total had been corrected to approximately $50,000, the annual report did not indicate that this error had occurred in the quarterly reports. USAID officials noted that the quality of the information in the progress reports also depends on the experience and capacity—which can be limited—of the implementing partner. According to USAID officials, USAID is currently building the Development Information Solution (DIS), an agency-wide information system that would provide USAID's operating units (such as headquarters bureaus or field missions) with a tool to better collect, track, and analyze information to improve how they manage their projects and overall strategies. Implementing partners would be able to access the DIS via a portal where they would directly enter project information and upload reports and supporting information, according to USAID officials. In addition, this information would better inform USAID's decision-making at the operating unit level and agency level. A USAID official explained that USAID developed DIS partly as a result of USAID senior management's concern about the lack of one corporate system to collect data in a timely fashion and improve efficiency. A USAID official responsible for managing DIS informed us that the business case for DIS was approved in fiscal year 2016.
Developers have regularly solicited input from across the agency, according to this official, and a pilot with six missions is expected to begin in November 2018. This official explained that USAID plans to have DIS operational by the end of 2019, but DIS's timeframe has been accelerated by a year, to 2019 from 2020, which may create programming and budget challenges, and unexpected challenges may also arise during the pilot process as mission needs for DIS are more fully assessed. USAID is currently developing training, deployment, and communications plans to prepare the agency for implementing DIS, according to officials. We reviewed selected indicator and target information in one DOL project and identified no significant consistency or completeness issues beyond early project stages. For example, for the indicator "number of countries that ratify the International Labor Organization Protocol on Forced Labor," the October 2016 report contained no reported value for this indicator, while the subsequent report (April 2017) updated this figure to indicate a value of "4" for October 2016. DOL officials explained that a data reporting form had not yet been developed as of October 2016, but indicator performance was discussed in the October 2016 narrative and added to the data reporting form when it was developed. While DOL does not require that a project progress report discuss every indicator associated with an activity in the performance report narrative, according to officials, we found that explanations were present for every significant performance-related event that we identified for the fiscal year 2016 and fiscal year 2017 period. We did not identify any controls in DOL's process that were insufficient to ensure the reliability of performance monitoring information.
DOL officials said that they use a system of spreadsheets with automated calculations and validation checks that are intended to standardize information submission and assure consistency and completeness of submitted information. These officials said that the project's Comprehensive Monitoring and Evaluation Plan defines rules for how information for indicators is to be collected and how indicators are to be computed from this information. According to these officials, DOL develops a customized indicator reporting form for each project in conjunction with implementing partners, which implementing partners complete as part of their regular reporting requirements. According to these officials, these spreadsheets contain formula checks to mitigate the risk of implementing partners making undisclosed changes to indicator results and array information in a standardized manner across reporting periods. Officials also commented that for internal reporting purposes, such as reporting under the Government Performance and Results Act, project officers can extract information from indicator templates in a manner that is not overly burdensome. According to officials, DOL is developing an enhancement to existing tools, expected in late 2019, which will provide a traceable way to send and receive reports from grant recipients; timestamps when reports are sent, received, and accepted; and tracking of performance monitoring communications between DOL and implementing partners. They plan to continue to use a spreadsheet-based system for tracking indicator information. State TIP Office does not have a process to regularly review the number and content of indicators for counter-trafficking in persons projects to ensure that these indicators are useful and that collecting and reviewing information for them is not overly burdensome. State TIP Office officials acknowledged there are too many indicators for many counter-trafficking in persons projects.
Project officers have the discretion to revise indicators if the scope of the project is not altered, according to State officials. In addition, according to these officials, changes that alter the project scope are possible with the consent of the implementing partner. However, State TIP Office project officers do not formally indicate which indicators they have determined are most useful and informed us that they have insufficient time and resources to do so as projects progress. One official who focuses on monitoring issues stated that, ideally, there should be three to five indicators per activity, and efforts have been made to reduce the number of indicators in some projects. For example, one of the State TIP Office projects we reviewed—which was designed prior to the hiring of this official—had more than 230 indicators across 20 activities as of the first quarter of fiscal year 2017, a number that had been reduced to about 150 by the fourth quarter of fiscal year 2017. Our review of two State TIP Office projects showed that indicators did not change in some situations even when the project officer considered the indicator to have become less relevant. State project officers explained that, instead of relying only on indicator information, they regularly spoke with implementing partners to understand what performance level to expect. While acknowledging errors in the numerical information for some indicators, project officers for the two projects we reviewed said that they sometimes did not review all reported indicators in the quarterly progress reports because they consider some indicators to be less useful or unimportant, not needed for monitoring purposes, and burdensome to review in depth. These officials said that project officers focus on the indicators that they consider to be most important for project oversight or congressional requests.
State TIP Office officials said that logic models, which include indicators, have improved significantly in recent years (including improvements to the suggested logic model template and the glossary of definitions), partly due to the hiring of additional monitoring staff. However, State has found the analysis of logic models to be difficult because of the absence of centralized and standardized information and a lack of staff capacity. In addition, project officers stated that they often rely on implementing partners for suggestions with regard to changing indicators. However, according to State officials, these implementing partners may be reluctant to bring up challenges they encounter out of concern that doing so may damage their relationship with State. State's Program Design and Performance Management Toolkit, rolled out in 2017, states that indicators can be costly to collect and manage and should therefore be "useful," which includes having a clear utility for learning, tracking, informing decisions, or addressing ongoing program needs. This policy further states that indicators should also be "adequate," which includes having only as many indicators in the overall monitoring plan as are necessary and feasible to track key progress and results, inform decisions, conduct internal learning, and meet any external communication or reporting requirements. Further, federal internal control standards state that management should establish and operate monitoring activities, and, after doing so, may determine how often it is necessary to change the design of the internal control system as conditions change to effectively address objectives.
Without a process to ensure that the number and content of counter-trafficking in persons project indicators are reviewed and modified as needed, project monitoring may be less efficient and effective as implementing partners and State TIP Office staff spend time collecting and reviewing indicator information that is not useful for project monitoring and management. DOL and USAID had processes in place to regularly review indicators for the projects we selected. DOL officials told us that project officers work with subject-matter experts to review the relevance of indicators in each semi-annual reporting period. These officials also stated that grantees are required to review their monitoring and evaluation plan annually, which includes the project's indicators, and to provide the most recent work plan with each semi-annual report. According to DOL officials, while not a DOL requirement, the project we reviewed incorporated a work plan for each component of the project defining when important activities were planned under each output indicator. We found that DOL and the implementing partner made regular changes to these project plans in response to changing conditions. These plans were consistently included in the monitoring documents, and most elements were discussed in the associated narrative text. USAID conducts its project oversight primarily out of its overseas missions, according to USAID officials. According to USAID officials associated with the projects we reviewed, project officers should review a project's indicators annually, as well as when they determine a review is needed, such as when projects have changes in planned activities. USAID officials stated that this annual review process may be explicitly required in some agreements.
According to these officials, missions or other operating units are required to manage and update reference sheets for indicators, which officials said are intended to define each indicator and the information to be collected to measure each indicator. Changes to these reference sheets are tracked, according to these officials. Projects we reviewed showed evidence of regular changes to indicators and associated targets. We spoke to project officers about several specific changes that we had identified. For many of these changes, the project officers provided information about their work with implementing partners to appropriately adjust program goals and expectations, such as adapting the project indicators and targets to unexpected or changing conditions. Given the grave suffering of victims and damaging effects on society that trafficking in persons imposes, and the U.S. government's reliance on implementing partners to carry out its counter-trafficking projects, performance monitoring is important to ensure that the United States funds projects that are effective, efficient, and achieve their intended counter-trafficking goals. In fiscal year 2017, State, DOL, and USAID managed 120 counter-trafficking projects and monitored the performance of the projects. However, weaknesses in State's and USAID's monitoring processes limit their ability to collect reliable performance information and assess project performance. First, we found that the State TIP Office did not fully document its monitoring activities for many of the projects we reviewed that started between fiscal years 2011 and 2016. Monitoring the implementation of projects and fully documenting the results of such monitoring are key management controls to help ensure that project recipients use federal funds appropriately and effectively.
The State TIP Office was also not setting targets for some project indicators, which may have limited the TIP Office's ability to determine if implementation was on track or if corrections needed to be made. Furthermore, we found that the State TIP Office and USAID used project performance information reported by the implementing partners—used for internal and external reporting purposes—that was not always consistent or complete, and did not have sufficient controls to ensure the reliability of performance information. Finally, to ensure effective and efficient monitoring, projects need to establish a reasonable number of indicators and update them as needed. However, we found that the State TIP Office does not regularly evaluate and revise all of its indicators for counter-trafficking in persons projects, which can have large numbers of indicators. As a result, the State TIP Office may be using information to monitor project performance that is less useful and relevant for understanding project progress, and requires more resources and time for the implementing partners to produce and agency officials to review. State TIP Office officials noted that the TIP Office has taken steps to improve its monitoring process, and State and USAID officials explained that State and USAID are developing information management systems that may increase the quality and usefulness of the monitoring information they use. However, these systems are not fully designed or operational and their capabilities are not yet known. Thus, the potential of these systems to strengthen the ability of State and USAID to collect reliable performance information and assess their efforts to combat the serious problem of global trafficking in persons is unclear.
State and USAID could benefit from making additional improvements to ensure their projects are being implemented as intended and achieving project goals to prevent trafficking in persons, protect victims, and prosecute trafficking crimes. We are making a total of five recommendations, including four to State and one to USAID. Specifically:

The Secretary of State should ensure that the Director of the TIP Office establishes targets for each performance indicator. (Recommendation 1)

The Secretary of State should ensure that the Director of the TIP Office maintains documentation of all required monitoring activities, including monitoring plans, progress reports, and performance targets. (Recommendation 2)

The Secretary of State should ensure that the Director of the TIP Office establishes additional controls to improve the consistency and completeness of performance information that the TIP Office uses to monitor counter-trafficking in persons projects. (Recommendation 3)

The Secretary of State should ensure that the Director of the TIP Office establishes a process to review and update performance indicators, with the participation of implementing partners, to ensure that project monitoring remains efficient and effective. (Recommendation 4)

The Administrator of USAID should establish additional controls to improve the consistency and completeness of performance information that USAID uses to monitor counter-trafficking in persons projects. (Recommendation 5)

We provided a draft of this report to State, DOL, USAID, DOD, and the Treasury for review and comments. In State's and USAID's letters, reproduced in appendixes IV and V, respectively, both agencies concurred with our recommendations and described their planned actions to address the recommendations.
In addition, State’s letter indicated that our draft report did not fully recognize the investment State has made, and the changes underway, to improve the TIP Office’s performance measurement and ensure complete and consistent documentation. State cited additional dedicated financial and personnel resources for monitoring and evaluation added over the past two years. We acknowledge and report on these positive steps, including the hiring of a monitoring and evaluation specialist and other TIP Office staff, in our report. USAID’s letter included other comments that we have responded to in appendix V. Furthermore, State, DOL, USAID, and the Treasury provided technical comments, which we incorporated as appropriate. DOD had no comments. We are sending copies of this report to the appropriate congressional committees; the Secretaries of State, Labor, Defense, and Treasury; and the Administrator of USAID. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7141, or groverj@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. The National Defense Authorization Act for Fiscal Year 2017 includes a provision for GAO to report on the programs conducted by the Department of State (State), the Department of Labor (DOL), the United States Agency for International Development (USAID), the Department of Defense (DOD), and the Department of the Treasury (Treasury) that address human trafficking and modern slavery, including a detailed analysis of the effectiveness of such programs in limiting human trafficking and modern slavery. 
Three of these agencies—State, DOL, and USAID—have programs that design and award counter-trafficking projects to implementing partners, through contracts, grants, or cooperative agreements. These agencies then oversee and monitor these projects. Since DOD and Treasury officials did not identify these types of projects as part of their counter-trafficking in persons efforts, we provided background information on their efforts but did not cover these agencies in our reporting objectives. This report (1) identifies the recent projects in international counter-trafficking in persons that key U.S. agencies have awarded to implementing partners, and for selected projects, assesses the extent to which key agencies have (2) documented their monitoring activities, (3) ensured the reliability of the performance information they use in monitoring projects, and (4) reviewed the usefulness of the performance indicators they use in monitoring projects. To address these objectives, we reviewed relevant agency documents and interviewed agency officials. To report on agencies' programs, we asked knowledgeable officials at State, DOL, USAID, DOD, and Treasury to identify their projects that (1) had an international focus; (2) were delivered by implementing partners to external recipients, such as trafficking victims or host governments, as project beneficiaries; and (3) addressed trafficking in persons, modern slavery, or forced labor. Because State, DOL, and USAID managed such projects, we focus on them as the three key agencies for the purposes of our reporting objectives. According to officials from these three agencies, the projects they identified range from those with counter-trafficking in persons as a primary goal, to those in which this goal was integrated as part of each agency's activities.
We used the lists of projects that these agencies provided to report the relevant counter-trafficking projects that agencies awarded to implementing partners to carry out the projects. For our first objective, we determined the projects that were active during fiscal year 2017, including those which began, were ongoing, or ended during fiscal year 2017, and interviewed agency officials to confirm project information. To analyze the effectiveness of agencies’ programs in limiting human trafficking and modern slavery, we assessed the key agencies’ monitoring efforts for selected projects by examining the extent to which agencies have documented their monitoring activities, ensured the reliability of the performance information, and reviewed the usefulness of the performance indicators they use in monitoring projects. To assess the extent to which State, DOL, and USAID documented their monitoring activities for selected counter-trafficking in persons projects, we reviewed these agencies’ monitoring policies and related guidance as well as the full agreements for the projects to identify specific required monitoring activities. The policies and related guidance included State’s Grants Policy Directive Number 42 (GPD-42) related to monitoring assistance awards; Federal Assistance Policy Directive (FAPD), which according to a State official superseded State’s grants policy directives, including GPD-42; Federal Assistance Directive, which superseded the FAPD; Program Design and Performance Management Toolkit; and Program and Project Design, Monitoring, and Evaluation Policy. We also reviewed State’s Office to Monitor and Combat Trafficking in Persons standard operating procedures. For DOL, we reviewed its Management Procedures and Guidelines (MPG) as well as the Comprehensive Monitoring and Evaluation Plan Guidance Document referenced in the fiscal year 2017 MPG. 
For USAID, we reviewed—from its Automated Directives System or ADS—Chapter 203 on Assessing and Learning and Chapter 201 on Program Cycle Operational Policy, which according to USAID officials superseded Chapter 203. Once we determined what tools the agencies use to monitor their counter-trafficking in persons projects, we sought documentation of those tools to determine whether agencies were implementing those tools. To assess the agencies’ monitoring efforts, we identified all of State’s, DOL’s, and USAID’s projects that started before or during October 2015, which corresponds to the first quarter of fiscal year 2016, and were active through September 30, 2017, which corresponds to the fourth and last quarter of fiscal year 2017. This produced a list of a total of 57 State, DOL, and USAID projects. Out of these 57 projects, we excluded 3 projects from our selection for various reasons. We excluded one DOL project because DOL identified the project as being a research project for which certain agency performance monitoring requirements (e.g., indicators, targets) are not applicable. We also excluded two USAID projects because USAID identified each project as including several projects with various start and end dates, thus making it difficult to determine their time frames for inclusion in our report. This resulted in a selection of 54 projects—37 from State, 3 from DOL, and 14 from USAID. We reviewed documentation of key monitoring activities as specified in agency policy or the project award agreements to determine the extent to which the agencies had full documentation of key monitoring activities. We also applied federal standards for internal control, which call for agency management to design monitoring activities so that all transactions are completely and accurately recorded, and GAO’s key attributes of effective performance measures, specifically the attribute of having a numerical target. 
We made our determinations of the extent to which agencies had full documentation of key monitoring activities, as follows: State (37 projects). To determine whether State had fully documented its monitoring activities, we reviewed the monitoring plan for each project; fiscal year 2017 quarterly progress reports for each project; and the final progress report, including indicators and targets, for the seven projects that ended as of December 2017. We determined that State had “fully documented” the monitoring plan, if State provided a monitoring plan worksheet for the project. If State did not provide a monitoring plan worksheet for the project, we determined the monitoring plan was “not documented.” For each quarterly progress report for fiscal year 2017 as well as the final progress report for projects that ended as of December 2017, we determined that State had “fully documented” the report, if the report included both a qualitative and quantitative summary of progress. For the State TIP Office projects we reviewed, the qualitative summary of progress is captured in a narrative and the quantitative summary of progress is captured in the logic model. For the State DRL project we reviewed, the qualitative summary of progress is captured in a narrative and the quantitative summary of progress is captured in the monitoring plan. If either component—narrative or quantitative summary—was not documented, we determined that the report was “partially documented.” If both components were not documented, we determined that the report was “not documented.” We determined that State had “fully documented” indicators and targets for projects that ended as of December 2017, if the final progress report for the project included indicators as well as targets for each indicator. 
If the final progress report included indicators but did not specify targets for each indicator, we determined that indicators and targets were “partially documented.” If the final progress report did not include indicators and targets, we determined that indicators and targets were “not documented.” (We did not find any instances of “not documented.”) DOL (3 projects). To determine whether DOL had full documentation of its monitoring activities, we reviewed the monitoring plan as well as fiscal year 2017 semi-annual progress reports for each project. Because DOL’s three projects were ongoing as of December 2017, we reviewed the second semi-annual progress report for fiscal year 2017 to determine whether DOL had “fully documented” indicators and targets for each project. Overall, we determined that DOL had “fully documented” (1) the monitoring plan for each project, if the monitoring plan documented the performance metrics and data collection frequency for the project; (2) each fiscal year 2017 semi-annual progress report for the project, if the report included a qualitative and quantitative summary of progress for the period of performance; and (3) indicators and targets for the project, if the second semi-annual progress report included indicators as well as targets for each applicable indicator. USAID (14 projects). To determine whether USAID had full documentation of its monitoring activities, we reviewed the monitoring plan for each project; fiscal year 2017 progress reports at the reporting frequency specified in the agreements for each project; and the final progress report, including indicators and targets, for the three projects that ended as of December 2017. We also reviewed evidence of site visits conducted during the lifetime of the projects. 
Overall, we determined that USAID had “fully documented” (1) the monitoring plan for each project, if the monitoring plan documented performance metrics for the project; (2) the periodic progress reports for fiscal year 2017 as well as the final progress report for projects that ended as of December 2017, if the report included a qualitative and quantitative summary of progress for the period of performance; and (3) indicators and targets for the three projects that ended as of December 2017, if the final progress report included indicators as well as targets for each applicable indicator. We determined that USAID “fully documented” a project’s site visit, if USAID provided evidence of having conducted at least one site visit during the lifetime of the project. Additionally, we interviewed knowledgeable monitoring officials from each agency to understand agencies’ monitoring process and application of monitoring requirements for counter-trafficking in persons projects. Because State and DOL officials also identified site visits as a key tool they use to monitor their counter-trafficking in persons projects, we reviewed evidence of site visits conducted during the lifetime of the projects to report on these efforts. We also interviewed State TIP Office officials to discuss instances in which the agency did not have full documentation of key monitoring activities. To assess the extent to which key agencies have ensured the reliability of the performance information they use to monitor selected projects, we selected for review a nongeneralizable sample of 5 projects—2 State projects, 1 DOL project, and 2 USAID projects—out of the 54 counter-trafficking in persons projects identified by agencies that started before or during October 2015 and were active through fiscal year 2017. We based our selection of these projects primarily on largest total award amounts. 
For these selected projects, we obtained 2 years of progress reports and other documents to assess the quantitative and qualitative performance information. We developed a standardized template to capture all quarterly or semi-annual indicator performance information reported for each of these projects and assessed whether quarterly or semi-annual totals were consistent with annual and cumulative totals where these were reported. Using this quantitative information, we judgmentally selected indicators for inclusion in agency interviews where it appeared likely that numerical errors had occurred or there appeared to be significant project events, such as large over- or under-performance or the elimination of the indicator. We interviewed agency officials, including managers of these five projects, about the consistency and completeness of monitoring information in these projects for about 60 indicators identified through our analysis. Additionally, we questioned these officials about performance report narrative information describing project activities that, in our judgment, appeared to be incomplete or inconsistent with respect to indicator results. We also used these interviews to determine whether our findings for these selected projects reflected general agency policies and procedures. We assessed the completeness and consistency of project performance data that State, DOL, and USAID use to monitor projects as part of our data reliability assessment. We found State and USAID data to be unreliable in the projects we reviewed. We discuss the implications of these unreliable data for State and USAID’s project management and reporting in our findings and recommendations. We found the performance data that DOL used were consistent and complete for the project we reviewed. While we examined indicator data and narrative information for consistency and completeness, we did not verify the accuracy of performance information. 
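To illustrate the kind of quarterly-versus-annual consistency check described above, the sketch below compares reported quarterly indicator results against a reported annual total. This is not GAO's actual analysis template; the function name, the indicator, and all figures are hypothetical.

```python
# Minimal sketch of a quarterly-vs-annual consistency check of the kind
# described above. The indicator and all figures are hypothetical; this is
# not GAO's actual standardized template.

def check_annual_consistency(quarterly_results, reported_annual_total):
    """Return (is_consistent, computed_total) for one performance indicator."""
    computed_total = sum(quarterly_results)
    return computed_total == reported_annual_total, computed_total

# Hypothetical indicator: number of beneficiaries receiving services.
quarters = [12, 18, 9, 14]   # Q1-Q4 results from quarterly progress reports
annual_reported = 50         # annual total as stated in the annual report

consistent, computed = check_annual_consistency(quarters, annual_reported)
# The quarters sum to 53, not 50, so this indicator would be flagged
# for follow-up with agency officials and the implementing partner.
```

A check of this sort flags candidates for the interviews described above; it cannot by itself determine whether the quarterly figures or the annual total is the erroneous value.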
To assess the extent to which key agencies have reviewed the usefulness of the performance indicators they use to monitor selected projects, we used the same nongeneralizable sample of five projects—two State projects, one DOL project, and two USAID projects. We interviewed agency officials, including managers of these five projects, about processes and systems they use to review the usefulness of indicators on an ongoing basis, such as when conditions in the project activity region change or if the agency and implementing partner learn that certain project activities are less effective than expected. We identified examples of indicators that had apparently been discontinued, as well as continued indicators that showed minimal progress, and we asked these officials to explain why these indicators had or had not been discontinued. We also used these interviews to determine whether our findings for these selected projects reflected general agency policies and procedures. We conducted this performance audit from October 2017 to December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Departments of State (State) and Labor (DOL), and U.S. Agency for International Development (USAID) managed 120 projects in counter-trafficking in persons carried out by implementing partners during fiscal year 2017, according to information provided by officials with these agencies. The three agencies used different approaches to identify relevant projects. For example, State reported projects with a primary goal of counter-trafficking in persons, while DOL and USAID included projects that may not have counter-trafficking in persons as a primary goal. 
Table 4 lists these agencies’ reported project information for projects that were active during fiscal year 2017. The Department of State (State) did not fully document its monitoring activities (monitoring plan; fiscal year 2017 quarterly progress reports; and final progress report, including indicators and targets, for projects that ended as of December 2017) for 16 of the 37 selected projects we reviewed with start dates between fiscal years 2011 and 2016. (See table 5.) For example, State’s Office to Monitor and Combat Trafficking in Persons did not have monitoring plans for nine projects or targets for each indicator in six of seven final progress reports for projects that ended as of December 2017. 1. USAID commented that it does not believe that our draft report reflected the existing controls the USAID mission in Ghana shared with us, and that the mission had furnished us with a file that, according to USAID, contained correct information for all indicators and their results from the time the activity began until our audit. While the mission provided us with a spreadsheet, this document included only annual performance totals for several years without accompanying quarterly totals, or quarterly or annual narrative information. We focused our analysis on the quarterly and annual performance reports to understand the extent to which USAID was ensuring the consistency and completeness of performance information, including associated narratives, underlying its aggregate and higher-level performance reports. We reported on inconsistent or incomplete performance information only after discussing and substantiating the specific errors we identified with USAID officials. Further, we recognize USAID’s efforts to address errors that the agency identified prior to our review and we provide an example of such efforts in the report. 2. We have incorporated USAID’s comment. 
Our report no longer characterizes USAID’s regular activity monitoring and conversations with implementing partners as “informal.” 3. USAID noted that our report does not discuss how the USAID mission in Ghana uses its third-party monitoring project—Monitoring, Evaluation and Technical Support Services (METSS)—to work with local organizations to improve their collection and analysis of data. We have added a reference to USAID’s third-party monitoring project to the report where we discussed limited capacity of local partners as a cause of data reliability issues. 4. USAID commented that one of the Ghana counter-trafficking in persons indicators we examined in the integrated project (“value of new private sector investments in selected value-chains”), was not related to trafficking in persons and, therefore, was not directly related to the focus of our audit. As discussed in the Objectives, Scope, and Methodology section of our report (see app. I), we selected projects, including the integrated project in Ghana, based on a list of counter- trafficking in persons projects provided by USAID. Because the same operational policy that sets the monitoring and evaluation standards for the agency applied to all indicators within a given project, we examined available quarterly or semi-annual indicator data for all reported indicators in selected projects to determine the completeness and consistency of the data. We then conducted interviews with agency officials to discuss instances in which we identified potentially incomplete and inconsistent performance information, as well as whether our findings about the management of performance information for these selected projects reflected general agency policies and procedures. In addition to the contact named above, Leslie Holen (Assistant Director), Victoria Lin (Analyst-in-Charge), Esther Toledo, and Andrew Kurtzman made key contributions to this report. 
The team benefited from the expert advice and assistance of Neil Doherty, Justin Fisher, Benjamin Licht, Grace Lui, and Aldo Salerno. Human Trafficking: State Has Made Improvements in Its Annual Report but Does Not Explicitly Explain Certain Tier Rankings or Changes, GAO-17-56 (Washington, D.C.: December 5, 2016). Human Trafficking: Oversight of Contractors’ Use of Foreign Workers in High-Risk Environments Needs to Be Strengthened, GAO-15-102 (Washington, D.C.: November 18, 2014). Human Trafficking: Monitoring and Evaluation of International Projects Are Limited, but Experts Suggest Improvements, GAO-07-1034 (Washington, D.C.: July 26, 2007). Human Trafficking: Better Data, Strategy, and Reporting Needed to Enhance U.S. Antitrafficking Efforts Abroad, GAO-06-825 (Washington, D.C.: July 18, 2006).
Human trafficking is a pervasive problem throughout the world. Victims are often held against their will in slave-like conditions. The National Defense Authorization Act for Fiscal Year 2017 includes a provision for GAO to report on the programs conducted by specific agencies, including State, DOL, and USAID, that address trafficking in persons. Among other objectives, this report (1) identifies the recent projects in international counter-trafficking in persons that key U.S. agencies have awarded to implementing partners; and, for selected projects, assesses the extent to which key agencies have (2) documented their monitoring activities and (3) ensured the reliability of project performance information. GAO reviewed State, DOL, and USAID project documents and interviewed agency officials. GAO reviewed monitoring documents for 54 of the 57 projects that were active from the beginning of fiscal year 2016 through the end of fiscal year 2017. Of these 54 projects, GAO selected a nongeneralizable sample of 5 projects, based primarily on largest total award amounts, for review of the reliability of project performance information. The Departments of State (State), Labor (DOL), and the U.S. Agency for International Development (USAID)—through agreements with implementing partners—managed 120 international counter-trafficking in persons projects during fiscal year 2017. GAO reviewed a selection of 54 counter-trafficking projects (37 State, 3 DOL, and 14 USAID), and found that DOL and USAID had fully documented their monitoring activities, while State had not. All three agencies used similar tools to monitor the performance of their projects, such as monitoring plans, performance indicators and targets, progress reports, and site visits. GAO found, however, that State did not fully document its monitoring activities for 16 of its 37 projects (43 percent). 
GAO found that State did not have the monitoring plans or complete progress reports for one-third of its projects and often lacked targets for performance indicators in its final progress reports. State officials said they had not required targets for each performance indicator for the projects GAO reviewed, or had not set targets due to limited resources in prior years. State has taken steps to improve its monitoring efforts, including issuing a November 2017 policy that requires targets to be set for each performance indicator and developing an automated data system that would require targets to be recorded. However, because the pilot data system allows targets to be recorded as “to be determined” and does not have controls to ensure entry of actual targets, it is uncertain whether performance targets will be regularly recorded. Without full documentation of monitoring activities and established performance targets, State has limited ability to assess project performance, including project efficiency or effectiveness. GAO reviewed the reliability of project performance information for 5 of the 54 counter-trafficking projects (2 State, 1 DOL, and 2 USAID) and found that State and USAID used inconsistent and incomplete performance information, while DOL used consistent and complete information. For example, some quarterly indicator results in State and USAID progress reports were inconsistent with annual total results, and narrative explanations for significant deviations from performance targets were sometimes not present in quarterly reports. According to agency officials, performance information from these projects is regularly used not only for direct project oversight but also for internal and external reporting, program decisions, and lessons learned. GAO found that State's and USAID's processes lack sufficient controls to ensure the reliability of project performance information, but did not find inadequate controls in DOL's process. 
For example, neither State nor USAID consistently used automated checks on indicator results to ensure consistency and completeness of performance indicator result calculations. In contrast, DOL used automated checks as part of its process. Without implementing controls to ensure that performance information is consistent and complete, State and USAID officials cannot fully or accurately understand what projects are, or are not, achieving, and how their efforts might be improved. GAO is making four recommendations to State and one recommendation to USAID, including that both agencies establish additional controls to improve the consistency and completeness of project performance information, and that State maintain monitoring activity documentation and establish targets for each performance indicator. State and USAID concur with GAO's recommendations.
Export credit agencies such as the Bank are usually government agencies, although some private institutions operate export credit programs on their respective governments’ behalf, according to a Bank report on global export credit competition. These agencies offer financing for domestic companies to make sales to foreign buyers, in the form of products such as loans, guarantees, and insurance for exporters, according to the Organisation for Economic Co-operation and Development, which monitors international export credit activity. The Bank is one of several federal agencies promoting U.S. exports. According to the Bank, as of December 31, 2016, it had identified 96 export credit agencies worldwide. There have been significant changes in the role of export credit agencies since 2007 and the global financial crisis and the European debt crisis, according to the Bank. This is because ready access to credit before the global financial crisis has given way to caution in lending among private-sector banks, and also because other nations have adopted export credit agencies as a tool for national growth. For fiscal year 2014—which the Bank says is the most recent year in which it operated with full authority— the Bank reported authorizing nearly $20.5 billion in financing in support of an estimated $27.5 billion worth of U.S. exports and nearly 165,000 American jobs. For fiscal year 2017, operating under reduced authority, the Bank reported authorizing more than $3.4 billion in financing to support $7.4 billion of exports and an estimated 40,000 jobs. The Bank, which has about 430 employees, was established under the Export-Import Bank Act of 1945. Under the act, the Bank must have a “reasonable assurance” of repayment when providing financing; it must supplement, and not compete with, private capital; and it must provide terms that are competitive with foreign export credit agencies. 
Also relevant to whether the Bank provides assistance is whether foreign competitors of the U.S. exporter are receiving export credit assistance from their home nations, and thus the American exporter would need assistance to stay competitive. Over time, Congress has directed the Bank to support certain specific types of exports. Such requirements include using at least 25 percent of its authority to finance small-business exports; promoting exports related to renewable energy sources; and promoting financing for sub-Saharan Africa. As described in figure 1, to support U.S. exports, the Bank offers four major types of financing: direct loans, loan guarantees, export-credit insurance, and working capital guarantees. Bank products generally have three maturity periods: Short-term transactions are for less than 1 year; medium-term transactions are from 1 to 7 years long; and long-term transactions are more than 7 years. For fiscal year 2017, the Bank reported it had exposure in 166 countries. Figure 2 shows Bank exposure by product type, geographic region, and economic sector, for fiscal year 2017. Its greatest exposure, by product type, was in loan guarantees. By geographic region, the largest exposure was in the Asian market. By economic sector, exposure was greatest in aircraft products. Because the Bank’s mission is to support U.S. jobs through exports, there are foreign-content eligibility criteria and limitations on the level of foreign content that may be included in a Bank financing package. For medium- and long-term transactions, for example, the Bank limits its support to 85 percent of the value of goods and services in a U.S. supply contract, or 100 percent of the U.S. content of an export contract, whichever is less. There are also requirements that certain products supported by the Bank must be shipped only on U.S.-flagged vessels. Defaults occur when transaction participants fail to meet their financial obligations. 
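The medium- and long-term foreign-content limit described above reduces to a lesser-of calculation. The sketch below is a hypothetical illustration of that arithmetic, not the Bank's actual underwriting logic, and all dollar figures are invented examples.

```python
# Hedged sketch of the foreign-content limitation described above: support is
# capped at the lesser of 85 percent of the supply-contract value or 100 percent
# of the contract's U.S. content. Figures are hypothetical, not actual Bank
# transactions.

def max_bank_support(contract_value, us_content_value):
    """Maximum financing under the stated medium-/long-term content limit."""
    return min(0.85 * contract_value, us_content_value)

# A $10 million contract with $9 million of U.S. content: the 85 percent cap binds.
high_us_content = max_bank_support(10_000_000, 9_000_000)   # 8,500,000
# The same contract with only $6 million of U.S. content: the U.S.-content cap binds.
low_us_content = max_bank_support(10_000_000, 6_000_000)    # 6,000,000
```

As the two cases show, the binding constraint depends on whether U.S. content exceeds 85 percent of the contract's value.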
The Bank must report default rates to Congress quarterly. It calculates the default rate as overdue payments divided by financing provided. If the rate is 2 percent or more for a quarter, the Bank may not exceed the amount of loans, guarantees, and insurance outstanding on the last day of that quarter until the rate falls under 2 percent. As of March 31, 2018, the Bank reported its default rate at 0.438 percent. The Bank is overseen by a Board of Directors (the Board), which has a key role in approving Bank transactions, because directors must approve medium- and long-term transactions of greater than $10 million. Since July 2015, however, the Board has lacked a quorum (at least three members), which has precluded approval of these large transactions. Also due to the lack of a quorum, new transaction activity has shifted away from larger transactions, according to Bank managers. The Bank’s total exposure has recently declined by about a third, from $113.8 billion at the end of fiscal year 2013 to $72.5 billion at the close of fiscal year 2017, according to the Bank. During the period in which the Board has lacked a quorum and been unable to approve large transactions, the amount of earnings the Bank has transferred to the Department of the Treasury has declined steadily, according to Bank figures. Since 2012, the amount the Bank transferred to the Treasury peaked at $1.1 billion in fiscal year 2013. In successive years, that transfer fell to $674.7 million in fiscal year 2014, $431.6 million in fiscal year 2015, and $283.9 million in fiscal year 2016, before reaching zero in fiscal year 2017. As the Board vacancies have continued, a backlog of Board-level transactions has grown, reaching an estimated $42.2 billion as of December 2017. The Board also has a key role in risk management, with members serving on the Bank’s Risk Management Committee, which oversees portfolio stress testing and risk exposure, according to the Bank. 
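The default-rate calculation and the 2 percent statutory threshold described above can be sketched as follows. The dollar amounts are hypothetical stand-ins chosen to reproduce a 0.438 percent rate, not the Bank's actual reported figures.

```python
# Hedged sketch of the quarterly default-rate test described above: overdue
# payments divided by financing provided, compared against the 2 percent
# threshold that constrains new lending. Dollar amounts are hypothetical.

DEFAULT_RATE_CAP = 0.02  # 2 percent threshold reported to Congress quarterly

def default_rate(overdue_payments, financing_provided):
    """Default rate as defined above: overdue payments / financing provided."""
    return overdue_payments / financing_provided

rate = default_rate(438_000_000, 100_000_000_000)  # hypothetical amounts
cap_exceeded = rate >= DEFAULT_RATE_CAP
# rate is 0.00438 (0.438 percent), well below the 2 percent cap, so the
# outstanding-balance restriction would not apply.
```

When `cap_exceeded` is true, the rule described above would bar the Bank from exceeding its end-of-quarter outstanding amount until the rate falls back under 2 percent.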
Board members also approve the appointment of the chief risk officer (CRO), the chief ethics officer, and members of advisory committees. During the course of our review, in addition to the Board quorum issue, Bank senior leadership changed. According to the Bank, the following took place: The acting chairman of the Board and president of the Bank resigned. The vice chairman, first vice president, and acting agency head also later resigned. Subsequently, a new executive vice president, chief operating officer, and acting agency head was named. Following that, an acting president and Board chairman was named. Fraud and “fraud risk” are distinct concepts. Fraud—obtaining something of value through willful misrepresentation—is challenging to detect because of its deceptive nature. Fraud risk exists when individuals have an opportunity to engage in fraudulent activity, have an incentive or are under pressure to commit fraud, or are able to rationalize committing fraud. When fraud risks can be identified and mitigated, fraud may be less likely to occur. Although the occurrence of fraud indicates there is a fraud risk, a fraud risk can exist even if actual fraud has not yet been identified or occurred. According to federal standards and guidance, executive-branch agency managers are responsible for managing fraud risks and implementing practices for combating those risks. Federal internal control standards call for agency management officials to assess the internal and external risks their entities face as they seek to achieve their objectives. The standards state that as part of this overall assessment, management should consider the potential for fraud when identifying, analyzing, and responding to risks. Risk management is a formal and disciplined practice for addressing risk and reducing it to an acceptable level. We issued our Fraud Risk Framework in July 2015. 
The Fraud Risk Framework provides a comprehensive set of leading practices, arranged in four components, which serve as a guide for agency managers developing efforts to combat fraud in a strategic, risk-based manner. The Fraud Risk Framework is also aligned with Principle 8 (“Assess Fraud Risk”) of the Green Book. The Fraud Risk Framework describes leading practices in four components: commit, assess, design and implement, and evaluate and adapt, as depicted in figure 3. The Fraud Reduction and Data Analytics Act of 2015, enacted in June 2016, requires the Office of Management and Budget (OMB) to establish guidelines for federal agencies to create controls to identify and assess fraud risks, and to design and implement antifraud control activities. The act also requires OMB to incorporate the leading practices of the Fraud Risk Framework in those guidelines. In July 2016, OMB published guidance on enterprise risk management and internal controls in federal executive departments and agencies. Among other things, this guidance affirms that managers should adhere to the leading practices identified in the Fraud Risk Framework. The act also requires federal agencies to submit to Congress a progress report each year, for 3 consecutive years, on implementation of the controls established under the OMB guidelines. The Bank has identified a dedicated entity to lead fraud risk management activities, as called for in the first component of GAO’s Fraud Risk Framework. In addition, employees generally have a positive view of antifraud efforts across the Bank, according to our employee survey. However, we also found that management and staff have differing views on key aspects of the Bank’s antifraud culture. In particular, we identified issues inconsistent with the notion of “an antifraud tone that permeates the organizational culture,” as the Fraud Risk Framework calls for, in which there is agreement across the organization on key fraud issues and practices. 
These areas of disagreement on aspects of the Bank’s antifraud culture include how active the Bank should be in preventing, detecting, and addressing fraud; and the adequacy of time for underwriting, which the Bank says is its primary safeguard against fraud. Bank managers said that our findings provide an opportunity for additional staff training on fraud issues. The Bank has identified two managers who serve as a dedicated entity for leading fraud risk management activities, managers told us. These are a vice president of the Credit Review and Compliance division (CRC) and an assistant general counsel in the Bank’s Office of the General Counsel (OGC). According to Bank managers, they work together under the direction of the CRO, who was permanently named to the position on a part-time basis in September 2016. GAO’s Fraud Risk Framework provides that the dedicated entity can be an individual or a team, depending on the needs of the agency. Hence, the Bank’s arrangement is consistent with the framework. Before recently identifying the two managers as the dedicated entity, Bank managers told us there was no centralized entity responsible for fraud risk management. Likewise, the Bank’s written procedures for preventing, detecting, and prosecuting fraud, dated February 2015, stated that there is no “central figure in charge” of such efforts. The CRO told us that he oversees the two managers in their work as the dedicated entity. We also found that the two managers named to form the dedicated entity are involved in one of the key activities contemplated by the Fraud Risk Framework. Overall, these activities include serving as a repository of knowledge on fraud risks and controls; leading or assisting with trainings and other fraud-awareness activities; and coordinating antifraud initiatives. The two managers have helped develop and provide training, some of which is mandatory and targeted directly at fraud issues, managers told us.
The Bank provides semiannual fraud training through OGC for claims-processing staff, Bank managers also said. Other training, while nominally not directed at fraud, can nevertheless involve fraud issues, Bank managers told us. For instance, managers told us recent training on shipping matters included a review of fraudulent shipping documentation, which is one way fraud can be perpetrated. GAO’s Fraud Risk Framework calls for creating an organizational culture to combat fraud, such as by demonstrating senior-level commitment to fighting fraud and involving all levels of the agency in setting an antifraud tone. Bank managers, in interviews, and staff, in our employee survey, generally expressed positive views of the Bank’s antifraud culture. For example, according to Bank managers, the Bank has maintained an antifraud culture, which they attribute to factors including: fraud and ethics training; internal controls; tone set at the top by management; a realization after fraud cases in the 2000s that the Bank cannot be solely reactive to fraud; and the pursuit of fraud cases by the Bank and its OIG. Our survey results indicate that Bank employees also generally have a positive view of antifraud tone across the Bank and attention paid to combating fraud. For example: Eighty percent said Bank management in general has established a clear antifraud tone, to the extent of “a great deal” or “a lot.” Employees said that based on senior management’s actions, preventing, detecting, and addressing fraud is “extremely” or “very” important to the Bank (86 percent). Staff expressed “a great deal” or “a lot” of confidence in senior management (76 percent), managers in their division (85 percent), and their peers (82 percent), to respond to fraud on a timely and appropriate basis. Illustrative Comments from GAO’s Survey of Bank Employees “The Bank has become much more sensitized to the risks of fraud over the last 10 years.” “The progress made on combating fraud is tremendous. 
When I started, no one really cared, and fraud was common…. Now, blatant attempts at fraud are a rarity.” “There is a high degree of concern at all levels of the Bank regarding potential fraud, which has resulted in good oversight.” We also found indications of disagreement among managers and staff about how active the Bank should be in preventing, detecting, and addressing fraud. Overall, Bank managers told us, the Bank’s current approach has been appropriate for dealing with fraud. In particular, an OGC manager told us that with its underwriting and due diligence standards—the process for assessing and evaluating an application before approval—and established fraud procedures, the Bank has an appropriate strategy to mitigate fraud risks it knows about or envisions occurring. However, about one-third of survey respondents (35 percent) said the Bank should be “much more active” or “somewhat more active” in preventing, detecting, and addressing fraud. Less than half (44 percent) said the current level of activity should remain the same. Asked whether the Bank’s current approach for overseeing fraud and fraud risk, as they perceive it based on the responsibilities of the various parties involved, is the most effective way to do so, about 6 in 10 (62 percent) said yes. While Bank managers characterized our survey results as positive, these divergent views indicate room for strengthening antifraud culture, in light of the Fraud Risk Framework’s goal of achieving shared views across the organization. Illustrative Comments from GAO’s Survey of Bank Employees “The Bank should be much more active in preventing, detecting, and addressing fraud, because the Bank handles business transactions that involve taxpayers’ money.” “The Bank needs more funding for technology to help with fraud prevention and additional Bank staff to spot/monitor fraud.” “The first- and second-level managers have not done all they could to ensure fraud prevention.
The front-line credit officers are the ones in the best position to detect fraud and management does not always support it.” “A more proactive approach to fraud detection, rather than a reactive approach, would be more prudent. This means trying to sniff out fraud [in] the preapplication and underwriting stages.” Another area where we identified differing views is in the adequacy of time for underwriting. Preapproval underwriting, and the due diligence done as part of that process, is the Bank’s main control against fraud, according to Bank managers and procedures. However, during our review, Bank managers also acknowledged in interviews that their business involves potentially competing objectives: performing sufficient due diligence to prevent and detect fraud prior to approving transactions, while still processing transactions in a timely manner to meet customers’ needs and achieve the Bank’s mission. Some comments we received in our employee survey illustrated the tension between the competing objectives of thorough due diligence and timely processing of transactions. Illustrative Comments from GAO’s Survey of Bank Employees “Detecting fraud is a very high priority, as is appropriate. But overemphasis on managing that risk would lead to a sense of paranoia when approaching any new risk.” “Given all the other obligations we have, even more time spent on fraud detection means less time for other transaction-related work, with only marginal benefit.” “Risk is part of the business, and being overly cautious leads to never taking any risk and consequently not serving the customers.” “Fraud is important to discuss, but it should not become the main force driving the organization. There needs to be more of a risk-based analysis when determining how much to concentrate on fraud.” According to a Bank report on global export credit competition, transaction processing time is an important factor in customers’ decisions to choose the Bank over foreign export-financing agencies.
In recent years, the Bank has significantly reduced processing time. Bank statistics show that the percentage of transactions completed in 30 days or fewer grew from 57 percent in fiscal year 2009 to 91 percent in fiscal year 2016. For 100 days or fewer, the rate has increased from 90 percent to 99 percent over the same period. Bank managers told us they seek to strike the right balance between the competing objectives and believe they have done so. For example, according to the CRC division, the Bank chooses to perform some of its fraud-detection and mitigation activities after application approval—such as through reviews of transactions selected on both a random and risk-based basis—in order to not unduly delay processing applications. Under Bank practices, document review can be abbreviated, and, after underwriting approval, lenders may accept certain transaction documentation, such as invoices or shipping documents, at face value unless something appears suspicious, managers told us. In the particular case of processing short- and medium-term transactions, the Bank is alert to “red flag” items—known warning signs, such as use of nonbank financial institutions, or participants that are trading entities rather than original equipment manufacturers, managers told us. But otherwise, the Bank limits the extent of its application investigation, according to the Bank’s OGC. In particular, as the Bank’s OGC told us, the Bank is required by law to make medium-term offerings a “simple product.” There is pressure both legally and commercially to process transactions quickly, because, otherwise, an exporter could lose its business opportunity, the Bank’s OGC told us. In many of these transactions, both the exporter and buyer are small, the OGC also said, so it is more difficult to get information. As a result, according to the OGC, the Bank relies more on self-reporting by transaction parties.
For these reasons, the Bank’s OGC told us, for both short- and medium-term products, there are not as many “inherent checks and balances” in the process. We note that based on previous GAO work, self-reporting can present an opportunity for fraud. However, our survey results suggest that significant portions of Bank staff question whether the Bank is striking the right balance in providing sufficient time for preapproval review of transactions. Specifically, Bank staff raised concerns about the amount of time dedicated to the key task of preapproval review of applications. For each of the Bank’s three major product maturity categories, we asked whether the application process provides enough time for Bank staff to conduct thorough due diligence on potential fraud risks. For short-term products—which Bank managers said, as a category in general, have been the most susceptible to fraud recently—less than half (47 percent) said there is “always” or “usually” enough time; and about 20 percent said there is “sometimes,” “seldom,” or “never” enough time. For both medium- and long-term products, about 6 in 10 (56 percent and 61 percent, respectively) said the application process “always” or “usually” provides enough time. As noted, while Bank managers characterized our survey results as positive, these views indicate an opportunity for the Bank to further set an antifraud tone that permeates the organizational culture. Illustrative Comments from GAO’s Survey of Bank Employees “More due diligence should be required in order to qualify for the U.S. government’s support.” “The Bank is more concerned with increasing sales than preventing fraud.” Our survey also identified that while nearly half (48 percent) of respondents rated fraud as a “very significant” or “significant” risk to the Bank, there may be misunderstanding among employees on where responsibility lies for fraud risk management. 
We asked employees to describe the extent to which each of six offices or groups—OGC, the OIG, the Office of Risk Management, Bank senior management, all Bank staff and managers collectively, or others—is responsible for overseeing fraud risk management activities at the Bank. The OIG received the highest response, with 73 percent saying it has “a great deal of responsibility.” Bank managers told us this result is to be expected, because staff associate issues of fraud with the OIG. However, these survey results suggest confusion—lack of a shared view, from the standpoint of antifraud culture—around the OIG’s role, which includes investigating suspected fraud, rather than overseeing the Bank’s fraud risk management activities. The OIG acknowledged to us that its role does not include responsibility for overseeing fraud risk management activities at the Bank. Asked about our findings overall, Bank managers told us they view our survey results as positive because the results indicate employees have a strong awareness of fraud and the risk it presents to the Bank. For example, regarding the results about the role of the OIG, they noted that staff are actively encouraged to report suspected fraud through channels—first to OGC, for subsequent referral to the OIG. Thus, employees would understand the OIG as being responsive to fraud, and Bank managers believe this likely accounts for the survey result. Nevertheless, they said, our survey results provide an opportunity for more detailed training, to better communicate with staff. In particular, the Bank managers told us such training would focus on the Bank’s approach to fraud, plus the Bank’s organizational structure for addressing fraud. The training will also clarify that the OIG has an investigative function as well as an auditing function, they said. Our employee survey results underscore the potential benefit of further fraud training.
Among respondents who said they have received fraud or fraud risk-related training provided by the Bank in the last 2 years, three-quarters said it was “extremely” or “very” relevant to their job duties. Nearly two-thirds (63 percent) said it was “extremely” or “very” useful to their duties. Overall, about half (52 percent) of respondents said fraud or fraud risk-related information obtained from management, or any Bank resources, has increased their understanding of fraud “a great deal” or “a lot.” The differences we identified in perceptions of fraud risk and fraud management responsibilities do not, by themselves, implicate the performance of any particular antifraud control, or suggest that any additional control is necessary. However, to the extent views on significant antifraud issues, such as how active the Bank should be in preventing, detecting, and addressing fraud, or adequacy of time devoted to underwriting, differ across the organization, the Bank cannot ensure that it is best setting an antifraud tone that permeates the organizational culture, as provided in the Fraud Risk Framework. In particular, as the framework describes, antifraud tone and culture are important parts of effective fraud risk management. These elements can provide an imperative among peers within an organization to address fraud risks, rather than have the organization rely solely on top-down directives. The Bank has taken some steps to assess fraud risk. However, it has not conducted a fraud risk assessment, tailored to its operations, or created a fraud risk profile, both as provided in the second component of GAO’s Fraud Risk Framework. Further, under the framework, recent changes in the Bank’s operating environment indicate a heightened need to do so. 
We also found that although the Bank has been compiling a “risk register” intended to catalog risks it faces across the organization, this compilation does not include some known fraud risks, indicating that the Bank’s assessment is incomplete. In addition, we found that while the Bank has adopted a general position on the degree of risk it will tolerate, its current risk tolerance is not specific and measurable, as provided by federal internal control standards. Bank managers told us they will revise their fraud risk management practices to fully adopt the Fraud Risk Framework. A leading practice of the Fraud Risk Framework calls for agencies to conduct fraud risk assessments at regular intervals, as well as when there are changes to the program or operating environment, because assessing fraud risks is an iterative process. Managers should determine where fraud can occur and the types of internal and external fraud the program faces. This includes an assessment of the likelihood and impact of fraud risks inherent to the program; that is, both fraud risks known through fraud that has been experienced and other fraud risks that can be identified based on the nature of the program. According to a Bank report, FY2016 Enterprise Risk Assessment, the Bank is more susceptible to fraud, due to “the nature of the Bank’s mission, the high volume of transactions it executes, and the need for various groups within the Bank to work together to successfully defend against fraud.” The Bank’s short- and medium-term products are more susceptible to fraud, according to Bank managers. Other indicators of fraud, according to the managers, include domestic geography, such as transactions that involve truck shipments; international geography, since conducting adequate due diligence can be more difficult in remote locations; and transactions in which there are smaller, less well-known parties on both sides.
In this environment, the Bank has taken some steps to assess known fraud risks. Generally, the Bank’s practice has been to assess particular fraud risks and lessons learned following specific instances of fraud encountered, according to Bank managers. Because it has focused on fraud already encountered, the Bank’s practice has not been of the comprehensive nature provided in the Fraud Risk Framework. As an example of its current approach, according to Bank managers, the Bank experienced “significant fraud” in the early 2000s. This was chiefly in the medium-term program, and to a lesser degree, the short-term program, the managers said. As a result, the Bank made changes that reduced the fraud significantly, they said. Otherwise, according to the CRO, fraud has been addressed within product lines, as appropriate. Under its current approach, the Bank’s risk assessments do not include areas where fraud has not already been detected, according to Bank managers. They acknowledged that approach could expose the Bank to fraud risks for activities not yet discovered. A key difference between the Bank’s current approach, as illustrated above, and leading practices as provided in the Fraud Risk Framework, can be seen in how fraud risks are assessed. As described later, the Bank has been compiling risks it faces across the organization, with fraud risk among them. These efforts have focused on soliciting views of Bank staff. By contrast, the framework envisions a more comprehensive approach. Effective fraud risk assessments identify specific tools, methods, and sources for gathering information about fraud risks, according to the framework. Among other things, this can include data on trends from monitoring and detection activities. Under the framework, programs might develop surveys that specifically address fraud risks and related control activities. 
It may be possible, the framework suggests, to conduct focus groups, or engage relevant stakeholders, both internal and external, in one-on-one interviews or brainstorming about types of fraud risks. Thus, we found, the Bank’s current process for assessing fraud risk has been generally reactive and episodic, rather than regularly planned and comprehensive. Rather than adopt a more proactive approach, the Bank has instead relied on the normal processing and review of transactions—which build in experience with previous fraud schemes—as the truest test for identifying fraud issues or concerns, according to Bank managers. Recent changes in the Bank’s program and operating environment also heighten the need for comprehensively assessing fraud risks, according to the Fraud Risk Framework. Such changes include the Bank’s inability to approve large transactions due to the absence of a quorum. This has meant transaction activity has shifted to smaller transactions, which carry a greater risk of fraud, according to Bank managers. Additionally, Congress recently mandated that the Bank increase its focus on small businesses, whose transactions present a different risk profile than those of the Bank’s large customers, according to Bank managers. Further, the Bank’s transaction backlog could also become an issue in the future. If a Board quorum is restored, there could be pressure to process transactions quickly in order to clear the backlog, which could undermine the quality of the underwriting process, according to documentation from the Office of the CRO. According to our review, the Bank’s current antifraud controls further the goal of protecting Bank resources and providing “reasonable assurance” of repayment. However, without planning and conducting regular fraud risk assessments, as identified in GAO’s Fraud Risk Framework, the Bank is vulnerable to not identifying material risks that can hurt performance or its ability to fulfill its mission.
As Bank managers acknowledged to us, the Bank faces acute reputational risk if new instances of large or otherwise significant fraud emerge. The Bank has taken some steps in an effort to identify, manage, and respond to risks, including those related to fraud. It has been developing a “risk register”—a compilation of risks across the organization. It has also recently completed an “enterprise risk assessment” through an outside consultant. However, these efforts do not reach the full extent of the relevant leading practices of the Fraud Risk Framework. Specifically, the framework calls for agencies to identify the inherent fraud risks of a program, examine the suitability of existing fraud controls, and then prioritize “residual” fraud risks—that is, risks remaining after antifraud controls are adopted. For the risk register, individual business units contribute items, such as indicating types of risk and likelihood, and methods to mitigate the risk. The register, through the Bank’s Office of Risk Management, notes the risk of fraudulent deals generally, characterizing the likelihood as “somewhat likely,” but having the possibility of “major” financial, operational, legal, and reputational impacts. However, particular methods of fraud known to the Bank through experience—such as applicants submitting fraudulent documentation—are absent thus far. This indicates the register is incomplete, from the standpoint of identifying where fraud can occur and the types of internal and external fraud risks the program faces, as provided in GAO’s Fraud Risk Framework. Other inherent fraud risks, such as those posed by the Bank’s more limited understanding of transactions made when it delegates lending authority to other institutions, are also absent from its risk register. Work continues on developing the risk register, Bank managers told us. However, adoption of the risk register has been delayed, due to a reorganization of Bank management and the vacancies on the Board.
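The framework’s sequence of identifying inherent fraud risks, examining the suitability of existing controls, and then prioritizing residual risks can be sketched in code. The entries, rating scales, and scores below are hypothetical illustrations, not items from the Bank’s actual risk register:

```python
from dataclasses import dataclass

@dataclass
class FraudRisk:
    """One hypothetical risk-register entry, rated on simple 1-5 scales."""
    name: str
    likelihood: int        # inherent likelihood: 1 (rare) to 5 (almost certain)
    impact: int            # inherent impact: 1 (minor) to 5 (severe)
    control_strength: int  # effectiveness of existing controls: 0 (none) to 4 (strong)

    @property
    def inherent_score(self) -> int:
        # Inherent risk: likelihood paired with impact, before controls.
        return self.likelihood * self.impact

    @property
    def residual_score(self) -> int:
        # Residual risk: what remains after existing controls are considered.
        return max(self.inherent_score - self.control_strength * 2, 1)

# Hypothetical register entries, including known fraud methods.
register = [
    FraudRisk("Fraudulent shipping documentation", 3, 4, 3),
    FraudRisk("Fraudulent deals (general)", 3, 4, 2),
    FraudRisk("Delegated-authority transactions", 2, 4, 1),
]

# Prioritize by residual risk, as the framework's sequence calls for.
for risk in sorted(register, key=lambda r: r.residual_score, reverse=True):
    print(f"{risk.name}: inherent={risk.inherent_score}, residual={risk.residual_score}")
```

In a scheme like this, two risks with the same inherent score can rank differently on residual risk depending on the strength of existing controls, which is why the framework directs agencies to examine controls before prioritizing.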
Without a more comprehensive assessment of inherent fraud risks, the Bank cannot be assured of the extent to which existing controls effectively mitigate inherent risks. According to the chief risk officer, the Bank’s risk register is part of a more wide-ranging “enterprise risk management” strategy, which includes documenting a range of risks across the organization, including fraud. In March 2017, as part of this strategy, the Bank completed the enterprise risk assessment. Based on assessments by senior Bank managers, it identifies fraud risk—defined as a “significant and high-profile fraud” conducted against the Bank—as one among a range of risks facing the Bank. Consistent with Bank managers’ representations to us, the enterprise risk assessment ranks the likelihood of fraud risk as low against other risks the Bank faces—fourth out of five among “operational” risks, and 24th out of 26 total identified risks. Figure 4 depicts how the Bank evaluates these operational risks in a schematic that pairs the likelihood of each event with its expected impact if it were to occur. In this context, fraud risk is the least prominent risk among the top operational risks identified. In addition to operational risks, the enterprise risk assessment also details six high risks facing the Bank overall. Among them are new or unfamiliar deal structures, which may present increased repayment risk; and doing business in new and unfamiliar technologies, sectors, and industries where the Bank has limited experience. Although fraud is not explicitly identified as a risk, we note these new activities could provide an opening for those seeking to commit fraud. During our review, Bank managers maintained that the enterprise risk assessment represents a “comprehensive fraud risk assessment” undertaken by the Bank. They also, however, acknowledged that this assessment does not contain all the elements of a fraud risk assessment as described in GAO’s Fraud Risk Framework.
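The likelihood-and-impact pairing that figure 4 depicts amounts to scoring each risk on two axes and ranking by the combined score. A minimal sketch follows; the risk names and scores are hypothetical and do not reproduce the actual assessment’s categories or values:

```python
# Hypothetical operational risks, each scored (likelihood, impact) on 1-5 scales.
risks = {
    "Cybersecurity incident": (4, 5),
    "Key-person departure": (4, 3),
    "Process failure": (3, 3),
    "Significant, high-profile fraud": (2, 4),
}

# Rank by combined score (likelihood * impact), highest first.
ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for rank, (name, (likelihood, impact)) in enumerate(ranked, start=1):
    print(f"{rank}. {name} (likelihood={likelihood}, impact={impact})")
```

Under this kind of scoring, a risk judged unlikely but high-impact, such as the fraud entry here, can rank below more frequent operational risks, which mirrors the low relative ranking of fraud risk the report describes.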
For instance, as noted, the Bank has not conducted a comprehensive assessment of inherent fraud risks, tailored to its operations. We note that because, as described above, the Bank has not undertaken a fraud risk assessment as envisioned by the Fraud Risk Framework, its ranking of fraud risk compared to other risks may change after it has completed such an assessment. This is because a comprehensive assessment may identify new fraud risks or produce revised assessments of known fraud risks, both of which could affect relative rankings of other risks. A leading practice of the Fraud Risk Framework calls for agencies to determine fraud risk tolerance. Further, federal internal control standards state that managers should consider defining risk tolerances that are specific and measurable. In addition, under the framework, tolerance cannot be determined until the agency has identified inherent fraud risks and assessed their likelihood or impact. As part of its overall risk management activities, the Bank has adopted a general position on its fraud risk tolerance. Specifically, Bank managers told us that, by its nature, the Bank accepts more risk than the commercial sector; and some level of fraud is to be expected because it is not reasonable to eliminate all fraud in its programs. The instances of fraud encountered by the Bank in recent years have centered on small exposures, according to Bank managers. Thus, the current level of fraud the Bank experiences is “defensible,” given the Bank’s mission and number of transactions it undertakes, according to the CRO. Bank managers said that fraud activity has steadily declined over the last decade, based on what they cited as fraud indicators that are reviewed by the Bank’s OGC. Bank managers also pointed to claims as another indication of declining fraud activity. Transaction participants file claims for losses covered under Bank loan guarantee and insurance products, such as if a borrower fails to make required payments.
The Bank considers fraud to be a subset of transactions that result in claims, and managers cited declining claims activity over the last decade as an indirect measure of fraud activity. Table 1 shows a history of claims paid for fiscal years 2008 through 2017. Overall, Bank managers told us that in light of the decline in fraud they described, the task facing the Bank is to make sure that staff do not lose their focus on fraud and become too comfortable. We asked the Bank to provide statistics supporting the claimed long-term decline in fraud activity, based on fraud indicators. In response, managers told us the indicators are actually not “precise or numerical measures.” Instead, OGC noted the office is aware of fraud activity through “consultations and general sense of day-to-day business.” As for claims, we note that not all fraud activity may result in claims. Consequently, an analysis of claims alone may not reveal a complete or accurate view of fraud activity. In addition, although Bank statistics we reviewed show a decline in number of claims filed from fiscal year 2014 through nearly the end of fiscal year 2017, the decline is likely attributable to the lapse in the Bank’s authority in fiscal year 2015, according to a Bank report. While the Bank has adopted a general position on its fraud risk tolerance—that the current level of fraud is defensible, given the Bank’s mission—its current risk tolerances are not specific and measurable. Without more specific and measurable risk tolerances, the Bank cannot be assured of the extent to which any fraud risks exceed the Bank’s fraud risk tolerance. For example, a measurable risk tolerance could express willingness to tolerate an estimated amount of potentially fraudulent activity, given resource constraints in eliminating all fraud risks. 
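A fraud risk tolerance that is specific and measurable, as federal internal control standards describe, could take the form of a numeric threshold rather than a qualitative judgment. The function, dollar figures, and threshold below are hypothetical illustrations, not values the Bank has adopted:

```python
def fraud_rate_within_tolerance(fraud_losses: float,
                                total_authorizations: float,
                                tolerance: float) -> bool:
    """Return True if estimated fraud losses, as a share of total
    authorizations, stay within the stated tolerance threshold."""
    rate = fraud_losses / total_authorizations
    return rate <= tolerance

# Hypothetical check: $2 million in estimated fraud losses against
# $5 billion authorized, with a stated tolerance of 0.1 percent.
print(fraud_rate_within_tolerance(2_000_000, 5_000_000_000, 0.001))  # True: 0.04% <= 0.1%
```

Expressed this way, managers could report whether observed fraud activity exceeds the stated tolerance in a given period, rather than relying on a general sense that the current level is “defensible.”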
After initially telling us that the Bank’s fraud risk management practices are working well and do not need modification, Bank managers later told us they will revise their approach. They now plan to conduct periodic fraud risk assessments and assess risks to determine a fraud risk profile, as provided in GAO’s Fraud Risk Framework, they said. Asked what prompted the changes, the CRO attributed them to our inquiries plus the Bank’s own growing experience with enterprise risk management. Bank managers also noted that since 2013, there has been an evolution in Bank antifraud controls, as part of what they refer to as a continuous improvement process. Specifically, the Bank’s new effort will include a range of new fraud management activities, according to the managers, starting with a fraud risk assessment and also including determining a fraud risk profile, on a priority-risk basis. The Bank also plans to identify residual risks and mitigating factors. In addition, according to the managers, this new work in addressing fraud risk is planned to include developing specific fraud risk tolerance or tolerances, with a metric for measuring such tolerance. As for implementation of the planned new approach, Bank managers stated they plan to complete a fraud risk assessment by December 2018 and to determine the Bank’s fraud risk profile by February 2019. However, Bank managers did not provide us with documentation describing in detail how they plan to ensure their fraud risk assessments and fraud risk profile are consistent with GAO’s Fraud Risk Framework. For example, we requested documentation of any specific plans to adopt any of the four components of GAO’s framework. Bank managers told us they plan to work with an outside consultant, and provided an outline of planned activities. 
However, the information did not describe how the Bank will ensure its risk assessments and profile include a full range of inherent fraud risks, including known fraud risks that are absent from its current risk register. Similarly, the managers did not provide documentation describing how the Bank’s fraud risk assessments and profile will include risk tolerances that are specific and measurable. Our employee survey results highlight the importance of the Bank’s planned new approach. In comments, some respondents noted the changing nature of fraud, underscoring the importance of taking a wider, more proactive approach to fraud, which the Fraud Risk Framework encourages. Illustrative Comments from GAO’s Survey of Bank Employees “There are tricks that financial fraudsters would use that many of our staff are unaware of.” “The biggest risk is that we cease to see fraud controls as an ever-evolving process.” “Types of fraud are constantly changing.” “To assume that thieves don’t evolve is inane, and to assume that you have the best, most evolved mechanisms for combating fraud is presumptuous.” Given the importance, under a more proactive approach, of being able to identify and react to new forms of fraud, we also asked employees how well they believe Bank senior management understands new or changing ways of attempting or committing fraud. About two-thirds (67 percent) said senior Bank management understands “very well” or “for the most part,” with the remaining respondents undecided or believing otherwise. The Bank has instituted a number of antifraud controls but has not developed an antifraud strategy based on a fraud risk profile, or implemented specific control activities to achieve such a strategy. This is because, as discussed earlier, it has not yet completed a fraud risk assessment tailored to its operations. 
As described in the third component of GAO's Fraud Risk Framework, agencies should design and implement a strategy with specific control activities to address risks identified in the fraud risk assessment. We also found the Bank has opportunities to improve antifraud controls through greater fraud awareness and use of data analytics. Leading practices for fraud risk management under the third component include fraud awareness and data analytics activities, which can enhance the agency's ability to prevent and detect fraud. The Bank currently employs a number of antifraud controls, both before and after transaction approval, which Bank managers told us include:

Specific antifraud activities within individual business units, as they operate their respective programs.

Review of transactions, including checking for fraud activity, following transaction approval.

Later-stage review, such as examinations and recommendations by the Bank's OIG.

Preapproval antifraud efforts: Underwriting is the initial step in preventing fraud, and underwriters have a heightened awareness of fraud and irregularities, Bank managers told us. Under the Bank's antifraud procedures, underwriters in the business units should be aware of fraud risks in their transactions and be alert to indications of fraud. Prior to approval, transactions and their participants go through several evaluations. These can assist underwriters in preventing fraud, according to Bank procedures. Figure 5 describes selected preapproval evaluations. According to the Bank, additional preapproval measures include analyzing lenders, focusing on sufficiency of due diligence or what appears to be a high level of claims; requiring collateral on most medium-term transactions; not allowing online applications to proceed unless applicants provide required information; and using a two-step approval process, in which both the underwriter and the underwriter's supervisor must approve certain transactions.
Postapproval antifraud efforts: Postapproval monitoring is generally not directed specifically at fraud, but plays a key role in fraud detection. Specifically, Bank managers told us that the Bank typically learns of fraud through the claims process—that is, after transactions are approved. Figure 6 describes postapproval monitoring. Later, third parties, such as the Bank's OIG, review transactions and operations, the chief risk officer told us. The Bank has developed a policy and expectations for employee conduct in matters of possible fraud, imposing a duty to report any "suspicion" of fraud to OGC or the OIG. In particular, OGC is not selective about what information it passes to OIG, a manager told us—anything about Bank transactions is referred, no matter the strength of the evidence. In our employee survey, some respondents expressed concern that there is reliance on postapproval monitoring, versus greater scrutiny at the time of application.

Illustrative Comments from GAO's Survey of Bank Employees:

The current division of responsibilities "is not the most effective way for the Bank to oversee fraud and fraud risk, as responsibility needs to be given to the teams on the front end—such as the individual relationship managers and loan officers—not on the back end."

The current arrangement "seems to be more of an after-the-fact approach to potentially (if reluctantly) detecting fraud than any proactive encouragement to actively prevent fraud."

Although the Bank has instituted these pre- and postapproval antifraud controls, they may not provide the most effective protection available. According to GAO's Fraud Risk Framework, the leading practice is for agencies to design and implement antifraud controls based on a strategy determined after performing a fraud risk assessment and creating a fraud risk profile. However, as previously discussed, the Bank has not yet completed such an assessment to determine such a profile.
Consequently, the Bank cannot develop an antifraud strategy and associated controls that meet the leading practice until it has completed a fraud risk assessment and documented the results in a fraud risk profile. As noted earlier, Bank managers told us they now recognize the need to conduct assessments and develop a fraud risk profile for the Bank, and that they plan to complete this work by February 2019. They further told us that, after conducting a risk assessment and developing a fraud risk profile, they plan to design and implement antifraud controls as may be indicated by the assessment, in keeping with the framework’s third component. Until the Bank creates an antifraud strategy based explicitly on a fraud risk assessment and corresponding fraud risk profile, and has designed and implemented specific control activities to prevent and detect fraud, it is at risk of failing to address fraud vulnerabilities that could hurt its performance, undermine its reputation, or impair its ability to fulfill its mission. As provided in GAO’s Fraud Risk Framework, increasing awareness of potential fraud schemes can serve a preventive purpose, by helping to create a culture of integrity and compliance, as well as to enable staff to better detect potential fraud. The Bank currently takes some steps to share information on fraud risks across the institution, through a variety of mechanisms, but it has opportunities to further improve information sharing to build fraud awareness. Training, cited earlier, is a leading practice of the Fraud Risk Framework, by which an agency can build fraud awareness. In particular, the framework cites requiring that all employees, including managers, attend training when hired and then on an ongoing basis thereafter. As discussed earlier, the Bank now conducts some training, and Bank managers told us they see our survey results as an opportunity to provide additional training. 
By extending training requirements to all employees, the Bank can seek to build awareness as broadly as possible, and with that, further reinforce antifraud tone and culture. Currently, according to our assessment of information the Bank provided, it does not offer dedicated fraud training across the organization, for all employees and on an ongoing basis. Another way to build fraud awareness is information sharing. For example, a manager in the Bank's OGC told us he monitors fraud activity and communicates relevant fraud-related information to other units in the Bank, based on considerations such as whether a situation could be repeated in other cases. However, there are limitations in information sharing. For example, the Bank's OGC told us it restricts how widely it shares information on parties placed on an internally generated "watch list" of parties that should be scrutinized. The Bank also cannot share information provided by OIG on parties in a confidential law enforcement database as being under investigation, managers said, because those parties may not know they are under investigation. The reasons for such caution, according to managers, include the Privacy Act of 1974 and fear of creating a "de facto debarment list" absent any formal findings of fraud. In addition, CRC division managers told us that when the division discovers fraud-related information, it communicates such information to appropriate Bank staff. Despite concerns, we found there are opportunities for greater compilation and sharing of information, and employees said in our survey that they believe wider sharing of fraud-related information would be beneficial to building fraud awareness and performing their duties. For example, one way of boosting fraud awareness would be if Bank managers comprehensively tracked referrals of suspected fraud matters to the OIG and shared case outcomes with Bank staff, Bank managers told us.
However, Bank managers told us they do not currently maintain and share such information on cases of suspected fraud referred to the OIG. Relatedly, GAO's Fraud Risk Framework notes the opportunity for an agency to collaborate with its OIG when planning or conducting training, and promoting the results of successful OIG investigations internally. Some program managers also told us maintaining a repository of known fraud cases could aid in compliance and transaction approvals, but the Bank does not maintain and share this information with staff. In addition, as Bank managers acknowledge, compiling and maintaining information collected through the Bank's database checks on transaction participants could serve as a library of useful information. However, Bank managers told us they do not currently maintain and share such information. In our survey, we asked employees whether Bank management provides any information on outcomes of fraud cases involving the Bank or Bank staff. Nearly half of respondents (49 percent) said no. About a third (35 percent) said yes. Among a subset of employees who reported that their job duties include direct responsibility for fraud matters, the "Yes" figure was higher but still less than a majority (41 percent). Some survey respondents noted a lack of information sharing about fraud practices and case outcomes, including that staff processing transactions must rely on personal memory for fraud issues that arose in previous transactions.

Illustrative Comments from GAO's Survey of Bank Employees:

"In some cases, there is no way to track bad actors or suspected fraudsters unless someone working the new transaction remembers that there was an issue with the actor in a previous transaction."

"Management seems to not want to discuss any fraud with staff. Instead, they should use the opportunity to educate staff about fraud that occurs and show the consequences that result. They need to be more open."

"While the Bank has put a lot of best practices in place, more could be done to more regularly communicate to staff about changing practices in committing and detecting fraud."

"Outcomes are rarely relayed to staff."

Underscoring the value of sharing information, our survey also found that when Bank management does share fraud-related information, Bank staff tend to find it useful in carrying out their duties. For those reporting that management does share fraud information, more than half of respondents (54 percent) said they found such information was "extremely" or "very" helpful in their job duties. Similarly, for those who reported they can readily access fraud-related information on their own from internal Bank resources, nearly two-thirds (63 percent) said the information was "extremely" or "very" helpful. In response to our inquiries, Bank managers said they plan to evaluate the feasibility of maintaining and sharing case outcome and database query information. In addition, they said OGC is exploring how it might share more fraud-related information, but in a protected way. In particular, the Bank wants to be able to share information on "integrity factors," especially at the underwriting level. One way to do this might be distribution of fraud case studies as a refresher for staff, they said. Until the Bank makes greater efforts to share information on known fraud schemes or bad actors, the Bank forgoes the opportunity, as described in the Fraud Risk Framework, to build staff awareness that could enhance antifraud efforts in these ways. For example, by not sharing the outcomes of suspected fraud matters referred to the OIG, the Bank forgoes the opportunity to build awareness through lessons learned from actual cases, which could give staff especially relevant insight into future attempts at fraud.
GAO’s Fraud Risk Framework cites data analytics as a leading practice for preventing and detecting fraud; in particular, to mitigate the likelihood and impact of fraud. We found the Bank makes limited use of data analytics for antifraud purposes. For example, it conducts analyses of claims cases, according to Bank managers, and, as noted earlier, considers fraud to be a subset of transactions that result in claims. Documentation of such activity provided to us by the Bank includes analyses and statistical summaries, such as number and types of claims filed, and tallies of claim decisions (for example, approved, denied). However, the Bank does not perform the additional data-analytics activities described as leading practices in the Fraud Risk Framework. According to one manager, the Bank does not perform data analytics on its transaction-related data because the Bank OIG does not provide a specific transaction number (or “deal number”) necessary to link fraud cases it successfully pursues to the specific transactions from which the OIG action arises. Without that link, the Bank cannot distinguish transactions proven to be fraudulent from other, nonfraudulent transactions in its data, the Bank manager said. The link would be necessary for data-analytics purposes, the manager said. This inability to tie proven fraud cases to individual transactions, based on inability to obtain the key identifying information from the OIG, is a significant weakness in the Bank’s postapproval transaction monitoring, the manager further said. The Bank and its OIG take different views on this linking information. The Bank has asked the OIG to provide these specific transaction numbers in an effort to link proven fraud cases to its transaction data, according to one Bank manager. OIG officials, meanwhile, told us they always notify the Bank when a conviction is made, and provide as much information as possible and appropriate under the circumstances, including company name and individual name.
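As an illustrative aside, the linkage at issue here can be sketched in a few lines of Python. This is not Bank code; the record layout, field names (deal_number, company), and sample values are hypothetical assumptions, chosen only to show why name-based matching is ambiguous while an exact transaction identifier is not.

```python
# Hypothetical sketch (not Bank code): linking proven fraud cases to
# transaction records. All field names and sample records are invented.

transactions = [
    {"deal_number": "T-1001", "company": "Acme Exports"},
    {"deal_number": "T-1002", "company": "Beta Trading"},
    {"deal_number": "T-1003", "company": "Acme Exports"},
]

# Case data as the OIG currently provides it: a company name, no deal number.
oig_cases_by_name = [{"company": "Acme Exports"}]

# Name-based matching is ambiguous: one company may have many transactions,
# so every Acme deal matches, fraudulent or not.
name_matches = [t["deal_number"] for t in transactions
                if any(c["company"] == t["company"] for c in oig_cases_by_name)]
print(name_matches)  # matches both Acme deals

# With a deal number attached to the case, the link is exact.
oig_cases_with_id = [{"deal_number": "T-1003", "company": "Acme Exports"}]
fraud_ids = {c["deal_number"] for c in oig_cases_with_id}
exact_matches = [t for t in transactions if t["deal_number"] in fraud_ids]
print(exact_matches)  # only the one proven-fraud transaction
```

As the sketch shows, a shared company name can match several transactions while a deal number identifies exactly one; that distinction is the substance of the disagreement between the Bank and its OIG described above.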
OIG officials also noted that, even without the specific transaction number the Bank requests, the Bank should nevertheless be able to use OIG-provided case data to search its own transaction files and successfully locate corresponding transactions. In response to our inquiries, Bank managers said they are now considering a move into data analytics, including predictive analytics, to guard against fraud. However, until the Bank has a feasible and cost-effective means of linking OIG cases to specific transactions, its ability to use data analytics for antifraud purposes will be limited. Without the ability to make use of data analytics, the Bank forgoes the opportunity to develop a best-practices antifraud tool that could aid in identifying potential fraud retrospectively, on transactions already approved, or prospectively, in advance of approval. The fourth and final component of GAO’s Fraud Risk Framework calls for ongoing monitoring and periodic evaluations of the effectiveness of antifraud controls. This monitoring and evaluation should be from the specific perspective of antifraud controls established based on a comprehensive fraud risk assessment. Such activities can serve as an early warning system to help identify and resolve issues in fraud risk management—whether they involve current controls or prospective changes. Ongoing monitoring and periodic evaluations provide assurances to managers that they are effectively preventing, detecting, and responding to potential fraud. Further, according to the framework, effective monitoring and evaluation focuses on measuring outcomes and progress toward achieving objectives. Because the Bank has not completed a comprehensive fraud risk assessment, or designed antifraud controls based on such an assessment, it is not in a position to fulfill this final component. Even so, we found the Bank does not generally evaluate the effectiveness or efficiency of its current fraud risk management practices.
For example, OGC and CRC managers—who form the dedicated entity for managing fraud risks (as described earlier in component one)—both told us they are unaware of any procedure to periodically assess the effectiveness of the Bank’s fraud risk management policies. In addition, the Bank currently has no formal method for tracking fraud activity, according to a Bank manager. Thus, the Bank is not in a position to explicitly judge the effectiveness of antifraud controls. Further, as described earlier, Bank managers told us the fraud indicators they do track are not precise or numerical measures and that, instead, OGC is aware of fraud activity through a general sense of daily business. Following our inquiries, Bank managers told us they plan to revise their approach to monitoring, evaluating, and adapting their fraud risk management practices. They said they now plan to evaluate the effectiveness of those practices, following adoption of the second and third components of GAO’s Fraud Risk Framework, and with the intent to adapt controls as necessary, in accordance with the framework’s fourth component. Timing will depend on implementation of the underlying fraud risk assessment, Bank managers told us. The Bank cannot be assured that its antifraud controls are optimal until it has fulfilled component four of GAO’s Fraud Risk Framework in the comprehensive fashion envisioned, following previous full implementation of components two and three. In particular, it cannot be assured that current practices are adequate, based on inherent program risks. Proactively and strategically managing fraud risks can aid the Bank’s mission of supporting American jobs by facilitating U.S. exports, by reducing not only the risk of financial loss to the government, but also the risk of serious reputational harm to the Bank. The Bank has taken some steps to address fraud that are among leading practices identified in GAO’s Fraud Risk Framework.
But overall, the Bank has approached fraud risk management on a fragmented, reactive basis, and its antifraud activities have not been marshalled into the kind of comprehensive, strategic fraud risk management regime envisioned by GAO’s Fraud Risk Framework and its leading practices. Chiefly, this is because the Bank has not anchored its fraud risk management policies in a comprehensive fraud risk assessment and corresponding risk profile, tailored to its operations, and then implemented controls designed to address the specific fraud risks identified in the assessment. Some fraud risk facing the Bank is already known, such as fabricated documentation. But as the Bank acknowledges, in addition to fraud risk inherent in its complex lines of business, it also faces significant risk from new or unfamiliar deal structures it may employ, and in new and unfamiliar technologies and industries it may service, where it has limited experience. Regular, comprehensive fraud risk assessments will address not only known types of fraud, but also seek to identify where fraud can occur and the types of fraud the program faces, including likelihood and impact. Accordingly, until the Bank begins conducting thorough, systematic assessments of its fraud risks, and compiles a risk profile prioritizing such risks, it cannot be assured that it satisfactorily understands its vulnerabilities to fraud and any gaps in its capabilities for addressing them. Following on from that, without developing and implementing an antifraud strategy that builds on the findings of the comprehensive risk assessments and risk profile, the Bank cannot be assured that its antifraud control activities are optimally designed for, and targeted to, the actual fraud risks it faces—meaning that it could be failing to address significant risks or targeting the wrong ones.
Finally, without establishing outcome-oriented metrics and then regularly reviewing progress toward meeting these goals, the Bank cannot be assured that its antifraud control activities are working as intended. As we concluded our review, the Bank, encouragingly, said it would adopt the more proactive approach described by GAO’s Fraud Risk Framework. Thus, the Bank now needs to follow through on its stated intent to change its practices, and accomplish the tasks, described to us by Bank managers, as intended and in a timely fashion. This is true not only for current operations, but also prospectively, for the large transaction backlog the Bank faces, which Bank managers will process if or when the Bank’s quorum issue is resolved, and which could stress Bank fraud controls. The Bank’s identification of a dedicated entity to lead fraud risk management activities can be an important step in the right direction if that move now becomes the start of a sustained commitment. By fully adopting the elements of the framework, the Bank can strengthen its antifraud culture, better understand fraud risks facing its products and programs, and reshape how it monitors and evaluates the outcomes of its fraud risk management activities. In doing so, it will be better positioned to protect taxpayers and its multi-billion-dollar portfolio, while still meeting its mission to support American jobs and exports. Even though Bank managers have already told us they plan to implement the framework, they did not provide us documentation describing in detail how they will ensure their fraud risk assessment and fraud risk profile are consistent with leading practices of the framework—such as by ensuring the risk assessment considers all inherent fraud risks and the risk profile reflects risk tolerances that are specific and measurable. 
Thus, we include the following framework-specific recommendations in order to comprehensively enumerate relevant issues we identified, as well as to present clear benchmarks of accountability for assessing Bank progress. This complete listing is important in light of the Bank’s recent embrace of the framework; changes in the Bank’s executive leadership and vacancies on the Bank Board; and expected congressional consideration of the Bank’s reauthorization in 2019.

We are making the following seven recommendations to the Bank:

The acting Bank president and Board chairman should ensure that the Bank evaluates and implements methods to further promote and sustain an antifraud tone that permeates the Bank’s organizational culture, as described in GAO’s Fraud Risk Framework. This should include consideration of requiring training on fraud risks relevant to Bank programs, for new employees and all employees on an ongoing basis, with the training to include identifying roles and responsibilities in fraud risk management activities across the Bank. (Recommendation 1)

As the agency begins efforts to plan and conduct regular fraud risk assessments and to determine a fraud risk profile, the acting Bank president and Board chairman should ensure that the Bank’s risk assessments and profile address not only known methods of fraud, including those that are absent from its current risk register, but other inherent fraud risks as well. (Recommendation 2)

As the agency begins efforts to plan and conduct regular fraud risk assessments and to determine a fraud risk profile, the acting Bank president and Board chairman should ensure that the risk profile includes risk tolerances that are specific and measurable.
(Recommendation 3)

The acting Bank president and Board chairman should ensure that the Bank develops and implements an antifraud strategy with specific control activities, based upon the results of fraud risk assessments and a corresponding fraud risk profile, as provided in GAO’s Fraud Risk Framework. (Recommendation 4)

The acting Bank president and Board chairman should ensure that the Bank identifies, and then implements, the best options for sharing more fraud-related information—including details of fraud case referrals and outcomes—among Bank staff, to help build fraud awareness, as described in GAO’s Fraud Risk Framework. (Recommendation 5)

The acting Bank president and Board chairman should lead efforts to collaborate with the Bank’s OIG to identify a feasible, cost-effective means to systematically track outcomes of fraud referrals from the Bank to the OIG, including creating a means to link the OIG’s proven cases of fraud to the specific Bank transactions from which the OIG actions arose. If any such means are found to be feasible and cost-effective, the acting Bank president and Board chairman should direct appropriate staff to implement them, with such information to be used for purposes consistent with GAO’s Fraud Risk Framework, such as data analytics. (Recommendation 6)

The acting Bank president and Board chairman should ensure that the Bank monitors and evaluates outcomes of fraud risk management activities, using a risk-based approach and outcome-oriented metrics, and that it subsequently adapts antifraud activities or implements new ones, as determined to be appropriate and consistent with GAO’s Fraud Risk Framework. (Recommendation 7)

We provided a draft of this report to the Bank for review and comment. In written comments, summarized below and reproduced in appendix III, the Bank agreed with our recommendations. The Bank also provided technical comments, which we incorporated as appropriate.
In its written comments, the Bank said it will take several steps to implement our recommendations to improve its fraud risk management activities. For example, the Bank stated it would continue to evaluate and implement methods to promote and sustain an antifraud tone that permeates the Bank’s organizational culture. In assessing fraud risks, the Bank stated it will include not only known risks, but also other inherent risks not yet known to have led to fraud. Following a fraud risk assessment as provided in GAO’s Fraud Risk Framework, the Bank stated that it will develop antifraud controls based on that assessment, subject to cost-benefit analysis. The Bank also stated that it will monitor and evaluate outcomes of its fraud risk management activities, and adapt existing controls or implement new controls as indicated, subject to cost-benefit analysis. The Bank further stated it will identify and implement ways to share more fraud-related information. In its written comments, the Bank also raised four concerns about our work. First, the Bank stated that it keeps substantial reserves for losses, which protect against taxpayer costs. We clarified our report to indicate that Bank officials told us they maintain reserves to protect against taxpayer costs. We did not evaluate the extent to which these reserves protect against taxpayer costs because doing so was outside the scope of our review. Second, the Bank stated our employee survey does not directly support some of the conclusions that we draw from responses received, and that only 24 percent of respondents were in the Export Finance area, which handles underwriting of Bank transactions. We note that the leading practices of the Fraud Risk Framework call for involving all levels of the agency in setting an antifraud tone that permeates the organizational culture. We also note that the Office of Export Finance is not the only division involved in fraud control activities.
For example, during our review, Bank managers told us that employees in the Credit Review and Compliance division, the Office of the General Counsel, and the Office of the Chief Financial Officer, among other offices, are also involved in fraud control activities. Thus, we believe it is appropriate that survey responses from those who work in these and other offices are included in our survey results. As noted in our report, Bank managers, in interviews, and staff, in our employee survey, generally expressed positive views of the Bank’s antifraud culture, but they hold different views on key aspects of that culture. We believe that our survey results support these findings, as well as related conclusions and recommendation (Recommendation 1), with which the Bank agreed. Third, the Bank stated that it has been very effective in preventing, detecting, and prosecuting fraud in Bank transactions. Our review evaluated the extent to which the Bank has adopted leading practices for managing fraud risks, as described in the Fraud Risk Framework. We did not evaluate the operational effectiveness of specific Bank control activities for preventing, detecting, and prosecuting fraud because doing so was beyond the scope of our review. Fourth, the Bank stated that our report and the employee survey did not clearly and consistently distinguish between fraud and fraud risk, which may lead to confusion in both the survey responses and the analysis in the report. However, we define the terms “actual fraud” and “fraud risk” in our employee survey, which appears in appendix II. Further, as described in greater detail in appendix I, we pretested and modified the survey to ensure questions were understood by respondents and that we used correct terminology. This process allowed us to determine whether survey questions and answer choices were clear and appropriate. Thus, we believe the survey results support our findings. 
Overall, as noted, these findings include positive views of the Bank’s antifraud culture as well as differing views on some aspects of that culture. We are sending copies of this report to the appropriate congressional committees, the acting president and Board chairman of the Bank, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-6722 or bagdoyans@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report examines management by the Export-Import Bank of the United States (the Bank) of fraud risks in its export credit activities, by evaluating the extent to which the Bank has adopted the four components described in GAO’s A Framework for Managing Fraud Risks in Federal Programs (Fraud Risk Framework). Specifically, we evaluate the extent to which the Bank has established an organizational culture and structure conducive to fraud risk management; planned regular fraud risk assessments and assessed risks to determine a fraud risk profile; designed and implemented a strategy with specific control activities to mitigate assessed fraud risks; and evaluated outcomes using a risk-based approach and adapted activities to improve fraud risk management. To examine the extent to which the Bank has adopted the components of GAO’s Fraud Risk Framework, we reviewed Bank policy and governance documentation, plus other documentation; reviewed GAO and Bank Office of the Inspector General reports on fraud and fraud risk management topics; reviewed relevant reports of the Congressional Research Service and the Congressional Budget Office; and reviewed other reports and background information.
Documentation we reviewed included Bank operating procedures, details of database search procedures, Bank annual reports, reports to Congress, the Bank’s strategic plan, risk assessments, and other materials. We also interviewed a range of Bank managers, both at the senior-management level and among those overseeing relevant Bank operating units. These included the Bank’s chief financial officer, its chief risk officer, its acting chief operating officer, those with specific antifraud responsibilities, and others responsible for individual business units. These individual business units included those with responsibilities for monitoring transactions following approval. We then assessed our findings on the Bank’s fraud risk management practices and its antifraud controls against provisions of the Fraud Risk Framework, which also incorporates concepts from GAO’s Standards for Internal Control in the Federal Government. To examine the extent to which the Bank has established an organizational culture and structure conducive to fraud risk management, we conducted a web-based survey of Bank employees. In our survey, we assessed, among other things, perceptions of the Bank’s organizational culture and attitudes toward fraud and fraud risk management, and whether employees viewed senior Bank management as committed to establishing and maintaining an antifraud culture. We surveyed all non-senior-management Bank employees, regardless of their position or length of employment, who are responsible for implementing, but not determining, Bank policy (that is, those below the level of senior vice president). There were 403 employees in our survey population, and we received 296 responses, thus producing a response rate of 73.4 percent. We received sufficient representation across Bank offices and divisions, and, overall, obtained a range of employee views.
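As an arithmetic check, the response rate follows directly from the two counts reported above; a minimal sketch:

```python
# Check of the survey response rate from the reported counts.
population = 403  # non-senior-management employees in the survey population
responses = 296   # completed responses received
rate = 100 * responses / population
print(f"Response rate: {rate:.1f}%")  # prints: Response rate: 73.4%
```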
To develop our survey instrument, we drew on background research, leading practices identified in GAO’s Fraud Risk Framework, interviews with Bank senior managers, and other sources. We conducted in-person pretests of survey questions with five Bank employees, varying in position, Bank office or division, and seniority, at Bank headquarters in Washington, D.C. We pretested the survey instrument to ensure that respondents understood the questions, that we used correct terminology, and that the survey was not burdensome to complete. This process allowed us to determine whether the survey questions and answer choices were clear and appropriate. We modified our survey instrument as appropriate based on pretest results and suggestions made by an independent survey specialist. The final survey instrument included closed- and open-ended questions on Bank management and tone-at-the-top; fraud-related training and information; antifraud environment; and personal experiences with fraud at the Bank. Throughout the survey instrument, we defined important terms, such as “senior management,” so respondents could interpret key concepts consistently through the survey. We administered the survey, via the World Wide Web, from July 31, 2017, through September 22, 2017. To do so, we obtained from Bank management a file of Bank employees with relevant identifying information. Before we opened the survey, the Bank president, at our suggestion, sent an email to employees notifying them of the forthcoming survey and encouraging them to respond. We also sent Bank employees a notification email describing the forthcoming survey, in advance of sending employees another email providing a unique username and password to access the web-based survey.
To improve the response rate, we contacted by phone Bank employees who had not yet completed the survey (nonrespondents) to determine their eligibility, update their contact information, answer any questions or concerns about the survey, and seek their commitment to participate. We also sent multiple follow-up emails to nonrespondents encouraging them to respond, and provided instructions for taking the survey. These follow-up contacts reduced the possibility of nonresponse error. We sent our follow-up reminder emails to the survey population on August 10, 17, and 29, 2017, and September 1 and 14, 2017. Because we surveyed all non-senior-management employees, the survey did not involve sampling error. To minimize nonsampling errors, and to enhance data quality, we employed recognized survey design practices in the development of the survey instrument and in the collection, processing, and analysis of the survey data. We calculated frequencies for closed-ended responses and reviewed open-ended responses for themes and illustrative examples. When we analyzed the survey data, an independent analyst checked the statistical programs used to collect and process responses. We selected survey excerpts—tallies of answers to selected questions, plus individual comments received from respondents—presented in the main text of this report based on relevance to the respective subject matter. We conducted our performance audit from October 2016 to July 2018, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
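The frequency tallies described above can be illustrated with a short sketch. This is not GAO's actual analysis program; the helper name, scale, and responses below are hypothetical, and invalid answers are simply excluded, mirroring the "valid responses" counts reported in appendix II.

```python
from collections import Counter

def tally_closed_ended(responses, scale):
    """Tally the percentage of valid responses selecting each answer choice.

    A minimal sketch of the frequency calculations described above, not
    GAO's actual program; responses outside the scale are treated as
    invalid and excluded from the denominator.
    """
    counts = Counter(r for r in responses if r in scale)
    valid = sum(counts.values())
    return {choice: round(100 * counts[choice] / valid, 1) for choice in scale}

# Hypothetical responses on a simplified three-point scale
scale = ["A great deal", "Some", "Not at all"]
responses = ["A great deal"] * 6 + ["Some"] * 3 + ["Not at all"]
print(tally_closed_ended(responses, scale))
# {'A great deal': 60.0, 'Some': 30.0, 'Not at all': 10.0}
```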
Appendix II: Results of GAO Survey of Bank Employees: “Anti-Fraud Controls at the Export-Import Bank of the United States”
As described in appendix I, GAO conducted a survey of employees of the Export-Import Bank of the United States (the Bank), to obtain their views on the Bank’s organizational culture and attitudes toward fraud and fraud risk management. We surveyed 403 employees and obtained 296 responses, for a response rate of 73.5 percent. Our survey did not rely on a sample, as we distributed it to the entire employee population identified. Although originally presented through the World Wide Web, the questions and answer choices that follow use the same wording shown to Bank employees. Results are tallied for each question. We omit, however, all individual responses to open-ended questions, in order to protect respondent anonymity. Underlined items indicate terms for which hyperlinked definitions were available in the original survey form.
“Fraud” generally means obtaining something of value through willful misrepresentation; and in particular, misconduct involving Bank transactions. We mean it to include actual fraud, as found through the judicial system or an administrative process; as well as “fraud risk” – an opportunity, situation, or vulnerability that could allow for someone to engage in fraudulent activity. For this section and elsewhere, two additional definitions—
“Senior management” refers to Bank managers at the senior vice president level and above.
“Management in general” refers to a broader management group – first-level supervisors and above.
4. In your view, to what extent has Bank management in general established a clear anti-fraud tone for the Bank? (Valid responses: 296)
A great deal: 50.3%
A lot: 29.4%
Some: 10.8%
A little: 2.7%
Not at all: 1.4%
Unsure/don’t know: 5.4%
5. Based on the actions of Bank senior management in particular, how important do you think preventing, detecting, and otherwise addressing fraud is to the Bank?
Extremely important: 61.5%
Very important: 25.0%
Somewhat important: 7.1%
Slightly important: 1.7%
Not at all important: 1.0%
Unsure/don’t know: 3.7%
6. Based on the actions of the managers of your division in particular, how important do you think preventing, detecting, and otherwise addressing fraud is to the Bank?
Extremely important: 60.5%
Very important: 27.9%
Somewhat important: 5.1%
Slightly important: 1.7%
Not at all important: 1.4%
Unsure/don’t know: 3.4%
7. How clearly has Bank management in general communicated a standard of conduct that applies to all employees, and which includes the Bank’s expectations of behavior concerning fraud? (Valid responses: 294)
Extremely clearly: 44.6%
Very clearly: 33.3%
Somewhat clearly: 16.0%
Slightly clearly: 1.7%
Not at all clear: 1.7%
Unsure/don’t know: 2.7%
8. Based on your experience, for each entity below, which category best describes the level of responsibility the entity has for overseeing fraud risk management activities at the Bank?
9. Is this the most effective way for the Bank to oversee fraud and fraud risk? (Valid responses: 295)
Yes: 62.4%
No: 7.1%
Unsure/don’t know: 30.5%
9(a). Why, or why not, is this the most effective way for the Bank to oversee fraud and fraud risk?
23. In your view, should the Bank be more, or less, active in preventing, detecting, and otherwise addressing fraud or fraud risk?
Much more active: 9.8%
Somewhat more active: 25.7%
Remain the same: 43.6%
Somewhat less active: 1.7%
Much less active: –
Unsure/don’t know: 19.3%
23(a). Why do you feel this is the appropriate level of activity for addressing fraud or fraud risk?
Excluding “Not applicable to my job or experience”—
Always enough time: 14.9%
Usually enough time: 32.2%
Sometimes enough time: 14.4%
Seldom enough time: 4.6%
Never enough time: 1.1%
Unsure/don’t know: 32.8%
31. If you have additional comments on any of the items above, or on fraud- or fraud risk-related issues at the Bank generally, please feel free to provide them below.
32. Would you be willing to speak with GAO regarding your answers to the survey, the topics raised above, or other fraud-related matters?
32(a).
Please provide your name and contact information. In addition to the contact named above, Jonathon Oldmixon (Assistant Director), Marcus Corbin, Carrie Davidson, David Dornisch, Paulissa Earl, Colin Fallon, Dennis Fauber, Kimberly Gianopoulos, Gina Hoover, Farahnaaz Khakoo-Mausel, Heather Latta, Flavio Martinez, Maria McMullen, Carl Ramirez, Christopher H. Schmitt, Sabrina Streagle, and Celia Thomas made key contributions to this report.
According to the Bank, it serves as a financier of last resort for U.S. firms seeking to sell to foreign buyers but that cannot obtain private financing for their deals. Its programs support tens of thousands of American jobs and enable billions of dollars in U.S. export sales annually, the Bank says. The Bank is also backed by the full faith and credit of the United States government, meaning that taxpayers could be responsible for Bank losses. The Export-Import Bank Reform Reauthorization Act of 2015 included a provision for GAO to review the Bank's antifraud controls within 4 years, and every 4 years thereafter. This report examines the extent to which the Bank has adopted the four components of GAO's Fraud Risk Framework—commit to combating fraud; regularly assess fraud risks; design a corresponding antifraud strategy with relevant controls; and evaluate outcomes and adapt. GAO reviewed Bank documentation; interviewed a range of Bank managers; and surveyed Bank employees about the extent to which the Bank has established an organizational culture and structure conducive to fraud risk management. In managing its vulnerability to fraud, the Export-Import Bank of the United States (the Bank) has adopted some aspects of GAO's A Framework for Managing Fraud Risks in Federal Programs (Fraud Risk Framework). This framework describes leading practices in four components: organizational culture, assessment of inherent program risks, design of tailored antifraud controls, and evaluation of outcomes. As provided in the framework, for example, the Bank has identified a dedicated entity within the Bank to lead fraud risk management. GAO also found that Bank managers and staff generally hold positive views of the Bank's antifraud culture. However, GAO also found that management and staff hold differing views on key aspects of that culture. These differing views include how active the Bank should be in addressing fraud. 
For example, Bank managers told GAO the Bank's current approach has been appropriate for dealing with fraud. However, about one-third of Bank staff responding to a GAO employee survey said the Bank should be “much more active” or “somewhat more active” in preventing, detecting, and addressing fraud. These and other divergent views indicate an opportunity to better ensure the Bank sets an antifraud tone that permeates the organizational culture, as provided in the Fraud Risk Framework. GAO found the Bank has taken some steps to assess fraud risk. For example, the Bank's practice has generally been to assess particular fraud risks and lessons learned following specific instances of fraud encountered, according to Bank managers. However, the Bank has not conducted a comprehensive fraud risk assessment, as provided in the framework. The Bank has also been compiling a “register” of risks identified across the organization, including fraud. This register, however, does not include some known methods of fraud, such as submission of fraudulent documentation, thus indicating it is incomplete. Without planning and conducting regular fraud risk assessments as called for in the framework, the Bank is vulnerable to failing to identify fraud risks that can damage its reputation or harm its ability to support U.S. jobs through greater exports. As provided in the framework, managers should determine where fraud can occur and the types of internal and external fraud the program faces, including an assessment of the likelihood and impact of fraud risks inherent to the program. At the conclusion of GAO's review, Bank managers said they will fully adopt the GAO framework. They said they plan to complete a fraud risk assessment by December 2018, and to determine the Bank's fraud risk profile—that is, document key findings and conclusions from the assessment—by February 2019. Work to adopt other framework components will begin afterward, the managers said. 
However, they did not provide details of how their efforts will be in accord with leading practices of the framework. As a result, GAO makes framework-specific recommendations in order to enumerate relevant issues and to present clear benchmarks for assessing Bank progress. This complete listing of recommendations is important in light of the Bank's recent embrace of the framework; recent changes in Bank leadership; and expected congressional consideration of the Bank's reauthorization in 2019. GAO makes seven recommendations, centering on conducting a fraud risk assessment, tailored to the Bank's operations, to serve as the basis for the design and evaluation of appropriate antifraud controls. The Bank agreed with GAO's recommendations, saying it will take steps to improve its fraud risk management activities.
Over the last 3 decades employers have shifted away from sponsoring defined benefit (DB) plans and toward DC plans. This shift also transfers certain types of risk—such as investment risk—from employers to employee participants. DB plans generally offer a fixed level of monthly annuitized retirement income based upon a formula specified in the plan, which usually takes into account factors such as a participant’s salary, years of service, and age at retirement, regardless of how the plan’s investments perform. In contrast, benefit levels in DC plans—such as 401(k) plans—depend on the contributions made to the plan and the performance of the investments in individual accounts, which may fluctuate in value. As we have previously reported, some experts have suggested that the portability of DC plans makes them better suited for a mobile workforce, and that such portability may lead to early withdrawals of retirement savings. DOL reported there were 656,241 DC and 46,300 DB plans in the United States in 2016. Tax incentives are in place to encourage employers to sponsor retirement plans and employees to participate in plans. Under the Employee Retirement Income Security Act of 1974 (ERISA), employers may sponsor DC retirement plans, including 401(k) plans—the predominant type of DC plan, in which benefits are based on contributions to and the performance of the investments in participants’ individual accounts. To save in 401(k) plans, participants contribute a portion of their income into an investment account, and in traditional 401(k) plans taxes are deferred on these contributions and associated earnings, which can be withdrawn without penalty after age 59½ (if permitted by plan terms). As plan sponsors, employers may decide the amount of employer contributions (if any) and how long participants must work before having a non-forfeitable (i.e., vested) interest in their plan benefit, within limits established by federal law.
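As an illustration of the kind of DB formula described above, the following sketch computes an annual benefit from salary and service. The 1.5-percent-per-year multiplier, the final-average-salary basis, and the example inputs are hypothetical plan terms, not drawn from the report.

```python
def db_annual_benefit(final_avg_salary, years_of_service, multiplier=0.015):
    """Illustrative defined-benefit formula of the kind described above:
    a fixed percentage of final average salary for each year of service,
    paid regardless of how plan investments perform.
    The 1.5% multiplier is a hypothetical plan term."""
    return final_avg_salary * years_of_service * multiplier

# A hypothetical 30-year employee with an $80,000 final average salary
print(db_annual_benefit(80_000, 30))  # 36000.0
```

Because the benefit depends only on salary and service, not on market returns, the investment risk stays with the employer; in a DC plan the same employee's benefit would instead depend on contributions and investment performance.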
Plan sponsors often contract with service providers to administer their plans and provide services such as record keeping (e.g., tracking and reporting individual account contributions); investment management (i.e., selecting and managing the securities included in a mutual fund); and custodial or trustee services for plan assets (e.g., holding the plan assets in a bank). Individuals also receive tax incentives to save for retirement outside of an employer-sponsored plan. For example, traditional IRAs provide certain individuals with a way to save pre-tax money for retirement, with withdrawals made in retirement taxed as income. In addition, Roth IRAs allow certain individuals to save after-tax money for retirement with withdrawals in retirement generally tax-free. IRAs were established under ERISA, in part, to (1) provide a way for individuals not covered by a pension plan to save for retirement; and (2) give retiring workers or individuals changing jobs a way to preserve assets from 401(k) plans by transferring their plan balances into IRAs. The Investment Company Institute (ICI) reported that 34.8 percent of households in the United States owned an IRA in 2017, a percentage that has generally remained stable since 2000. In 2017, IRA assets accounted for almost 33 percent (estimated at $9.2 trillion) of total U.S. retirement assets, followed by DC plans, which accounted for 27 percent ($7.7 trillion). Further, according to ICI, over 94 percent of funds flowing into traditional IRAs from 2000 to 2015 came from rollovers—primarily from 401(k) plans. IRS, within the Department of the Treasury, is responsible for enforcing IRA tax laws, while IRS and DOL share responsibility for overseeing prohibited transactions relating to IRAs. IRS also works with DOL’s Employee Benefits Security Administration (EBSA) to enforce laws governing 401(k) plans. 
IRS is primarily responsible for interpreting and enforcing provisions of the Internal Revenue Code (IRC) that apply to tax-preferred retirement savings. EBSA enforces ERISA’s reporting and disclosure and fiduciary responsibility provisions, which, among other things, include requirements related to the type and extent of information that a plan sponsor must provide to plan participants. Employers sponsoring employee benefit plans subject to ERISA, such as 401(k) plans, generally must file detailed information about their plan each year. The Form 5500 serves as the primary source of information collected by the federal government regarding the operation, funding, expenses, and investments of employee benefit plans. The Form 5500 includes information about the financial condition and operation of plans, among other things. EBSA uses the Form 5500 to monitor and enforce the responsibilities of plan administrators, other fiduciaries, and service providers under Title I of ERISA. IRS uses the form to enforce standards that relate to, among other things, how employees become eligible to participate in benefit plans, and how they become eligible to earn rights to benefits. In certain instances, sponsors of 401(k) plans may allow participants to access their tax-preferred retirement savings prior to retirement. Plan sponsors have flexibility under federal law and regulations to choose whether to allow plan participants access to their retirement savings prior to retirement and what forms of access to allow. Typically, plans allow participants to access their savings in one or more of the following forms: Loans: Plans may allow participants to take loans and limit the number of loans allowed.
If the plan provides for loans, the maximum amount that the plan can permit as a loan generally cannot exceed the lesser of (1) the greater of 50 percent of the vested account balance or $10,000, or (2) $50,000 less the excess of the highest outstanding balance of loans during the 1-year period ending on the day before the day on which a new loan is made over the outstanding balance of loans on the day the new loan is made. Plan loans are generally not treated as early withdrawals unless they are not repaid within the terms specified under the plan. Hardship withdrawals: Plans may allow participants facing a hardship to take a withdrawal on account of an immediate and heavy financial need, and if the withdrawal is necessary to satisfy the financial need. Though plan sponsors can decide whether to offer hardship withdrawals and approve applications for hardship withdrawals, IRS regulations provide “safe harbor” criteria regarding circumstances when a withdrawal is deemed to be on account of an immediate and heavy financial need. IRS regulations allow certain expenses to qualify under the safe harbor, including: (1) certain medical expenses; (2) costs directly relating to the purchase of a principal residence; (3) tuition and related educational fees and expenses for the participant and their spouse, children, dependents, or beneficiary; (4) payments necessary to prevent eviction from, or foreclosure on, a principal residence; (5) certain burial or funeral expenses; and (6) certain expenses for the repair of damage to the employee’s principal residence. Plans that provide for hardship withdrawals generally specify what information participants must provide to the plan sponsor to demonstrate a hardship meets the definition of an immediate and heavy financial need. Early withdrawals of retirement savings may have short-term and long-term impacts on participants’ ability to accumulate retirement savings.
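The lesser-of test for plan loans described above can be sketched in a few lines. This is an illustrative reading of the rule, not tax advice; the function and argument names are ours.

```python
def max_plan_loan(vested_balance, highest_loan_past_year=0.0, current_loans=0.0):
    """Sketch of the plan-loan ceiling described above: the lesser of
    (1) the greater of 50% of the vested account balance or $10,000, and
    (2) $50,000 minus the excess of the highest outstanding loan balance
    during the prior year over the current outstanding loan balance.
    (Illustrative reading of the rule; names are ours.)"""
    limit_half_or_10k = max(0.5 * vested_balance, 10_000.0)
    repayment_credit = max(highest_loan_past_year - current_loans, 0.0)
    limit_50k = 50_000.0 - repayment_credit
    return min(limit_half_or_10k, limit_50k)

# A $60,000 vested balance caps a new loan at $30,000; a $200,000
# balance is capped by the overall $50,000 ceiling instead.
print(max_plan_loan(60_000))   # 30000.0
print(max_plan_loan(200_000))  # 50000.0
```

Note that the $10,000 floor means a participant with a small vested balance (say $15,000) may still be permitted to borrow up to $10,000, subject to plan terms.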
In the short term, IRA owners and participants in 401(k) plans who received a withdrawal before reaching age 59½ generally pay an additional 10 percent tax for early distributions in addition to income taxes on the taxable portion of the distribution amount. The IRC exempts certain distributions from the additional tax, but the exceptions vary among 401(k) plans and IRAs. Early withdrawals of any type can result in the permanent removal of assets from retirement accounts, thereby reducing the amounts participants can accumulate before retirement, including the loss of compounded interest or other earnings on the amounts over the participant’s career. According to DOL’s Bureau of Labor Statistics (BLS), U.S. workers are likely to have multiple jobs in their careers as average employee tenure has decreased. In 2017, BLS reported that from 1978 to 2014, workers held an average of 12 jobs between the ages of 18 and 50. BLS also reported in 2016 that the median job tenure for a worker was just over 4 years. Employees who separate from a job bear responsibility for deciding what to do with their accumulated assets in their former employer’s plan. Recent research estimated that 10 million people with a retirement plan change jobs each year, many of whom faced a decision on how to treat their account balance at job separation. Plan administrators must provide a tax notice detailing participants’ options for handling the balance of their accounts. When plan participants separate from their employers, they generally have one of three options:
1. They may leave the balance in the plan,
2. They may ask their employer to roll the money directly into a new qualified employer plan or IRA (known as a direct rollover), or
3. They may request a distribution.
Once the participant receives the distribution, he or she can (1) within 60 days, roll the distribution into a new qualified employer plan or IRA (in which case the money would remain tax-preferred); or (2) keep the distributed amount, and pay any income taxes or additional taxes associated with the distribution (known as a cashout). Sponsors of 401(k) plans may cash out or transfer separating participant accounts if an account balance falls below a certain threshold. The Economic Growth and Tax Relief Reconciliation Act of 2001 (EGTRRA) amended the IRC to provide certain protections for separating participants with account balances between $1,000 and $5,000 by requiring, in the absence of participant direction, plan sponsors to either keep the account in the plan or to transfer the account balance to an IRA to preserve its tax-preferred status. Plan sponsors may not distribute accounts with balances of more than $5,000 without participant direction, but have discretion to distribute account balances of $1,000 or less. The IRC imposes an additional 10 percent tax (in addition to ordinary income tax) on certain early withdrawals from qualified retirement plans, which include IRAs and 401(k) plans, in an effort to discourage the use of plan funds for purposes other than retirement and ensure the favorable tax treatment for plan funds is used to provide retirement income. Employers are required to withhold 20 percent of the amount cashed out to cover anticipated income taxes unless the participant pursues a direct rollover into another qualified plan or IRA. Research has found that many employees are concerned about their level of savings and ability to manage their retirement accounts, and some employers provide educational services to improve employees’ financial wellness and financial literacy and encourage them to save for retirement.
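The 20-percent withholding described above can be sketched as simple arithmetic. The helper name is ours and the $10,000 balance is an assumed illustration; actual tax outcomes depend on individual circumstances.

```python
def indirect_rollover_gap(balance, withholding_rate=0.20):
    """Sketch of the 20% withholding described above: on a distribution
    that is not a direct rollover, the employer withholds 20% toward
    anticipated income taxes, so a participant who wants to roll over
    the full balance within 60 days must supply the withheld amount
    from outside funds. (Illustrative helper, not from the report.)"""
    withheld = balance * withholding_rate
    distributed = balance - withheld
    return distributed, withheld

# An assumed $10,000 balance: the participant receives $8,000 and must
# find $2,000 elsewhere to complete a full rollover within 60 days.
distributed, gap = indirect_rollover_gap(10_000)
print(distributed, gap)  # 8000.0 2000.0
```

If the full balance, including the withheld portion, is rolled over within the 60-day window, the withheld amount is generally recovered when the participant files taxes; otherwise the shortfall becomes a taxable distribution.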
A 2017 survey on employee financial wellness in the workplace found that more than one-half of workers experienced financial stress and that insufficient emergency savings was a top concern for employees. Research has also found that limited financial literacy is widespread among Americans over age 50, and those who lack financial knowledge are less likely to successfully plan for retirement. In 2018, the Federal Reserve reported that three-fifths of non-retirees with participant-directed retirement accounts had little to no comfort managing their own investments. As we have previously reported, some employers have developed comprehensive programs aimed at overall improvement in employees’ financial health. These programs, often called financial wellness programs, may help employees with budgeting, emergency savings, and credit management, in addition to the traditional information and assistance provided for retirement and health benefits. In 2013, individuals ages 25 to 55 withdrew at least $68.7 billion early from their retirement accounts. Of this amount, IRA owners in this age group withdrew the largest share (about 57 percent) and 401(k) plan participants in this age group withdrew the rest (about 43 percent). However, a total amount withdrawn from 401(k) plans cannot be determined due to data limitations. IRA withdrawals were the largest source of early withdrawals of retirement savings, accounting for an estimated $39.5 billion of the total $68.7 billion in early withdrawals made by individuals ages 25 to 55 in 2013. According to IRS estimates, 12 percent of IRA owners in this age group withdrew money early from their IRAs in 2013. The amount they withdrew early comprised a small percentage of their total IRA assets.
Specifically, in 2013, the amount of early withdrawals was equivalent to 3 percent of the cohort’s total IRA assets and, according to IRS estimates, the total amount withdrawn by this cohort exceeded their total contributions to IRAs in that year. At least $29.2 billion left 401(k) plans in 2013 in the form of hardship withdrawals, cashouts at job separation, and unrepaid plan loans, according to our analysis of 2013 SIPP data and data from DOL’s Form 5500. Specifically, we found that:
Hardship withdrawals were the largest source of early withdrawals from 401(k) plans, with an estimated 4 percent (+/- 0.25) of plan participants ages 25 to 55 withdrawing an aggregate $18.5 billion in 2013. The amount of hardship withdrawals was equivalent to 0.5 percent (+/- 0.06) of the cohort’s total plan assets and 8 percent (+/- 0.9) of the cohort’s plan contributions made in 2013.
Cashouts of account balances of $1,000 or more at job separation were the second largest source of early withdrawals from 401(k) plans. In 2013, an estimated 1.1 percent (+/- 0.11) of plan participants ages 25 to 55 withdrew an aggregate $9.8 billion from their plans that they did not roll into another qualified plan or IRA. Additionally, 86 percent (+/- 2.9) of these participants taking a cashout of $1,000 or more did not roll over the amount in 2013. The amounts cashed out and not rolled over were equivalent to 0.3 percent (+/- 0.05) of the cohort’s total plan assets and 4 percent (+/- 0.75) of the cohort’s total contributions made in 2013.
Loan defaults accounted for at least $800 million withdrawn from 401(k) plans in 2013; however, the amount of distributions of unpaid plan loans is likely larger, as DOL data cannot be used to quantify plan loan offsets that are deducted from participants’ account balances after they leave a plan. As a result, the amount of loan offsets among terminating participants ages 25 to 55 cannot be determined with certainty.
Specifically, DOL’s Form 5500 instructions require plan sponsors to report unpaid loan balances in two separate places on the Form 5500, depending on whether the loan holder is an active or a terminated participant. For active participants, plan sponsors report loan defaults as a single line item on the Form 5500 (i.e., the $800 million in 2013 listed above). For terminated participants, plan sponsors report unrepaid plan loan balances as benefits paid directly to participants—a category that also includes rollovers to employer plans and IRAs. According to a DOL official, as a result of this commingling of benefits on this line item, isolating the amount of loan offsets for terminated participants using the Form 5500 data is not possible. Without better data on the amount of unrepaid plan loans, the amount of loan offsets and the characteristics of plan participants who did not repay their plan loans at job separation cannot be determined. IRA owners and plan participants taking early withdrawals paid $6.2 billion as a result of the additional 10 percent tax for early distributions in 2013, according to IRS estimates. Although the taxes are generally treated separately from the amounts withdrawn, IRA owners and plan participants are expected to pay any applicable taxes resulting from the additional 10 percent tax when filing their income taxes for the tax year in which the withdrawal occurred. Individuals with certain demographic and economic characteristics that we analyzed had higher incidence of early withdrawals of retirement savings, according to our analysis of SIPP data. The characteristics described below reflect statistically significant differences between comparison groups (a full listing of all demographic groups can be found in appendix III).
Age. The incidence of IRA withdrawals was higher among individuals ages 45 to 54 (8 percent) than among individuals ages 25 to 34 and 35 to 44.
Education.
Individuals with a high school education or less had higher incidence of cashouts (97 percent) and hardship withdrawals (7 percent) than individuals with some college or some graduate school education.
Family size. Individuals in families of seven or more (8 percent) or in families of five to six (7 percent) had higher incidence of hardship withdrawals than individuals in smaller family groups we analyzed. Individuals living alone had higher incidence of IRA withdrawals than individuals living in the larger family groups.
Marital status. Widowed, divorced, or separated individuals had higher incidence of IRA withdrawals (11 percent) and hardship withdrawals (7 percent) than married or never married individuals.
Race. The incidence of hardship withdrawals among African American (10 percent) and Hispanic individuals (6 percent) was higher than among individuals who were White, Asian, or Other.
Residence. The incidence of IRA withdrawals and hardship withdrawals was higher among individuals living in nonmetropolitan areas (7 percent and 6 percent, respectively) than among individuals living in metropolitan areas.
Similarly, individuals with certain economic characteristics that we analyzed had higher incidence of early withdrawals of retirement savings, according to our analysis of SIPP data. The characteristics described below reflect statistically significant differences between comparison groups (a full listing of all demographic groups can be found in appendix III).
Employer size. Individuals working for employers with fewer than 25 employees had higher incidence of IRA withdrawals (9 percent) than individuals working for employers with larger numbers of employees.
Employment. Individuals working fewer than 35 hours per week had higher incidence of IRA withdrawals (7 percent) than employees working 35 hours or more.
Household debt.
Individuals with household debt of $5,000 up to $20,000 had higher incidence of IRA withdrawals (14 percent) than individuals with other debt amounts.
Household income. Individuals with household income of less than $25,000 or $25,000 up to $50,000 had higher incidence of IRA withdrawals (12 percent and 9 percent, respectively) and hardship withdrawals (9 percent and 7 percent, respectively) than individuals with higher income amounts.
Personal cash reserves. Individuals with personal cash reserves of less than $1,000 had higher incidence of IRA withdrawals (10 percent) and hardship withdrawals (6 percent) than individuals with larger reserves.
Retirement assets. Individuals with combined IRA and 401(k) plan assets valued at less than $5,000 had higher incidence of hardship withdrawals (7 percent) than individuals with higher valued assets.
Tenure in retirement plan. Individuals with fewer than 3 years in their retirement plan had higher incidence of hardship withdrawals (6 percent) than individuals with longer tenures.
Stakeholders we interviewed said that plan rules related to the disposition of account balances at job separation can lead participants to remove more than they need, up to and including their entire balance. We previously reported U.S. workers are likely to change jobs multiple times in a career. Plan sponsors may cash out balances of $1,000 or less at job separation, although they are not required to do so. As a result, plan participants with such balances, including younger employees and others with short job tenures, risk having their account balances distributed in full each time they change jobs. As shown in table 1, a separating employee must take multiple steps to ensure that an account balance remains tax-preferred. Participants who take a distribution from a plan with the intent of rolling it into another qualified plan or IRA must acquire additional funds to complete the rollover and avoid adverse tax consequences.
Plan sponsors are required to withhold 20 percent of the account balance to pay anticipated taxes on the distribution. As a result, the sponsor then sends 80 percent of the account balance to the participant, who must acquire outside funds to compensate for the 20 percent withheld or forgo the preferential tax treatment of that portion of their account balance. For example, a participant seeking to roll over a retirement account with a $10,000 balance would receive an $8,000 distribution after tax withholding, requiring them to locate an additional $2,000 to complete the rollover within the 60-day period to avoid a taxable distribution of the withheld amount. If participants can replace the 20 percent withheld and complete the rollover within the 60-day period, they do not owe taxes on the distribution. Stakeholders said that the complexity of rolling a 401(k) account balance from one employer to another may encourage participants to take the relatively simpler route of rolling their balance into an IRA or cashing out altogether. They noted that separating participants had many questions when evaluating their options and had difficulty understanding the notice provided. For example, participants may not fully understand how the decisions made at job separation can have a significant impact on their current tax situation and eventual retirement security. One plan sponsor, describing concerns about giving investment advice, said she watched participants make what she judged to be poor choices with their account balances and felt helpless to intervene. Stakeholders also noted that the lack of a standardized rollover process sometimes bred mistrust among employers and complicated separating participants’ ability to complete a rollover between plans.
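The withholding arithmetic described above can be sketched as a simple calculation. This is an illustrative sketch, not GAO's or IRS's code; the function and variable names are hypothetical, and only the 20 percent withholding rate comes from the discussion above.

```python
# Illustrative sketch of the 20 percent withholding on an indirect rollover.
# Names are hypothetical; the 20 percent rate is from the text above.

WITHHOLDING_RATE = 0.20

def indirect_rollover_check(balance):
    """Return (amount the participant receives, outside funds needed to
    roll over the full balance within the 60-day window)."""
    withheld = balance * WITHHOLDING_RATE
    received = balance - withheld
    # To keep the entire balance tax-preferred, the participant must deposit
    # the full original balance, supplying the withheld portion from outside
    # funds; the withheld amount is later reconciled on their tax return.
    return received, withheld

received, shortfall = indirect_rollover_check(10_000)
print(received, shortfall)  # 8000.0 2000.0
```

With a $10,000 balance, the participant receives $8,000 and must locate $2,000 elsewhere to complete the rollover, matching the example above.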
For example, one stakeholder told us that some plans were hesitant to accept funds from other employer plans fearing that the funds might come from plans that have failed to comply with plan qualification requirements and could create problems for the receiving plan later on. Another stakeholder suggested that the requirement for plan sponsors to provide a notice to separating participants likely caused more participants to take the distribution. Stakeholders described loans as a useful source of funds in times of need and a way to avoid more expensive options, such as high-interest credit cards. They also noted that certain plan loan policies could lead to early withdrawals of retirement savings. (See fig. 1.) Loan repayment at job separation: Stakeholders said loan repayment policies can increase the incidence of defaults on outstanding loans. When participants do not repay their loan after separating from a job, the outstanding balance is treated as a distribution, which may subject it to income tax liability and, possibly, an additional 10 percent tax for early distributions. According to stakeholders, the process of changing jobs can inadvertently lead to a distribution of a participant’s outstanding loan balance, when the participant could have otherwise repaid the loan. Extended loan repayment periods: Some plan sponsors allow participants to take loans to purchase a home. Stakeholders told us that the amounts of these home loans tended to be larger than general purpose loans and had longer repayment periods, extending from 15 to 30 years. A stakeholder further noted that these loans could make it more likely that participants would have larger balances to repay if they lost or changed jobs. Multiple loans: While some plan sponsors noted that their plans limited the number of loans participants can take from their retirement plan, others do not.
Some plan sponsors limited participants to between one and three simultaneous loans, and one plan administrator indicated that 92 percent of their plan-sponsor clients allowed no more than two simultaneous loans. Other plan sponsors placed no limit on the number of participant loans or limited loans to one or two per calendar year, in which case a participant could take out a new loan at the start of a calendar year regardless of whether outstanding loans had been repaid. Stakeholders described some participants as “serial” borrowers, who take out multiple loans and have less disposable income as a result of ongoing loan payments. One plan administrator stated that repeat borrowing from 401(k) plans was common, and some participants took out new loans to pay off old loans. Other loan restrictions: Plans that allow no loans, or only a single outstanding loan, can cause participants facing economic shocks to take a hardship withdrawal instead, resulting in the permanent removal of their savings and subjecting them to income tax liability and, possibly, an additional 10 percent tax for early distributions and a suspension on contributions. Minimum loan amounts: Minimum loan amounts may result in participants borrowing more than they need to cover planned expenses. For example, a participant may have a $500 expense for which they seek a loan, but may have to borrow $1,000 due to plan loan minimums. Stakeholders said that plan participants take plan loans and hardship withdrawals for pressing financial needs. Many plan sponsors we interviewed said they used the IRS safe harbor as the sole criteria when reviewing a participant’s application for a hardship withdrawal. Stakeholders said the top two reasons participants took hardship withdrawals were to prevent imminent eviction or foreclosure and to cover out-of-pocket medical costs not covered by health insurance. Participants generally took loans to reduce debt, for emergencies, or to purchase a primary residence.
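As a rough illustration of the tax consequences discussed above when an amount is treated as an early distribution (for example, a defaulted loan balance or a hardship withdrawal), the sketch below applies an assumed marginal income tax rate plus the additional 10 percent tax for early distributions. The 22 percent marginal rate and the $5,000 balance are assumptions for illustration, not figures from this report.

```python
# Sketch of the tax cost when an amount is treated as an early distribution:
# ordinary income tax at the taxpayer's marginal rate, plus (possibly) the
# additional 10 percent tax. The marginal rate is an assumed input.

ADDITIONAL_EARLY_DISTRIBUTION_TAX = 0.10

def distribution_tax(amount, marginal_rate, early=True):
    """Estimated tax owed on an amount treated as a distribution."""
    tax = amount * marginal_rate
    if early:
        # The additional 10 percent tax may apply before age 59 1/2,
        # unless an exception applies.
        tax += amount * ADDITIONAL_EARLY_DISTRIBUTION_TAX
    return tax

# A $5,000 defaulted loan balance at an assumed 22 percent marginal rate:
print(distribution_tax(5_000, 0.22))  # 1600.0
```

Under these assumed inputs, nearly a third of the defaulted balance goes to taxes, which is why the loan repayment options discussed below can matter for retirement savings.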
Stakeholders also said that participants who experienced economic shocks stemming from job loss made early withdrawals. They said retirement plans often served as a form of insurance for those between jobs or facing a sudden economic shock, and participants accessed their retirement accounts because, for many, they were the only source of savings. They cited personal debt, health care costs, and education as significant factors that affected employees across all income levels. Stakeholders said some participants also used their retirement savings to pay for anticipated expenses. Two plan administrators said education expenses were one of the reasons participants took hardship withdrawals. They said that participants accessed their retirement savings to address the cost of higher education, including paying off their own student loan debt or financing the college costs for family members. For example, plan administrators told us that some participants saved with the expectation of taking a hardship withdrawal to pay for college tuition. Other participants utilized hardship withdrawals to purchase a primary residence. IRA owners generally may take withdrawals at any time and IRS does not analyze the limited information it receives on the reasons for IRA withdrawals. IRA owners can withdraw any amount up to their entire account balance at any time. In addition, IRAs have certain exceptions from the additional 10 percent tax for early distributions. For example, IRA withdrawals taken for qualified higher education expenses, certain health insurance premiums, and qualified “first-time” home purchases (up to $10,000) are excepted from the additional 10 percent tax. IRA owners who take an IRA distribution receive a Form 1099-R or similar statement from their provider.
On the Form 1099-R, IRA providers generally identify whether the withdrawal, among other things, can be categorized as a normal distribution, an early distribution, or a direct distribution to a qualified plan or IRA. For an early distribution, the IRA provider may identify whether a known exception to the additional 10 percent tax applies. For their part, IRA owners are required to report early withdrawals on their income tax returns, as well as the reason for any exception from the additional 10 percent tax for a limited number of items. In written responses to questions, an IRS official indicated that IRS collected data on the exception reason codes, but did not use them. Some plan sponsors we interviewed had policies in place that may reduce the long-term impact of early withdrawals of retirement savings taken at job separation. Policies suggested by plan sponsors included: Providing a periodic installment distribution option: Although some plan sponsors may require participants wanting a distribution to take their full account balance at job separation, other plan sponsors provided participants with an option of receiving their account balance in periodic installments. For example, one plan sponsor gives separating participants an option to receive periodic installment distributions at intervals determined by the participants. This plan sponsor said separating participants could select distributions on a monthly, quarterly, semiannual, or annual basis. These participants could also elect to stop distributions at any time, preserving the remaining balance in the employer’s plan. The plan sponsor said the plan adopted this option to help separating participants address any current financial needs, while preserving some of the account balance for retirement. Another plan sponsor adopted a similar policy to address the cyclical nature of the employer’s business, which can result in participants being terminated and rehired within one year.
Offering partial distributions: One plan sponsor provided separated participants with the option of receiving a one-time, partial distribution. If a participant opted for partial distribution, the plan sponsor issued the distribution for the requested sum and preserved the remainder of the account balance in the plan. The plan sponsor adopted the partial distribution policy to provide separating participants with choices for preserving account balances, while simultaneously providing access to address any immediate financial needs. Providing plan loan repayment options for separated participants: Some plan sponsors allowed former participants to continue making loan repayments after job separation. Loan repayments after job separation reduce the loan default risk and associated tax implications for participants. Some plan sponsors said that separating participants who have the option to continue repaying an outstanding loan balance generally have three options: (1) to continue repaying the outstanding loan, (2) to repay the entire balance of the loan at separation within a set repayment period, or (3) not to repay the loan. Those participants who continue repaying their loans after separation generally have the option to set up automatic debit payments to facilitate the repayment. Those separated participants who do not set up loan repayment terms within established timeframes, or do not make a payment after the loan repayment plan has been established, default on their loan and face the associated tax consequences, including, possibly, an additional 10 percent tax for early distributions. Some plan sponsors we spoke with placed certain limits on participant loan activity, which may reduce the incidence of loan defaults (see fig. 2). Limiting loan amounts to participant contributions: Some plan sponsors said they limited plan loans to participant contributions and any investment earnings from those contributions to reduce early withdrawals of retirement savings. 
For example, one plan sponsor’s policy limited the amount a participant could borrow from their plan to 50 percent of participant contributions and earnings, compared to 50 percent of the total account balance. Implementing a waiting period after loan repayment before a participant can access a new loan: Some plan sponsors said they had implemented a waiting period between plan loans, in which a participant, having fully paid off the previous loan, was temporarily ineligible to apply for another. Among plan sponsors who implemented a waiting period, the length varied from 21 days to 30 days. Reducing the number of outstanding loans: Some plan sponsors we spoke with limited the number of outstanding plan loans to either one or two loans. One plan sponsor had previously allowed one new loan each calendar year, but subsequently revised plan policy to allow participants to have a total of two outstanding loans. The plan sponsor said the rationale was to balance limiting participant loan behavior with the ability of participants to access their account balance. Some plan sponsors said they had expanded the definition of immediate and heavy financial need beyond the IRS safe harbor to better align with the economic needs of their participants. For example, one plan sponsor approved a hardship withdrawal to help a participant pay expenses related to a divorce settlement. Another plan sponsor developed an expanded list of qualifying hardships, including past-due car, mortgage, or rent payments; and payday loan obligations. Some plan sponsors implemented loan programs outside their plan, contracting with third-party vendors to provide short-term loans to employees. For example, one plan sponsor instituted a loan program that allowed employees to borrow up to $5,000 from a third-party vendor that would be repaid through payroll deduction. 
This plan sponsor said the loan program featured an 8 to 12 percent interest rate, and approval was not based on a participant’s credit history. The plan sponsor also observed that they had fewer 401(k) loan applications since the third-party loan program was implemented. A second plan sponsor instituted a similar loan program that allowed employees to borrow up to $500 interest free from a third-party vendor. According to this sponsor, to qualify for a loan, an employee must demonstrate financial hardship, have no outstanding plan loans, and attend a financial counseling course if the loan is approved. Some plan sponsors said they have provided workplace-based financial wellness resources for their participants to improve their financial literacy. Some implemented optional financial wellness programs that covered topics such as investment education, how plan loans work, and the importance of saving for emergencies. These plan sponsors told us they offered on-site financial counseling with representatives of the plan administrator to help provide guidance on financial decision-making; however, other plan sponsors said that—despite their investment in participant-specific financial education—participation in these programs was low. Stakeholders suggested strategies that they believed could help mitigate the long-term effects of early withdrawals of retirement savings on IRA owners and plan participants. They noted that any of these proposed strategies, if implemented, could (1) increase the costs of administering IRAs and plans, (2) require changes to federal law or regulations, and (3) involve tradeoffs between providing access to retirement savings and preserving savings for retirement. Stakeholders suggested several strategies that, if implemented, could help reduce early withdrawals from IRAs.
These strategies centered on modifying existing rules to reduce early withdrawals from IRAs (and subsequently the amount paid as a result of the additional 10 percent tax for early distributions). Specifically, stakeholders suggested: Raising the age at which the additional 10 percent tax applies: Some stakeholders noted that raising the age at which the additional 10 percent tax for early distributions applies from 59½ to 62 would align it with the earliest age of eligibility to claim Social Security and may encourage individuals to consider a more comprehensive retirement distribution strategy. However, other stakeholders cautioned that it could have drawbacks for employees in certain situations. For example, individuals who lose a job late in their careers could face additional tax consequences for accessing an IRA before reaching age 62. In addition, one stakeholder said some individuals may shift to a part-time work schedule later in their careers as they transition to retirement and plan on taking IRA withdrawals to compensate for their lower wages. Allowing individuals to roll existing plan loans into an IRA: Some stakeholders said that allowing individuals to include an existing plan loan as part of a rollover into an IRA, although currently not allowed, would likely reduce plan loan defaults by giving individuals a way to continue repaying the loan balance. One stakeholder suggested that rolling an existing plan loan into an IRA could be administratively challenging for IRA providers, but doing so to repay the loan may ultimately preserve retirement savings. Allowing IRA loans: While currently a prohibited transaction that could cause the account to cease to qualify as an IRA, some stakeholders suggested that IRA loans could theoretically reduce the amounts being permanently removed from the retirement system through early IRA withdrawals.
One stakeholder said an IRA loan would present a good alternative to an early withdrawal from an IRA account because it would give the account holder access to the balance, defer any tax implications, and improve the likelihood the loaned amount would ultimately be repaid. However, another stakeholder said that allowing IRA loans could increase early withdrawals, given the limited oversight of IRAs, as well as additional administrative costs and challenges for IRA providers. Stakeholders suggested several strategies that, if implemented, could reduce the effect of cashouts at job separation from 401(k) plans. Simplifying the rollover process: Stakeholders proposed two modifications to the current rollover process that they believe could make the process more seamless and reduce the incidence of cashouts. First, stakeholders suggested that a third-party entity tasked with facilitating rollovers between employer plans for a separating participant would likely reduce the incidence of cashouts at job separation. Such an entity could automatically route a participant’s account balance from the former plan to a new one. One stakeholder said having a third-party entity facilitate the rollover would eliminate the need for a plan participant to negotiate the process. Such a service, however, would likely come at a cost that may be passed on to participants. Stakeholders also suggested direct rollovers of account balances between plans could further reduce the incidence of cashouts. One stakeholder, however, cautioned that direct rollovers could have downsides for some participants. For example, participants who prefer to keep their balance in their former employer’s plan but provide no direction to the plan sponsor may inadvertently find their account balance rolled into a new employer’s plan. Restricting cashouts to participant contributions only: Some stakeholders suggested limiting the assets a participant may access at job separation.
For example, some stakeholders said that participants should not be allowed to cash out vested plan sponsor contributions, thus preserving those contributions and their earnings for retirement. However, this strategy could result in participants overseeing and monitoring several retirement accounts. Stakeholders suggested several strategies that, if implemented, could limit the adverse effect of hardship withdrawals on retirement savings. Narrowing the IRS safe harbor: Although some plan sponsors are expanding the reasons for a hardship to align with perceived employee needs, some stakeholders said narrowing the IRS safe harbor would likely reduce the incidence of early withdrawals. For example, some stakeholders suggested narrowing the definition of a hardship to exclude the purchase of a primary residence or for postsecondary education costs. In addition, one stakeholder said alternatives exist to finance home purchases (mortgages) and postsecondary education (student loans). Stakeholders noted that eliminating the purchase of a primary residence and postsecondary education costs from the IRS safe harbor would make hardship withdrawals a tool more strictly used to avoid sudden and unforeseen economic shocks. In combination with the two exclusions, one stakeholder suggested consideration be given to either reducing or eliminating the additional 10 percent tax for early distributions that may apply to hardship withdrawals. Replacing hardship withdrawals with hardship loans: Stakeholders said replacing a hardship withdrawal, which permanently removes money from the retirement system, with a no-interest hardship loan, which would be repaid to the account, would reduce early withdrawals. Under this suggestion, if the loan were not repaid within a predetermined time frame, the remaining loan balance could be considered a deemed distribution and treated as income (similar to the way a hardship withdrawal is treated now).
Incorporating emergency savings features into 401(k) plans: Stakeholders said incorporating an emergency savings account into the 401(k) plan structure may help participants absorb economic shocks and better prepare for both short-term financial needs and long-term retirement planning. (See fig. 3.) In addition, stakeholders said participants with emergency savings accounts could be better prepared to avoid high interest rate credit options, such as credit cards or payday loans, in the event of an economic shock. Stakeholders had several ideas for implementing emergency savings accounts. For example, one stakeholder suggested that, were it allowed, plan sponsors could revise automatic account features to include automatic contributions to an emergency savings account. Some stakeholders also said emergency savings accounts could be funded with after-tax participant contributions to eliminate the tax implications when withdrawing money from the account. However, another stakeholder said emergency savings contributions could reduce contributions to a 401(k) plan. In the United States, the amount of aggregate savings in retirement accounts continues to grow, with nearly $17 trillion invested in 401(k) plans and IRAs. Early access to retirement savings in these plans may incentivize plan participation, increase participant contributions, and provide participants with a way to address their financial needs. However, billions of dollars continue to leave the retirement system early. Although these withdrawals represent a small percentage of overall assets in these accounts, they can erode or even deplete an individual’s retirement savings, especially if the retirement account represents their sole source of savings. Employers have implemented plan policies that seek to balance the short-term benefits of providing participants early access to their accounts with the long-term need to build retirement savings.
However, the way plan sponsors treat outstanding loans after a participant separates from employment has the potential to adversely affect retirement savings. In the event of unexpected job loss or separation, plan loans can leave participants liable for additional taxes. Currently, the incidence and amount of loan offsets in 401(k) plans cannot be determined due to the way DOL collects data from plan sponsors. Additional information on loan offsets would provide insight into how plan loan features might affect long-term retirement savings. Without clear data on the incidence of these loan offsets, which plan sponsors are generally required to include (but not itemize) on the Form 5500, the overall extent of unrepaid plan loans in 401(k) plans cannot be known. To better identify the incidence and amount of loan offsets in 401(k) plans nationwide, we recommend that the Secretary of Labor direct the Assistant Secretary for EBSA, in coordination with IRS, to revise the Form 5500 to require plan sponsors to report qualified plan loan offsets as a separate line item distinct from other types of distributions. (Recommendation 1) We provided a draft of this product to the Department of Labor, the Department of the Treasury, and the Internal Revenue Service for review and comment. In their written comments, reproduced in appendixes IV and V, respectively, DOL and IRS generally agreed with our findings, but neither agreed nor disagreed with our recommendation. DOL said it would consider our recommendation as part of its overall evaluation of the Form 5500, and IRS said it would work with DOL as it responds to our recommendation. The Department of the Treasury provided no formal written comments.
In addition, DOL, IRS, Treasury and two third-party subject matter experts provided technical comments, which we incorporated in the report, as appropriate. As agreed with your staff, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Labor, Secretary of the Treasury, Commissioner of Internal Revenue, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or jeszeckc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff making key contributions to this report are listed in appendix VI. The objectives of this study were to determine: (1) the incidence and amount of retirement savings being withdrawn early; (2) what is known about the factors that might lead individuals to access their retirement savings early; and (3) what strategies or policies, if any, might reduce the incidence and amount of early withdrawals of retirement savings. To examine the incidence and amount of early withdrawals from individual retirement accounts (IRA) and 401(k) plans, we analyzed the most recent nationally representative data available in three relevant federal data sources, focusing our analysis on individuals in their prime working years (ages 25 to 55), when possible. For consistency, we analyzed data from 2013 from each data source because it was the most recent year that data were available for all types of early withdrawals we examined. We adjusted all dollar-value estimates derived from each data source for inflation and reported them in constant 2017 dollars. We determined that the data from these sources were sufficiently reliable for the purposes of our report.
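The adjustment to constant 2017 dollars can be sketched as a simple price-index ratio. The report does not state which price index was used; the index values below are illustrative, CPI-U-style annual averages assumed for this example, not figures from the report.

```python
# Minimal sketch of restating a 2013 dollar estimate in constant 2017 dollars
# via a price-index ratio. Index values are assumed for illustration.

PRICE_INDEX = {2013: 233.0, 2017: 245.1}  # assumed annual-average index values

def to_constant_2017_dollars(amount_2013):
    """Scale a 2013 dollar amount by the 2017/2013 price-index ratio."""
    return amount_2013 * PRICE_INDEX[2017] / PRICE_INDEX[2013]

# Under these assumed index values, $1,000 in 2013 dollars is roughly
# $1,052 in constant 2017 dollars.
print(round(to_constant_2017_dollars(1_000)))
```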
First, to examine the recent incidence and amount of early withdrawals from IRAs and the associated tax consequences for individuals ages 25 to 55, we analyzed estimates for tax year 2013 published by the Internal Revenue Service’s (IRS) Statistics of Income Division, which are based on tax returns as filed by taxpayers before enforcement activity. Specifically, we analyzed the number of taxpayers reporting early withdrawals from their IRAs in 2013 and the aggregate amount of these withdrawals. To provide additional context on the scope of these early withdrawals, we analyzed the age cohort’s total IRA contributions and the end-of-year fair market value of the IRAs, and compared these amounts to the aggregate amount withdrawn. To examine the incidence and amount of taxes paid as a result of the additional 10 percent tax for early distributions, we analyzed estimates on the additional 10 percent tax paid on qualified retirement plans in 2013. Although IRS did not delineate these data by age, we used these data as a proxy because IRS assesses the additional 10 percent tax on distributions to taxpayers who have not reached age 59½. Given the delay between a withdrawal date and the date of the tax filing, it is possible that some of the taxes were paid in the year following the withdrawal. We reviewed technical documentation and developed the 95 percent confidence intervals that correspond to these estimates. Second, to examine the incidence and amount of early withdrawals from 401(k) plans, we analyzed data included in the 2014 panel of the U.S. Census Bureau’s Survey of Income and Program Participation (SIPP)—a nationally representative survey of household income, finances, and use of federal social safety net programs—along with retirement account contribution and withdrawal data included in the SIPP’s Social Security Administration (SSA) Supplement on Retirement, Pensions, and Related Content.
Specifically, we developed percentage and dollar-value estimates of the incidence and amount of lump sum payments received and hardship withdrawals taken by participants in 401(k) plans in 2013. Because the SIPP is based upon a complex probability sample, we used Balanced Repeated Replication methods with a Fay adjustment to derive all percentage, dollar-total, and dollar-ratio estimates and their 95 percent confidence intervals. To better understand the characteristics of individuals who received a lump sum and/or took a hardship withdrawal in 2013, we analyzed a range of selected individual and household demographic variables and identified characteristics associated with a higher incidence of withdrawals. We applied domain estimation methods to make estimates for these subpopulations. (For a list of variables used and the results of our analysis, please see appendix III.) We attempted to develop a multiple regression model to estimate the unique association between each characteristic and withdrawals, but determined that the SIPP did not measure key variables in enough detail to develop persuasive causal explanations. The sample size of respondents receiving lump sums was too small to precisely estimate the partial correlations of many demographic variables at once. Even with adequate sample sizes, associations between broad demographic variables, such as age and income, likely reflected underlying causes, such as retirement and financial planning strategies, which SIPP did not measure in detail. Third, to examine the incidence and amount of unrepaid plan loans from 401(k) plans, we analyzed the latest filing of annual plan data that plan sponsors reported on the Form 5500 to the Department of Labor (DOL) for the 2013 plan year. We looked at unrepaid plan loans reported by sponsors of large plans (Schedule H) and small plans (Schedule I). 
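The Balanced Repeated Replication variance computation with a Fay adjustment, used above for the SIPP estimates, can be sketched as follows. The replicate estimates and the Fay coefficient of 0.5 in the toy example are assumptions for illustration; this is a sketch of the general Fay-adjusted BRR formula, not the code used for the analysis.

```python
# Sketch of Fay-adjusted BRR variance estimation: replicate estimates are
# computed under perturbed replicate weights, and the variance of the
# full-sample estimate is a scaled sum of squared deviations of the
# replicate estimates from it.
import math

def brr_variance(full_estimate, replicate_estimates, fay_k=0.5):
    """Var = (1 / (R * (1 - k)^2)) * sum((theta_r - theta_0)^2),
    where R is the number of replicates and k is the Fay coefficient."""
    r = len(replicate_estimates)
    scale = 1.0 / (r * (1.0 - fay_k) ** 2)
    return scale * sum((e - full_estimate) ** 2 for e in replicate_estimates)

def ci_95(full_estimate, replicate_estimates, fay_k=0.5):
    """Normal-approximation 95 percent confidence interval."""
    se = math.sqrt(brr_variance(full_estimate, replicate_estimates, fay_k))
    return full_estimate - 1.96 * se, full_estimate + 1.96 * se

# Toy example: four assumed replicate estimates around a full-sample
# incidence estimate of 7 percent.
lo, hi = ci_95(0.07, [0.065, 0.072, 0.069, 0.074])
```

In practice the SIPP provides a full set of replicate weights, and each replicate estimate is recomputed from the microdata under those weights; the scaling factor above is what the Fay adjustment changes relative to standard BRR.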
For each schedule, we analyzed two variables related to unrepaid plan loans: (1) deemed distributions of participant loans (which captures the amount of loan defaults by active participants) and (2) benefits distributed directly to participants (which includes plan loan offsets for a variety of reasons, including plan loans that remain unpaid after a participant separates from a plan). Because plan sponsors report data in aggregate and do not differentiate by participant age, we calculated and reported the aggregate of loan defaults identified as deemed distributions in both schedules. We could not determine the amount of plan loan offsets based on the way that plan sponsors are required to report them. Specifically, plan sponsors are required to treat unrepaid loans occurring after a participant separates from a plan as reductions or offsets in plan assets, and are required to report them as part of a larger commingled category of offsets that also includes large-dollar items like rollovers of account balances to another qualified plan or IRA. As a result, we were unable to isolate and report the amount of this category of unrepaid plan loans. To identify what is known about the factors that might lead individuals to access their 401(k) plans and IRAs and what strategies or policies might reduce the early withdrawal of retirement savings, we performed a literature search using multiple databases to locate documents regarding early withdrawals of retirement savings published since 2008 and to identify experts for interviews. The search yielded a wide variety of scholarly articles, published articles from various think tank organizations, congressional testimonies, and news reports. We reviewed these studies and identified factors that lead individuals to withdraw retirement savings early, as well as potential strategies or policies that might reduce this behavior. The search also helped us identify additional potential interviewees. 
To answer our second and third objectives, we visited four metropolitan areas and conducted 51 interviews with a wide range of stakeholders that we identified in the literature. In some cases, to accommodate stakeholder schedules, we conducted phone interviews or accepted written responses. Specifically, we interviewed human resource professionals from 22 private-sector companies (including 4 written responses), representatives from 8 plan administrators, 13 retirement research experts (including 1 written response), representatives from 4 industry associations, representatives from 2 participant advocacy organizations, and representatives from 2 financial technology companies. We conducted in-person interviews at four sites to collect information from three different groups: (1) human resource officials in private-sector companies, (2) top 20 plan administrators or recordkeepers, and (3) retirement research experts. We selected four metropolitan areas that were home to representatives of each group. To select companies for potential interviews, we reached out to a broad sample of Fortune 500 companies that offered a 401(k) plan to employees and varied by geographic location, industry, and number of employees. We selected plan administrators based on Pensions and Investments rankings for assets under management and number of individual accounts. We selected retirement research experts who had published research on early withdrawals from retirement savings, as well as experts that we had interviewed in our prior work. Based on these criteria, we conducted site visits in Boston, Massachusetts; Chicago, Illinois; the San Francisco Bay Area, California; and Seattle, Washington. We held interviews with parties in each category who responded affirmatively to our request. In each interview, we solicited names of additional stakeholders to interview.
We also interviewed representatives of organizations, such as financial technology companies, participant advocacy organizations, industry associations, and plan administrators focused on small businesses, whose work we deemed relevant to our study. We developed a common question set for each stakeholder category that we interviewed. We based our interview questions on our literature review, research objectives, and the kind of information we were soliciting from each stakeholder category. In each interview, we asked follow-up questions based on the specific responses provided by interviewees. In our company interviews, we asked how companies administered retirement benefits for employees; company policies and procedures regarding separating employees and the disposition of their retirement accounts; company policies regarding plan loans, hardship withdrawals, and rollovers from other 401(k) plans; and company strategies to reduce early withdrawals from retirement savings. In our interviews with plan administrators, we asked about factors that led individuals to access their retirement savings early, how plan providers interacted with companies and separating employees, available data on loans and hardship withdrawals from client retirement plans, and potential strategies to reduce the incidence and amount of early withdrawals. In our interviews with retirement research experts, financial technology companies, participant advocacy organizations, and industry associations we asked about factors that led individuals to make early withdrawals from their retirement savings and any potential strategies that may reduce the incidence and amount of early withdrawals. In our interviews with plan administrators and retirement research experts, we also provided a supplementary table outlining 37 potential strategies to reduce early withdrawals from retirement savings. 
We asked interviewees to comment on the strengths and weaknesses of each strategy in terms of its potential to reduce early withdrawals, and gave them opportunity to provide other potential strategies not listed in the tables. We developed the list of strategies based on the results of our literature review. Some interviewees also provided us with additional data and documents to assist our research. For example, some companies and plan administrators we interviewed provided quantitative data on the number of plan participants, the average cashout or rollover amounts, the percentage of participants who took loans or hardship withdrawals from their retirement accounts, and known reasons for these withdrawals. Some research experts also provided us with documentation, including published articles and white papers that supplemented our interviews and literature review. All data collected through these methods are nongeneralizable and reflect the views and experiences of the respondents and not the entire population of their respective constituent groups. To answer our second and third objectives, we analyzed the content of our stakeholder interview responses and corroborated our analysis with information obtained from our literature review and quantitative information provided by our interviewees. To examine what is known about the factors leading individuals to access retirement savings early, we catalogued common factors that stakeholders identified as contributing to early withdrawals from retirement savings. We also collected information on plan rules governing early participant withdrawals of retirement savings. To identify potential strategies or policies that might reduce the incidence and amount of early withdrawals, we analyzed interview responses and catalogued (1) company practices that employers identified as having an effect in reducing early withdrawals and (2) strategies that stakeholders suggested that could achieve a similar outcome. 
GAO is not endorsing or recommending any strategy in this report, and has not evaluated these strategies for their behavioral or other effects on retirement savings or on tax revenues. Appendix II: Selected Provisions Related to Early Withdrawals from 401(k) Plans and Individual Retirement Accounts (IRAs). Provides an exception from the additional 10 percent tax for early distributions for distributions for qualified higher education expenses and for qualified "first-time" home purchases made before age 59½. Defines "qualified first-time homebuyer distribution" and "first-time homebuyer," and prescribes the lifetime dollar limit on such distributions, among other things. Allows eligible individuals to make tax-deductible contributions to individual retirement accounts, subject to limits based, for example, on income and pension coverage. Provides for the loss of exemption for an IRA if the IRA owner engages in a prohibited transaction, which results in the IRA being treated as distributing all of its assets to the IRA owner at the fair market value on the first day of the year in which the transaction occurred. Defines a prohibited transaction to include the lending of money or other extension of credit between a plan and a disqualified person. Allows eligible individuals to make contributions to a Roth IRA that are not tax-deductible. Distributions from the account can generally be treated as a qualified distribution if the distribution is made on or after the Roth IRA owner reaches age 59½ and after the 5-taxable-year period beginning when the account was initially opened. Defines a prohibited transaction to include the lending of money or other extension of credit between a plan and a disqualified person.
Appendix III: Estimated Incidence of Certain Early Withdrawals of Retirement Savings [Table: estimates shown separately for 401(k) plans and for 401(k) plans ($1,000 or more).] Legend: * Sampling error was too large to report an estimate. In addition to the contact named above, Dave Lehrer (Assistant Director); Jonathan S. McMurray (Analyst-in-Charge); Gustavo O. Fernandez; Sean Miskell; Jeff Tessin; and Adam Wendel made key contributions to this report. James Bennett, Holly Dye, Sara Edmondson, Sarah Gilliland, Sheila R. McCoy, Ed Nannenhorn, Katya Rodriguez, MaryLynn Sergent, Linda Siegel, Rachel Stoiko, Frank Todisco, and Sonya Vartivarian also provided support. The Nation's Fiscal Health: Action Is Needed to Address the Federal Government's Future. GAO-18-299SP. Washington, D.C.: June 21, 2018. The Nation's Retirement System: A Comprehensive Re-evaluation is Needed to Better Promote Future Retirement Security. GAO-18-111SP. Washington, D.C.: October 18, 2017. Retirement Security: Improved Guidance Could Help Account Owners Understand the Risks of Investing in Unconventional Assets. GAO-17-102. Washington, D.C.: December 8, 2016. 401(k) Plans: Effects of Eligibility and Vesting Policies on Workers' Retirement Savings. GAO-17-69. Washington, D.C.: October 21, 2016. Retirement Security: Low Defined Contribution Savings May Pose Challenges. GAO-16-408. Washington, D.C.: May 5, 2016. Retirement Security: Shorter Life Expectancy Reduces Projected Lifetime Benefits for Lower Earners. GAO-16-354. Washington, D.C.: March 25, 2016. Social Security's Future: Answers to Key Questions. GAO-16-75SP. Washington, D.C.: October 27, 2015. Retirement Security: Federal Action Could Help State Efforts to Expand Private Sector Coverage. GAO-15-556. Washington, D.C.: September 10, 2015. Highlights of a Forum: Financial Literacy: The Role of the Workplace. GAO-15-639SP. Washington, D.C.: July 7, 2015.
401(k) Plans: Greater Protections Needed for Forced Transfers and Inactive Accounts. GAO-15-73. Washington, D.C.: November 21, 2014. Older Americans: Inability to Repay Student Loans May Affect Financial Security of a Small Percentage of Retirees. GAO-14-866T. Washington, D.C.: September 10, 2014. Financial Literacy: Overview of Federal Activities, Programs, and Challenges. GAO-14-556T. Washington, D.C.: April 30, 2014. Retirement Security: Trends in Marriage and Work Patterns May Increase Economic Vulnerability for Some Retirees. GAO-14-33. Washington, D.C.: January 15, 2014. 401(k) Plans: Labor and IRS Could Improve the Rollover Process for Participants. GAO-13-30. Washington, D.C.: March 7, 2013. Retirement Security: Women Still Face Challenges. GAO-12-699. Washington, D.C.: July 19, 2012. 401(k) Plans: Policy Changes Could Reduce the Long-term Effects of Leakage on Workers' Retirement Savings. GAO-09-715. Washington, D.C.: August 28, 2009.
Federal law encourages individuals to save for retirement through tax incentives for 401(k) plans and IRAs—the predominant forms of retirement savings in the United States. In 2017, U.S. plans and IRAs reportedly held investments worth nearly $17 trillion. Federal law also allows individuals to withdraw assets from these accounts under certain circumstances. DOL and IRS oversee 401(k) plans, and collect annual plan data—including financial information—on the Form 5500. For both IRAs and 401(k) plans, GAO was asked to examine: (1) the incidence and amount of early withdrawals; (2) factors that might lead individuals to access retirement savings early; and (3) policies and strategies that might reduce the incidence and amounts of early withdrawals. To answer these questions, GAO analyzed data from IRS, the Census Bureau, and DOL from 2013 (the most recent complete data available); and interviewed a diverse range of stakeholders identified in the literature, including representatives of companies sponsoring 401(k) plans, plan administrators, subject matter experts, industry representatives, and participant advocates. In 2013, individuals in their prime working years (ages 25 to 55) removed at least $69 billion (+/- $3.5 billion) of their retirement savings early, according to GAO's analysis of 2013 Internal Revenue Service (IRS) and Department of Labor (DOL) data. Withdrawals from individual retirement accounts (IRAs)—$39.5 billion (+/- $2.1 billion)—accounted for much of the money removed early, were equivalent to 3 percent (+/- 0.15 percent) of the age group's total IRA assets, and exceeded their IRA contributions in 2013. Participants in employer-sponsored plans, like 401(k) plans, withdrew at least $29.2 billion (+/- $2.8 billion) early as hardship withdrawals, lump sum payments made at job separation (known as cashouts), and loan balances that borrowers did not repay.
Hardship withdrawals in 2013 were equivalent to about 0.5 percent (+/-0.06 percent) of the age group's total plan assets and about 8 percent (+/- 0.9 percent) of their contributions. However, the incidence and amount of certain unrepaid plan loans cannot be determined because the Form 5500—the federal government's primary source of information on employee benefit plans—does not capture these data. Stakeholders GAO interviewed identified flexibilities in plan rules and individuals' pressing financial needs, such as out-of-pocket medical costs, as factors affecting early withdrawals of retirement savings. Stakeholders said that certain plan rules, such as setting high minimum loan thresholds, may cause individuals to take out more of their savings than they need. Stakeholders also identified several elements of the job separation process affecting early withdrawals, such as difficulties transferring account balances to a new plan and plans requiring the immediate repayment of outstanding loans, as relevant factors. Stakeholders GAO interviewed suggested strategies they believed could balance early access to accounts with the need to build long-term retirement savings. For example, plan sponsors said allowing individuals to continue to repay plan loans after job separation, restricting participant access to plan sponsor contributions, allowing partial distributions at job separation, and building emergency savings features into plan designs, could help preserve retirement savings (see figure). However, they noted, each strategy involves tradeoffs, and the strategies' broader implications require further study. GAO recommends that, as part of revising the Form 5500, DOL and IRS require plan sponsors to report the incidence and amount of all 401(k) plan loans that are not repaid. DOL and IRS neither agreed nor disagreed with our recommendation.
Generally, the responsibility for reducing lead in drinking water and ensuring safe drinking water overall is shared by EPA, states, and local water systems. EPA is responsible for, among other things, national implementation of the Lead and Copper Rule, setting standards, overseeing states’ implementation of the rule, and conducting some enforcement activities. However, most states have primary responsibility for enforcing the requirements under SDWA as amended. Water systems are generally subject to requirements under SDWA as amended, such as the Lead and Copper Rule, and are responsible for managing and funding the activities and infrastructure needed to meet those requirements. Such infrastructure includes storage facilities and drinking water mains and may include other pipes such as service lines. There are 1 million miles of drinking water mains in the country, according to a 2017 American Society of Civil Engineers study. As figure 1 illustrates, service lines are the smaller pipes that connect the water mains to homes and buildings. According to EPA guidance, service lines also include any smaller pipes used for connecting a service line to the water mains (e.g., gooseneck pipes which are also known as pigtails). Service lines can generally be made of lead, steel, copper, or plastic. Service lines can be fully owned by the water system (publicly owned) or by the homeowner (privately owned), or ownership can be shared. In most communities, lead service lines are partially owned by the water system and partially owned by the homeowner. With shared ownership, the water system typically owns the service line from the water main to the curb stop, and the homeowner owns the service line from the curb stop into the home. In such cases, each party is responsible for maintaining the part of the service line that it owns. 
In some circumstances, if lead levels are higher than the Lead and Copper Rule allows and other measures do not alleviate the problem, the Lead and Copper Rule requires water systems to replace lead service lines under the systems' control. The Lead and Copper Rule does not require homeowners to replace the portion of lead service lines they own, but if they choose to do so they are generally responsible for the associated costs. The Lead and Copper Rule allows for a partial replacement by the water system when an owner of a home or building is unable or unwilling to pay for replacement of the portion of the service line not owned by the water system. The total number of lead service lines is unknown, and while national, state, and local estimates exist, the approaches used to count lead service lines vary. The total number of lead service lines is unknown because, among other things, the Lead and Copper Rule does not require all water systems to collect such information. National, state, and local estimates exist, but the methods used to arrive at these estimates vary, making it challenging to compare estimates. The total number of lead service lines is unknown, in part because the Lead and Copper Rule does not require all water systems to develop and maintain a complete inventory of lead service lines, and there are no national repositories of information about lead service lines. According to EPA headquarters officials we interviewed in 2017, the materials inventory required under the Lead and Copper Rule is not intended to be a census of lead service lines (and other lead pipes such as goosenecks/pigtails). Instead, it is intended to provide sufficient information to develop a plan for periodically obtaining tap samples. For example, according to 2008 EPA guidance to water systems, if a system contains lead service lines, then, if possible, half of the sample sites should include those served by a lead service line.
The Lead and Copper Rule requires water systems to conduct complete inventories only if the water system is required to begin replacing lead service lines. In these instances, water systems are required to expand the materials inventory to a complete inventory that identifies the total number of lead service lines for the purpose of tracking replacements over time. As we reported in 2017, based on the available data, the majority of the 68,000 water systems subject to the Lead and Copper Rule at the time of our review had not been required to replace lead service lines and therefore were not required to conduct complete inventories. Moreover, there are no national repositories for information about lead service lines. In September 2017, we recommended that, as a part of revisions to the Lead and Copper Rule, EPA require states to report data on lead pipes (including lead service lines) and incorporate these data in the agency’s Safe Drinking Water Information System. EPA agreed with the recommendation but has not implemented it. In May 2018, EPA noted that it was in the process of reviewing comments received through consultations with state and local officials and tribes. According to EPA officials, final revisions to the Lead and Copper Rule are expected by February 2020. We continue to believe that EPA should collect data about lead pipes (including lead service lines) from states. By doing so, EPA and congressional decision makers would have important information at the national level on what is known about lead infrastructure in the country, thereby facilitating oversight of the Lead and Copper Rule. The total number of lead service lines is unknown, and while some entities have developed estimates of lead service lines at the national, state, or local water system level, the estimates we reviewed have significant limitations to their reliability. Moreover, the approaches used to arrive at these estimates vary, making it challenging to compare estimates. 
Nationally, according to EPA's October 2016 Lead and Copper Rule Revisions White Paper, there are an estimated 6.5 million to 10 million homes served by lead service lines. This range of estimates, based in part on data from a study for the 1991 Lead and Copper Rule, has significant limitations. In appendix I we explain why EPA's estimate may not accurately reflect the total number of lead service lines nationwide. An April 2016 American Water Works Association study estimated 6.1 million lead service lines nationwide. The authors of this study extrapolated the number based on survey responses from 978 water systems in 2011 and 2013. While this study is the most recent attempt to provide a national estimate, it has significant limitations. First, the sample was not statistically representative of all 68,000 water systems subject to the Lead and Copper Rule; the water systems that responded to the American Water Works Association's survey did not constitute a statistical sample. Second, according to the study's authors, survey responses were based on water systems' best guesses of the number of lead service lines in their systems. However, since water systems have not been required to maintain inventories of lead service lines, many of them do not know the exact number. For these reasons, we are not confident that the number accurately reflects the total number of lead service lines nationwide. An American Water Works Association official told us that the organization is not planning to update the study. EPA officials told us that they were not aware of a more recent study than the association's 2016 study. In addition, EPA officials said in May 2018 that the results in the American Water Works Association study likely represent a lower-bound estimate for the number of lead service lines in the country because the sample was not generalizable and had other data quality issues.
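Extrapolations like the one described above scale a sample's observed lead-line share up to the national number of service connections. A minimal sketch of such a ratio extrapolation follows; all figures are hypothetical and are not the survey's actual data.

```python
def extrapolate_national_total(sample_lead_lines, sample_connections,
                               national_connections):
    """Ratio extrapolation: scale a sample's lead-line share up to the
    national connection count. The result is only unbiased if the sample
    is representative of all water systems."""
    share = sample_lead_lines / sample_connections
    return share * national_connections

# Hypothetical figures for illustration only.
estimate = extrapolate_national_total(
    sample_lead_lines=150_000,
    sample_connections=2_000_000,
    national_connections=80_000_000,
)
print(f"{estimate:,.0f} lead service lines")  # 6,000,000
```

The weaknesses noted above follow directly from this arithmetic: if the responding systems' lead-line share differs systematically from the national share, or if the reported counts are guesses, the scaled-up total inherits those errors.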
EPA officials in one region we interviewed said that estimates of lead service lines can decrease or increase as a water system replaces lead service lines and as a water system does or does not count lead service lines on private property. The Lead and Copper Rule does not require states to collect statewide information about lead service lines, but at least two states collected data from water systems in their states and published reports with these data: A 2016 report by the Massachusetts Department of Environmental Protection’s Drinking Water Program reported 22,023 lead service lines and 15,809 lead goosenecks and pigtails statewide. The report counted goosenecks and pigtails separately from lead service lines. Officials from the Massachusetts Department of Environmental Protection told us that the state has about 2 million service lines total; therefore, about 1 percent of the total service lines are lead. A 2017 report by the Washington State Department of Health estimated 1,000-2,000 lead service lines statewide and 8,000 goosenecks statewide. According to Washington State officials, they continued to update their estimates in early 2018 with selected water utilities. Generally, the purpose of both studies, as stated in each report, was to identify areas in which water systems would need technical assistance in complying with the Lead and Copper Rule or state requirements. However, for the purposes of estimating the number of lead service lines, complete details were not available about the methodologies and some systems that did respond were only able to provide rough guesses rather than precise counts of lead service lines. EPA headquarters officials told us that Massachusetts and Washington were at the forefront of states’ efforts to gather information about lead service lines. EPA officials also told us that they were not aware of any other states with published reports estimating the number of lead service lines. 
However, at least two states have also collected information about lead service lines but had not published the information in official reports at the time of our review. For example, in 2016, officials in Indiana and Maryland sent questionnaires to water systems in their states asking for information about the number of lead service lines. A representative of a water association told us that, generally, water systems were in the beginning stages of conducting complete inventories of lead service lines. However, some local water systems also have estimates. For example, EPA officials told us that water systems in the states of Ohio, Michigan, and Washington had estimates of lead service lines. In May 2018, a representative of the Greater Cincinnati Water Works water system estimated that, out of a total of 240,000 service lines in the area served by that water system, approximately 7 percent were publicly owned lead service lines and approximately 18 percent were privately owned lead service lines. In March 2018, representatives of the Greater Cincinnati Water Works water system said that their estimates of lead service lines are best characterized as what is known at any given point in time. These representatives also told us that they collect this information on a continual basis from historical and on-going maintenance records, reports of lead service lines by customers, and the water system's lead service line replacement program, among other sources. To conduct complete inventories and develop estimates, water systems have used varying approaches, which can hinder comparisons among states and water systems. The publicly available reports that existed as of May 2018 provide some insight into the various approaches water systems have used. For example, to identify lead service lines, water systems have used visual inspection or a combination of visual inspections, existing water system records, and discussions with homeowners.
In addition, water systems have used various definitions of lead service lines. For example, water systems have counted: only active service lines delivering water to customers, or both active and inactive (no longer delivering water to customers) service lines; only the publicly owned lead service lines, or both the publicly and privately owned portions of the lead service lines; and only lead service lines, or the lead service lines and goosenecks/pigtails separately. While most states informed EPA that they intend to fulfill the agency's request to work with water systems to publicize inventories of lead service lines, EPA has identified potential challenges to these efforts. Nonetheless, the agency has not followed up with all states since 2016 to share information about how to address these challenges. Most states that said they intended to fulfill EPA's request to encourage water systems to publicize materials inventories reported in subsequent letters to or meetings with EPA that they did so; however, as of May 2018, most large water systems had not made such information public. Our analysis of states' written responses to EPA's 2016 request, and information obtained through interviews with EPA officials as of February 2018, found that most (43) of the 50 states indicated an intent to fulfill EPA's request, 3 states said that they may consider it, and 4 states did not intend to fulfill EPA's request. Of the 43 states that responded that they would fulfill EPA's request, almost all (39) reported in subsequent letters to or meetings with EPA that they had encouraged water systems to publicize their materials inventories or other information about lead service lines.
In these letters and meetings, states also reported taking other actions to increase their knowledge about lead service lines such as requesting that water systems update the materials inventory required by the Lead and Copper Rule, creating online repositories of maps of lead service lines, posting reports on lead service lines, and issuing requirements for water systems to collect information on lead service lines. For example, in May 2016, the governor of Washington issued a directive requiring the state’s Department of Health to work with certain water systems to identify all lead service lines and lead components within 2 years. Figure 2 shows the number of states that reported fulfilling EPA’s request or taking other related actions. Because EPA asked states to prioritize large water systems (those servicing populations greater than 50,000), we reviewed the websites for the 100 largest water systems. As of January 2018, we found 12 of these water systems had publicized information on the inventory of lead service lines; the rest had not. The information on the websites for the 12 water systems varied. For example, the water system for Tulsa, Oklahoma posted a map that highlighted where lead service lines may be present. Water systems such as Cincinnati, Ohio, Boston, Massachusetts, and Washington, D.C., provided interactive maps that showed locations identified as having lead service lines. See figure 3 for examples of the interactive maps of lead service lines that some selected large water systems have provided to the public. Water systems that serve populations greater than 50,000 but were not among the 100 largest water systems at the time of our review may have also publicized information on the inventory of lead service lines. For example, the water systems for Akron, Ohio, Flint, Michigan, and Providence, Rhode Island each publicized an interactive or other type of map of lead service lines. 
EPA officials in the regional offices provided a range of reasons why water systems may be challenged in conducting inventories of lead service lines and making any information about lead service lines public; however, the agency has not followed up with all states about how to address such challenges since 2016. In September 2017, we reported that the six states that would not fulfill EPA's 2016 request had highlighted challenges in finding historical documentation about lead pipes to create plans for collecting tap water samples or in dedicating staff resources to do so. In January and February 2018, some officials whom we interviewed in EPA's 10 regional offices agreed that these would be challenges for states and water systems. The officials also mentioned additional potential challenges in conducting complete inventories of lead service lines or publicizing information about lead service lines. Table 1 describes the challenges mentioned by EPA officials in the 10 regional offices. Since the February 2016 letter, EPA followed up in July 2016 with a letter to the Association of State and Territorial Health Officials and the Environmental Council of the States, which represents all states. In that letter, EPA provided two examples of state practices that increase public transparency: some drinking water systems are providing online searchable databases that provide information on known locations of lead service lines, or are providing videos that show homeowners how to determine whether their home is served by a lead service line. The letter also said that EPA would continue to work with states to ensure that the identification of the locations of lead service lines remains a priority for drinking water systems. However, EPA has conducted limited follow-up since then, mainly, EPA headquarters and regional officials said, because they have focused their efforts on ensuring states appropriately comply with the Lead and Copper Rule.
As previously noted in this report, posting materials inventories or other information about the location of lead service lines is not a requirement of the Lead and Copper Rule. In May 2018, EPA headquarters officials we interviewed said that they learned of some states’ and water systems’ efforts toward making information about lead service lines available to the public since 2016, through conferences and discussions with states. These headquarters officials told us that they have shared such efforts with those states who, in 2016, said they did not intend to fulfill EPA’s 2016 request. For example, EPA shared how states that were publicizing information about lead service lines were addressing privacy concerns with states that originally said they would not fulfill EPA’s request. However, as of January 2018, most of the 100 largest water systems had not made their materials inventories or additional maps or updated inventories public. According to EPA’s February 2016 letter, the agency’s objective in encouraging states to work with water systems to post, on a public website, the water system’s original materials inventory along with any additional updated map or inventories of lead service lines was to assure the public that lead risks were being addressed. Under federal standards for internal control, management should externally communicate the necessary quality information, so that external parties can help to achieve the entity’s objectives. By sharing information with all states about the approaches that some states and water systems are using to successfully identify and publicize information about lead service lines, including responses to potential challenges, EPA could encourage states to be more transparent to the public and support the agency’s objectives for safe drinking water. Lead service lines present a significant risk of lead contamination in drinking water. 
Publicizing drinking water systems’ knowledge about lead service lines, and other lead infrastructure, would facilitate oversight of the Lead and Copper Rule. In September 2017, we recommended that, as a part of revisions to the Lead and Copper Rule expected by February 2020, EPA require states to report data on lead pipes (including lead service lines) and incorporate these data in the agency’s Safe Drinking Water Information System. EPA agreed with the recommendation, and we continue to believe that EPA should require data about lead pipes (including lead service lines) from states. Most states reported that they had encouraged their water systems to publicize information about lead service lines in response to EPA’s February 2016 requests. EPA headquarters officials told us that they had learned of some states’ and water systems’ efforts since 2016 and shared this information with the few states that said that they would not take action in response to EPA’s letter. This information did in fact help at least one state take action, according to information we received from EPA and the state. By sharing information with all states about the approaches that some states and water systems are using to successfully identify and publicize information about lead service lines, including responses to potential challenges, EPA could encourage states to be more transparent to the public and support the agency’s objectives for safe drinking water. The Assistant Administrator for Water of EPA’s Office of Water should share information with all states about the approaches that some states and water systems are using to successfully identify and publicize information on lead service lines, including responses to potential challenges. (Recommendation 1) We provided a draft of this report to EPA for review and comment. In its comments, reproduced in appendix II, EPA agreed with our recommendation. 
The agency also highlighted a recently developed website that showcases efforts to identify and replace lead service lines and said that it will continue to ensure states and water systems are aware of this resource. We are sending copies of this report to the appropriate congressional committees, the Administrator of EPA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to examine (1) what is known about the number of existing lead service lines nationally, and among states and water systems; and (2) state responses to EPA’s February 2016 request to work with water systems to publicize inventories of lead service lines and any steps EPA has taken to follow up on these responses. To examine what is known about the number of existing lead service lines nationally, and among states and water systems, we relied on interviews and publicly available reports for which we could assess the reliability of the data. We reviewed the requirements under the Lead and Copper Rule for assessing the number of lead service lines. We interviewed officials from EPA’s Office of Water and the following water organizations concerning what these officials knew about the number of lead service lines nationally and among states and water systems: the American Water Works Association, the Association of State Drinking Water Administrators, and the Rural Community Assistance Partnership. We also interviewed an official with the Environmental Defense Fund regarding the available information about the number of lead service lines nationally and among states and water systems. 
We selected these organizations because they are all members of the Lead Service Line Replacement Collaborative, a consortium that provides information about voluntary lead service line replacement for states and water systems. On behalf of the Lead Service Line Replacement Collaborative, the organizations we spoke with are collecting examples of states’ and water systems’ experiences in conducting inventories of lead service lines, as the first step in replacing lead service lines. Using information from these interviews, we identified three published studies from the American Water Works Association, the state of Massachusetts, and the state of Washington. We interviewed the authors of the studies to determine the reliability, completeness, and accuracy of the data presented in the studies. For the 2016 American Water Works Association study, we determined that the data were of undetermined reliability because the responses of the water systems surveyed were not generalizable to all water systems and the study authors could not verify the accuracy of the information. Specifically, the 2016 American Water Works Association study was not based on a statistical sample; therefore, sampling error could not be calculated, and information was not available to determine whether responding water systems were similar to nonresponding water systems. For example, the estimate is based on survey responses from 978 of the approximately 23,000 water systems that existed around the time of the surveys, and therefore may not represent all water systems nationwide. In addition, because many water systems do not have complete inventories of their lead service lines, the accuracy of the data that water systems submitted in response to the survey is difficult to verify. For example, our interview with the study authors indicates that the information provided by water systems varied in quality, with some systems basing their responses on rough estimates. 
We based our determination about the data on the criteria of Total Survey Error, which is a framework for assessing the validity and reliability of survey estimates. It includes sampling error (the difference between the population and the sample), nonresponse error, measurement error (the difference between the true response and the response provided by the respondent), and coverage error (the discrepancy between the list of individuals that is used to select a sample and the target population). EPA’s 2016 Lead and Copper Rule Revisions White Paper also identified an estimate of lead service lines. According to EPA officials, this estimate used data from the 2016 American Water Works Association study and a 1988 American Water Works Association study cited in the regulatory impact analysis for the 1991 Lead and Copper Rule. The 1991 estimate also had significant limitations in measurement error and representation error, as well as a lack of documentation about key aspects of the methodology. As such, we determined the estimate was not reliable for the purposes of establishing the total number of lead service lines in existence as of 1991. The two state-specific studies represent reasonable efforts to estimate the number of lead service lines in these states. However, they generally could not verify the accuracy of the information provided by water systems because, as we note elsewhere in this report, water systems may not know the number of lead service lines they have. Therefore, we determined that the data in the state-specific studies were also of undetermined reliability. Finally, while the Greater Cincinnati Water Works water system did not publish a report about lead service lines, we collected the information through an in-person interview and corroborated the information through a review of the water system’s geographic information system database. 
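The Total Survey Error framework is often summarized as a mean-squared-error decomposition of a survey estimate. The formalization below is a standard textbook statement of that decomposition, not a formula taken from the report itself:

```latex
\mathrm{MSE}(\hat{\theta})
  = \underbrace{\mathrm{Var}(\hat{\theta})}_{\substack{\text{variable errors}\\ \text{(e.g., sampling error)}}}
  + \underbrace{\left(\mathrm{E}[\hat{\theta}] - \theta\right)^{2}}_{\substack{\text{squared bias (e.g., coverage,}\\ \text{nonresponse, measurement error)}}}
```

For a nonstatistical sample such as the 2016 survey, the variance term cannot be estimated and the bias contributions are unknown, which is consistent with classifying the estimate as of undetermined reliability.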
The Greater Cincinnati Water Works’ GIS database includes the location and material information for all of the water system’s distribution system. According to the Greater Cincinnati Water Works website, the water system continues to update its map as it obtains more information from its customers. Based on these steps, we deemed the data provided by the water system to be sufficiently reliable for the purposes of describing the estimate reported by representatives of the Greater Cincinnati Water Works system. To examine states’ responses to EPA’s February 2016 request to work with water systems to publicize inventories of lead service lines and any steps EPA has taken to follow up on these responses, we relied both on the publicly available letters from each state to EPA and on interviews with EPA regional and headquarters officials. We did not interview state officials in all 50 states, but reviewed some state documents, where available. We used a standard set of open-ended questions to interview officials in EPA’s headquarters and in each of the 10 regional offices. To analyze states’ and EPA officials’ responses, we conducted two analyses: one to summarize updates in state responses to EPA’s February 2016 letter, and one to summarize EPA’s responses to challenges states and water systems may face in conducting and publicizing materials inventories. To confirm each analysis, one analyst independently summarized the information and another analyst verified the accuracy of the information. All initial disagreements were discussed and reconciled. All numbers in our analysis are considered approximate because interpretations of the states’ responses to EPA’s 2016 letter can differ, and states may have taken actions after our interviews with EPA regional officials, or may have taken actions that they did not report to EPA. Figure 4 shows the EPA regions and the states within those regions. 
We also reviewed EPA documents related to EPA’s request that states take certain actions following the events in Flint, Michigan. In addition, we reviewed federal regulations; EPA guidance to water systems on how to implement the Lead and Copper Rule; and other relevant documents, such as an EPA white paper. Because EPA asked states to place an emphasis on working with large water systems to publicize their materials inventories or updated inventories or maps of lead service lines, we reviewed the websites of the 100 largest water systems by population. We conducted our review from January to February 2018; since then, additional water systems may have provided information to the public on lead service lines. We identified the largest water systems, based on population served, from data in EPA’s Safe Drinking Water Information System/Fed. EPA has stated on its website that the agency acknowledges challenges related to the data in the Safe Drinking Water Information System/Fed, specifically underreporting of some data by states. GAO has also reported on EPA’s challenges with the Safe Drinking Water Information System/Fed. Even with these challenges, we determined that the information on the populations served by water systems in the Safe Drinking Water Information System/Fed is generally reliable for identifying the largest water systems. We used a standard set of search terms on each website to ensure the consistency of our searches, as well as information from water organizations and EPA officials, where applicable. We counted a water system as having an inventory if the water system provided a map, interactive map, list of pipes or service lines, or numerical count of lead service lines available to the public. To ensure the completeness of this analysis, one analyst independently conducted the search of websites and another analyst verified the search. All initial disagreements were discussed and reconciled. 
We compared EPA’s actions to follow up on state responses with federal standards for internal control for information and communication. We conducted this performance audit from October 2017 to September 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Diane Raynes (Assistant Director); Tahra Nichols (Analyst in Charge); David Blanding, Jr.; Mark Braza; Lawrence Crockett, Jr.; Justin Fisher; Richard P. Johnson; and Jeanette Soares made key contributions to this report. In addition, Cynthia Norris and Dan Royer made important contributions. Drinking Water: Additional Data and Statistical Analysis May Enhance EPA’s Oversight of the Lead and Copper Rule. GAO-17-424. Washington, D.C.: September 1, 2017. Water Infrastructure: Information on Selected Midsize and Large Cities with Declining Populations. GAO-16-785. Washington, D.C.: September 15, 2016. Water Infrastructure: EPA and USDA Are Helping Small Water Utilities with Asset Management; Opportunities Exist to Better Track Results. GAO-16-237. Washington, D.C.: January 27, 2016. Drinking Water: Unreliable State Data Limit EPA’s Ability to Target Enforcement Priorities and Communicate Water Systems’ Performance. GAO-11-381. Washington, D.C.: June 17, 2011. Drinking Water: The District of Columbia and Communities Nationwide Face Serious Challenges in Their Efforts to Safeguard Water Supplies. GAO-08-687T. Washington, D.C.: April 15, 2008. Drinking Water: EPA Should Strengthen Ongoing Efforts to Ensure That Consumers Are Protected from Lead Contamination. GAO-06-148. Washington, D.C.: January 4, 2006. 
District of Columbia’s Drinking Water: Agencies Have Improved Coordination, but Key Challenges Remain in Protecting the Public from Elevated Lead Levels. GAO-05-344. Washington, D.C.: March 31, 2005. Drinking Water: Safeguarding the District of Columbia’s Supplies and Applying Lessons Learned to Other Systems. GAO-04-974T. Washington, D.C.: July 22, 2004.
The crisis in Flint, Michigan, brought increased attention to lead in drinking water infrastructure. Lead in drinking water primarily comes from corrosion of service lines connecting the water main to a house or building. In 1991, EPA issued the Lead and Copper Rule that required water systems to conduct a “materials inventory” of lead service lines. In light of the events in Flint, EPA sent a letter to all states in February 2016 encouraging them to work with water systems to publicly post the materials inventory, along with any additional updated maps or inventories of lead service lines, actions the rule does not require. A House Committee report accompanying a bill for the Department of the Interior, Environment and Related Agencies Appropriations Act, 2017, includes a provision for GAO to review lead service lines. This report examines (1) what is known about the number of existing lead service lines among states and water systems and (2) states' responses to EPA's February 2016 request to work with water systems to publicize inventories of lead service lines and any steps EPA has taken to follow up on these responses. GAO reviewed existing studies of lead service lines, reviewed the websites of the 100 largest water systems, and interviewed EPA officials in headquarters and its 10 regional offices. The total number of lead service lines is unknown, and while national, state, and local estimates exist, the approaches used to count lead service lines vary. A 2016 American Water Works Association study estimated that nationally there were 6.1 million lead service lines, but the study has significant sampling limitations and, as a result, likely does not accurately reflect the total number of lead service lines nationwide. In addition, at least two states, Massachusetts and Washington, published reports with estimates of lead service lines and reported 22,023 and 1,000-2,000 lead service lines as of 2016 and 2017, respectively. 
Certain water systems also have estimates, such as the approximately 7 percent of publicly owned lead service lines out of the area's total number of service lines cited by a representative for the system serving Cincinnati, Ohio, and surrounding areas, as of May 2018. While most states informed the Environmental Protection Agency (EPA) that they intend to fulfill the agency's request to publicize inventories of lead service lines, EPA has identified potential challenges to these efforts. Of the approximately 43 states that responded that they would fulfill EPA's request, almost all (39) reported to EPA that, although they had encouraged water systems to publicize inventories, few systems had completed these actions. GAO found in January 2018 that, of the 100 largest water systems, 12 had publicized information on the inventory of lead service lines. According to EPA, among the challenges in conducting inventories of lead service lines and publicizing information about them were concerns about posting on public websites information about lead service lines on private property, as well as a lack of records about the locations of lead service lines. EPA told GAO the agency was focused on state compliance with drinking water rules and was not following up with information on how states could address the challenges cited. By sharing information with all states about the approaches that some states and water systems are using to successfully identify and publicize information about lead service lines, including responses to potential challenges, EPA could encourage states to be more transparent to the public and support the agency's objectives for safe drinking water. GAO recommends that EPA share information about the successful approaches states and water systems use to identify and publicize locations of lead service lines with all states. EPA agreed with the recommendation.
Many consumer products—such as deodorants, shaving products, and hair care products—are differentiated to appeal specifically to men or women through differences in packaging, scent, or other product characteristics (see fig. 1). These differences related to gender can affect manufacturing and marketing costs that may contribute to price differences in products targeted to different genders. However, firms may also charge consumers different prices for the same (or very similar) goods and services even when there are no differences in the costs to produce them. To maximize profits, firms use a variety of techniques to charge prices close to the highest price different consumers are willing to pay. Firms may attempt to get one segment of the consumer market to pay a higher price than another by slightly altering or differentiating the product. Based on the differentiated products, consumers self-select into different groups according to their preferences and what they are willing to pay. For example, some consumer goods have different versions of what is essentially the same product—except for differences in packaging or features, such as scent—with one version intended for women and another version intended for men. The two products may be priced differently because the firm expects that one gender will be willing to pay more for the product than the other based on preference for certain product attributes. Firms may also use a group characteristic, such as age or gender, to charge different prices because some groups may have differences in willingness or ability to pay. For example, a firm may offer discounted movie tickets to students or seniors, as they may have less disposable income. For the seller, the cost of providing the movie is the same for any customer, but the seller is able to maximize its profits by offering tickets to different groups of customers at different prices. 
A firm’s ability to differentiate prices depends on multiple factors, such as the firm’s market power (so that competitors cannot put downward pressure on prices to eliminate the price differences), the presence of consumer segments with different demands and willingness to pay, and control over the sale of its product so it cannot be easily resold to exploit price differences. In addition, the extent to which consumers pay different prices for the same or similar goods can depend on other factors, such as consumers’: willingness to purchase an item they believe may be priced higher for their gender; ability to compare prices and product characteristics and choose a product based on its characteristics rather than its price; choices about whether to purchase a more expensive version of the product (e.g., a branded item versus a cheaper store brand); choices about where to purchase the item (i.e., when different retailers sell the same item at different prices); and use of coupons or promotions. No federal law expressly prohibits businesses from charging different prices for the same or similar consumer goods and services targeted to men and women. However, consumer protection laws do prohibit sex discrimination in credit and real estate transactions. Specifically, the Equal Credit Opportunity Act (ECOA) prohibits creditors from discriminating against credit applicants based on sex or certain other characteristics, and the Fair Housing Act (FHA) prohibits discrimination in the housing market on the basis of sex or certain other characteristics. ECOA and FHA (collectively known as the fair lending laws) prohibit lenders from, among other things, refusing to extend credit or using different standards in determining whether to extend credit based on sex. Credit, such as a credit card account or mortgage loan, is generally made available and priced based on a number of risk factors, including credit score, income, and employment history. 
A borrower with a lower credit score is likely to pay a higher interest rate on a loan, reflecting the greater risk to the lender that the borrower could default on the loan. In addition to the interest rate, borrowing costs for consumers can also include fees and other costs charged by lenders or brokers. However, there may be differences in average outcomes for men and women—such as for availability of credit or interest rates—if there are differences related to gender in the factors that determine creditworthiness, such as income. BCFP, FTC, the federal prudential regulators, and DOJ have the authority to investigate alleged violations of ECOA and are primarily responsible for enforcing the act’s requirements, while HUD and DOJ share responsibility for enforcing the provisions of FHA. Further, BCFP and the prudential regulators oversee regulated entities for compliance with ECOA by, among other things, collecting complaints from the public and conducting routine inspections of the financial institutions they oversee. HUD and DOJ have the authority to bring enforcement actions for alleged violations of FHA. In 5 out of 10 product categories we analyzed, personal care products targeted to women sold at higher average prices than those targeted to men after controlling for certain observable factors. For 2 of the 10 product categories, men’s versions sold at higher average prices. While the factors we controlled for likely proxy for various costs and consumer preferences, we could not fully observe all underlying differences in costs and demand for products targeted to different genders. As a result, we could not determine the extent to which the gender-based price differences we observed may be attributed to gender bias as opposed to other factors. 
Women’s versions of personal care products sold at a statistically significant higher average price than men’s versions for 5 of the 10 personal care product categories we analyzed—using two different price measures and after controlling for observable factors that could affect price, such as brands, product size or quantity, promotional expenses (see table 1) and other product-specific attributes (e.g., scent, special claims, form). Because women’s and men’s versions of the same product were frequently sold in different sizes, we compared prices using two price measures: average item price and average price per ounce or count of product. For 2 of the 10 product categories—shaving gel and nondisposable razors—men’s versions sold at a statistically significant higher price using both price measures. For one category (razor blades), women’s versions sold at a statistically significant higher average price per count, but there was no gender price difference using average item prices. Additionally, for two product categories—disposable razors and mass-market perfumes—there were no statistically significant price differences between men’s and women’s products using either price measure. In addition to this analysis of retail price scanner data, we also manually collected advertised online prices for a limited selection of personal care products targeted to women and men from several online retailers. Some price comparisons of advertised online prices for men’s and women’s versions of a product were similar to comparisons of average prices paid based on the Nielsen retail price scanner data. For example, for three pairs of comparable underarm deodorants, the women’s deodorant was listed at a higher price per ounce on average than the men’s deodorant (see app. II). In addition, for one pair of shaving gel products we analyzed, the men’s shaving gel was listed at a higher price per ounce on average. 
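The distinction between the two price measures described above (average item price and average price per ounce) can be sketched in a few lines of code. The products, sizes, and prices below are invented for illustration and are not drawn from the Nielsen data:

```python
# Illustrative sketch of the two price measures used to compare gendered
# product versions. All figures below are hypothetical, not Nielsen data.
from statistics import mean

# (target gender, size in ounces, shelf price in dollars)
deodorants = [
    ("W", 2.6, 4.99),
    ("W", 2.6, 5.49),
    ("M", 3.8, 4.79),
    ("M", 3.8, 4.99),
]

def price_measures(items, gender):
    """Return (average item price, average price per ounce) for one gender."""
    subset = [(oz, p) for g, oz, p in items if g == gender]
    return mean(p for _, p in subset), mean(p / oz for oz, p in subset)

w_item, w_per_oz = price_measures(deodorants, "W")
m_item, m_per_oz = price_measures(deodorants, "M")
print(f"Women's versions: ${w_item:.2f}/item, ${w_per_oz:.2f}/oz")
print(f"Men's versions:   ${m_item:.2f}/item, ${m_per_oz:.2f}/oz")
```

Because the women's versions in this toy data are smaller, the per-ounce gap is wider than the per-item gap, which illustrates why an analysis of differently sized versions benefits from reporting both measures.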
However, for both pairs of nondisposable razors we analyzed, the women’s razors were listed at a higher average price per count than the men’s razors. This contrasted with the Nielsen data showing that men’s nondisposable razors sold at higher prices on average than women’s. An important limitation of our analysis of these advertised prices is that we were unable to determine the extent to which consumers actually paid these prices and in what volume the products were sold, and our results are not generalizable to the broader universe of prices for these products sold at other times or by other online retailers. Though we found that the target gender for a product is a significant factor contributing to the price differences we identified, we do not have sufficient information to determine the extent to which these gender-related price differences were due to gender bias as opposed to other factors. Versions differentiated to appeal to men and women can result in different costs for the manufacturer. Our econometric analysis controlled for many observable factors related to costs, such as product size, promotional activity, and packaging type. We also controlled for many product attributes, such as forms, scents, and special claims that products make, to account for underlying manufacturing cost differences. In addition, we controlled for brands, which can reflect consumer preferences. However, we do not have firm-level data on all cost differences—for example, those related to advertising and packaging. As a result, we could not determine the extent to which the price differences we observed may be explained by remaining cost differences between men’s and women’s products. We also do not have the data to determine the extent to which men and women have different demands and willingness to pay for a product, which would be expected to affect the prices firms charge for differentiated products. 
For example, some academic experts we spoke with said that women may value some product attributes, such as design and scent, more than men do. If products differentiated to incorporate those attributes do not result in different costs, then differences in prices could be part of a firm’s pricing strategy based on the willingness of one gender to pay more than another. The conditions necessary for firms to be able to implement a strategy of price differentiation likely exist for the personal care products we analyzed. First, our analysis suggests that due to industry concentration, there is limited market competition for the 10 personal care products we analyzed. With more market power, firms can more easily set different prices for different consumer segments. Second, firms have the ability to segment the market for personal care products by tailoring product characteristics related to gender, such as by labeling the product as women’s deodorant or men’s deodorant, or by altering scent or colors. Third, while men and women are able to freely purchase a product targeted to the opposite gender, certain factors may limit the extent to which this occurs. For example, some product differences such as scents may discourage one gender from buying products targeted to another gender. In addition, consumers may find it difficult and time-consuming to compare prices for similar men’s and women’s products because of the ways they are differentiated (such as product size and scents) and because they may be sold in different parts of a store. We reviewed studies that compared prices for men and women in four markets where the product or service is not differentiated by gender: mortgages, small business credit, auto purchases, and auto repairs. First, we reviewed studies on mortgage and small business credit that analyzed interest rates and access to credit to identify any differences for men and women. 
Second, we reviewed studies that compared prices quoted to men and women in auto purchase and repair markets. However, several of these studies have important limitations, such as using nonrepresentative data samples, and the results are not generalizable. Studies we reviewed found that women as a group pay higher interest rates on average than men, in part due to weaker credit characteristics. After controlling for borrower credit characteristics and other factors, three studies did not find statistically significant differences in interest rates between men and women for the same type of mortgage, while one study found that women paid higher mortgage rates for certain subprime loans. In addition, one study found that female borrowers defaulted less frequently on their loans than male borrowers with similar credit characteristics, suggesting that women as a group may pay higher mortgage rates than men relative to their default risk. While these studies attempted to control for factors other than gender or sex that could affect borrowing costs, several lacked important data on certain borrower risk characteristics. For example, several studies we reviewed relied on Home Mortgage Disclosure Act of 1975 (HMDA) data, which did not include data on risk factors such as borrower credit scores that could affect analysis of disparities between men and women. Also, several studies analyzed nonrepresentative samples of loans, such as subprime loans or loans originated more than 10 years ago, which limits the generalizability of the results (see table 2). Three of the studies we reviewed found that while women on average were charged higher interest rates on mortgage loans than men, this difference was not statistically significant after controlling for other factors. For example, one study found that differences in mortgage interest rates between men and women became insignificant after controlling for differences in how men and women shop for mortgage rates. 
The authors used data from the 2004 Survey of Consumer Finances (SCF) to analyze the effect on interest rates of mortgage features, borrower characteristics such as gender, and market conditions. However, their analysis did not include data on some borrower credit characteristics such as credit score and debt-to-income ratio that could affect borrowing costs. Another study found that women were charged higher interest rates for subprime loans made in 2005, but once the authors controlled for observed risk characteristics there was no evidence of disparity in interest rates by gender of the borrower in the subprime market. However, the authors’ data did not include any fees paid at loan origination, which could affect the overall cost of borrowing. A third study that examined disparities between men and women in subprime loans found no significant evidence that gender affected the cost of borrowing within the subprime market, though it did find that women—particularly African American women—were more likely to have subprime loans. The authors found that, even after controlling for some financial characteristics and loan terms, single African American women were more likely than non-Hispanic white couples to have subprime loans. One study analyzed subprime loans made by one large lender from 2003 through 2005 and found that women paid more for subprime mortgages than men after controlling for some risk factors. This study found that women had higher average borrowing costs—as measured by annual percentage rate—than men, and controlling for credit characteristics such as credit scores and debt-to-income ratios did not fully explain the differences. However, the authors did not control for other factors that could also affect borrowing costs, such as differences in education, shopping behaviors, and geographic location. 
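The idea of "controlling for" borrower characteristics, which runs through all of these studies, can be illustrated with a toy example. The data and coefficients below are entirely hypothetical and are not drawn from any of the studies reviewed; they simply show how a raw gap in average interest rates can vanish once a credit characteristic that differs between groups is held constant in an ordinary least squares regression:

```python
# Toy illustration of controlling for a credit characteristic in OLS.
# All data are hypothetical, NOT from the studies discussed in the text.
# Rates here depend only on credit score: rate = 6.0 + 0.01 * (720 - score),
# and the hypothetical women simply have lower scores than the men.

# (female indicator, credit score, mortgage rate in percent)
borrowers = [(1, s, 6.0 + 0.01 * (720 - s)) for s in (640, 660, 680)] + \
            [(0, s, 6.0 + 0.01 * (720 - s)) for s in (700, 720, 740)]

def solve(a, b):
    """Solve a small linear system a x = b by Gaussian elimination."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def ols(rows):
    """Fit rate ~ intercept + female + score via the normal equations."""
    X = [[1.0, f, s] for f, s, _ in rows]
    y = [r for _, _, r in rows]
    xtx = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(3)]
    return solve(xtx, xty)

# Raw (unadjusted) gap: the hypothetical women pay 0.6 points more on average...
raw_gap = (sum(r for f, _, r in borrowers if f) / 3
           - sum(r for f, _, r in borrowers if not f) / 3)
# ...but after controlling for credit score, the "female" coefficient is ~0.
_, female_coef, score_coef = ols(borrowers)
print(f"Raw gap: {raw_gap:.2f} points; adjusted gender coefficient: {female_coef:.6f}")
```

In this toy data the entire raw gap is explained by credit scores. In the studies above, some observed gaps similarly became statistically insignificant after such controls, while others did not.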
Additionally, a research paper found that female-only borrowers—that is, where the only borrower is a woman—default less than male-only borrowers with similar loans and credit characteristics. The authors found that female-only borrowers on average pay more for their mortgage loans because they generally have weaker credit characteristics, such as lower income, and also because a higher percentage of these mortgage loans are subprime. However, after controlling for credit characteristics such as credit score, loan term, and loan-to-value ratio, among others, the analysis showed that these weaker credit characteristics do not accurately predict how well women pay their mortgage loans. Since pricing is tied to credit characteristics and not performance, women may pay more relative to their actual risk than do similar men. Studies we reviewed on small business loans generally did not find differences in interest rates, though some found differences in denial rates and other accessibility issues between female- and male-owned firms. Most of the studies we reviewed used data from the 1993, 1998, or 2003 Survey of Small Business Finances (SSBF), which could limit the applicability or relevance of their findings today. A study that analyzed data from the 1993 SSBF did not find evidence that businesses owned by women paid more for credit than firms owned by white men. However, when the authors took into account the market concentration and competition, they found that white female-owned firms experienced increased denial rates in less competitive markets. In addition, the study found that women may avoid applying for credit in those markets because of the fear of being denied. For example, almost half of all small business owners who needed credit reported that they did not apply for credit, and these rates were even higher for businesses owned by women and minorities. 
Other studies found that women may have less access to small business credit than men, in part because of higher denial rates and because they may not apply for credit out of fear of rejection. For example, one study found that women-owned firms have higher loan denial rates than male-owned firms; however, this is mainly due to differences in business characteristics of female- and male-owned firms. The authors also found that even when denial rates are the same for small businesses with similar characteristics, women’s loan application rates are lower, suggesting that women may be discouraged from applying for credit by the higher overall denial rates for female-owned firms. Another study by one of the same authors examined the reasons why female borrowers may be discouraged from applying for a business loan compared to male business owners and found that it was mainly because they feared that their application would be rejected. A third study by the same author found that women in general did not have less access to credit than men, though newer female-owned firms received significantly lower loan amounts than requested compared to their male-owned counterparts. Similarly, the study also found that women with few years of experience managing or owning a business received significantly lower loan amounts compared with men with similar years of experience. A fourth study looked at six different types of loans, including lines of credit, and found that white women were significantly more likely than white men to avoid applying for a loan because they assumed they would be denied. However, once the authors’ model controlled for education differences, all gender disparities in applying for credit disappeared, though white women were still less likely than white men to have loans.
Studies we reviewed on auto purchases and repairs found that sellers’ expectations of what customers are willing to pay, and of how informed customers seem, can differ by the customer’s gender, which can affect the prices customers are quoted. However, these studies were published in 1995 and 2001, which may limit the applicability or relevance of their findings today. The 2001 study we reviewed on auto purchases found that though women paid higher prices than men for car purchases on average, these differences declined when cars were purchased online. The authors suggest that this may be because Internet consumers can effectively convey their level of price knowledge and therefore may seem better informed to the sellers. They also suggest it could be because the dealerships have less information about online consumers and their willingness to pay, which may limit the extent of price differentiation. The 1995 study on auto purchases found that dealers quoted significantly lower prices to white male test buyers than to female or African American test buyers who used identical, scripted bargaining strategies, in part because dealers may have made assumptions about women’s willingness to bargain for lower prices. We also reviewed one study on auto repairs that found that women were quoted higher prices than men if they seemed uninformed about the cost of car repair when requesting a quote, but the price differences disappeared if the study participant mentioned an expected price. The study suggests that a potential explanation for this result could be that auto repair shops expect women to accept a price that is higher than the market average and men to accept a price below it. BCFP and HUD have responsibilities to monitor consumer complaints in the consumer credit and housing markets, respectively. Additionally, FTC monitors complaints about the consumer credit and consumer goods markets.
All three agencies play a role in potentially monitoring or addressing issues of gender-related price differences and have online complaint forms for submission of consumer complaints: BCFP collects and reviews consumer complaints about financial products and services and provides complaints and related data in its Consumer Complaint Database. In 2017 BCFP received approximately 320,200 consumer complaints. The products that generated the most complaints in 2017 were “Credit or consumer reporting,” “Debt collection,” and “Mortgage.” According to BCFP officials, BCFP also analyzes loan and demographics data collected through HMDA and other data sources to monitor and identify market trends. In addition, BCFP and the federal financial regulators examine fair lending practices of the institutions they regulate, and examinations by FDIC and NCUA have uncovered sex discrimination in credit products. FTC receives complaints, which are stored in the Consumer Sentinel Network, a database of consumer complaints received by FTC, as well as those filed with other federal and state agencies and organizations, such as mass marketing fraud complaints from the Council of Better Business Bureaus. The complaints in the Consumer Sentinel Network focus on consumer fraud, identity theft, and other consumer protection matters, such as debt collection, and can include complaints related to consumer credit markets. HUD receives consumer complaints about potential FHA violations through its website, via its toll-free phone hotline, and in writing. HUD monitors those complaints through its online HUD Enforcement Management System. HUD investigates all complaints for which it has jurisdictional authority. HUD may monitor complaints to identify trends, but HUD officials stated that the agency does not generally monitor consumer credit and housing market data, absent a specific complaint.
In cases where HUD has jurisdictional authority under FHA, HUD offers conciliation between the parties. If resolution is not reached, and HUD determines there is reasonable cause to believe a violation has occurred, the parties may elect to have the matter heard in U.S. District Court or at HUD. In their oversight of federal antidiscrimination statutes, BCFP officials said they have not identified significant consumer concerns about price differences based on a consumer’s sex or gender. FTC and HUD officials identified some examples of concerns of this nature. For example, FTC has taken enforcement actions alleging unlawful race- and gender-related price differences. HUD has also identified several cases where pregnant women and their partners applied for a mortgage while the woman was on maternity leave, and the couple’s mortgage loan application was denied. BCFP, FTC, and HUD have received few consumer complaints about price differences related to sex or gender, according to our analysis of a sample of each agency’s 2012–2017 complaint data (see table 3). In separate samples of 100 gender-related complaints at BCFP, HUD, and FTC, we found that 0, 4, and 1 complaint, respectively, were related to price differences based on sex or gender. Three of the complaints from HUD also cited differences in price based on other protected classes (such as race or ethnicity). Half of the academic experts and consumer groups we interviewed told us that in some markets it is difficult for consumers to observe and compare prices paid by other consumers, such as when prices are not posted or can be negotiated (e.g., car sales). In such cases, consumers may not know if other consumers are paying a higher or lower price than the price quoted to them. Most academic experts also told us that when consumers are aware that price differences could exist, they may make different decisions when making purchases. 
Additionally, officials from BCFP noted that price differences related to gender may be difficult for consumers to identify, or that consumers may not know where to complain. BCFP, FTC, and HUD provide general consumer education resources on discrimination and consumer awareness (e.g., consumer guides and websites). Officials from BCFP and HUD said they have not identified a need to develop other consumer education resources specific to gender-related price differences. For example, BCFP’s print and online consumer education materials are intended to inform consumers of their rights and protections related to credit discrimination, which includes discrimination based on sex or gender. The three agencies’ consumer education materials also provide advice that could help consumers avoid paying higher prices regardless of their gender—such as home-buying resources and resources on comparison shopping. However, the agencies have not developed additional educational resources focused specifically on potential gender-related price differences in part because few complaints on this topic have been collected in their databases, agency officials told us. FTC officials noted that the agency tries to focus its education efforts on topics that will have the greatest benefit to consumers, often determined by information it gathers through complaints and investigations. Representatives of five consumer groups and industry associations told us that they have received few complaints about gender-related price differences. However, four consumer groups noted that low concern could be the result of consumers being unaware of price differences related to gender. For example, as indicated above, price differences related to gender may be difficult for consumers to identify when they cannot determine whether they are paying a higher price than others.
Representatives of two retailing industry associations similarly stated that they have not heard concerns about price differences related to gender. In response to consumer complaints or concerns about gender disparities in pricing, at least one state (California) and two municipalities (Miami-Dade County and New York City) have passed laws or ordinances to prohibit businesses from charging different prices for the same or similar goods or services solely based on gender (see table 4). In addition, two of these laws included requirements related to promoting price transparency. California enacted the Gender Tax Repeal Act of 1995, which prohibits businesses from charging different prices for the same or similar services based on a consumer’s gender. The law also requires certain businesses to display price information and disclose prices upon request, according to state officials with whom we spoke. Similarly, in 1997, Miami-Dade County passed the Gender Pricing Ordinance, which prohibits businesses from charging different prices based solely on a consumer’s gender (though businesses are permitted to charge different prices if the goods or services involve more time, difficulty, or cost). In the same year, it also passed an ordinance that prohibits dry cleaning businesses from charging different prices for similar services based on gender. This ordinance also requires those businesses to post all prices on a clear and conspicuous sign, according to county officials with whom we spoke. State and local officials we interviewed identified benefits and challenges associated with these laws. For example, California, New York City, and Miami-Dade County officials noted that these laws give them the ability to intervene to address pricing practices that may lead to discrimination based on gender. In addition, California state officials said that the state’s efforts to implement the Gender Tax Repeal Act helped to improve consumer awareness about gender price differences.
However, officials from California and Miami-Dade County cited challenges associated with tracking relevant complaints. For example, Miami-Dade County’s online complaint form includes a narrative section but does not ask for the complainant’s gender. Consumers do not always identify their gender in the narrative or state that gender was the reason for their treatment. Additionally, officials from California and Miami-Dade County stated that seeking out violations would be very resource-intensive, and they rely on residents to submit complaints about violations. We provided a draft of this report to BCFP, DOJ, FTC, and HUD. BCFP, FTC, and HUD provided technical comments on the report draft, which we incorporated where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, BCFP, DOJ, FTC, HUD, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. We used a multivariate regression model to estimate the effect of the gender to which a product is targeted on the price of that product while controlling for other factors that may also affect the product’s price. The factors that we controlled for were the product size, promotional and packaging costs, and other product characteristics discussed in detail later.
We used scanner data from the Nielsen Company (Nielsen) for calendar year 2016 and analyzed the following 10 product categories: (1) underarm deodorants, (2) body deodorants, (3) shaving cream, (4) shaving gel, (5) disposable razors, (6) nondisposable razors, (7) razor blades, (8) designer perfumes, (9) mass-market perfumes, and (10) mass-market body sprays. We estimated the following regression model for each of our 10 product categories: P = α + β*Male + λ*Size + θ*Owner + η*Promotion + μ*X + δ*Y + ε The dependent variable P in the above equation represents price. For our analysis, we constructed two measures of price. The first is the item price, estimated as the total dollar sales of an item (each item is depicted by a unique Universal Product Code (UPC) in the Nielsen data), divided by the total units sold of that item. The second measure of price that we use is price per ounce or price per count. This is estimated as the item price divided by the total quantity of product, where quantity or size depicts the number of ounces (as in the case of fragrances) or the count of blades in razor blade packs. The total quantity of the product is the ounces or counts of one item multiplied by the number of items included in a specific product configuration. For example, a 2-pack of deodorant sticks where each deodorant stick is 2.7 ounces would be a total quantity of 5.4 ounces. The variable Male in the above equation is an indicator variable depicting whether the product is designated as a “men’s” product in the Nielsen data. It is represented as a value of “1” for men’s products and a value of “0” for women’s products. The coefficient for this variable, β, would therefore show the price difference between a men’s and women’s product. A negative value would imply a lower price for products designated as men’s products. The variable Size represents the most appropriate specification of the size of the product.
Owner is a set of indicator variables representing all the brand owners selling a particular product. The brand of a product can be expected to have a substantial effect on prices for the kind of products we analyze because brands can be a proxy for quality for some consumers. However, we also found that firms often create gender-specific brands, so holding brands constant rendered most gender-based price comparisons infeasible. To overcome this, we hold owners instead of brands constant for our price comparison analysis. The variable Promotion represents the percentage of dollar sales that were sold on any type of promotion. This variable proxies for promotional costs to some extent based on the assumption that the greater the proportion of sales due to promotional activity, the greater the promotional costs. The variables X represent a set of indicator variables for packaging characteristics such as package delivery method (for example, roll-on or aerosol spray deodorants) or package shape (for example, bottle, tube, or can). We expect these characteristics to proxy for different costs associated with different packaging methods. The variables Y represent a set of indicator variables representing different product characteristics (for example, forms such as gel stick or smooth solid and claims such as “active cooling” or “anti-wetness” for underarm deodorants, and blade types such as “triple edge” and “flexible six” for razors). These product characteristics may proxy for some underlying manufacturing costs or even consumer preferences. Since firms may create gender-specific product attributes—scents like “sweet petals” and “pure sport” or razor head types and colors to differentiate products between genders—we did not always keep every product attribute constant when comparing prices. The idiosyncratic error term is represented by ε. All of our regressions are weighted, with the proportion of units sold for a particular item in that year as the weight. 
This is because, for personal care products, there are large differences in units sold of various product types and brands, and therefore it is not useful to compare simple unweighted average prices. For example, for one company the highest selling men’s deodorant stick sold almost 12 million units in 2016, and the highest selling women’s deodorant stick sold over 8 million units. The average units sold for underarm deodorants as a whole was just over 300,000 units, and 1,000 products out of a total of almost 3,000 products had fewer than 100 units sold in 2016. The linear model we used has the usual shortcomings of being subject to specification bias to the extent the relationship between price and each of the independent variables is not linear. The model also does not include complete data on costs, such as advertising and packaging, or consumers’ willingness to pay, both of which have an effect on the price differences. The model may thus also be subject to omitted variable bias. In addition, the model may have some endogeneity issues to the extent the product characteristics themselves are influenced by consumers’ willingness to pay for some of those product features. To reduce the impact of any model misspecifications or heteroscedasticity, we used the robust (or Huber-White sandwich) estimator. We estimated the regression model above for each of the 10 products separately and for each of the two measures of price. We used Nielsen’s in-store, retail price scanner data, which include information on total volume sold and dollar sales for items purchased at 228 retailers including grocery stores, drug stores, mass merchandisers (such as Target), dollar stores, club stores (such as Sam’s Club), and convenience stores. The data capture 82 percent of all U.S. sales. Nielsen also projects sales for the remaining noncooperating retailers, and that information is included in this dataset.
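As a mechanical illustration of the units-sold weighting described above (a hedged sketch with hypothetical numbers, not GAO’s code or the Nielsen data): in a stripped-down version of the model containing only an intercept and the Male indicator, the units-weighted least squares estimate of β reduces to the difference between the units-weighted mean prices of men’s and women’s products.

```python
# Hypothetical illustration of the weighted Male coefficient: with only
# an intercept and the 0/1 Male dummy, weighted least squares yields
# beta = (weighted mean price, men's) - (weighted mean price, women's).
def weighted_mean(prices, units):
    """Units-weighted average price."""
    return sum(p * u for p, u in zip(prices, units)) / sum(units)

def male_coefficient(prices, units, male_flags):
    """Difference in units-weighted mean prices, men's minus women's."""
    men = [(p, u) for p, u, m in zip(prices, units, male_flags) if m == 1]
    women = [(p, u) for p, u, m in zip(prices, units, male_flags) if m == 0]
    return weighted_mean(*zip(*men)) - weighted_mean(*zip(*women))

# Two hypothetical men's items and two women's items with unequal sales.
beta = male_coefficient(prices=[3.00, 5.00, 4.00, 6.00],
                        units=[900_000, 100_000, 800_000, 200_000],
                        male_flags=[1, 1, 0, 0])
# A negative beta implies men's products sell for less per unit on average.
```

In the report’s actual model, the additional controls (Size, Owner, Promotion, and product characteristics) mean β is a conditional difference rather than this simple gap in weighted means.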
We excluded from our regression analysis some very small brands that did not have enough units sold, in order to avoid outliers. These brands usually had fewer than 50,000 units sold over the entire year, and for some products they represented less than 1 percent of all units sold. We found that average retail prices paid were significantly higher for women’s products than for men’s in 5 out of 10 personal care products. In 2 categories, men’s versions sold at a significantly higher price. One category had mixed results based on the two price measures analyzed, and two others showed no significant gender price differences. A summary of our regression results is presented in table 5. We manually collected prices for 16 pairs of selected personal care products from the websites of four online retailers that also operated physical store locations. We selected comparable pairs of similar men’s and women’s products that were differentiated by product attributes, such as scent or color, and were sold at most or all of the four retailers. The products were selected based on several comparability factors such as brand, product claims, and number of blades in a razor. We collected prices manually between 1:00 p.m. and 7:00 p.m. (ET) during two 1-week periods, one in January and one in March 2018. We collected listed prices and did not adjust the prices for any promotions that were available, such as online coupons or buy-one-get-one-free offers. Table 6 presents the results of our online price collection. These results have important limitations: The average prices shown are not generalizable to the broader universe of prices for these products sold at other times or by other online retailers. The data reflect prices advertised to consumers rather than the prices consumers actually paid. The data do not capture the volume of sales for each item for each retailer; in our analysis, we weighted all advertised prices equally across the retailers.
As a result, differences we found within these advertised prices may not have translated into comparable differences in prices female and male consumers paid for these products online. The prices do not reflect any promotional discounts, volume discounts, or other discounts that may have been available to some or all consumers. This report examines (1) how prices compared for selected categories of consumer goods that are differentiated for men and women, and potential reasons for any significant price differences; (2) what is known about the extent to which men and women may pay different prices in, or experience different levels of access to, markets for credit and goods and services that are not differentiated based on gender; (3) the extent to which federal agencies have identified and taken steps to address any concerns about gender-related price differences; and (4) state and local government efforts to address concerns about gender-related price differences. To compare prices for selected goods that are differentiated for men and women, we purchased and analyzed Nielsen Company (Nielsen) data on retail prices paid for 10 personal care product categories for calendar year 2016. The product categories included underarm deodorants, body deodorants (typically sold as a spray), disposable razors, nondisposable razors, razor blades, shaving creams, shaving gels, and three categories of fragrances. We selected these categories of personal care products because they are commonly purchased consumer goods that were categorized by gender in the Nielsen data. The women’s and men’s versions of personal care products we selected are generally more similar in terms of the form, size, and packaging in comparison to certain other consumer product categories that are also differentiated by gender, such as clothing. We used regression models to analyze data on retail prices paid for the 10 categories of personal care products differentiated for women and men. 
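To make the retail price analysis concrete, the following minimal sketch (hypothetical figures, not the Nielsen data) shows how pack-level totals translate into the two price measures defined in appendix I, using the 2-pack deodorant example from that appendix:

```python
# Hypothetical example, not Nielsen data: deriving the two price
# measures -- average item price and price per ounce -- from totals
# for a single UPC (a 2-pack of 2.7-oz deodorant sticks).
total_dollar_sales = 10.80   # all dollar sales of this UPC in the year
total_units_sold = 2         # packs sold
ounces_per_item = 2.7        # size of each stick
items_per_pack = 2           # sticks in the pack

item_price = total_dollar_sales / total_units_sold   # dollars per pack
total_quantity = ounces_per_item * items_per_pack    # 5.4 ounces total
price_per_ounce = item_price / total_quantity        # dollars per ounce
```

Because the two measures normalize prices differently, they can differ in both size and sign across product categories, which is why both are reported.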
To assess the reliability of the Nielsen data, we reviewed relevant documentation and conducted interviews with Nielsen representatives to review steps they took to collect and ensure the reliability of the data. In addition, we electronically tested data fields for missing values, outliers, and obvious errors. We determined that these data were sufficiently reliable for our purposes. For more details on the methodology for, and limitations of, our analysis of these retail price data, see appendix I. We also manually collected listed prices for 16 pairs of selected personal care products from four different retailer websites over two 7-day periods in January and March 2018. For each pair, we selected comparable men’s and women’s products that were differentiated by product attributes, such as scent or color, and were commonly sold across retailers. For more details on our online price data collection and the limitations associated with interpreting the results, see appendix II. To examine what is known about the extent to which men and women may be offered different prices or access for the same goods or services, we reviewed academic literature identified through a literature search covering the last 25 years. To identify existing studies from peer-reviewed journals, we conducted subject and keyword searches of various databases, such as EconLit, Scopus, ProQuest, and Social SciSearch. We also used a snowball search technique—meaning we reviewed relevant academic literature cited in our selected studies—to identify additional studies. We performed these searches and identified articles from December 2016 to April 2018. From these searches, we identified 21 studies that appeared in peer-reviewed journals or research institutions’ publications from 1995 through 2016 and were relevant to gender-related price differences for the same products.
We reviewed and assessed each study’s evaluation methodology based on generally accepted social science standards. See the bibliography at the end of this report for a list of the 21 studies. We then summarized the research findings. A GAO economist read and assessed each study, using the same data collection instrument. The assessment focused on information such as the types of disparities examined, the research design and data sources used, and methods of data analysis. The assessment also focused on the quality of the data used in the studies as reported by the researchers and any limitations of data sources for the purposes for which they were used. A second GAO economist reviewed each completed data collection instrument to verify the accuracy of the information included. Through this process, we determined that the 21 studies we selected for our review met our criteria for methodological quality. We found the studies we reviewed to be reliable for purposes of determining what is known about price differences for the same products. However, these studies have important limitations, such as using nonrepresentative data samples, and the results are not generalizable. To examine the federal role in overseeing gender-related price differences, we reviewed relevant federal statutes and agency guidance, and interviewed officials from the Federal Trade Commission (FTC), the Bureau of Consumer Financial Protection (BCFP), the Department of Housing and Urban Development (HUD), and the Department of Justice (DOJ). To help identify the extent of concerns about gender-related price differences, we interviewed representatives from eight consumer groups, three industry associations, and four academic experts. Additionally, we reviewed a sample of consumer complaints from databases managed by BCFP, FTC, and HUD (Consumer Complaint Database, Consumer Sentinel Network, and Enforcement Management System, respectively).
Complaints were submitted by consumers across the United States about various financial products, housing grievances, and other consumer protection concerns. To identify our universe of gender-related consumer complaints in BCFP and FTC databases, we used the following search terms that targeted sex or gender discrimination: discriminat, unfair, treat, decept, abus, female, woman, women, man, men, male, gender, and sex. HUD’s consumer complaint database is categorized by protected class (e.g., race, sex, national origin), so we did not need to use search terms to identify gender-related complaints. For the years 2012 through 2017, we identified 6,117 BCFP consumer complaint narratives; 10,472 FTC consumer complaint narratives; and 5,421 HUD consumer complaint narratives that were relevant to our scope. We then drew a stratified random probability sample of 100 gender-related consumer complaints from each database. To determine which complaints in our samples were about price differences related to gender or sex, two team members read through each complaint narrative and coded whether the narrative indicated that the complainant felt they paid or were charged more because of their gender or sex. A third team member conducted a final review of the results and made a final determination in cases where there were differences in the first two team members’ assessments. With this probability sample, each member of the study population had a nonzero probability of being included, and that probability could be computed for any member. We followed a probability procedure based on random selections, and our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (with a margin of error of 5.9 percent).
This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. We assessed the reliability of these data by reviewing documentation and interviewing agency officials about the databases used to collect these complaints. We determined that these data were sufficiently reliable for our purposes of identifying complaints of gender-related price differences. To explore state and local efforts to address concerns about gender-related price differences, we conducted a literature search and identified three state or local laws or ordinances that specifically address gender-related price differences: California; Miami-Dade County, Florida; and New York City, New York. We reviewed these laws and ordinances and interviewed officials from these jurisdictions to discuss motivations for, oversight of, and the impact of these laws. We conducted this performance audit from October 2016 to August 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. For each of 10 personal care product categories we analyzed, we compared the overall average prices for women’s products and men’s products using two measures of average price: average item price and average price per ounce or count. While the second price measure adjusts the average price for quantity of product, these comparisons did not take into account the effect on price of differences in product brand, packaging, and other characteristics.
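Because the second price measure adjusts for quantity, the two measures can tell different stories for the same products. A minimal sketch with hypothetical pack prices and razor counts (not the report’s data, and chosen only to show the effect, not to reproduce any reported percentages) illustrates how adjusting for quantity can reverse the direction of an apparent price gap:

```python
# Hypothetical pack-level prices and razor counts (not GAO's data):
# the women's pack is cheaper per pack but costlier per razor.
def pct_diff(women, men):
    """Percentage difference of the women's price relative to the men's."""
    return (women - men) / men * 100

men_pack_price, men_count = 6.00, 5      # $6.00 for 5 razors
women_pack_price, women_count = 5.40, 4  # $5.40 for 4 razors

item_gap = pct_diff(women_pack_price, men_pack_price)
per_count_gap = pct_diff(women_pack_price / women_count,
                         men_pack_price / men_count)
# item_gap is negative (women's pack costs less), while per_count_gap is
# positive (women pay more per razor).
```

Here the per-pack comparison favors the women’s product while the per-razor comparison favors the men’s, which is why quantity-adjusted prices are reported alongside item prices.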
As shown in table 7, adjusting the average item price to account for differences in product quantity (ounces or count) significantly affected the direction and magnitude of gender price differences for several product categories. This is because men’s products in the dataset were frequently larger in size or count compared with women’s products in the same category. For example, women’s disposable razors sold for 11 percent less than those targeted to men when we compared average item prices. However, when we compared average price per count of razors, women’s disposable razors sold for 19 percent more on average than men’s. This is because women’s disposable razors had on average about one fewer razor per package. In 5 out of 10 product categories, women’s versions of the product on average sold for a higher price per ounce or count than men’s, and these differences were statistically significant at the 95 percent confidence level for four products and at the 90 percent level for one product. Information about sales and relative sizes of different products targeted to men and women is presented in table 8 below. This appendix provides additional details about the consumer complaint processes at the Bureau of Consumer Financial Protection (BCFP), Federal Trade Commission (FTC), and Department of Housing and Urban Development (HUD). Consumers with a complaint about unfair treatment related to gender could submit a complaint to one of these agencies. BCFP and FTC monitor consumer complaints related to violations under the Equal Credit Opportunity Act, while HUD and the Department of Justice (DOJ) investigate housing discrimination complaints under the Fair Housing Act. These complaints could be about price differences because of gender. Alicia Puente Cackley, (202) 512-8678 or cackleya@gao.gov. 
In addition to the contact named above, John Fisher (Assistant Director), Jeff Harner (Analyst in Charge), Vida Awumey, Bethany Benitez, Namita Bhatia-Sabharwal, Kelsey Kreider, and Kelsey Sagawa made key contributions to this report. Also contributing to this report were Abigail Brown, Michael Hoffman, Jill Lacey, Oliver Richard, Tovah Rom, and Paul Schmidt. We reviewed literature to identify what is known about the extent to which female and male consumers may face different prices or access in markets for credit and goods and services that are not differentiated based on gender. This bibliography contains citations for the 20 studies and articles that we reviewed that compared prices or access for female and male consumers in markets where the product is not differentiated by gender (mortgages, small business credit, auto purchases, and auto repairs). Asiedu, Elizabeth, James A. Freeman, and Akwasi Nti-Addae. “Access to Credit by Small Businesses: How Relevant Are Race, Ethnicity, and Gender?” The American Economic Review, vol. 102, no. 3 (2012): 532-537. Ayres, Ian and Peter Siegelman. “Race and Gender Discrimination in Bargaining for a New Car.” The American Economic Review, vol. 85, no. 3 (1995): 304-321. Blanchard, Lloyd, Bo Zhao, and John Yinger. “Do lenders discriminate against minority and woman entrepreneurs?” Journal of Urban Economics, vol. 63 (2008): 467-497. Blanchflower, David G., Phillip B. Levine, and David J. Zimmerman. “Discrimination in the Small-Business Credit Market.” The Review of Economics and Statistics, vol. 85, no. 4 (2003): 930-943. Busse, Meghan R., Ayelet Israeli, and Florian Zettelmeyer. “Repairing the Damage: The Effect of Price Expectations on Auto Repair Price Quotes.” National Bureau of Economic Research, Working Paper 19154 (2013). Cavalluzzo, Ken S., Linda C. Cavalluzzo, and John D. Wolken. “Competition, Small Business Financing, and Discrimination: Evidence from a New Survey.” The Journal of Business, vol. 75, no. 
4 (2002): 641-679. Cheng, Ping, Zhenguo Lin, and Yingchun Liu. “Do Women Pay More for Mortgages?” The Journal of Real Estate Finance and Economics, vol. 43 (2011): 423-440. Cheng, Ping, Zhenguo Lin, and Yingchun Liu. “Racial Discrepancy in Mortgage Interest Rates.” The Journal of Real Estate Finance and Economics, vol. 51 (2015): 101-120. Cole, Rebel, and Tatyana Sokolyk. “Who Needs Credit and Who Gets Credit? Evidence from the Surveys of Small Business Finances.” Journal of Financial Stability, vol. 24 (2016): 40-60. Coleman, Susan. “Access to Debt Capital for Women- and Minority-Owned Small Firms: Does Educational Attainment Have an Impact?” Journal of Developmental Entrepreneurship, vol. 9, no. 2 (2004): 127-143. Duesterhaus, Megan, Liz Grauerholz, Rebecca Weichsel, and Nicholas A. Guittar. “The Cost of Doing Femininity: Gendered Disparities in Pricing of Personal Care Products and Services.” Gender Issues, vol. 28 (2011): 175-191. Goodman, Laurie, Jun Zhu, and Bing Bai. “Women Are Better than Men at Paying Their Mortgages.” Urban Institute, Research Report (2016). Haughwout, Andrew, et al. “Subprime Mortgage Pricing: The Impact of Race, Ethnicity, and Gender on the Cost of Borrowing.” Brookings-Wharton Papers on Urban Affairs (2009): 33-63. Mijid, Naranchimeg. “Gender differences in Type 1 credit rationing of small businesses in the US.” Cogent Economics & Finance, vol. 3 (2015). Mijid, Naranchimeg. “Why are female small business owners in the United States less likely to apply for bank loans than their male counterparts?” Journal of Small Business & Entrepreneurship, vol. 27, no. 2 (2015): 229-249. Mijid, Naranchimeg and Alexandra Bernasek. “Gender and the credit rationing of small businesses.” The Social Science Journal, vol. 50 (2013): 55-65. Morton, Fiona Scott, Florian Zettelmeyer, and Jorge Silva-Risso. 
“Consumer Information and Price Discrimination: Does the Internet Affect the Pricing of New Cars to Women and Minorities?” National Bureau of Economic Research, Working Paper 8668 (2001). O’Connor, Sally. “The Impact of Gender in the Mortgage Credit Market.” University of Wisconsin-Milwaukee Doctoral Dissertation (1996). Van Rensselaer, Kristy N., et al. “Mortgage Pricing and Gender: A Study of New Century Financial Corporation.” Academy of Accounting and Financial Studies Journal, vol. 18, no. 4 (2014): 95-110. Wyly, Elvin and C.S. Ponder. “Gender, age, and race in subprime America.” Housing Policy Debate, vol. 21, no. 4 (2011): 529-564. Zimmerman Treichel, Monica and Jonathan A. Scott. “Women-Owned Businesses and Access to Bank Credit: Evidence from Three Surveys Since 1987.” Venture Capital, vol. 8, no. 1 (2006): 51-67.
|
Gender-related price differences occur when consumers are charged different prices for the same or similar goods and services because of factors related to gender. While variation in costs and consumer demand may give rise to such price differences, some policymakers have raised concerns that gender bias may also be a factor. While the Equal Credit Opportunity Act and Fair Housing Act prohibit discrimination based on sex in credit and housing transactions, no federal law prohibits businesses from charging consumers different prices for the same or similar goods targeted to different genders. GAO was asked to review gender-related price differences for consumer goods and services sold in the United States. This report examines, among other things, (1) how prices compared for selected goods and services marketed to men and women, and potential reasons for any price differences; (2) what is known about price differences for men and women for products not differentiated by gender, such as mortgages; and (3) the extent to which federal agencies have identified and addressed any concerns about gender-related price differences. To examine these issues, GAO analyzed retail price data, reviewed relevant academic studies, analyzed federal consumer complaint data, and interviewed federal agency officials, industry experts, and academics. Firms differentiate many consumer products to appeal separately to men and women by slightly altering product attributes like color or scent. Products differentiated by gender may sell for different prices if men and women have different demands or willingness to pay for these product attributes. Of 10 personal care product categories (e.g., deodorants and shaving products) that GAO analyzed, average retail prices paid were significantly higher for women's products than for men's in 5 categories. In 2 categories—shaving gel and nondisposable razors—men's versions sold at a significantly higher price. 
One category—razor blades—had mixed results based on two price measures analyzed, and two others—disposable razors and mass-market perfumes—showed no significant gender price differences. GAO found that the target gender for a product is a significant factor contributing to price differences identified, but GAO did not have sufficient information to determine the extent to which these gender-related price differences were due to gender bias as opposed to other factors, such as different advertising costs. Though the analysis controlled for several observable product attributes, such as product size and packaging type, all underlying differences in costs and demand for products targeted to different genders could not be fully observed. Studies GAO reviewed found limited evidence of gender price differences for four products or services not differentiated by gender—mortgages, small business credit, auto purchases, and auto repairs. For example, with regard to mortgages, women as a group paid higher average mortgage rates than men, in part due to weaker credit characteristics, such as lower average income. However, after controlling for borrower credit characteristics and other factors, three studies did not find statistically significant differences in borrowing costs between men and women, while one found women paid higher rates for certain subprime loans. In addition, one study found that female borrowers defaulted less frequently than male borrowers with similar credit characteristics, and the study suggested that women may pay higher mortgage rates than men relative to their default risk. While these studies controlled for factors other than gender that could affect borrowing costs, several lacked important data on certain borrower risk characteristics, such as credit scores, which could affect analysis of gender disparities. 
Also, several studies analyzed small samples of subprime loans that were originated in 2005 or earlier, which limits the generalizability of the results. In their oversight of federal antidiscrimination statutes, the Bureau of Consumer Financial Protection, Federal Trade Commission, and Department of Housing and Urban Development have identified limited consumer concerns based on gender-related pricing differences. GAO's analysis of complaint data received by the three agencies from 2012–2017 found that they had received limited consumer complaints about gender-related price differences. The agencies provide general consumer education resources on discrimination and consumer awareness. However, given the limited consumer concern, they have not identified a need to incorporate additional materials specific to gender-related price differences into their existing consumer education resources.
|
The federal budget process provides the means for the President and Congress to make informed decisions among competing national needs and policies, to allocate resources among federal agencies, and to ensure laws are executed according to established priorities. OMB, as part of the Executive Office of the President, is to guide the annual budget process, make decisions on executive agencies’ budgets, aggregate agencies’ submissions, and submit the consolidated document for the executive branch as the President’s budget request to Congress. In support of the President’s budget request, departments are to submit budget justifications to the congressional appropriations committees, typically to explain the key changes between the current appropriation and the amounts requested for the next fiscal year. During the process, OMB is to ensure that budget requests are consistent with presidential objectives and issue guidance to federal agencies through OMB Circular A-11, which provides instructions for submitting budget data and materials, as well as for developing budget justifications. Various offices within ICE are involved in developing ICE’s annual budget request for immigration detention (see fig. 1). Two ICE entities integral to budget request formulation are the Office of Budget and Program Performance (OBPP) and Enforcement and Removal Operations (ERO). Within ICE’s Office of the Chief Financial Officer, OBPP is responsible for guiding ICE’s annual budget request process, including analyzing and validating budget projections for all of ICE’s directorates, including ERO. ERO is responsible for estimating the total amount of funding to cover costs of immigration detention. For the upcoming budget year, ERO determines the projected ADP, while OBPP determines the projected bed rate. ERO then utilizes the two variables of bed rate and ADP in its estimate of future detention costs. 
Other offices within ICE, such as Custody Management, Field Operations, Operations Support, Management and Administration, and the Office of Policy, are involved in the formulation of other aspects of ICE’s budget or in supervisory roles. Figure 1 is an organizational chart of ICE offices that are involved in the annual budget request for immigration detention resources. ICE follows budget formulation guidance from DHS and uses two key variables—the bed rate and ADP—when formulating its budget request. Approximately 20 months before the start of a particular fiscal year, the Secretary of Homeland Security provides Resource Planning Guidance to all DHS components. This document aligns the department’s planning, programming, budgeting, and execution activities over a five-year period, and sets forth the resource planning priorities of the department as they relate to its mission. The department’s planning priorities are to guide the DHS components as they develop their respective Resource Allocation Plans (RAP). After the Secretary issues the Resource Planning Guidance, DHS’s Office of the Chief Financial Officer provides fiscal guidance to ICE that identifies an estimated allocation amount, to which ICE is to budget in its RAP submission. In developing its RAP, each of ICE’s program offices determines its current budget needs and then submits Program Decision Options (PDO) to ICE leadership for any changes from the prior year’s budget. Every ICE program and activity submits, in the form of a PDO, any changes that are to occur, including all programmatic increases, initiatives, reductions, or eliminations. Once all of the program offices submit their PDOs to ICE leadership, a council of leadership representatives from across ICE convenes to approve and prioritize the selected PDOs moving forward to DHS. 
ICE submits its RAP to DHS for a final decision with all pertinent information attached, such as the prioritized PDOs based on mission and department needs, fiscal changes to programs, and potential capital investments. During the Resource Allocation Decision (RAD) process, DHS leadership reviews all of the RAP submissions from across the department and approves or rejects the PDOs. Individual program offices work out any changes that may have occurred during the RAD process prior to the completion of the budget request and submission to OMB. DHS then submits a budget proposal on behalf of the entire department, inclusive of ICE, to OMB. OMB is to prepare a budget request for all of the executive departments and agencies, which is submitted to Congress as the President’s budget. Following OMB decisions on agency budget requests, DHS submits a budget justification, inclusive of ICE, with more details to the congressional appropriations committees. Key steps in the overall process are shown in figure 2. When preparing the budget submission, ICE uses two key variables, the bed rate and ADP (see sidebar), to calculate a cost estimate for the resources needed for managing the immigration detention system. In order to determine the amount necessary to operate the detention system for adult detainees, ICE multiplies the projected ADP by the projected bed rate by the number of days in the year (see fig. 3). ICE then includes these costs as part of its Custody Operations account. ICE does not have a documented review process to ensure the accuracy of its budget calculations presented in its yearly congressional budget justifications (CBJ). Based on our review of CBJs from fiscal year 2014 to fiscal year 2018, there are a number of inconsistencies and errors in the numerical calculations pertaining to immigration detention costs. 
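The cost estimate described above reduces to a single multiplication (projected ADP × projected bed rate × days in the year), which also makes clear how sensitive the total is to small bed-rate errors. The inputs below are illustrative, not from an actual ICE budget request.

```python
def detention_cost_estimate(adp, bed_rate, days=365):
    """ICE's stated formula: projected ADP x projected bed rate x days in year."""
    return adp * bed_rate * days

# Illustrative inputs (not actual ICE figures)
print(detention_cost_estimate(34_000, 126.46))

# Sensitivity: a $5/day bed-rate error at an ADP of 34,000 moves the
# estimate by $62.05 million over a full year
print(detention_cost_estimate(34_000, 5))  # prints 62050000
```

The sensitivity calculation mirrors the report's own illustration that a $5-per-day bed-rate underestimate, at an ADP of 34,000, understates the detention budget request by more than $62 million.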
During our review of ICE’s fiscal year 2014 and fiscal year 2015 budget requests, we calculated the total amounts requested for ICE’s immigration detention costs using its formula (see fig. 3) and the ADP and bed rate figures provided in the budget request and compared them with ICE’s requested amounts. Based on our calculations, the amounts ICE requested are not consistent (by a difference of $34.7 million for fiscal year 2014 and $129 million for fiscal year 2015) with the figures used to develop their estimate. ICE officials acknowledged the error. Additionally, ICE’s fiscal year 2017 budget request erroneously applied $2 million in costs from detention beds to transportation and removal, resulting in a request for $2 million less for detention beds and $2 million more for transportation and removal, a total of $4 million in errors in the agency’s estimate. In response to the misapplication of $2 million, ICE officials stated that the CBJ still provided for the same net total because the two mistakes offset each other. Officials also stated that the final appropriation ultimately was not based on its budget request numbers and that ICE’s detention activities were funded at an amount greater than what the agency requested. The fiscal year 2018 request also contains a multiplication error that resulted in ICE requesting $4,000 less than the correct calculation would yield. ICE officials told us that there are multiple reviews of the budget documents prior to submission to ensure that the numbers presented are accurate and supportable. However, ICE could not provide us with any documentation that the reviews were conducted. ICE officials stated that reviews were typically completed using hard copies and then approval was verbal and not documented formally. Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives and respond to risks. 
Such activities include review processes to ensure the accuracy of budget calculations prior to official submission and appropriate documentation of the reviews. While the final appropriations that Congress determines for ICE may ultimately be higher or lower than what ICE requested, generating and presenting an accurate picture of ICE’s funding needs is necessary to provide Congress the information needed to make informed decisions. By developing and implementing a documented review process, it is more likely that relevant ICE officials are accountable for ensuring the accuracy of the budget requests and underlying calculations. Without a documented review process, ICE is not positioned to demonstrate the credibility of its budget requests. Furthermore, Congress may not have reliable information to make informed decisions about funding immigration detention needs.

Bed Rate
ICE’s bed rate is based on four cost categories.
Bed/guard costs: The contract costs of beds and guards at U.S. Immigration and Customs Enforcement’s (ICE) various detention facilities.
Health care: Medical expenses of the detainee population.
Other direct costs: All costs that directly concern detainees, including payments to detainees for work programs, provisions and supplies for detainees, and telecommunications billed to individual facilities.
Service-wide or indirect costs: Overhead expenses for ICE’s management of the detention system, including rent, security, office equipment, and liability insurance.

Although ICE bases its projected adult bed rate on historical costs, from fiscal year 2014 through fiscal year 2017, ICE underestimated the actual rate. ICE calculates the adult bed rate by tracking obligations and expenditures in four categories—bed/guard costs, health care, other direct costs, and service-wide costs, also known as indirect costs. (See sidebar for more information.) 
We found that ICE has improved its process for collecting this information from its financial management system since 2014, when we previously reported that limitations in its data system required ICE personnel to manually enter codes to categorize relevant data. In fiscal year 2014, ICE introduced a new financial coding process that allows staff to pull costs—the obligations and expenditures—directly from its financial management system. This system is an improvement over the manual workarounds that ICE previously used and allows staff to pull the necessary data more easily for the purposes of calculating the projected bed rate. To estimate what ICE’s projected adult bed rate will be two years into the future, ICE calculates and averages the year-over-year percentage change in costs since fiscal year 2009 and multiplies the current bed rate by this figure twice, following the formula outlined in figure 4. ICE calculates the year-over-year percentage change for each cost category—bed/guard costs, health care, other direct costs, and service- wide costs—and then applies the average of these changes to the current cost of the category. The final projected bed rate is the sum of the four cost categories. According to ICE, the average of the year-over-year percentage change serves as its inflation rate and more accurately reflects the annual escalation of its detention costs. Given that ICE must determine the projected bed rate almost two years into the future, ICE applies its inflation rate twice to the current costs. Although the formula outlined in figure 4 summarizes ICE’s adult bed rate methodology, ICE’s guidance notes that situations may occur in which it is advisable to adjust national bed rate projections to account for new trends or other changes. 
For example, in response to concerns from Congress about ICE’s application of indirect costs, and the opportunity to revise the fiscal year 2017 bed rate, ICE officials told us they changed some of the methodology for the projected 2017 and 2018 bed rates. Although ICE’s bed rate model is based on historical costs, from fiscal year 2014 through fiscal year 2017 ICE’s adult bed rate projections underestimated the actual bed rate. Specifically, ICE underestimated the bed rate by $2.16 in fiscal year 2014, by $8.08 in fiscal year 2015, by $5.42 in fiscal year 2016, and by $0.31 in fiscal year 2017 (see fig. 5). For illustrative purposes, underestimating the bed rate by $5 per day, assuming an ADP of 34,000, yields a more than $62 million underestimation in the detention budget request. The bed rate model assumes that operations in the immigration detention system will continue without drastic changes and that past trends will continue since it bases its projections on historical costs. According to ICE officials, the bed rate model cannot anticipate a need to increase the capacity of the entire system, or anticipate a policy decision to close or continue operation of a facility. Either of these situations may cause the bed rate to change. Although certain situations may lead to unanticipated changes in the bed rate, we identified a number of factors in ICE’s current bed rate model that have led to inaccuracies, including using incorrect inflation factors and mixing costs for family and adult facilities. ICE calculates the projected bed rate by using its own inflation rate based on the escalation of detention costs instead of a standard inflation rate provided by OMB or DHS, but did not provide documentation of its rationale. As described previously, ICE’s inflation factor is based on an average of the year-over-year changes in costs since fiscal year 2009. 
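ICE's inflation factor and its application can be sketched as follows. The bed/guard rates for fiscal years 2009 and 2010 come from table 1; the remaining year-over-year changes and the current rate are hypothetical.

```python
def pct_change(old, new):
    """Correct year-over-year percentage change: difference divided by the base."""
    return (new - old) / old * 100

# Bed/guard rate, FY2009 -> FY2010 (table 1): the correct change is 5.28
# percent, not the $4.09 dollar difference relabeled as "4.09%".
print(round(pct_change(77.50, 81.59), 2))  # prints 5.28

def project_rate(current_rate, yoy_pct_changes):
    """Average the year-over-year percentage changes and apply the average
    twice, since the budget is prepared about two years ahead."""
    avg = sum(yoy_pct_changes) / len(yoy_pct_changes) / 100
    return current_rate * (1 + avg) ** 2

# Hypothetical current rate and history of year-over-year changes
print(round(project_rate(100.00, [5.28, 2.10, 2.50]), 2))  # prints 106.7
```

This is a sketch of the method the report describes, per cost category; ICE's actual model sums the four projected category rates to produce the final projected bed rate.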
OMB guidance states that it will provide agencies with economic assumptions to be used for budget requests, including inflation rates, and that agencies can consider price changes, such as bed/guard costs, as a factor in developing estimates. ICE officials told us that historical costs more accurately reflect potential increases, but did not provide us with documentation to support that rationale. According to ICE officials, by accepting the inflation factor used in ICE’s budget request, OMB has given tacit, if not direct, approval for its usage. Based on our review of ICE’s adult bed rate projections, historical costs may not be the best method for predicting future costs and assumes that past trends will continue, including negative inflation rates. Because the bed rate model accounts for changes on a per person basis, negative inflation factors could be due to decreasing costs or an increasing detainee population, both of which may change in the following year. For example, ICE’s fiscal year 2018 bed rate model incorporates a negative inflation factor for health care costs even though in its budget justification ICE attributes part of the bed rate increase over the prior year to rising health care costs. Relying on historical costs may lead to inaccuracies if a deflationary trend does not continue as the model assumes. In our examination of the bed rate model, we also found that ICE did not calculate the percentage change correctly. Year-over-year percentage change compares the difference in costs in percentage terms and can be calculated by dividing the difference in costs by the starting costs. Instead of following this formula, ICE’s bed rate model calculated the actual monetary difference between the two years and represented it as a percentage change. For example, from fiscal year 2009 to fiscal year 2010, the bed/guard rate increased from $77.50 to $81.59. 
Whereas the percentage change in the rate is 5.28 percent, ICE calculated the percentage change by subtracting one rate from the other ($4.09) and adding a percent sign (4.09%), thereby treating the dollar difference as a percentage change. (See table 1.) ICE officials stated that they decided to use the actual monetary difference as a way to account for inflation for the fiscal year 2018 adult bed rate. However, using the actual monetary difference in costs does not provide a percentage change. It misrepresents a difference in price as a percentage. Further, we found that because ICE did not appropriately calculate the percentage change for each year, the average of year-over-year changes, which ICE uses as its inflation factor, is not correct. For example, ICE’s inflation factor for the bed/guard rate is 2.74 percent, while the appropriate calculation is 3.28 percent. (See table 1.) (See Appendix I for more information and calculations.) In addition, when calculating the fiscal year 2018 projected bed rate, rather than following formulas contained in the bed rate model, ICE manually entered a different inflation factor for two cost categories—other direct costs and service-wide costs—instead of relying on the historical data. ICE added together the inflation factors indicated by the model for other direct costs and service-wide costs and then applied the combined inflation factor to both categories. By combining and manually entering the factors, ICE mistakenly introduced an additional error. Officials did not provide an explanation or documentation of why they manually entered these numbers or combined the two inflation factors, except to state that it stemmed from the Congressional request to separate the costs. ICE’s adult bed rate model includes information for family facilities, even though family facilities are budgeted separately and in a different manner from adult facilities. 
For its adult facilities, ICE contracts with the individual facilities to provide beds, and the cost is dependent on the number of adults detained. ICE’s family detention facilities, however, are operated by local governments or private companies and are funded through fixed-price contracts that are not dependent on the number of people detained. (See sidebar for more information.) While ICE budgeted $291.4 million for its family facilities in fiscal year 2018, our analysis showed that ICE also included the population in its family facilities in the calculations of the adult bed rate. For example, in fiscal year 2018, ICE divided the obligations and expenditures for health care, other direct costs, and service-wide costs across the entire detainee population of adults and families, resulting in an adult bed rate that was lower than if the costs were divided by the adult population alone. Using this underestimated bed rate has resulted in a lower cost estimate than what ICE may need to sustain its adult population. Additionally, ICE double-counted some costs by budgeting for family facilities in both the adult bed rate and the total cost for family facilities. Specifically, we found that ICE included “other direct costs” associated with its family facilities when calculating its adult bed rate. Given that ICE already budgeted for these family facilities’ costs as a line item within its budget for family facilities, calculating the adult bed rate in this way double-counts the costs for family facilities in the budget. ICE officials did not provide documentation or their rationale for including the family facilities in their adult bed rate model. (See Appendix I for more information and calculations.) Standards for Internal Control in the Federal Government states that management should use quality information to achieve objectives, defining quality information as appropriate, current, complete, accessible, and provided on a timely basis. 
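The dilution effect described above can be illustrated with a toy per-capita calculation; the annual cost and ADP figures below are hypothetical, not ICE's actuals.

```python
def per_capita_daily_rate(annual_costs, adult_adp, family_adp=0):
    """Daily per-detainee rate; including family ADP in the divisor
    dilutes the adult rate (hypothetical inputs, not ICE figures)."""
    return annual_costs / ((adult_adp + family_adp) * 365)

annual_health_costs = 250_000_000
adult_only = per_capita_daily_rate(annual_health_costs, 34_000)
diluted = per_capita_daily_rate(annual_health_costs, 34_000, family_adp=2_500)
print(round(adult_only, 2), round(diluted, 2))  # prints 20.15 18.77
```

Dividing a fixed pool of costs by a larger combined population necessarily lowers the per-adult rate, which is the mechanism behind the underestimate the report describes.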
Quality information is based on relevant data from reliable sources and is relatively free from error. According to GAO’s Cost Estimating and Assessment Guide, having a realistic estimate of projected costs facilitates effective resource allocation. Information requirements should consider the expectations of external users; by basing its detention cost estimates on quality information, ICE would help ensure those estimates are useful to Congress for making resource allocation decisions. Additionally, GAO’s cost estimating guide states that applying correct inflation rates is an important step to ensure accurate cost estimates and that inflation assumptions should be well documented. According to ICE officials, ICE’s most substantial change to the bed rate model since its creation in 2009 was a revision in 2014 to account for the costs of family facilities. In our review, we found that ICE includes information for family facilities in the adult bed rate model. By reviewing its bed rate model and methodology and correcting identified inaccuracies and other potential issues, ICE could improve its adult bed rate projections and better ensure its funding requests are credible and reliable. To calculate its budget needs, ICE reported using ADP figures that are based on policy decisions, but it is unclear if the ADP figures were based on statistical analysis. Further, ICE did not provide documentation on how it calculated the final ADP numbers used in its budget requests. For example, the fiscal year 2018 budget justification includes a projected ADP of 48,879 adults, a 63 percent increase over the fiscal year 2017 projected adult ADP (29,953) and a 49 percent increase over the fiscal year 2016 actual adult ADP (32,770). 
Although ICE provided a general explanation of various factors that influence ADP, including policy changes such as executive orders regarding immigration enforcement, the agency did not provide documentation quantifying the effect of these factors nor the calculations or methodology used to arrive at the 48,879 figure. In the absence of documentation, we reviewed ICE’s CBJs from fiscal year 2014 through fiscal year 2018 and we could not identify a clear methodology that ICE used across the years for developing the ADP and using it to calculate its detention-related budget needs. For example, in the fiscal year 2018 CBJ, ICE did not independently determine the projected ADP for use as an input into its cost estimate. Rather, officials started with the prior year’s funding level for detention costs, which officials told us they were directed to do by OMB, and calculated the ADP it could house with that amount. In the fiscal year 2017 budget justification, ICE used its projected ADP numbers from the previous year as starting points to calculate changes in its budget request. Additionally, while the appropriations act for fiscal year 2014 included a proviso that ICE’s funding support at least 34,000 detention beds during the fiscal year, ICE included a lower number of detention beds (30,539) in its 2015 budget request. According to ICE officials, the ADP figures used in its budget requests are initially projected by ERO, but may be changed by ICE leadership, DHS leadership, or OMB. Officials said the final ADP figure is based on policy decisions that account for factors that could affect the detainee population—for example, delays in immigration courts or the number of asylum officers on staff. According to officials, ICE prepares the budget request two years in advance of the year of execution with the best knowledge they have available at that time, including ADP projections. 
Officials stated that ADP is difficult to estimate given the unpredictable nature of events such as natural disasters, gang activity, or political upheaval in another part of the world, which may lead to an unanticipated increase in migration. Additionally, officials told us that various policy developments across the administration, DHS, or other agencies may affect immigration trends or enforcement. ICE officials also stated that because immigration detention facilities may receive detainees from other parts of the immigration system, ADP can be affected by actions taken by other actors involved in immigration enforcement, such as the Executive Office for Immigration Review, U.S. Customs and Border Protection, and U.S. Citizenship and Immigration Services. Such events could include, for example, delays in immigration court cases or an increase in the number of asylum cases, which could increase ADP. When asked to provide documentation for the fiscal year 2018 ADP projection of 51,379, ICE provided us a document containing tables and justification that explained the factors that impact ADP, but did not provide us the calculations or methodology used to arrive at the projected ADP. While the ADP used in its budget requests may be developed based on policy decisions, documenting the calculations and rationale by which the figure was developed would help to demonstrate how the number was determined and that it was based on sound decisions. Although ICE officials stated that ADP is difficult to forecast, the agency has developed a statistical model that may help predict the ADP. ERO’s Law Enforcement Systems and Analysis (LESA) Office has developed a statistical model that uses population data directly pulled from ICE’s Enforcement Information Database to forecast the ADP in upcoming years. (See sidebar for more information.) 
ERO began using the model in 2014, and according to officials, ICE currently uses it to estimate how much funding the agency will need for detention costs for the remainder of the fiscal year. The model describes historical trends, seasonal fluctuations, and random movement in the ADP, and then uses these historical patterns to make forecasts. Based on our evaluation, we found that this type of model was a reasonable method to forecast ADP, and that LESA’s particular modeling choices were generally consistent with accepted statistical practices and appropriate for the data and application. Using LESA’s model, ICE can produce a range of ADP forecasts under different scenarios, as well as confidence intervals for any particular forecast. Confidence intervals indicate the level of certainty around the model’s forecast: for a given forecast, a narrower ADP range carries less certainty that the actual ADP will fall within it, and certainty also decreases when forecasting for later time periods. Because the model relies on historical data in making ADP forecasts, LESA is able to incorporate separate analysis of external or unexpected events to help inform the effects of similar events on ADP in the future. For example, according to ICE officials, LESA can conduct ad hoc analysis outside of the model of how potential policy decisions, such as a change in the number of field officers, may affect future ADP, if a similar event occurred in the past. Although new policies, processes, or political or economic events may cause the dynamics of ICE’s detainee population to change in ways that historical data would not predict, incorporating this type of model into ICE’s process to project ADP could potentially help provide useful and accurate forecasts in instances where ICE does have relevant historical data. 
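LESA’s model itself is not reproduced in this report, but the class of model described (historical trend, seasonal fluctuation, and random movement, with confidence intervals that widen at longer horizons) can be sketched minimally. Everything below, including the monthly data, the linear trend form, and the square-root interval growth rule, is an illustrative assumption rather than LESA’s actual methodology:

```python
import math
import statistics

# Hypothetical monthly ADP history -- illustrative numbers, not ICE data.
history = [31000, 31400, 31900, 32300, 32100, 31800,
           31500, 31700, 32200, 32800, 33100, 33400,
           33200, 33600, 34100, 34500, 34300, 34000,
           33700, 33900, 34400, 35000, 35300, 35600]
n = len(history)
xs = list(range(n))

# Fit a linear trend by least squares as a stand-in for "historical trends"
# (LESA's actual model is more sophisticated and also handles seasonality).
xbar = statistics.mean(xs)
ybar = statistics.mean(history)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, history)) / sum(
    (x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

# Residual spread drives the width of the confidence interval.
resid_sd = statistics.stdev(
    y - (intercept + slope * x) for x, y in zip(xs, history))

def forecast(h, z=1.96):
    """Point forecast and a ~95% interval h months ahead; the interval
    widens with the horizon (an assumed square-root growth rule)."""
    point = intercept + slope * (n - 1 + h)
    half = z * resid_sd * math.sqrt(h)
    return point, point - half, point + half

p1, lo1, hi1 = forecast(1)
p12, lo12, hi12 = forecast(12)
# The 12-month-ahead interval is wider: less confidence further out.
assert hi12 - lo12 > hi1 - lo1
```

The widening interval mirrors the report’s point that certainty decreases when forecasting for later time periods.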
ICE officials stated that ICE has used the LESA model in the past to inform the budget during the year of execution, but has only recently used it to provide confidence intervals for the ADP inputs into the budget projections when revising the projected fiscal year 2017 bed rate. According to GAO’s Cost Estimating and Assessment Guide, having a realistic estimate of projected costs facilitates effective resource allocation. In addition, federal standards for internal control state that management should design control activities to achieve objectives, and as part of those control activities, management should clearly document significant events in a manner that allows the documentation to be readily available for examination. Without documenting the methodology or rationale behind the ADP numbers ICE uses to develop its budget request for immigration detention, Congress and other stakeholders do not have clear visibility into the number upon which ICE is basing its budget request. Additionally, by considering how or whether the LESA model could be incorporated into ICE’s process for projecting ADP, ICE could leverage an existing model and identify potential improvements in the accuracy of its ADP projections based on historical data. ICE’s cost estimate for immigration detention resources does not fully meet best practices outlined in GAO’s Cost Estimating and Assessment Guide. As described earlier, the characteristics of a reliable cost estimate are comprehensive, well documented, accurate, and credible. As noted in table 2, ICE’s cost estimate for fiscal year 2018 substantially met the comprehensive characteristic, partially met the well documented and accurate characteristics, and minimally met the credible characteristic. By not sufficiently meeting the best practices in all of the characteristics, the cost estimate for immigration detention cannot be considered reliable. 
Based on our analysis, ICE substantially met the comprehensive characteristic by including all costs, but double-counted certain costs, as described earlier, and did not clearly document all ground rules and assumptions. ICE’s cost estimate appears to include all government and contractor labor costs as well as material, equipment, facilities, and services to fund immigration detention, accounting for both the salary and expenses categories of the budget. ICE also adheres to DHS’s Common Appropriations Structure, and follows the OMB Object Class structure for planning and tracking costs at a more granular level. Officials stated that they use past execution reports, historical data, and spend plans to help inform the necessary distribution of funding for immigration detention by project and object code. While ICE accounted for all costs, ICE did not directly address how the agency prevents omissions or double-counting in its cost estimate, and double-counted costs by including other direct costs for family facilities when estimating the cost to house adult detainees. Additionally, ICE did not identify ground rules and assumptions influencing the estimate. Officials said that several documents list ground rules and assumptions; however, the ground rules cited are very broad or have not been followed. For example, ICE guidance states that ICE shall fund sufficient detention beds to support current enforcement and removal priorities and mandatory detention requirements, but it does not provide a basis for determining a sufficient number of detention beds. Another important factor in determining the bed/guard rate for adult beds is tier utilization. Tier utilization refers to the use of bed space in detention centers. For example, at a given detention center, ICE may pay a lower rate if it houses more detainees. 
When determining the bed rate based on tier utilization, ICE did not provide documentation of the ground rules or assumptions behind the tier utilization percentage used to calculate the fiscal year 2018 bed rate. Finally, as noted earlier in this report, ICE has not documented its rationale for not following DHS or OMB guidance for applying inflation rates to the estimate. According to GAO’s guide, given that cost estimates are based on limited information, defining ground rules and assumptions is important because they help identify the risks associated with these assumptions, including how changes in the assumptions could influence cost. Without clear documentation and rationale behind ground rules and assumptions, it will not be possible to reconstruct the estimate once the budget staff and information used to develop it are no longer available. Based on our analysis, ICE partially met the well documented characteristic by showing that its cost estimate had been reviewed by management and providing documentation that described its methodology in general. However, ICE did not show the formulas used to develop the cost estimate in sufficient detail to enable an outside party to fully follow its calculations or to re-create the fiscal year 2018 bed rate. Although the agency provided the bed rate model and showed what numbers were used as inputs into the model to project the fiscal year 2018 bed rate, it did not provide documentation that described the formulas used to calculate the projected bed rate. During our review of the bed rate model, we had to reconstruct the calculations step-by-step to identify the formulas and variables used to create the fiscal year 2018 bed rate. Additionally, ICE officials provided conflicting explanations regarding how they applied inflation to develop the projected fiscal year 2018 adult bed rate. 
In one instance, ICE officials said that they applied a 2.66 percent inflation factor to develop the fiscal year 2017 adult bed rate and then calculated and applied a cost adjustment to add more than 8,800 new beds to produce the fiscal year 2018 bed rate. In another instance, ICE officials stated that the inflation factor was adjusted to 3.73 percent overall to develop the fiscal year 2017 bed rate and then they applied the cost adjustment to develop the fiscal year 2018 projected bed rate. These two explanations also differ from how the bed rate model applies inflation as described earlier in this report. ICE also did not document how the cost adjustment was calculated or the actual costs that the adjustment is based upon. When asked about documentation, ICE officials stated that the budget justification was not the appropriate document to cite detailed methodologies, but did not provide any additional supporting documentation. Documentation is essential for validating a cost estimate, including demonstrating that it is a reliable estimate of future costs. Consistent with GAO’s guide, without a well documented cost estimate, ICE is not positioned to demonstrate the estimate’s validity or answer questions about its basis. According to GAO’s Cost Estimating and Assessment Guide, estimates that lack sufficient documentation are not useful for updates or information sharing and can hinder understanding and proper use. Based on our analysis, ICE partially met the accurate characteristic by basing the cost estimate on historical cost data and tracking the differences between the projected and actual bed rate and ADP. ICE officials stated that they utilized historical cost data for bed/guard contract costs, health care costs, overhead expenses, detainee wages and supplies, and detainee headcount and capacity utilization, among other categories, to estimate detention costs. 
However, ICE did not provide evidence that it analyzes the reasons behind the variances between the cost estimate and actual numbers for each year, and as mentioned previously, we identified issues with the inflation rates used to project the bed rate and the inclusion of family facilities in the adult bed rate. While ICE tracks differences between the projected bed rate used in the cost estimate and the actual numbers for each fiscal year, officials did not provide evidence that they analyze the reasons for these variances nor that they use this information to reassess their assumptions or models and improve them. ICE officials said that variances between the projected and actual bed rates are documented in a quarterly report that is publicly available. While these reports track the bed rate in the execution year, they do not demonstrate that ICE tracks explanations for variances between that bed rate and the original cost estimate figures presented in the budget request. ICE provided a document that showed the bed rate projection and the year-end result for fiscal years 2013 through 2016 and quarter-end results for fiscal year 2017, but the document did not explain most of the differences between the projected and actual numbers. ICE officials also said that they conduct ad hoc analyses to identify and communicate sources of variance, but did not provide any related documentation. Without a comparison and analysis of the reasons behind the differences between the actual figures and the original estimates, ICE is not positioned to assess the quality of its projections and use that information to improve cost estimates. Tracking the forecast rate against the actual rate and tracking budget justification assumptions against actual conditions could offer insight into the quality of the forecasts, according to GAO’s cost estimating guide. 
Based on our analysis, ICE minimally met the credible characteristic, and in particular did not conduct sensitivity or risk and uncertainty analyses to capture the cumulative effects if variables change. ICE also did not conduct any cross checks on the major cost elements using alternate methods to estimate cost. A sensitivity analysis reveals how a change in a single assumption, or variable, affects the cost estimate. A risk and uncertainty analysis would provide ICE a clear level of confidence about the estimate. ICE did not conduct a risk and uncertainty analysis for either the fiscal year 2018 cost estimate or the fiscal year 2018 bed rate model. Additionally, ICE’s description of the LESA model to project ADP discussed forecast confidence levels, but ICE did not quantify the uncertainty around the ADP projection of 51,379 detainees used in the fiscal year 2018 budget justification. ICE also did not discuss the range of potential costs due to uncertainty in the ADP and bed rate projections. Having a range of costs around a point estimate is useful to decision makers because it conveys the level of confidence in achieving the most likely cost. Additionally, ICE did not provide any documentation showing that major cost elements were cross checked using a different method for calculating the cost estimate to see if results were similar. According to GAO’s cost estimating guide, one way to reinforce the credibility of the cost estimate is to determine whether applying a different method produces similar results. If so, then confidence in the estimate increases, leading to greater credibility. ICE officials stated that internal and external auditors vetted the bed rate model and determined it to be credible, but this does not constitute an estimate cross check and using an alternate cost estimating method to cross check its estimate would provide greater assurance of its credibility. 
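A risk and uncertainty analysis of the kind the guide calls for can be sketched with a simple Monte Carlo simulation over the two key inputs. The distributions below are illustrative assumptions (the bed rate mean and both standard deviations are hypothetical; only the projected ADP of 48,879 comes from the report):

```python
import random

random.seed(0)  # deterministic for illustration

# Point estimate inputs: ADP from the report; the bed rate mean and both
# standard deviations are hypothetical stand-ins.
BED_RATE_MEAN, BED_RATE_SD = 126.00, 5.00   # dollars per day (assumed)
ADP_MEAN, ADP_SD = 48_879, 2_500            # ADP from the CBJ; sd assumed

def simulate_annual_cost():
    """One draw of Cost = bed rate x ADP x 365 days."""
    bed_rate = random.gauss(BED_RATE_MEAN, BED_RATE_SD)
    adp = random.gauss(ADP_MEAN, ADP_SD)
    return bed_rate * adp * 365

draws = sorted(simulate_annual_cost() for _ in range(10_000))
point = BED_RATE_MEAN * ADP_MEAN * 365

# A ~90 percent range around the point estimate conveys how confident
# decision makers can be in the most likely cost.
low, high = draws[500], draws[9_500]
assert low < point < high
```

Reporting the resulting range alongside the point estimate is what gives decision makers a stated level of confidence in the most likely cost.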
As noted previously, we found ICE’s bed rate model underestimated the actual bed rates over several years. Unless all characteristics are met or substantially met, the cost estimate cannot be considered reliable. Additionally, a poor cost estimate can negatively affect a program by eventually requiring a transfer or reprogramming of funds. In recent years, ICE has consistently transferred and reprogrammed millions of dollars of funds to account for budgeting too little or too much for immigration detention costs. By improving the budget estimation to better reflect cost estimating best practices, ICE could ensure a more reliable budget request. As an agency, ICE operates the immigration detention system on a budget of nearly $3 billion. Although estimating immigration detention costs may be difficult, taking steps to improve ICE’s cost estimating and budget request processes could help provide Congress with a more accurate picture of ICE’s funding needs. Developing and implementing a documented review process for its annual budget request calculations could help ICE better ensure that its budget requests are consistently credible and reliable. Additionally, assessing its bed rate model and addressing the identified inaccuracies in its methodology could help ICE more accurately project the bed rate in upcoming years. As we noted, a difference of just five dollars in the bed rate amounts to a difference of tens of millions of dollars in the final budget calculation. Documenting the methodology or rationale behind the ADP projections would better position ICE to support the basis for its budget requests each year, and incorporating the use of a statistical model may help decision makers by providing more information about the numbers that ICE presents. Furthermore, taking steps to ensure that ICE fully addresses cost estimating best practices could ensure a more reliable overall estimate. 
We are making the following five recommendations to ICE: The Director of ICE should take steps to document and implement its review process to ensure accuracy in its budget documents. The Director of ICE should take steps to assess ICE’s adult bed rate methodology to determine the most appropriate way to project the adult bed rate, including any inflation rates used. The Director of ICE should take steps to update ICE’s adult bed rate methodology by incorporating necessary changes based on its assessment, and ensure the use of appropriate inflation rates and the removal of family beds from all calculations. The Director of ICE should take steps to determine the most appropriate way to project the ADP for use in the congressional budget justification and document the methodology and rationale behind its ADP projection. As part of that determination, ICE should consider the extent to which a statistical model could be used to accurately forecast ADP. The Director of ICE should take steps to ensure that ICE’s budget estimating process more fully addresses cost estimating best practices. We provided a draft of this report to DHS for the department’s review and comment. DHS provided written comments, which are noted below and reproduced in full in appendix II, and technical comments, which we incorporated as appropriate. DHS concurred with our recommendations and described actions underway or the actions it plans to take in response. To our first recommendation, DHS stated that ICE recently implemented a more stringent process for the fiscal year 2020 budget cycle, and will work to more effectively document its review process and decisions during the budget formulation process. To our second recommendation, DHS stated that ICE has completed multiple third-party assessments of its bed rate methodology. We will evaluate any assessments provided and determine the extent to which those assessments meet the intent of the recommendation. 
To our third recommendation, DHS stated that ICE will provide GAO with documentation demonstrating updates to the adult bed rate methodology, including the use of an appropriate inflation rate and removal of family beds from calculation. We will evaluate any documentation provided and determine the extent to which ICE’s actions meet the intent of the recommendation. To our fourth recommendation, DHS stated that ICE ERO developed a statistical modeling capability and provided that documentation and methodology to GAO. As previously noted in this report, we found that this type of model was a reasonable method to forecast ADP, and the particular modeling choices were generally consistent with accepted statistical practices and appropriate for the data and application. DHS began leveraging the model for its fiscal year 2019 budget cycle, and it will be important to see how the model is used in future budget justifications. To our fifth recommendation, DHS stated that ICE will implement the best practices for cost estimating to the degree that it is possible, specifically performing sensitivity and cost risk and uncertainty analyses to strengthen the credibility of its estimates. Implementing the best practices should help position ICE to produce a more reliable cost estimate. If implemented effectively, these actions should address the intent of our recommendations. We are sending copies of this report to the appropriate congressional committees and the Secretary of the Department of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or GamblerR@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. U.S. 
Immigration and Customs Enforcement (ICE) calculated a bed rate for fiscal year 2018 using a bed rate model built in Excel with data from its Federal Financial Management System and Enforcement Information Database. To project the fiscal year 2018 bed rate, ICE officials told us they used a different inflation factor from the ones set forth in guidance from the Office of Management and Budget (OMB) or the Department of Homeland Security (DHS). Specifically, ICE used an inflation factor based on the historical service costs. ICE did not provide a documented rationale for not using OMB’s inflation rate, written descriptions of the calculations within the bed rate model, or detailed ground rules and assumptions for the bed rate model. In examining the adult bed rate model used by ICE to project the fiscal year 2018 bed rate, we identified a number of inaccuracies and errors in the formulas used. Specifically: Instead of using the average of the percentage change in year-over-year costs, ICE used the average of the actual monetary difference in year-over-year costs and then applied that figure as a percentage; ICE added the inflation factors for two cost categories and then applied the combined rate to each category, which led to additional negative inflation; and ICE included information for family facilities, which were already budgeted as fixed-price contracts, in the calculation of the adult bed rate. ICE calculates a projected bed rate for two years into the future based on actual obligations and expenditures for four cost categories—bed/guard costs, health care, other direct costs, and service-wide or indirect costs. Table 3 shows ICE’s historical costs since fiscal year 2009 for these categories. Table 4 shows ICE’s calculations to determine the projected fiscal year 2018 bed rate. 
To calculate the projected fiscal year 2018 bed rate, ICE applied its inflation factors twice to the fiscal year 2016 costs and then added a cost adjustment to account for the cost of adding new beds. ICE notes that the initial projected rate is for fiscal year 2017; however, this figure follows the formula that ICE would use to determine the fiscal year 2018 bed rate. With the change in administration during fiscal year 2017, ICE had the opportunity to revise its projected bed rate. ICE officials told us that they applied their inflation factors to fiscal year 2016 costs once to project the bed rate one year into the future and then applied their inflation factors a second time in order to account for an operational adjustment, which they estimated to be approximately 3 percent. ICE officials did not provide us with documentation of their calculations or analysis showing that compounding the inflation factors over two years was equivalent to one year’s inflation plus an operational adjustment. In addition, because the inflation factors used in the bed rate model are based on historical costs, any operational costs should already have been accounted for in the model itself.

Using Actual Monetary Difference in Costs Instead of Percentage Change

ICE’s bed rate model is designed to use the average of year-over-year percentage change as its inflation rate. However, for the revised fiscal year 2017 and the projected fiscal year 2018 bed rates, ICE did not calculate the inflation rate based on year-over-year percentage changes, but based it on the actual monetary difference in yearly costs. ICE officials told us that in response to Congress’s concerns about service-wide costs, ICE began separating service-wide costs from other direct costs in fiscal year 2017. Previously, the two cost categories had been combined as an “other costs, miscellaneous” cost category. 
ICE officials told us that when other direct costs were separated from service-wide costs, they discovered that the average of year-over-year percentage changes showed a large decrease (negative 20 percent) for other direct costs which was not reflected in a separate analysis conducted by ICE. Therefore, officials decided to use the average of the actual monetary difference in year-over-year costs instead. ICE officials did not provide documentation of this separate analysis. According to ICE officials, for consistency they decided to use the average of the actual monetary difference in year-over-year costs for all of the cost categories including bed/guard, health care, and service-wide costs. The bed rate model then applied these figures as inflation factors. Table 5 shows the results from ICE’s calculation of yearly cost changes as percentages. In this table, ICE uses the formula of (Year 2 - Year 1)/100 and displays it as a percentage. For example, as noted in table 3, the fiscal year 2010 bed/guard rate was $81.59 and the fiscal year 2009 rate was $77.50. ICE calculated the change in the bed/guard rate for fiscal year 2010 as $81.59 - $77.50 = $4.09, and then replaced the dollar sign with a percent sign, thereby treating the dollar difference as a percentage change. Table 6 shows the results if the year-over-year change were calculated by comparing the actual percentage difference in costs. In this table, we use the formula of (Year 2 - Year 1) / Year 1 and display it as a percentage. For example, for fiscal year 2010, the percentage change in the bed/guard rate is 5.28 percent (or ($81.59 - $77.50) / $77.50), not 4.09 percent as calculated by ICE. Because of how ICE presented the percentage change for each year, the average of year-over-year changes, which ICE uses as its inflation factors, is not correct. For example, ICE’s inflation factor for the bed/guard rate is 2.74 percent (see table 5), while the appropriate calculation is 3.28 percent (see table 6). 
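The two formulas diverge even in the single year worked above; reproducing it with the report’s bed/guard figures:

```python
# Bed/guard rates from the report: FY2009 = $77.50, FY2010 = $81.59.
fy2009, fy2010 = 77.50, 81.59

# ICE's approach: take the dollar difference and relabel it as a percent.
ice_change = fy2010 - fy2009                   # $4.09, presented as "4.09%"

# Standard percentage change: divide the difference by the base year.
pct_change = (fy2010 - fy2009) / fy2009 * 100  # 5.28 percent

print(round(ice_change, 2), round(pct_change, 2))
```

Because the dollar rates exceed $1, relabeling dollar differences as percentages systematically understates the year-over-year change, which is consistent with the bed rate underestimates described in this report.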
Applying Combined Inflation Factor Twice

In developing its fiscal year 2018 projected adult bed rate, ICE combined the inflation factors for two cost categories—other direct costs and service-wide costs—and applied the combined rate to each category. By using this combined rate, the bed rate model applies an additional -0.54 percent factor to the categories, which it otherwise would not have done if ICE applied the individual inflation factors for the categories. As noted in table 7, ICE’s year-over-year average change for other direct costs was -1.33 percent when ICE calculated it individually for the category, and was 0.78 percent for service-wide costs. Instead of applying these inflation factors (-1.33 and 0.78 percent) to the fiscal year 2016 costs for these categories, ICE added the two inflation factors for a total of -0.54 percent, based on the following calculation: -1.3267 + 0.7833 = -0.5433. ICE then applied this combined inflation factor to both categories (see table 4). Officials did not provide us with a rationale or documentation for why they manually entered these numbers or combined the two rates, except to say that the approach stemmed from the congressional request to separate the costs. By applying the combined inflation factor to both categories, ICE mistakenly introduced an additional error for these two cost categories.

Counting Families in the Adult Bed Rate

ICE’s bed rate model divides the obligations and expenditures for health care, other direct costs, and service-wide costs by the entire detainee population of adults and families, resulting in an adult bed rate that is lower than if the costs were divided by the adult population alone. ICE’s bed rate model is used to calculate a bed rate to estimate detention costs for the adult population. 
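A short sketch shows how applying the combined rate to both categories differs from applying each category’s own rate. The inflation factors are the report’s; the base costs are hypothetical daily per-person amounts chosen for illustration:

```python
# Inflation factors (percent) from the report; base costs are hypothetical.
other_direct_rate = -1.3267
service_wide_rate = 0.7833
combined_rate = other_direct_rate + service_wide_rate   # about -0.54

other_direct_cost = 2.00   # hypothetical daily per-person cost
service_wide_cost = 3.00   # hypothetical daily per-person cost

def apply_rate(cost, pct):
    """Apply a percentage inflation factor to a cost."""
    return cost * (1 + pct / 100)

# Applying each category's own factor:
correct_total = (apply_rate(other_direct_cost, other_direct_rate)
                 + apply_rate(service_wide_cost, service_wide_rate))

# ICE's approach: the combined factor applied to both categories,
# which changes each category's result.
ice_total = (apply_rate(other_direct_cost, combined_rate)
             + apply_rate(service_wide_cost, combined_rate))

print(round(correct_total, 4), round(ice_total, 4))
```

With these inputs the combined-rate approach produces a lower total, matching the “additional negative inflation” the report describes.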
Family facilities operate on firm fixed-price contracts and all cost categories for the family facilities—bed/guard costs, health care costs, other direct costs, and service-wide costs—are budgeted for separately from costs for adult detention in ICE’s budget request. By dividing adult bed costs across its entire detainee population, ICE may be underestimating the total detention costs. To calculate the daily per person cost of health care, other direct costs, and service-wide or indirect costs, the bed rate model divides the total obligations and expenditures for each category by the number of mandays. Table 8 shows ICE’s calculations using the formula: Obligations and Expenditures / Mandays for Adults and Families = Daily Per Person Rate. By spreading these costs across the entire population, the bed rate model derives a lower daily per person cost than by considering only the adult detainee population. For example, ICE calculated the daily per person cost of health care in fiscal year 2016 as: $148,186,091 / 9,096,014 = $16.29. Table 9 shows what the daily per person cost of health care would be if the family population were removed from the calculation. Specifically, the daily per person health care cost would be $148,186,091 / 8,696,453 = $17.04. The result of a $0.75 underestimate in health care costs is an overall underestimation of approximately $13.4 million for the fiscal year 2018 immigration detention system cost estimate based on the calculation: $0.75 x 48,879 x 365 = $13,380,626.

Including Family Facilities in Cost Data

In addition to spreading total costs across the entire population, rather than just the adult population, ICE’s bed rate model includes obligations and expenditures for family facilities. In examining ICE’s data for other direct costs, we found that data from the three family facilities (Berks, Karnes, and South Texas) were included in the facility cost data. These three facilities’ other direct costs totaled $222,425. 
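The health care arithmetic above can be reproduced directly from the report’s figures:

```python
# FY2016 health care obligations and mandays, from the report.
health_care = 148_186_091
mandays_all = 9_096_014     # adults and families
mandays_adult = 8_696_453   # adults only

rate_all = health_care / mandays_all       # ICE's calculation, ~$16.29/day
rate_adult = health_care / mandays_adult   # adults only, ~$17.04/day

# Annualized effect of the per-day underestimate at the projected ADP.
daily_gap = round(rate_adult, 2) - round(rate_all, 2)  # $0.75
annual_effect = daily_gap * 48_879 * 365               # ~$13.4 million
print(round(rate_all, 2), round(rate_adult, 2), round(annual_effect))
```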
Because these facilities operate on firm fixed-price contracts that include other direct costs, and these costs were already budgeted at $5.5 million in the $291.4 million allotted for family facilities, these costs were double-counted in the model and the costs were added to the adult bed rate. It is unclear if cost data for family facilities are also included in the health care and service-wide costs used to calculate the adult bed rate. ICE officials did not provide documentation or their rationale for including the family facilities in their adult bed rate model. Table 10 demonstrates the effect of removing information for family facilities from the other direct cost data and then dividing by the adult population alone. This calculation results in a daily per adult rate for other direct costs of $1.75 for fiscal year 2016, which is 3 cents lower than the rate if the other direct costs for family facilities are included (and the costs are divided by the adult population alone). In addition to the contact named above, Kirk Kiester (Assistant Director), Brian Bothwell, Pamela Davidson, Eric Hauswirth, Susan Hsu, Heather Keister, Sasan J. “Jon” Najmi, Leah Q. Nash, Karen Richey, Daniela Rudstein, Jack Sheehan, and Jeff Tessin made significant contributions to this report.
In fiscal year 2017, ICE operated on a budget of nearly $3 billion to manage the U.S. immigration detention system, which houses foreign nationals whose immigration cases are pending or who have been ordered removed from the country. In recent years, ICE has consistently had to reprogram and transfer millions of dollars into, out of, and within its account used to fund its detention system. The explanatory statement accompanying the DHS Appropriations Act, 2017, includes a provision for GAO to review ICE's methodologies for determining detention resource requirements. This report examines (1) how ICE formulates its budget request for detention resources, (2) how ICE develops bed rates and determines ADP for use in its budget process, and (3) to what extent ICE's methods for estimating detention costs follow best practices. GAO analyzed ICE's budget documents, including CBJs, for fiscal years 2014 to 2018, examined ICE's models for projecting ADP and bed rates, and evaluated ICE's cost estimating process against best practices. U.S. Immigration and Customs Enforcement (ICE) formulates its budget request for detention resources based on guidance from the Office of Management and Budget and the Department of Homeland Security (DHS). To project its detention costs, ICE primarily relies on two variables—the average dollar amount to house one adult detainee for one day (bed rate) and the average daily population (ADP) of detainees. U.S. Immigration and Customs Enforcement's (ICE) Formula to Calculate Detention Costs GAO found a number of inconsistencies and errors in ICE's calculations for its congressional budget justifications (CBJs). For example, in its fiscal year 2015 budget request, ICE made an error that resulted in an underestimation of $129 million for immigration detention expenses. While ICE officials stated their budget documents undergo multiple reviews to ensure accuracy, ICE was not able to provide documentation of such reviews. 
Without a documented process for reviewing the accuracy of its budget requests, ICE is not positioned to ensure their credibility.

ICE has models to project the adult bed rate and ADP for purposes of determining its budget requests. However, ICE consistently underestimated the actual bed rate due to inaccuracies in the model, and it is unclear if the ADP used in the budget justification is based on statistical analysis. GAO identified factors in ICE's bed rate model—such as how it accounts for inflation and double-counts certain costs—that may lead to its inaccurate bed rate projections. For example, in fiscal year 2016, ICE's projections underestimated the actual bed rate by $5.42 per day. For illustrative purposes, underestimating the bed rate by $5 per day, assuming an ADP of 34,000, yields a more than $62 million underestimation in the detention budget request. By assessing its methodology and addressing identified inaccuracies, ICE could ensure a more accurate estimate of its actual bed rate cost. Additionally, ICE reported that the ADP projections in its CBJs are based on policy decisions that account, for example, for anticipated policies that could affect the number of ICE's detainees. While ICE's projected ADP may account for policy decisions, documenting the methodology and rationale by which it determined the projected ADP would help demonstrate how the number was determined and that it was based on sound assumptions.

ICE's methods for estimating detention costs do not fully meet the four characteristics of a reliable cost estimate, as outlined in GAO's Cost Estimating and Assessment Guide. For example, while ICE's fiscal year 2018 detention cost estimate substantially met the comprehensive characteristic, it partially met the well-documented and accurate characteristics, and minimally met the credible characteristic.
By taking steps to fully reflect cost estimating best practices, ICE could better ensure a more reliable budget request. GAO recommends that the Director of ICE: (1) document and implement its review process to ensure accuracy in its budget documents; (2) assess ICE's adult bed rate methodology; (3) update ICE's adult bed rate methodology; (4) document the methodology and rationale behind the ADP projection used in budget requests; and (5) take steps to ensure that ICE's detention cost estimate more fully addresses best practices. DHS concurred with the recommendations.
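The sensitivity of the budget request to a bed-rate error is simple arithmetic: bed rate multiplied by ADP multiplied by days in the year. A minimal sketch of the report's illustration (not ICE's actual cost model):

```python
def detention_cost(bed_rate, adp, days=365):
    """Projected annual detention cost: bed rate x ADP x days."""
    return bed_rate * adp * days

# The report's illustration: a $5-per-day bed-rate underestimate
# at an assumed ADP of 34,000.
shortfall = detention_cost(5.00, 34_000)
print(f"${shortfall:,.0f}")  # $62,050,000 -- "more than $62 million"
```

Because the per-day error is multiplied by both population and 365 days, even small bed-rate inaccuracies compound into large budget shortfalls.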
|
Spinal cord injuries are complex, lifelong injuries that typically result from acute traumatic damage to the spinal cord or nerves within the spinal column. In spinal cord injury patients, certain nervous system functions may be temporarily impaired or permanently lost, depending on the level and severity of the patient's injury. In addition to impaired lower-level nervous system functioning, spinal cord injury patients may develop secondary medical complications that can further decrease functional independence and quality of life, including, but not limited to:
Autonomic dysreflexia: a condition that may result in life-threatening hypertension—high blood pressure—due to impaired nervous system response below the level of spinal cord injury.
Depression: a medical mood disorder—commonly affecting about one in five spinal cord injury patients—that can cause physical and psychological symptoms (including changes in sleep and appetite, and thoughts of death or suicide).
Impaired bowel and bladder functioning: potential inability to move waste through the colon and to control (stop or release) urine, which can lead to other life-threatening illnesses (such as autonomic dysreflexia) and/or infections.
Pressure ulcers: a common complication affecting up to 80 percent of spinal cord injury patients that results from an area of the skin or underlying tissue that is damaged due to decreased blood flow, which can occur after extended periods of inactive sitting or lying, among other ways. Pressure ulcers—also known as pressure sores or wounds—can occur years after initial injury and may also result in life-threatening infections or amputation.
Spasticity: a common condition that affects 65 to 78 percent of spinal cord injury patients and can result in symptoms ranging from mild muscle stiffness to severe, uncontrollable leg movements.
Syringomyelia: a rare disorder that occurs when cerebrospinal fluid—normally found outside of the spinal cord and brain—enters the interior of the spinal cord to form a cyst known as a syrinx. This cyst expands and elongates over time, destroying the center of the spinal cord. Symptoms can develop slowly and can include numbness, pain, effects on bowel and bladder function, or paralysis. While this condition can occur as a result of a trauma, such as a spinal cord injury, the majority of cases are associated with a complex brain abnormality.

Acquired brain injuries occur after birth and are not hereditary, congenital, degenerative, or a result of birth trauma. Acquired brain injuries result in changes to the brain's neuronal activity, which can affect the physical integrity, metabolic activity, or functional ability of nerve cells in the brain. Acquired brain injuries can be either non-traumatic or traumatic in nature: non-traumatic brain injuries are caused by an internal force—such as in the case of stroke, tumors, or drowning—and traumatic brain injuries are caused by an external force—such as in the case of car accidents, gunshot wounds, or falls. The severity of brain injury can often result in changes to physical, behavioral, and/or cognitive functioning. For example, according to one source, nearly 50 percent of all people with a traumatic brain injury experience depression within the first year after injury, and nearly two-thirds experience depression within 7 years post-injury. Depression can develop as a result of physical changes in the brain, emotional response to the injury, and other unrelated factors—such as family history. Due to impaired cognitive functioning, traumatic brain injury patients may also experience difficulty communicating, concentrating, and processing and understanding information.

Acute care hospitals and LTCHs are paid under different Medicare payment systems by law.
Acute care hospitals are paid under the inpatient prospective payment system (IPPS). LTCHs are paid under the LTCH PPS. Under both systems, Medicare classifies patients based on Medicare diagnosis groups, which organize patients based on their conditions and the care they receive. Medicare payments for LTCHs are typically higher than payments for acute care hospitals, to reflect the average resources required to treat Medicare beneficiaries who need long-term care. Traditionally, all LTCH discharges were paid at the LTCH PPS standard federal payment rate. The Pathway for SGR Reform Act of 2013 modified the LTCH PPS by establishing a two-tiered payment system—such that certain LTCH discharges continue to be paid at the standard rate and others are paid at a generally lower, site-neutral rate. In its March 2013 report, MedPAC described concerns regarding growth in the number of LTCHs and the extent to which some of their patients may otherwise be treated appropriately in less costly settings. To continue to be eligible for the standard rate, the discharge must generally have a preceding acute care hospital stay with either an intensive care unit stay of at least 3 days or an assigned diagnosis group based on the receipt of at least 96 hours of mechanical ventilation services in the LTCH, unless an exception applies. Discharges that do not qualify for the standard rate are to receive a blended site-neutral rate—equal to 50 percent of the site-neutral rate and 50 percent of the standard rate—for discharges in cost reporting periods beginning in fiscal years 2016 through 2019, and the full site-neutral rate for discharges in cost reporting periods beginning in fiscal year 2020.
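The blended site-neutral payment described above is a simple 50/50 average of the two rates. A minimal sketch, with hypothetical per-discharge amounts (actual rates vary by diagnosis group and other adjustments not shown here):

```python
def blended_rate(standard, site_neutral):
    """FY2016-2019 blended payment: 50% standard rate + 50% site-neutral rate."""
    return 0.5 * standard + 0.5 * site_neutral

# Hypothetical per-discharge amounts for illustration only.
standard = 40_000.0
site_neutral = 25_000.0
print(blended_rate(standard, site_neutral))  # 32500.0, midway between the two
```

Because the blend is an even average, a non-qualifying discharge during the phase-in loses exactly half the difference between the standard and site-neutral rates, and the full difference once the full site-neutral rate applies in fiscal year 2020.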
Beginning with cost reporting periods in fiscal year 2020, if fewer than half of an LTCH’s discharges meet the statutory requirements to be paid at the standard rate, the LTCH will no longer receive any payments at that rate for discharges in future cost reporting periods until eligibility for receiving payments under that rate is reinstated. Under this scenario, all discharges in succeeding cost reporting periods would be paid at the generally lower rate that an acute care hospital would receive for providing comparable care until eligibility for receiving payments at the standard rate is reinstated. According to officials from HHS, the department intends to establish a process for how hospitals would have their eligibility for receiving payments at the standard rate reinstated as part of the fiscal year 2020 rule-making cycle. Since the two qualifying hospitals are currently only excepted from the statutory two-tiered payment structure for cost reporting periods beginning during fiscal years 2018 and 2019, these two hospitals must also meet the statutory 50 percent threshold in fiscal year 2020 and beyond in order to receive the standard rate for any future discharges until reinstated. See table 1 for more information on Medicare’s LTCH PPS payment policies. Two LTCHs have qualified for the temporary exception to site-neutral payments, according to CMS officials. Craig Hospital is a private, not-for-profit facility that has specialized in medical treatment, research, and rehabilitation for patients with spinal cord and brain injury since 1956. Craig Hospital is classified as an LTCH for the purposes of Medicare payment, and is licensed as a general hospital by the state of Colorado—which does not have separate designations for LTCHs. Craig Hospital has been selected as one of 14 NIDILRR Spinal Cord Injury Model Systems and one of 16 Traumatic Brain Injury Model Systems and is accredited by the Joint Commission.
Shepherd Center is a private, not-for-profit facility that specializes in medical treatment, research, and rehabilitation for people with traumatic spinal cord injury and brain injury—as well as neuromuscular disorders, including multiple sclerosis. Shepherd Center is classified as an LTCH for the purposes of Medicare payment, and as a specialty hospital—which includes LTCHs—by the state of Georgia. Shepherd Center is also currently designated as a NIDILRR Spinal Cord Injury Model System and is accredited by the Joint Commission. Shepherd Center also has several CARF International accredited specialty programs. Specifically, it has CARF-accredited inpatient rehabilitation specialty programs in spinal cord injury and brain injury—for adults, children, and adolescents; and interdisciplinary outpatient medical rehabilitation specialty programs in spinal cord injury and brain injury—for adults, children, and adolescents, among others. More than half of the Medicare discharges in fiscal year 2013 at the two qualifying hospitals—43 of 75 at Craig Hospital and 47 of 88 at Shepherd Center—were within the diagnosis groups designated in section 15009(a) of the 21st Century Cures Act. (See table 2 below for more information.) Patients treated for these diagnosis groups may receive treatment for spinal disorders and injuries; medical back problems; degenerative nervous system disorders; skin grafts for skin ulcers; acquired brain injuries, such as traumatic brain injuries; or other significant traumas with major complicating and comorbid (simultaneous) conditions. Both qualifying hospitals have a variety of specialized inpatient and outpatient programs to help treat the complex health care needs of their patients, including those covered by Medicare. For example, both hospitals have wheelchair positioning clinics that can help prevent skin complications, such as pressure ulcers, that can occur in spinal cord patients. 
Both hospitals also have programs for those patients who need ventilator support such as diaphragmatic pacing—support for patients with respiratory problems whose diaphragm, lungs, and nerves have limited function—and ventilator weaning programs. In addition to clinical programs, both qualifying hospitals also provide transitional support, such as providing counseling and education to families of patients with these injuries. We found that most Medicare beneficiaries at the two qualifying hospitals need specialized services to manage the chronic, long-term effects of a catastrophic spinal cord or brain injury. Most of these patients are younger than 65 and ineligible for Medicare at the time of their initial injury, according to officials from the qualifying hospitals. Instead, according to officials, these patients typically become eligible for Medicare 2 years or more after their initial injury due to disability. Medicare beneficiaries at the two qualifying hospitals typically need care to manage comorbidities or the associated long-term complications of their injury. Officials from Craig Hospital said a significant number of their Medicare beneficiaries have comorbid conditions—such as diabetes or cardiac problems—upon admission that can be further complicated by their injury. The officials said managing these comorbidities is as much of a medical challenge as managing the spinal or brain injury. Officials from both qualifying hospitals noted their Medicare beneficiaries who have a spinal cord or brain injury also frequently seek care after initial injury to address secondary complications resulting from their injury, including urinary tract infections, respiratory problems, and pressure ulcers. While the qualifying hospitals primarily treated traumatic spinal cord or brain injuries, we found that their Medicare populations differed from each other during the period from fiscal year 2013 to 2016. Specifically:

Craig Hospital. Our review of Medicare claims data indicates more than 50 percent of the 246 Medicare discharges during this time were associated with Medicare diagnosis groups for spinal cord conditions. Specifically, during this time, Craig Hospital’s Medicare discharges were commonly assigned to three diagnosis groups covering spinal procedures and spinal disorders and injuries. For example, officials from Craig Hospital told us that about 60 percent of Medicare beneficiaries in fiscal year 2016 required surgical care for a spinal cord injury. According to officials, most of these patients received surgery for syringomyelia—a complication in spinal cord patients that generally develops years after their initial injury. These officials told us that Craig Hospital provided the pre- and post-operative care for those patients in fiscal year 2016; however, currently, Craig Hospital is only responsible for pre-operative assessments. The remaining 40 percent of their Medicare beneficiaries in fiscal year 2016 received care for new spinal cord injuries.

Shepherd Center. Our review of Medicare claims data indicates the most common diagnosis group of the 365 Medicare discharges during this time—fiscal year 2013 to fiscal year 2016—related to treatment for skin grafts that can be associated with pressure ulcers, among other things. Shepherd Center officials confirmed that most of their Medicare beneficiaries received treatment for a pressure ulcer that occurred after initial injury which, as previously noted, can be so severe as to result in life-threatening infections. According to officials, most of their post-injury Medicare beneficiaries receive post-operative care and other wound management services following surgery to treat pressure ulcers, to ensure that the site will not tear again and to avoid reoccurrence.
Other diagnosis groups for Medicare patients at Shepherd Center included those for spinal disorders and injuries and extensive operating room procedures unrelated to principal diagnosis. According to officials, beneficiaries in these diagnosis groups received treatment for a range of conditions, including traumatic injuries, urinary tract infections, neurogenic bladder and bowel or respiratory complications. Officials told us the hospital also served Medicare beneficiaries recovering from other acquired brain injuries, such as stroke, and paralyzing neuromuscular conditions, such as multiple sclerosis. Stakeholders we interviewed—including providers at other facilities— noted that traumatic spinal cord and brain injury patients—including those covered by Medicare—require significant levels of care due to the complexity of their injuries as well as the immediate and long-term complications that can occur from the injuries. For example, most stakeholders told us these patients often require lifelong care due to the complexity and reoccurrence of comorbidities or secondary complications. Some of these stakeholders noted, for example, spinal cord and brain injury patients often face mental health or psychosocial conditions, such as depression or anxiety. Some stakeholders also emphasized that many spinal cord injury patients risk secondary complications that may not occur until years after injury, such as pneumonia, pressure ulcers, and other infections. A few stakeholders told us spinal cord and brain injury patients are often among the most complex patients they treat. As such, patients with spinal cord or brain injuries often require interdisciplinary care that covers a wide range of specialties—including physiatry (rehabilitation medicine), neurology, cardiology, and pulmonology—as well as specialized equipment or technology, such as eye glance tools to control call systems or the television. 
Simulations of Medicare payments illustrate the potential effects of Medicare’s site-neutral payment policies, which were required by law, on the qualifying hospitals. Specifically, our simulations calculated what the qualifying hospitals would have been paid for Medicare patient discharges that occurred in two baseline years—fiscal year 2013 (baseline year 1) and fiscal year 2016 (baseline year 2)—if applicable payment policies from future years (2017 through 2021) were applied to those discharges. We selected two baseline years to account for differences in data, such as the number of discharges, between fiscal year 2016—the most recent year of complete data available at the time we began our analysis—and fiscal year 2013. Table 3 below provides a summary of Medicare discharges and payments to the qualifying hospitals during these two baseline years. Variation in utilization and patient mix across the baseline years allows the simulations to cover a range of possible changes in payments for the two hospitals. Our simulations indicated how Medicare’s payment policies could have affected these baseline payments to each qualifying hospital:

Fiscal Year 2017 Blended Site-Neutral Rate Policy: Discharges that do not meet criteria to receive the standard rate are to receive a blended site-neutral rate—equal to 50 percent of the site-neutral rate and 50 percent of the standard rate. We found that while some of the baseline discharges would qualify for the standard rate, most discharges would have been paid at the blended site-neutral rate. Specifically, 8 to 20 percent of Craig Hospital’s baseline Medicare discharges would have qualified for the standard rate, resulting in simulated payments of about $3.86 million (baseline year 1) and $3.22 million (baseline year 2) under the blended site-neutral rate policy.
For Shepherd Center, between 23 percent and 40 percent of baseline Medicare discharges would have qualified for the standard rate, resulting in simulated payments of about $5.16 million (baseline year 1) and $5.31 million (baseline year 2). Each of these simulated payments is an increase compared to actual payments made in the baseline years.

Fiscal Years 2018 and 2019 Temporary Exception: The qualifying hospitals are receiving the standard rate for all discharges, due to the temporary exception. As a result, simulated payments under the temporary exception are about $3.74 million (baseline year 1) and $3.18 million (baseline year 2) for Craig Hospital and about $5.64 million (baseline year 1) and $5.75 million (baseline year 2) for Shepherd Center, which is an increase compared to actual payments made in the baseline years.

Fiscal Year 2020 Two-Tiered Payment Rate: The temporary exception for the qualifying hospitals no longer applies; therefore, the site-neutral rate will apply to discharges not qualifying for the standard rate. We found that both qualifying hospitals would receive some payments at the standard rate, but that most of their discharges would be paid at the lower, site-neutral rate—assuming similar caseloads (e.g., patient mix). As a result, simulated baseline year payments at Craig Hospital are about $3.47 million (baseline year 1) and $3.03 million (baseline year 2), and simulated baseline payments to Shepherd Center are about $4.42 million (baseline year 1) and $4.55 million (baseline year 2). The simulated payments therefore decrease compared to those in fiscal year 2019, and also generally decrease compared to actual payments made in the baseline years.
Future Years Under 50 Percent Threshold: Under statute, unless 50 percent or more of the hospital’s discharges in cost reporting periods beginning during or after fiscal year 2020 qualify for the standard rate, no subsequent payments will be made to a hospital at that rate in each succeeding cost reporting period. Most of the baseline year discharges did not qualify for the standard rate, and therefore simulated payments are based on the generally lower comparable acute care rate. However, simulated payments stayed about the same between fiscal years 2020 and 2021, in part due to differences in calculations for high-cost outlier payments. A high-cost outlier payment is made to hospitals for those cases that are extraordinarily costly, which can occur because of the severity of the case and/or a particularly long length of stay. Specifically, simulated payments were about $3.49 million (baseline year 1) and $3.02 million (baseline year 2) for Craig Hospital and about $4.24 million (baseline year 1) and $4.16 million (baseline year 2) for Shepherd Center. Without the high-cost outlier payments, the simulated payments would have decreased by at least $2 million. If the mix of patients at Craig Hospital and Shepherd Center changes so that they meet the 50 percent threshold in fiscal year 2020, then simulated payments for fiscal year 2021 could be higher. As of September 2018, Craig Hospital officials told us that they expect to meet the 50 percent threshold with their current patient mix. Shepherd Center officials told us they do not expect to meet the 50 percent threshold. See figures 1 and 2 below for the results of our simulations.

Our simulations of payments assume the number and type of Medicare discharges at the two qualifying hospitals remain the same as those in fiscal years 2013 and 2016. However, the full effect of payment policy on future Medicare payments to the qualifying hospitals will depend on three key factors that are subject to change:

1. Severity of patient conditions: Medicare payment is typically higher for more severe injuries, such as a traumatic injury with major comorbidities or complications, relative to less severe injuries. In the two baseline years we used for our simulations—fiscal year 2013 and fiscal year 2016—more than half of the Medicare discharges at the qualifying hospitals were associated with conditions with multiple comorbidities and complications, as indicated by the diagnosis groups, and this level of severity is reflected in the simulation results. Future payments to qualifying hospitals will depend on the extent to which the severity of patient conditions changes over time.

2. Volume of discharges meeting criteria for the standard rate: As previously noted, for a hospital to receive the standard rate for a discharge, the discharge must meet certain criteria, such as having a preceding acute care hospital stay with either an intensive care unit stay of at least 3 days or an assigned diagnosis group based on the receipt of at least 96 hours of mechanical ventilation services in the LTCH. Our simulations reflect that in the two baseline years, about 23 percent of the fiscal year 2013 discharges and about 40 percent of the fiscal year 2016 discharges met the criteria to receive the standard rate for Shepherd Center; and about 8 percent of the fiscal year 2013 discharges and about 20 percent of the fiscal year 2016 discharges met the criteria for Craig Hospital. Changes to these amounts could affect future payments to the qualifying hospitals. In particular, if 50 percent or more of either hospital’s discharges beginning in fiscal year 2020 meet the standard rate criteria, then the hospitals would be eligible for payments at the standard rate in fiscal year 2021, which may result in higher payments compared to our simulations.

3. Payment adjustments: LTCHs may receive a payment adjustment for certain types of discharges, such as short-stay outliers, interrupted stays, or high-cost outliers. In particular, most discharges at Craig Hospital received high-cost outlier payments (additional payments for extraordinarily costly cases) during the two baseline years—76 percent in fiscal year 2013 and 85 percent in fiscal year 2016. At Shepherd Center, at least 40 percent of discharges during the two baseline years received high-cost outlier payments—about 42 percent in fiscal year 2013 and about 58 percent in fiscal year 2016. The amount of future payments to qualifying hospitals will depend on the extent to which they continue to have a high proportion of discharges with high-cost outlier payments.

In addition to the effect on payments, officials from both qualifying hospitals and some stakeholders we interviewed noted that the LTCH site-neutral payment policies may result in fewer services provided and fewer patients served by the qualifying hospitals and other LTCHs. For example, officials from Craig Hospital told us they stopped providing post-operative care to patients requiring spinal surgery, such as patients with syringomyelia, in 2016—instead referring them to other facilities—in part because these discharges do not meet the criteria for the standard rate. As of September 2018, they told us they do not plan to provide this care in the future unless the temporary exception is extended. Officials from Shepherd Center told us while they have not yet made changes to services they offer to Medicare patients, they may limit which Medicare beneficiaries they serve in the future. For example, they told us that most of their Medicare beneficiaries were admitted from home or sought care in their outpatient clinic. When the temporary exception expires after fiscal year 2019, hospital officials expected that these patients will not qualify for the standard rate.
Shepherd Center officials said they may not be able to serve similar patients in future years. MedPAC officials and some stakeholders—a specialty association and health care providers with experience treating patients with similar conditions at other LTCHs—told us that some LTCHs have changed the services they offer and the patients they treat to increase the proportion of discharges that qualify for the standard rate. For example, MedPAC officials cited reports that indicate how some LTCHs have adjusted to the site-neutral policies. For example, a 2018 MedPAC report indicated that LTCHs in one large for-profit chain were able to make adjustments so that, as of September 30, 2016, close to 100 percent of their Medicare discharges met the criteria to receive the standard rate. A representative from an LTCH association told us that many LTCHs have adjusted their patient mix by increasing the number of discharges that meet criteria for the standard rate and turning away some Medicare beneficiaries to reduce the number of discharges subject to the site-neutral rate. The representative noted that certain LTCHs have already been able to adjust their patient mix because they have existing programs in place that focus on chronic, critically ill patients who would have a preceding acute care hospital stay. The representative told us that some LTCHs specialize in care for patients who do not meet the criteria to receive the standard rate and would generally be paid at the site-neutral rate; therefore, changing their patient mix is not a viable strategy for these LTCHs. According to the stakeholder, as of February 2018, about two-thirds of all LTCHs are above the 50 percent threshold. Providers from another LTCH told us that before the site-neutral payment policy went into effect, only about 40 to 45 percent of its discharges met criteria for the standard rate. However, they worked to ensure most patients referred to the LTCH would qualify for the standard rate. 
Officials told us patients who do not meet the criteria for that rate typically either stay longer in the acute care hospital or are transferred to a different post-acute care setting, such as a skilled nursing facility. Officials noted that, in both cases, the patient may not receive the specialized services often required for their injuries, including those patients with spinal cord or brain injuries. A provider we interviewed from another LTCH said that, historically, the LTCH has accepted patients who acquire pressure ulcers at home following discharge, but they may choose not to continue this practice because the patients’ discharges would not meet the criteria to receive the standard rate. A few of these stakeholders told us some LTCHs are in markets that do not have alternative providers of care, such as skilled nursing facilities, for patients who do not meet the criteria. These LTCHs may have difficulty adjusting their patient mix to avoid site-neutral payments. For example, a provider from one LTCH said his facility continues to take “site-neutral patients” because those patients often do not have another option to receive the specialized services they need. The provider emphasized concerns about the long-term viability of caring for those patients at the facility, because their care is paid at lower rates. Our review of Medicare claims data, other information, and interviews with stakeholders indicated the two qualifying hospitals treated Medicare beneficiaries with different conditions than most of those treated at other LTCHs. Our analysis of Medicare claims data indicates Craig Hospital and Shepherd Center treat very few patients in the Medicare diagnosis groups that are most common to other LTCHs. Specifically, for several years, MedPAC has reported that LTCH patient discharges are concentrated in a relatively small number of diagnosis groups. 
For example, in March 2018, MedPAC reported that 20 diagnosis groups accounted for over 61 percent of LTCH discharges at both for-profit and not-for-profit facilities in fiscal year 2016. However, in fiscal year 2016, these diagnosis groups accounted for approximately 30 percent of Medicare discharges—26 out of 88—at Shepherd Center, and most of these discharges fell within a single diagnosis group which covers a range of conditions. Craig Hospital did not discharge any Medicare beneficiaries assigned to these 20 diagnosis groups in fiscal year 2016. The seven diagnosis groups that were used in the statutory criteria to except Craig Hospital and Shepherd Center from site-neutral payments were also not among these 20 diagnosis groups. For more information on the 20 diagnosis groups common to LTCHs in fiscal year 2016, see Appendix III, table 5. Our review of Medicare claims data and other information indicates the two qualifying hospitals also treat a relatively small number of Medicare beneficiaries, a key distinguishing factor from most other LTCHs. In March 2018, MedPAC reported that, on average, Medicare beneficiaries account for about two-thirds of LTCH discharges. However, Medicare claims data and other information provided by the two qualifying hospitals indicate Medicare beneficiaries account for a significantly smaller proportion (about 8 percent) of patients discharged from Craig Hospital and Shepherd Center in 2016. Specifically, 40 of the 486 patients discharged from Craig Hospital in fiscal year 2016 and 75 of the 912 patients discharged from Shepherd Center in calendar year 2016 were Medicare beneficiaries. Officials from the qualifying hospitals told us they treat few Medicare patients primarily because of the younger average age of persons with spinal cord injuries and acquired brain injuries.
While patients with spinal cord and brain injuries may receive care in other LTCHs, most stakeholders we interviewed also suggested the two qualifying hospitals treat patients that are different from those treated at most other LTCHs, and can offer specialized care. Officials from the two qualifying hospitals told us that, relative to most other facilities—including most traditional LTCHs—they offer a more complete continuum of care to meet the needs of patients at different stages of spinal cord and brain injury treatment, without the need to transfer to different facilities. Officials also stated that, unlike most traditional LTCHs, they are able to offer more specialized care for patients with spinal cord and brain injuries, including more comprehensive rehabilitation services. Stakeholders we interviewed generally agreed that the two qualifying hospitals have developed expertise in treating spinal cord and brain injury patients and offer intensive rehabilitation services that are not provided in most other LTCHs. In addition, officials from the Colorado Department of Health Care Policy & Financing noted that Craig Hospital treats a patient population that is different from most other LTCHs in the state of Colorado. Specifically, according to officials, in comparison to other LTCHs in the state, Craig Hospital treats: (1) a higher percentage of patients with more severe conditions, (2) more patients from outside the state of Colorado, (3) fewer patients requiring ventilator weaning or requiring wound care—conditions typically characteristic of LTCH patients—and (4) patients that are, on average, younger than those at most other LTCHs in the state of Colorado. In addition, a 2014 study of LTCHs conducted for the Georgia Department of Community Health found Shepherd Center was “distinctly different” from other LTCHs in the state of Georgia, and most LTCHs nationwide. 
Most stakeholders we interviewed suggested some IRFs provide specialty care to patients with catastrophic spinal cord injuries, acquired brain injuries, or other paralyzing neuromuscular conditions. Most of the stakeholders we interviewed noted that—like the two qualifying hospitals—some IRFs have the expertise to treat patients with catastrophic spinal cord injuries, acquired brain injuries, or other paralyzing neuromuscular conditions and thus may also treat patients with similar conditions. According to CMS officials, IRFs are specifically designed to provide post-acute rehabilitation services to patients with spinal cord injuries, brain injuries, and other neuromuscular conditions. CMS officials noted that patients with these conditions typically respond well to intensive rehabilitation therapy provided in a resource-intensive inpatient hospital environment and to the specific interdisciplinary approach to care that is provided in the IRF setting. Stakeholders also noted that patients with spinal cord injuries, brain injuries, and other neuromuscular conditions may receive care in other settings. However, some stakeholders noted that some of these providers—such as skilled nursing facilities—generally do not offer the specialized care these patients generally require. Differences in payment systems and data limitations make it difficult to directly compare the attributes of Medicare beneficiaries discharged from the two qualifying hospitals and IRFs, including the costs of care they receive. Medicare uses separate payment systems to pay LTCHs and IRFs for care provided to beneficiaries. LTCHs are paid pre-determined fixed amounts for care provided to Medicare beneficiaries, under the LTCH PPS. Medicare beneficiaries treated in LTCHs are assigned to diagnosis groups (MS-LTC-DRGs) for each stay—based on the patient’s primary and secondary diagnoses, age, gender, discharge status, and procedures performed. 
IRFs are also paid pre-determined fixed amounts for care provided to Medicare beneficiaries, but under a separate system—the IRF PPS. Medicare beneficiaries treated in IRFs are assigned to case-mix groups—based on age and level of motor and cognitive function—and then further assigned to one of four tiers (within these groups) based on the presence of specific comorbidities that may increase their cost of care. According to CMS officials, because the payment groups and assignments to those groups are different, it is difficult to directly compare LTCH patients, classified in diagnosis groups, with IRF patients, classified in case-mix groups. See Appendix II for more information on these payment systems. MedPAC has previously reported that the differences in patient assessment tools used by post-acute care providers undermine Medicare’s ability to compare the patients admitted, costs of care, and outcomes beneficiaries achieve in these settings, on a risk-adjusted basis. MedPAC has also reported that while similar beneficiaries can receive care in each setting, payments can differ considerably for comparable conditions, due to differences in payment systems. It has made recommendations to address these issues. The Improving Medicare Post-Acute Care Transformation Act of 2014 also requires the Secretary of HHS to collect and analyze common patient assessment information and, in consultation with MedPAC, submit a report to Congress recommending a post-acute care PPS. Such efforts may make future comparison of beneficiaries, costs of services, and outcomes of care across these settings possible. While data limitations make a direct comparison difficult, based on our review of other data and information, and interviews with stakeholders, we identified similarities and differences between the qualifying hospitals and certain IRFs that provide specialty treatment for catastrophic spinal cord injuries, acquired brain injuries, or other paralyzing neuromuscular conditions. 
Key similarities and differences include the following: Volume of services. Our review of Medicare claims data, other information, and interviews with stakeholders indicate that—similar to the two qualifying hospitals—some IRFs treat a high volume (at least 100) of patients with complex spinal cord injury, brain injury, and other related conditions. Officials from the two qualifying hospitals, as well as some other stakeholders we interviewed—including officials from the Christopher & Dana Reeve Foundation and the Brain Injury Association of America—emphasized the importance of facilities treating a high volume of patients with these specialized conditions, which can be an indicator of expertise in treating these patients. Our review of Medicare claims data for 1,148 IRFs in fiscal year 2016 identified 21 IRFs that treated at least 100 Medicare beneficiaries with non-traumatic and traumatic spinal cord injuries and 109 IRFs that treated at least 100 Medicare beneficiaries with non-traumatic and traumatic brain injuries. Our review of Medicare claims data indicated that—similar to the two qualifying hospitals—some IRFs also treat a high volume of patients with “catastrophic” injuries—traumatic brain injury, traumatic spinal cord injury, and major multiple traumas with brain or spinal cord injuries. Specifically, we identified 25 IRFs that treated a high volume (at least 100) of Medicare beneficiaries with catastrophic injuries, in fiscal year 2016. In the absence of patient assessment data from the facilities, we did not independently evaluate the level and severity of these patients’ injuries, which can vary due to the presence of other co-morbid conditions. The Medicare case mix indexes we reviewed for these 25 IRFs indicated that, relative to other IRFs, most of these facilities treat patients who are more resource intensive. Specialty accreditation and designation as model systems. 
Like Shepherd Center, some IRFs receive CARF-accreditation for specialty programs to treat spinal cord and brain injuries. According to most stakeholders, this accreditation indicates expertise in treating these patients, as CARF International has established standards using evidence-based practices, among other factors. Officials from the two qualifying hospitals also noted CARF International has a specific focus on quality and outcomes. However, officials from Shepherd Center noted similarities in care and services offered at CARF-accredited facilities would depend on the specialties for which they are certified. Most of the stakeholders we interviewed also noted that designation as a NIDILRR model system is an indicator of similar expertise in treating patients with spinal cord and brain injuries. According to the Model Systems Knowledge Translation Center, spinal cord injury and brain injury model systems are recognized as national leaders in medical research and patient care and provide the highest level of comprehensive specialty services from the point of injury through eventual re-entry into full community life. While stakeholders we interviewed from NIDILRR model systems indicated the model system designation is focused primarily on research, rather than clinical care, most noted that model systems’ research often complements the facilities’ clinical efforts to address the unique needs of these patients. Officials from HHS’s Administration for Community Living also noted that all model system grantees must provide a continuum of care—emergency care, acute medical care, acute medical rehabilitation, and post-acute care—and that can happen in various provider types. According to officials from the qualifying hospitals and stakeholders from one other NIDILRR model system we interviewed, Craig Hospital and Shepherd Center are the only two LTCHs currently classified as spinal cord injury model systems; 12 of 14 spinal cord injury model systems are IRFs. 
Specialized programs and services. Similar to the two qualifying hospitals, some IRFs may also offer specialized programs and services for patients with brain and spinal cord injuries, but the availability of these programs and services may vary by facility. Officials from some of the IRFs that responded to our information request—which included both NIDILRR facilities and IRFs with CARF-accredited programs—told us they provide specialized programs and services for patients with similar conditions as those treated at the two qualifying hospitals, and sometimes compete with the two qualifying hospitals for the same patients. For example, each IRF reported having interdisciplinary treatment teams; the capacity to provide medical management of medically complex and high-acuity patients with spinal cord injury, traumatic brain injury, or other major multiple traumas associated with a brain or spinal cord injury; family education and training; and skin and wound programs or services, among other services. However, the availability of certain services—including but not limited to ventilator-dependent weaning programs, diaphragmatic pacing, and outpatient programs for spinal cord and traumatic brain injury patients—varied by facility. Staff with specialized training and clinical expertise. Similar to the two qualifying hospitals, most facilities that responded to our information request also reported having physicians, nurses, and physical and occupational therapists with specialty training in medical rehabilitation, spinal cord injury, and/or brain injury. However, the number of staff with this training varied by facility. In comparison to the other facilities that responded to our information request, the numbers of nurses and physical and occupational therapists with this specialty training were generally higher at Craig Hospital and Shepherd Center. 
According to an American Spinal Injury Association consumer guideline that the Christopher & Dana Reeve Foundation typically provides to spinal cord injury patients and families, programs should regularly admit persons with spinal cord injury each year, to develop and maintain the necessary skills to manage a person with spinal cord injury, and a substantial portion of those admitted should have traumatic injuries. Out-of-state admissions. Officials from the two qualifying hospitals emphasized they admit a significant number of patients from out of state, and our review of information provided by the qualifying hospitals and a select group of IRFs indicated the qualifying hospitals admit a higher percentage of patients from out of state. Specifically, information provided by these IRFs indicates that less than a quarter of patients admitted to these facilities, in 2016, were from out of state. Information provided by Craig Hospital and Shepherd Center indicates that about half of their patients were admitted from out of state in 2016. Officials from the Colorado Department of Health Care Policy & Financing also noted Craig Hospital treats a higher percentage of out-of-state patients, compared to IRFs in the state. Ability to treat medically complex patients. Officials from the two qualifying hospitals told us they treat more medically complex patients and provide a more complete range of medical services to spinal cord and brain injury patients, not provided by most IRFs. Specifically, officials from the two qualifying hospitals both noted they are able to treat patients much sooner in their recovery process than most IRFs, due to their LTCH status. Officials from the Shepherd Center noted that they have a 10-bed intensive care unit which allows them to take patients with certain injuries that some IRFs may not be equipped to admit—such as patients requiring advanced medical management and advanced-level procedural services and monitoring. 
Information provided by Shepherd Center indicated that, in calendar year 2017, approximately 20 percent of all inpatients were admitted to this unit and 13 percent of all inpatients were internally transferred to this unit after developing medical complications. According to officials, Craig Hospital does not have an intensive care unit, but they noted their ability to similarly care for medically complex patients—including telemetry (e.g., specialized heart monitoring) and one-to-one nursing care, if necessary. Most stakeholders we interviewed agreed that both qualifying hospitals’ LTCH status provides certain advantages over IRFs, such as the ability to admit some medically complex patients earlier in the recovery process and longer lengths of stay. Stakeholders from most of the IRFs we interviewed also reported having the flexibility to admit some medically complex patients requiring more advanced-level monitoring and resources earlier in the recovery process—such as patients with disorders of consciousness. Officials from the two qualifying hospitals also said they offer a continuum of care that can meet patients’ changing needs, without the need to transfer them to different facilities. Information provided by Craig Hospital indicated that 83 percent of patients treated at its facility, in 2016, were discharged to home, 13 percent were discharged to another post-acute care facility, and 3 percent were discharged to an acute care hospital. In 2016, approximately 91 percent of patients treated at Shepherd Center were discharged to home, 7 percent were discharged to another post-acute care facility, and 2 percent were discharged to an acute care hospital. Information provided by the IRFs that responded to our written request varied by facility, but—similar to the two qualifying hospitals—each facility discharged more than 65 percent of patients to home. IRF payment criteria. 
CMS and most other stakeholders we interviewed noted that two Medicare payment policies applicable to IRFs, but not LTCHs, may contribute to their different patient populations. Specifically, to be classified for payment under Medicare’s IRF PPS, at least 60 percent of the IRF’s total inpatient population must require intensive rehabilitative treatment for one or more of 13 conditions—which include both spinal cord and brain injury. To be admitted to an IRF, Medicare beneficiaries must reasonably be expected to actively participate in and benefit from the intensive rehabilitation therapy program typically provided in IRFs. According to HHS, per industry standard, the intensive rehabilitation therapy program is often demonstrated by providing three hours of rehabilitation services per day for at least five days per week, but this is not the only way such intensity can be demonstrated. Officials from the two qualifying hospitals told us they generally use Medicare’s intensive rehabilitation requirement as a minimum standard for their rehabilitation patients—even though they are not held to this requirement for the purposes of Medicare payment—but noted that some of their patients may not meet this requirement, due to their medical complexity. Length of stay and site-neutral payment requirements for LTCHs. As previously noted, LTCHs—including the two qualifying hospitals—must have an average length of stay of greater than 25 days; IRFs are not subject to this requirement. The average length of stay for patients discharged from Craig Hospital was about 60 days, in fiscal year 2016, and the average length of stay for patients discharged from Shepherd Center was about 53 days, in calendar year 2016. 
Stakeholders from the IRFs that responded to our information request reported average lengths of stay ranging from 14 to 31 days for patients discharged in fiscal year 2016; the lengths of stay were slightly higher for spinal cord injury and traumatic brain injury inpatients at these IRFs, during the same period. LTCHs are also generally subject to a site-neutral payment policy that is not applicable to IRFs and may decrease LTCHs’ payments for certain discharges, under Medicare. Other services provided. In addition to these Medicare-specific differences, a few stakeholders we interviewed also noted the two qualifying hospitals receive additional funding from their strong philanthropic donor base that may allow them to provide other services and resources not covered by Medicare or offered at some IRFs. For example, while a few IRFs that responded to our information request reported offering housing for families of injured patients, the two qualifying hospitals offer up to 30 days of free housing to families of newly injured rehabilitation patients, if both the family and patient live more than 60 miles from the hospital. Officials from Shepherd Center told us their revenues are supplemented by investment income and donor funds. Craig Hospital has also established a foundation that supports the hospital in achieving its goals through philanthropy. We provided a draft of this report to HHS. HHS provided technical comments, which we incorporated as appropriate. We also provided the two qualifying hospitals summaries of information we collected from them, to confirm the accuracy of statements included in our draft report. We incorporated their comments, as appropriate. We are sending copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at farbj@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix IV. This appendix describes our methodology for conducting simulations of payments for the two qualifying hospitals. We used Medicare claims data to conduct simulations of payments for the two qualifying hospitals. We first identified discharges at each hospital in two baseline years—federal fiscal years 2013 and 2016. We selected fiscal year 2016 because it was the year with the most recent data available at the time of our analysis, and we selected a second baseline year because data for 2016 was different than data for other recent years. For example, the number of discharges for one qualifying hospital declined by nearly half between fiscal years 2013 and 2016. We chose fiscal year 2013 because data from that year was used to help determine which hospitals are subject to the temporary exception. To identify how to appropriately calculate the long-term care hospital (LTCH) payment for each of these discharges in future payment years, we reviewed applicable federal regulation and documents from the Centers for Medicare & Medicaid Services (CMS) and the Medicare Payment Advisory Commission (MedPAC), and interviewed officials from both organizations. See table 4 for the relevant components in the formulas, such as Medicare severity long-term care diagnosis related group (MS-LTC-DRG) weights, identified from final rule tables. 
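The components named in these final-rule tables combine along the general lines sketched below. This is a simplified illustration of the shape of an LTCH PPS standard-rate payment, not GAO's or CMS's actual computation: the labor-related share is an assumed illustrative value, and short-stay and high-cost outlier adjustments (which use the geometric mean length of stay and fixed-loss amounts) are omitted.

```python
# Simplified sketch: an LTCH standard-rate payment scales a wage-adjusted
# base rate by the discharge's MS-LTC-DRG relative weight. The labor_share
# default is an assumed illustrative value, not a published figure, and
# outlier provisions are deliberately left out.

def ltch_standard_payment(base_rate, drg_weight, wage_index, labor_share=0.66):
    # Wage-adjust only the labor-related portion of the base rate,
    # then scale by the discharge's relative weight.
    adjusted_base = base_rate * (labor_share * wage_index + (1 - labor_share))
    return adjusted_base * drg_weight
```

With a wage index of 1.0 the base rate passes through unchanged, so a discharge with a relative weight of 1.2 is paid 1.2 times the base rate; higher-wage areas raise only the labor portion.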
When conducting these simulations, we made the following assumptions: For simulated payments for payment policies in effect for fiscal years 2017 and 2018, we used the base rates, relative weights (e.g., the MS-LTC-DRG weights), geometric mean length of stay, wage index, geographic adjustment factor, fixed-loss amounts, and outlier thresholds that were published in the final rule tables for LTCH and inpatient prospective payment system (IPPS) hospitals—also known as acute care hospitals—for each respective year. At the time we began our analysis, this information was not known for fiscal years 2019 through 2021. We chose to use the fiscal year 2018 rates when conducting simulations for payment policies in those years because historical trends showed that annual changes were minimal—about 1 percent. Therefore, to the extent that these values continue to change over time, our findings may understate or overstate the amount that the qualifying hospitals would have been paid in our baseline years based on these future payment policies. The site-neutral payment policy did not apply to discharges from the fiscal year 2013 baseline year. Therefore, we examined Medicare claims data to determine whether each discharge would have met the criteria to receive the LTCH standard rate in that year. Specifically, we determined whether each discharge had an acute care hospital stay that immediately preceded the LTCH stay. We then determined whether the time at the acute care hospital included three or more days in the intensive care unit or whether there was a code on the LTCH claim that indicated at least 96 hours of mechanical ventilation services were provided. Per Medicare’s payment policy, we assumed any discharge that met these two criteria would qualify for the full LTCH payment rate, unless the case was a psychiatric or rehabilitation stay, as identified by the following MS-LTC-DRG codes: 876, 880, 881, 882, 883, 884, 885, 886, 887, 894, 895, 896, 897, 945, or 946. 
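The discharge-qualification test described above can be sketched as follows. This is a minimal sketch of the decision logic, not the actual analysis code; the field names (`ms_ltc_drg`, `preceded_by_acute_stay`, `icu_days`, `ventilation_hours`) are hypothetical stand-ins for values derived from the claims data.

```python
# Psychiatric and rehabilitation stays excluded from the standard rate,
# per the MS-LTC-DRG codes listed in the text.
PSYCH_REHAB_DRGS = {876, 880, 881, 882, 883, 884, 885, 886, 887,
                    894, 895, 896, 897, 945, 946}

def qualifies_for_standard_rate(discharge):
    """True if a discharge would receive the LTCH standard rate:
    an immediately preceding acute care hospital stay, plus either
    3+ ICU days during that stay or 96+ hours of mechanical
    ventilation on the LTCH claim -- unless the stay is psychiatric
    or rehabilitation."""
    if discharge["ms_ltc_drg"] in PSYCH_REHAB_DRGS:
        return False
    if not discharge["preceded_by_acute_stay"]:
        return False
    return discharge["icu_days"] >= 3 or discharge["ventilation_hours"] >= 96
```

For example, a discharge with a preceding acute stay and three ICU days qualifies, while the same discharge assigned to a rehabilitation diagnosis group (e.g., 945) does not.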
Under statute, unless 50 percent or more of the hospital’s discharges beginning during or after 2020 qualify for the standard rate, no subsequent payments will be made to a hospital at that rate. Therefore, when calculating simulated payments for fiscal year 2021, we applied the 50 percent threshold. At the time of our analysis, CMS had not yet finalized this policy through rule-making. As of November 2018, CMS officials told us that it is unlikely that any payment adjustment under this provision would apply until 2022 because the percentage cannot be determined until after an LTCH’s cost reporting period has ended and data have been submitted. Shepherd Center’s fiscal year is different than the federal fiscal year. Therefore, the variables used to determine whether discharges in federal fiscal year 2016 met criteria to receive the standard rate were not available to use for some of the discharges that year. Of those discharges, we assumed that the same percentage of discharges that met the criteria to receive the standard rate in Shepherd’s fiscal year—30 percent—met the criteria in federal fiscal year 2016. When calculating site-neutral payments, we assumed that each discharge would be paid at a rate comparable to that for acute care hospitals—the IPPS comparable amount rate. Site-neutral payments may also be based on the estimated cost-of-care, if it is lower than the IPPS comparable amount rate. However, over 90 percent of discharges at the qualifying hospitals were paid at the IPPS comparable amount rate in fiscal year 2016. Per CMS’s recommendation, we applied the cost-to-charge ratio that was effective October 1, 2017, for each qualifying hospital, regardless of discharge date. For Craig Hospital this value was 0.442 and for Shepherd Center this value was 0.464. According to CMS officials, in general, these values do not change significantly when they are updated during the fiscal year. 
Therefore, they believe that using the values effective at the start of the fiscal year is a reasonable assumption. We excluded indirect medical education adjustments and disproportionate share hospital payments that are part of the IPPS comparable amount rate because, according to CMS, they were unlikely to have much impact for these hospitals. CMS reviewed each of these assumptions and agreed they were reasonable for purposes of our analysis. CMS also verified that we were correctly applying the formulas for calculating these payments and using the appropriate values from the final rules. Figures 3 and 4 illustrate the methodology for calculating Medicare payments under the long-term care hospital (LTCH) prospective payment system (PPS) and the inpatient rehabilitation facility (IRF) PPS, respectively, as reported by the Medicare Payment Advisory Commission (MedPAC). Appendix III: List of Common Diagnosis Groups for Long-Term Care Hospitals (LTCH) In its March 2018 annual report to the Congress, the Medicare Payment Advisory Commission (MedPAC) reported that 20 diagnosis groups accounted for over 61 percent of LTCH discharges at both for-profit and not-for-profit facilities, in fiscal year 2016. Table 5 provides a list of these 20 diagnosis groups. In addition to the contact named above, Will Simerl, Assistant Director; Kathy King; Amy Leone, Analyst-in-Charge; Todd Anderson; Sam Amrhein; LaKendra Beard; Rich Lipinski; Jennifer Rudisill; and Eric Wedum made key contributions to this report. Also contributing were Leia Dickerson, Diona Martyn, Vikki Porter, and Lisa Rogers.
The Centers for Medicare & Medicaid Services pays LTCHs for care provided to Medicare beneficiaries. There were about 400 LTCHs across the nation in 2016. The 21st Century Cures Act included a provision for GAO to examine certain issues pertaining to LTCHs. This report examines (1) the health care needs of Medicare beneficiaries who receive services from the two qualifying hospitals; (2) how Medicare LTCH payment policies could affect the two qualifying hospitals; and (3) how the two qualifying hospitals compare with other LTCHs and other facilities that may treat Medicare patients with similar conditions. GAO analyzed the most recently available Medicare claims and other data for the two qualifying hospitals and other facilities that treat patients with spinal cord injuries. GAO also interviewed HHS officials and stakeholders from the qualifying hospitals, other facilities that treat spinal cord patients, specialty associations, and others. GAO provided a draft of this report to HHS. HHS provided technical comments, which were incorporated as appropriate. GAO also provided the two qualifying hospitals summaries of information collected from them, to confirm the accuracy of statements included in the draft report, and incorporated their comments as appropriate. Spinal cord injuries may result in secondary complications that often lead to decreased functional independence and quality of life. The 21st Century Cures Act changed how Medicare pays certain long-term care hospitals (LTCH) that provide spinal cord specialty treatment. For these hospitals, the act included a temporary exception from how Medicare pays other LTCHs. Two LTCHs—Craig Hospital in Englewood, Colorado and Shepherd Center in Atlanta, Georgia—have qualified for this exception. 
GAO found that most Medicare beneficiaries treated at these two hospitals typically receive specialized care for multiple chronic conditions and other long-term complications that develop after initial injuries, such as pressure ulcers that can result in life-threatening infection. The two hospitals also provide specialty care for acquired brain injuries, such as traumatic brain injuries. GAO's simulations of Medicare payments to these two hospitals using claims data from two baseline years—fiscal years 2013 and 2016—illustrate potential effects of payment policies. LTCHs are paid under a two-tiered system for care provided to beneficiaries: they receive the LTCH standard federal payment rate—or standard rate—for certain patients discharged from the LTCH, and a generally lower rate—known as a “site-neutral” rate—for all other discharges. Under the temporary exception, Craig Hospital and Shepherd Center receive the standard rate for all discharges during fiscal years 2018 and 2019. Assuming their types of discharges remain the same as in fiscal years 2013 and 2016, GAO's simulations of Medicare payments in the baseline years indicate: Most of the discharges GAO examined would not qualify for the standard rate, if the exception did not apply. Medicare payments would generally decrease under fiscal year 2020 payment policy, once the exception expires. However, the actual effects of Medicare's payment policies on these two hospitals could vary based on several factors, including the severity of patient conditions (e.g., Medicare payment is typically higher for more severe injuries), and whether hospitals' discharges meet criteria for the standard rate. 
Patients with spinal cord and brain injuries may receive care in other LTCHs, but GAO found that most Medicare beneficiaries at these other LTCHs are treated for conditions other than spinal cord and brain injuries. Certain inpatient rehabilitation facilities (IRF) also provide post-acute rehabilitation services to patients with spinal cord and brain injuries. While data limitations make a direct comparison between these facilities and the two qualifying hospitals difficult, GAO identified some similarities and differences. For example, officials from some IRFs we interviewed reported providing several of the same programs and services as the two qualifying hospitals to medically complex patients, but the availability of services and complexity of patients varied. Among other reasons, the different Medicare payment requirements that apply to LTCHs and IRFs affect the types of services they provide and the patients they treat.
The Energy Policy and Conservation Act (EPCA) of 1975 authorized the SPR, partly in response to the Arab oil embargo of 1973 to 1974 that caused a shortfall in the international oil market. The SPR is owned by the federal government, managed by DOE’s Office of Petroleum Reserves, and maintained by Fluor Federal Petroleum Operations LLC. The SPR stores oil in underground salt caverns along the Gulf Coast in Louisiana and Texas. DOE established an initial target capacity for the SPR of 500 million barrels based on U.S. import levels and implemented a phased approach to create large underground oil storage sites in salt formations, to reach a physical storage capacity of 750 million barrels. The SPR currently maintains four storage sites with a physical capacity of 713.5 million barrels. Three recent laws required sales of oil from the SPR to fund its modernization and other national priorities. The Bipartisan Budget Act of 2015 provided for the drawdown and sale of 58 million barrels of oil from fiscal years 2018 through 2025 and authorized the sale of up to $2 billion worth of oil through fiscal year 2020 to be used for an SPR modernization program. The Fixing America’s Surface Transportation Act provided for the drawdown and sale of 66 million barrels of oil from fiscal years 2023 through 2025. The 21st Century Cures Act provided for the drawdown and sale of 25 million barrels from fiscal years 2017 through 2019. DOE estimates that, as a result of these sales, the SPR will hold between 506 and 513 million barrels of oil by 2025. For member countries to meet net petroleum import obligations, the IEA counts both public and private oil reserves, although the United States meets its IEA obligation solely through the SPR. As of July 2017, according to IEA data, the SPR held the equivalent of 141 days of import protection and U.S. private oil held the equivalent of an additional 216 days, for a total of about 356 days. 
Based on EIA projections of net imports, between 506 and 513 million barrels of oil would be equivalent to about 242 and 245 days of net imports in 2025. The United States has two regional refined product reserves—Northeast Home Heating Oil Reserve and Northeast Gasoline Supply Reserve. The Northeast Home Heating Oil Reserve, which is not part of the SPR, holds 1 million barrels of ultra-low sulfur distillate, used for heating oil, for homes and businesses in the northeastern United States, a region heavily dependent upon the use of heating oil, according to DOE’s website. The distillate is stored in leased commercial storage in terminals located in three states: Connecticut, Massachusetts, and New Jersey. In 2000, President Clinton directed the creation of the reserve to hold approximately 10 days of inventory, the time required for ships to carry additional heating oil from the Gulf of Mexico to New York Harbor. The Northeast Gasoline Supply Reserve, a part of the SPR, holds a 1 million barrel supply of gasoline for consumers in the northeastern United States. According to DOE’s website, this region is particularly vulnerable to gasoline disruptions as a result of hurricanes and other natural events. In response to Superstorm Sandy, which caused widespread gasoline shortages in the region in 2012, DOE conducted a test sale of the SPR and used a portion of the proceeds from the sale to create the reserve in 2014. The gasoline is stored in leased commercial storage in terminals located in three states: Maine, Massachusetts, and New Jersey. Under conditions prescribed by EPCA, as amended, the President has discretion to authorize the release of petroleum products from the SPR to minimize significant supply disruptions. In the event of an oil supply disruption, the SPR can supply the market by selling stored oil. 
Should the President order an emergency sale of SPR oil, DOE conducts a public sale, evaluates and selects offers, and awards contracts to the highest qualified bidders. Purchasers are responsible for making their own arrangements for the transport of the SPR oil to its final destination. The Secretary of Energy also is authorized to release petroleum products from the SPR through an exchange for the purpose of acquiring oil for the SPR. According to DOE officials, this authority is sometimes utilized in oil supply disruptions when a specific volume of SPR oil is provided to a private sector company in an emergency exchange for an equal quantity of oil plus an additional amount as a premium to be returned to the SPR in the future. According to DOE’s website, emergency exchanges are generally requested by a company after an event outside the control of the company, such as a hurricane, disrupts commercial oil supplies. The Secretary of Energy is also authorized to carry out test drawdowns through a sale or exchange of petroleum products to evaluate SPR’s drawdown and sales procedures. When oil is released from the SPR, it flows through commercial pipelines or on waterborne vessels to refineries, where it is converted into gasoline and other petroleum products, and then transported to distribution centers for sale to the public. Petroleum markets have changed substantially in the 40 years since the establishment of the SPR, including increases in global markets, increases in domestic oil production, and declines in net petroleum imports. Increases in global markets. At the time of the Arab oil embargo, price controls in the United States prevented the prices of oil and petroleum products from increasing as much as they otherwise might have, contributing to a physical oil shortage that caused long lines at gasoline stations throughout the United States. 
Now that the oil market is global, the price of oil is determined in the world market, primarily on the basis of supply and demand. In the absence of price controls, scarcity is generally expressed in the form of higher prices, as purchasers are free to bid as high as they want to secure oil supply. In a global market, an oil supply disruption anywhere in the world raises prices everywhere. Releasing oil reserves during a disruption provides a global benefit by reducing oil prices in the world market. Increases in domestic oil production. Reversing a decades-long decline, U.S. oil production has generally increased in recent years. According to EIA data, U.S. production of oil reached its highest level in 1970 and generally declined through 2008, reaching a level of almost one-half of its peak. During this time, the United States increasingly relied on imported oil to meet growing domestic energy needs. However, recent improvements in technologies have allowed producers to extract oil from shale formations that were previously considered to be inaccessible because traditional techniques did not yield sufficient amounts for economically viable production. In particular, the application of horizontal drilling techniques and hydraulic fracturing—a process that injects a combination of water, sand, and chemical additives under high pressure to create and maintain fractures in underground rock formations that allow oil and natural gas to flow—have increased U.S. oil and natural gas production. Declines in net petroleum imports. One measure of the economy’s vulnerability to oil supply disruptions is to assess net petroleum imports—that is, imports minus exports. Net petroleum imports have declined by over 60 percent from a peak of about 12.5 million barrels per day in 2005 to about 4.8 million barrels per day in 2016. In 2006, net imports were expected to increase in the future, increasing the country’s reliance on foreign oil. 
However, imports have declined since then and, according to EIA’s most recent forecast, are expected to remain well below 2005 import levels into the future. Canada and Mexico are the nation’s major foreign sources for imported oil. Furthermore, the United States has increased its exports of oil and refined petroleum products. To quantify how DOE has used the SPR to address domestic petroleum supply disruptions, we reviewed DOE and EIA documents. We also reviewed our past work from August 2006 to January 2014. Our preliminary analysis indicates that DOE has primarily used exchanges with private companies in response to domestic supply disruptions such as hurricanes and other events. DOE has released oil 24 times from 1985 through September 2017, including 11 releases in response to domestic supply disruptions. Of these 11 releases, 10 were exchanges, including 6 exchanges in response to hurricanes. One of the 11 releases was an SPR sale in response to Hurricane Katrina, which was part of an IEA-coordinated action release. Historic releases from the SPR are shown in figure 1. Our preliminary analysis also indicates that the six exchanges from DOE to U.S. refineries in response to hurricanes totaled about 28 million barrels. Based on our preliminary analysis of DOE documents, most recently, in response to Hurricane Harvey in 2017, DOE exchanged 5 million barrels of oil with Gulf Coast refineries that requested supplies. Refinery operations largely depend on a supply of oil and feedstocks. Hurricane Harvey closed or restricted ports through which 2 million barrels of oil per day were imported, and several refineries had no supply options except for SPR oil. According to DOE officials, exchanges from the SPR have allowed refineries to continue to operate until alternative supply sources became available, ensuring continued production of refined petroleum products for use by consumers. 
Based on our preliminary analysis of DOE documents, DOE’s most significant response to a hurricane was in 2005 following Hurricane Katrina. As we reported in January 2014, oil platforms were evacuated and damaged, virtually shutting down all oil production in the Gulf region as a result of the hurricane. Based on our preliminary analysis of DOE documents, exchanges from the SPR, totaling 9.8 million barrels of oil, helped refineries offset this short-term physical supply disruption at the beginning of the supply chain, thereby helping to moderate the impact of the production shutdown on U.S. oil supplies. In addition to these exchanges, DOE also participated in an IEA collective action that was called in response to Hurricane Katrina by selling 11 million barrels of oil from the SPR, and IEA member countries delivered and sold much needed gasoline and other products to the United States. In total, DOE sold or exchanged 20.8 million barrels of oil from the SPR. Our preliminary analysis of DOE documents and reports also showed that although almost all of DOE’s releases in response to domestic supply disruptions have been from the SPR, DOE also used the Northeast Home Heating Oil Reserve in response to Superstorm Sandy in 2012. According to DOE’s website, the agency transferred approximately 120,000 barrels of fuel to the Department of Defense to help provide fuel for first responders. Based on our past work and preliminary observations, the SPR is limited in its ability to respond to domestic petroleum supply disruptions for three main reasons. First, as we reported in September 2014, reserves are almost entirely composed of oil and not refined products, which may not be effective in responding to all disruptions that affect the refining sector. Second, as we reported in September 2014, reserves are nearly entirely located in one region, the Gulf Coast, which may limit responsiveness to disruptions in other regions. 
Third, during the course of our ongoing work, we reviewed DOE and energy task force reports that found that the statutory authorities governing SPR releases may inhibit their use for regional disruptions. Composition: As we reported in September 2014, the SPR is almost entirely composed of oil, which may not be effective in responding to all disruptions that affect the refining sector. In September 2014, we reported that many recent economic risks associated with supply disruptions have originated from the refining and distribution sectors, which provide refined products, such as gasoline, rather than from shortages of oil. Oil reserves are of limited use in such instances. We reported in May 2009 that following Hurricanes Katrina and Rita, nearly 30 percent of U.S. refining capacity was shut down for weeks, disrupting supplies of gasoline and other products. The SPR could not mitigate the effects of disrupted supplies because it holds oil. As of September 2017, over 99 percent of the SPR and its Northeast Gasoline Supply Reserve component (about 674 of 675 million barrels) is held as oil rather than as refined products, such as gasoline and diesel. Moreover, Gulf Coast hurricanes, such as Hurricane Katrina in 2005, Hurricane Ike and Hurricane Gustav in 2008, and Hurricane Harvey this year, have severely impacted refinery operations. According to DOE officials, oil reserves are not able to mitigate the potential effects of large-scale Gulf Coast refinery outages that may impact refined product deliveries. Location: As we reported in September 2014, the SPR is nearly entirely located in one region, the Gulf Coast, which may limit its ability to respond to disruptions in other regions. In the Gulf Coast, the SPR is located close to a major refining center as well as to distribution points for tankers, barges, and pipelines that can carry oil from it to refineries in other regions of the country. 
Most of the system of oil pipelines in the United States was constructed in the 1950s, 1960s, and 1970s to accommodate the needs of the refining sector and demand centers at the time. Given the SPR’s current location in the Gulf Coast, transporting oil from the reserve may impact commercial distribution of oil. Based on our ongoing work, according to DOE’s 2016 long-term strategic review of the SPR, the agency reported that the expanding North American oil production and the resulting shifts in how oil is transported around the country have reduced the SPR’s ability to add incremental barrels of oil to the market under certain scenarios in the event of an oil supply crisis. This means that while the SPR remains connected to physical assets that could bring oil to the market, in many cases, forcing SPR oil into the distribution system would result in an offsetting reduction in domestic commercial oil flows. As we reported in September 2014, it may be more difficult to move oil from the SPR to refineries in certain regions of the United States. For example, since no pipelines connect the SPR to the West Coast, supplies of petroleum products and oil must be shipped by pipeline, truck, or barge from other domestic regions or by tanker from foreign countries. Such modes of transport are slower and more costly than via pipelines. For example, it can take about 2 weeks for a vessel to travel from the Gulf Coast to Los Angeles—including transit time through the Panama Canal. Statutory release authorities: In the course of our ongoing work, we reviewed DOE and energy task force reports that found that the statutory authorities governing SPR releases may inhibit their use for regional disruptions. DOE is authorized to release petroleum distillate (fuel) from the Northeast Home Heating Oil Reserve upon a finding by the President of a severe energy supply interruption that includes a dislocation in the heating oil market or other regional supply shortage. 
On the other hand, because the Northeast Gasoline Supply Reserve is a part of the SPR, DOE can release gasoline from that reserve only after the President makes the statutorily required findings for release from the SPR, which do not explicitly include the existence of a regional supply shortage. According to DOE’s 2016 long-term strategic review of the SPR, a regional product reserve is meant to address regional supply shortages, whereas the SPR, of which the Northeast Gasoline Supply Reserve is a part, is meant to address severe energy supply interruptions that have a national impact. As a result, according to DOE’s 2016 long-term strategic review of the SPR, in practice, this means that the release of the gasoline reserve would have to have a national impact. The Quadrennial Energy Review of 2015 recommended that Congress integrate the authorities of the President to release products from the regional product reserves—the Northeast Home Heating Oil Reserve and Northeast Gasoline Supply Reserve—into a single, unified authority by amending the trigger for the release of fuel from the two refined product reserves so that they are aligned and properly suited to the purpose of a product reserve, as opposed to an oil reserve. As discussed, based on our preliminary observations, DOE has used the SPR in response to domestic supply disruptions, but the effectiveness of these releases is unclear because DOE has not formally assessed all of them. DOE has exchanged about 28 million barrels of oil in response to hurricanes, but we found only two reports assessing DOE’s response to Hurricanes Gustav, Ike, Katrina, and Rita, and it is unclear whether DOE has examined other responses. 
According to a 2006 DOE Inspector General report, DOE used the SPR and its assets with great effectiveness to address emergency energy needs in the crises surrounding Hurricanes Katrina and Rita, but the concentration of SPR sites along the Gulf Coast meant the United States also had to rely on refined petroleum products from Europe. The report noted that despite being in the path of the hurricanes’ destruction, the SPR promptly fulfilled requests for oil from refineries suffering from storm-related supply shortages. However, the damage caused by Hurricane Katrina demonstrated that the concentration of refineries on the Gulf Coast and resulting damage to pipelines left the United States to rely on imports of refined petroleum products from Europe, as part of an IEA collective response. Consequently, regions experienced a shortage of gasoline, and prices rose. DOE testified in 2009 that despite a response from the SPR and IEA, some markets south of Virginia and north of Florida could not be immediately supplied with refined products due to a lack of infrastructure to receive and distribute imports from the Atlantic coast to inland population centers. Exchanges with multiple refiners totaling 5.4 million barrels of SPR oil were authorized in response to Hurricanes Gustav and Ike in 2008. DOE assessed this response and submitted a report to Congress in 2009. According to DOE’s 2009 report, the exchanges conducted in September and October 2008 were successful in providing emergency petroleum supplies to refiners experiencing shortages caused by Hurricanes Gustav and Ike. As we reported in May 2009, as originally enacted, EPCA envisioned the possibility that the SPR would include a variety of petroleum products stored at locations across the country. 
In a 2009 hearing, the then Deputy Assistant Secretary for Petroleum Reserves testified that DOE still considers a large SPR focused on oil storage to be the best way to protect the nation from the negative impacts of a short-term international interruption to U.S. oil imports; however, the hurricanes of 2005 and 2008 showed that the SPR may be limited in its ability to address some short-term interruptions to domestic refined products supply and distribution infrastructure. Based on information reviewed during the course of our ongoing work, to respond to disruptions, 27 of the 29 IEA member countries use one of five reserve structures, also known as stockholding structures, in which these countries hold public reserves, industry reserves, or a combination of these. The five structures are shown in figure 2. Also, most members hold refined petroleum products, with many members holding at least a third of their reserves in refined petroleum products. Some members hold their refined petroleum products in different regions across their country to respond to disruptions. Based on our preliminary analysis of information on the 29 IEA member countries, 18 place a stockholding obligation on industry either exclusively or in part to meet their total emergency reserve needs. Most of these countries distribute the stockholding obligation in proportion to companies’ share of oil imports or of sales in the domestic market. However, several member countries instead impose a higher obligation on refineries because of the large volumes of oil they hold for operations. According to a 2014 IEA report, most IEA members hold some amount of refined petroleum products, and a European Union (EU) directive generally requires EU members to ensure that at least one-third of their stockholding obligation is held in the form of refined petroleum products. 
For example, according to the IEA’s website, Germany’s stockholding agency, Erdolbevorratungsverband (EBV), holds 55 percent of its reserve in refined petroleum products such as gasoline, diesel fuel, and light heating oil. In contrast, the United States holds almost all of its reserves in oil rather than refined petroleum products. Some IEA member countries geographically disperse their reserves of refined petroleum products to be able to respond to domestic disruptions. For example, according to the IEA’s website, to maintain a wide geographical distribution of emergency reserves, the French stockholding agency stores refined petroleum products in each of its seven geographic zones. Moreover, according to the IEA’s website, France’s agency stores petroleum product reserves in each zone; reserves in each zone should represent specified amounts based on consumption in order to respond to emergencies. During a labor strike in December 2013, France used its emergency reserves to supply local gas stations when delivery of fuel was impeded for a prolonged period of time, according to a French document. In another example, the IEA reported that Germany holds petroleum product reserves in several regions in the country and that the reserves are to be distributed throughout Germany, so that a minimum reserve equivalent to a 15-day supply is maintained in each of five designated supply areas. The rationale for this is to prevent logistical bottlenecks that could occur if all emergency reserves were stored centrally, according to a 2014 IEA report. Based on our preliminary observations, DOE has taken some steps to evaluate different structures for holding reserves. However, the agency has not formally evaluated other countries’ structures in over 35 years and has not finalized its 2015 studies on regional petroleum product reserves. 
According to DOE officials, the agency explored the feasibility of adopting the industry structure shortly after creating the SPR and concluded that this and other structures were not feasible in the United States. In 1980, DOE studied the feasibility of adopting the agency structure, which is the most similar to the SPR since the only major difference is how the reserve is funded, according to DOE officials. According to IEA documents, in the agency structure, generally the reserve is funded by a tax or levy on products or industry, which is passed down to the consumer. In contrast, the SPR is funded through congressional appropriations. However, DOE officials we interviewed cautioned that the agency has not reassessed its findings from 35 years ago. As mentioned above, in 2016 DOE reassessed the SPR in light of the changing global oil market, but this assessment did not include a review of other IEA countries’ structures. Our preliminary review indicates that DOE examined the feasibility of additional regional petroleum product reserves in two 2015 studies, but it did not finalize these studies or expand the SPR to include additional reserves. In September 2014, we reported that DOE officials told us they were conducting a regional fuel resiliency study that would provide insights into whether there is a need for additional regional product reserves and, if so, where these reserves should be located. The Quadrennial Energy Review of 2015 recommended that the agency analyze the need for additional or expanded regional product reserves by undertaking updated cost-benefit analyses for all of the regions of the United States that have been identified as vulnerable to fuel supply disruptions. Figure 3 illustrates vulnerabilities that DOE identified in 2014. In response to the 2015 recommendation, DOE contractors studied the feasibility of additional regional petroleum product reserves, as part of the SPR, in the U.S. 
Southeast and West Coast regions to address supply vulnerabilities from hurricanes and earthquakes, respectively. According to DOE officials, weather events in the Southeast are of higher probability but lower consequence, and events on the West Coast are of lower probability but higher consequence. DOE did not finalize its 2015 studies on regional petroleum product reserves or make them publicly available. According to DOE officials, because consensus could not be reached within the Administration on several issues associated with the refined product reserve studies, these studies were not included as part of DOE’s 2016 long-term strategic review of the SPR. Our ongoing work indicates that DOE’s 2016 long-term strategic review of the SPR did not account for the risks of domestic supply disruptions as a factor in determining the appropriate size, location, and composition of the SPR. Prior to the two 2015 studies, in 2011, DOE carried out a cost-benefit study of the establishment of a refined product reserve in the Southeast and estimated that such a reserve would reduce the average gasoline price rise by 50 percent to 70 percent in the weeks immediately after a hurricane landfall, resulting in consumer cost savings, according to the Quadrennial Energy Review of 2015. In closing, I note that we are continuing our ongoing work examining issues that may help inform future considerations for the SPR. Given the constrained budget environment and the evolving nature of energy markets and their vulnerabilities, it is important that DOE ensure the SPR is an efficient and effective use of federal resources. We look forward to continuing our work to determine whether additional DOE actions may be warranted to promote this objective. Chairman Upton, Ranking Member Rush, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. 
If you or your staff have any questions about this testimony, please contact Frank Rusco, Director, Natural Resources and Environment, at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Quindi Franco, Assistant Director; Philip Farah, Ellen Fried, Nkenge Gibson, Cindy Gilbert, Gregory Marchand, Patricia Moye, Camille Pease, Oliver Richard, Danny Royer, Rachel Stoiko, Marie Suding, and Kiki Theodoropoulos. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Over 4 decades ago, Congress authorized the SPR—the world's largest government-owned stockpile of emergency oil—to release oil to the market during supply disruptions and protect the U.S. economy from damage. The SPR is managed by DOE. According to DOE's strategic plan, the SPR benefits the nation by providing an insurance policy against actual and potential interruptions in U.S. petroleum supplies caused by international turmoil and hurricanes, among other things. The SPR also helps the United States meet its obligations as one of 29 members of the IEA—an international energy forum established to help members respond to major oil supply disruptions—including holding reserves of oil or refined petroleum products equaling 90 days of net petroleum imports. The SPR held almost 674 million barrels of oil at the end of September 2017. This testimony primarily focuses on preliminary observations from ongoing work on (1) DOE's use of the SPR in response to domestic petroleum supply disruptions, (2) the extent to which the SPR is able to respond to domestic petroleum supply disruptions, and (3) how other IEA members structure their strategic reserves and the extent to which DOE has examined these structures. GAO reviewed past work from August 2006 through September 2014 and DOE and IEA documentation. GAO also interviewed DOE and IEA officials, as part of GAO's ongoing work. GAO's preliminary analysis of Department of Energy (DOE) documents indicates that DOE has primarily used the Strategic Petroleum Reserve (SPR) to exchange oil with companies in response to domestic supply disruptions, such as hurricanes. In the event of a supply disruption, the SPR can supply the market by either exchanging oil for an equal quantity of oil plus an additional amount as a premium to be returned to the SPR in the future or selling stored oil. Since the SPR was authorized in 1975, DOE has released oil 11 times in response to domestic supply disruptions. 
All but one were in the form of an exchange, including six exchanges in response to hurricanes. For example, Hurricane Harvey in 2017 closed or restricted ports through which 2 million barrels of oil per day were imported. In response, DOE exchanged 5 million barrels of oil with Gulf Coast refineries. According to DOE officials, exchanges from the SPR allowed refineries to operate, ensuring continued production of refined petroleum products for use by consumers. Based on past GAO work and preliminary observations, the SPR is limited in its ability to respond to domestic supply disruptions, including severe weather events, for three main reasons. First, as GAO reported in September 2014 (GAO-14-807), the SPR is almost entirely composed of oil and not refined products like gasoline, which may not be effective in responding to all disruptions. For example, following Hurricanes Katrina and Rita, nearly 30 percent of U.S. refining capacity was shut down for weeks, disrupting supplies of gasoline and other petroleum products. The SPR could not mitigate the effects of disrupted supplies. Second, as GAO also reported in September 2014, the SPR is nearly entirely located in the Gulf Coast, so it may not be responsive to disruptions in other regions, such as the West Coast. Third, GAO's ongoing work reviewed DOE and energy task force reports that found that statutory authorities governing SPR releases may inhibit their use for regional disruptions. GAO's preliminary observations show that other International Energy Agency (IEA) member countries generally have used one of five reserve structures configured in various ways. The structures are defined by whether countries hold either public reserves (e.g., the SPR), industry reserves (e.g., placing reserve holding requirements on industry), or a combination. Most IEA members hold refined petroleum products in reserve, with many members holding at least a third of their reserves in these products. 
For example, in Germany, 55 percent of reserves are in petroleum products. In addition, some IEA members' reserves are geographically dispersed in their countries to respond to disruptions. For example, France has reserves in each of its seven regions and has used these to address fuel supply disruptions as a result of recent domestic strikes. DOE has taken some steps to evaluate other structures but has not formally evaluated the structures of other countries in over 35 years. In addition, DOE contractors studied the feasibility of regional product reserves in the Southeast and West Coast regions to address supply vulnerabilities from hurricanes and earthquakes, respectively, but DOE did not finalize the two 2015 studies. In 2016, DOE released a long-term strategic review of the SPR that Congress had required and GAO recommended. However, DOE did not include the results of the two studies in its 2016 review. GAO is not making recommendations but will consider making them, as appropriate, as it finalizes its work.
|
Spectrum is a natural resource used to provide a variety of communication services to businesses and consumers, as well as federal, state, and local governments. Businesses and consumers use spectrum for a variety of wireless services including mobile voice and data, Wi-Fi- and Bluetooth-enabled devices, broadcast television, radio, and satellite services. Federal, state, and local governments’ uses of spectrum include national defense, law enforcement communication, air-traffic control, weather services, military radar, and first responder communications. IoT applications that rely on spectrum are highly diverse and include connected vehicles, devices in the home, and personal mobile devices. IoT devices communicate using wireless networks, including wide area networks that use cellular networks to cover large areas (e.g., cellular transmission), local area networks that cover about 100 meters (e.g., Wi-Fi within a house), and personal networks covering about 10 meters (e.g., Bluetooth inside a room) (see fig. 1). Each of these wireless devices, like other wireless IoT devices, communicates using spectrum, and the number of connected devices is expected to increase. In 2013, the number of devices connected to the internet globally was estimated to be over 9 billion. In 2015, the Organisation for Economic Cooperation and Development (OECD) estimated that a family of four had an average of 10 devices connected to the internet in their household, and that this average will increase to 50 devices by 2022. As companies bring new IoT technologies and services to market and government users develop new mission needs, the demand for spectrum will increase. The frequencies, or frequency bands, of spectrum have different characteristics that make them more or less suitable for specific purposes, depending on the specific band (see fig. 2). 
These bands have different levels of ability to penetrate physical obstacles and cover distances, known as “propagation,” and different limits to the amount of information that they can carry, known as data capacity, and are used for different communication purposes. Low frequency bands are characterized by strong propagation, and are used by numerous IoT devices, some of which may only transmit small amounts of information such as temperature, location, or activity status. The strong propagation of low bands means they can transmit over long distances. Mid-band frequencies have higher data capacity than low bands (because, in part, frequency allocations in higher bands are larger, allowing wider channels), as well as stronger propagation qualities than higher bands. The bands above 30 GHz have high data capacity but relatively poor propagation, to the point that bands at the highest frequencies can be easily obstructed. This spectrum is currently used by a variety of services, including satellite, fixed microwave, and radio astronomy, and is expected to be important for the next-generation wireless technology (5G). FCC is the federal agency responsible for allocating spectrum for various consumer and commercial purposes, assigning spectrum licenses, and making spectrum available for use by unlicensed devices. Licensing assigns frequencies of spectrum, in a specific area, to a specific entity, such as a telecommunications company that operates a network using licensed spectrum. We refer to these bands as licensed spectrum. In some frequency bands, FCC authorizes unlicensed use of spectrum bands—generally referred to as unlicensed spectrum—that is, users do not need to obtain a license to use spectrum. Rather, users of unlicensed devices can share frequencies on a non-interference basis, such as with home wireless networks, cordless phones, and garage door openers. In addition, FCC supports federal emergency-communications activities. 
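The trade-off between frequency and propagation can be made concrete with the standard free-space path loss formula, which shows how much more a signal attenuates at millimeter-wave frequencies than in the low bands. The following sketch is illustrative only (it is not drawn from the report) and compares loss over one kilometer at 900 MHz and 28 GHz, two example frequencies chosen for contrast:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 3e8  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Compare loss over 1 km at a low band (900 MHz) and a millimeter-wave band (28 GHz).
low = fspl_db(1000, 900e6)
high = fspl_db(1000, 28e9)
print(f"900 MHz over 1 km: {low:.1f} dB")
print(f"28 GHz over 1 km:  {high:.1f} dB")
print(f"Extra loss at 28 GHz: {high - low:.1f} dB")
```

The roughly 30 dB difference (a factor of about 1,000 in power) illustrates why, as the report notes, low bands transmit over long distances while the highest bands are easily obstructed and serve short-range, high-capacity uses.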
NTIA is responsible for establishing policy on regulating federal spectrum use, assigning spectrum bands to government agencies, and maintaining spectrum use databases. Additionally, like FCC, NTIA participates in federal emergency communications activities. NTIA also determines what spectrum bands reserved for the federal government can be made available for commercial use. In managing spectrum, one factor that FCC and NTIA consider is the potential for interference. Harmful interference occurs when two communication signals are either at the same frequencies or close to the same frequencies in the same vicinity, a situation that may lead to degradation of a device’s operation or service. Co-channel interference occurs when two communications systems operate on the same frequency assignment in the same vicinity. Adjacent band interference occurs between two communication systems operating on different, but adjacent frequencies in the same geographic area. Another source of interference can be signals on adjacent spectrum bands leaking into another band. FCC and NTIA work to make more efficient use of spectrum that has been assigned. One means of more efficiently using spectrum is to share it, between and among both federal users and commercial users. In 2017, FCC and NTIA continued oversight of the development of a new spectrum-sharing mechanism called the Spectrum Access System (SAS) in the 3.5 GHz band. Among other things, the SAS allows multiple users access to the same band at different times or places. Within this spectrum band, the SAS establishes a three-tiered system of access priority, with federal and non-federal incumbent users having first priority, new non-federal users who have paid for licensed access as second priority, and other users as third priority. This system relies on the SAS to assign frequencies by determining if a frequency is in use by a higher priority user before assigning it to a lower priority user. 
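The three-tiered priority check described above can be sketched as a small assignment routine. This is an illustrative simplification, not the actual SAS implementation: the channel and user names are hypothetical, and a real SAS also coordinates sharing among same-tier users, which this sketch omits.

```python
from dataclasses import dataclass, field

@dataclass
class SpectrumAccessSystem:
    """Toy model of tiered access: tier 1 = incumbents, tier 2 = paid
    licensed access, tier 3 = general access (lower number = higher priority)."""
    # channel -> (tier, user) of the current occupant; absent means unassigned
    assignments: dict = field(default_factory=dict)

    def request(self, channel: str, user: str, tier: int) -> bool:
        """Grant the channel if it is free or held only by a lower-priority tier."""
        occupant = self.assignments.get(channel)
        if occupant is None or tier < occupant[0]:
            self.assignments[channel] = (tier, user)
            return True
        return False

sas = SpectrumAccessSystem()
print(sas.request("ch1", "general-user", 3))  # True: channel is free
print(sas.request("ch1", "licensee", 2))      # True: tier 2 preempts the tier-3 user
print(sas.request("ch1", "general-user", 3))  # False: tier 2 now holds the channel
```

The check mirrors the report’s description: before assigning a frequency, the system determines whether a higher-priority user already occupies it.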
Stakeholders representing IoT network providers, device manufacturers, users, and federal regulators consistently identified two spectrum-related challenges to the continued growth and development of IoT: (1) ensuring the availability of sufficient spectrum and (2) managing the harmful interference from the increasing number of IoT devices. While not currently a crisis, the stakeholders we spoke to agreed that ensuring the availability of sufficient amounts and the right kinds of spectrum is a key challenge for supporting the growth of IoT. Specifically, stakeholders cited three dimensions of the spectrum availability challenge: the amount, the balance between licensed and unlicensed, and the variety of spectrum bands available. According to some reports, incorrectly anticipating industry needs in any of these areas could weaken IoT growth and development in the United States. Amount of spectrum: The amount of spectrum needed for IoT devices is expected to increase with their growth. According to a majority of stakeholders we interviewed, FCC will need to continue to make additional spectrum commercially available in order to meet the demand from expected rapid growth in wireless devices, including IoT devices. FCC officials told us the current amount of available spectrum will be sufficient for the growth of IoT unless devices that use a disproportionally large amount of spectrum become more prevalent. Such devices, like those that stream video, could lead to a spectrum shortage that negatively impacts IoT growth. According to several stakeholders, spectrum availability will become an issue as use of these devices increases. FCC officials said that cellular providers experienced similar issues when they introduced smart phones, spurring rapid, exponential growth in consumer demand to send and receive wireless data. 
Despite the potential for a shortage of spectrum for IoT devices, most of the stakeholders agreed that there should not be specific spectrum set aside for IoT devices; rather, some noted spectrum policies should remain flexible, allowing licensees to determine the best use. Licensed and unlicensed spectrum: A majority of stakeholders said that the spectrum availability challenge includes making both licensed and unlicensed spectrum available. According to FCC staff, FCC is responsible for ensuring sufficient spectrum exists for commercial purposes and will continue to identify new spectrum that can be used for a variety of uses, including by IoT and other wireless devices. This identification of new spectrum includes making spectrum available on both a licensed and unlicensed basis to meet the needs of IoT and other wireless devices. For example, some devices may need to send a signal over a long distance and with a high quality of service to ensure a signal will go through, such as a fire alarm, something licensed spectrum can provide. However, for other devices, cost is a more important consideration. Licensed spectrum has costs that can come from purchasing the license or accessing the spectrum. For example, an official from a supply-chain automation company that develops radio-frequency identification tags told us the lack of inexpensive, low power networks that provide broad coverage is a challenge for their business. With such a network, the company’s tags could send out small amounts of data at intervals to help manufacturers track their goods. However, the cost of such a service is important if these tags are to be attached to packages of all sizes, because paying for GPS or a wireless connection for each would be infeasible. According to several stakeholders, the correct balance between licensed and unlicensed spectrum is difficult to know. 
Spectrum bands: Several stakeholders indicated that the need to make various spectrum bands available for IoT devices contributes to the spectrum management challenge. As previously described, each band of spectrum has different characteristics, such as the ability to carry data long distances and penetrate obstacles. IoT devices have diverse spectrum needs, such as needing to send a signal over a distance or send a constant stream of information. For example, in the package delivery industry there could be IoT devices, sending signals over a distance, that read the location of the vehicle and direct the driver on a different route based on traffic and deliveries. In addition, there are IoT devices that can monitor containers being delivered including their location, temperature within the container, and other characteristics. In both these examples, the devices can send signals over long distances to systems that can monitor the information. Some stakeholders and FCC staff also agreed that managing interference caused by the increasing number of IoT devices will challenge the continued growth of IoT. As previously stated, interference occurs when signals in the same vicinity attempt to access the same spectrum bands or bands close to each other, causing the signals to degrade. This can lead to intermittent access, poor reception, or no reception. As the number of wireless IoT devices grows, the chance of harmful interference increases. The number of IoT devices is predicted to grow so fast that instances of harmful interference could be difficult to track. Furthermore, according to one stakeholder, with devices being made by more manufacturers, not all devices are of equal quality, potentially further increasing the chance that such devices will cause interference. 
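The link between device count and interference risk can be illustrated with a simple probability sketch: if each device transmits independently for a small fraction of the time (its duty cycle), the chance that two or more transmit at once on a shared channel rises quickly with the number of devices. The 1 percent duty cycle below is an assumed figure for illustration, not a value from the report:

```python
def collision_probability(n_devices: int, duty_cycle: float) -> float:
    """P(at least two devices transmit simultaneously), assuming each device
    transmits independently with probability duty_cycle at any instant."""
    p_none = (1 - duty_cycle) ** n_devices
    p_one = n_devices * duty_cycle * (1 - duty_cycle) ** (n_devices - 1)
    return 1 - p_none - p_one

# Collision risk on one shared channel as the device population grows.
for n in (10, 50, 200):
    print(f"{n:>4} devices at 1% duty cycle: {collision_probability(n, 0.01):.3f}")
```

Even at a low duty cycle, the probability of overlapping transmissions grows from well under 1 percent with 10 devices to better than even odds with 200, consistent with the stakeholders’ concern that growth in device numbers drives interference.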
A recently issued GAO report found that, according to FCC staff, the expansion in wireless services and devices, not just IoT, has contributed to interference becoming more of a challenge for FCC. FCC staff agreed that managing interference is becoming more challenging as the number of wireless IoT devices grows. However, according to FCC staff, relatively few complaints pertaining to licensed services involve devices that are compliant with FCC regulations and operating properly. Managing interference may be particularly difficult in homes where many devices rely on unlicensed spectrum. The FCC Technological Advisory Council’s (TAC) 2014 report expressed concerns that the rapid growth of IoT could exacerbate interference issues in the home. In particular, the growing reliance on unlicensed spectrum for many consumer IoT devices has contributed to this concern. For example, many IoT devices using unlicensed spectrum, such as digital assistants or wireless speaker systems, use Wi-Fi, Bluetooth, or similar technology to transmit a short distance to a smart phone or Wi-Fi router. Not all agree, however, that this use is an issue. One spectrum expert we interviewed for a recently-issued report said that interference among consumer devices is less likely to be an issue because they only transmit for short durations and over short distances. If the devices only transmit a short distance then many devices can transmit on the same spectrum. Similarly, if devices only transmit for short durations then they can take turns transmitting over the same spectrum. To plan for spectrum needs, FCC has repurposed spectrum by making additional spectrum available for commercial purposes and, according to FCC officials, the agency is continuing to look for additional opportunities to do so. For example, in 2016, FCC issued a final order that opened up high-band spectrum (above 24 GHz) for use with 5G networks and applications. 
This particular rulemaking from FCC opened up a total of 10.85 GHz of spectrum, 3.85 GHz for licensed mobile use and 7 GHz for unlicensed use. According to FCC, this order follows a technology neutral approach to planning by allowing spectrum users to develop technologies for the spectrum and not have FCC dictate its specific use. Advances in technology that now allow use of spectrum above 24 GHz for high-speed mobile services led the FCC to initiate the proceeding resulting in this order. Previously, this spectrum was best suited for various satellite or fixed microwave applications. As shown in table 1, in recent years FCC has freed up spectrum for licensed use, unlicensed use, and sharing between the two. In 2016, FCC issued a proposed rule to allow mobile uses in an additional 17.7 GHz of spectrum. In 2017, the FCC issued a Notice of Inquiry seeking input on potential opportunities for additional flexibility, particularly for wireless broadband services, in spectrum bands between 3.7 and 24 GHz. However, according to FCC staff, the process of identifying and freeing up new spectrum can take a significant amount of time as FCC must complete a rulemaking and either relocate existing users or define sharing arrangements between the existing users and new users. FCC has also proposed sharing mechanisms it hopes will allow some bands to be used by existing users as well as for additional uses in the future. Other efforts to make additional spectrum commercially available have included examining the potential for sharing the 5.9 GHz band that FCC designated for transportation safety. This band was allocated over 15 years ago and designated exclusively for safety communication between vehicles and between vehicles and infrastructure. In recent years, FCC has worked with the automobile industry and Department of Transportation to assess whether all or a portion of that spectrum could be shared. 
FCC is also monitoring development of specifications to support 5G—the next generation of wireless networks. According to FCC, the 5G technologies that providers develop are projected to bring wireless networks lower latency, better coverage, faster Internet connections, and allow for more connections than the existing cellular network, all of which may enable more IoT devices to be connected. However, 5G technology is still being developed, and while specifications are not fully defined, according to the plans from the standards-making bodies, there will be particular standards designed to support IoT communications. In 2016, NTIA issued a report on the potential roles of the federal government in support of the growth of IoT. It addressed specific questions regarding the spectrum needs and potential interference related to IoT devices and reaffirmed the government’s role in supporting technology growth. Furthermore, the report identified ongoing initiatives that support IoT as well as proposed future steps the Department of Commerce can take to further support IoT development. For example, NTIA’s report proposed that it continue to analyze the usage and growth of IoT devices through its survey used to collect its Digital Nation data. Recent Digital Nation surveys have asked about wearable devices, use of smart televisions, and use of Internet-enabled mobile phones, all uses that include IoT applications. The most recent survey, in 2015, also asked Internet users whether they interact with household equipment or appliances via the Internet. NTIA officials recently told us that they will continue to monitor these connected items to track trends in their use but do not intend to expand the survey to include questions about additional IoT devices. Specifically, in January 2017, NTIA sought public comment on its November 2017 Digital Nation survey including comment on a proposed questionnaire. 
NTIA subsequently submitted its proposed questionnaire to the Office of Management and Budget for final approval. NTIA also has ongoing spectrum studies through its Institute for Telecommunications Sciences and the findings may apply to IoT’s use of spectrum. As shown in table 2, these studies touch on a number of areas related to IoT including interference issues and spectrum use. NTIA also co-chairs the Wireless Spectrum Research and Development Interagency Working Group that coordinates spectrum-related research and development activities across the federal government, academia, and the private sector. Among other activities, this working group has developed the Wireless Spectrum Research and Development Inventory that, in its 2016 iteration, provides information on completed projects or those scheduled to be completed between January 1, 2015 and December 31, 2018. FCC has a strategic goal of promoting economic growth, and one way FCC pursues that goal is by ensuring that there is sufficient spectrum to support commercial demand. Most stakeholders agree that the growth in mobile IoT devices will eventually require additional spectrum to operate effectively. According to some stakeholders we interviewed and reports we reviewed, rapid, unexpected growth in two areas could lead to congestion and interference that could slow the growth of IoT in the United States: (1) high-bandwidth devices and (2) devices that operate in unlicensed bands. Federal standards for internal control instruct agencies to address risks such as these by estimating the significance of the risk, analyzing the likelihood of it occurring, and assessing its nature. Such assessments can be used to determine how to respond to the potential risks that could prevent agencies from meeting their goals. Rapid growth in high-bandwidth and unlicensed-spectrum devices represents a risk to FCC achieving its goal of promoting economic growth by ensuring that sufficient spectrum is available. 
FCC officials said that the agency tracks industry-produced trends and projections related to spectrum demand and use but does not focus on specific devices. Rather, it relies on network providers to manage and track the spectrum related to specific device types. When more spectrum is needed, FCC officials said that FCC identifies additional spectrum and makes it available to the commercial sector. However, this reactive approach may not adequately address the risks caused by high-bandwidth and unlicensed-spectrum devices. High-bandwidth devices: Some stakeholders we interviewed and FCC officials said that rapid increases in high-bandwidth IoT devices could overwhelm current wireless networks. Such IoT devices could include video-streaming devices or unmanned drones, which have much higher data needs and will require substantial bandwidth. FCC officials said that the supply of spectrum has not always kept pace with demand caused by rapid increases in high-bandwidth devices. For example, the officials said that wireless networks were overwhelmed when providers introduced smart phones. Until then, ringtones represented the bulk of demand for wireless data, but mobile Internet browsing caused the demand for wireless data to increase severalfold. In 2014, the FCC TAC warned that new IoT applications could overwhelm networks the same way smartphones and other new technologies have in the past. The TAC recommended that FCC monitor IoT wireless networks with a specific focus on high-bandwidth devices. Unlicensed spectrum use: Some stakeholders also said that unlicensed bands are particularly vulnerable to congestion and potential interference because of expected growth in IoT devices. For example, all the commercial, industrial, and personal devices that connect using WiFi and Bluetooth networks use unlicensed spectrum. 
In 2014, the TAC indicated that the majority of wireless IoT devices will rely on unlicensed spectrum and recommended FCC make sufficient unlicensed spectrum available for devices operating on local and personal area networks, like WiFi and Bluetooth. However, FCC may not have enough information to determine when the amount of unlicensed spectrum is sufficient. While network providers can manage the number of devices on their own licensed networks, this approach does not work for devices that use unlicensed spectrum, and FCC does not track unlicensed spectrum utilization because such congestion is geographically and technically challenging to track. Specifically, it is geographically challenging because network congestion and demand can vary over very short distances, and technically challenging because many bands of spectrum would have to be tracked at one time and unlicensed spectrum typically propagates over relatively short distances. However, there may be ways to track unlicensed use that do not require direct monitoring. For example, NTIA’s Digital Nation survey provides information on select IoT devices using unlicensed spectrum that could help track unlicensed spectrum use. While FCC makes additional spectrum available when needed, it lacks an early warning system for high-risk sectors, like high-bandwidth and unlicensed-spectrum devices. Identifying and reallocating spectrum is a lengthy process that can take years, including the need to identify new bands, address the needs of existing users on the bands, establish service rules, and license or assign the spectrum for commercial uses. Without tracking high-bandwidth and unlicensed-spectrum devices, FCC is not assessing a key risk associated with its goal of promoting economic growth. 
Rapid, unexpected growth in these IoT sectors could lead to spectrum congestion and interference that could slow or halt the economic growth associated with IoT until FCC can make additional spectrum available. Like the United States, France, Germany, the Netherlands, and South Korea are among the world leaders in the development of IoT. We contacted public and private officials in these countries to identify their approaches to spectrum planning to address the growth of IoT. Those officials described approaches to planning for future spectrum needs that are similar to those of the United States in one area but different in others (see table 3). Specifically, we found that all four countries practice technology neutral spectrum planning, an approach that was broadly supported by the stakeholders we interviewed, including wireless carriers, a technology manufacturer, academics, and a nonprofit group. Some of these stakeholders indicated that this approach to spectrum planning encourages innovation as it allows developers to choose the most appropriate spectrum bands for new technology without having to take the extra step of getting regulators’ permission for each new device or application. Two of the selected leading countries, Germany and South Korea, have developed national IoT plans focused on developing IoT for industry; however, only South Korea has a plan that specifically addresses spectrum issues. South Korea’s national IoT plan seeks to increase collaboration among IoT stakeholders, promote innovation, and develop services for the global market in order to promote productivity and efficiency in Korean business. South Korea also developed a mid- to long-term spectrum plan to respond to the expected growth in demand for spectrum as IoT expands and 5G cellular networks are deployed. 
Released in 2016, the plan intends to make more spectrum available to support new services such as smart homes, smart factories, smart cities, remote medical treatment, and unmanned vehicles. Specifically, the South Korean spectrum plan includes IoT and establishes the following goals: almost doubling the amount of available spectrum, from 44 GHz to 84 GHz, by 2026; increasing the efficiency of spectrum use; promoting spectrum sharing; and advancing international coordination in spectrum planning. Officials from France and the Netherlands told us that making more unlicensed spectrum available is a high priority in their spectrum planning. These officials told us that unlicensed spectrum promotes greater innovation by lowering barriers to access, and many IoT devices are expected to be designed to operate on unlicensed bands. German and Dutch officials told us that numerous smart city IoT applications have been developed in their respective countries, most of which operate on unlicensed spectrum. For example, German and Dutch networks use unlicensed spectrum for purposes that include managing street lighting, preventing the theft of property such as bicycles, monitoring parking spaces, and managing agricultural resources. To provide service options for low power IoT devices, private companies in France, the Netherlands, and South Korea developed nationwide low-power wide-area networks (LPWAN) which use unlicensed spectrum to transmit data. These LPWANs use the 800 and 900 MHz bands to transmit data from wireless IoT devices such as sensors and location trackers. Signals in these bands can be transmitted over long distances and can penetrate obstacles. According to one LPWAN provider, the distance served by a LPWAN site is greater than a single cellular network site. 
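A rough airtime calculation illustrates why LPWANs suit devices that send only small amounts of data: at the very low bit rates typical of such networks, even a short sensor reading occupies appreciable airtime, while a cellular link clears it almost instantly. The bit rates and payload size below are assumed, illustrative values, not figures from the report:

```python
def airtime_s(payload_bytes: int, bitrate_bps: float) -> float:
    """Seconds of airtime to send a payload at a given raw bit rate
    (protocol overhead ignored for simplicity)."""
    return payload_bytes * 8 / bitrate_bps

# Assumed, illustrative rates: a low-rate LPWAN link vs. a cellular data link.
sensor_payload = 12  # bytes: e.g., a temperature reading plus a location fix
print(f"LPWAN    (300 bps):  {airtime_s(sensor_payload, 300):.2f} s")
print(f"Cellular (10 Mbps):  {airtime_s(sensor_payload, 10e6) * 1000:.4f} ms")
```

The contrast makes the trade-off concrete: a network built on strong-propagation, low-rate bands covers long distances cheaply but can only support small, infrequent messages, consistent with the uses (metering, location, temperature) the stakeholders describe.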
However, according to the same LPWAN operator, the bands used for LPWAN networks have limited data capacity compared to those used by cellular networks. According to officials and telecommunications industry stakeholders in these countries, LPWANs offer several potential benefits including low barriers to entry, low costs, and broad coverage. According to a Dutch telecommunications industry stakeholder, most devices that use LPWANs transmit only small amounts of data. A telecommunications industry stakeholder in France told us that the long range and strong propagation of these LPWANs make them useful for utility metering data, and a South Korean official told us that LPWANs are used to transmit location or temperature data. For example, in the Netherlands, LPWANs are used to monitor water depth and quality, manage street lighting, and to track the location of business inventory and personal property. In France, LPWANs are used for similar tracking as well as smoke detectors. Other uses for the LPWANs are currently in development. For example, a representative of a Dutch telecommunications company told us that in the Netherlands, IoT devices operating on the nationwide LPWAN are being tested at an airport for use in logistical processes such as baggage handling. Additionally, a Dutch railway station is experimenting with IoT technology that monitors rail switches using the LPWAN, and depth sounders at the port of Rotterdam have been fitted with devices to connect them to the network. South Korean officials said that the LPWAN in their country also provides specialized location-tracking services. The selected leading countries take many approaches to managing spectrum that are similar to one another’s and to those of the United States in order to address related challenges (see table 4). Like the United States, spectrum-planning officials in France, Germany, and the Netherlands told us that it was necessary to coordinate spectrum planning with other countries on their borders. 
Officials in each of these countries told us that European spectrum planning is complicated by the number of countries that share borders. Germany, for example, borders nine other countries. As each country is responsible for its own spectrum planning, if their plans are not closely coordinated, there is a potential for cross-border interference. This coordination is complicated by the fact that European countries have legacy spectrum allocations, and these must be accommodated in spectrum planning. The United States, by contrast, shares its border with only Mexico and Canada. According to FCC officials, both of these countries generally align their spectrum plans to those of the United States, reducing interference issues. In order to facilitate international coordination of spectrum planning, each of the four selected leading countries, like the United States, belongs to a regional spectrum-planning association that works to harmonize spectrum planning among member states. Officials of regional groups we spoke with told us that harmonizing can reduce interference issues across borders and facilitate interoperability of devices across different countries. Officials from the manufacturing and telecommunications industries told us that this interoperability creates a larger potential market for IoT devices, thereby improving the economies of scale for the manufacture of IoT devices and reducing production costs. Regional planning associations are also taking steps to prepare their member countries for the spectrum needs of IoT. For example, an official of one association, the Inter-American Telecommunication Commission, told us that in 2016 it held a workshop on “machine-to-machine” technologies that brought together spectrum planners and stakeholders from IoT-related industries. Regional planning associations also represent their member countries at World Radiocommunication Conferences (WRC). 
An official from one association told us that due to the diverse nature of IoT devices and applications it is unnecessary for IoT to be explicitly addressed as an agenda item at WRCs. However, the official further stated that the spectrum needs of specific IoT applications— including low power sensors, robotics, and connected vehicles—are included on the agenda. For example, the next WRC is scheduled for 2019 and includes an agenda item addressing connected vehicles, which are closely linked to IoT. Spectrum-planning officials in each of the selected leading countries told us they are concerned about the potential for spectrum congestion, due to growth in the number of IoT devices. However, like FCC in the United States, these officials do not currently believe such congestion presents an immediate problem. Representatives of the four countries we spoke with told us that one way that they address the potential challenge of spectrum congestion is through the use of spectrum-sharing arrangements. Representatives from Germany specifically stressed the importance of finding additional sharing arrangements in response to the expected spectrum needs for IoT. In 2016, both France and the Netherlands initiated pilot programs for spectrum sharing in which multiple users access the same bands while prioritizing use by the licensee. These pilot programs are similar to the dynamic-sharing model that FCC adopted in 2015, as described previously. However, whereas the model adopted by FCC has three tiers of users, the model used by France and the Netherlands has only two, and lacks the third tier of general access users. Unlike the United States, officials from Germany and France told us that they directly monitor spectrum congestion. For example, German officials told us that there are spectrum-monitoring services at six locations around the country, and that they perform mobile measurements of spectrum congestion. 
FCC officials told us that their primary means of tracking congestion is to communicate with spectrum licensees. According to officials from the Netherlands, the Dutch spectrum management agency takes a similar approach and has struck an agreement with a group of telecommunications companies to share information concerning IoT’s interference and congestion issues. Officials also told us that it is easier to monitor spectrum congestion in smaller countries, as there is simply less geographical space to monitor. Nevertheless, officials in France, Germany, and the Netherlands told us that monitoring spectrum is a challenging task, as it is difficult to determine how many wireless devices are active at any given time. FCC has a strategic goal to promote economic growth, and effective spectrum management represents a key way that FCC can support meeting that goal. To that end, FCC officials said that the agency continuously seeks to make additional spectrum available and broadly tracks spectrum demand. However, stakeholders and FCC’s own technical advisors have identified rapid, unexpected growth in both high-bandwidth devices and unlicensed spectrum as risks to effective spectrum management. By overwhelming existing networks before FCC can make more spectrum available, rapid growth in spectrum demand could slow or halt IoT’s potential to facilitate economic growth. Absent additional efforts to assess the risks to effective spectrum management by focusing on high-bandwidth and unlicensed-spectrum devices, spectrum congestion and interference could slow IoT growth. We are making the following two recommendations to the Chairman of FCC. The Chairman of FCC should track the growth in high-bandwidth IoT devices, such as video-streaming devices and optical sensors. (Recommendation 1) The Chairman of FCC should track the growth in IoT devices relying on unlicensed spectrum. 
(Recommendation 2) We provided a draft of this report to FCC and the Department of Commerce for their review and comment. FCC provided comments in a letter, which is reprinted in appendix IV. FCC and the Department of Commerce provided technical comments that we incorporated as appropriate. In its written comments, FCC did not concur with our recommendation that it track growth in high-bandwidth devices. FCC noted that it continues to believe that the best approach to track growth of devices is by monitoring overall traffic statistics and forecasts and how these devices affect aggregate spectrum requirements for all applications and services. However, FCC noted that it would task the Technological Advisory Council (TAC) to periodically review the state of the IoT ecosystem to ensure that the planned communications infrastructure is sufficient to support the needs of the growing sector and advise on any actions the FCC should take. We continue to believe that tracking the growth of high-bandwidth devices is necessary to avoid a potential spectrum shortage and that the TAC may be able to help FCC accomplish that. FCC did not concur with our recommendation to track IoT devices that rely on unlicensed spectrum. FCC noted that it may not be practical to determine which devices qualify as IoT or quantify their effect on spectrum utilization. As a result, FCC said that the best way to monitor growth in unlicensed IoT devices is to continue to monitor published papers and conferences and work with industry. However, since most of the projected IoT growth is expected to occur in unlicensed bands that are not protected from interference, we continue to believe that FCC should place a greater focus on tracking IoT devices in these bands. For example, the TAC may also be well positioned to help FCC track unlicensed IoT devices. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Homeland Security and Commerce, the Chairman of FCC, and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or members of your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. We were asked to examine the challenges facing federal spectrum managers and the steps they are taking to address those challenges. In this report we discuss: (1) the spectrum-related challenges stakeholders identified due to the anticipated growth of IoT, (2) steps FCC and NTIA are taking to plan for the anticipated growth in the demand for spectrum as a result of IoT, and (3) efforts that selected leading countries are undertaking to plan for IoT’s spectrum needs and ways that these efforts compare with those of the United States. To identify the spectrum-related challenges stemming from the expected growth of IoT, we reviewed documents from the Federal Communications Commission (FCC) and the National Telecommunications and Information Administration (NTIA), the two federal agencies that have direct authority over spectrum planning. 
Further, in order to identify relevant literature for review, we (1) conducted a key word search of databases; (2) searched IoT- and spectrum-related websites, such as those of cellular carriers, telecommunications industry groups, and nonprofit organizations; (3) reviewed prior GAO reports on IoT and spectrum issues; and (4) asked FCC and NTIA officials, researchers, and nonprofit organizations to identify relevant documents. Through our literature search, we identified a number of documents, including academic reports, government reports, congressional committee hearings, and trade journals addressing the projected growth of IoT to understand the number of devices that would be relying on the spectrum in the coming years. We also reviewed literature concerning the growth of other wireless devices, such as smart phones, and the burden they place on the spectrum, to assess if there are any lessons learned from the demand these devices placed on the spectrum that could be applied to the expected growth of IoT. In addition, we interviewed FCC and NTIA officials, and conducted 24 telephone and in-person interviews with officials from industry associations, industrial and commercial users of IoT, nonprofit groups, subject matter experts, manufacturers, and telecommunications companies to obtain their perspectives on the challenges presented by the expected growth of IoT. The experiences of the stakeholders are not generalizable to those of all IoT stakeholders in the United States; however, we believe that the information we gathered from selected stakeholders provides a balanced and informed perspective on the topics discussed. We identified relevant stakeholders by reviewing comments submitted to NTIA in response to its request for comment on the government's role in planning for IoT growth, reviewing congressional hearings, and conducting a literature review encompassing academic articles, government reports, and trade journals. 
We interviewed officials from businesses that manufacture Internet-connected devices or equipment that would be considered part of IoT, in sectors including agriculture, telecommunications, and manufacturing. We spoke with these officials to gather information about the spectrum challenges they face as businesses working with and developing IoT devices. We then analyzed the results of these interviews and related documents to identify the main themes and develop summary findings. To characterize the views captured during the interviews, we defined quantifying terms as follows: "most" users represents 18 to 24 users, "a majority of" users represents 11 to 17 users, "several" users represents 6 to 10 users, and "some" users represents 3 to 5 users. To identify the steps FCC and NTIA are taking to plan for the anticipated growth in the demand for spectrum as a result of IoT, we interviewed FCC and NTIA officials and reviewed agency reports and documents. We interviewed officials to understand any agency plans to address spectrum needs for IoT devices and how these plans aligned with the spectrum planning for other wireless devices. We reviewed agency reports and documents on spectrum planning, IoT planning, and the role of the federal government in planning for IoT. Specifically, we reviewed comments submitted in response to NTIA's request for comment and the final report developed in response to the comments received on the role of the federal government. To identify other relevant reports and literature from FCC and NTIA, we asked officials at the meetings and conducted a literature search. We also compared those planning efforts against FCC's and NTIA's strategic goals and the federal internal control standards related to risk management. 
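The quantifying terms used in this methodology amount to a simple threshold mapping over the 24 interviews conducted. A minimal sketch of that mapping follows; the function name and the fallback value for counts below 3 are illustrative assumptions, not part of the report's methodology.

```python
# Illustrative sketch only: translate an interview tally into the report's
# quantifying terms, assuming the 24 total stakeholder interviews described above.
def quantifier(count: int) -> str:
    """Return the report's descriptive term for how many of the 24
    interviewed stakeholders expressed a given view."""
    if not 0 <= count <= 24:
        raise ValueError("count must be between 0 and 24")
    if count >= 18:
        return "most"
    if count >= 11:
        return "a majority of"
    if count >= 6:
        return "several"
    if count >= 3:
        return "some"
    return "(no term defined)"  # the report defines no term below 3

print(quantifier(12))  # "a majority of"
```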
Specifically, we compared FCC’s planning against its strategic goal to promote economic growth and national leadership in telecommunications, and NTIA’s efforts against its mission to expand the use of spectrum by all users and to ensure that the Internet remains an engine for continued innovation and economic growth. We also assessed the efforts of both agencies against leading practices that we previously developed for identifying, analyzing, and responding to risks related to achieving agency objectives. To identify the efforts that selected foreign governments are taking to plan for the expected spectrum needs of IoT and ways their efforts compare with those of the United States, we surveyed trade journals, industry publications, and foreign governments’ websites and publications. Through this survey, we identified seven countries of potential interest, all of which have conducted spectrum planning in support of IoT: China, France, Germany, Netherlands, Japan, Singapore, and South Korea. We selected four of these countries—France, Germany, the Netherlands, and South Korea—as being like the United States and leaders in IoT development based on additional criteria including the level of their economic development, the maturity of their telecommunications infrastructures, the comparability of their governments to the United States, and the accessibility of their spectrum-planning information. We categorized a country’s economy as fully developed if the United Nations Statistics Division categorized it in 2016 as existing in a developed economic region. When determining the maturity of a country’s telecommunications infrastructure, we followed the United Nation’s International Telecommunication Union (ITU) in categorizing a country’s telecom infrastructure as mature if it was included in the top quartile of the 175 countries ranked in ITU’s 2016 Information and Communications Technology Development Index. 
We considered a country to have a government structure comparable to that of the United States if Freedom House's 2016 Freedom in the World report rated it as "free" and the Polity Project categorized it as a "democracy" in 2015. Finally, we considered the extent to which information could be efficiently procured from each country under consideration. We reviewed documents and conducted telephone and written interviews with officials from the spectrum management agencies in each of these four countries. We also conducted eight telephone and written interviews with officials from foreign telecommunications companies, IoT manufacturers, and international spectrum-planning groups to gather information about IoT development, challenges, and responses to these challenges in the leading countries that we contacted. We conducted this performance audit from August 2016 to November 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Barcoding
Case IH
Deere & Co.
O-I
Consumer Technology Association
CTIA
National Association of Manufacturers
Telecommunications Industry Association
U.S. Chamber of Commerce
Wi-Fi Alliance
World Shipping Council
New America Foundation
Public Knowledge
Technology and Innovation Foundation
Jeffrey Reed, Ph.D. (Virginia Polytechnic Institute and State University)
Douglas Sicker, Ph.D. (Carnegie Mellon University)
AT&T
Sigfox
Verizon
Agence Nationale des Fréquences (France)
Agentschap Telecom (Netherlands)
Bundesnetzagentur (Germany)
Ministry of Science, ICT, and Future Planning (South Korea)

In addition to the individual named above, Keith Cunningham (Assistant Director); Eric Hudson (Analyst-in-Charge); Camilo Flores; Adam Gomez; Josh Ormond; Andrew Stavisky; Hai Tran; and Michelle Weathers made key contributions to this report.
IoT generally refers to devices (or "things"), such as vehicles and appliances, that use a network to communicate and share data with each other. The increasing popularity of wireless IoT devices that use spectrum has created questions about spectrum needs. GAO was asked to examine issues related to spectrum and IoT. This report discusses, among other things, (1) spectrum challenges related to IoT, (2) how the federal government plans for IoT's spectrum needs, and (3) how selected leading countries prepare for IoT's spectrum needs. GAO reviewed documents and interviewed officials from FCC and the National Telecommunications and Information Administration as well as 24 officials from a variety of sectors, including government, commercial, and manufacturing. Stakeholders were selected based on a literature review, among other factors. GAO interviewed government and commercial representatives from four leading countries regarding IoT planning and development and reviewed associated documents. These countries were selected based on criteria that included level of economic development. The stakeholders GAO spoke with identified two primary spectrum-related challenges for the internet of things (IoT)—the availability of spectrum and managing interference. Although not considered an immediate concern, Federal Communications Commission (FCC) staff and some stakeholders noted that rapid increases in IoT devices that use large amounts of spectrum—called high-bandwidth devices—could quickly overwhelm networks, as happened with smart phones. Stakeholders and FCC staff also indicated that managing interference is becoming more challenging as the number of IoT and other wireless devices grows, particularly in bands that do not require a spectrum license. The figure below illustrates the uses of radio frequency spectrum, including unlicensed use. 
FCC plans for IoT’s spectrum needs by broadly tracking spectrum demand and making additional spectrum available as needed. Ensuring sufficient spectrum to support commercial demand is one way FCC pursues its strategic goal of promoting economic growth. FCC has made additional spectrum publicly available at least four times since 2015 by repurposing over 11 gigahertz of spectrum. However, FCC does not track the growth of IoT devices in two areas that pose the greatest risk to IoT’s growth—high bandwidth and unlicensed-spectrum devices. In 2014, FCC’s Technical Advisory Council (TAC) recommended that FCC monitor high-bandwidth IoT devices and make sufficient unlicensed spectrum available. FCC officials said that FCC monitors spectrum use broadly and makes spectrum available as needed. However, since the process of reallocating spectrum is lengthy, FCC may not have adequate time to take actions to avoid a shortage, possibly hindering IoT’s growth and associated economic growth. Spectrum planners in four leading countries—France, Germany, the Netherlands, and South Korea—have taken steps similar to those taken by the United States in preparation for IoT’s expansion, including taking a technology-neutral approach that stakeholders believe encourages innovation. Unlike the United States, officials from two leading countries said they are concerned about spectrum congestion from the growth of IoT devices, but only one is actively monitoring congestion. In addition, three leading countries have developed nationwide low power wide-area networks that use unlicensed spectrum with potential benefits including low costs and low barriers to entry. FCC should track the growth in (1) high-bandwidth IoT devices and (2) IoT devices that rely on unlicensed spectrum. FCC did not believe these actions are necessary but noted that it would ask its TAC to periodically review and report on IoT’s growth. GAO continues to believe the recommendations are valid.
This section outlines the legal framework under which agencies and federal labs license patents and general stages of the patent licensing process. Prior to 1980, federal agencies generally retained title to any inventions developed through federally funded research—whether extramural, that is, conducted by universities and contractors, or intramural, conducted by federal agencies in their own facilities. By the late 1970s, there was increasing debate in Congress over ways to allow the private and public sectors better access to federally owned inventions by, among other things, creating a uniform policy for those seeking to license inventions developed in federal labs. In the 1980s, Congress began passing a series of key laws that have provided the foundation for federal technology transfer activities, including patenting and licensing inventions that are developed in federal labs and funded by federal dollars. One of the first technology transfer laws, the Stevenson-Wydler Act, established technology transfer as a federal policy and required federal labs to set up Offices of Research and Technology Applications (which, for our purposes, we refer to as technology transfer offices) and devote budget and personnel resources to promoting the transfer of federal technologies to the private sector. In 1980, another key law, the Bayh-Dole Act, allowed not-for-profit corporations, including universities, and small businesses to retain title to their federally funded inventions. In 1984, through amendments made to the Bayh-Dole Act, Commerce became responsible for issuing regulations to implement the act. 
The Stevenson-Wydler Act was amended by the Federal Technology Transfer Act of 1986, which (1) established the Federal Laboratory Consortium (FLC); (2) required that technology transfer efforts be considered positively in employee performance evaluations; and (3) empowered federal agencies to permit the directors of government-owned, government-operated labs to enter into cooperative research and development agreements (CRADA) and to negotiate license agreements for inventions created in the labs. The FLC began largely as a forum for the education, training, and networking of federal technology transfer officials to promote the integration of technical knowledge that federal departments and agencies developed into the U.S. economy. Over time, the FLC's role would include serving as a clearinghouse—a central point for collecting and disseminating information—for federal technologies and assisting outside entities in identifying available federal technology. Within Commerce, NIST is the designated host and financial administrator of the FLC. Additional laws were adopted to help further the development of federally owned inventions for commercial use. Among them was the National Competitiveness Technology Transfer Act of 1989, which directed federal agencies to propose, for inclusion in contracts, provisions to establish technology transfer as a mission of government-owned, contractor-operated labs and permitted those labs, under certain circumstances, to enter into CRADAs. In addition, the Technology Transfer Commercialization Act of 2000 required Commerce to provide Congress with summary reports on agencies' patent licensing and other technology transfer activities. Since 2007, Commerce has delegated to NIST the role of providing to Congress an annual report summarizing technology transfer at federal agencies. 
NIST’s role as the lead in an interagency collaborative effort in federal technology transfer grew further when Commerce delegated to the agency the additional responsibility of coordinating the Interagency Working Group for Technology Transfer. Commerce also has delegated to NIST its authority to promulgate implementing regulations pertaining to patenting and licensing at federal labs. In 2011, Congress passed the Leahy-Smith America Invents Act (AIA) that further affected technology transfer activities by federal labs through comprehensive changes made to the U.S. patent system. Federal labs are typically managed under either a government-operated or a contractor-operated model. Commerce regulations prescribe the terms, conditions, and procedures that government-operated labs are to use to license their inventions for commercial use or other practical applications. Government-operated labs are usually owned or leased by the federal government and are predominantly staffed by federal employees. Contractor-operated labs, on the other hand, operate facilities and equipment that are owned by the federal government, but the staff is employed by a private or nonprofit contractor that operates the lab under a contract with the federal government. Contractor-operated labs typically license their technologies under the authority of the Bayh-Dole Act, applicable regulations, and their contracts, which generally give contractor-operated labs more flexibility in licensing their technologies. Contractors that manage and operate labs include universities, private companies, nonprofit organizations, or consortia thereof. As discussed below, whether a lab is government-operated or contractor-operated will affect how that lab licenses inventions because each type operates under a different set of licensing regulations and requirements. 
The pathway of an invention from lab development to commercial product can end at any point, and products may not always reach, or find success in, the marketplace. Figure 2 shows the seven general areas of the patent licensing process at federal labs. The patent licensing process begins with researchers identifying inventions—a process that primarily relies on researchers disclosing their inventions to lab officials, mostly through the lab director or directly to an agency's technology transfer office. Various laws and regulations establish a uniform policy for determining who holds the rights to government employees' inventions. Some government-operated labs allow or encourage researchers to publish their research, including research describing inventions, for public dissemination, such as in research journals. Contractor-operated labs are required to disclose inventions to the agency within 2 months after the inventor discloses them to contractor personnel responsible for patent licensing activities. Labs must then decide within 2 years after the disclosure whether to retain title to the invention. The contractor then must file its initial patent application on the invention to which it elects to retain title within one year after election of title. If the contractor-operated lab does not disclose the invention or elect to retain title within the times specified in the law and regulations, it will convey title to the invention to the funding agency upon written request. Once an invention has been identified and disclosed, federal agencies and labs keep track of the invention. How they do so varies in degree of automation and centralization. For example, systems that keep track of lab inventions can range from spreadsheets to automated software that tracks all patent licensing and other technology transfer activities. 
Also, such systems can be centralized, with oversight at the agency level, or decentralized, with independent oversight at the lab level—which is generally the case at contractor-operated labs. Some contractor-operated labs manage their federally funded inventions through the Interagency Edison (iEdison) reporting system, which is owned and managed by NIH. Before applying for patent protection through USPTO, agency and lab officials review the invention—often using evaluation committees and patent attorneys—to consider a number of factors, including whether it is patentable, whether it furthers the lab's mission, and whether patenting the invention is likely to bring it to commercial use or practical application. The agency must file a patent application within 1 year of the first publication, public use, sale, or offer for sale of the invention or lose U.S. patent rights to that invention. Not all patents will be licensed out to companies for a variety of reasons, including national security considerations. The average time from filing to issuance of a patent, or to abandonment of an application, is about 2 years, according to USPTO. Patent applications are often rejected, modified, and refiled, and various fees are associated with filing and prosecuting a patent application. However, according to USPTO, patent maintenance fees that allow federal labs to maintain their patents in force are among the most significant fees. Agencies and labs use a variety of methods to attract potential licensees, including those from industry, universities, and nonprofits. For example, agencies may post their inventory of patented inventions online, publish them in academic journals, or highlight them at public events. Agencies and labs actively engage with the private sector by, for example, attending conferences where companies can network with federal researchers and federal technology transfer officials. 
In addition, technology transfer offices often work with partnership intermediaries—such as local or state entities and nonprofit organizations—to support their efforts, including reaching out to potential licensees. Labs have other mechanisms to help attract potential licensees to further develop their inventions. For example, CRADAs can help facilitate licensing or the transfer of knowledge from a lab to a licensee, and new inventions that arise under a CRADA are typically made available to the partner via an option to license. The technology transfer offices and legal counsel are generally responsible for crafting and negotiating the terms of the patent license, sometimes with input from other lab officials. Negotiations are often an iterative process in which both the lab and the licensee request adjustments to the terms of the license. Laws and regulations specify some terms that government-operated labs must include in their licenses. Among others, a typical license includes terms related to (1) financial compensation (if applicable), (2) the degree of exclusivity of the license, (3) the U.S. manufacturing requirement, (4) retained rights for the government, (5) termination of the license, and (6) enforcement of licenses. Financial terms may include up-front fees; minimum payments; royalties, usually based on sales; and milestone payments, among others. Federal labs typically establish financial terms on a case-by-case basis that are tailored to the specifics of the technology, licensee, and market conditions. License agreements may be nonexclusive, partially exclusive, or fully exclusive, and may be limited to some fields of the invention's use or to specific geographic areas. Government-operated labs must publicly announce their intent to grant an exclusive license for at least 15 days. After this period, comments and objections are considered. 
Negotiations then begin with the proposed licensee or, if the licensee has changed, another public announcement of the new licensee may be required. Government-operated labs are required to obtain a commercialization plan from a potential licensee regardless of the degree of exclusivity. Contractor-operated labs, which typically retain title to their inventions under the authority of the Bayh-Dole Act, are not subject to the requirement to obtain a commercialization plan from a prospective licensee before granting a license; however, they are subject to requirements specified in their contracts regarding patent licensing. In addition, they are not subject to the same notification requirements as government-operated labs. The law also contains some other provisions pertaining to patent licenses originating from federal labs. For example, the law generally gives preference to small businesses that are capable of bringing the invention to practical application. There is a general preference for products that incorporate federal inventions to be manufactured substantially in the United States; however, on a case-by-case basis, agencies may waive this requirement. Applicable law also reserves certain rights for the government to protect the public’s interests in federally funded inventions. For example, the government retains a royalty-free license to use inventions that are contractor owned or that are licensed exclusively. In addition, the Bayh-Dole Act provides the government march-in authority when certain statutory conditions have been met. Under this authority, an agency may grant a license to an invention developed with federal funding even if the invention is exclusively licensed to another party if, for example, it determines that such action is needed to alleviate public health or safety needs which are not reasonably satisfied by the contractor, assignee, or their licensee. 
A federal lab can also terminate a license when the licensee is not meeting its commitment to achieve practical application of the invention. The lab can also, through the license, grant permission to a licensee to pursue patent infringement cases. Federal license agreements generally require licensees to report periodically on their commercialization. For instance, labs generally put specific monitoring requirements in the license agreements, including milestones and reporting requirements. Through the agreements, government-operated labs have the right to terminate or modify licenses if certain requirements are not met. Government-operated labs must submit written notices to the licensees and any sublicensees of their intentions to modify or terminate licenses, and allow 30 days for the licensees or sublicensees to remedy any breach of the licenses or show cause why the licenses should not be modified or terminated. Contractor-operated labs also monitor licensee performance in much the same way; however, they are subject to a different set of regulations. Federal labs are responsible for measuring the outcomes of their activities in all areas of the patent licensing process by developing metrics and evaluation methods. Measuring licensing outcomes helps labs assess the effectiveness of their patent licensing efforts. Soon after the passage of AIA, President Obama issued a memorandum in October 2011 to the heads of executive departments and agencies calling for, among other things, (1) developing strategies to increase the usefulness and accessibility of information about federal technology transfer opportunities; (2) listing all publicly available, federally owned inventions on a public government database; and (3) improving and expanding the collection of metrics for Commerce's annual technology transfer summary report. 
Federal law states that it is Congress’s policy and objective to use the patent system to promote the commercialization and public availability of inventions, and that technology transfer, including federal patent licensing, is the responsibility of each laboratory science and engineering professional. No single federal agency is responsible for managing technology transfer activities government-wide. Rather, each federal agency involved in technology transfer designs its own program to meet technology transfer objectives, consistent with its other mission responsibilities. Federal agency and lab officials and external stakeholders have identified challenges across the federal patent licensing process, but NIST has not fully reported such challenges. Specifically, DOD, DOE, NASA, and NIH officials at the agency and lab levels, as well as external stakeholders, cited challenges related to all seven areas of the patent licensing process. In addition, officials and stakeholders cited challenges in one area that cuts across the entire process: prioritizing patent licensing as part of agencies’ missions. In its annual reports to Congress on federal labs’ performance in patent licensing activities, NIST has discussed some challenges identified by agency and lab officials and external stakeholders but has not fully reported on the range of challenges they have experienced. DOD, DOE, NASA, and NIH officials at the agency and lab levels, as well as external stakeholders, identified challenges in all seven areas of the patent licensing process, including identifying inventions, keeping track of inventions, and negotiating license agreements. They also cited challenges in prioritizing patent licensing as part of agencies’ missions. Based on our analysis of relevant literature and on interviews with external stakeholders, many of these challenges are occurring government-wide. 
DOD, DOE, NASA, and NIH have taken some steps to address the challenges in each area of the patent licensing process. DOD, DOE, NASA, and NIH officials at the agency and lab levels, as well as external stakeholders, identified challenges in all seven areas of the patent licensing process, including failure to identify inventions, inadequate systems for keeping track of inventions, and difficulty negotiating license agreements. For example, several DOD, DOE, NASA, and NIH officials stated that some researchers do not have adequate training in identifying potentially patentable inventions. When a federal researcher does not disclose to lab officials an invention developed in a federal lab, the opportunity to assess the invention's potential for commercial use may be lost. Federal officials cited various reasons why researchers do not disclose inventions. Navy officials, for example, stated that researchers are often intimidated by the overall invention disclosure process and tend to focus on their research rather than consider what could be patentable. Officials at one NASA lab noted that they have come across a few contractor employees who do not see the benefit of filing invention disclosures, and sometimes researchers are too busy to engage in the patenting process. Our analysis of relevant literature and interviews with stakeholders also showed that researchers not identifying and disclosing inventions is a government-wide challenge. For example, one stakeholder stated that researchers at federal labs generally have limited understanding of the patenting process, including an understanding of what constitutes patentable subject matter and how to conduct a prior art search on the technology to determine whether it is patentable. DOD, DOE, NASA, and NIH officials stated that they are taking a variety of actions to help address this challenge. 
For example, some agency and lab officials stated that labs conduct training to educate researchers about the patenting process, inform researchers about requirements to disclose inventions, and incentivize them by acknowledging their efforts through awards and monetary incentives—such as potential royalty distributions—when their inventions reach commercial success. In addition, DOD, DOE, and NIH officials described their agencies’ systems for keeping track of inventions developed in the labs as inadequate or in need of improvement. How agencies and labs keep track of such inventions can range from spreadsheets to sophisticated databases that manage all technology transfer activities, including keeping track of patented inventions and licenses. Currently, DOD has a decentralized approach to keeping track of inventions, which, according to DOD officials, needs improvement given how large the agency is. Several stakeholders we interviewed also noted that the challenge of keeping track of inventions exists government-wide. According to some stakeholders, federal labs not only have inadequate systems to keep track of their own inventions but also limited information on the kinds of inventions being developed in federal labs across the government. The result is that agencies risk being unaware of research across the labs, which can limit their ability to leverage other federal research efforts. For example, one stakeholder stated that there can be research conducted independently at three or four labs under different agencies but little interaction among those labs about the research. DOD, DOE, and NIH officials stated that they have made efforts to improve their current systems for keeping track of inventions. Specifically, DOE officials reported that they have developed a plan to leverage the capabilities of the iEdison reporting system to unify the agency’s data management process.
Air Force and NIH officials stated that they have contacted NASA, which has a centralized system for tracking inventions, about leveraging its expertise. NASA officials reported that they have been hosting regular webinars with other agencies to determine whether NASA’s tracking system could help meet other agencies’ needs. Furthermore, agency and lab officials and stakeholders noted that federal labs face challenges in negotiating license agreements because the licensing process is lengthy and uniquely regulated, which can deter companies from licensing federal inventions. Stakeholders stated that the federal licensing process can take anywhere from about 3 months to more than 2 years. Some stakeholders stated that from their point of view taking a year to negotiate a license agreement is too long. One stakeholder said that such lengthy processes are particularly difficult for start-ups, which often need to finalize license agreements in 3 months. DOD, DOE, NASA, and NIH officials said they are taking steps to address companies’ concerns about the time it takes to negotiate a license agreement. For instance, NASA, NIH, and Navy officials told us that they have developed model license agreements to help guide companies through the process, and NASA and NIH have special license agreements for start-ups to shorten the licensing process. For more detail on challenges in the seven areas of the patent licensing process that agency and lab officials and external stakeholders identified, see appendix II. DOD, DOE, NASA, and NIH face challenges in prioritizing patent licensing as part of their agency missions. For example, DOD and DOE officials stated that an agency’s mission affects patent licensing activities. DOD officials stated that the agency’s primary mission is protecting the warfighter and that patent licensing is a secondary benefit to the agency. 
According to DOE officials, the nuclear security labs do not focus on patenting but instead on developing technologies associated with a weapons program. In addition, several stakeholders we interviewed stated that some agencies and labs do not have a culture that prioritizes patent licensing. In particular, one stakeholder stated that at some federal labs, patent licensing is not reflected in performance evaluation management plans, which can help incentivize lab personnel to engage in patent licensing activities. A few stakeholders stated that at some labs where management does not prioritize patent licensing activities, researchers’ careers can be negatively affected if they engage in patent licensing activities. Some agency and lab officials stated that they have taken steps to overcome such challenges. For example, officials at one Navy lab stated that the lab has management support and nine patent attorneys to assist in the reviews of researchers’ invention disclosures. Also, officials at one NIH lab stated that the lab has strong management support and a good royalty stream from successful inventions that pay for patenting and other reinvestments, which allows the lab to not draw from its appropriations. In its three most recent fiscal year summary reports to Congress, NIST identified some challenges faced by federal labs in areas of patent licensing and has assisted agencies in addressing challenges in their patent licensing activities. However, NIST does not fully report on the range of challenges that agency and lab officials and stakeholders identify. NIST collaborates with agencies to gather patent licensing data for its summary reports to Congress. For example, according to agency officials, NIST engages with agencies to inform them about new requirements in technology transfer and helps them identify their successes in conducting technology transfer activities. 
NIST also provides administrative support to the FLC, which offers training to federal technology transfer specialists through workshops; publishes a desk reference on federal patent licensing, laws, and regulations; and has commissioned studies on efforts to develop federal inventions for commercial use. Further, NIST developed a survey in 2016 on agency technology transfer processes. NIST officials stated that the survey is aimed in part at improving federal labs’ decisions on whether to spend money on applying for patents, whether patents will facilitate the commercialization of technology, and what data are needed to make those determinations. NIST officials stated that the agency continues to analyze the survey data and currently plans to report its findings in fiscal year 2018. While NIST has identified in its annual summary reports to Congress some challenges that federal labs face in patent licensing and other technology transfer activities, it has not fully reported the range of challenges that agencies and labs face in patent licensing. For example, in its fiscal year 2015 summary report—its most recent report—on federal technology transfer, NIST reported that the federal intramural research budget has been relatively consistent over the years but not that DOD, DOE, NASA, and NIH face challenges in prioritizing patent licensing as an agency mission. The report also mentions that there is no uniform federal system for tracking research that employees in federal labs published but not that DOE, for example, has faced challenges in keeping track of inventions developed in its labs. In addition, we found that although the report mentions that the Department of Veterans Affairs is facing challenges with its labs disclosing inventions, it does not mention similar challenges at DOD. 
NIST officials stated that they were generally aware of the challenges identified by agency and lab officials and external stakeholders but had not considered including such challenges to a greater degree in the summary reports to Congress. We have previously reported on Congress’s goal to make the federal government more results oriented through reporting of agency performance information to aid decision making by agency executives, Congress, and program partners. Specifically, we have reported how the effective implementation of good governance can help address government challenges in five key areas involving agency performance and management: (1) instituting a more coordinated and crosscutting approach to achieving meaningful results, (2) focusing on addressing weaknesses in major management functions, (3) ensuring that agency performance information is useful and used in decision making, (4) sustaining leadership commitment and accountability for achieving results, and (5) engaging Congress in identifying management and performance issues to address. By fully reporting the range of challenges in federal patent licensing—such as those outlined in this report—and including that information in its annual summary reports to Congress, NIST has the opportunity to further ensure that Congress is more aware of challenges that limit agencies’ efforts in patent licensing and potential ways to address those challenges. To identify these challenges, NIST could, for example, leverage its survey, past FLC studies, and agency reports. Federal agencies and labs have limited information on processes, goals, and comparable licenses to guide establishing the financial terms in patent licenses. DOD, DOE, NASA, and NIH labs generally do not document their processes for establishing the financial terms of patent licenses and instead rely on the expertise of technology transfer staff.
Furthermore, existing agency and lab guidance does not consistently link the practice of establishing license financial terms to the statutory goal of promoting commercial use of inventions. In addition, although many federal labs rely on comparable licenses to aid them in setting the terms of new licenses, labs have varying levels of access to information about such licenses. DOD, DOE, NASA, and NIH labs have limited documentation of their processes for establishing the financial terms of patent licenses. Such documentation is limited at both the agency level and the lab level. At the agency level, the four agencies we reviewed had some documentation on patent licensing in general, such as policies, procedures, guides, and handbooks, but had limited information on how to establish financial terms. For example, the Air Force and the Navy had handbooks on technology transfer that include brief passages on financial terms. However, agency officials noted that these handbooks were either outdated or under revision. At DOE, labs collaborated to develop two agency-level documents on patent licensing: one for lab officials on using equity in licenses and a licensing guide for licensees. These documents describe the general structure of various types of financial terms and, in the document on using equity, factors to consider regarding its use in a license, but do not discuss methods for establishing financial terms. NASA and NIH have policies and procedures for patent licensing that mention the types of financial terms that are normally found in licenses but do not cover other aspects, such as methods for establishing financial terms. All four agencies reported that they gave their labs discretion to develop their own processes for establishing financial terms. At the lab level, DOD, DOE, NASA, and NIH generally had not documented their processes for establishing financial terms in patent licenses. 
Based on documentation provided by NASA, NIH, and DOD, few labs at these agencies had issued additional documentation on the patent licensing process. DOE labs had documented the patent licensing process in general, and 6 out of 17 DOE labs provided documentation that covered aspects of establishing financial terms. For example, one DOE lab document contained a set of licensing principles that help clarify what financial terms a license usually contains, their purpose, and how to structure the financial terms in patent licenses. In addition, agency and lab officials at NASA and DOE reported using tools, such as financial term calculators, at some of their labs, which aid technology transfer staff in valuing technologies. Agency and lab officials reported that they generally rely on the expertise of technology transfer staff to establish and vet appropriate financial terms. Accordingly, agencies and labs reported that they have taken some steps to develop, share, and retain expertise among staff in their technology transfer offices. The agencies we reviewed reported that some technology transfer staff participate in training opportunities provided by professional organizations like the Association of University Technology Managers (AUTM) or the Licensing Executives Society (LES), as well as the FLC and the agencies. In addition, some agencies and labs reported that internal working groups and regular meetings are opportunities to share licensing expertise. At DOD, officials stated that on a case-by-case basis, labs may use the expertise of their partnership intermediary to help establish financial terms. However, according to agency and lab officials and stakeholders, federal labs face challenges in acquiring, developing, and retaining expertise in patent licensing for their technology transfer offices. 
Specifically, some agency officials, lab officials, and stakeholders cited issues such as losing experienced technology transfer staff to retirement or to the private sector, having difficulties in hiring staff with expertise in part because of limited funding, and facing a limited pool of prospective employees to hire with the expertise to value and license inventions. A few stakeholders said that government training in the business aspects of patent licensing is inadequate and not widespread. In addition, some stakeholders had concerns about consistency in licensing practices both within the labs and across labs. For example, some of these stakeholders said that the outcome of license negotiations can depend on the specific licensing professional handling the license. Varying levels of expertise may lead to inconsistency in licensing practices, including establishing financial terms, as can undocumented processes. Under the federal standards for internal control, management should design control activities by, for example, clearly documenting them in management directives, administrative policies, or operating manuals, to achieve objectives and respond to risks. Furthermore, documentation can act as a means to retain organizational knowledge and provide some assurance that an approach is operational across the lab or agency. Agency and lab officials stated that they had not documented their processes for establishing financial terms for various reasons. For example, lab officials stated that establishing financial terms is often complex and varies based on the specific circumstances applicable to each potential license, which may limit what can be documented. Some agency and lab officials stated that labs need flexibility in negotiating terms to make adjustments based on the circumstances and therefore officials do not want to be prescriptive. A few agency and lab officials also noted that there are benefits to having streamlined processes. 
Furthermore, a few agency and lab officials described negotiating license terms as a craft or art that requires expertise and said that documenting this will not enhance licensing by itself. However, some agency and lab officials and stakeholders said that it is possible to document some aspects of the process. A few stakeholders we interviewed noted that even if each agreement is unique, it is still possible to develop guidelines or outline a methodology for establishing financial terms. A few agency and lab officials stated that they are investigating opportunities to standardize their processes or would be open to documenting them. For example, one agency official told us that the agency plans to update existing documents with specific information about royalty ranges so labs do not have to constantly “reinvent the wheel.” Some labs also described steps that they take to establish financial terms, such as methods for valuing inventions, without being prescriptive. By documenting processes for establishing the financial terms of licenses while maintaining enough flexibility to tailor the specific terms of each license, the four agencies could have more reasonable assurance of consistency across their labs regardless of the expertise of staff. Agency and lab documentation does not consistently link establishing financial terms in patent licenses to the goal of promoting commercial use of inventions. As noted above, federal law states that it is Congress’s policy and objective to use the patent system to promote the commercialization and public availability of inventions, and that technology transfer, including federal patent licensing, is the responsibility of each laboratory science and engineering professional. 
Agency-level documentation at NASA contains a provision that clearly links establishing financial terms to the goal of promoting commercial use of inventions—that is, “terms should be negotiated that provide the licensee incentive to commercialize the invention.” NIH’s documentation mentions financial terms in the context of protecting the public from nonuse, which is one aspect of promoting commercial use, and also mentions the goal of obtaining a fair financial return on investment from the licensed invention. DOD and DOE agency-level documents mention the general goal of promoting the commercial use of inventions without specifically linking it to the financial terms. At the lab level, DOD documents generally do not address the goals for financial terms. Of 17 DOE labs, 4 had a statement in their documentation to link financial terms to the goal of promoting commercial use of inventions. DOD, DOE, NASA, and NIH officials we interviewed stated that getting the technology into the marketplace is their primary goal in licensing but also mentioned other goals related to financial terms that support their mission. In addition, some agency and lab officials described using revenues from licenses as a means to provide a reward to inventors for their work or to obtain a fair return on investment for research conducted by federal agencies. Furthermore, lab officials we interviewed mentioned the flexibility of revenues from licenses as helpful in funding activities, such as additional research, training, and patent prosecution. Some agency officials and stakeholders we interviewed expressed concerns about competing goals for establishing financial terms. For example, a few stakeholders stated that licensing professionals may be motivated to negotiate for increased license revenue because it reflects positively on them professionally. 
Further, some stakeholders expressed concerns about labs taking a short-term view of some licensees, particularly small companies, because they have less ability to pay initially and thus may offer less certain revenues. Our review of relevant economic literature and interviews with stakeholders suggest that license financial terms set with goals other than promoting commercial use in mind, such as short-term revenue maximization, may undermine that longer-term goal. For example, high up-front license fees typically provide more guaranteed short-term revenue to the licensor than other forms of payment but can also reduce the capital available to develop a product successfully. Labs with other goals in mind when establishing financial terms may be at risk of establishing them in ways that run counter to the goal of promoting commercial use. NIST plays an important role in providing regulations and guidance to agencies regarding patent licensing. Commerce has delegated to NIST the authority to promulgate implementing regulations pertaining to patenting and licensing at federal labs—that is, regulations that indicate how agencies are to implement statutory provisions, including the goal of, among other things, promoting commercial use of inventions. NIST has developed regulations, but they do not link the financial terms of federal patent licenses and the statutory goal of promoting commercial use of inventions. As the host of the FLC and a coordinator for the Interagency Working Group for Technology Transfer, NIST also plays a role in supporting the development of interagency guidance on patent licensing that covers, among other topics, establishing financial terms in licenses. However, existing interagency guidance provides limited information regarding the goals for financial terms. 
For example, the FLC desk reference contains a statement that links royalty rates to the goal of promoting commercial use but does not clarify how the goal applies to other financial terms. Furthermore, the FLC desk reference states that labs are entitled to market-based compensation for their intellectual property. However, licenses are structured differently to accomplish different goals, and a primary focus on obtaining market-based compensation may undermine the goal of promoting commercial use. As the lead agency on the government-wide effort to find commercial uses or practical applications for federally funded inventions, NIST has been delegated the responsibility to promulgate regulations pertaining to patenting and licensing at federal labs, including implementing the statutory goal of promoting commercial use. NIST officials stated that a change to the regulations could be made as part of an upcoming rule-making process. However, a stakeholder and agency officials noted that any changes to the regulations should avoid prescriptive language that mandates specific practices. NIST officials also stated that they could update relevant guidance on this issue through one of their current efforts. By clarifying the link between establishing federal patent license financial terms and the goal of encouraging commercial use, through the upcoming rule-making process and updating relevant guidance, NIST would have better assurance that financial terms in patent licenses are targeted to that goal. According to agency and lab officials, comparable license information can be used as a point of reference to guide establishing financial and other terms in new patent licenses. Just as real estate agents look at sales of comparable houses when setting the selling price of a house, patent licensing professionals can look at licenses for comparable inventions when determining what financial terms to include in a new license.
However, federal labs have varying amounts of information on comparable licenses when establishing financial terms. NASA and NIH each have an agency-wide system that enables each lab to access information from other labs at the agency, including the financial terms in previous licenses. NIH agency officials reported that technology transfer offices have access to thousands of previous licenses and refer to such information frequently to help establish the financial terms of new licenses. Labs at DOE and DOD are generally responsible for tracking their own licenses and do not have access to information on comparable licenses from other labs in their agencies. According to DOE officials, under DOE contracts and relevant law, license information at the agency’s contractor-operated labs is considered business sensitive and a contractor-owned record that resides at the labs, which limits DOE’s ability to share it. Officials at DOE and DOD’s military departments reported that they have investigated and continue to investigate systems that would provide greater access to information on financial terms but have encountered some obstacles, such as network security requirements, that they have not yet overcome. To bolster their access to comparable license information, some federal labs obtain private sector license information. For example, some lab officials we interviewed said that they have occasionally purchased benchmarking guides and access to other private sector license information through organizations such as AUTM and LES. According to some lab officials and stakeholders, private sector license information is useful for understanding acceptable royalty rates in industry and may cover certain technology areas or inventions that are new to the lab. However, access to private sector license information is typically ad hoc and can be limited by its cost, according to agency and lab officials. 
Some agency and lab officials stated that they would like increased access to private sector information on comparable licenses. For example, according to agency officials at DOE, there is an effort under way to obtain benchmark financial terms from labs and universities with comparable R&D portfolios. Although lab officials and stakeholders said that private licensing information can be helpful for understanding financial terms acceptable to the market, using private license information may not always be appropriate for government licenses. Private licenses are often structured to maximize revenue for the licensor—not necessarily to promote commercial use or practical application, according to stakeholders. Our review of economic literature and interviews with stakeholders and agency officials suggest that licenses are structured differently to accomplish different goals. For example, a few stakeholders and agency officials noted that federal licenses would typically be less exclusive and have different financial terms than those in the private sector, where there is a greater emphasis on generating revenue from R&D investments. Some stakeholders and agency officials also stated that in general the value of a government license may be different from that of a private license for a similar technology because of the rights the government retains on its licenses. In addition, according to agency and lab officials and stakeholders, government inventions tend to be in an earlier stage of development than those in the private sector, potentially making it more difficult to find licenses for comparable inventions in the private sector. Some agency and lab officials and a few stakeholders stated that it would be valuable for federal labs to have greater access to information on financial terms in government licenses to help establish a benchmark for financial terms. 
Our analysis of approximately 21,000 patents assigned to DOD, DOE, NASA, and NIH and issued since 2000 shows that different agencies may patent inventions in similar technology fields. All four agencies we reviewed had patented inventions in 26 of 35 technology fields covered by the patents, and all had 10 or more patents in 9 of the 35 technology fields. DOD and DOE, including DOE contractor-operated labs, had more patents in a wider range of fields than the other agencies. On the other hand, HHS’s patents are more focused on fields such as biotechnology and medical technology. However, even in the area of biotechnology, there were hundreds of patents issued to the other three agencies. Although other information would be needed to determine whether the agencies’ inventions are truly comparable, their having patents in the same technology fields suggests that some government-wide information on financial terms could be useful to federal labs. Under internal control standards for the federal government, management should externally communicate the necessary quality information to achieve the entity’s objectives; this includes communicating with and obtaining quality information from external parties using established reporting lines. The four agencies we reviewed communicate and share information through several collaborative efforts to improve federal patent licensing, including the FLC and the Interagency Working Group for Technology Transfer. For example, agency officials said they share experiences, ideas, and best practices related to patent licensing informally through these groups. However, there is no formal sharing of information on financial terms in patent licenses among federal labs, according to NIST officials.
We have previously reported that federal agencies engaged in interagency collaborative efforts should identify and address needs by leveraging their resources to obtain additional benefits that would not be available if they were working separately. NIST plays a leading role in these interagency collaborative efforts on patent licensing, including gathering and sharing information among the labs. As the administrative host for the FLC, NIST has already supported an effort to share information about available technology. NIST is also responsible for gathering information from technology transfer agencies, including gross license income, and submitting summary reports to Congress annually and sharing them with the public. Furthermore, NIST has initiated a survey of practices at federal technology transfer offices and shared some preliminary information with the agencies. By facilitating the formal sharing of comparable license information, NIST could help provide agencies and labs with benchmarks for evaluating which financial terms are best suited to licensing inventions successfully. NIST officials stated that gathering and sharing comparable license information could be done as part of their existing efforts but that there are obstacles to doing so. Specifically, NIST officials stated that this effort would add to the reporting burdens of agencies, may require additional resources, and would need to take into account data security and proprietary information considerations. Agency officials also stressed that any effort to share license terms would have to ensure that confidential and proprietary information from licensees, including specific financial terms from a particular license, is not divulged. Federal labs under DOD, DOE, NASA, and NIH face challenges at various stages of the patent licensing process, and agencies have taken some steps to address such challenges. 
For example, ensuring that researchers identify and disclose inventions is a government-wide challenge, according to interviews with external stakeholders and our analysis of relevant literature. However, such challenges in federal patent licensing are not fully reported by NIST, the lead agency delegated by Commerce to provide annual summary reports to Congress on federal technology transfer activities. By fully reporting the range of these challenges that agencies and labs face, NIST can ensure that Congress has greater awareness of these challenges. To help identify these challenges, NIST could, for example, leverage its survey of practices at federal technology transfer offices, past FLC studies, and agency reports. In addition, DOE, DOD, NASA, and NIH documentation does not consistently link establishing financial terms in patent licenses to the statutory goal of promoting commercial use. As the lead agency on the government-wide effort to find commercial uses or practical applications for federally funded inventions, NIST has been delegated the responsibility to promulgate regulations pertaining to patenting and licensing at federal labs, including implementing the statutory goal of promoting commercial use. By clarifying the link between establishing patent license financial terms and the goal of encouraging commercial use, through the upcoming rule-making process and updating relevant guidance, NIST would have better assurance that financial terms in patent licenses are targeted to that goal. Further, federal labs have varying amounts of information on comparable government licenses when establishing financial terms. However, there is no formal sharing of information on financial terms in patent licenses among federal labs, according to NIST officials. NIST plays a leading role in interagency collaborative efforts on patent licensing, including gathering and sharing information among the labs. 
By facilitating the formal sharing of comparable license information, NIST could help provide agencies and labs with benchmarks for evaluating which financial terms are best suited to successfully licensing inventions. To establish financial terms, DOD, DOE, NASA, and NIH labs rely on the expertise of their technology transfer staff and take a number of steps to build and share expertise, but have limited documentation of their processes for establishing the financial terms of patent licenses. Agency and lab officials explained that there is a need for flexibility, and thus not every aspect of their processes can be documented in detail. By documenting processes for establishing the financial terms of licenses while maintaining enough flexibility to tailor the specific terms of each license, the four agencies could have more reasonable assurance of consistency across their labs regardless of the expertise of staff. We are making seven recommendations, including three to Commerce and one each to DOD, DOE, NASA, and NIH: The Secretary of Commerce should instruct NIST to fully report the range of challenges in federal patent licensing, such as those outlined in this report, by, for example, leveraging its survey of practices at federal technology transfer offices, past FLC studies, and agency reports and including that information in its summary reports to Congress. (Recommendation 1) The Secretary of Commerce should instruct NIST to clarify the link between establishing patent license financial terms and the goal of promoting commercial use, through appropriate means, such as the upcoming rule-making process and updating relevant guidance. (Recommendation 2) The Secretary of Commerce should instruct NIST to facilitate formal information sharing among the agencies to provide federal labs with information on financial terms in comparable patent licenses, as appropriate.
(Recommendation 3) The Secretary of Defense should ensure that the agency or its labs document processes for establishing license financial terms, while maintaining flexibility to tailor the specific financial terms of each license. (Recommendation 4) The Secretary of Energy should ensure that the agency or its labs document processes for establishing license financial terms, while maintaining flexibility to tailor the specific financial terms of each license. (Recommendation 5) The Administrator of NASA should ensure that the agency or its labs document processes for establishing license financial terms, while maintaining flexibility to tailor the specific financial terms of each license. (Recommendation 6) The Director of NIH should ensure that the agency or its labs document processes for establishing license financial terms, while maintaining flexibility to tailor the specific financial terms of each license. (Recommendation 7) We provided a draft of this report to Commerce, DOD, DOE, NASA, and NIH for review and comment. All provided written responses, which are reproduced in appendixes IV-VIII. Commerce and NIH also provided technical comments, which we incorporated as appropriate. Commerce agreed with all three of our recommendations to the agency. In general, the agency stated that it will work through interagency groups, such as the Interagency Working Group for Technology Transfer and the FLC, to address our recommendations, including by creating a specific section in its annual reports to Congress with more details on challenges agencies and labs face in patent licensing and by examining and implementing solutions to facilitate the sharing of information among agencies. According to Commerce, such solutions could include identifying licensing officers who have expertise and creating a community of practice in which they can share best practices and approaches for establishing license terms. 
DOD, DOE, and HHS agreed, and NASA partially agreed, with the recommendation that they or their labs document processes for establishing financial terms in patent licenses. In its written response, DOD said it will direct the military departments and appropriate defense agencies to have their labs establish documentation of their licensing processes as appropriate. In their written comments, DOE, HHS, and NASA noted the complexity and nuances associated with negotiating license agreements, such as understanding the market for the technology and the level of risk involved. Further, DOE and NASA noted challenges that limit their ability to document processes and emphasized the importance of maintaining flexibility in establishing financial terms in patent licenses. We agree that some flexibility in establishing financial terms of patent licenses is important. DOE, HHS, and NASA all identified steps they would take to ensure that at least some processes for establishing financial terms are documented. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Commerce, Defense, and Energy; the Administrator of NASA; and the Director of NIH. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX. Figure 3 presents examples of inventions developed in federal laboratories under the Department of Defense, Department of Energy, National Aeronautics and Space Administration, and National Institutes of Health. 
The following are additional descriptions of the challenges that federal laboratories (labs) face in the seven areas of the patent licensing process, as well as challenges in prioritizing patent licensing, that were identified by external stakeholders and by agency and lab officials at the Department of Defense (DOD), Department of Energy (DOE), National Aeronautics and Space Administration (NASA), and the National Institutes of Health (NIH). They also describe steps agencies and labs have taken to address those challenges. DOD, DOE, NASA, and NIH officials reported challenges in identifying inventions that lab researchers developed. When a federal researcher does not disclose to lab officials an invention developed in a federal lab, the opportunity to assess the invention’s potential for commercial use may be lost. Federal officials cited various reasons why researchers do not disclose inventions. For instance, several DOD, DOE, NASA, and NIH agency and lab officials stated that some researchers do not have adequate training in identifying potentially patentable inventions. Some agency and lab officials pointed to other reasons why invention disclosures may not be filed, such as researchers not having enough incentive to disclose their inventions. Navy officials stated that researchers are often intimidated by the overall invention disclosure process and tend to focus on their research rather than consider what could be patentable. Officials at one NASA lab noted that they have come across a few contractor employees who do not see the benefit of filing invention disclosures, and sometimes researchers are too busy to engage in the patenting process.
According to National Institute of Standards and Technology (NIST) officials, some researchers decide not to disclose an invention because they believe filing a patent application, which includes a filing fee, could take away money from the research itself, and most federal researchers are not motivated by the potential for receiving royalty distributions. Our analysis of relevant literature and interviews with stakeholders also showed that researchers not identifying and disclosing inventions is a government-wide challenge. One stakeholder stated that researchers at federal labs generally have limited understanding of the patenting process, including an understanding of what constitutes patentable subject matter and how to conduct prior research on the technology to determine whether it is patentable. DOD, DOE, NASA, and NIH agency and lab officials stated that they are taking a variety of actions to help address these challenges. For example, some agency and lab officials stated that labs conduct training to educate researchers about the patenting process, inform researchers about statutory requirements to disclose inventions, and incentivize them by acknowledging their efforts through awards and monetary incentives when their inventions reach commercial success. DOD, DOE, and NIH officials described their agencies’ systems for keeping track of inventions developed in the labs as inadequate or in need of improvement. How agencies and labs keep track of such inventions can range from spreadsheets to sophisticated databases that manage all technology transfer activities, including keeping track of patented inventions and licenses. Currently, DOD has a decentralized approach to keeping track of inventions, which, according to DOD officials, needs improvement given how large the agency is. Each military department has its own systems to track and store information on inventions developed in the labs. 
Officials from DOD and the departments described the systems as inadequate to keep track of the agency’s inventions. For example, Navy officials described the department’s in-house system to track inventions as “plagued by outages” and thus ineffective. According to officials, the Army funds systems that track inventions, but these systems differ from each other, are not connected to headquarters, and have been suspended since 2015. We have previously reported on federal agencies’ challenges in monitoring technology transfer activities, including tracking inventions developed in the federal labs. Several stakeholders we interviewed also noted that keeping track of inventions is a government-wide challenge. According to some stakeholders, federal labs not only have inadequate systems to keep track of their own inventions but also limited information on the kinds of inventions being developed in federal labs across the government. The result is that agencies risk being unaware of research across the labs, which can limit their ability to leverage other federal research efforts. One stakeholder specifically noted that the Interagency Edison (iEdison) reporting system—which allows federal grantees and contractors to report federally funded inventions to the agency that issued the funding award, including inventions developed by some contractor-operated labs—is difficult to navigate and needs improvement. Another stakeholder stated that there can be independent research at three or four labs under different agencies but little interaction among those labs about the research. Information on federal lab inventions can also be accessed publicly through the Federal Laboratory Consortium (FLC) website; however, NIST officials stated that the website’s information on inventions relies on agencies to submit accurate information, which may be limited by the agencies’ tracking systems.
DOD, DOE, and NIH officials stated that they have made efforts to improve their current systems. For example, since our 2015 report on the agency’s challenges with its data management systems that track federally funded inventions, DOE officials reported that they have developed a plan to leverage the capabilities of the iEdison reporting system to unify the agency’s data management process. While DOD officials stated that the agency has been unsuccessful in purchasing software to track inventions across the agency, Air Force officials said they are developing a pilot program and seeking new software to manage the Air Force’s inventions, and they expect the pilot program to increase the number of invention disclosures. Air Force and NIH officials stated that they have contacted NASA, which has a centralized system for tracking inventions, about leveraging the agency’s expertise. NASA officials reported that they have been hosting regular webinars with other agencies to determine whether NASA’s tracking system could help meet other agencies’ needs. DOD, DOE, NASA, and NIH agency and lab officials cited selecting inventions to patent as a challenge because of the expense of patenting fees. According to some agency and lab officials we interviewed, fees paid to the United States Patent and Trademark Office (USPTO) affect their decision on whether to patent an invention. For example, DOE officials stated that budget constraints force them to make decisions about whether they should file a patent or engage in other agency activities. NIH officials stated that the agency maintains fewer patents because of the patent maintenance fees and the agency’s tight budgets. NASA officials reported that one step the agency is taking to deal with the costs of maintaining its issued patents is to identify technologies with low licensing potential and allow the patents to expire if they fail to attract licensees. 
NASA has created a searchable database that catalogs thousands of expired NASA patents already in the public domain, making them freely available to industry for unrestricted commercial use. Federal labs under DOD, DOE, NASA, and NIH face challenges that limit their ability to attract potential licensees, according to agency and lab officials. Even officials at NASA, described by NIST officials as one of the best agencies in promoting its inventions to industry, said the agency is not selecting among multiple licensees and would like to have more companies license its patents. There are various reasons why federal labs struggle to attract companies interested in licensing their inventions, according to agency and lab officials we interviewed. First, several agency and lab officials noted that the number of entities that want to license inventions is generally not large. Second, some agency and lab officials identified inadequate promotion of federal inventions and licensing opportunities to companies, including start-ups, as a factor. Third, some agency and lab officials noted that their inventions are often in the early stages of development and thus pose more of a risk for companies to license. Based on our analysis of relevant literature and interviews with stakeholders, difficulty in attracting industry to license inventions developed in federal labs is a government-wide challenge. According to several stakeholders, industry perceives federal labs as not friendly to the private sector when it comes to patent licensing, especially for start-ups. For example, one stakeholder said that it is rare that federal agencies want to license to a start-up, and that more often the labs want a “safer route” by licensing inventions to large companies that already have a steady revenue stream.
Another stakeholder said that DOE’s contractor-operated labs in particular tend not to issue exclusive licenses to start-ups and prefer to license to large companies because the agency sees those companies as presenting less of a risk. In addition, stakeholders stated that federal inventions are often not yet commercially viable, which can deter companies from licensing federal inventions. One stakeholder, for example, stated that NASA officials may think that NASA technology is more developed than it is and therefore underestimate how long it will take a company to develop it for practical application, the millions of dollars needed to develop it, and whether it can be manufactured for commercial use. DOD, DOE, NASA, and NIH officials stated that they are taking steps to attract potential licensees by, for example, conducting local outreach to attract companies and working on improving their databases so that companies can learn about federal inventions available for licensing. For instance, NASA officials stated that the agency’s comprehensive database accessible to potential licensees uses a wide variety of search criteria and attracted 6 million unique visitors in 2016. Agency and lab officials and stakeholders noted that federal labs face challenges in negotiating the license agreement because the process is (1) lengthy and (2) uniquely regulated, which can deter companies from licensing federal inventions. Stakeholders stated that the federal licensing process can take anywhere from about 3 months to more than 2 years. Some stakeholders stated that from their point of view taking a year to negotiate a license agreement is too long. One stakeholder said that such lengthy processes are particularly difficult for start-ups, which often need to finalize license agreements in 3 months. Another stakeholder noted that the federal government in general does not understand how urgent it is for companies to complete the licensing process in a timely manner.
Although actions on the part of both the labs and companies can cause delays, if the overall process is time-consuming, prospective licensees will tend to move on to something else instead, according to agency and lab officials and stakeholders. Based on our analysis of licensing information provided by the agencies, we found that the amount of time from receipt of an application for a license to signature of the license by the lab varies widely. Specifically, based on this measure of the length of the process, approximately 60 percent of 132 licenses effective in fiscal year 2014 took at most 6 months for DOD, DOE, NASA, and NIH labs to process. Officials at one Navy lab stated that issuing an invention license to a company within 6 months is “highly unusual,” and officials at one NASA lab stated that the fastest they have issued a license was a week because the start-up was prepared and ready to go. For more on our analysis of licensing information from DOD, DOE, NASA, and NIH, see appendix III. Several agency and lab officials also noted that federal regulations associated with patent licensing can deter companies from licensing federal inventions. Such regulations include requirements that are unique to federally funded and federally owned inventions, including that products arising from the invention must be substantially manufactured in the United States and that the government may retain rights to the invention and terminate the license agreement if the licensee does not take steps to commercialize the technology. In particular, NASA officials stated that venture capital firms sometimes oppose the government retaining rights for federal technology used by start-ups that they fund. According to DOD and DOE officials, federal regulations require a level of documentation or explanation that can deter some companies from licensing inventions developed in federal labs.
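The processing-time measure used in our analysis (time from receipt of a license application to signature of the license) amounts to a simple duration calculation over license records. The sketch below illustrates that calculation; the records, the 183-day threshold, and the function name are hypothetical and for illustration only, not actual DOD, DOE, NASA, or NIH licensing data:

```python
from datetime import date

# Hypothetical license records: (application received, license signed).
# Illustrative only -- not actual agency licensing data.
licenses = [
    (date(2013, 10, 1), date(2014, 1, 15)),   # ~3.5 months
    (date(2013, 11, 5), date(2014, 8, 20)),   # ~9.5 months
    (date(2014, 2, 10), date(2014, 4, 1)),    # ~2 months
    (date(2014, 1, 2), date(2014, 1, 9)),     # ~1 week
]

SIX_MONTHS_DAYS = 183  # assumed approximation of the 6-month threshold

def share_within(records, threshold_days):
    """Fraction of licenses signed within threshold_days of application receipt."""
    within = sum(
        1 for received, signed in records
        if (signed - received).days <= threshold_days
    )
    return within / len(records)

print(f"{share_within(licenses, SIX_MONTHS_DAYS):.0%} processed within ~6 months")
```

With these four hypothetical records, three fall within the threshold, so the computed share is 75 percent; applied to the 132 actual licenses, the same calculation yields the approximately 60 percent figure reported above.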
Based on interviews with stakeholders, as well as our analysis of relevant literature, companies’ concerns about federal regulations are a government-wide challenge that federal labs face in licensing their inventions. For example, according to NIST officials, the U.S. manufacturing requirement can influence whether companies consider licensing federal inventions, because manufacturing in the United States can be more expensive than manufacturing in other countries. NIST officials also stated that some prospective licensees initially become concerned when they are told about march-in authority, because it applies to federally funded inventions and contractors. However, once companies are told that it is a legal requirement and that the provision has never been exercised, they generally become more comfortable with it. DOD, DOE, NASA, and NIH agency officials said they are taking steps to address companies’ concerns about the time it takes to negotiate a license agreement and their unfamiliarity with federal licensing requirements. For instance, NASA, NIH, and Navy officials told us they have developed model license agreements to help guide companies through the process, and NASA and NIH have special license agreements for start-ups to shorten the licensing process. Also, DOE created an agency-wide licensing guide to help prospective licensees navigate federal licensing requirements. DOD, DOE, NASA, and NIH agency and lab officials we interviewed identified limited resources and inadequate monitoring systems as factors that make it difficult to monitor licensee performance. NASA and NIH officials reported that the number of license agreements has increased in their labs and that they do not have enough resources to monitor licenses. DOD officials stated that the agency’s technology transfer offices have traditionally been understaffed and that the agency’s monitoring systems are inadequate for tracking the status of issued licenses.
Officials at one DOE lab stated that collecting royalties from licensees can be difficult because the lab does not have enough funds to support that activity. In addition, agencies may rely on the same systems they use to keep track of inventions to monitor licensee performance, and as previously discussed, these systems are in need of improvement. Some stakeholders we interviewed noted that monitoring licensee performance is a government-wide challenge. They explained that sometimes licensees do not pay fees if they are not contacted, and a few stakeholders stated that federal labs have limited funding and resources to monitor contracts effectively. One stakeholder recalled one agency that did not communicate with a licensee for 2 years after the license agreement was signed. According to another stakeholder, ineffective monitoring of licensee performance may limit federal labs’ ability to determine whether a company is developing federal inventions for commercial use per the terms and conditions of the license agreement. Some agency and lab officials stated that they have taken steps to regularly monitor licensees. In particular, at NASA and NIH—where monitoring of licensee performance is centralized at the agency level—officials have programmed systems to remind staff to check on licensee performance. Federal labs, including those under DOD, DOE, NASA, and NIH, also face challenges in effectively measuring patent licensing outcomes, based on our interviews with stakeholders and analysis of relevant literature. According to one stakeholder, labs need metrics to assess whether a licensee has made progress on developing the invention for commercial use and whether the lab needs to get the license back and give it to another company.
However, some stakeholders we interviewed stated that although the 2011 presidential memorandum on technology transfer called for strategies to establish metrics, federal labs are still struggling to implement metrics for measuring technology transfer outcomes, including patent licensing activities. Stakeholders we interviewed and our analysis of relevant literature have indicated that federal labs in general track the numbers of patents, licenses, and revenues instead of using metrics that identify direct economic impacts from patent licensing and other technology transfer activities. In agencies where such metrics do exist, they may be applied inconsistently across labs. For example, officials at one DOE lab stated that DOE metrics are generally not consistent across the agency’s labs. DOD, DOE, NASA, and NIH agency officials stated that they are working to improve their metrics and incorporate metrics beyond tracking numbers of patents, licenses, and revenues. For example, in addition to measuring the numbers of patents and licenses issued, NASA and Air Force officials stated that they are also measuring factors that affect the length of time it takes for their labs to process licenses. Such information, officials said, will help them expedite the licensing process. DOD, DOE, NASA, and NIH face challenges in prioritizing patent licensing as part of their agency missions, which can affect the entire patent licensing process. For example, DOD and DOE agency and lab officials stated that an agency’s mission affects patent licensing activities. DOD officials stated that the agency’s primary mission is protecting the warfighter and that patent licensing is a secondary benefit to the agency. According to DOE officials, the nuclear security labs do not focus on patenting but instead on developing technologies associated with a weapons program. 
In addition, several stakeholders we interviewed stated that some agencies and labs do not have a culture that prioritizes patent licensing. In particular, one stakeholder stated that at some federal labs, patent licensing is not reflected in performance evaluation management plans, which can help incentivize lab personnel to engage in patent licensing activities. A few stakeholders stated that at some labs where management does not prioritize patent licensing activities, researchers’ careers can be negatively affected if they engage in patent licensing activities. DOD, DOE, NASA, and NIH agency and lab officials cited limited resources to conduct the range of activities related to patent licensing. For example, sometimes there is just one person at a DOD lab overseeing technology transfer activities, according to DOD agency and lab officials. Officials at one NIH lab stated that many labs across the agency do not receive enough royalties to offset their patent licensing costs. In its fiscal year 2015 report—its most recent report—to Congress on federal technology transfer activities, NIST reported that the federal intramural research budget, which includes patent licensing activities, has generally not increased in the past 4 fiscal years. Several agency and lab officials stated that budget constraints affect the extent to which they can engage in patent licensing activities—including patent enforcement, which can cost millions of dollars and presents challenges for federal labs, according to DOE officials. Some agency and lab officials stated they have taken steps to overcome such challenges. For example, officials at one Navy lab stated that the lab has management support and nine patent attorneys to assist in the reviews of researchers’ invention disclosures.
Also, officials at one NIH lab stated that the lab has strong management support and a good royalty stream from successful inventions that pay for patenting and other reinvestments, which allows the lab to not draw from its appropriations. Tables 1 through 3 and figures 4 through 6 are based on 222 patent licenses that became effective in fiscal year 2014, and associated data, provided by the Department of Defense (specifically the Army, Navy, and Air Force), Department of Energy, National Aeronautics and Space Administration, and National Institutes of Health. They include both data provided by the agencies and information compiled directly from the licenses. The tables and figures are provided for informational purposes and are not generalizable to all patent licenses. In addition to the contact named above, Robert J. Marek (Assistant Director), James D. Ashley, Kevin S. Bray, Virginia A. Chanley, Ellen L. Fried, Sarah C. Gilliland, Cheryl M. Harris, Robert Letzler, Gregory A. Marchand, Christopher P. Murray, Emmy L. Rhine Paule, Dan C. Royer, Ardith A. Spence, Vasiliki Theodoropoulos, and Reed Van Beveren made key contributions to this report. Bozeman, Barry. Technology Transfer Research and Evaluation: Implications for Federal Laboratory Practice, Final Report to VNS Group, Inc. and the U.S. National Institute of Standards, 2013. Accessed on March 14, 2018. https://www.nist.gov/tpo/return-investment-roi-initiative. Bozeman, Barry, Heather Rimes, and Jan Youtie. “The Evolving State-of- the-Art in Technology Transfer Research: Revisiting the Contingent Effectiveness Model.” Research Policy, vol. 44, no. 1 (2014): 34-49. Franza, Richard M. and Kevin P. Grant. “Improving Federal to Private Sector Technology Transfer: A Study Identifies Seven Critical Factors with the Greatest Impact on Whether Transfer Attempt Succeeds or Fails.” Research Technology Management, vol. 49, no. 3 (2006): 36-40. Greiner, Michael A. and Richard M. Franza. 
“Barriers and Bridges for Successful Environmental Technology Transfer.” Journal of Technology Transfer, vol. 28, no. 2 (2003): 167-177. Howieson, Susannah V., Stephanie Shipp, Gina Walejko, Pamela Rambow, Vanessa Peña, Sherrica S. Holloman, and Phillip N. Miller. Exemplar Practices for Department of Defense Technology Transfer. Alexandria, Va.: Institute for Defense Analyses, January 2013. Hughes, Mary E., Susannah V. Howieson, Gina Walejko, Nayanee Gupta, Seth Jonas, Ashley T. Brenner, Dawn Holmes, Edward Shyu, and Stephanie Shipp. Technology Transfer and Commercialization Landscape in the Federal Laboratories. Alexandria, Va.: Institute for Defense Analyses, June 2011. Jin, D., X. Mo, A. M. Subramanian, K. H. Chai, and C. C. Hang. “Key Management Processes to Technology Transfer Success.” 2016 IEEE International Conference on Management of Innovation and Technology, (2016), 67-71. Linton, Jonathan D., Cesar A. Lombana, and A. D. Romig, Jr. “Accelerating Technology Transfer from Federal Laboratories to the Private Sector—the Business Development Wheel.” Engineering Management Journal, vol. 13, no. 3 (2001): 15-20. Office of Science and Technology Policy and the National Institutes of Health, National Heart, Lung and Blood Institute. Lab-to-Market Interagency Summit: Recommendations from the National Expert Panel. Washington, D.C.: National Expert Panel, White House Conference Center, May 2013. Stepp, Matthew, Sean Pool, Nick Loris, and Jack Spencer. Turning the Page: Reimagining the National Labs in the 21st Century Innovation Economy. Washington, D.C.: Information Technology and Innovation Foundation, Center for American Progress, and Heritage Foundation, June 2013. Toregas, Costis, E. Colin Campbell, Sharon S. Dawes, Harold B. Finger, Michael D. Griffin, and Thomas Stackhouse. Technology Transfer: Bringing Innovation to NASA and the Nation. Washington, D.C.: National Academy of Public Administration, November 2004. U.S.
Department of Energy, Commission to Review the Effectiveness of the National Energy Laboratories. Securing America’s Future: Realizing the Potential of the Department of Energy’s National Laboratories, vol. 1, Executive Report. Washington, D.C.: October 2015. Accessed March 14, 2018. https://www.energy.gov/labcommission/downloads/final-report-commission-review-effectiveness-national-energy-laboratories. Wang, Mark, Shari Pfleeger, David M. Adamson, Gabrielle Bloom, William Butz, Donna Fossum, Mihal Gross, et al. Technology Transfer of Federally Funded R&D: Perspectives from a Forum. Conference Proceedings. Santa Monica, Calif.: RAND Corporation, 2003.
The federal government spends approximately $137 billion annually on research and development—mostly at DOD, DOE, NASA, and NIH—to further agencies' missions, including at federal labs. Multiple laws have directed agencies and labs to encourage commercial use of their inventions, in part by licensing patents to private sector companies and others that aim to further develop and bring the inventions to market. GAO was asked to review agency practices for managing inventions developed at federal labs, with a particular focus on patent licensing. This report examines (1) challenges in licensing patents and steps taken to address and report them and (2) information to guide establishing financial terms in patent licenses at DOD, DOE, NASA, and NIH. GAO reviewed relevant literature, laws, and agency documents, including patent licenses from 2014 (chosen to match the most recent NIST summary report available when the licenses were requested), and GAO interviewed agency officials and knowledgeable stakeholders, including organizations that assist federal labs in licensing patents. Federal agency and laboratory (lab) officials identified challenges in licensing patents across the federal government, and agencies have taken some steps to address and report them. Patent licensing is a technology transfer activity that allows, for example, federal inventions to be legally transferred to the private sector for commercial use. Specifically, officials at the Departments of Defense (DOD) and Energy (DOE), National Aeronautics and Space Administration (NASA), and National Institutes of Health (NIH), as well as external stakeholders, noted challenges in having researchers identify potentially patentable inventions. DOD, DOE, and NIH officials also cited having inadequate internal systems to keep track of inventions developed in the labs. In addition, several stakeholders stated that the process of licensing patented inventions can be lengthy and bureaucratic, which may deter companies from licensing.
The agencies reported taking steps to address these challenges, such as implementing model license agreements across labs to expedite the process. The Department of Commerce has delegated to its National Institute of Standards and Technology (NIST) the responsibility to report annually on agencies' technology transfer activities, including patent licensing. Although NIST has reported some challenges, it has not fully reported the range of challenges identified by agency and lab officials and stakeholders. NIST officials stated that they were generally aware of the challenges but had not considered including them to a greater degree in their annual reports to Congress. By fully reporting the range of challenges in federal patent licensing, NIST has the opportunity to further ensure that Congress is more aware of challenges that limit agencies' efforts and of potential ways to address them. Federal agencies and labs have limited information to guide officials when establishing the financial terms of patent licenses. For example, while federal labs can use comparable licenses to help establish financial terms, their access to information on comparable licenses from other labs varies, and such information is not formally shared among the agencies. Based on its established interagency role, NIST is best positioned to assist agencies in sharing information on comparable licenses, in accordance with leading practices for interagency collaboration. By doing so, NIST would provide federal agencies and labs with useful information that can help them better establish financial terms and successfully license inventions. GAO is making seven recommendations, including that Commerce instruct NIST to fully report the range of challenges in federal patent licensing in its annual reports to Congress and facilitate information sharing among agencies. Commerce, DOD, DOE, NASA, and NIH generally agreed with GAO's recommendations and are taking steps to implement them.
NASA’s mission is to drive advances in science, technology, aeronautics, and space exploration to enhance knowledge, education, innovation, economic vitality, and stewardship of Earth. The NASA Administrator is responsible for leading the agency and is accountable for all aspects of its mission, including establishing and articulating its vision and strategic priorities and ensuring successful implementation of supporting policies, programs, and performance assessments. Within NASA headquarters, the agency has four mission directorates that define its major core mission work: (1) Aeronautics Research conducts cutting-edge research to enable revolutionary advances in future aircraft, as well as in the airspace in which they will fly; (2) Human Exploration and Operations is responsible for NASA space operations, developing new exploration and transportation systems, and performing scientific research; (3) Science carries out the scientific exploration of Earth and space to expand the frontiers of Earth science, planetary science, and astrophysics; and (4) Space Technology develops revolutionary technologies through transparent, collaborative partnerships that expand the boundaries of aerospace. The agency also has a mission support directorate to manage its business needs and administrative functions, such as human capital management. In addition to NASA headquarters in Washington, D.C., the agency is composed of nine field centers managed by NASA employees and one federally funded research and development center, which are responsible for executing programs and projects. NASA centers are located throughout the country and manage projects or programs for multiple mission directorates. For example, the Goddard Space Flight Center supports various IT programs within the Science mission directorate, while the Johnson Space Center supports multiple programs in the Human Exploration and Operations mission directorate. 
According to NASA documents, the agency planned to spend $1.6 billion of its fiscal year 2018 budget authority on IT. Of this total, $888 million was to be used for business IT and $672.8 million was to be used for mission IT. Business IT includes the infrastructure and systems needed to support internal agency operations, such as commodity IT (e.g., e-mail and communications systems), infrastructure, IT management, administrative services, and support systems, whereas mission IT includes the technology needed to support space programs and research for the agency’s mission programs. The technology that the agency uses to support its mission programs includes highly-specialized IT, defined by NASA as any equipment, system, and/or software that is used to acquire, store, retrieve, manipulate, and/or transmit data or information when the IT is embedded in a mission platform or provides a platform required for simulating, executing, or operating a mission. Historically, NASA and its Inspector General have reported that funding for and oversight of highly-specialized IT has been decentralized among mission directorates and embedded within launch programs and other mission activities instead of being identified as IT to be managed as part of the agency’s IT portfolio. According to the Inspector General, the agency’s decentralized funding for and oversight of IT has minimized agency-wide visibility into and oversight of NASA’s spending on these systems. The agency’s Chief Information Officer (CIO) reports directly to the NASA Administrator and serves as the principal advisor to the NASA Administrator and senior officials on all matters pertaining to IT. The CIO is to provide leadership, planning, policy direction, and oversight for the management of NASA’s information and systems. 
Toward this end, the CIO’s responsibilities include developing and implementing approaches for executing the goals and outcomes in the NASA strategic plan; ensuring that the agency’s human resources possess the requisite knowledge and skills in IT and information resources management; maximizing the value of NASA IT investments through an investment management process; and leading and implementing the agency’s IT security program. The CIO also is responsible for developing and implementing agency-wide IT policies and processes. NASA’s CIO also is to direct, manage, and provide policy guidance and oversight of the agency’s center CIOs. Each center has a CIO responsible for supporting center leadership and managing IT staff. Similarly, each mission directorate has a representative who coordinates with programs on IT-specific issues and, as needed, obtains support from the Office of the CIO. Both center CIOs and mission directorate IT representatives report to the NASA CIO and to the leadership of their respective centers and mission directorates. The CIO is supported by staff in the Office of the CIO. This office is organized into four divisions responsible for (1) IT security, (2) capital planning and governance, (3) technology and innovation, and (4) enterprise services and integration. Collectively, these divisions support NASA’s approach to IT strategic and workforce planning, governance boards and practices, and cybersecurity. In March 2017, the Office of the CIO submitted plans to establish a fifth division focused on new applications, and also to rename existing divisions to better represent the services they provide. For example, the Office of the CIO proposed that the Capital Planning and Governance Division be renamed the IT Business Management Division. As of March 2018, NASA had not yet approved or implemented the planned reorganization. 
Figure 1 depicts the organization of the Office of the CIO, including relevant reporting relationships for center CIOs and mission directorate IT representatives, as of March 2018. We and NASA’s Office of Inspector General have reported on longstanding IT management weaknesses within the agency. For example, in October 2009, we reported that NASA had made progress in implementing IT security controls and aspects of its information security program, but that it had not always implemented appropriate controls to sufficiently protect the confidentiality, integrity, and availability of information and systems. We also identified control vulnerabilities and program shortfalls, which, collectively, increased the risk of unauthorized access to NASA’s sensitive information, as well as inadvertent or deliberate disruption of its system operations and services. We recommended that the NASA Administrator take steps to mitigate control vulnerabilities and fully implement a comprehensive information security program. The agency concurred with our eight recommendations and stated that it was taking actions to mitigate the information security weaknesses identified. In addition, NASA’s Office of Inspector General has issued 24 reports over the last 7 years on IT governance and security weaknesses at the agency. For example, in June 2013, the office reported that the decentralized nature of NASA’s operations and its longstanding culture of autonomy had hindered the agency’s ability to implement effective IT governance. Specifically, the report stated that the CIO had limited visibility and control over a majority of IT investments, operated in an organizational structure that marginalized the authority of the position, and could not enforce security measures across NASA’s computer networks. 
Moreover, the IT governance structure in place at the time was overly complex, did not function effectively, and operated under a decentralized model that relegated decision making about critical IT issues to numerous individuals across NASA, leaving such decisions outside the purview of the CIO. The Office of Inspector General made eight recommendations to the NASA Administrator for improving IT governance, including calling for all governance to be consolidated within the Office of the CIO to ensure adequate visibility, accountability, and integration into all mission-related IT assets and activities. The Administrator concurred with six and partially concurred with two of the recommendations and planned actions sufficient for the Office of Inspector General to close all eight recommendations as implemented. However, the Office of Inspector General later reported that the extent to which NASA had implemented the agreed-upon changes was in doubt based on subsequent audit findings that NASA was still struggling with limited agency CIO authority, decentralized IT operations, and ineffective IT governance. A follow-on report issued in October 2017 described a continued lack of progress in improving IT governance, determined that the CIO’s visibility into investments across the agency continued to be limited, and identified flaws in the process developed to improve governance. Specifically, the Office of Inspector General noted that the Office of the CIO had made changes to its IT governance boards over the past few years, but the boards had not made strategic decisions to substantively impact how NASA IT would be managed. According to the Office of Inspector General, slow implementation of the revised governance structure had left many IT officials operating under the previous inefficient and ineffective framework. 
The report also noted that, as of August 2017, the Office of the CIO had not finalized the roles and responsibilities for IT management, and that lingering confusion regarding security roles, coupled with poor IT inventory practices, had negatively impacted NASA’s security posture. Importantly, the report explained that the Office of the CIO continued to have limited influence over IT management within the mission directorates and at centers. The Office of Inspector General made five recommendations to the CIO that were intended to improve, among other things, governance and security. As of October 2017, NASA had concurred with three recommendations, partially concurred with two recommendations, and described corrective actions taken or planned. However, the Office of Inspector General found that NASA’s original proposed action to address the fourth recommendation was insufficient; thus, in December 2017, the agency established additional proposed actions to address that recommendation. We have identified a set of essential and complementary management disciplines that provide a sound foundation for IT management. These include the following:

Strategic planning: Strategic planning defines what an organization seeks to accomplish and identifies the strategies it will use to achieve desired results. We have previously reported that a defined strategic planning process allows an agency to clearly articulate its strategic direction and establish linkages among planning practices, such as goals, objectives, and strategies, and we have identified leading practices for agency planning.

Workforce planning: We have previously reported that it is important for an agency to have a strong IT workforce to help ensure the timely and effective acquisition of IT. In November 2016, we identified eight key workforce planning activities derived from the Clinger-Cohen Act of 1996 and relevant guidance, including memorandums and guidance from OPM and OMB, and prior GAO reports. 
These laws and guidance focus on the importance of setting the strategic direction for workforce planning, analyzing the workforce to identify skill gaps, developing strategies to address skill gaps, and monitoring and reporting on progress in addressing skill gaps.

IT governance: IT projects can significantly improve an organization’s performance, but they can also become costly, risky, and unproductive. In 1996, Congress passed the Clinger-Cohen Act, which requires executive branch agencies to establish a process for selecting, managing, and evaluating investments in order to maximize the value and assess and manage the risks of IT acquisitions. Agencies can maximize the value of their investments and minimize the risks of their acquisitions by having an effective and efficient governance process, as described in GAO’s guide to effective IT investment management.

Cybersecurity: Federal systems and networks are often interconnected with other internal and external systems and networks, including the Internet. When systems are interconnected, the number of avenues of attack increases and the attack surface expands. Effective security for agency systems and data is essential to prevent data tampering, disruptions in critical operations, fraud, and inappropriate disclosure of sensitive information, including personal information entrusted to the government by members of the American public. Taking action to assure that an agency’s contractors and partners are adequately protecting the agency’s information and systems is one way an agency can address cybersecurity risks. NIST has issued a suite of information security standards and guidelines that, collectively, provide comprehensive guidance on managing cybersecurity risk to agencies and any entities performing work on the agencies’ behalf. NIST’s cybersecurity framework was issued in February 2014 in response to Executive Order 13636. 
The framework outlines a risk-based approach to managing cybersecurity risk and protecting an organization’s critical information assets. Subsequent to the issuance of the cybersecurity framework, a May 2017 executive order required agencies to use the framework to manage cybersecurity risks. The order outlined actions to enhance cybersecurity across federal agencies and critical infrastructure to improve the nation’s cyber posture and capabilities against cybersecurity threats to digital and physical security.

NASA has not yet effectively established and implemented leading IT management practices for strategic planning, workforce planning, governance, and cybersecurity. Specifically:

- The agency’s IT strategic planning process is not yet fully documented, and its IT strategic plan lacks key elements called for by leading practices.
- NASA has not yet established an IT workforce planning process consistent with leading practices.
- The agency has taken recent action to improve its IT governance structure; however, it has not yet fully established that structure, documented improvements to its investment selection process, fully implemented investment oversight leading practices, or fully defined its policies and procedures for IT portfolio management.
- NASA has not fully established an effective approach to managing agency-wide cybersecurity risk. While it has designated a risk executive, the agency lacks a dedicated office to provide comprehensive executive oversight of risks. In addition, the agency-wide cybersecurity risk management strategy is currently in development, and the agency’s information security program plan does not address all leading practices and has not been finalized. Further, policies and procedures for protecting NASA’s information systems are in place, but the agency has not ensured that they are always current or integrated. 
Leading practices of IT strategic planning established in OMB guidance call for an agency to document its IT strategic planning process, including, at a minimum, documenting the responsibilities and accountability for IT resources across the agency. The guidance also calls for documenting the method by which the agency defines its IT needs and develops strategies, systems, and capabilities to meet those needs. NASA’s documented IT strategic planning process describes the responsibilities and accountability for IT resources across the agency. For example, NASA has assigned specific governance bodies with responsibility for developing and overseeing the implementation of the IT strategy. Also, in its IT strategic plan, NASA described key stakeholders across the agency that are responsible for the development of the plan. These stakeholders include the Associate CIOs, representatives from mission directorates, mission support organizations, and the centers. On the other hand, the methods by which the agency defines its IT needs and develops strategies, systems, and capabilities to meet those needs are not documented. For example, according to the IT strategic plan, the Office of the CIO is to perform a gap analysis to inform the development of NASA’s roadmap that translates its IT needs and the strategies identified for meeting those needs into tactical plans. The tactical plans are to define how the strategic plan will be incrementally executed to achieve the longer term goals. However, the Office of the CIO has not documented in its strategic planning policies and procedures how the CIO will perform the gap analysis or the methods for developing these tactical plans and roadmaps. 
This is particularly important since, according to officials in NASA’s Office of the CIO, the centers vary as to whether they have developed their own IT strategic plans or tactical plans, and the office does not oversee or review any center-level plans to ensure they align with the NASA IT strategic plan. According to officials in the Office of the CIO, NASA used a new model in formulating its IT strategy for fiscal years 2018 to 2021; for example, the agency included a broader set of stakeholders in the strategic planning cycle before documenting the strategic planning process. The officials stated that they intend to identify lessons learned from using this new model and formally document a complete and repeatable IT strategic planning process in the future. However, the agency has not established time frames for when the Office of the CIO will fully document its strategic planning process. Without a fully documented strategic planning process, NASA risks not being able to clearly articulate what it seeks to accomplish and identify the IT resources needed to achieve desired results in a way that is consistent and complete. In addition to calling for agencies to fully document the strategic planning process, leading practices from OMB guidance and our prior research and experience at federal agencies have shown that an agency should develop a comprehensive and effective IT strategic plan that (1) is aligned with the agency’s overall strategy; (2) identifies the mission of the agency, results-oriented goals, and performance measures that permit the agency to determine whether implementation of the plan is succeeding; (3) includes strategies, with resources and time frames, that the governing IT organization intends to use to achieve desired results; and (4) provides descriptions of interdependencies within and across projects so that they can be understood and managed. 
The resulting plan is to serve as an agency’s vision, or road map, and help align information resources with business strategies and investment decisions. NASA has taken steps to improve its IT strategic plan, but the updated plan is not comprehensive in that it does not fully address all four elements of a comprehensive and effective plan outlined above. In this regard, the agency had a prior strategic plan covering the time frame of March 2014 to November 2017. More recently, in December 2017, the CIO and Associate Administrator approved an updated plan for implementation. The updated plan is intended for use from the date it was approved through fiscal year 2021. Regarding the four elements of a comprehensive IT strategic plan, NASA’s prior plan addressed one element, partially addressed two elements, and did not address one element. The updated plan was slightly improved in that it addressed two elements, partially met one element, and did not meet one element of a comprehensive strategic plan. Table 1 provides a summary of the extent to which NASA’s prior IT strategic plan (covering the time frame of March 2014 to November 2017) and recently updated IT strategic plan (covering the time frame of December 2017 to fiscal year 2021) addressed key elements of a comprehensive strategic plan. NASA’s prior IT strategic plan was aligned with the agency’s overall strategic plan and identified the mission of the agency and results-oriented goals. However, these goals were not linked to specific performance measures that were needed to track progress and did not always describe strategies to achieve desired results. Additionally, this plan lacked descriptions of interdependencies within and across projects. NASA’s updated IT strategic plan is aligned with the agency’s overall strategic plan and identifies the mission of the agency and results-oriented goals. 
For example, the plan describes the agency’s IT vision, mission, principles, and objectives of five strategic goals—excellence, data, cybersecurity, value, and people. To support these goals, the plan defines 14 objectives to be accomplished over 4 years. For example, the plan defines objectives for increasing the effectiveness of NASA’s IT strategy execution through disciplined program and project management. In addition, NASA has improved upon the prior plan by identifying performance measures that allow the agency to determine whether it is succeeding in the implementation of its goals. For example, in order to increase the effectiveness of its IT strategy execution, the Office of the CIO expects 85 percent of projects to be in conformance with approved project plans by the end of fiscal year 2018. As another example, to prepare its employees to achieve NASA’s IT vision, the Office of the CIO plans to, by the end of fiscal year 2020, identify skills gaps and ways to close the gaps based on the workforce strategy. However, similar to the prior plan, the updated plan does not fully describe strategies NASA intends to use to achieve the desired results or descriptions of interdependencies within and across projects. Specifically, the plan discusses how the agency intends to achieve its strategic goals and objectives through various activities. For example, according to the plan, to increase the effectiveness of investment analysis and prioritization, NASA intends to implement a financial management process that integrates Office of the CIO, center, and mission directorate IT spending. The plan states that this process will map IT investments to NASA’s vision and strategy, as well as enable high-quality internal and external investment insight and reporting. 
However, the updated plan does not further describe the strategies NASA intends to use to accomplish these activities, including a schedule for significant actions and the resources needed to achieve this objective. For instance, the plan states that the Office of the CIO will define clear lines of authority and accountability for IT between the agency and NASA’s centers, but does not describe a strategy, including time frames and resources, for accomplishing this. Additionally, the plan does not describe interdependencies between projects, which is essential to help define the relationships within and across projects and major initiatives. According to NASA’s CIO, the updated strategic plan was kept at a higher level with the expectation that more detailed implementation plans (e.g., tactical plans and roadmaps) would define the necessary projects and interdependencies. However, NASA has not defined guidance for developing the implementation plans to ensure that any plans developed will fully describe strategies and interdependencies, or time frames for when these plans will be completed. Until NASA incorporates the key elements of a comprehensive IT strategic plan, it will lack critical information needed to align information resources with business strategies and investment decisions. Key to an agency’s success in managing its IT investments is sustaining a workforce with the necessary knowledge, skills, and abilities to execute a range of management functions that support the agency’s mission and goals. Achieving such a workforce depends on having effective human capital management consistent with workforce planning activities pursuant to federal laws and guidance. Specifically, OMB requires agencies to develop and maintain a current workforce planning process. 
In addition, we reported in 2016 on the importance of setting a strategic direction for IT workforce planning, identifying skills gaps and implementing strategies to address them, and monitoring and reporting on progress in addressing the identified skills gaps. We identified eight key IT workforce planning activities that are essential to agency efforts to establish an effective IT workforce:

1. establish and maintain a workforce planning process;
2. develop competency and staffing requirements;
3. assess competency and staffing needs regularly;
4. assess gaps in competencies and staffing;
5. develop strategies and plans to address gaps in competencies and staffing;
6. implement activities that address gaps (including IT acquisition cadres, cross-functional training of acquisition and program personnel, career paths for program managers, plans to strengthen program management, and use of special hiring authorities);
7. monitor the agency’s progress in addressing competency and staffing gaps; and
8. report to agency leadership on progress in addressing competency and staffing gaps.

The Office of the CIO has had IT workforce planning efforts underway since 2015 that are intended to address the workforce planning activities listed above; however, the office has not finalized or implemented any of the planned actions. The office recently began working to establish a more comprehensive workforce strategy for fiscal year 2019 to align with the agency’s increased emphasis on improving the overall workforce. Specifically, in the draft NASA Strategic Plan, the agency established a workforce development goal and two strategic objectives that relate to its IT workforce and call for, among other things, workforce training and efforts to increase cybersecurity awareness to reduce cybersecurity risks. Nevertheless, NASA has gaps in its IT workforce planning efforts. 
Of the eight key IT workforce planning activities that we previously outlined, NASA partially implemented five and did not implement three. Table 2 shows the extent to which NASA has implemented each IT workforce planning activity and provides examples of workforce practices planned or implemented, as well as those not yet undertaken. According to NASA’s CIO, the Office of the CIO put IT workforce planning activities on hold in 2015 pending the outcome of more comprehensive, agency-wide efforts. Specifically, the agency began planning and developing a new phased program—the Mission Support Future Architecture Program—designed to deliver workforce and other mission support services, including a talent management program. Phase 1 of the new phased Mission Support Future Architecture Program began in May 2017. According to the NASA CIO, the Office of the CIO is expected to be part of a future phase and to renew its IT workforce planning as part of that effort. However, the CIO did not have an estimate for when the Office of the CIO would join the program. Until NASA implements all of the key IT workforce planning activities discussed in this report, the agency will have difficulty anticipating and responding to changing staffing needs. Further, NASA will face challenges in controlling human capital risks when developing, implementing, and operating IT systems. Leading practices for governing IT, such as those identified by GAO in its IT investment management framework, call for agencies to establish and follow a systematic and organized approach to investment management to help lay a foundation for successful, predictable, and repeatable decisions. Critical elements of such an approach include instituting an IT investment board (or boards), developing and documenting a governance process for investment selection and for investment oversight, and establishing governance policies and procedures for managing the agency’s overall IT investment portfolio. 
Instituting an effective IT governance structure involves establishing one or more governance boards, clearly defining the boards’ roles and responsibilities, and ensuring that they operate as intended. Moreover, Section 811(a) of the National Aeronautics and Space Administration Transition Authorization Act of 2017 directs the agency to ensure that the NASA CIO, mission directorates, and centers have appropriate roles in governance processes. The act also calls on the Administrator to provide, among other things, an IT program management framework to increase the efficiency and effectiveness of IT investments, including relying on metrics for identifying and reducing potential duplication, waste, and cost. NASA has established three boards focused specifically on IT governance—an IT Council, which is its executive-level IT board; a CIO Leadership Team; and an IT Program Management Board, which provides oversight of programs and projects. Meeting minutes for the three IT-specific governance bodies identified above revealed that these groups are meeting as required by their charters. Further, two of NASA’s agency-wide councils (whose governance responsibilities extend beyond IT) also play a role in IT governance. Specifically, the Mission Support Council is the governance body to which the IT Council escalates unresolved decisions, and the Agency Program Management Council is responsible for reviewing and approving highly-specialized IT. In addition, NASA centers have the option to create center-specific IT governance boards to make decisions about center-level IT investments under the authority of center CIOs. Table 3 describes the roles of the IT-specific governance boards, the agency-wide councils with roles in IT governance, and the center-level IT governance boards. 
The table also includes additional details on how frequently the councils and boards meet, the dollar thresholds NASA has established to determine which investments each council or board reviews, and which officials serve as members of the boards. Although it has established and assigned responsibilities for the aforementioned governance councils and boards, NASA has not yet fully instituted an effective investment board governance structure for several reasons. Planned improvements to the IT governance structure are not yet complete. NASA has established new governance boards in addition to the boards listed above, but has not yet approved charters to guide their operations. Specifically, the Office of the CIO has revised its governance structure to establish six new boards, one for each of its IT programs. Agency officials, including the IT governance lead, reported that the boards had been established; however, as of December 2017, NASA had not yet approved charters defining the new governance bodies’ membership, functions, and interactions with other governance boards. Roles and responsibilities of the IT governance boards and agency-wide governance councils are not clearly defined. NASA continues to operate a federated governance model with decentralized roles and responsibilities for governance of mission and business IT investments. Business IT is selected and approved by the IT-specific governance boards, but mission IT follows a different path for investment selection in that it is not reviewed and approved by the CIO along with other IT investments proposed for selection. Instead, the Agency Program Management Council’s reviews focus on the selection of overall mission programs, and not on selecting IT. As a result, mission IT has historically been reported to the Office of the CIO only if the program has been designated as a major agency IT investment to be reported to OMB. 
NASA has begun making changes to its decentralized governance approach in response to provisions in legislation commonly referred to as the Federal Information Technology Acquisition Reform Act that are intended to ensure that the CIO has visibility into both mission and business IT investments. However, the agency has not yet developed policies and procedures to clarify how these changes will affect the CIO’s and governance boards’ roles and responsibilities. For example, in January 2017, the IT Council approved an updated definition for highly-specialized IT and established new expectations about the extent to which highly-specialized IT investments would be reviewed by the NASA CIO. However, NASA has not clarified roles and responsibilities for identifying such investments and ensuring they are reported by mission directorate programs to the CIO. In addition, the agency has not yet outlined procedures for how these investments that are overseen by the agency-wide Agency Program Management Council are to be reported to the CIO or IT-specific governance boards. During a January 2017 IT Council meeting, the NASA CIO acknowledged that roles and responsibilities for IT governance were unclear and that it would take 1 to 2 years to clarify them. In July 2017, the Deputy CIO recognized that significant work remained for NASA to achieve a consistent agency-wide governance approach with established roles and responsibilities. While the IT governance boards are meeting regularly, they are not consistently operating as intended. Board charters finalized in 2016 defined the membership for the governance boards and established expectations for the expertise to be made available to support board decisions. However, the boards are not consistently operating with all designated board members in attendance. 
For example, the Chief Engineer was designated as a member of the IT Council, but the council’s meeting minutes indicated that the Deputy Chief Engineer regularly attends the council meetings instead. In addition, IT Program Management Board meetings are consistently held with fewer voting members than designated by the board’s charter. The board’s meeting minutes indicated that fewer than six voting members regularly attend board meetings instead of the eight voting members outlined in the board charter. For example, the minutes showed that each meeting has been held with only one center and mission support directorate representative—instead of the two required by the charter. NASA officials, including the Associate CIO for Capital Planning and Governance, stated that planned efforts to update the governance structure and develop additional guidance for IT investment management have impacted the agency’s time frames for fully establishing its new boards and defining their roles and responsibilities. Specifically, these officials stated that the Office of the CIO is working to develop a comprehensive IT framework intended to update the governance structure, fully establish the new governance boards, and define governance roles and responsibilities. According to the officials, this framework is expected to be finalized in 2018, but the office did not provide a detailed schedule with milestones for completing the framework. Without a detailed schedule for updating the governance structure and establishing a comprehensive IT framework to help ensure that the revised governance boards are fully established and operating as intended, NASA may not be able to improve IT governance in accordance with the requirements in the National Aeronautics and Space Administration Transition Authorization Act of 2017. 
According to our IT investment management guide, defining policies and procedures for selecting investments provides investment boards and others with a structured process and a common understanding of how investments will be selected. Selection policies and procedures should, among other things, establish thresholds or criteria (e.g., investment size, technical difficulty, risk, business impact, customer needs, and cost-benefit analysis) for boards to use in identifying, analyzing, prioritizing, and selecting new IT proposals. In addition, outlining a process for reselecting ongoing projects is intended to support board decisions about whether to continue to fund projects not meeting established goals or plans. Using the defined selection process promotes consistency and transparency in IT governance decision making. Further, after the guidance has been developed, organizations must actively maintain it, making sure that it always reflects the board’s current structure and the processes that are being used to manage the selection of the organization’s IT investments.

NASA’s defined selection process policies and procedures assigned the CIO responsibility for ensuring that IT governance, investment management, and program/project management processes are integrated to facilitate the selection of appropriate IT investments. The agency has established multiple policies and procedures outlining certain aspects of how both mission programs and business IT investments are to be planned, such as standardized templates for requesting approval to plan investments and direction for teams to use in planning for investments. In addition, the Office of the CIO has established a Capital Planning and Investment Control Guide for business IT investments and issues annual budget guidance for requesting funding for IT investments. The agency’s selection process also includes specific IT governance processes developed by centers for the investments they review.
For example, Goddard Space Flight Center had developed additional center-specific guidance assigning lead responsibility for assessing new and ongoing projects. The center also has established predetermined criteria, such as whether projects conflict, overlap, or are redundant with other projects, and the risk if the investment was not funded.

Nevertheless, NASA’s established process does not yet define thresholds or criteria (e.g., qualitative or quantitative data) to be analyzed and compared when governance boards make decisions to select investments. Charters for NASA’s governance boards outline the functions these boards are to perform and direct them to be involved in IT governance. However, the charters do not outline specific thresholds or procedures that the boards are to follow in selecting investments. For example, NASA’s process does not fully define how investment risks are to be evaluated. NASA policy establishes dollar thresholds for IT governance board reviews, but does not define any other parameters for how risk will be evaluated. In addition, NASA has established an expectation that the new capital investment review process is to yield risk-based decisions for all investments and help mitigate IT security risks. However, guidance for capital investment reviews does not address how investment risks are to be evaluated.

Moreover, NASA’s selection process policies and procedures have not been updated to reflect efforts to improve governance. Its guidance for selecting investments (and for all aspects of its governance process) is fragmented, and the agency has not updated its policies and procedures to reflect current selection practices. In addition, this guidance does not yet reflect recent efforts to clarify and standardize the definitions of fundamental IT investment terms, such as “information technology” and “major” investments.
Further, while NASA has begun changing its selection process to ensure that the CIO and IT governance boards will be provided data about all IT investments, including mission IT investments such as highly-specialized IT, the agency’s selection policies have not been updated to reflect these changes. NASA’s Capital Planning and Investment Control Guide does not require all investments to be included in the selection process (or other IT governance processes), and the NASA Space Flight Program and Project Management procedures for mission program governance do not address whether or how the investments within mission programs are to be reported to the agency’s IT-specific governance boards.

In addition, NASA has not yet defined a reselection process for IT investments. Current policies and guidance for selecting investments do not clearly define a consistent approach for how performance is to be considered in reselecting investments. Without a defined reselection process, the agency’s boards lack structure and a common understanding about how to make decisions about whether to continue to fund projects not meeting established goals or plans.

NASA officials acknowledged that the current policies and procedures do not establish sufficient content within the business cases and IT plans for proposed investments to support effective governance decision making. The agency has begun working to update its policy for IT program and project management but did not expect to complete the update until April 2018. Further, even when this key IT investment management policy is updated, the agency will still need to update related policies and procedures to reflect changes it has made but not yet documented in the investment selection process. NASA has not yet established plans for when all needed updates to the policies and procedures will be completed.
Until NASA updates its IT governance policies and procedures to establish thresholds and procedures to guide its boards in decision making and outline a process for reselecting investments, the agency will be limited in its assurance that the investment selection process will provide a consistent and structured method for selecting investments. Further, until all relevant governance policies and procedures are updated to reflect current investment selection practices and proposed changes intended to provide the CIO with data about mission IT, the CIO will not be positioned to minimize investments that present undue risk to the agency and ensure accountability for both business and mission IT.

Organizations that provide effective IT investment oversight have documented policies and procedures that, among other things, ensure that data on actual performance (e.g., cost, schedule, benefit, and risk) are provided to the appropriate IT investment board(s). In addition, such organizations establish procedures for escalating or elevating unresolved or significant issues; ensure that appropriate actions are taken to correct or terminate underperforming IT projects based on defined criteria; and regularly track corrective actions until they are completed.

As with investment selection, NASA has established multiple policies and procedures for the oversight of IT investments. In October 2015, the agency added to its oversight processes by establishing a capital investment review process to improve the quality of the information available for investment oversight and established a matrix defining dollar thresholds to delineate oversight among the IT governance boards. The IT Program Management Board is also assigned specific oversight responsibilities for reviewing investment cost, schedule, performance, and risk at key lifecycle decision points for investments submitted for its review.
In addition, the IT Program Management Board’s charter requires this board to track, among other things, board decisions about investments and action items. In implementing NASA’s oversight practices, the IT Program Management Board consistently reviewed updates on investment performance (i.e., cost, schedule, and benefits) and progress. In addition, the IT Program Management Board’s oversight decisions about IT investments are documented in meeting minutes, and the board also records any action items identified for investments in the decision memorandums it submits to the CIO.

Nevertheless, we identified limitations in NASA’s established oversight policies and procedures. For example, the agency’s policies and procedures require IT investments to report data to the governance boards at key decision points but do not establish specific thresholds or other criteria for the governance boards to use in overseeing the investments’ performance or escalating investments to review by other boards. The oversight guidance also does not specify the conditions under which a project would be terminated. In addition, weaknesses we identified in oversight of specific NASA IT investments highlighted additional limitations of the established oversight process.

Specifically, NASA did not have a mechanism for alerting the IT Program Management Board to provide oversight if investments were underperforming or overdue for review. For example, significant schedule overruns did not trigger additional oversight for one investment. In March 2015, NASA approved the proposed design for an investment to implement a security tool in June 2015 at an expected cost of $1.3 million. Although the project fell 13 months behind schedule and encountered unforeseen challenges, the IT Program Management Board did not review the investment again until June 2017—2 years later.

Not all IT investments followed the established oversight process.
For example, in our review of governance board meeting minutes and documentation, we identified an investment that was close to completion before the IT Program Management Board reviewed its proposed design. Specifically, in February 2016, the board was asked—1 day before the investment was to become operational—to (1) approve the proposed design and (2) grant authority to operate for the investment intended for use by NASA staff and external partners. Although concerns about limited oversight were noted, the investment was approved.

Further, NASA lacks procedures to ensure that identified action items are tracked. We identified instances in which the IT Program Management Board did not consistently track action items identified for IT investments. NASA’s investments typically report back to the IT Program Management Board at future decision point reviews about steps taken to address documented action items. However, the board’s meeting minutes and documentation identified multiple examples of investments that were returned to the board at future decision points without reporting on whether identified action items had been addressed.

Moreover, NASA’s oversight processes do not encompass highly-specialized or other IT that supports mission programs. After reviewing NASA’s fiscal year 2015 budget request, OMB directed NASA to identify unreported IT investments throughout the agency to ensure that all related spending would be documented. NASA established a team in 2016 to explore how to identify such investments so that they could be reported to the CIO. The team initiated efforts to identify such investments in mission directorates and evaluated various mechanisms that NASA could employ to detect unreported IT. However, the agency has not yet finalized decisions about how to implement the team’s recommendations, including those for fully identifying investments for all mission directorates or determining which mechanisms to employ to identify unreported IT.
According to NASA officials, time frames for completing these activities have not yet been established. In July 2017, NASA officials, including the Deputy CIO, acknowledged, in governance board meeting minutes describing needed improvements, that the agency had not yet fully identified its IT footprint and needed to establish a comprehensive investment management process to address federal requirements, including those governing processes for selecting, reselecting, and overseeing IT investments. NASA officials explained that important progress had been made in improving oversight practices, but that efforts to implement more thorough capital investment reviews and identify IT investments across the agency had not yet been completed. The officials reported that they anticipated additional improvement to be made by the next annual budget cycle.

However, expanding NASA’s oversight of IT will require continued coordination with the mission directorates to work through any needed changes to the longstanding differences in NASA’s management of mission and business IT. The scope and complexity of such efforts are likely to be significant and may take time to plan and implement. Clearly defining how IT across the agency is to be identified and reported to the CIO would likely involve changes to policies and processes within and across NASA’s IT, engineering, and mission program areas and would involve expertise and collaboration from those same groups. Until such practices are fully established, NASA will continue to operate with limitations in its oversight process and projects that fall short of performance expectations. In addition, the agency will face increased risk that its oversight will fail to (1) prevent duplicative investments, (2) identify opportunities to improve efficiency and effectiveness, and (3) ensure that investment progress and performance meet expectations.
The IT investment management framework developed by GAO notes that, as investment management processes mature, agencies move from project-specific processes to managing investments as a portfolio. The shift from investment management to IT portfolio management enables agencies to evaluate potential investments by how well they support the agency’s missions, strategies, and goals. According to the framework, the investment board enhances the IT investment management process by developing a complete investment portfolio. As part of the process to develop a complete portfolio, an agency is to establish and implement policies and procedures for developing the portfolio criteria, creating the portfolio, and evaluating the portfolio.

NASA has not yet fully defined its policies and procedures for developing the portfolio criteria, creating the portfolio, and evaluating the portfolio. In its Annual Capital Investment Review Implementation Plan, dated October 2015, NASA began documenting policies for IT portfolio management and procedures for creating and evaluating the portfolio. For example, the procedures state that NASA is to update its IT portfolio annually in conjunction with the agency’s planning and budgeting process. Additionally, in its IT Capital Planning and Investment Control Process guide, dated October 2006, NASA outlined procedures the agency can use to analyze the portfolio by establishing factors that should be taken into consideration, including the relative benefits, costs, and risks of the investment compared to all other proposals and the strength of the investment’s linkage to NASA’s strategic business plan. However, these documents do not constitute a comprehensive IT portfolio management process in that they do not specifically define the procedures for creating and modifying the IT portfolio selection criteria; analyzing, selecting, and maintaining the investment portfolio; or reviewing, evaluating, and improving the performance of its portfolio.
Further, the policies and procedures have not been updated to reflect current NASA practices. Specifically, the current policies and procedures have not been updated to reflect changes the agency made to its capital investment review process that are relevant to portfolio management. According to NASA officials, the agency has not fully defined its policies and procedures because they are intended to be part of a new IT portfolio management framework that also requires NASA to make changes to its investment management process. Specifically, the IT portfolio management plan that NASA drafted in January 2017 called for the agency to develop new IT investment criteria, discover currently unreported IT investments, develop an investment review process, and implement an IT investment dashboard and reporting tool and a communications plan. Although the IT Council has not yet approved the IT portfolio management plan, NASA has begun work to address elements of the draft plan, including building the requirements for an IT dashboard and reporting tool for implementation in 2018. In addition, according to Office of the CIO officials, the capital planning team is continuing to work with stakeholders to develop a comprehensive IT framework and investment review process. However, no firm dates have been established for the approval and implementation of the final plan or the framework. Until NASA fully defines its policies and procedures for developing the portfolio criteria, creating the portfolio, and evaluating the portfolio, the agency will lack assurance that it is identifying and selecting the appropriate mix of IT projects that best meet its mission needs.

We have previously reported that securing federal government computerized information systems and electronic data is vital to the nation’s security, prosperity, and well-being.
Yet, the security over these systems is inconsistent and agencies have faced challenges in establishing cybersecurity approaches. Accordingly, we have recommended that federal agencies address control deficiencies and fully implement organization-wide information security programs. NIST’s cybersecurity framework is intended to support federal agencies as they develop, implement, and continuously improve their cybersecurity risk management programs. In this regard, the framework identifies cybersecurity activities for achieving specific outcomes over the lifecycle of an organization’s management of cybersecurity risk. According to NIST, the first stage of the cybersecurity risk management lifecycle—which the framework refers to as “identify”—is focused on foundational activities for effective risk management that provide agencies with the organizational understanding to manage cybersecurity risk to systems, assets, data, and capabilities. NIST also provides specific guidance for implementing foundational activities and achieving desired outcomes that calls for, among other things, the following:

A risk executive in the form of an individual or group that provides agency-wide oversight of risk activities and facilitates collaboration among stakeholders and consistent application of the risk management strategy.

A cybersecurity risk management strategy that articulates how an agency intends to assess, respond to, and monitor risk associated with the operation and use of the information systems it relies on to carry out the mission.

An information security program plan that describes the security controls that are in place or planned for addressing an agency’s risks and facilitating compliance with applicable federal laws, executive orders, directives, policies, or regulations.

Risk-based policies and procedures that act as the primary mechanisms through which current security requirements are communicated to help reduce the agency’s risk of unauthorized access or disruption of services.

However, NASA has not yet fully implemented these foundational activities of effective cybersecurity risk management.

According to NIST guidance, federal agencies should establish a risk executive in the form of an individual or group that provides organization-wide oversight of risk activities and facilitates collaboration among stakeholders and consistent application of the risk management strategy. This functional role helps to ensure that risk management is institutionalized into the day-to-day operations of organizations as a priority and integral part of carrying out missions. NASA has developed a policy regarding the establishment of a risk executive function in accordance with NIST guidance, but it has not fully implemented the policy. Specifically, the agency’s policy designates the Senior Agency Information Security Officer (SAISO) as the risk executive. According to the policy, the SAISO is charged with ensuring that cybersecurity is considered and managed consistently across the systems that support the agency and its partnerships—academic, commercial, international, and others that leverage NASA resources and extend scientific results. The policy also calls for the SAISO to establish an office with the mission and resources for information security operations, security governance, and cyber-threat analysis. In accordance with its policy, NASA has designated an Acting SAISO. Since April 2017, the Acting SAISO has led the IT Security Division within the Office of the CIO—an office that coordinates information security operations, security governance, security architecture and engineering, and cyber-threat analysis.
However, the agency has not yet established a risk executive office with assigned leadership positions and defined roles and responsibilities. According to NASA documentation, the agency had planned for the office to become operational by mid-December 2016. Agency officials, including the Acting Deputy Associate CIO for Information Security, explained that an IT security program office was not established in 2016 because the planned time frame for doing so was not realistic and failed to take into account other risk management efforts competing for available resources. For example, the officials stated that the agency was focused on a priority goal of deploying a centralized tool across its centers that would provide monitoring of implemented security controls to ensure they are functioning adequately. According to the NASA CIO, the agency planned to establish a comprehensive risk executive function by employing a cybersecurity risk manager in April 2018 and forming a program office—called the Enterprise Security Office—by September 2018. NASA’s new cybersecurity risk manager began work on April 2, 2018. The agency’s plan to have the new cybersecurity risk manager establish a comprehensive risk executive function should help ensure that current risk management efforts and decisions are appropriate and consistently carried out across the agency and its external partnerships.

NIST guidance states that federal agencies should establish and implement an organizational strategy for managing cybersecurity risk that guides and informs how the agency assesses, responds to, and monitors risk to the information systems being relied on to carry out its mission. The strategy should, among other things, make explicit an agency’s risk tolerance, accepted risk assessment methodologies, a process for consistently evaluating risk across the organization, risk response strategies, approaches for monitoring risk over time, and priorities for investing in risk management.
In 2015, NASA recognized the need to establish and implement an agency-wide strategy for managing its cybersecurity risks to address weaknesses it had identified with the decentralized approach it was using. Specifically, because the agency’s centers had independently developed approaches for managing cybersecurity risk, there was little integration regarding risk management and practices across the agency. Further, NASA determined that the decentralized, center-level approach did not provide sufficient transparency regarding risks that could affect mission directorate programs.

To overcome the limitations of its decentralized approach, NASA planned to develop and begin implementing a comprehensive cybersecurity strategy by the end of September 2016 that was expected to include the key elements identified in NIST guidance. For example, it was expected to define the agency’s risk tolerance, establish a methodology for identifying and assessing risks, and provide a clear understanding of NASA’s risk posture. However, the strategy was not completed as planned and is currently in development. According to officials in the Office of the CIO, including the Acting Deputy Associate CIO for Information Security, the strategy was not completed as planned due to the complexity and scope of the effort. For example, the officials stated that establishing an effective agency-wide strategy required insight into center-specific practices and significant input from stakeholders at all levels of NASA. In addition, these officials and the NASA CIO explained that the agency’s efforts were redirected in order to respond to a new executive order from the President to develop an action plan for adopting NIST’s cybersecurity framework in phases.
According to NASA’s CIO, the agency plans to move forward with drafting an agency-wide cybersecurity strategy that reflects the agency’s approach to using NIST’s framework; however, the agency has not yet established time frames for completing this effort. Until NASA establishes and implements a comprehensive strategy for managing its cybersecurity risks using NIST’s framework, its ability to make operational decisions that adequately address security risks and prioritize IT security investments will be hindered.

NIST recommends that federal agencies develop and disseminate an information security program plan that describes the organization-wide security controls that are in place or planned for addressing the agency’s risks and complying with applicable federal laws, executive orders, directives, policies, or regulations. Specifically, the plan should provide a description of the agency’s program management controls and common controls in place or planned for meeting relevant federal, legal, or regulatory requirements; include the identification and assignment of roles, responsibilities, and coordination among organizational entities responsible for different aspects of information security; define the frequency for reviews of the security program plan; and receive approval from a senior official with responsibility and accountability for the risk being incurred.

NASA issued a draft information security program plan in November 2017 that addresses many of the components called for in NIST guidance.
For example, the plan discusses program management controls that will be established, including the development of an inventory of its information systems, measures to determine information security performance, and an information security workforce development and improvement program; common controls that are to be implemented agency-wide, including configuration management, contingency planning, and personnel security; roles and responsibilities for promoting collaboration and providing consolidated unclassified security operations, and incident response and IT security awareness and training capabilities; and responsibility for ensuring that the information security program plan is maintained, approved by the NASA CIO, and reviewed annually. However, the plan is currently in draft and incomplete. For example, it does not yet describe the majority of the security functions and services that are to be carried out by the agency’s IT Security Division to address the relevant federal statutory and regulatory requirements. Specifically, the plan does not identify the agency-wide privacy controls derived from standards promulgated pursuant to federal law and guidance that, according to the agency, are an integral part of its security program. According to NASA’s Acting Deputy Associate CIO for Information Security, the information security program plan has not been finalized because of an upcoming revision to NIST’s guidance for implementing security controls. Specifically, a fifth revision of NIST SP 800-53 is planned for release in December 2018. NASA’s Acting Deputy Associate CIO for Information Security stated that the agency intends to finalize its draft plan after incorporating the updated NIST guidance. 
In the absence of an established information security program plan, NASA’s view of the security controls that protect its systems will remain decentralized, and it will lack assurance that it has established oversight over security controls for all of its systems. In addition, the agency will continue to operate its systems without defined and established information security requirements that are essential to agency-wide operations.

NIST Special Publication 800-53 recommends that agencies create policies and procedures to facilitate the appropriate application of security controls. If properly implemented, these policies and procedures can help reduce the risk posed by cybersecurity threats such as unauthorized access or disruption of services. Because risk-based policies and procedures are the primary mechanisms through which federal agencies communicate views and requirements for protecting their computing environments, it is important that they be established and kept current.

NASA has taken steps to document policies and procedures that address the security controls identified in NIST guidance for protecting information systems. For example, the agency established an overarching security policy that identified roles and responsibilities related to configuration management, contingency planning, and incident response. In addition, the agency issued procedures for implementing each of the NIST controls. However, NASA does not have current and fully integrated policies and procedures. For example, the agency’s overarching policy for implementing security controls expired in May 2017. In addition, approximately one-third of the documents that guide the implementation of these controls remained in effect past their expiration dates instead of being updated before expiration, as NASA policy requires.
Further, in July 2017, NASA determined that cybersecurity roles and responsibilities were not always clear or sufficiently integrated across policies. For example, responsibilities were not consistently well-defined in the policies for governance, IT security, program and project management, and systems engineering. In addition, although NASA’s Policy Directive 2810.1E, NASA Information Security Policy, provided the SAISO with responsibility for the agency’s cybersecurity risk, the policy assigned mission directorates control over risk decisions for their missions and programs, and the centers were given the authority to implement any technical changes needed to address risk.

NASA’s Procedural Requirement 2810.1A, Security of Information Technology, states that the agency’s SAISO is responsible for ensuring that information security policies and procedures are reviewed and appropriately updated. However, according to officials in the Office of the CIO, including the specialist for IT security, responsibilities for establishing, reviewing, and updating policies and procedures are being shared by two groups: the IT Security Division, led by the SAISO, and the Capital Planning and Governance Division. Specifically, the IT Security Division controls the content of IT-related policies and procedures but does not have control over the established NASA-wide process for reviewing the policies and procedures to determine if any changes are needed to the content. Instead, the Capital Planning and Governance Division is responsible for ensuring formal review and approval of any IT-related policies and procedures through the standard agency process and schedule.
Officials from the Office of the CIO, including the specialist for IT security, also stated that they intend to (1) establish a policy management framework that would provide the SAISO with more control over policies and procedures and include an annual document review, and (2) clarify and update cybersecurity roles and responsibilities in NASA policies. However, the agency has not yet developed a plan and specific time frame for completing these activities. In addition, the Acting Deputy Associate CIO for Information Security stated that having expired policies and procedures is not significant because they will remain in use until they are rescinded or superseded by updated versions. However, until NASA fully updates its policies and procedures to govern security over the agency’s computing environments, it will have limited assurance that controls over information are appropriately applied to its systems. NASA continues to pursue efforts to improve IT strategic planning, workforce planning, IT governance, and cybersecurity, but consistently lacks the documented processes needed to ensure that policies and leading practices are fully addressed. Specifically, the agency has taken steps to improve the content of its strategic plan and established an agency-wide goal for improving its workforce. In addition, after analyzing its IT management and governance structure, NASA took action to streamline its governance boards and standardize and strengthen its selection and oversight of investments, including initiating a portfolio management process. NASA has also moved toward new strategies and plans to bolster cybersecurity. Nevertheless, while NASA has made progress, the agency has not yet fully addressed many of the leading IT management practices noted in this report or completed efforts to increase the CIO’s authority over, and visibility into, agency-wide IT. 
Among other things, NASA has not fully documented a process for IT strategic planning or addressed all key elements of a comprehensive plan. In addition, it has not yet fully implemented a workforce planning process and has gaps in efforts to address leading practices. Regarding IT governance, its efforts to institute an effective governance structure and update policies and procedures for selecting IT investments are not yet complete. Moreover, NASA has not yet addressed weaknesses in its oversight practices or fully defined policies and procedures for developing an effective portfolio management process. Similarly, although NASA continues cybersecurity improvement efforts, important elements of an effective cybersecurity approach have not been completed, including establishing a risk management strategy, an information security program plan, and updated policies and procedures. Until NASA leadership fully addresses these leading practices, its ability to overcome its longstanding weaknesses and ensure effective oversight and management of IT across the agency will remain limited. Moreover, NASA may be limited in its ability to strengthen its risk posture, including ensuring effective cybersecurity across partnerships with commercial entities, federal agencies, and other countries. We are making 10 recommendations to the National Aeronautics and Space Administration: The Administrator should direct the Chief Information Officer to develop a fully documented IT strategic planning process, including methods by which the agency defines its IT needs and develops strategies, systems, and capabilities to meet those needs. (Recommendation 1) The Administrator should direct the Chief Information Officer to update the IT strategic plan for 2018 to 2021 and develop associated implementation plans to ensure it fully describes strategies the agency will use to achieve the desired results and descriptions of interdependencies within and across programs. 
(Recommendation 2) The Administrator should direct the Chief Information Officer to address, in conjunction with the Chief Human Capital Officer, gaps in IT workforce planning by fully implementing the eight key IT workforce planning activities noted in this report. (Recommendation 3) The Administrator should direct the Chief Information Officer to institute an effective IT governance structure by completing planned improvement efforts and finalizing charters to fully establish IT governance boards, clearly defining roles and responsibilities for selecting and overseeing IT investments, and ensuring that the governance boards operate as intended. (Recommendation 4) The Administrator should direct the Chief Information Officer to update policies and procedures for selecting investments to provide a structured process, including thresholds and criteria needed for, among other things, evaluating investment risks as part of governance board decision making, and outline a process for reselecting investments. (Recommendation 5) The Administrator should direct the Chief Information Officer to address weaknesses in oversight practices and ensure routine oversight of all investments by taking action to document criteria for escalating investments among governance boards and establish procedures for tracking corrective actions for underperforming investments. (Recommendation 6) The Administrator should ensure that the Chief Information Officer fully defines policies and procedures for developing the portfolio criteria, creating the portfolio, and evaluating the portfolio. 
(Recommendation 7) The Administrator should direct the Chief Information Officer to establish an agency-wide approach to managing cybersecurity risk that includes a cybersecurity strategy that, among other things, makes explicit the agency’s risk tolerance, accepted risk assessment methodologies, a process for consistently evaluating risk across the organization, risk response strategies, approaches for monitoring risk over time, and priorities for risk management investments; (Recommendation 8) an information security program plan that fully reflects the agency’s IT security functions and services and agency-wide privacy controls for protecting information; (Recommendation 9) and policies and procedures with well-defined roles and responsibilities that are integrated and reflect NASA’s current security practices and operating environment. (Recommendation 10) We provided a draft of this product to NASA for comment. In its comments, which are reproduced in appendix II, NASA concurred with seven of the recommendations, partially concurred with two recommendations, and did not concur with one recommendation. NASA partially concurred with our first and second recommendations. Specifically, consistent with the first recommendation, NASA agreed to fully document its strategic planning process, including the methods by which the agency defines IT needs and develops outcomes, strategies, major actions, and performance measures to meet those needs. In addition, our second recommendation called for NASA to update the strategic plan and develop associated implementation plans. With regard to updating the plan, NASA stated that its strategic plan provides the context and parameters to support achievement of the agency's vision and mission through the strategic use of IT. The agency also stated that this plan describes the business outcomes, strategies, major actions, and performance measures to achieve the desired results. 
With regard to the implementation plans related to our first and second recommendations, NASA agreed to develop the associated implementation plans for accomplishing the IT strategic plan, including descriptions of the interdependencies within and across programs. Nevertheless, in commenting on both recommendations, NASA stated that it does not believe that implementation plans, including specific IT capability and system changes, should be part of a strategic plan. The agency also maintained that the implementation plans, including descriptions of interdependencies within and across programs, are at a lower level than the IT strategic plan, since detailed IT implementation plans are more dynamic than the four-year NASA IT Strategic Plan. However, our first and second recommendations do not call for NASA to incorporate implementation plans within the strategic plan. Rather, as discussed in the report, it is important that NASA document how it intends to accomplish the activities outlined in the strategic plan. Further, we continue to believe that NASA should address the weaknesses we identified in this report by updating the strategic plan to incorporate strategies on resources and time frames to achieve desired results and descriptions of interdependencies within and across projects so that they can be understood and managed. Thus, we stand by both recommendations (recommendations 1 and 2) that the agency take these actions. NASA did not concur with our third recommendation to implement the IT workforce planning activities noted in our report. In this regard, the agency stated that its workforce improvement efforts were already underway. Specifically, NASA stated that IT workforce planning is part of the agencywide Mission Support Future Architecture Program. 
It added that, among other things, this program is intended to ensure that mission support resources, including the IT workforce, are optimally structured to support NASA’s mission. In addition, NASA referenced our two additional ongoing audits of the agency’s IT workforce, and noted that its activities related to IT workforce planning would be centered on any recommendations resulting from those audits. In our view, neither of these circumstances should hinder NASA from addressing our recommendation in this report. As of March 2018, the agency’s IT workforce plans were out-of-date and incomplete because activities the agency had been planning since 2015 had not been finalized in an approved plan or implemented. Further, NASA had not yet determined when the Office of the CIO would become an active part of the agencywide Mission Support Future Architecture program or developed plans for when that program’s assessment of the IT workforce would be completed. Thus, instead of limiting NASA’s ability to address our recommendation, implementing the workforce planning activities discussed in this report could complement the agency’s ongoing and future efforts. Specifically, NASA could use the IT workforce leading practices described in this report to strengthen any new workforce plans and assess the implementation of any planned improvements. Until NASA documents an IT workforce planning process and implements all of the key IT workforce planning activities, the agency may not be effectively positioned to anticipate and respond to changing staffing needs. Further, the agency is likely to face challenges in controlling human capital risks when developing, implementing, and operating IT systems. NASA concurred with our four recommendations aimed at addressing deficiencies in its IT governance (recommendations 4 through 7). In this regard, the agency described planned actions intended to address each of these recommendations. 
For example, among other activities, the agency stated that it intended to publish charters for all IT governance boards; have the IT Council review governance board operations annually; document criteria for escalating investments among governance boards; and update policies and procedures for managing its investments as a portfolio. Similarly, NASA concurred with our three recommendations related to establishing an agency-wide approach to managing cybersecurity risk (recommendations 8, 9, and 10). The agency described actions it had taken or planned to address each of these recommendations. In particular, with regard to establishing a cybersecurity risk management strategy (recommendation 8), NASA asserted that it had already taken actions that met the requirements of our recommendation. Specifically, NASA stated that it had established an approach to developing its cybersecurity risk management strategy by approving a charter for an agency-wide team to address cybersecurity risk management needs and hiring a Chief Cybersecurity Risk Officer to oversee agency-wide risk management initiatives. While these actions constitute steps toward addressing the recommendation, we disagree that establishing a charter for a team and hiring a Chief Cybersecurity Risk Officer fully addresses the recommendation. As previously noted in this report, the agency does not have a cybersecurity risk management strategy that includes elements of NIST guidance. The strategy should, among other things, make explicit the agency’s risk tolerance, accepted risk assessment methodologies, a process for consistently evaluating risk across the organization, risk response strategies, approaches for monitoring risk over time, and priorities for investing in risk management. Ensuring that the established agency-wide team and the Chief Cybersecurity Risk Officer develop a cybersecurity risk management strategy that aligns with the NIST guidance will be essential to fully address our recommendation. 
NASA also provided technical comments on the draft report, which we incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Administrator of the National Aeronautics and Space Administration, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. Should you or your staffs have any questions on information discussed in this report, please contact Carol Harris at (202) 512-4456 or harriscc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The National Aeronautics and Space Administration Transition Authorization Act of 2017 included a provision for us to review the effectiveness of the agency’s approach to overseeing and managing information technology (IT), including its ability to ensure that resources are aligned with agency missions, cost effective, and secure. Our specific objective for this review was to address the extent to which the National Aeronautics and Space Administration (NASA) has established and implemented leading IT management practices in strategic planning, workforce planning, governance, and cybersecurity. To address this objective, we compared NASA’s IT management policies, procedures, and other documentation to criteria established by federal laws and leading practices. This documentation included the agency’s strategic plans, workforce gap assessments, governance board meeting minutes and briefings, charters, policies and procedures, and other documentation of the Chief Information Officer’s (CIO) authority. We also reviewed relevant reports by GAO and the NASA Office of Inspector General. 
With regard to IT strategic planning, we identified the strategic plans and related planning guidance issued by NASA and the Office of the CIO, including NASA’s Governance and Strategic Management Handbook, dated November 26, 2014; NASA’s Information Resources Management Strategic Plan, dated March 2014; and NASA’s updated Information Technology Strategic Plan for fiscal years 2018 to 2021. We then reviewed the agency’s overall strategic plan, and evaluated its previous and current IT strategic plans against key practices for IT strategic planning that we have previously identified. These practices call for documenting the agency’s IT strategic planning processes and developing an IT strategic plan that aligns with the agency’s overall strategy; identifies the mission of the agency, results-oriented goals, and performance measures that permit the agency to determine whether implementation of the plan is succeeding; includes strategies the governing IT organization will use to achieve desired results; and provides descriptions of interdependencies within and across projects so that they can be understood and managed. To determine the extent to which NASA has established and implemented leading IT workforce planning practices, we conducted a comparative analysis of NASA’s IT workforce planning policies and documents. Specifically, we compared agency documents, such as NASA policy directives, the desk guide, and documentation of efforts to establish IT workforce competencies and staffing requirements and conduct gap assessments, to GAO’s IT workforce framework. GAO’s framework consists of four IT workforce planning steps and eight key activities. 
The eight key activities were identified in federal law, regulations, and guidance, including the Clinger-Cohen Act of 1996, the legislation referred to as the Federal Information Technology Acquisition Reform Act, Office of Management and Budget (OMB) guidance, the Office of Personnel Management’s Human Capital Framework, and GAO reports. Based on our assessment of the documentation and discussions with agency officials, we assessed the extent to which the agency implemented, partially implemented, or did not implement the activities. We considered an activity to be fully implemented if NASA addressed all of the underlying practices for the activity; partially implemented if it addressed some but not all of the underlying practices for the activity; and not implemented if it did not address any of the underlying practices for the activity. We assessed IT governance practices by comparing NASA documentation to critical processes identified by GAO in the IT investment management framework. To align our work with the provision in Section 811(a) of the National Aeronautics and Space Administration Transition Authorization Act of 2017 calling for NASA to take actions regarding IT governance, we selected critical processes from Stage 2 of the framework: instituting the investment board; selecting and reselecting investments that meet business needs; and providing investment oversight. For each critical process, we compared key practices outlined in the framework to NASA documentation. The documentation we reviewed included NASA’s IT governance policies and procedures, and charters and other guidance. We also reviewed governance board meeting minutes and briefings from each board’s first meeting in 2016 through meetings held in August 2017. 
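The three-level rating rubric described above (fully, partially, or not implemented) amounts to a simple classification rule. The following Python sketch is purely illustrative of that rule and is not part of GAO’s methodology; the function name and signature are hypothetical.

```python
def implementation_status(practices_addressed: int, practices_total: int) -> str:
    """Classify an activity per the rubric: 'fully implemented' if all
    underlying practices are addressed, 'partially implemented' if some
    but not all are addressed, and 'not implemented' if none are."""
    if practices_total <= 0:
        raise ValueError("an activity must have at least one underlying practice")
    if practices_addressed == practices_total:
        return "fully implemented"
    if practices_addressed > 0:
        return "partially implemented"
    return "not implemented"

# Hypothetical examples of rating an activity with four underlying practices:
print(implementation_status(4, 4))  # fully implemented
print(implementation_status(2, 4))  # partially implemented
print(implementation_status(0, 4))  # not implemented
```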
In addition, we selected key practices for effective governance from Stage 3 of the IT investment management framework regarding establishing and implementing policies and procedures for developing the portfolio criteria, creating the portfolio, and evaluating the portfolio. We then compared documentation, including NASA’s IT Capital Planning and Investment Control Process guide, dated October 2006; Annual Capital Investment Review Implementation Plan, dated October 2015; and draft IT portfolio management plans, against these practices. Using standards and guidance from the National Institute of Standards and Technology (NIST), which identify foundational elements of effective cybersecurity risk management, we evaluated NASA’s cybersecurity risk management approach by analyzing policies and plans for establishing comprehensive executive oversight of risk; evaluating documents and plans for establishing a cybersecurity risk management strategy; comparing a draft Information Security Program Plan to determine if it was consistent with NIST guidance; and analyzing policies and procedures to determine if they address relevant NIST security controls and are current. In addition to assessing NASA headquarters, we reviewed IT management practices at two of the agency’s nine centers (Marshall Space Flight Center in Huntsville, Alabama; and Johnson Space Center in Houston, Texas) and at one of NASA’s four mission directorates (the Human Exploration and Operations Mission Directorate). The two centers and one mission directorate were selected because they had the largest fiscal year 2017 IT budgets, as reported on the federal IT dashboard. We also visited the Goddard Space Flight Center in Greenbelt, Maryland, because of the center’s proximity to GAO. The results of our work at the selected NASA centers and mission directorate are not generalizable to other NASA centers and mission directorates. 
To assess the reliability of these data, we compared them to budgetary data obtained directly from NASA’s Office of the CIO. We found the data to be sufficiently reliable for the purpose of identifying the NASA centers and mission directorate with the largest IT budgets. We also interviewed cognizant officials with responsibilities for IT management at NASA headquarters and for the selected centers and mission directorate. We conducted this performance audit from May 2017 to May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. In addition to the contact name above, the following staff also made key contributions to this report: Eric Winter (Assistant Director), Donald Baca, Rebecca Eyler, Amanda Gill (Analyst in Charge), Tom Johnson, Kate Nielsen, Teresa Smith, and Niti Tandon.
|
NASA depends heavily upon IT to conduct its work. The agency spends at least $1.5 billion annually on IT investments that support its missions, including ground control systems for the International Space Station and space exploration programs. The National Aeronautics and Space Administration Transition Authorization Act of 2017 included a provision for GAO to review the effectiveness of NASA's approach to overseeing and managing IT, including its ability to ensure that resources are aligned with agency missions and are cost effective and secure. Accordingly, GAO's specific objective for this review was to determine the extent to which NASA has established and implemented leading IT management practices in strategic planning, workforce planning, governance, and cybersecurity. To address this objective, GAO compared NASA IT policies, strategic plans, workforce gap assessments, and governance board documentation to federal law and leading practices. GAO also assessed NASA IT security plans, policies, and procedures against leading cybersecurity risk management practices. The National Aeronautics and Space Administration (NASA) has not yet effectively implemented leading practices for information technology (IT) management. Specifically, GAO identified weaknesses in NASA's IT management practices for strategic planning, workforce planning, governance, and cybersecurity. NASA has not documented its IT strategic planning processes in accordance with leading practices. While NASA's updated IT strategic plan represents improvement over its prior plan, the updated plan is not comprehensive because it does not fully describe strategies for achieving desired results or describe interdependencies within and across programs. Until NASA establishes a comprehensive IT strategic plan, it will lack critical information needed to align resources with business strategies and investment decisions. 
Of the eight key IT workforce planning activities, the agency partially implemented five and did not implement three. For example, NASA does not assess competency and staffing needs regularly or report progress to agency leadership. Until NASA implements the key IT workforce planning activities, it will have difficulty anticipating and responding to changing staffing needs. NASA's IT governance does not fully address leading practices. While the agency revised its governance boards, updated their charters, and acted to improve governance, it has not fully established the governance structure, documented improvements to its investment selection process, fully implemented investment oversight practices and ensured the Chief Information Officer's visibility into all IT investments, or fully defined policies and procedures for IT portfolio management. Until NASA addresses these weaknesses, it will face increased risk of investing in duplicative investments or may miss opportunities to ensure investments perform as intended. NASA has not fully established an effective approach to managing agency-wide cybersecurity risk. An effective approach includes establishing executive oversight of risk, a cybersecurity risk management strategy, an information security program plan, and related policies and procedures. As NASA continues to collaborate with other agencies and nations and increasingly relies on agreements with private companies to carry out its missions, the agency's cybersecurity weaknesses make its systems more vulnerable to compromise. Until NASA leadership fully addresses these leading practices, its ability to ensure effective management of IT across the agency and manage cybersecurity risks will remain limited. GAO is making 10 recommendations to NASA to address the deficiencies identified in NASA IT strategic planning, workforce planning, governance, and cybersecurity. 
NASA concurred with seven recommendations, partially concurred with two, and did not concur with one. GAO maintains that all of the recommendations discussed in this report remain valid.
|
Unmanned systems provide DOD with capabilities for conducting a range of military operations, including environmental sensing and battlespace awareness; chemical, biological, radiological, and nuclear detection; counter-improvised explosive device capabilities; port security; precision targeting; and precision strike. DOD’s unmanned systems operate in different warfighting “domains,” including the air, land, and maritime environments. As shown in figure 1, DOD categorizes its unmanned systems into five groups by domain (i.e., aerial and maritime, including surface and underwater) and other attributes of size and capability. Group 1 UASs weigh less than 20 pounds and operate below 1,200 feet in altitude, whereas group 5 UASs weigh more than 1,320 pounds and operate above 18,000 feet. Similarly, USVs are categorized in five groups, increasing in size and capability from very small to extra-large, and UUVs are categorized in four groups: small, medium, large, and extra-large. Various offices within the Office of the Secretary of Defense and the Department of the Navy have roles and responsibilities for evaluating the appropriate mix of personnel for the Navy’s and the Marine Corps’ total workforces. According to Section 129a of Title 10 of the U.S. Code, which governs DOD’s general policy for total force management, the Secretary of Defense is required to establish policies and procedures for determining the most appropriate and cost efficient mix of military, federal civilian, and contractor personnel to perform the missions of the department. Section 2463 of Title 10 requires the Under Secretary of Defense for Personnel and Readiness (USD(P&R)) to devise and implement guidelines and procedures to ensure consideration is given to using DOD civilian employees to perform new functions and functions that are performed by contractors and could be performed by civilian employees. 
DOD policies also establish roles and responsibilities for the USD(P&R): DOD Directive 1100.4 establishes departmental policy concerning workforce management, including multiple responsibilities for the USD(P&R) (e.g., reviewing the workforce management guidelines and practices of DOD components for compliance with established policies and guidance). DOD Instruction 1100.22 implements policy set forth under DOD Directive 1100.4; assigns responsibilities; and prescribes procedures for determining the appropriate mix of military, federal civilian, and contractor personnel. The instruction assigns to the USD(P&R) the responsibility for overseeing the instruction’s implementation and working with component heads to ensure that they establish policies and procedures consistent with this instruction. DOD Instruction 7041.04 states that DOD’s USD(P&R), the Comptroller, and the Director of Cost Assessment and Program Evaluation are responsible for developing a DOD-wide cost model for estimating and comparing the full costs of DOD workforce and contract support. Section 129a of Title 10 of the U.S. Code directs the Secretary of Defense to delegate responsibility for the implementation of policies and procedures established by the Secretary to the Secretaries of the military departments. In accordance with this delegation, the Secretary of the Navy has overall responsibility for requirements determination, planning, programming, and budgeting for policies and procedures for determining the appropriate and cost-effective mix of personnel. 
DOD policies establish the following roles and responsibilities for the military department Secretaries, including the Secretary of the Navy, and heads of other DOD components: DOD Directive 1100.4 requires the component heads to designate an individual with full authority for workforce management, to include responsibility for, among other things, developing annual personnel requests to Congress that consider the advantages of converting from one form of support (active or reserve military servicemembers, federal civilians, or private sector contractors) to another for the performance of a specified function, consistent with Section 129a of Title 10 of the U.S. Code. DOD Instruction 1100.22 establishes that the component heads should require that their designated workforce authority issue implementing guidance requiring the use of the instruction when determining workforce mix for current, new, or expanded missions. Secretary of the Navy Instruction 5430.7R assigns authority for workforce management in the Department of the Navy, including workforce mix issues, to the Assistant Secretary of the Navy for Manpower and Reserve Affairs. Concurrently with a weapon system’s development through DOD’s acquisition process, the Navy and the Marine Corps determine the numbers and types of personnel and skills required for their unmanned systems. The personnel requirements development process generally begins with the program manager from a Navy systems command (e.g., Naval Air Systems Command for Navy and Marine Corps aircraft and Naval Sea Systems Command for ships and submarines) that is responsible for supervising the management of assigned acquisition programs. The program manager and systems command utilize Navy policies and other inputs to formulate initial requirements. 
In doing so, the program manager coordinates any Navy personnel requirements with the Office of the Chief of Naval Operations and other entities such as the Navy Personnel Command and commands that will operate and maintain the systems, such as the U.S. Fleet Forces Command and the Commander, Naval Air Forces. For Marine Corps aircraft systems, the program manager from the Naval Air Systems Command coordinates with Marine Corps headquarters entities, such as the Deputy Commandant for Aviation and the Deputy Commandant for Combat Development and Integration. The program manager and systems command calculate the cost of personnel as part of a system’s total life cycle cost. The program manager validates personnel requirements as program changes dictate and, at a minimum, annually over a system’s life cycle. The Navy and the Marine Corps staff the units that will operate and maintain their unmanned systems by filling the required positions to the extent possible, based on the number of positions funded and the number of trained and qualified personnel available to fill them. This staffing process is managed in the Navy by the Navy Personnel Command and in the Marine Corps by the Deputy Commandant for Manpower and Reserve Affairs. The Navy and the Marine Corps are in the process of rapidly growing their portfolios of unmanned systems, but have not evaluated the use of alternative workforces—specifically, the use of federal civilian employees and private sector contractors as unmanned system operators. DOD Directive 1100.4 states that authorities should consider all available sources when determining workforce mix, including federal civilians and contractors, and that personnel shall be designated as federal civilians except in enumerated circumstances. According to DOD Instruction 1100.22, the initial steps in planning for personnel requirements include determining categories of eligible personnel (e.g., military servicemembers, federal civilian employees, or private sector contractors). 
These determinations are based on whether the activities to be performed are “military essential” (the activity must be performed by a military servicemember), “inherently governmental” (the activity could be performed by a military servicemember or a federal civilian employee), or “commercial” (the activity could be performed by military servicemembers, federal civilians, or private sector contractors). Military servicemembers and federal civilians must be considered before the services may consider using contractors to perform a function. In the absence of workforce alternative analyses, the services have decided to rely solely on military servicemembers as operator workforces for all of their unmanned systems, including the eight systems we reviewed in detail. For all eight case studies, Navy and Marine Corps officials told us that their decisions to rely on servicemembers as operators were based on the pre-existing force structure of personnel who were already trained in related mission areas. For seven of the eight selected systems, the officials stated that they did not evaluate the use of federal civilians or contractors in their determinations to use military personnel for their operator workforces. For the eighth system, the MQ-4 Triton UAS, the Navy evaluated using contractor personnel, but did so without first considering the use of federal civilian employees, as DOD policy requires. In a 2009 analysis for the Triton, the Navy concluded that comparing the cost-effectiveness of military personnel and federal civilian employees was beyond the expertise of the working group that performed the analysis. Ultimately, the Navy decided to use military personnel as Triton operators. According to senior-level officials from OUSD(P&R), there are concerns within the department about the level of consideration the military services have applied to workforce mix alternatives for unmanned system operators. 
As a result, OUSD(P&R) and other entities from the Office of the Secretary of Defense commissioned the Institute for Defense Analyses to conduct a study, published in June 2016, on alternative staffing strategies to enable DOD to accomplish UAS-related missions more cost-effectively. The study found that staffing alternatives exist for each service and could produce cost savings. According to the Institute for Defense Analyses' report, the use of enlisted personnel for a portion of the Navy's and the Air Force's UAS operator workforces offers the potential for savings, as could the use of limited duty officers or warrant officers. The Institute for Defense Analyses also reported that federal civilian employees of DOD could generate the most substantial savings of the options studied if they were used in combination with military servicemembers as UAS operators responsible for the launch and recovery of air vehicles. OUSD(P&R) officials stated that this latter approach would free up military servicemembers to fill key positions for supporting military readiness in other areas of operations that are military personnel essential, and better leverage the services' limited military personnel end strengths. In September 2016, OUSD(P&R) issued a proposal for an additional study of UAS staffing options, which stated that the Department of the Navy's workforce mix determination (i.e., relying on military servicemembers as operators) is “immature and infeasible” and that any recommended approaches should also be applied to unmanned maritime systems. OUSD(P&R) has also commissioned a study to clarify circumstances in which military servicemembers should be considered essential for certain positions, which is expected to be complete by the end of fiscal year 2018. OUSD(P&R) officials stated that they plan to continue their efforts to expand awareness of these studies and of the available workforce mix alternatives for UAS operators with military service officials. 
On the basis of our discussions with Navy and Marine Corps workforce planners, key reasons for not evaluating workforce alternatives for unmanned system operators were that planners did not believe it was necessary and did not believe that federal civilian employees or private sector contractors were viable alternatives to military servicemembers for such roles and functions. For example, officials cited concerns that federal civilians cannot serve aboard Navy ships or provide rapid deployment capability. However, officials from OUSD(P&R) told us that these concerns are inaccurate, noting that federal civilian employees have deployed on Navy ships. Further, we note that DOD's Expeditionary Civilian Workforce comprises federal civilian employees across DOD components who are available to deploy within 120 days of notice to meet urgent requirements. DOD officials responsible for the Expeditionary Civilian Workforce program stated that such personnel are intended to be predictable, reliable, and effective so that the military services will source them and the combatant commands can depend upon them. Further, service workforce planners stated that relevant service-level guidance is unclear on when and how such personnel can and should be considered for performing in operational roles and in deployable positions. The Navy's and the Marine Corps' policies do not provide details about the types of operational roles specific to a service, including those related to unmanned system operators, that could be filled with federal civilians or private sector contractors, nor do the policies provide guidance on the limitations and benefits of using these personnel sources, such as those identified in DOD-commissioned reports and our prior work. For example, military personnel can be the most costly of the three personnel categories, and shortages exist in certain functions that have been deemed military essential and are in high demand, such as fighter pilots. 
On the other hand, federal civilians and private sector contractors can be cost-effective and may augment military servicemembers on a short-term basis if needed (see table 1). Federal internal controls standards emphasize the importance of having clear, updated policies that align with an organization's mission and goals. Officials from the Office of the Secretary of the Navy for Manpower and Reserve Affairs agreed that the cited service policies do not provide the sort of detail and clarity that could aid planners and decision makers in determining eligible personnel categories for their workforces and weighing the benefits and limitations thereof. Clarifying their respective workforce planning policies could help workforce planners better understand when, where, and how federal civilians or contractors may serve in operational roles (e.g., from shore or from underway naval vessels) and what the benefits and limitations are. The use of military servicemembers, rather than federal civilians or private sector contractors, as unmanned system operators may indeed be the most appropriate and cost-effective workforce option for the Navy and the Marine Corps. However, the services will not have certainty about the basis for such decisions without first clarifying workforce planning policies and then applying the revised policies to evaluate the use of all personnel resources available to them for future unmanned systems. The Navy and the Marine Corps have efforts underway to develop requirements for operators, maintainers, and other support personnel needed for selected unmanned systems. According to Navy information, personnel requirements for three systems are sufficient, and the sufficiency of requirements for four other systems has not yet been determined. However, the Navy and the Marine Corps have not updated personnel requirements and the related cost estimate for the RQ-21 Blackjack UAS based on deployment data. 
Furthermore, the Department of the Navy has not fully evaluated and updated policies or clarified goals that may inform future personnel requirements development and updates to requirements. The Navy and the Marine Corps have efforts underway to develop requirements for operators, maintainers, and other support personnel needed for selected unmanned systems, commensurate with each system’s maturity in DOD’s acquisition process. The USVs associated with the littoral combat ships, the Snakehead Large Displacement UUV, and the MQ-25 Stingray UAS are in earlier phases of both acquisition and personnel requirements development and, according to Navy information, the precise number of required personnel will be determined and updated as the systems progress through acquisition. On the other hand, the MK 18 UUVs, MQ-8 Fire Scout UAS, MQ-4 Triton UAS, and RQ-21 Blackjack UAS have matured the furthest through DOD’s acquisition process. The Navy and the Marine Corps have identified personnel requirements, and service officials told us they have reviewed their sufficiency as units have trained and deployed with the systems. Although future modifications to personnel requirements for the MK 18 UUVs, the MQ-8 Fire Scout, and the MQ-4 Triton may be needed as their inventories and the pace of deployments increase, Navy officials told us the numbers of operators are appropriate at this time to meet mission objectives based on available deployment data and feedback from operators. For the RQ-21 Blackjack UAS, however, Navy and Marine Corps headquarters and command entities disagree with unit-level officials about the sufficiency of the personnel requirements. Marine Corps UAS squadrons have identified a requirements shortfall of 13 to 21 personnel per detachment to support each RQ-21 Blackjack UAS. The UAS squadrons have established that a total of 22 personnel are necessary to form a detachment sufficiently sized to support operations with the UAS. 
Marine Corps unit-level officials told us that this personnel requirement is based on the numbers needed to conduct training and deployments since the first Blackjack system was delivered in 2015, for which 22 to 30 personnel have been needed per detachment to meet mission requirements. In contrast, higher-level command and service headquarters entities in the Navy and the Marine Corps have established a requirement of nine Marine Corps personnel per detachment, including three enlisted UAS operators and one UAS officer along with maintenance and support personnel. In their written rebuttal of the 9-person requirement, squadron officials told the Navy and the Marine Corps that 13 more personnel are needed to support operations for 10 to 12 hours per day, or up to 24 hours a day for 10-day surges in operations, and to comply with naval aviation maintenance procedures. Marine Corps officials also told us that the squadrons believe these additional personnel are essential for supporting the workload and levels of supervision they believe are necessary to operate and maintain an RQ-21 Blackjack UAS and avoid mishaps and damage to the aircraft during recovery. DOD policy directs that personnel requirements should be driven by workload and established at the minimum levels necessary to accomplish mission and performance objectives. In addition, according to a Navy instruction, personnel requirements must be validated as program changes dictate and, at a minimum, annually over a system's life cycle to determine if an update is required. The Navy instruction also identifies guidelines for average weekly working hours and personnel availability for different tasks, which are key elements in the calculation of personnel requirements. The instruction states that routinely exceeding these guidelines to meet workloads should be avoided because doing so can adversely affect unit morale, retention, and safety. 
With respect to the RQ-21 Blackjack UAS, Marine Corps officials stated that the concept of operations for the service's employment of the system to support Marine Expeditionary Units has changed and that the 9-person detachment requirement was based on the outdated concept of operations. As a result, Marine Corps officials told us that the personnel requirements for the squadrons that operate the system are too low to support the associated workloads and that service headquarters-level decision makers have not yet updated them based on the most current and enduring concept of operations for the system. Marine Corps officials stated that efforts are underway to review the differences in personnel requirements deemed necessary by squadrons and headquarters-level entities as training and deployments continue, which is a positive step. However, according to the program office, the personnel requirements were not changing at the time of this report. Until the Navy and the Marine Corps update the personnel requirements for the RQ-21 Blackjack based on the most current and enduring concept of operations and deployment data, the services will lack current information about the number of operators needed for the squadrons that operate the system. In addition, the Navy and the Marine Corps have not updated the life cycle cost estimate for the RQ-21 Blackjack UAS to include the additional personnel that Marine Corps squadrons have needed for current operations and expect to need for future operations and deployments. The program office estimated the total Marine Corps personnel cost for the RQ-21 Blackjack, based on detachments of 9 personnel each, at approximately $371 million over the program's expected 19-year life cycle—nearly 20 percent of the Marine Corps' life cycle cost for the program. 
However, this estimate may be too low because Marine Corps squadrons have reported that they need up to 21 more personnel per detachment to support the workload associated with the system, as discussed previously. DOD guidance requires that components determine a weapon system program's life cycle cost by planning for the many factors needed to support the system, including personnel. Decision makers use this information to determine whether a new program is affordable and whether the program's projected funding and personnel requirements are achievable. In addition, the Office of Management and Budget's Capital Programming Guide indicates that to keep the cost analyses for capital assets, such as weapon systems, current, accurate, and valid, cost estimates should be continuously updated based on the latest information available as programs mature. The Navy and the Marine Corps have updated the life cycle cost estimate for the RQ-21 Blackjack to account for changing assumptions, such as the expected usage rate of spare parts for system repairs, but not for the additional Marine Corps personnel that squadrons have reportedly needed for deployments. Without updating the cost estimate as appropriate after updating personnel requirements, the Navy and the Marine Corps may not have current information about the Marine Corps' RQ-21 Blackjack UAS life cycle cost and affordability. The Department of the Navy has taken some positive steps but has not fully evaluated and updated its aviation policies for operation and maintenance of certain UAS to inform the development of future personnel requirements. According to officials from the Navy Manpower Analysis Center, correctly determining personnel workload and the related numbers of personnel required for operation and maintenance is especially critical for UAS units because of the safety risks associated with operating in shared airspaces and over populated areas. 
These officials also stated that naval aviation policies—which apply to manned aircraft and UAS—affect the workload of operators and maintenance personnel and the numbers required to achieve a squadron's mission and meet the standards prescribed in the policies. For example, the Naval Air Training and Operating Procedures Standardization manual contains provisions for pilot fatigue and the hours pilots can fly compared with the hours they must rest. Further, the Naval Aviation Maintenance Program instruction prescribes standards for performing and documenting quality assurance steps for maintenance tasks, among other things. Our review of these selected policies found that some naval aviation standards have been modified to account for UAS separately from manned aircraft, and to some extent between UAS of different sizes and capabilities. The Naval Air Training and Operating Procedures Standardization manual was updated in 2016 with a new chapter for UAS policies and operations. The Naval Aviation Maintenance Program instruction has been updated to specify that UAS in groups 3, 4, and 5 will always be governed by the policy in the same manner as manned aircraft, with a few exceptions, such as compass calibration. Notwithstanding these updates, Marine Corps headquarters- and unit-level officials told us that the policies have not been fully reviewed and updated to account for differences in UAS of varying sizes and capabilities, especially group 3 UAS, which are those systems weighing 55 to 1,320 pounds. According to these officials, applying certain procedures and standards from these policies equally across different sizes of UAS is problematic for group 3 UAS in particular, which includes the RQ-21 Blackjack. The officials stated that the application of such standards affects workloads and personnel levels in a way that prevents squadrons from accomplishing their missions as efficiently as possible. 
Specifically, they stated that upholding current naval aviation standards is one key reason—the other being changes to the concept of operations for the RQ-21 Blackjack—for having staffed up to 21 more personnel per RQ-21 Blackjack detachment than the 9-person requirement discussed earlier in this report. Applying naval aviation operating and maintenance standards equally across different sizes of UAS may not align with the Marine Corps’ concept of operations, which states that all UAS are intended to be recovered by landing or capture even though they may be expendable. Each RQ-21 Blackjack system includes five air vehicles, more than one of which could be unavailable for assigned missions at the same time. For example, Marine Corps officials told us that damage to RQ-21 Blackjack air vehicles can be caused by weather, a deficiency with the air vehicle itself, a crash landing, or a combination of factors, and up to three air vehicles could be unavailable at a time. These officials told us that holding the RQ-21 Blackjack to maintenance standards designed for other non-expendable aircraft may not be efficient because their application has a limited effect on mishap rates relative to the additional personnel needed to uphold the standards. Moreover, in discussion groups we held with Marine Corps UAS operator personnel, operators mentioned that mishap investigations performed to existing standards sideline operators from training pending the investigation’s outcome. Such standards also apply to the Navy’s larger, non-expendable UAS like the MQ-8 Fire Scout and the MQ-4 Triton. According to DOD Directive 1100.4, existing policies, procedures, and structures should be periodically evaluated to ensure efficient and effective use of personnel resources. Further, federal internal controls standards emphasize the importance of having clear, updated policies that align with an organization’s mission and goals. 
Such goals could include the Department of the Navy's goal to accelerate the development and fielding of unmanned systems, and the Marine Corps' emphasis on reducing operator workload and providing effective and efficient support to mission execution and decision making. For example, the Marine Corps' UAS concept of operations envisions a future in which one UAS operator will perform multiple functions, as opposed to the current approach in which multiple Marines are necessary for a single mission. We found that the Navy has taken a preliminary step to further evaluate what policy changes may be needed to support unmanned systems by establishing an advisor position for this purpose within the Naval Innovation Advisory Council. The advisor is responsible for making recommendations to the Secretary of the Navy and other senior leaders to streamline policy and remove roadblocks that hinder innovation, among other things. In addition, the program manager for the RQ-21 Blackjack and the Marine Corps' Deputy Commandant for Combat Development and Integration are supporting a research effort through the Naval Postgraduate School to improve the efficiency and effectiveness of naval aviation maintenance procedures for group 3 UAS, according to a Marine Corps official who is leading this effort. While these are positive steps, the time frames for making such policy changes have not been identified. In addition, we did not find evidence that the Navy has taken or planned related steps, such as determining whether future reductions to personnel requirements could be accomplished, and what cost savings or benefits to UAS operations might result, if policies were further updated to account for UAS of different sizes and capabilities. The Navy has thus far prioritized the evaluation and modification of acquisition-related policies to expedite the delivery of unmanned systems to units, consistent with a 2015 memorandum from the Secretary of the Navy. 
Unless the Navy and the Marine Corps prioritize updating policies for operating and maintaining UAS of different sizes and capabilities, they may miss opportunities to effectively and efficiently use personnel resources as system inventories grow. The Department of the Navy also lacks clear overarching goals for informing future unmanned system personnel requirements and the level of priority that should be assigned to these systems and the units that operate them for the purpose of personnel resourcing decisions. While DOD's Unmanned Systems Integrated Roadmap, FY2013-2038 stated that the department must strive to reduce the number of personnel required to operate and maintain its unmanned systems, the Department of the Navy has not affirmed this goal or communicated any other personnel goals for its unmanned system development. Department of the Navy documents we reviewed for unmanned systems expressed goals that are less directly related to personnel requirements, including expanding the range of operations and reducing costs and risks to personnel safety and mission success. As previously mentioned, the Navy has prioritized the evaluation and modification of acquisition-related policies to expedite the delivery of unmanned systems to units, consistent with a 2015 memorandum from the Secretary of the Navy. Navy and Marine Corps officials we spoke with who are responsible for the RQ-21 Blackjack and other case study systems we reviewed told us they did not believe the Department of the Navy has a clear and overarching goal for unmanned system personnel requirements, either now or over the long term. For example, officials stated that they did not know if the Department of the Navy expects that fewer personnel should be needed to operate and support unmanned systems than the numbers of personnel required for other types of systems. 
Without such clarity about personnel-related goals and priority levels, some officials expressed concern that using the term “unmanned” systems conveys expectations that technological advances can substantially reduce personnel requirements in the near term, and that funding for related personnel resources is a lower priority than that for other system types. For example, a senior Navy personnel official told us that the Navy's past goals and related efforts to reduce the personnel required for its ship crews—an initiative referred to as optimal manning—make them cautious about whether the same goals and efforts will be adopted for unmanned systems and could produce similar, undesirable effects on readiness. Navy officials at three commands also stated they are concerned that resources for unmanned system personnel over future years may not keep pace with the increasing inventories of the systems if a lower priority is assigned to them in budget decisions in the absence of goals and clarity over priorities. The Navy's Commander, Submarine Forces, identified a personnel shortfall for supporting increased UUV inventories as its second-highest personnel priority for the Navy's fiscal year 2019 budget deliberations to help underscore to headquarters entities the importance of personnel resources for such systems. According to Navy officials, the Navy has since authorized the requested addition of 66 personnel to the command to augment the sole unit that will operate the Snakehead Large Displacement UUV along with increasing inventories of other types of UUVs. Federal internal controls standards state that an agency's management should define goals clearly to enable the identification of risk. Applying this standard to the Department of the Navy's acquisition and operations of unmanned systems, such goals could include whether or not unmanned systems should require fewer personnel resources than manned counterparts. 
Until the Secretary of the Navy clarifies overarching goals for unmanned system personnel requirements and resource priority levels and communicates them to requirements planners and budget decision makers, the services will be hampered in developing future personnel requirements and identifying risks as system inventories grow and operations expand. The Navy and the Marine Corps have developed staffing approaches to select, train, and track unmanned systems operators and to retain some UAS operators by offering special and incentive pays. However, both services face challenges in ensuring that there are sufficient UAS operators to meet personnel requirements. Yet neither service has assessed the commercial drone industry to inform its retention approach for UAS operators. Although Marine Corps UAS operators and officers report low morale and career satisfaction, the Marine Corps has not fully explored the use of human capital flexibilities to address these workforce challenges. In the Navy, unmanned system operations are secondary skills for personnel from related communities. For its UASs in groups 4 and 5, for example, the Navy utilizes personnel from manned aviation communities within the same mission areas, such as MH-60 helicopter pilots and aircrew who are selected and then trained to operate the MQ-8 Fire Scout UAS. Likewise, Navy officials stated that personnel from related communities are selected and trained to operate USVs and UUVs. The Navy is taking steps to track these trained operator personnel by using secondary skill identification codes. According to Navy officials, these identification codes will help personnel managers monitor the inventories of personnel with unmanned system operator qualifications and provide a temporary surge in capability if needed. In contrast to the Navy’s approach, the Marine Corps has a primary career field for operating UAS, including enlisted and officer personnel. 
The Marine Corps replenishes its UAS operator and officer personnel inventories by selecting from eligible applicant groups. To become UAS operators, enlisted marines must achieve minimum test scores comparable to those required for other high-skill occupations, such as intelligence specialists. Eligible groups include new graduates of recruit training and experienced marines who apply for a lateral transfer from another occupational specialty. UAS officers take a separate test battery and must attain the same minimum scores as other officers who are selected for manned naval aviation training. They are selected from three sources: new graduates of officer training; pilot or flight officer trainees who do not complete their manned aircraft qualification; and experienced officers seeking a transfer from another occupational specialty, including pilots of manned aircraft. Following their selection, enlisted personnel and officers must complete 5 months of Army UAS training courses or 6 months of Air Force UAS training courses, respectively. The Marine Corps then assigns a primary occupation identification code to trained personnel, which facilitates tracking their inventory to help meet requirements. To help retain sufficient numbers of personnel to meet requirements, both the Navy and the Marine Corps have offered special and incentive pays to personnel who operate UASs. Navy personnel who serve as air vehicle operators for the MQ-8 Fire Scout and MQ-4 Triton or as MQ-4 Triton tactical coordinators are eligible for two types of aviation pays based on their qualification as pilots or naval flight officers rather than their UAS assignments—monthly “flight pay” of up to $1,000 and aviation career continuation pay bonuses of $75,000 for a new 5-year contract, as of fiscal year 2017. 
Marine Corps UAS officers are not offered special and incentive pays, but enlisted operators have been eligible for a selective reenlistment or selective retention bonus since 1998, which ranged from $8,250 to $19,750 in fiscal year 2017 for qualified marines who committed to an additional 4 years of service. Based on our analysis, the Navy faces challenges with meeting personnel requirements for UAS operators, although, according to Navy officials, it is too soon to know if personnel shortfalls may arise with unmanned maritime systems because many programs are in early stages of development. Navy officials told us they have sufficient numbers of personnel to operate the current inventory of UAS, which included 49 MQ-8 Fire Scouts and 2 MQ-4 Tritons as of September 2017. As UAS inventories increase, the Navy has reported growing retention challenges among its pilots and naval flight officers over the past 3 years as the U.S. economy improves and commercial airline hiring increases. Navy aviation and workforce planning officials told us this could affect the ability to fill both its manned aviation and UAS personnel requirements. According to proposals for the Navy's aviation retention bonus program, future retention shortfalls are expected in the helicopter, maritime patrol and reconnaissance, and E-2 Hawkeye communities, among others. The first two communities are sources of personnel for the MQ-8 Fire Scout and MQ-4 Triton and, according to Navy officials, the latter community is being considered as a personnel source for the MQ-25 Stingray. In particular, the Navy has reported concerns about the future retention of its maritime patrol and reconnaissance pilots because their experience directly translates to a commercial 737 aircraft. 
Additionally, the Navy has reported shortages and significant retention issues in meeting requirements for its reserve helicopter and maritime patrol and reconnaissance pilots, communities that the Navy uses to augment its available inventories of active duty pilots who also operate UASs. Based on our analysis, the Marine Corps has experienced past shortfalls of UAS operators through fiscal year 2017. Since the first fiscal year of available data after the inception of the Marine Corps’ career specialty for UAS officers in 2012, personnel inventories have increased but fallen short of requirements (see fig. 2). For fiscal years 2013 through 2017, the Marine Corps was substantially short of captains, majors, and lieutenant colonels (i.e., O3, O4, and O5 pay grades) to serve as UAS officers. Consistent with this trend, the Marine Corps has designated UAS officer inventories as unhealthy since fiscal year 2013. Marine Corps officials told us these shortfalls could be attributable to the annual growth in requirements for this new community. They also stated that they do not currently anticipate retention challenges for UAS officers. However, according to these officials, their predictions about UAS officer retention for future years are based on data from other longer established career fields as proxies until more UAS officer data are available. For fiscal years 2007 through 2017, inventories of enlisted UAS operators increased in all but one year, but fell short of requirements (see fig. 3) in part due to substantial yearly shortfalls of certain junior enlisted personnel. According to a Marine Corps official, the UAS operator inventory will exceed requirements in fiscal year 2018 because the requirement has decreased by about 60 percent from the previous year. 
However, the Marine Corps has leveraged lateral personnel transfers from other occupations to meet approximately 33 to 89 percent of its yearly retention quotas for first-term UAS operator reenlistments since fiscal year 2010 (see fig. 3 above). A Marine Corps personnel planning official told us that personnel transfers have been helpful and necessary for meeting retention quotas. However, other Marine Corps officials told us that heavily leveraging transfers shows that the UAS community is not retaining its own experienced operators—that is, UAS operators who have attained proficiency and advanced skills and been deployed. For more senior enlisted UAS operators eligible for a second reenlistment or beyond, the Marine Corps has fallen short of its retention quotas for fiscal years 2015 through 2017. Despite the current and future challenges previously discussed, Navy and Marine Corps officials told us that the services have not used information about the commercial drone industry to inform their use of special and incentive pays because they did not believe doing so was needed. Marine Corps officials told us that they have not observed a retention problem for UAS operators and officers, and that unless they miss retention goals in 3 consecutive years they will not consider changing financial incentives—i.e., increasing bonuses for enlisted UAS operators or offering special and incentive pays to UAS officers. Until such time, pilots who are selected for the UAS career field are informed by the Marine Corps that their flight pay and aviation continuation pay bonus eligibility will be terminated. Another Marine Corps official with knowledge of the UAS community told us that studying the commercial drone industry and the potential effect on retention is timely because the services must program for the necessary resources for financial incentives 2 years in advance of the budget year.
They stated that after 3 years of missing retention goals the problem could persist for another 2 years before additional funds were available to increase retention bonuses, given the programming and budget cycle. Navy workforce planning officials acknowledged that they are concerned about increasing difficulty in providing sufficient numbers of mid-career pilots to meet the Navy's aviation requirements over future years, which include UAS operator requirements. In addition to competition from commercial airlines, Navy officials told us a growing labor market in the commercial drone industry could exacerbate pilot retention challenges for those with secondary qualifications to operate UAS. However, they added that little is known about the demand and available wages in that industry. Likewise, Marine Corps officials told us that past challenges in meeting requirements and retaining experienced operators could persist in future years, and hiring in the commercial drone industry could affect retention. These officials stated that the Air Force could also pose a future retention challenge for the Marine Corps' UAS operator community. The Air Force offers the potential for higher pay to its UAS operators than the Marine Corps along with larger and more capable types of UAS. The Air Force reported to Congress in July 2017 that its projections of enlisted UAS operator retention indicate that a bonus may be necessary as soon as 2022. During discussion groups we held with Marine Corps UAS operators, enlisted operators cited the potential for higher pay for their skills outside the Marine Corps as a factor that has influenced reenlistment decisions among them or their peers. Operators in one group told us that three of their five RQ-21 Blackjack instructors were former enlisted operators from their squadron who secured employment with the RQ-21 Blackjack's manufacturer as private sector contractors.
DOD’s 2012 Eleventh Quadrennial Review of Military Compensation determined that organizations should assess civilian supply and demand and civilian wages to develop the most cost effective special and incentive pay strategies. We reported in February 2017 that conducting such an assessment is a key principle of effective human capital management by which to evaluate DOD’s special and incentive pay programs. Our report also found that the services do conduct such assessments for aviation, nuclear propulsion, and cybersecurity occupations. Without assessing the commercial drone industry and using such information to inform retention approaches, including the use of special and incentive pays, the Navy and the Marine Corps may not know if their approaches are effectively tailored to ensure a sufficient number of UAS operators are available to meet future requirements. The Marine Corps has experienced workforce challenges with its career field for UAS officers and enlisted operators, including diminished morale and career satisfaction and short periods of time in which operators are trained and available to UAS squadrons before their contract or squadron assignment ends. Results of a 2015 Marine Corps survey of UAS officers showed that about 65 percent of captains and first lieutenants who responded were dissatisfied with their career and about 75 percent of that group cited low job satisfaction as influencing their decision to leave the Marine Corps. UAS officers and enlisted operators in all eight discussion groups we held told us about factors that enhance their morale, including the opportunities to learn and to shape their community and their positive deployment experiences, but they also discussed factors that negatively affect their job satisfaction. UAS operators in all enlisted groups cited the frequency of personnel turnover in the squadron as a source of frustration in developing and retaining expertise with the RQ-21 Blackjack. 
Officers told us they feel like a lower tier priority in Marine Corps aviation for reasons ranging from the lack of a uniform insignia device akin to those awarded to manned aircraft pilots (i.e., pilot “wings”), to confusion over the strategy and missions for Marine Corps UAS now and in future years. UAS officers also told us they desired assignments to positions outside the UAS squadrons that they believed would enhance their leadership ability, but such positions had not consistently been available to them because they were needed to fill squadron billets. For example, the Marine Corps has limited or restricted UAS officers from applying for in-residence professional military education opportunities in past years because they could not be diverted from billets requiring their qualifications due to inventory shortages. UAS operators and officers spend approximately 2 years or more of their 3-year squadron assignment awaiting and completing training to attain proficiency and advanced skills with the RQ-21 Blackjack UAS. After training and deployment, they may have about 4 months or fewer to impart their knowledge and deployment experience to others in the squadron before they reach the end of their squadron assignment, the end of their service obligation, or both (see fig. 4). According to Marine Corps officials we spoke with, the loss of experienced UAS operators who do not reenlist and are replaced by lateral transfers from other careers results in diminished UAS expertise among mid-career enlisted members in the squadrons. These officials told us that personnel who transfer to the UAS career to replace experienced operators must spend at least 2 years in training for initial qualification and then proficiency on the RQ-21 Blackjack. Moreover, Marine Corps officials told us that a portion of the UAS operators who reenlist past their first contract must fulfill 3-year special duty assignments outside the UAS community.
They stated that this exacerbates the diminished squadron expertise and is the reason that some operators leave rather than reenlist in the Marine Corps. Although the Marine Corps has taken steps to address challenges with UAS operator inventories by using special and incentive pays for enlisted operators and limiting opportunities that would divert officers away from squadrons, as previously discussed, it has not fully explored flexibilities for managing its UAS career fields more effectively to help meet requirements. Employing flexibilities to improve job satisfaction could help improve retention of experienced personnel in an already-challenged environment. For example, the Marine Corps has not authorized available aviation special and incentive pays for UAS officers in spite of challenges meeting personnel requirements. As mentioned previously, pilots who are selected for the UAS career field are informed by the Marine Corps that their flight pay and aviation continuation pay bonus eligibility will be terminated. The Marine Corps has incentivized enlisted personnel from certain specialties, such as aircraft maintenance, both to reenlist and to remain in a specified unit as recently as fiscal year 2018, but has not offered this opportunity to UAS operators. By considering longer UAS operator contracts, the Marine Corps could increase the availability of experienced operators to squadrons, where they can pass on their knowledge and skills to junior enlisted personnel. Our prior work has identified that a key principle for effective strategic human capital planning is that organizations should ensure that flexibilities are part of the overall human capital strategy to ensure effective workforce planning.
According to Marine Corps officials, they have not taken additional steps to address workforce challenges in part because inventories of UAS operators and officers have grown and squadrons have generally attained readiness goals and accomplished their deployment missions despite personnel shortages. Further, these officials stated that low morale and diminished career satisfaction could be partially caused by the current transition from the RQ-7 Shadow UAS to the RQ-21 Blackjack, and to the relative newness of the officer career field. Without exploring these or other human capital flexibilities to improve morale and career satisfaction and maximize operators' availability to squadrons, the Marine Corps may face continued challenges in meeting personnel requirements and the growing demands of expanding operations and increasing UAS inventories. Moreover, as the Marine Corps budgets for additional resources to establish its own school for UAS operator training, flexibilities that could improve retention and maximize operator availability could also help ensure the greatest return on its investment in the UAS operator workforce. For almost 20 years we have identified strategic management of human capital as a high-risk area across government in part because of persistent gaps in mission critical skills. With the Navy's commitment to accelerate the delivery of unmanned systems to the fleet and its budget of nearly $10 billion to develop and procure those systems in fiscal years 2018 through 2022, having sufficient personnel with the appropriate skills at the right time will be critical. To that end, without additional actions to improve their workforce planning the Navy and the Marine Corps may not be positioned to support their expanding unmanned systems operations.
Specifically, lacking clear workforce planning policies, decision makers may not know when they should consider using federal civilian employees and private sector contractors as alternatives in determining the most appropriate and cost-effective workforces for their unmanned system operators. With respect to personnel requirements development, until the Marine Corps’ requirements and related cost estimates for the RQ-21 Blackjack UAS are updated, the services will lack current information about the number of operators needed and their affordability. Further, unless the Navy and the Marine Corps prioritize policy updates for operating and maintaining UAS of different sizes and capabilities they may miss opportunities to effectively and efficiently use personnel resources as system inventories grow. Without assessing the commercial drone industry and using that information to inform retention approaches, the Navy and Marine Corps may not know whether special and incentive pays are effectively tailored to ensure a sufficient number of UAS operators are available to meet future requirements. The Marine Corps, in particular, may continue to face challenges in meeting requirements and growing operational demands until it examines additional flexibilities to improve morale and career satisfaction among its UAS operator workforce and maximize the availability of operators serving in its squadrons. Overall, unmanned systems are key to future Navy and Marine Corps operations, but for these systems to be effective the services need to ensure that they take the necessary actions to provide sufficient personnel. We are making the following ten recommendations to DOD. 
The Secretary of the Navy should ensure that:

The Chief of Naval Operations should clarify workforce planning policies to identify circumstances in which federal civilian employees and private sector contractors may serve in operational roles and what the benefits and limitations are of using federal civilians and private sector contractors as alternative workforces. (Recommendation 1)

The Chief of Naval Operations should, after clarifying workforce planning policies, apply the revised policies to evaluate the use of alternative workforces (including federal civilian employees and private sector contractors) for future unmanned system operators. (Recommendation 2)

The Commandant of the Marine Corps should clarify workforce planning policies to identify circumstances in which federal civilian employees and private sector contractors may serve in operational roles and what the benefits and limitations are of using federal civilians and private sector contractors as alternative workforces. (Recommendation 3)

The Commandant of the Marine Corps should, after clarifying workforce planning policies, apply the revised policies to evaluate the use of alternative workforces (including federal civilian employees and private sector contractors) for future unmanned system operators. (Recommendation 4)

The Commander, Naval Air Systems Command, in coordination with the Deputy Commandant of the Marine Corps for Combat Development and Integration, should update the Marine Corps personnel requirements associated with the RQ-21 Blackjack UAS based on the most current and enduring concept of operations and utilize the updated requirements in planning for UAS squadron personnel requirements. (Recommendation 5)

The Commander, Naval Air Systems Command, should update the life cycle cost estimate for the RQ-21 Blackjack UAS to make adjustments as appropriate after updating the personnel requirements for the system. (Recommendation 6)

The Deputy Chief of Naval Operations for Warfare Systems (N9), in coordination with the Deputy Commandant for Aviation, should prioritize continued efforts to fully evaluate policies for operating and maintaining UAS of different sizes and capabilities, such as group 3 UAS—to include establishing completion time frames, determining whether reductions to personnel requirements could be accomplished, and identifying any associated cost savings and the benefits to the UAS squadrons' ability to complete missions—and update such policies as needed. (Recommendation 7)

The Secretary of the Navy should clarify overarching goals for unmanned systems' personnel requirements, including related priority levels for resourcing purposes, and communicate them to requirements planners and budget decision makers. (Recommendation 8)

The Chief of Naval Personnel and the Deputy Commandant for Manpower and Reserve Affairs should assess civilian supply, demand, and wages in the commercial drone industry and use the results to inform retention approaches, including the use of special and incentive pays for UAS operators. (Recommendation 9)

The Deputy Commandant for Aviation and the Deputy Commandant for Manpower and Reserve Affairs should examine the use of additional human capital flexibilities that could improve the career satisfaction and retention of experienced UAS operators and maximize their availability to squadrons. Such flexibilities could include authorizing available special and incentive pays; permitting UAS operators to extend their enlistments to serve longer within squadrons; ensuring the availability of career- and promotion-enhancing opportunities for professional military education; considering the use of a potential insignia device for operators; or extending UAS operator contract lengths. (Recommendation 10)

We provided a draft of this report to DOD for review and comment.
In its written comments, reproduced in appendix III, DOD concurred with eight of our recommendations and partially concurred with two recommendations. DOD also provided technical comments on the draft report, which we incorporated as appropriate. With regard to our recommendation to assess civilian supply, demand, and wages in the commercial drone industry and use the results to inform retention approaches, DOD partially concurred. DOD stated that it will assess competitive markets, both externally and internally, and then analyze the usage of incentive pays for UAS operators when retention rates and inventory levels of personnel display decreasing trends. DOD added that such analysis would be premature if conducted before initial operational capability is attained for each UAS because retention behaviors and air crew dynamics are not yet established. As noted in our report, the Navy and the Marine Corps have each attained initial operational capability with one UAS (i.e., the MQ-8 Fire Scout B-variant and the RQ-21 Blackjack) and quantities of these and other UAS are expected to increase in future years. Additionally, the Marine Corps has designated UAS officer inventories as unhealthy since fiscal year 2013. Accordingly, we continue to believe that conducting such assessments and using the results are timely and important steps to ensure enough personnel to meet future operator requirements. DOD partially concurred with our recommendation to examine the use of additional human capital flexibilities that could improve the career satisfaction and retention of experienced UAS operators. DOD stated that human capital flexibilities are constantly under review. Further, DOD stated that the UAS community is still in its infancy, but as it continues to grow and become healthier, assignment opportunities and flexibilities will become more prevalent and special and incentive pays will be examined as retention rates dictate. 
Such efforts would meet the intent of our recommendation if the opportunities and flexibilities DOD considers include the other examples cited in our recommendation. That is, we continue to believe that DOD should also consider permitting UAS operators to extend their enlistments to serve longer within squadrons; ensuring the availability of career- and promotion-enhancing opportunities for professional military education; considering the use of a potential insignia device for operators; and extending UAS operator contract lengths. We are providing copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Navy, and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The Navy's MQ-8 Fire Scout unmanned aerial system (UAS) (B and C variants) is intended to provide real-time imagery and data in support of intelligence, surveillance, and reconnaissance missions for surface, anti-submarine, and mine warfare. The system is part of the surface warfare and mine countermeasures mission packages of the littoral combat ships. The MQ-8 system comprises one or more air vehicles with sensors, a control station, and ship equipment to aid in vertical launch and recovery. According to the program office, the MQ-8C has 90 percent commonality with the previously developed MQ-8B. The primary differences between the two are structural modifications to accommodate the MQ-8C's larger airframe and fuel system.
The manufacturer has delivered 49 aircraft to the Navy as of September 2017 (including 30 B variants and 19 C variants), and 11 more aircraft (C variants) are scheduled to be delivered by fiscal year 2019. The Navy attained initial operational capability with the B variant of the Fire Scout in fiscal year 2014, and plans to attain initial operational capability with the C variant in December 2018, depending on the availability of the littoral combat ship from which it deploys. A composite aviation detachment embarked on a littoral combat ship consists of up to 24 personnel, including operator air crews equipped with one MH-60 helicopter and one MQ-8 Fire Scout UAS. An air crew consists of two personnel: one air vehicle operator and one mission payload operator. There is no additive personnel requirement associated with operators of the MQ-8 Fire Scout because these personnel already reside within existing expeditionary MH-60 helicopter squadron detachments. The littoral combat ships' crew berthing constraints were a key limiting factor in creating the personnel requirements for the number of air crew in a single composite aviation detachment. Navy officials told us that they believe, based on deployment experiences and available data, that the personnel requirements for the MQ-8 Fire Scout are correct, although they stated that the operational tempo has been very limited to date due to problems with the littoral combat ship that have reduced the number of deployments. MH-60 helicopter pilots and enlisted aircrewmen from expeditionary helicopter squadrons attend 8 and 6 weeks, respectively, of MQ-8 Fire Scout UAS training. During deployments, these personnel serve dual roles as air crew of both the MH-60 and the MQ-8 Fire Scout. MQ-8 Fire Scout air vehicle operators hold primary career designators as Navy helicopter pilots, and after their UAS training they are identified with an additional qualification designator of DY8.
According to a senior Navy official, private sector contractors trained 126 air vehicle operators prior to February 2015, and since then the Navy has trained another 91 air vehicle operators as of May 2017. MQ-8 Fire Scout mission payload operators have an enlisted rating as a helicopter aircrewman, and after their UAS training they are identified with a Navy enlisted classification code of 8367. According to a senior Navy official, private sector contractors trained 148 mission payload operators through March 2017, and the Navy has trained another 68 mission payload operators since February 2017 (as of May 2017). According to Navy officials, they do not expect the approach for staffing MQ-8 Fire Scout aircrew to negatively affect accessions or retention in the helicopter community, even when operational tempo increases, but they are continuing to monitor feedback from deployments. The Navy's MQ-4 Triton UAS is intended to provide persistent maritime intelligence, surveillance, and reconnaissance data collection and dissemination capability in an operating area with a 2,000-nautical-mile radius. Based on the Air Force's RQ-4B Global Hawk air vehicle, the MQ-4 Triton was formerly known as the Broad Area Maritime Surveillance UAS. Triton UAS sensors can provide detection, classification, tracking, and identification of maritime targets. Additionally, the MQ-4 Triton is designed with a communications relay capability that can link dispersed forces in the theater of operation. The system will cue other Navy assets for further situational investigation and/or attack, and will also provide a battle damage assessment of the area of interest. Tactical-level data analysis will occur in real-time at shore-based mission control systems via satellite communications. The MQ-4 Triton is planned to operate from five shore-based sites worldwide as part of the Navy's family of maritime patrol and reconnaissance systems.
From these sites, five MQ-4 Triton air vehicles will be airborne concurrently, 24 hours a day and 7 days a week (see fig. 6). As a precursor to the MQ-4 Triton, the Navy's RQ-4A Broad Area Maritime Surveillance System-Demonstrator has been continuously deployed to the U.S. Central Command area since January 2009. All four of those planned demonstrator systems have been delivered to the Navy. The manufacturer has delivered 2 systems to the Navy as of September 2017 and the Navy expects 10 more systems to be delivered through fiscal year 2021. At the time of this report, no air vehicles had yet been delivered to the Navy's first unmanned patrol squadron; the 2 systems were being utilized for testing. The Navy has estimated that it will attain initial operational capability with the MQ-4 Triton UAS in 2021. One of the Navy's two planned unmanned patrol squadrons (referred to as VUPs) will have 30 mission crews; the other squadron will have 20 mission crews; and both squadrons will have additional launch and recovery operators. An MQ-4 Triton mission crew will consist of four personnel: one air vehicle operator, one tactical coordinator, and two mission payload operators. Future upgrades to the MQ-4 Triton will require a fifth mission crew member to fill a signals intelligence capability operator position. The number of required mission crew members was based in part upon a model that Naval Air Systems Command utilizes to project the number of air crew personnel to support a system. According to Navy officials, the additional personnel requirements for the Navy associated with the establishment of Triton squadrons are offset by realignments of the Maritime Patrol and Reconnaissance Force, including the retirement of the P-3 Orion aircraft and reduction of associated personnel requirements.
Navy officials told us that they believe, based in part on experience with the Broad Area Maritime Surveillance System-Demonstrator, that the personnel requirements for the MQ-4 Triton are adequate, although they stated that they will continue to review and monitor the requirements for sufficiency in future years as the Navy attains steady state operations with the system's five continuous orbits. The Navy's approach for staffing operator aircrew for the MQ-4 Triton is to utilize a portion of its naval aviators, naval flight officers, and enlisted aircrew whose qualification is on a maritime patrol and reconnaissance force aircraft (e.g., the P-8A Poseidon) and assign them to an unmanned patrol squadron following a sea tour with their primary aircraft. According to Navy officials, the career path for all its aviators generally includes a number of shore duty options following a first deployment. The unmanned patrol squadron assignments will be an additional option for aviators' first shore tour. The Navy will provide Triton aircrew members with approximately 3 months of training to qualify on the UAS in connection with their unmanned patrol squadron assignment. Air vehicle operators and tactical coordinators who are trained and qualified on the MQ-4 Triton will be identified with an additional qualification designator of DC5. Trained and qualified mission payload operators will be identified with a Navy enlisted classification of 7828. According to Navy officials, they do not expect the approach for staffing MQ-4 Triton aircrew to affect accessions or retention in the maritime patrol and reconnaissance community at this time, but it is too soon to be certain. In the meantime, the officials stated that they will continue to monitor personnel feedback and reassure personnel about the career value of experience in a MQ-4 Triton squadron.
In addition, the Navy plans to leverage members of its reserve component to augment the pool of available personnel who can be assigned to its VUP squadrons. The Navy's MQ-25 Stingray UAS will be the first UAS to operate from aircraft carriers. According to Navy officials, the MQ-25 Stingray's primary mission will be to provide a robust refueling capability to extend the range and reach of the carrier air wing and reduce the need for F/A-18E/F Super Hornets to perform refueling missions, freeing them for strike missions and preserving service life. As a secondary mission, the MQ-25 Stingray will also provide an intelligence, surveillance, and reconnaissance capability. The Navy previously referred to the MQ-25 Stingray as the Carrier Based Aerial Refueling System, a program that followed a restructuring of the former Unmanned Carrier-Launched Airborne Surveillance and Strike program. The Navy's initial plan is to purchase 72 MQ-25 Stingray air vehicles. No systems have been delivered and a delivery schedule has not been established because the system is still in an early stage of DOD's acquisition process, with a contract award for system development scheduled for the fourth quarter of fiscal year 2018. The Navy has estimated attaining initial operational capability with the system by the mid-2020s time frame. The Navy has not yet developed a staffing approach for MQ-25 Stingray operators. According to Navy officials involved in establishing plans and requirements for the system, they are considering different options for the system's operators, including using enlisted personnel. They are also considering an approach similar to that used for MQ-8 Fire Scout operators, in which a population of aviation personnel, including pilots, would be identified from a related, existing aircraft community—such as the E-2 Hawkeye aircraft—and provided with UAS qualification training if they were assigned to operate the MQ-25 Stingray in a composite squadron along with their other primary aircraft.
According to these officials, at the direction of the Commander of Naval Air Forces, they have considered establishing a new UAS operator career field and surveyed midshipmen at the U.S. Naval Academy to gauge their interest in such a career. The Marine Corps' RQ-21 Blackjack UAS provides units with a dedicated intelligence, surveillance, and reconnaissance capability for tactical commanders in real time by providing actionable intelligence and communications relay for 12 hours of continuous operations per day, with a short surge capability of 24 hours of continuous operations for a 10-day period, during any 30-day cycle. An RQ-21 Blackjack system consists of five air vehicles, two ground control stations, multi-mission payloads, one launcher, one recovery system, data links, and support systems. Standard payloads include electro-optical and infrared cameras, communications relay payload, and automatic identification system. Future upgraded capabilities may include command and control integration, weapons integration, heavy fuel engine, laser designator, frequency agile communications relay, digital common data link, and cyclic refresh of the electro-optical and infrared cameras. The RQ-21 Blackjack can be launched and recovered from land or from air-capable ships, including L-class ships (e.g., amphibious transport docks) (see fig. 7). The manufacturer has delivered 11 systems to the Marine Corps as of September 2017 and the Marine Corps expects the other 21 planned systems to be delivered through 2022. The Marine Corps attained initial operational capability with the RQ-21 Blackjack in 2016. The Marine Corps has three active duty unmanned aerial vehicle squadrons (VMU 1, 2, and 3) and one reserve VMU squadron (VMU 4) that will operate the RQ-21 Blackjack UAS. Each active duty VMU will contain nine detachments and each detachment will comprise 9 personnel—including 1 UAS officer and 3 enlisted UAS operators—and one RQ-21 Blackjack UAS.
The Marine Corps Reserve’s VMU 4 will contain three detachments. The Marine Corps does not distinguish between requirements for air vehicle operators and mission payload operators for the RQ-21 Blackjack because those functions are performed by the same operator. The Marine Corps has a primary career field for operating UAS, including enlisted UAS operators and UAS officers. The Marine Corps replenishes its UAS operator and officer personnel inventories by selecting from eligible applicant groups. For enlisted UAS operators, eligible groups include new graduates of recruit training and experienced marines who apply for a lateral transfer from another occupational specialty. UAS officers are selected from three sources: new graduates of officer training; pilot or flight officer trainees who do not complete their manned aircraft qualification; and experienced officers seeking a transfer from another occupational specialty, including pilots of manned aircraft. The Marine Corps requires certain minimum test scores before marines can be selected for UAS training. Enlisted marines must achieve minimum test scores comparable to those required for other high-skill occupations, such as intelligence specialists. Officers take a separate test battery and must attain the same minimum scores as other officers who are selected for manned naval aviation training. Following their selection for UAS training, enlisted personnel must complete 5 months of Army UAS training courses to attain their military occupational specialty as a UAS operator. Officers attend 6 months of Air Force training courses to attain their occupational specialty. The Marine Corps then assigns a primary occupation identification code to trained personnel, which is 7314 for enlisted UAS operators or 7315 for UAS officers.
The Marine Corps assigns enlisted personnel and officers to one of its UAS squadrons after they attain their occupational specialty, where they continue their UAS training to attain and maintain proficiency and advanced qualifications. As discussed earlier in this report, Marine Corps UAS squadrons believe that an RQ-21 Blackjack detachment requirement of 9 personnel is not sufficient to meet their workloads. Since 2015, squadrons have staffed their deploying detachments with up to 30 personnel each to support the workload and the levels of supervision they believe are necessary to operate and maintain an RQ-21 Blackjack UAS, to avoid mishaps and damage to the aircraft during recovery, and to meet operating and maintenance standards, among other reasons. The Navy’s Mine Countermeasures Unmanned Surface Vehicle (USV) and Unmanned Influence Sweep System will be part of the mine countermeasures mission package of the Navy’s littoral combat ships (see fig. 8). The Mine Countermeasures USV will tow a sonar payload for mine hunting. The Unmanned Influence Sweep System will use the same USV platform to tow an acoustic and magnetic influence sweep payload to clear bottom and moored mines. Both systems will be launched and recovered from littoral combat ships. For the Mine Countermeasures USV, the projected inventory is 2 systems per mine countermeasures mission package for a total of 48 systems, in addition to systems needed for training. For the Unmanned Influence Sweep System, the projected inventory is 1 per mine countermeasures mission package for a total of 24 payloads, in addition to payloads for training. As of September 2017, two Mine Countermeasures USVs were under construction, but neither had been delivered to the Navy. The Navy plans to attain initial operational capability with the Mine Countermeasures USVs in fiscal year 2021.
As of September 2017, one Unmanned Influence Sweep System had been constructed and the Navy expects it to be delivered for testing by fiscal year 2018. The Navy plans to attain initial operational capability with the Unmanned Influence Sweep System in fiscal year 2019. The Mine Countermeasures USV and Unmanned Influence Sweep System will be operated by littoral combat ship mine countermeasures mission package crews of 20 personnel each. The precise number of operators per system will be determined and updated as the systems progress through acquisition. According to Navy officials, USV operators associated with the littoral combat ships’ mine countermeasures mission package crews will not be directly accessed and recruited to such positions. Instead, these officials stated that enlisted sailors from related primary career ratings will be assigned to the crews and trained on the USVs along with other systems as part of a longer training pipeline. Upon their completion of training, the Navy plans to identify them with a Navy enlisted classification code of 1206, Littoral Combat Ship Mine Warfare Mission Package Specialist. The Navy’s MK 18 Unmanned Underwater Vehicle (UUV) family of systems consists of the MK 18 “Mod 1” Swordfish UUV and the MK 18 “Mod 2” Kingfish UUV. The MK 18 Mod 1 Swordfish is a man-portable system that performs autonomous, low-visibility exploration and reconnaissance missions in support of amphibious landings and mine countermeasures operations, among other things. The MK 18 Mod 2 Kingfish UUV is a larger vehicle with increased endurance and depth, and more advanced sensors to improve mine countermeasures capabilities. The Mod 1 Swordfish and the Mod 2 Kingfish operate in very shallow water and shallow water zones, and will be tactically integrated to enable detection of moored and bottom mines at increased standoff and reduced risk to operators and systems that would otherwise be operating in the minefield. 
The MK 18 systems can be launched and recovered from shore, from rigid hull inflatable boats, or from ships (see fig. 9). The Navy’s projected inventory is 41 systems (25 Mod 1 Swordfish and 16 Mod 2 Kingfish). The manufacturer had delivered 33 systems (21 Mod 1 Swordfish and 12 Mod 2 Kingfish) to the Navy as of fiscal year 2017. The Navy attained full operational capability with the first increment of the Mod 1 Swordfish in fiscal year 2007 and expects to attain initial operational capability with the first increment of the Mod 2 Kingfish in fiscal year 2019. MK 18 UUVs are operated by platoons within three different Navy units: Explosive Ordnance Disposal Mobile Unit One, Mobile Diving and Salvage Unit Two, and the Naval Oceanography Mine Warfare Center. According to Navy officials, the establishment of such platoons did not generate an additive personnel requirement to those units. The minimum personnel requirement for MK 18 operations includes three UUV operators and a UUV supervisor, along with an officer-in-charge, a boat coxswain, and a boat engineer. According to Navy officials, the Navy does not directly access or recruit personnel to fill its requirements for operators of the MK 18 UUVs. These officials stated that, instead, enlisted sailors from related primary career ratings, including the special warfare boat operator and aerographer’s mate ratings, can be assigned to a unit that operates the UUVs either on their first tour or later in their career on a subsequent assignment. Navy officials also stated that Navy Expeditionary Combat Command is coordinating with the Commander, Submarine Forces, to potentially utilize the Navy enlisted classification code of 9550 for its UUV operators. The Navy’s Snakehead Large Displacement UUV will be a long-endurance, off-board system that will conduct reconnaissance and surveillance missions in denied areas and in waters too shallow or otherwise inaccessible for conventional platforms (see fig. 10).
The Snakehead Large Displacement UUV will be launched and recovered from submarines and surface ships. No systems have been delivered to the Navy. The Navy is planning for the first 2 systems to be delivered in fiscal year 2020 and for another 2 systems to be delivered in fiscal year 2023. The Navy will attain initial operational capability with the first phase systems when two of them are delivered and tested on a host platform, a life-cycle sustainment plan is in place, and personnel are trained and equipped to operate and maintain the system from a host platform. The Navy plans to field the Snakehead Large Displacement UUVs to UUV Squadron 1. According to Navy officials, the squadron is also testing or operating more than 10 other types of UUVs and expects to receive 2 or more other new types of UUVs through approximately fiscal year 2020, along with the Snakehead. Although Navy officials told us that it is too soon to analyze and determine the numbers of personnel required for the system at the time of this report, they plan to utilize forward-deployed operators to launch and recover the vehicle, an operator to control the vehicle from an operations center on land, and a mission payload operator as needed depending on the mission. The precise number of operators per system will be determined and updated as the systems progress through acquisition. In staffing personnel to meet requirements for UUV Squadron 1, Navy officials stated that they do not directly access or recruit personnel to fill such positions. Instead, these officials told us that enlisted sailors from related career ratings within the submarine community, such as sonar technicians, are assigned to the squadron generally after they have completed at least one previous assignment and have approximately 5 years of experience in the Navy. 
According to the officials, once personnel are assigned to the squadron, they receive UUV training to qualify on the systems they will operate, and they will be identified with a Navy enlisted classification code of 9550 for UUV operators. This report addresses the extent to which the Navy and the Marine Corps have (1) evaluated workforce alternatives for their unmanned system operators, including the use of federal civilian employees and private sector contractors; (2) developed and updated personnel requirements and related policies and goals that affect requirements for operators, maintainers, and other support personnel for selected unmanned systems; and (3) developed approaches for staffing unmanned system operators to meet personnel requirements and have met those requirements. To address these objectives, we included in the scope of our review the Navy’s and the Marine Corps’ unmanned aerial systems (UAS), unmanned surface vehicles (USV), and unmanned underwater vehicles (UUV) that were programs of record in calendar year 2016. On the basis of Department of the Navy documentation and interviews with knowledgeable officials, we identified 24 such systems. To provide illustrative examples for our first and third objectives and to address the entirety of our second objective, we further narrowed our scope to those systems that had progressed far enough through DOD’s acquisition process to be part of a program of record within the purview of the services’ system commands. Additionally, we narrowed our scope for UASs, in particular, to those categorized as group 3 or above. We omitted smaller group 1 UASs because service officials told us that those systems are fielded in larger numbers as additional capabilities for existing units in accomplishing their missions and entail a small workload for operating and maintaining them relative to UASs of group 3 and above. 
Group 2 UASs that the Navy and the Marine Corps utilize are contractor-owned and operated, which was outside the scope of our review. From the remaining unmanned systems in our scope, we selected eight case studies to review the services’ evaluations of workforce alternatives, development and updates of personnel requirements and related policies and goals, and staffing approaches: four UASs—the Navy’s MQ-4 Triton, MQ-8 Fire Scout, MQ-25 Stingray, and the Marine Corps’ RQ-21 Blackjack; the two USVs—the Unmanned Influence Sweep System and the Mine Countermeasures USV—associated with the Navy’s littoral combat ships; and two types of the Navy’s UUVs—the MK 18 family of UUV systems and the Snakehead Large Displacement UUV—based on their size and missions. Although the results of the UUV case studies cannot be generalized to all UUVs across the Navy, they illustrate different characteristics of and approaches used for workforce mix, requirements, and staffing for such systems. To address our first objective, we compared any Navy and Marine Corps efforts to evaluate federal civilian employees and private sector contractors as workforce alternatives for operators of all of their unmanned systems, including those from our case study sample, with criteria from (1) DOD Directive 1100.4, Guidance for Manpower Management, which directs, among other things, that authorities consider all available sources when determining workforce mix, and that workforces be designated as federal civilians except in certain circumstances, and (2) DOD Instruction 1100.22, Policy and Procedures for Determining Workforce Mix, which establishes the workforce mix decision process and directs that workforce planning authorities consider all available personnel when determining the workforce mix—that is, the combination of military servicemembers, federal civilians, and private sector contractors. 
Specifically, we analyzed available documentation for the selected case study systems on any evaluations the services performed of alternative workforces and the related decisions made about eligible personnel categories, and interviewed knowledgeable service officials about factors that informed those evaluations and decisions and any reasons for not evaluating workforce alternatives. We also interviewed officials from the Navy and OUSD(P&R) who are responsible for reviewing workforce and personnel planning documents for Navy and Marine Corps programs to understand any broader DOD or service workforce planning efforts for unmanned systems, and reasons for omitting certain personnel categories from consideration for systems that are in development. We reviewed our prior reports on workforce mix and DOD-commissioned workforce mix studies and interviewed officials from OUSD(P&R) to identify limitations and benefits associated with different categories of personnel, including military servicemembers, federal civilian employees of DOD, and private sector contractors. We reviewed the Navy’s and the Marine Corps’ policies on workforce planning to determine whether those policies provide more detailed guidance or criteria relative to those available in DOD’s policies on circumstances for which alternative personnel sources should be considered or on the limitations and benefits associated with different workforce mix options. We also compared these service-level workforce planning policies with federal internal controls standards that emphasize the importance of having clear, updated policies that align with an organization’s mission and goals. To address our second objective, we reviewed the Navy’s and the Marine Corps’ efforts to develop and update personnel requirements for our selected case study systems, including documentation of steps taken to analyze and determine personnel requirements levels. 
We interviewed service officials about their views of the sufficiency of those personnel requirements for supporting training and deployment requirements for the selected systems. For any systems about which service officials expressed concern regarding the sufficiency of the related personnel requirements, we compared documentation of the requirements with DOD Directive 1100.4 and with a Navy instruction. The DOD policy states that personnel requirements should be driven by workload and established at the minimum levels necessary to accomplish mission and performance objectives. Navy Instruction 1000.16L states that personnel requirements must be validated as program changes dictate, and at a minimum annually over a system’s life cycle, to determine if a personnel update is required. Further, we reviewed documentation of the life cycle cost estimate for the number of Marine Corps personnel required to operate and maintain the RQ-21 Blackjack, and of UAS squadrons’ position on the sufficiency of those personnel requirements, and compared those documents with DOD guidance requiring that components determine a weapon system program’s life cycle costs by planning for the many factors needed to support the system, including personnel, and with Office of Management and Budget guidance stating that, to keep the cost analyses for capital assets such as weapon systems current, accurate, and valid, cost estimating should be continuously updated based on the latest information available as programs mature. In addition, we reviewed Navy policies on operating and maintaining UAS and documentation from the Marine Corps about the effect of those policies on UAS squadron personnel workload, and interviewed Navy and Marine Corps headquarters- and unit-level officials about those effects and any efforts underway to review and update policies.
We then compared those efforts to review and update policies with DOD Directive 1100.4, which states that existing policies, procedures, and structures should be periodically evaluated to ensure efficient and effective use of personnel resources, and with federal internal controls standards that emphasize the importance of having clear, updated policies that align with an organization’s mission and goals. Finally, we compared goals established in DOD’s Unmanned Systems Integrated Roadmap, FY2013-2038 and Department of the Navy strategy documents on unmanned systems with federal internal controls standards, which state that an agency’s management should define objectives clearly to enable the identification of risk. For our third objective, we reviewed the Navy’s and the Marine Corps’ steps to select, train, and track unmanned system operators to identify any challenges. We reviewed for the selected systems a combination of manpower estimate reports and personnel and training plan documents to identify approaches for staffing operators. We also reviewed personnel and training manuals describing prerequisites for related military qualifications and occupations. We interviewed command- and unit-level officials from the Navy and the Marine Corps to discuss the effectiveness of current staffing approaches for meeting their training and deployment requirements. Focusing on challenges with providing enough personnel to serve as UAS operators in particular, we also reviewed Navy reports on the retention of certain aviation personnel to serve as UAS operators, and we reviewed Marine Corps data on its UAS operator inventory and retention levels relative to its requirements and goals. Specifically, we reviewed Navy reports on retention for fiscal years 2015 through 2017 because data from earlier years were less relevant given the lower numbers of UAS inventories.
We requested data from the Marine Corps on its inventories of and requirements for enlisted UAS operators for fiscal years 2007 through 2017 and on UAS officers for fiscal years 2013 (the first year of available data) through 2017. We requested retention data—actual numbers of personnel who reenlisted versus annual quotas—on enlisted UAS operators for fiscal years 2010 (the earliest year for which data were available) through 2017. We assessed the reliability of these Marine Corps data by administering questionnaires, interviewing relevant personnel responsible for maintaining and overseeing the systems that supplied the data, and manually checking the data for errors or omissions. Through these methods, we obtained information on the systems’ ability to record, track, and report on these data, as well as on the quality control measures in place. We found the inventory and requirements data to be sufficiently reliable for the purposes of describing personnel inventory trends and the sufficiency of operator personnel to meet requirements. We found that the retention data are of undetermined reliability, but we are reporting them because they are the data of record used by Marine Corps planning officials. We also reviewed Navy and Marine Corps financial incentives for retaining sufficient personnel to serve as UAS operators and compared those approaches with criteria from DOD’s 2012 Eleventh Quadrennial Review of Military Compensation, which established that organizations should assess civilian supply and demand and civilian wages to determine the most cost-effective special and incentive pay strategies. Further, we compared the Marine Corps’ efforts to address workforce challenges specific to the Marine Corps’ UAS operator career field with a key principle of strategic human capital planning from our prior work, which states that agencies should ensure that flexibilities are part of their overall human capital strategy.
In our prior work, we found that strategic human capital planning is an important component of an agency’s effort to develop long-term strategies for acquiring, developing, and retaining staff needed for an agency to achieve its goals and of an agency’s effort to align human capital activities with the agency’s current and emerging mission. Specifically, we have found that an agency’s efforts to conduct strategic human capital planning should include, among other things, building the capability needed to address administrative, educational, and other requirements important to supporting workforce strategies by ensuring that flexibilities are part of the overall human capital strategy. We focused on workforce challenges in the Marine Corps, in particular, because it has a long-established career field for UAS operators, and the Navy does not yet have a separate career field for any of its unmanned systems operators. We identified workforce challenges within the Marine Corps’ UAS operator career field by reviewing a 2015 Marine Corps-sponsored survey of its pilot and UAS officer workforce. The survey included questions about satisfaction with career and benefits, and intentions to stay in the Marine Corps and the underlying reasons for these. Although officers in ranks of first lieutenant through lieutenant colonel were surveyed, we were unable to include majors and lieutenant colonels in reporting results for UAS officers because the Marine Corps aggregated those officers’ responses with those of majors and lieutenant colonels who operate other types of aircraft. By reviewing the survey methodology and interviewing an official involved in administering the survey and analyzing the results, we determined that the survey results were sufficiently reliable for reporting the perceptions about career satisfaction at a single point in time for UAS operators who answered those questions. 
In addition, we visited one of three active duty Marine Corps UAS squadrons, which we chose because it had the most deployment experience with the RQ-21 Blackjack UAS. We met with squadron leaders to discuss their views about UAS personnel requirements and staffing approaches. We also conducted eight small group discussions with active duty UAS operators and officers—separately for enlisted personnel and officers—to gain their perspectives on topics such as morale, workload, and career satisfaction. The opinions of Marine Corps UAS operators we obtained during our discussion groups are not generalizable to the population of UAS operators in the Marine Corps. We also interviewed officials from the following organizations:
- Office of the Deputy Commandant for Aviation
- Office of the Deputy Commandant for Combat Development and Integration
- Office of the Deputy Commandant for Manpower and Reserve Affairs
- Marine Corps Systems Command
- Marine Unmanned Aerial Vehicle Squadron 2
We conducted this performance audit from September 2016 to February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, key contributors to this report were Lori Atkinson (Assistant Director), Melissa Blanco, Tim Carr, Mae Jones, Amie Lesser, Felicia Lopez, Ben Sclafani, Mike Silver, and Paul Sturm.
Department of Defense: Actions Needed to Address Five Key Mission Challenges. GAO-17-369. Washington, D.C.: June 13, 2017.
Navy Force Structure: Actions Needed to Ensure Proper Size and Composition of Ship Crews. GAO-17-413. Washington, D.C.: May 18, 2017.
High Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017.
Military Compensation: Additional Actions Are Needed to Better Manage Special and Incentive Pay Programs. GAO-17-39. Washington, D.C.: February 3, 2017.
Unmanned Aerial Systems: Air Force and Army Should Improve Human Capital Planning for Pilot Workforces. GAO-17-53. Washington, D.C.: January 31, 2017.
Unmanned Aerial Systems: Further Actions Needed to Fully Address Air Force and Army Pilot Workforce Challenges. GAO-16-527T. Washington, D.C.: March 16, 2016.
Military Personnel: Army Needs a Requirement for Capturing Data and Clear Guidance on the Use of Military for Civilian or Contractor Positions. GAO-15-349. Washington, D.C.: June 15, 2015.
Unmanned Aerial Systems: Actions Needed to Improve DOD Pilot Training. GAO-15-461. Washington, D.C.: May 14, 2015.
Air Force: Actions Needed to Strengthen Management of Unmanned Aerial System Pilots. GAO-14-316. Washington, D.C.: April 10, 2014.
Human Capital: Additional Steps Needed to Help Determine the Right Size and Composition of DOD’s Total Workforce. GAO-13-470. Washington, D.C.: May 29, 2013.
Unmanned Aircraft Systems: Comprehensive Planning and a Results-Oriented Training Strategy Are Needed to Support Growing Inventories. GAO-10-331. Washington, D.C.: March 26, 2010.
Human Capital: Key Principles for Effective Strategic Workforce Planning. GAO-04-39. Washington, D.C.: December 11, 2003.
The Department of the Navy has committed to rapidly grow its unmanned systems portfolio. It currently has at least 24 types of systems and has budgeted nearly $10 billion for their development and procurement for fiscal years 2018-2022. Personnel who launch, navigate, and recover the systems are integral to effective operations. Senate Report 114-255 included a provision for GAO to review the Navy's and the Marine Corps' strategies for unmanned system operators. GAO examined, among other things, the extent to which the Navy and the Marine Corps have (1) evaluated workforce alternatives (such as the use of civilians and contractors) for unmanned system operators and (2) developed and updated personnel requirements and related policies and goals for selected unmanned systems. GAO compared documentation on unmanned systems with DOD policies and conducted discussion groups with unmanned system operators. The Navy and the Marine Corps are rapidly growing their portfolios of unmanned aerial systems (UAS) and unmanned maritime systems and have opted to use military personnel as operators without evaluating alternatives, such as federal civilian employees and private sector contractors. Service officials stated that civilians or contractors are not viable alternatives and policies are unclear about when and how to use them. However, a June 2016 Department of Defense-commissioned study found that alternative staffing strategies could meet the UAS mission more cost-effectively. Military personnel may be the most appropriate option for unmanned systems, but without clarifying policies to identify circumstances in which civilians and contractors may serve in operational roles, the services could continue to make workforce decisions that do not consider all available resources. 
The Navy and the Marine Corps have sufficient personnel requirements or efforts underway to develop personnel requirements for seven unmanned systems that GAO reviewed (see fig.), but requirements for one system (i.e., the RQ-21 Blackjack UAS) have not been updated. That system's requirements have not been updated because service entities disagree about whether they are sufficient. Since 2015, units have deployed with about two to three times the personnel that headquarters and command officials expected they would need. Marine Corps officials stated that the Blackjack's personnel requirements were based on an outdated concept of operations and are insufficient for supporting workloads. Without updating the personnel requirements for the Blackjack UAS, the services will lack current information about the number of personnel needed. The Department of the Navy has taken positive steps but has not fully evaluated and updated aviation policies that affect personnel requirements for certain UAS and lacks clear goals for informing future requirements for all of its UASs. GAO found that the policies do not fully account for differences between UASs of varying sizes and capabilities. These policies require, for example, that the Blackjack UAS be held to the same maintenance standards designed for larger aircraft and UAS, which in turn affects personnel requirements. Until the Department of the Navy evaluates and updates such policies and clarifies related goals, the services will be hampered in developing and updating future requirements as unmanned system inventories grow and operations expand. GAO is making ten recommendations, including that the Navy and the Marine Corps clarify policies to identify circumstances in which civilians and contractors may serve in operational roles and apply the policies to future evaluations; update personnel requirements for one UAS; and evaluate and update policies and goals to inform future personnel requirements. 
DOD concurred with eight recommendations and partially concurred with two. As discussed in the report, GAO continues to believe that all ten are warranted.
The DATA Act was enacted May 9, 2014, for purposes that include expanding on previous federal transparency legislation by requiring the disclosure of federal agency expenditures and linking agency spending information to federal program activities, so that both policymakers and the public can more effectively track federal spending. The act also calls for improving the quality of data submitted to USAspending.gov by holding federal agencies accountable for the completeness and accuracy of the data submitted. The Federal Funding Accountability and Transparency Act of 2006 (FFATA), as amended by the DATA Act, identifies OMB and Treasury as the two agencies responsible for leading government-wide implementation. For example, the DATA Act requires OMB and Treasury to establish government-wide financial data standards that shall, to the extent reasonable and practicable, provide consistent, reliable, and searchable spending data for any federal funds made available to or expended by federal agencies. These standards specify the data elements to be reported under the DATA Act and define and describe what is to be included in each data element, with the aim of ensuring that information will be consistent and comparable. The DATA Act also requires OMB and Treasury to ensure that the standards are applied to the data made available on USAspending.gov. USAspending.gov has many sources of data. For example, agencies submit data from their financial management systems, and other data are extracted from government-wide federal financial award reporting systems populated by federal agencies and external award recipients. A key component of the reporting framework is Treasury’s DATA Act broker (broker)—a system that collects and validates agency-submitted data to create linkages between the financial and award data prior to their publication on the USAspending.gov website. 
According to Treasury guidance documents, agencies are expected to submit three data files with specific details and data elements to the broker from their financial management systems. File A: Appropriations account. This includes summary information such as the fiscal year cumulative federal appropriations account balances and includes data elements such as the agency identifier, main account code, budget authority appropriated amount, gross outlay amount, and unobligated balance. File B: Object class and program activity. This includes summary data such as the names of specific activities or projects as listed in the program and financing schedules of the annual budget of the U.S. government. File C: Award financial. This includes award transaction data such as the obligation amounts for each federal financial award made or modified during the reporting quarter (e.g., January 1, 2017, through March 31, 2017). The broker also extracts spending information from government-wide award reporting systems that supply award data (e.g., federal grants, loans, and contracts) to USAspending.gov. These systems—including the Federal Procurement Data System-Next Generation (FPDS-NG), System for Award Management (SAM), Financial Assistance Broker Submission (FABS), and the FFATA Subaward Reporting System (FSRS)—compile information that agencies and external federal award recipients submit to report, among other things, procurement and financial assistance award information required under FFATA. The four files produced with information extracted by the broker from the four systems are as follows: File D1: Procurement. This includes award and awardee attribute information (extracted from FPDS-NG) on procurement (contract) awards and contains elements such as the total dollars obligated, current total value of award, potential total value of award, period of performance start date, and other information to identify the procurement award. File D2: Financial assistance. 
This includes award and awardee attribute information (extracted from FABS) on financial assistance awards and contains elements such as the federal award identification number, the total funding amount, the amount of principal to be repaid for the direct loan or loan guarantee, the funding agency name, and other information to identify the financial assistance award.

File E: Additional awardee attributes. This includes additional information (extracted from SAM) on the award recipients and contains elements such as the awardee or recipient unique identifier; the awardee or recipient legal entity name; and information on the award recipient's five most highly compensated officers, managing partners, or other employees in management positions.

File F: Subaward attributes. This includes information (extracted from FSRS) on awards made to subrecipients under a prime award and contains elements such as the subaward number, the subcontract award amount, total funding amount, the award description, and other information to facilitate the tracking of subawards.

The key components of the broker and how the broker operated when the agencies submitted their data for the second quarter of fiscal year 2017 are shown in figure 1. After agencies submit the three files to the DATA Act broker, it runs a series of validations and produces warnings and error reports for agencies to review. After passing validations for these three files, the agencies are to generate Files D1 and D2, the files containing details on procurement and assistance awards. Before the data are displayed on USAspending.gov, agency senior accountable officials are required to certify the data submissions in accordance with OMB guidance. Certification is intended to assure alignment among Files A, B, C, D1, D2, E, and F, and to provide assurance that the data are valid and reliable.
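As an illustration, the kind of cross-file checks the broker performs—confirming that required data elements are present and that award-level detail ties to summary-level totals—can be sketched as follows. The file layouts, element names, and tolerance used here are simplified assumptions for illustration, not Treasury's actual broker schema or validation rules:

```python
# Illustrative sketch of broker-style cross-file validation.
# File layouts and field names are simplified assumptions,
# not the actual DATA Act broker schema.

REQUIRED_FILE_C_ELEMENTS = {"award_id", "obligation", "program_activity_code"}

def validate_file_c(file_c_rows):
    """Flag File C rows that are missing required data elements."""
    warnings = []
    for i, row in enumerate(file_c_rows):
        present = {k for k, v in row.items() if v not in (None, "")}
        missing = REQUIRED_FILE_C_ELEMENTS - present
        if missing:
            warnings.append((i, sorted(missing)))
    return warnings

def reconcile_c_to_b(file_c_rows, file_b_totals):
    """Compare summed File C obligations per program activity to File B totals."""
    sums = {}
    for row in file_c_rows:
        code = row.get("program_activity_code")
        sums[code] = sums.get(code, 0.0) + float(row.get("obligation") or 0)
    # Report program activities whose detail does not tie to the summary total.
    return {code: (sums.get(code, 0.0), total)
            for code, total in file_b_totals.items()
            if abs(sums.get(code, 0.0) - total) > 0.005}

file_c = [
    {"award_id": "A-1", "obligation": 1000.0, "program_activity_code": "0001"},
    {"award_id": "A-2", "obligation": 250.0, "program_activity_code": ""},  # missing element
]
file_b = {"0001": 1000.0, "0002": 500.0}

print(validate_file_c(file_c))        # second row lacks a program activity code
print(reconcile_c_to_b(file_c, file_b))  # File B reports 0002 totals with no File C detail
```

In the actual process, failed checks surface as the warnings and error reports that agencies are to review before generating Files D1 and D2 and certifying their submissions.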
According to Treasury officials, once the certification is submitted, a sequence of computer program instructions, or scripts, is run to transfer and map the data from broker data tables to tables set up in a database used as a source for the information on the website. Certified data are then displayed on USAspending.gov along with certain historical information from other sources, including Monthly Treasury Statements. The DATA Act requires each OIG to issue three reports on its assessment of the quality of the agency's data submission and compliance with the DATA Act. The first report was due November 8, 2016; however, agencies were not required to submit spending data in compliance with the DATA Act until May 2017. Therefore, the Council of the Inspectors General on Integrity and Efficiency (CIGIE) developed an approach to address what it described as a reporting date anomaly; encouraged interim OIG readiness reviews and related reports on agencies' implementation efforts; and delayed issuance of the mandated reports to November 2017, with subsequent reports following a 2-year cycle and due November 2019 and 2021. CIGIE established the Federal Audit Executive Council (FAEC) to discuss and coordinate issues affecting the federal audit community, with special emphasis on audit policy and operations of common interest to FAEC members. FAEC formed the FAEC DATA Act Working Group to assist the OIG community in understanding and meeting its DATA Act oversight requirements by (1) serving as a working-level liaison with Treasury, (2) consulting with GAO, (3) developing a common approach and methodology for conducting the readiness reviews and mandated reviews, and (4) coordinating key communications with other stakeholders. To support these reviews, the working group developed a common methodology and published the Inspectors General Guide to Compliance Under the DATA Act (IG Guide) for use in conducting mandated reviews.
The IG Guide includes procedures to test data in agencies' Files A and B by reconciling these data to the information that agencies report in their quarterly SF 133, Report on Budget Execution and Budgetary Resources. The IG Guide also instructs OIGs to select a statistically valid sample of spending data from the agencies' available award-level transactions in File C and, among other procedures, to confirm whether these data are also included in the agencies' Files D1 and D2. The OIGs are also to confirm whether the transactions in the sample were linked to the award and awardee attributes in Files E and F. The data in Files E and F are reported by award recipients in two external government-wide systems and are outside the direct control of the federal agencies, except for the General Services Administration, which manages these external systems. Based on additional guidance from the FAEC DATA Act Working Group, OIGs are not required to assess the quality of the award recipient-entered data that the broker extracted from the two external government-wide systems used to create Files E and F. According to the IG Guide, the sampled spending data and testing results are to be evaluated using the following definitions for the requirements being assessed:

Completeness is measured in two ways: (1) whether all transactions that should have been recorded are recorded in the proper reporting period, and (2) the percentage of transactions containing all applicable data elements required by the DATA Act.

Timeliness is measured as the percentage of transactions reported within 30 days of the end of the quarter.

Accuracy is measured as the percentage of transactions that are complete and agree with the systems of record or other authoritative sources.

Quality is defined in OMB guidance as a combination of utility, objectivity, and integrity. Utility refers to the usefulness of the information to the intended users.
Objectivity refers to whether the disseminated information is being presented in an accurate, clear, complete, and unbiased manner. Integrity refers to the protection of information from unauthorized access or revision. The IG Guide also states that OIGs should assess agencies' implementation and use of the data standards, including evaluating each agency's process for reviewing the 57 required data elements and associated definitions that OMB and Treasury established and documenting any variances. In November 2017, we issued our first report on data quality as required by the DATA Act, which identified issues with the completeness and accuracy of the data that agencies submitted for the second quarter of fiscal year 2017, their use of data elements, and the presentation of the data on Beta.USAspending.gov. Among other things, we recommended that Treasury disclose known data quality issues and limitations on the new USAspending.gov website. Treasury agreed with that recommendation and stated that it would develop a plan to better disclose known data quality issues. Since the DATA Act's enactment in 2014, we have issued a series of interim reports on our ongoing monitoring of the act's implementation and made recommendations intended to help ensure effective government-wide implementation; many of those recommendations remain open. These reports identified a number of challenges related to OMB's and Treasury's efforts to facilitate agency reporting of federal spending, as well as internal control weaknesses and agency financial management system challenges, reported by us and agency auditors, that present risks to agencies' ability to submit quality data as required under the act. For example, our prior work has identified issues with agency source systems that could affect the quality of spending data made available to the public.
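The completeness, timeliness, and accuracy percentages defined in the IG Guide can be expressed as simple computations over a set of tested transactions. The transaction fields and the system-of-record comparison below are simplified assumptions for illustration, not the IG Guide's actual test procedures:

```python
# Illustrative computation of the IG Guide's completeness, timeliness,
# and accuracy percentages over a set of tested transactions.
# Transaction fields are simplified assumptions.
from datetime import date

def completeness_rate(txns, required):
    """Percent of transactions containing every applicable required element."""
    ok = sum(1 for t in txns if all(t.get(e) not in (None, "") for e in required))
    return 100.0 * ok / len(txns)

def timeliness_rate(txns, quarter_end):
    """Percent of transactions reported within 30 days of the end of the quarter."""
    ok = sum(1 for t in txns if (t["reported_on"] - quarter_end).days <= 30)
    return 100.0 * ok / len(txns)

def accuracy_rate(txns, system_of_record):
    """Percent of transactions whose amount agrees with the system of record."""
    ok = sum(1 for t in txns
             if system_of_record.get(t["award_id"]) == t.get("obligation"))
    return 100.0 * ok / len(txns)

quarter_end = date(2017, 3, 31)  # second quarter of fiscal year 2017
txns = [
    {"award_id": "A-1", "obligation": 100.0, "reported_on": date(2017, 4, 10)},
    {"award_id": "A-2", "obligation": 50.0,  "reported_on": date(2017, 6, 1)},   # late
    {"award_id": "A-3", "obligation": None,  "reported_on": date(2017, 4, 5)},   # incomplete
    {"award_id": "A-4", "obligation": 75.0,  "reported_on": date(2017, 4, 20)},
]
record = {"A-1": 100.0, "A-2": 50.0, "A-3": 25.0, "A-4": 70.0}  # A-4 disagrees

print(completeness_rate(txns, ["award_id", "obligation"]))  # 75.0
print(timeliness_rate(txns, quarter_end))                   # 75.0
print(accuracy_rate(txns, record))                          # 50.0
```

Note that under these definitions a single transaction can count against more than one measure, as with A-3 above, which is both incomplete and inaccurate against the system of record.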
In April 2017, we reported a number of weaknesses and issues previously identified by agencies' auditors and OIGs that affect agencies' financial reporting and may affect the quality of the information reported under the DATA Act. We also reported on findings and recommendations from prior reports with issues on the four key award systems—FPDS-NG, SAM, the Award Submission Portal (ASP), and FSRS—which increase the risk that the data submitted to USAspending.gov may not be complete, accurate, and timely. Based on our review of the 53 OIG reports, all of the OIG reviews covered their agencies' submission of spending data for the second quarter of fiscal year 2017 (i.e., January through March 2017). However, the files from which the OIGs selected and reviewed sample transactions and the type of audit standards used—such as an attestation examination engagement or a performance audit—varied among the OIGs. According to the IG Guide, the OIGs were to select and review a statistically valid sample of transactions, preferably from the agencies' File C certified data submissions; if File C was unavailable or did not contain data, they were to select their sample test items from Files D1 and D2. Based on their survey responses, we found that most OIGs tested data from File C, File D1, File D2, or some combination of these agency file submissions. We also found that some OIGs tested a statistical sample of transactions in these files, while others tested all the transactions in the files because of the small population size. Further, we found that some OIGs used different files when testing for completeness, timeliness, or accuracy. For example, one OIG used File C when testing for completeness, File D1 when testing for timeliness, and File D2 when testing for accuracy. Overall, as shown in figure 2, the source files that 47 of the 53 OIGs used for testing accuracy were as follows. Twenty-eight OIGs selected items for testing accuracy from File C.
Twelve OIGs selected items for testing accuracy from Files D1, D2, or both. Seven OIGs selected items for testing accuracy from a combination of Files C, D1, and D2. The IG Guide also states that OIGs should conduct either attestation examination engagements or performance audits in accordance with generally accepted government auditing standards (GAGAS). Performance audits are audits that provide findings or conclusions based on an evaluation of sufficient, appropriate evidence against criteria. Attestation examination engagements involve obtaining sufficient, appropriate evidence with which to express an opinion stating whether the subject matter is in conformity with the identified criteria. In contrast to these two types of engagements that provide conclusions or opinions, agreed-upon procedures attestation engagements do not result in opinions or conclusions, but instead involve auditors performing specific procedures on the subject matter and issuing a report of findings. All 53 OIGs reported that they performed their engagements in accordance with GAGAS; 47 OIGs reported that they conducted a performance audit, 5 reported that they performed an attestation examination engagement, and 1 reported that it performed an agreed-upon procedures attestation engagement. Twenty-one CFO Act agency OIGs and 26 non-CFO Act agency OIGs conducted performance audits, 3 CFO Act agency OIGs and 2 non-CFO Act agency OIGs conducted attestation examination engagements, and 1 non-CFO Act agency OIG conducted an agreed-upon procedures attestation engagement. According to the OIG reports, about half of the agencies met the OMB and Treasury requirements for implementation and use of data standards. However, almost three-fourths of OIGs determined that their respective agencies' submissions were not complete, timely, accurate, or of quality.
Based on their reports and survey responses, certain OIGs also found data errors related to problems with how Treasury’s DATA Act broker extracted information from external award reporting systems. The FAEC DATA Act Working Group considered these data errors to be a government-wide issue. Other errors that the OIGs identified may have been caused by agency-specific internal control deficiencies. Most of the OIGs made recommendations to agencies to help address the concerns they identified in their reports. Based on our review of the 53 OIG reports, we found that 27 OIGs determined that their agencies met OMB and Treasury requirements for implementation and use of the data standards, whereas 23 OIGs determined that their agencies did not meet these requirements. In addition, 3 CFO Act agency OIGs did not include an assessment of their agencies’ implementation and use of the data standards in their reports. The OIG reports described reasons why the 23 agencies did not meet the implementation and use of data standards requirements, including data submissions that did not include required data elements or included data elements that did not conform with the established data standards. For example, one OIG reported that 74 percent of transactions it tested did not contain program activity names or codes aligned with the President’s Budget, and as a result, 39 percent of total obligations and 57 percent of total expenditures from that agency’s data submission could not be aligned with established programs. Another OIG reported that because of inconsistent application of data standards and definitions across award systems, the agency’s spending data were not complete, timely, or accurate. In their survey responses, certain OIGs identified additional concerns about their agencies’ implementation and use of data standards and related data elements. Specifically, six OIGs identified differences between their agencies’ definitions of the data standards and OMB guidance. 
For example, two OIGs noted differences between definitions in OMB guidance and their agencies' definitions of "primary place of performance address." One of these OIGs noted that its agency submitted the wrong data, providing the address of the legal entity receiving the award instead of the address of the primary place where performance of the award will be accomplished or take place. In our November 2017 report, we also noted that OMB guidance for this data element was unclear and recommended that OMB clarify and align existing guidance regarding the appropriate definitions agencies should use to collect and report on primary place of performance and establish monitoring mechanisms to foster consistent application and compliance. In addition, based on their survey responses, 21 OIGs reported error rates over 50 percent for 25 data elements. This includes 10 data elements that were reported by multiple OIGs and 15 data elements reported by only one OIG, as shown in table 1. There were five other data elements with error rates over 50 percent that the FAEC DATA Act Working Group determined to be government-wide broker-related data reporting issues, as discussed later in this report. The OIGs' survey responses did not indicate whether the data elements with errors were the result of issues related to the agencies' implementation or use of required data standards. Based on the OIG reports, we found that 15 of the 53 OIGs determined that their agencies' data were generally complete, timely, accurate, or of quality, comprising 6 CFO Act agency OIGs and 9 non-CFO Act agency OIGs (see fig. 3). Conversely, 38 of 53 OIGs determined that their agencies' data were not complete, timely, accurate, or of quality, comprising 18 CFO Act agency OIGs and 20 non-CFO Act agency OIGs. OIG reports did not always include separate assessments for completeness, timeliness, and accuracy, but gave an overall assessment of the quality of the data.
As part of our OIG survey, we requested the overall error rates, agency-specific error rates, and broker error rates for each requirement—completeness, timeliness, and accuracy—used to evaluate the quality of data tested to help provide more insights on the nature and extent of errors that the OIGs identified. For the purposes of our survey, based on guidance from the FAEC DATA Act Working Group and in the IG Guide, these error rates were defined as follows:

Overall error rate is the percentage of transactions tested that were not in accordance with policy, and includes errors due to the agency, broker, and external award reporting systems.

Agency error rate is the percentage of transactions tested that were not in accordance with policy, and includes only errors that were within the agency's control.

Broker error rate is the percentage of transactions tested that were not in accordance with policy, and includes only errors due to the broker and external award reporting systems.

With regard to overall error rates and the tests conducted, 40 OIGs reported that they tested a statistical sample of transactions, 9 OIGs reported that they tested all transactions in the populations of data, and 4 OIGs reported that they did not test any transactions or were unable to complete their testing. As shown in figure 4, our survey results show that the 40 OIGs that tested a statistical sample of transactions generally reported higher (projected) overall error rates for the accuracy and completeness of data than for the timeliness of data. We found similar results based on our tests to assess the completeness, timeliness, and accuracy of government-wide spending data that we tested for the same time period, as described in our November 2017 report. More than half of the 40 OIGs reported projected overall error rates of 25 percent or greater for accuracy, including 8 OIGs reporting projected accuracy error rates of over 75 percent.
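The relationship among the three error rates can be sketched with a small computation over a set of tested transactions. The attribution of each error to an "agency" or "broker" source below is a simplified assumption for illustration, not the OIGs' actual attribution methodology:

```python
# Illustrative decomposition of the overall error rate into agency-specific
# and broker-related components, as defined for the OIG survey.
# The error-source labels are simplified assumptions.

def error_rates(sample):
    """Each sampled transaction lists its error sources: 'agency', 'broker', or none."""
    n = len(sample)
    overall = sum(1 for errs in sample if errs)            # any error at all
    agency = sum(1 for errs in sample if "agency" in errs)  # within agency's control
    broker = sum(1 for errs in sample if "broker" in errs)  # broker / external systems
    return {"overall": 100.0 * overall / n,
            "agency": 100.0 * agency / n,
            "broker": 100.0 * broker / n}

# 10 sampled transactions: errors attributed to the agency, to the broker and
# external award systems, to both, or to neither.
sample = [set(), {"agency"}, set(), {"broker"}, set(),
          {"agency", "broker"}, set(), set(), {"agency"}, set()]

print(error_rates(sample))  # overall 40.0, agency 30.0, broker 20.0
```

Because one transaction here has both an agency error and a broker error, the agency and broker rates need not sum to the overall rate; and for OIGs that tested a statistical sample rather than the full population, such rates are projections to the population rather than actual rates.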
In contrast, more than three-fourths of the OIGs projected overall error rates of less than 25 percent for completeness and timeliness of their agencies’ data. See appendix II for more details on the 53 OIGs’ individual agency testing results, including the actual overall error rates for those OIGs that tested the full population of transactions included in their agencies’ data submissions and the estimated range of projected overall error rates for OIGs that conducted a statistical sample. The OIG survey responses that included agency-specific error rates showed that the agency-specific error rates were similar to the overall error rates, with accuracy of data having higher error rates than those for completeness and timeliness. Fourteen OIGs provided agency-specific error rates for accuracy, 13 OIGs provided agency-specific error rates for completeness, and 12 OIGs provided agency-specific error rates for timeliness of the data sampled. In addition, nine OIGs reported error rates for broker-related errors that, similar to the overall and agency-specific error rates, had higher error rates for accuracy of data than for completeness and timeliness. The FAEC DATA Act Working Group determined that the broker-related errors had a government-wide impact, as discussed further below. In October 2017—1 month before the mandated reports were to be issued—the working group provided guidance to the OIGs suggesting that they determine and report these additional broker error rates separately because they were not within the agencies’ control. Some OIGs may not have reported separate agency-specific and broker error rates as their work was already substantially completed. Of the nine OIGs that reported they tested all transactions in the populations of their agencies’ data, five OIGs reported actual overall error rates and found that overall error rates for accuracy were higher than the error rates for completeness or timeliness. 
Of the four OIGs that reported agency-specific error rates, only one OIG reported an error rate for accuracy, and it was greater than 75 percent. One OIG reported a broker error rate, and it was higher for accuracy than for completeness or timeliness. In addition to using different testing methodologies (e.g., statistical sampling or testing the full population of transactions) and source files, as previously discussed, the OIGs also used different assumptions and sampling criteria to design and select sample items for testing. As a result, the overall error rates are not comparable and a government-wide error rate cannot be projected. Based on discussions with OIGs, the FAEC DATA Act Working Group identified certain data errors caused by broker-related issues that it determined to be government-wide data reporting issues. Also, because the broker is maintained by Treasury, these issues were beyond the control of the affected agencies. According to the working group, these issues involve inconsistencies in data the broker extracted from government-wide federal financial award reporting systems, as described in table 2. To help provide consistency in reporting these issues, the working group developed standard report language used by OIGs in their reports to describe the errors caused by the broker. The standard reporting language stated that because agencies do not have responsibility for how the broker extracts data, the working group did not expect agency OIGs to evaluate the reasonableness of Treasury’s planned corrective actions. In April 2018, a Treasury official told us that the issues causing these problems have been resolved. To address these issues, the Treasury official stated that, among other things, Treasury implemented the DATA Act Information Model Schema version 1.1, loaded previously missing historical procurement data to USAspending.gov, updated how information from FPDS-NG is mapped to File D1, and replaced ASP with FABS. 
However, we plan to follow up on these efforts as part of our ongoing monitoring. In their survey responses and OIG reports, 43 OIGs reported agency-specific control deficiencies that may have contributed to or increased the risk of data errors. Of these 43 OIGs, 37 OIGs identified deficiencies affecting accuracy, 32 OIGs identified deficiencies affecting completeness, and 14 OIGs identified deficiencies affecting timeliness. A few OIGs reported that they leveraged their financial statement audit results, which found deficiencies in certain financial reporting controls, in conducting their DATA Act reviews. We categorized the OIGs' reported control deficiencies and found that the categories with the most frequently reported deficiencies related to their agencies' lack of effective procedures or controls, such as conducting reviews and reconciliations of data submissions to source systems, and information technology system deficiencies, as shown in figure 5. In their survey responses, OIGs provided additional information about whether their agencies' controls over agency source systems and controls over the DATA Act submission processes were properly designed, implemented, and operating effectively to achieve their objectives. For both CFO Act and non-CFO Act agencies, OIGs generally reported that agencies' internal controls over source systems and the DATA Act submission process were designed effectively but were not implemented or operating effectively as designed. Some examples of agency-specific control deficiencies reported by the OIGs are as follows. Lack of effective procedures or controls. Deficiencies where agency procedures for reviewing and reconciling data and files to different sources were not performed, or were performed ineffectively, or standard operating procedures for data submissions had not been designed and implemented.
For example, some of these deficiencies related to agencies’ lack of review or reconciliation of data in Files A and B to data in Files D1 and D2. Further, two OIGs found that their agencies did not perform any sort of quality review of their data until after they were submitted to the broker. Another OIG found that its agency did not ensure that its components developed objectives for accomplishing its data submissions, assessed the risks to achieving those objectives, or established corresponding controls to address them. As a result, the agency’s DATA Act submissions included errors. Information technology system deficiencies. Deficiencies related to the lack of effective automated systems controls necessary to ensure proper system user access or automated quality control procedures and the accuracy and completeness of data, as well as systems that are not compliant with federal financial management system requirements. For example, one OIG noted that its agency experienced issues related to segregation of duties and access controls that affected the agency’s ability to ensure completeness and accuracy of data in its financial, procurement, and grant processing systems. Another OIG found that its agency did not complete necessary system updates to ensure that all data were certified prior to submission. Further, an OIG reported that its agency’s information system was unable to combine transactions with the same unique identifiers, resulting in over 12,000 transactions being removed because of broker warnings. Insufficient documentation. Deficiencies related to agencies’ production and retention of documentary evidence supporting their DATA Act submissions. For example, three OIGs found that their agencies were unable to provide supporting documentation for various portions of their DATA Act submissions. 
Another OIG reported that one of its agency's components did not take effective steps to ensure that procurement and grant personnel understood the specific documentation that should be maintained to support data entered in grant and contract files. Further, another OIG found that its agency did not document the process for compiling the agency's DATA Act submission files. Inappropriate application of data standards and data elements. Deficiencies related to the inappropriate use of data definition standards or the misapplication of data elements. For example, one OIG found that its agency did not identify the prior year funding activity names or codes for all transactions included in its spending data submission. Another OIG found that its agency did not consistently apply standardized object class codes in compliance with OMB guidance, as well as standardized U.S. Standard General Ledger account codes as outlined in Treasury guidance. Similarly, an OIG reported instances where agency users of certain award systems were not knowledgeable about how required DATA Act elements were reported in their procurement system. Data entry errors or incomplete data. Deficiencies related to controls over data entry and errors or incomplete data in agency or government-wide external systems. For example, an OIG found that its agency did not include purchase card transactions greater than $3,500, which represented about 1 percent of the agency's data submission. Another OIG reported that its agency's service provider did not enter miscellaneous obligations in the data submission file because it expected the agency to enter such transactions in the federal procurement data system. Timing errors. Deficiencies related to delays in reporting information to external government-wide systems that result in errors in the data submitted.
For example, one OIG reported that its agency did not take effective steps to ensure that contracting officers timely report required DATA Act award attribute information in FPDS-NG. Another OIG reported that a bureau in its agency consistently submitted certain payment files 2 months late, resulting in incomplete Files C and D2 in the agency's data submission. Inaccurate broker uploads. Deficiencies related to agencies uploading data to the broker. For example, one OIG found a lack of effective internal controls over data reporting from its agency's source systems to the DATA Act broker for ensuring that the data reported are complete, timely, accurate, and of quality. Specifically, certain components were not able to consolidate data from multiple source systems and upload accurate data to the broker for File C. Another OIG reported that the broker could not identify and separate an individual component's award data from agency-wide award data. Specifically, the broker recognized only agency-wide award data and did not include award data from its agency's individual components. As a result, the OIG reported that the component did not comply with the DATA Act requirements because its submission did not include all of the agency's required award data. Reliance on manual processes. Deficiencies that cause agencies to rely on manual processes and work-arounds. For example, one OIG found that in the absence of system patches to map data elements directly from feeder award systems to financial systems, its agency developed an interim solution that relied heavily on manual processes to collect data from multiple owners and systems and increased the risk for data quality to be compromised. Another OIG reported that its agency's financial management systems are outdated and unable to meet DATA Act requirements without extensive manual efforts, resulting in inefficiencies in preparing data submissions. Other.
Other deficiencies included, among other things, instances where an agency's senior accountable official did not submit a statement of assurance certifying the reliability and validity of the agency account-level and award-level data submitted to the DATA Act broker, an agency did not provide adequate training and cross-training of personnel on the various DATA Act roles, and certain components of one agency were not included in the agency's DATA Act executive governance structure. To help address control deficiencies and other issues that resulted in data errors, 48 of the 53 OIGs (23 CFO Act agency OIGs and 25 non-CFO Act agency OIGs) included recommendations in their reports. As shown in figure 6, the most common recommendations OIGs made to their agencies related to the need for agencies to develop controls over their data submissions, develop procedures to address errors, and finalize or implement procedures or guidance. Some examples of OIG recommendations made to agencies to improve data quality and controls are as follows. Develop controls over submission process. Recommendations related to controls or processes to resolve issues in submitting agency financial system data to the broker. For example, one OIG recommended that its agency develop and implement a formal process to appropriately address significant items on broker warning reports, which could indicate systemic issues. Develop procedures to address errors. Recommendations related to procedures to address data errors in the agency's internal systems. For example, one OIG recommended that its agency correct queries to extract the correct information and ensure that all reportable procurements are included in its DATA Act submissions. Finalize or implement procedures or guidance. Recommendations related to establishing and documenting an agency's DATA Act-related standard operating procedures or agency guidance, including the roles and responsibilities of agency stakeholders.
For example, one OIG recommended that its agency update its guidance on what address to use for primary place of performance to be consistent with OMB and Treasury guidance. Maintain documentation. Recommendations related to establishing or maintaining documentation of the agency’s procedures, controls, and related roles and responsibilities for performing them. For example, one OIG recommended that its agency develop a central repository for grant award documentation and maintain documentation to support its DATA Act submissions. Provide training. Recommendations related to developing, implementing, and documenting training for an agency’s DATA Act stakeholders. For example, one OIG recommended that its agency provide mandatory training to all contracting officers and grant program staff to ensure their understanding of DATA Act requirements. Work with Treasury, OMB, and other external stakeholders. Recommendations for the agency to work with Treasury, OMB, or other stakeholders external to the agency to resolve government-wide issues. For example, one OIG recommended that its agency work closely with its federal shared service provider to address timing and coding errors that the service provider caused for future DATA Act submissions. Implement systems controls or modify systems. Recommendations related to developing and implementing automated systems and controls. For example, one OIG recommended that its agency complete the implementation of system interfaces and new procedures that are designed to improve collection of certain data that were not reported timely to FPDS-NG and improve linkages of certain financial transactions and procurement awards using a unique procurement instrument identifier. Increase resources. Recommendations related to increasing the staff, resources, or both necessary to fully implement DATA Act requirements. 
For example, one OIG recommended that its agency allocate the resources to ensure that reconciliations are performed when consolidating source system data to the DATA Act submission files. Management for 36 agencies stated that they concurred or generally concurred with the recommendations of their OIGs (see fig. 7). Management at many of these agencies stated that they continued to improve their processes and controls for subsequent data submissions. In addition, management for seven agencies stated that they partially concurred with the recommendations that their OIGs made. Management for two agencies did not concur with their OIGs’ recommendations. Management for one agency that did not concur stated that the agency should not be held responsible for data discrepancies that other agencies caused, and management for the other agency stated that they followed authoritative guidance that OMB and Treasury issued related to warnings and error messages. OMB staff told us that they reviewed the OIG reports—focusing on the 24 CFO Act agencies—to better understand issues that the OIGs identified and to determine whether additional guidance is needed to help agencies improve the completeness, timeliness, accuracy, and quality of their DATA Act submissions. OMB staff explained to us how they have addressed or are planning to address OIG-identified issues. OMB staff told us that in April 2017 the Chief Financial Officers (CFO) Council’s DATA Act Audit Collaboration working group was formed, which includes officials from OMB, Treasury, and the CFO Council, to foster collaboration and understanding of the risks that were being identified as agencies prepared and submitted their data. The working group also consults with CIGIE, which is not a member of the working group, but its representatives attend meetings to help the group members better understand issues involving the OIG reviews and the IG guide. 
According to OMB staff, the working group is the focal point to identify government- wide issues and identify guidance that can be clarified. They also told us that OMB continues to meet with this working group to determine what new guidance is needed to meet the DATA Act requirement to ensure that the standards are applied to the data available on the website. In June 2018, OMB issued new guidance requiring agencies to develop data quality plans intended to achieve the objectives of the DATA Act. According to OMB staff, OMB is committed to ensuring integrity and providing technical assistance to ensure data quality. Treasury officials told us that they reviewed OIG reports that were publicly available on Oversight.gov and are collaborating with OMB and the CFO Council to identify and resolve government-wide issues, including issues related to the broker, so that agencies can focus on resolving their agency-specific issues. In February 2018, the working group documented certain topics identified for improving data quality and value. OMB staff and Treasury officials also told us that OMB and Treasury have taken steps to address issues we previously reported related to their oversight of agencies’ implementation of the DATA Act. For example, we recommended in April 2017 that OMB and Treasury take appropriate actions to establish mechanisms to assess the results of independent audits and reviews of agencies’ compliance with the DATA Act requirements. The DATA Act Audit Collaboration working group is one of the mechanisms OMB and Treasury use to assess and discuss the results of independent audits and to address identified issues. In November 2017, we also recommended, among other things, that Treasury (1) reasonably assure that ongoing monitoring controls to help ensure the completeness and accuracy of agency submissions are designed, implemented, and operating as designed, and (2) disclose known data quality issues and limitations on the new USAspending.gov. 
Treasury has taken some steps and is continuing to take steps to address these recommendations. For example, under the data quality section of the About page on USAspending.gov, Treasury disclosed the requirement for each agency OIG to report on its agency’s compliance with the DATA Act and noted the availability of the reports at Oversight.gov. We provided a draft of this report to OMB, Treasury, and CIGIE for comment. We received written comments from CIGIE that are reproduced in appendix III and summarized below. In addition, OMB, Treasury, and CIGIE provided technical comments, which we incorporated as appropriate. In its written comments, CIGIE noted that the report provides useful information on OIG efforts to meet oversight and reporting responsibilities under the DATA Act. CIGIE further stated that it believes that the report will contribute to a greater understanding of the oversight work that the OIG community performs and of agency efforts to report and track government-wide spending more effectively. We are sending copies of this report to the Director of the Office of Management and Budget, the Secretary of the Treasury, the Chairperson and Vice Chairperson of the Council of the Inspectors General on Integrity and Efficiency, as well as interested congressional committees and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9816 or rasconap@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
The Digital Accountability and Transparency Act of 2014 (DATA Act) includes provisions requiring us to review the mandated reports of agencies’ Offices of Inspector General (OIG) and issue our own reports assessing and comparing the completeness, timeliness, accuracy, and quality of the data that federal agencies submit under the act and the federal agencies’ implementation and use of data standards. We issued our first report on data quality in November 2017, as required. This report includes our review of the OIGs’ mandated reports, which were also issued primarily in November 2017. Our reporting objectives were to describe (1) the reported scope of work covered and type of audit standards OIGs used in their reviews of agencies’ DATA Act spending data; (2) any variations in the reported implementation and use of data standards and quality of agencies’ data, and any common issues and recommendations reported by the OIGs; and (3) the actions, if any, that the Office of Management and Budget (OMB) and the Department of the Treasury (Treasury) have reported taking or planning to take to use the results of OIG reviews to help monitor agencies’ implementation of the act. To address our first and second objectives, we obtained and reviewed 53 OIG reports that were issued on or before January 31, 2018, including reports related to 24 Chief Financial Officers Act of 1990 (CFO Act) agencies and 29 non-CFO Act agencies. Of 91 entities for which second quarter fiscal year 2017 spending data were submitted, we did not obtain and review OIG DATA Act reports for 38 entities with obligations totaling at least $1.2 billion (as displayed on USAspending.gov on May 23, 2018) because no reports for those entities were publicly available by our January 31, 2018, cutoff date. Table 3 lists the 53 agencies for which we obtained and reviewed the OIG reports on the quality of data that agencies submitted in accordance with DATA Act requirements. 
We also developed and conducted a survey of OIGs to provide further details on the design and results of their efforts to conduct statistical samples to select and test agencies’ data submissions and reviews of internal controls. In November 2017, we sent the survey to those OIGs whose agencies originally submitted DATA Act data to Treasury’s DATA Act broker. We received and reviewed responses from the 53 OIGs that we obtained reports from, with 9 OIGs including the completed surveys in their published reports and the others providing us their completed survey responses separately. We analyzed 53 OIG reports and survey responses, following up with OIGs for clarification when necessary. We reviewed each of the 53 OIG reports we obtained and identified the reported scope of work covered (e.g., the quarter of data reviewed) and the type of audit standards OIGs used to conduct their reviews (e.g., performance audit or attestation examination engagement). We also developed and used a data collection instrument to compile and summarize the conclusions and opinions included in the OIG reports on the completeness, timeliness, accuracy, and quality of agencies’ data submissions and their implementation and use of data standards. During this process, GAO analysts worked in teams of three to reach a consensus on how these OIG conclusions and opinions were categorized. For OIG reports that did not specifically state whether the agencies met the DATA Act requirements, we considered the reported results in conjunction with the more detailed information provided in the OIG responses to our survey and made conclusions about the OIGs’ assessments based on our professional judgment. We also reviewed the OIG reports and survey responses and used two data collection instruments to compile, analyze, and categorize common issues or agency-specific control deficiencies the OIGs identified in their reviews and recommendations they made to address them. 
During this process, GAO analysts worked in teams of three to reach a consensus on how these issues and deficiencies were categorized. To address our third objective, we interviewed OMB staff and Treasury officials about how they used or planned to use the results of the OIG DATA Act reviews to assist them in their monitoring of agencies’ implementation of the act. We conducted this performance audit from September 2017 to July 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In their survey responses, Offices of Inspector General (OIG) for 45 agencies reported actual overall error rates or estimated error rates and estimated ranges of errors associated with the spending data transactions they tested for accuracy, completeness, or timeliness (see table 4). These results include OIGs that tested a statistical sample of transactions, tested the full population, and conducted an assessment of internal controls without additional substantive testing. OIGs that tested a sample responded that they used different sampling criteria, and the sources of files they used to select their statistical samples varied based on the files that were available. Regardless of whether the OIG tested a sample or the full population, some of the OIGs selected items for testing from File C, File D1, File D2, or some combination thereof. As a result, the overall error rates the OIGs reported are not from the same data submission files and are not fully comparable, but are intended to provide additional information on the individual results of the completeness, timeliness, and accuracy of the data each agency OIG tested. 
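As a hedged illustration of how a sample-based overall error rate might be projected, the following sketch uses a normal-approximation (Wald) confidence interval with entirely hypothetical figures; it is not drawn from any OIG's actual sampling methodology or data.

```python
import math

def estimate_error_rate(errors_found, sample_size, z=1.96):
    """Project a sample error rate to the population with a 95 percent
    normal-approximation (Wald) confidence interval."""
    p = errors_found / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical sample: 57 erroneous transactions out of 380 tested
rate, low, high = estimate_error_rate(57, 380)
print(f"estimated error rate: {rate:.1%} (95% CI: {low:.1%} to {high:.1%})")
# → estimated error rate: 15.0% (95% CI: 11.4% to 18.6%)
```

Because OIGs drew samples from different files (File C, D1, or D2) with different criteria, point estimates produced this way are not comparable across agencies, which is consistent with the caveat above.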
In addition to the contact named above, Michael LaForge (Assistant Director), Diane Morris (Auditor in Charge), Umesh Basnet, Thomas Hackney, and Laura Pacheco made major contributions to this report. Other key contributors include Dave Ballard, Carl Barden, Maria Belaval, Jenny Chanley, Patrick Frey, Ricky Harrison, Jason Kelly, Jason Kirwan, Quang Nguyen, Samuel Portnow, Carl Ramirez, Anne Rhodes-Kline, and Dacia Stewart. DATA Act: OMB, Treasury, and Agencies Need to Improve Completeness and Accuracy of Spending Data and Disclose Limitations. GAO-18-138. Washington, D.C.: November 8, 2017. DATA Act: As Reporting Deadline Nears, Challenges Remain That Will Affect Data Quality. GAO-17-496. Washington, D.C.: April 28, 2017. DATA Act: Office of Inspector General Reports Help Identify Agencies’ Implementation Challenges. GAO-17-460. Washington, D.C.: April 26, 2017. DATA Act: Implementation Progresses but Challenges Remain. GAO-17-282T. Washington, D.C.: December 8, 2016. DATA Act: OMB and Treasury Have Issued Additional Guidance and Have Improved Pilot Design but Implementation Challenges Remain. GAO-17-156. Washington, D.C.: December 8, 2016. DATA Act: Initial Observations on Technical Implementation. GAO-16-824R. Washington, D.C.: August 3, 2016. DATA Act: Improvements Needed in Reviewing Agency Implementation Plans and Monitoring Progress. GAO-16-698. Washington, D.C.: July 29, 2016. DATA Act: Progress Made but Significant Challenges Must Be Addressed to Ensure Full and Effective Implementation. GAO-16-556T. Washington, D.C.: April 19, 2016. DATA Act: Data Standards Established, but More Complete and Timely Guidance Is Needed to Ensure Effective Implementation. GAO-16-261. Washington, D.C.: January 29, 2016. DATA Act: Progress Made in Initial Implementation but Challenges Must be Addressed as Efforts Proceed. GAO-15-752T. Washington, D.C.: July 29, 2015.
The DATA Act was enacted to increase accountability and transparency and, among other things, expanded on the required federal spending information that agencies are to submit to Treasury for posting to a publicly available website. The act also includes provisions requiring a series of oversight reports by agencies' OIGs and GAO. The objectives of this report are to describe (1) the reported scope of work covered and type of audit standards OIGs used in their reviews of agencies' DATA Act spending data; (2) any variations in the reported implementation and use of data standards and quality of agencies' data, and any common issues and recommendations reported by the OIGs; and (3) the actions, if any, OMB and Treasury have reported taking or planning to take to use the results of OIG reviews to help monitor agencies' implementation of the act. To address these objectives, GAO reviewed 53 OIG reports issued on or before January 31, 2018, that assessed agencies' first submissions of spending data for the second quarter of fiscal year 2017 and surveyed the OIGs to obtain additional information. The Digital Accountability and Transparency Act of 2014 (DATA Act) requires agencies' Offices of Inspector General (OIG) to issue reports on their assessments of the quality of the agencies' spending data submissions and compliance with the DATA Act. The scope of all OIG reviews covered their agencies' second quarter fiscal year 2017 submissions. The files the OIGs used to select and review sample transactions varied based on data availability, and OIGs performed different types of reviews under generally accepted government auditing standards. Some OIGs reported testing a statistical sample of transactions that their agencies submitted and other OIGs reported testing the full population of submitted transactions. Because of these variations, the overall error rates reported by the OIGs are not fully comparable and a government-wide error rate cannot be projected. 
According to the OIG reports, about half of the agencies met Office of Management and Budget (OMB) and Department of the Treasury (Treasury) requirements for the implementation and use of data standards. The OIGs also reported that most agencies' first data submissions were not complete, timely, accurate, or of quality. OIG survey responses show that OIGs generally reported higher (projected) overall error rates for the accuracy of data than for completeness and timeliness. OIGs reported certain errors that involve inconsistencies in how the Treasury broker (system that collects and validates agency-submitted data) extracted data from certain federal award systems that resulted in government-wide issues outside the agencies' control, while other errors may have been caused by agency-specific control deficiencies. For example, OIGs reported deficiencies related to agencies' lack of effective procedures or controls and systems issues. Most OIGs made recommendations to agencies to address identified concerns. OMB staff and Treasury officials told GAO that they reviewed the OIG reports to better understand issues identified by the OIGs. OMB issued new guidance in June 2018 requiring agencies to develop data quality plans intended to achieve the objectives of the DATA Act. Treasury officials told GAO that they are collaborating with OMB and the Chief Financial Officers Council DATA Act Audit Collaboration working group to identify and resolve government-wide issues. GAO is not making recommendations in this report. The Council of the Inspectors General on Integrity and Efficiency (CIGIE) noted that GAO's report provides useful information on OIG efforts to meet oversight and reporting responsibilities under the DATA Act. OMB, Treasury, and CIGIE also provided technical comments that GAO incorporated as appropriate.
Territorial governments issue debt securities and receive loans for a variety of purposes, including to finance long-term investments, such as infrastructure projects, and to fund government operating costs. For the purposes of this report, total public debt outstanding refers to the sum of bonds and other debt held by and payable to the public, as reported in the territories’ single audit reports. Bonds payable are marketable bonded debt securities issued by the territories’ primary governments or their component units and held by investors outside those governments. The primary government is generally comprised of governmental activities (generally financed with taxes and intergovernmental aid) and business- type activities (generally financed with charges for goods and services). Component units are legally separate entities for which a government is financially accountable. For the purposes of this report, any reference to total government activity and balances includes both the primary government and component units. Other debt payable may include shorter term marketable notes and bills issued by territorial governments and held by investors outside those governments, non-marketable intragovernmental notes, notes held by local banks, federal loans, intragovernmental loans, and loans issued by local banks. Pension liabilities and other post-employment benefits (OPEB) are not included in our definition of total public debt. Marketable debt securities, primarily bonds with long-term maturities, are the main vehicle by which the territories access capital markets. Municipal bonds issued by all five territories have traditionally been attractive to investors because they are triple tax exempt; interest from the bonds is generally not subject to federal, state, and local income taxes regardless of an investor’s state of residence. 
There are several different types of marketable debt securities: General obligation bonds are bonds issued by territorial governments that are payable from the general funds of the issuer, although the precise source and priority of payment for general obligation bonds may vary considerably from issuer to issuer depending on applicable law. Most general obligation bonds are said to entail the full faith and credit (and in many cases the taxing power) of the issuer, depending on applicable law. In USVI, unlike in the other four territories in which general obligation bonds are backed by the full faith and credit of the government, debt issued by the primary government is backed by either (1) both a general obligation of the government and revenue from USVI’s gross receipts tax, or (2) revenue from the federal excise tax on rum rebated to the territory. Limited obligation bonds are bonds payable from specific taxes that are limited by law in rate or amount, while revenue bonds are payable from specific sources of revenue. Marketable notes differ from bonds in that they are short-term obligations of an issuer to repay a specified principal amount on a certain date, together with interest at a stated rate, usually payable from a defined source of anticipated revenues. Notes usually mature in 1 year or less, although notes of longer maturities are also issued. Bonds and notes may be issued by both the territories’ primary governments and by their component units. Examples of the territories’ component units are USVI’s Water and Power Authority, Guam’s Airport Authority, CNMI’s Ports Authority, and Puerto Rico’s Electric Power Authority. Unlike the states, territories are prohibited from authorizing their component units to seek debt restructuring under Chapter 9 of the federal bankruptcy code, which can be used to extend the timeline for debt repayment, refinance debt, or reduce the principal or interest on existing debt. U.S. 
law restricts the territories’ authority to impose certain territorial taxes. Three territories—Guam, CNMI, and USVI—are required by U.S. law to have a mirror tax code. In general, this means that these territories must use the U.S. Internal Revenue Code (IRC) as their territorial income tax law. In contrast, American Samoa and Puerto Rico, which are not bound by a mirror tax code, have established and promulgated their own income tax regulations. Although Guam and CNMI are mirror-code jurisdictions, they are authorized under the Tax Reform Act of 1986 to delink from the IRC if certain conditions are met. Revenues are amounts that result from governments’ exercise of their sovereign power to tax or otherwise compel payment. Revenues also include income generated by the territories’ component units. While our analysis primarily focuses on trends in general revenues, we also include total revenue—general revenues and program revenues combined—in our analysis. In addition to general revenue levels, another measure of fiscal health is the net position for primary government activities, which represents the difference between the primary government’s assets (including the deferred outflows of resources) and the primary government’s liabilities (including the deferred inflows of resources). In other words, the net position for primary government activities reflects what the primary government would have left after satisfying its liabilities. A negative net position means that the primary government has more liabilities than assets. A decline in net position may indicate a deteriorating financial position. While our analysis primarily focuses on trends in the net position for the primary government, we also include certain information on trends in the total net position—primary government net position and component unit net position combined—for the government. 
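The net position calculation described above reduces to simple arithmetic. The sketch below uses hypothetical balances chosen only for illustration; they are not figures from any territory's financial statements.

```python
# Net position = (assets + deferred outflows of resources)
#              - (liabilities + deferred inflows of resources)
# All figures below are hypothetical, in billions of dollars.
assets, deferred_outflows = 10.0, 1.5
liabilities, deferred_inflows = 55.0, 3.0
net_position = (assets + deferred_outflows) - (liabilities + deferred_inflows)
# A negative result means liabilities exceed assets, a sign of a
# deteriorating financial position.
print(f"net position: ${net_position:.1f} billion")
# → net position: $-46.5 billion
```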
Fiscal risks refer to responsibilities, programs, and activities that may legally commit or create the expectation for future government spending. Fiscal risks may be explicit in that the government is legally required to fund the commitment, or implicit in that an exposure arises not from a legal commitment, but from current policy, past practices, or other factors that may create the expectation for future spending. Civilian pension benefits are typically an example of an explicit fiscal risk because the government has a legal commitment to pay pension benefits earned by current government employees who will receive benefits in the future and to pay retirees who currently receive benefits. Puerto Rico’s total public debt outstanding increased continuously between fiscal years 2005 and 2014. (See figure 2.) Total public debt grew from $39.2 billion in fiscal year 2005 to $67.8 billion at the end of fiscal year 2014—an average rate of 6.3 percent per year. Bonded debt outstanding—including mainly general obligation and revenue bonds—represented the majority of total public debt outstanding for all years. Bonded debt outstanding averaged 86 percent of total public debt between fiscal years 2005 and 2014, increasing from a total of $35 billion in fiscal year 2005 to $58.5 billion in fiscal year 2014. Puerto Rico’s Consolidated Audited Financial Report for fiscal year 2015 was not available as of June 2017. However, in the March 13, 2017, fiscal plan released by the Government of Puerto Rico, total public debt outstanding was listed as $74.3 billion as of February 2017. As of fiscal year 2014, the primary government’s bonded debt outstanding was mainly comprised of revenue bonds. These accounted for $24.3 billion of the $37.9 billion in total bonded debt. In contrast, between fiscal years 2005 and 2008, general obligation bonds represented the majority of the primary government’s bonded debt. In fiscal year 2009, the amount of revenue bonds outstanding tripled. 
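The average annual growth rate of 6.3 percent cited above is a compound rate; it can be reproduced from the endpoint figures as a quick arithmetic check (this is a sketch, not GAO's methodology):

```python
# Total public debt grew from $39.2 billion (FY2005) to $67.8 billion (FY2014),
# a span of 9 fiscal years.
start, end, years = 39.2, 67.8, 9
avg_growth = (end / start) ** (1 / years) - 1
print(f"average annual growth: {avg_growth:.1%}")
# → average annual growth: 6.3%
```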
The risks of general obligation bonds and revenue bonds are different. A revenue bond is secured by a specific revenue stream, identified in the bond contract, whereas a general obligation bond is secured by the full taxing power of the government, but also reliant on the full faith and credit of the issuing government. Puerto Rico also issued notes between fiscal years 2005 and 2014. Puerto Rico’s primary government and the three largest component units—the Puerto Rico Electric Power Authority (PREPA), the Puerto Rico Aqueduct and Sewer Authority (PRASA), and the Puerto Rico Highways and Transportation Authority (PRHTA)—owed the majority of Puerto Rico’s public debt outstanding in fiscal year 2014. (See table 1.) These component units mostly issued debt backed by their own resources, including the revenue generated from their operations. Other component units also held public debt in fiscal year 2014, including the Government Development Bank, the State Insurance Fund Corporation, and the Puerto Rico Trade and Export Company, among others. The primary government’s share of total public debt outstanding grew relative to debt owed by all of the component units from 44 percent in fiscal year 2005 to 59 percent in fiscal year 2014. Puerto Rico’s total public debt outstanding as a percentage of Gross Domestic Product (GDP) grew from 47 percent in fiscal year 2005 to 66 percent in fiscal year 2014, and its ratio of total public debt outstanding to Gross National Product (GNP) grew from 71 percent of GNP in fiscal year 2005 to 99 percent in fiscal year 2014. (See figure 3.) GDP measures the value of goods and services produced inside a country, or for the purpose of this report, a territory. In contrast, GNP measures the value of goods and services produced by its residents. GNP includes production from residents abroad and excludes production by foreign companies in a country. 
In Puerto Rico, GDP has consistently been greater than GNP, which means that production by foreign companies in Puerto Rico is larger than production by Puerto Rican residents in the territory and abroad. For this reason, according to the U.S. Department of the Treasury, GNP is generally a more representative measure of Puerto Rico’s economic activity than GDP. A July 2014 report by the Federal Reserve Bank of New York stated that debt-to-GNP ratios above just 60 percent can inhibit economic growth because they generally lead to higher financing costs and limit access to other sources of financing. Puerto Rico’s ratio of total public debt outstanding to GNP has remained above 90 percent since 2010. Puerto Rico’s total public debt outstanding per capita has almost doubled since fiscal year 2005, rising from $10,000 per person in fiscal year 2005 to $19,000 per person in fiscal year 2014. (See figure 4.) Puerto Rico’s general revenue fluctuated between fiscal years 2005 and 2014, with lows around $11.6 billion between fiscal years 2008 and 2010 and again in 2013. Puerto Rico’s general revenue in fiscal year 2014 was $13.8 billion, of which 75 percent or $10.3 billion was tax revenue. Most of the tax revenue for the same year was reported as income taxes (52 percent of the total or $5.4 billion) and excise taxes (33 percent of the total or $3.4 billion). Revenue in fiscal year 2014 increased by over $2 billion from the prior year. The majority of this growth was due to increases in income and excise taxes. Puerto Rico’s total revenue (i.e., general revenue and program revenue combined) also fluctuated but grew slightly by 3 percent on average, per year, from $25.5 billion in fiscal year 2005 to $32.5 billion in fiscal year 2014. (See figure 5.) 
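The debt-to-GNP and per capita figures above follow from dividing total debt by GNP and by population. In the sketch below, the debt totals come from the report, but the GNP values are assumptions back-derived from the reported percentages and the populations are approximate Census-style estimates assumed for illustration:

```python
# Total public debt, in billions of dollars (from the report).
debt = {"FY2005": 39.2, "FY2014": 67.8}
# Approximate GNP (billions) and population (millions): assumed values,
# not figures taken from the report.
gnp = {"FY2005": 55.2, "FY2014": 68.5}
population = {"FY2005": 3.82, "FY2014": 3.53}

for fy in debt:
    ratio = debt[fy] / gnp[fy]                          # debt as a share of GNP
    per_capita = debt[fy] * 1e9 / (population[fy] * 1e6)  # dollars per person
    print(f"{fy}: debt-to-GNP {ratio:.0%}, about ${per_capita:,.0f} per person")
```

Rounded to the nearest thousand dollars, the per capita results match the report's $10,000 and $19,000 figures; the report's 71 percent and 99 percent ratios emerge directly from the assumed GNP values.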
Despite the growth in revenue in fiscal year 2014, Puerto Rico’s net position for the primary government as of fiscal year end 2014 was a negative $49.7 billion, declining from a negative $46.4 billion as of fiscal year end 2013. Moreover, despite the fluctuations in revenue between fiscal years 2005 and 2014, Puerto Rico’s net position for the primary government declined year over year from a negative $15.2 billion as of fiscal year end 2005 to a negative $49.7 billion as of fiscal year end 2014. Puerto Rico’s declining net position for the primary government reflects its deteriorating financial position. Further, the effect of Puerto Rico implementing Governmental Accounting Standards Board (GASB) Statement No. 68, Accounting and Financial Reporting for Pensions—An Amendment of GASB Statement No. 27, is not yet known. GASB Statement No. 68 was in effect for fiscal years beginning after June 15, 2014, and established standards for measuring and recognizing liabilities, deferred outflows of resources, and deferred inflows of resources related to pensions. For each of the other territories that implemented GASB Statement No. 68, implementing the statement resulted in the territory recognizing previously unrecognized net pension liabilities and, therefore, a decline in ending net position in the year of recognition. Puerto Rico’s total net position for the primary government and component units combined also declined year over year between fiscal years 2005 and 2014, from a positive $2.5 billion as of fiscal year end 2005 to a negative $43.6 billion as of fiscal year end 2014. Puerto Rico officials, representatives from ratings agencies that we spoke to, and publicly available reports that we reviewed cited various major factors as contributors to Puerto Rico’s high debt levels. 
The factors cited include the following: Public debt financing government operations: Ratings agency officials told us that Puerto Rico has long used public debt as a means to finance general government operations and indicated that debt has been used for this purpose in Puerto Rico since at least 2000. According to these officials, the sustained use of debt to finance general government operations is unusual when compared to states and was considered a “red flag” in the case of Puerto Rico. As Puerto Rico’s debt grew, the government found it increasingly difficult to meet other responsibilities, including paying tax refunds, settling accounts payable, and fulfilling pension obligations. Triple tax exempt status: Debt in Puerto Rico was attractive to investors for its triple tax exempt status. Over time, Puerto Rico’s primary government accumulated debt from investors without addressing its persistent deficits. According to the February 28, 2017, version of the Puerto Rico government’s fiscal plan, Puerto Rico’s capacity to issue debt at favorable rates postponed the implementation of fiscal reforms and controls necessary to balance Puerto Rico’s budget. Financial data limitations: A lack of comprehensive, timely, and accurate financial data from Puerto Rico may have limited the ability of some investors to anticipate or fully understand the economic crisis in the territory. For example, according to the Government of Puerto Rico’s February 28, 2017, version of the fiscal plan, audited financial statements for Puerto Rico were only issued on time three times from 2005 to 2014. Audited financial statements are still currently pending for fiscal years 2015 and 2016. In addition, forecasts routinely overestimated revenue. Recession and outmigration: Recession and outmigration have resulted in reduced tax revenue. A recession in Puerto Rico began in 2006 and continued through the period we reviewed. 
Outmigration also accelerated most years since 2005 as Puerto Ricans migrated to the U.S. mainland and elsewhere. According to U.S. Census Bureau estimates, Puerto Rico lost 14 percent of its population, more than 550,000 individuals, between July 2009 and July 2016. 936 tax credit phase out: The phase out of the section 936 tax credit is often cited by Puerto Rico officials for its negative effect on Puerto Rico’s economy. Other experts said the effect was not as significant. In addition, in 2006, we reported that the expiration of the benefit did not ultimately lead to a reduction in income and value added. A substantial share of production in Puerto Rico is carried out by U.S. multinational corporations, in part because of federal corporate income tax benefits once available to firms located in Puerto Rico. Prior to 1994, certain U.S. corporations could claim the possessions tax credit under section 936 of the Internal Revenue Code (IRC). In general, the credit equaled the full amount of federal tax liability related to an eligible corporation’s income from its operations in a possession—including Puerto Rico—effectively making such income tax-free. In 1993, caps were placed on the amount of possessions credits that corporations could earn. In 1996, the credit was repealed, although corporations that were existing credit claimants were eligible to claim credits through 2005. Puerto Rico had missed up to $1.5 billion in debt service payments as of September 2016. Puerto Rico’s government is working with the Financial Oversight and Management Board (Board) to implement plans for long-term financial reform and to adjust debts accrued by both the primary government and public corporations. The Board has the power to approve or certify fiscal plans, budgets, voluntary agreements with bondholders, debt restructuring plans, and critical projects within Puerto Rico.
As the first step in a process to adjust debts in Puerto Rico, the Board certified the current Governor’s fiscal plan in March 2017, which outlines strategies for financial reform. The fiscal plan includes estimates for how much each year can be allocated for debt payments, which average 23 percent of total debt payments due for the years 2018 through 2026. (See figure 6.) On May 3, 2017, the Board filed an initial petition for restructuring Puerto Rico’s debt and pension liabilities. Puerto Rico’s ultimate liability for its outstanding debt will be determined based on the outcome of this process in federal court. American Samoa’s total public debt outstanding grew from $27 million in fiscal year 2005 to $69.5 million in fiscal year 2015. Until fiscal year 2015, the portion of American Samoa’s total public debt outstanding that was bonded debt outstanding was limited. (See figure 7.) In fiscal year 2007, the territory paid off a general obligation bond that was issued in fiscal year 2000 to refinance prior debt. Between fiscal years 2008 and 2014, American Samoa had no outstanding bonded public debt. In fiscal year 2015, American Samoa’s primary government issued a general obligation bond for about $55 million, and in January 2016 a second bond was issued for $23 million. Most of American Samoa’s bonded debt outstanding is scheduled to mature by 2035. Between fiscal years 2005 and 2015, American Samoa’s loan balance was significantly greater than bonded debt outstanding for all years except fiscal year 2015. American Samoa’s loan balance consists of both loans from the U.S. government and intragovernmental loans, or loans between the territory’s primary government and component units. Between fiscal years 2005 and 2015, this included 1993 and 1994 Federal Emergency Management Agency community disaster loans totaling $10.2 million and a 1999 Department of the Interior loan in the amount of $18.6 million. 
In 2006 and 2007, the primary government also entered into two loan agreements with the government retirement fund, in the amounts of $10 million and $20 million, in part to finance infrastructure projects. American Samoa’s total public debt outstanding has remained small relative to its economy between fiscal years 2005 and 2015. During this period, American Samoa’s total public debt outstanding as a percentage of GDP was 5.3 percent in fiscal year 2005, reached a low of 4.4 percent in fiscal year 2014, and grew to 10.9 percent in fiscal year 2015. During this same period, bonded debt outstanding as a share of GDP was 1.3 percent in fiscal year 2005, declined to 0.44 percent in fiscal year 2007, and remained at 0 percent between fiscal years 2008 and 2014. The new bond issuance in fiscal year 2015 increased the share to 8.6 percent. (See figure 8.) Total public debt per capita grew from $414 per person in fiscal year 2005 to $1,212.8 per person in fiscal year 2015. (See figure 9.) American Samoa’s general revenue fluctuated, but trended upward between fiscal years 2005 and 2015. American Samoa’s general revenue of $116.5 million in fiscal year 2015 represented a 20 percent increase over its revenue of $97.4 million in fiscal year 2005. Approximately 55 percent of the general revenue earned by American Samoa during this period was comprised of tax revenue, and all of the tax revenue was from income and excise taxes. American Samoa’s total revenue (i.e., general revenue and program revenue combined) also fluctuated but trended upward between fiscal years 2005 and 2015. Its total revenue of $436.4 million in fiscal year 2015 represented a 55 percent increase over its total revenue of $281.8 million in fiscal year 2005. According to territory officials, growth in revenue during this period can be attributed in part to revenue generated by stimulus funding the territory received as part of the American Recovery and Reinvestment Act of 2009. (See figure 10.)
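The debt-burden metrics cited throughout this report are simple ratios. The sketch below uses American Samoa's reported FY2015 debt figure from the text; the GDP and population inputs are approximations back-calculated from the reported shares, not official data.

```python
# Debt-burden metrics for American Samoa, FY2015. Only the debt figure
# comes from the report; GDP and population are back-calculated
# approximations (assumptions), not official statistics.
debt_outstanding = 69.5e6  # total public debt outstanding, FY2015 (from the text)
gdp = 638e6                # approximate GDP implied by the reported 10.9% share
population = 57_300        # approximate population implied by per-capita debt

debt_to_gdp = debt_outstanding / gdp * 100
debt_per_capita = debt_outstanding / population

print(f"Debt as a share of GDP: {debt_to_gdp:.1f}%")  # ~10.9%
print(f"Debt per capita: ${debt_per_capita:,.0f}")    # ~$1,213
```

The same two ratios underlie the per-territory figures (figures 8, 9, 12, 13, 16, 17, 20, and 21) discussed in this section.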
Along with the growth in revenue, American Samoa’s net position for the primary government was consistently positive and generally improving between fiscal years 2005 and 2014. American Samoa’s net position for the primary government generally improved year over year from a positive $217.7 million as of fiscal year end 2005 to a positive $291.9 million as of fiscal year end 2014; it then declined to a positive $245.1 million as of fiscal year end 2015. American Samoa’s net position for the primary government as of fiscal year end 2014 is shown prior to restatement. In fiscal year 2015, American Samoa implemented GASB Statement No. 68 and adjusted its beginning net position by $60.1 million, resulting in a restated net position as of fiscal year end 2014 of a positive $240.8 million. The implementation of GASB Statement No. 68 resulted in the territory recognizing previously unrecognized net pension liabilities and, therefore, a decline in ending net position in the year of recognition. American Samoa’s total net position for the primary government and component units combined was also consistently positive and generally improving between fiscal years 2005 and 2015. It increased from $317.9 million as of fiscal year end 2005 to $450.2 million as of fiscal year end 2015. The territory has previously faced financial management challenges, including failures to meet revenue projections and deficiencies in forecasting expenditures. Territory officials said, however, that they are taking a number of steps to improve forecasting. In early 2015, officials convened a task force in Hawaii to develop a plan to improve the management of American Samoa’s finances. As part of the effort to improve forecasting, the plan requires the treasury and budget departments to meet on a monthly basis to reconcile actual revenues and expenditures and brief the Governor.
If revenues are below projections, the Governor may instruct all government departments to reduce spending by an additional 5-10 percent. In addition, officials told us that the territory is planning to procure a contractor in fiscal year 2017 to help further improve its revenue and spending forecasts. According to territory officials, American Samoa has never issued debt to fund government operating costs and does not intend to do so. Territory officials confirmed that the fiscal year 2015 and 2016 general obligation bonds were issued primarily to fund various infrastructure projects, including relocating airport fuel tanks, constructing an inter-island ferry, and establishing a territorial charter bank. While American Samoa’s level of public debt is relatively low compared to other territories, we found that it faces significant economic vulnerabilities that may hamper its ability to repay that debt. According to territory officials and our prior work, American Samoa’s economy relies heavily on the tuna processing and canning industry. In December 2016, we reported that canneries employed about 14 percent of American Samoa’s workforce in 2014. Moreover, we found that the canneries provided a number of indirect benefits to other industries and the economy in American Samoa. For example, other businesses exist because of the canneries, such as the company that manufactures the cans. Maintenance for the canneries and for the vessels that supply the canneries also has brought business and jobs to the island. Cannery workers spend money at local establishments, such as restaurants and retail stores. Additionally, exported cannery products and delivery of materials to the canneries reduced the shipping cost of bringing other goods to American Samoa. 
We also reported that the tuna canning industry faces a number of challenges. In addition, territory officials expressed concerns about federal policies that may hamper American Samoa’s tuna industry, such as scheduled minimum wage increases that increase labor costs for tuna canning in American Samoa relative to other locations, decreased access to fishing grounds in the Pacific due to environmental regulations, and potential erosion of the territory’s preferential trade status. In October 2016, one of the two companies with canning operations in American Samoa announced that it would indefinitely suspend its operations in the territory, and the other temporarily suspended operations twice during the same year. Changes in American Samoa’s tuna industry have been important determinants of changes in its GDP, and additional disruptions in the industry would reduce revenue and hamper GDP growth, which, if severe enough, could impede the repayment of existing debt. In part because of such challenges, Moody’s Investors Service assigned a noninvestment grade rating to the territory’s bonds in early 2016. According to the rating agency, this downgrade reflected concerns associated with the territory’s small and volatile economy, low income levels, weak financial position, and financial management challenges. Territory officials told us that the Puerto Rico debt crisis has affected their access to favorable rates in capital markets, and said that they currently do not have plans to issue any more bonded debt. CNMI’s total public debt outstanding declined from $251.7 million in fiscal year 2005 to $144.7 million in fiscal year 2015. (See figure 11.) During this time, CNMI’s primary government issued one general obligation bond in the amount of about $100.5 million in fiscal year 2007. This general obligation bond refinanced two prior bonds that were issued in fiscal years 2000 and 2003. Most of CNMI’s bonded debt outstanding is scheduled to mature in 2030 or later.
Between fiscal years 2005 and 2015, CNMI’s total public debt outstanding as a share of GDP grew from 23 percent in fiscal year 2005 to 26 percent in fiscal year 2007, and then declined to 16 percent in fiscal year 2015. Bonded debt outstanding as a share of GDP was 14 percent in both fiscal years 2005 and 2015, but reached 19 percent in fiscal year 2011. (See figure 12.) CNMI’s total public debt outstanding per capita declined from about $4,199 per person in fiscal year 2007 to about $2,776 per person in fiscal year 2015. (See figure 13.) CNMI’s general revenue fluctuated between fiscal years 2005 and 2015. General revenues declined by about 39 percent between fiscal years 2005 and 2011, largely due to the decline in the territory’s garment industry. (See figure 14.) General revenues have steadily increased since fiscal year 2011, primarily as a result of growth in the tourism sector. Data from the Marianas Visitor Authority show that the downward trend in Japanese visitors from 2013 to 2016 was offset by the growth in visitors from China and South Korea. The tourist industry has also been boosted by the introduction of a new casino. In August 2014, the CNMI government entered into a casino license agreement to construct a development project that will include a hotel with a minimum of 2,004 guest rooms and areas for gaming, food, retail, and entertainment, among other things. CNMI’s total revenue (i.e. general revenue and program revenue combined) also fluctuated between fiscal years 2005 and 2015. Total revenue reached a high of $635.7 million in fiscal year 2014 and then declined to $573.8 million in fiscal year 2015, which represented only a one percent increase over the fiscal year 2005 revenue of $567.9 million. While general revenue fluctuated, dipping then rebounding between fiscal years 2005 and 2015, CNMI’s net position for the primary government has been negative and generally trending downward. 
Specifically, CNMI’s net position for the primary government declined from a negative $38.1 million as of fiscal year end 2005 to a negative $215.4 million as of fiscal year end 2015. CNMI’s net position for the primary government has been negative by over $200 million for each fiscal year since 2010, but it showed a slight improvement between fiscal years 2011 and 2013 and in fiscal year 2015. CNMI’s total net position for the primary government and component units combined fluctuated but generally remained stagnant, increasing slightly from $281.6 million as of fiscal year end 2005 to $284.8 million as of fiscal year end 2015. CNMI’s Constitution prohibits public indebtedness for operating expenses of the CNMI government or its political subdivisions. In addition, the territory’s legislature must approve any bond issuances and the value of any bonds issued cannot exceed 10 percent of the assessed value of real property within CNMI. In fiscal year 2007, the primary government of CNMI issued one general obligation bond to refinance two bonds originally issued in 2000 and 2003. Both the 2000 and 2003 bonds were issued to finance various infrastructure improvement projects. The 2003 issuance was also used for a one-time payment to settle land claims for the appropriation of private lands for public use. Component units in CNMI also issue debt. In 2007, the Commonwealth Ports Authority, which is responsible for operating, maintaining, and improving all airports and seaports in CNMI, issued a bond for about $7.2 million. The proceeds of the bond were used in part to pay for improvements to seaport facilities at Saipan Harbor. While CNMI’s economic outlook has improved, with GDP increasing 3 years in a row since 2013, we found that the territory faces growing labor shortages that may affect its ability to repay public debt in the future.
In May 2017, we reported that CNMI’s economy relies heavily on a foreign workforce and foreign workers comprised a majority of the territory’s workforce in 2015. The Consolidated Natural Resources Act of 2008, among other things, established federal control of CNMI immigration beginning in 2009. The act established a transition period with special provisions for foreign visitors, investors, and workers. Specifically, it required the U.S. Department of Homeland Security (DHS) to establish a temporary work permit program for foreign workers and to reduce annually the number of permits issued, reaching zero by the end of the transition period—now set to occur on December 31, 2019. We analyzed the economic effect of removing all permitted foreign workers from CNMI’s economy using the most recent GDP information available from calendar year 2015. Depending on assumptions made, with no permitted workers CNMI’s GDP in 2015 would have hypothetically declined by 26 to 62 percent. Planned reductions in permitted workers could worsen the effect on GDP going forward and hamper the territory’s ability to repay existing debt. CNMI also has significant pension liabilities, but the exact amount of the net pension liability is not included in the territory’s most recent single audit report because the government has not complied with accounting standards that require it to do so. In 2013, a U.S. district court approved a settlement agreement with the territory’s government pension plan, which filed for bankruptcy in 2012. As part of the settlement, CNMI agreed to make minimum annual payments to the fund to allow members to receive 75 percent of their full benefits. In addition to the settlement plan, CNMI appropriated $25 million of casino license fees to fund the restoration of the 25 percent reduction of the retirees’ and beneficiaries’ pensions, among other purposes. CNMI made one payment of $27 million and another payment of $19.4 million to the fund in fiscal year 2015.
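The hypothetical GDP-decline range above is a straightforward scenario calculation. The sketch below applies the reported 26 to 62 percent range to an illustrative base-year GDP; the base figure is a placeholder, not CNMI's official 2015 GDP.

```python
# Applying the reported 26-62 percent hypothetical GDP-decline range.
# base_gdp is a placeholder value (assumption), not CNMI's official 2015 GDP.
base_gdp = 900e6
low_decline, high_decline = 0.26, 0.62  # range reported in the text

best_case = base_gdp * (1 - low_decline)    # smallest assumed decline
worst_case = base_gdp * (1 - high_decline)  # largest assumed decline
print(f"GDP, best case:  ${best_case / 1e6:,.0f} million")   # $666 million
print(f"GDP, worst case: ${worst_case / 1e6:,.0f} million")  # $342 million
```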
Territory officials told us they are planning to market a $45 million general obligation bond in 2017 to provide additional financing for the pension fund. They added, however, that they currently have no plans to issue debt for other purposes, such as infrastructure projects, because of uncertainty in the labor market. In 2012, Moody’s Investors Service affirmed CNMI’s general obligation bond rating as non-investment grade, after having downgraded it in 2009. According to the rating agency, the 2012 rating was due to losses in the territory’s garment industry, consistent operating deficits, and increasing unfunded pension liabilities. Guam’s total public debt outstanding increased from almost $1 billion in fiscal year 2005 to $2.5 billion in fiscal year 2015, with the majority of the increase occurring between fiscal years 2008 and 2015, when total outstanding public debt grew 13 percent on average per year. (See figure 15.) In fiscal year 2015, 54 percent of Guam’s total public debt outstanding was issued by component units. Territory officials told us component unit debt is backed solely by the revenue component units generate and cannot be used to service debt issued by the primary government. The majority of Guam’s total public debt is in the form of bonds. Bonded debt outstanding comprised between 93 and 97 percent of total public debt outstanding from fiscal years 2005 through 2015. Most of Guam’s bonded debt outstanding will mature in 2027 or afterwards. The remainder of Guam’s public debt outstanding between fiscal years 2005 and 2015 was primarily comprised of notes and loans, including loans from the federal government. Between fiscal years 2005 and 2015, Guam’s total public debt outstanding as a share of GDP increased from 24 percent to 44 percent, with bonded debt outstanding growing similarly from 22 percent of GDP to 42 percent. (See figure 16.)
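Figures like Guam's reported 13 percent average annual growth are compound annual growth rates. A minimal sketch, assuming a FY2008 starting balance back-calculated from the reported endpoint:

```python
# Compound annual growth rate behind "grew 13 percent on average per year."
# The FY2008 starting balance is an assumption back-calculated from the
# reported FY2015 endpoint; only the endpoint comes from the text.
start_debt = 1.06e9  # assumed FY2008 total public debt outstanding
end_debt = 2.5e9     # FY2015 total public debt outstanding (from the text)
years = 7            # FY2008 through FY2015

avg_annual_growth = (end_debt / start_debt) ** (1 / years) - 1
print(f"Average annual growth: {avg_annual_growth:.1%}")  # 13.0%
```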
Both total public debt and bonded public debt outstanding per capita more than doubled between fiscal years 2005 and 2015. Total public debt outstanding per capita rose from about $6,270 per person to $15,323 per person, while bonded public debt outstanding increased from $5,810 per person to $14,759 per person. (See figure 17.) Guam’s general revenue grew by 6 percent on average, per year, between fiscal years 2005 and 2015, from $573.2 million to $862.7 million. General revenue declined sharply in fiscal year 2006, recovered in fiscal year 2007, and then increased steadily through fiscal year 2015. According to territory officials, this increase in revenue can largely be attributed to economic development, with significant growth in tourism and new construction. A 2015 report to Guam’s bondholders noted that there was an increase in visitors to the island each month between 2014 and 2015. The report attributed this increase to several factors, such as the expanded number of airline routes to Guam, the favorable exchange rate for Asian visitors, and the relative improvement of the overall global economy. Guam’s total revenue, or general revenue and program revenue combined, also grew by 5 percent on average, per year, between fiscal years 2005 and 2015, from $1.4 billion to $2.2 billion. (See figure 18.) To project revenues, Guam officials use a model in which statistical weights, derived from historical collections data for prior fiscal years, are calculated and assigned to each revenue source. While revenue generally grew, Guam’s net position for the primary government fluctuated significantly between fiscal years 2005 and 2015. Guam’s net position for the primary government was negative in most years after fiscal year end 2005 and generally trended downward. Specifically, it declined from a positive $79.8 million as of fiscal year end 2005 to a negative $194.2 million as of fiscal year end 2012.
Net position improved significantly and was positive in fiscal years 2013 and 2014, but then declined from a positive $174.4 million as of fiscal year end 2014 to a 10-year low of a negative $670.9 million as of fiscal year end 2015. Guam’s net position for the primary government as of fiscal year end 2014 is shown prior to restatement. In fiscal year 2015, Guam implemented GASB Statement No. 68 and adjusted its beginning net position by $815.6 million, resulting in a restated net position as of fiscal year end 2014 of a negative $641.2 million. The implementation of GASB Statement No. 68 resulted in the territory recognizing previously unrecognized net pension liabilities and, therefore, a decline in ending net position in the year of recognition. Guam’s total net position for the primary government and component units combined also fluctuated significantly. Specifically, Guam’s total net position increased from a positive $788.8 million as of fiscal year end 2012 to a 10-year high of positive $1.2 billion as of fiscal year end 2014. It declined to a 10-year low of positive $47.3 million as of fiscal year end 2015 due to the implementation of GASB Statement No. 68. According to territory officials, Guam’s bonded debt outstanding has primarily been used to comply with federal requirements and court orders. Guam has issued debt in several cases when compelled to meet federal and territorial requirements. For example, since Guam adheres to the mirror tax code, the territory is required to fund the Earned Income Tax Credit (EITC) and is not reimbursed for this by the federal government. In June 2004, the territory agreed to pay $60 million over 9 years in settlement of unpaid EITC refunds from 1996, and in September 2006, the territory reached a new settlement replacing the 2004 agreement in which it agreed to pay up to $90 million. 
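The GASB Statement No. 68 restatements described for each territory are one-time adjustments to beginning net position. Guam's FY2014 figures from the text reconcile as follows (amounts in millions):

```python
# GASB Statement No. 68 restatement arithmetic for Guam, FY2014
# (amounts in millions, taken from the text).
net_position_fy2014 = 174.4  # as originally reported
gasb68_adjustment = -815.6   # newly recognized net pension liability

restated_fy2014 = net_position_fy2014 + gasb68_adjustment
print(f"Restated FY2014 net position: {restated_fy2014:.1f} million")  # -641.2 million
```

The same arithmetic applies to USVI ($-1.5 billion less the $2.0 billion adjustment yields the restated $-3.5 billion).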
Moreover, in 2006, the Superior Court of Guam held that a territorial statutory provision required the retirement fund for government employees to pay past due annual lump sum cost of living allowance (COLA) payments plus interest to eligible retirees and survivors. This resulted in an award of $123.5 million plus interest to those individuals. In response, Guam issued a general obligation bond in 2007 in the amount of $151.9 million to finance these past due tax refunds and outstanding COLA settlement payments, as well as to refinance prior debt and help fund infrastructure projects. In 2009, it issued another general obligation bond in the amount of $271 million for similar purposes. According to a Guam government report, the largest increase in the territory’s indebtedness occurred between fiscal year 2008 and fiscal year 2009, and was due in part to issuing bonds to pay for past due tax refunds and unpaid COLA expenses. In Guam’s 2017 draft debt management policy, the Governor cited the administration’s commitment to ensuring that tax refunds will be paid on time and no later than 6 months after filing. In addition, in February 2004 the U.S. Environmental Protection Agency (EPA) and the Department of Justice filed a consent decree in the U.S. District Court of Guam. The consent decree set forth the settlement terms agreed to by the federal government and Guam settling a lawsuit alleging Guam violated the Clean Water Act. The consent decree included deadlines for opening a new landfill and adopting a dump closure plan. In response to a 2009 District Court order that Guam comply with the terms of the consent order, the territory chose to issue a $202.4 million limited obligation bond to fund closing the Ordot dump and constructing a new landfill to meet the terms of the settlement agreement. Guam also issued revenue bonds between fiscal years 2005 and 2015 to finance infrastructure projects.
For example, in 2011 a revenue bond backed by hotel occupancy taxes was issued in the amount of $90.6 million in part to fund the construction of a museum on the island and other projects to benefit Guam’s tourism industry. In addition, in 2013 Guam’s Airport Authority issued $247 million in bonds that were used, in part, to fund airport enhancements. As established under its Organic Act, Guam has the authority to issue bonds, but Guam’s public indebtedness may not exceed 10 percent of the aggregate tax valuation of property in the territory; tax valuation of property is currently set at 90 percent of appraised value of property. The limit applies to both general obligation and limited obligation debt. In fiscal year 2007, to increase borrowing capacity to address a $524 million deficit, the government changed the percentage of appraised value which constitutes the assessed value. The debt ceiling still limits the amount of public debt Guam can issue to 10 percent of the aggregate tax valuation of property. However, in September 2007, Guam amended its statutory definition of assessed value from 35 percent of appraised property values to 70 percent. In May 2009, the definition of tax valuation of property was again amended to 90 percent of appraised property values. This second increase was imposed so Guam could issue bonds to comply with the requirement to close the Ordot dump and open a new landfill. In fiscal year 2012, the government increased borrowing capacity a third time by amending the definition of assessed value to 100 percent of appraised value in order to fund past due tax refunds. In fiscal year 2016, the statutory definition of assessed value was decreased back down to 90 percent of appraised value.
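The debt-ceiling mechanics described above, a 10 percent cap applied to a statutory percentage of appraised property value, can be sketched as follows; the aggregate appraised value used here is purely illustrative, not Guam's actual property base.

```python
# Guam's debt ceiling: 10 percent of aggregate tax valuation, where tax
# valuation is a statutory share of appraised value. The appraised value
# below is an illustrative assumption, not Guam's actual property base.
appraised_value = 20e9  # aggregate appraised property value (assumption)

# Statutory definitions of assessed value over time, per the text:
for share in (0.35, 0.70, 0.90, 1.00):
    tax_valuation = appraised_value * share
    ceiling = 0.10 * tax_valuation
    print(f"At {share:.0%} of appraised value, debt ceiling = ${ceiling / 1e9:.1f} billion")
```

The loop makes the mechanism visible: raising the statutory share of appraised value raises borrowing capacity even though the 10 percent ceiling itself never changes.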
Despite economic growth, we found that Guam faces large fiscal risks related to unfunded pension liabilities and other post-employment benefits (OPEB) that, if unaddressed, may hamper its ability to repay existing debt and increase its need to issue debt. A number of factors may contribute to continued economic growth in Guam. Specifically, according to a government report, visitor arrivals to Guam are projected to continue increasing and higher room rates and occupancy are leading to continued hotel development. Moreover, the Marine Corps has plans to consolidate bases in Okinawa, Japan, and relocate 4,100 Marines to Guam. The Department of Defense (DOD) expects this relocation to Guam to occur between fiscal years 2022 and 2026. Officials from Guam predict that the military buildup will result in significant additional investment in Guam’s economy. In July 2016, DOD agreed to give Guam approximately $55.6 million in grants to fund civilian water and wastewater projects linked to the military buildup; additional investments in the power infrastructure will also be funded by DOD. A 2014 study conducted by the Department of the Navy on the effect of the military buildup on Guam’s economy concluded that it would increase civilian labor force demand, increase civilian labor force income, and increase tax revenues. While the rating agency Standard & Poor’s maintained Guam’s debt at investment grade as of 2017, it expressed concern about Guam’s extremely high debt burden and vulnerability to economic changes in its tourism and military industries. In addition, Guam has large pension and OPEB liabilities that may stress current debt service payment arrangements if anticipated savings from changes to the government pension system are not realized. In fiscal year 2015, pension liabilities were $1.2 billion and OPEB liabilities were $2 billion, 22 and 37 percent of GDP, respectively.
Territory officials told us that they have taken a variety of steps to address their unfunded pension and OPEB liabilities. In 1995, the government closed the defined benefit plan to new members with all new employees participating in a defined contribution plan, which resulted in a decrease in accrued liabilities. To address insufficient savings by members in the defined contribution plan, the legislature created two new retirement plans in 2016. The government estimates that the new retirement plans could add an additional $173 million to the pension fund. Territory officials said the government is meeting its actuarial contributions on an annual basis and is on track to pay off the existing unfunded pension liability in approximately 15 years. Between fiscal years 2005 and 2015, USVI’s total public debt outstanding grew by 84 percent, from $1.4 billion to $2.6 billion. (See figure 19.) The sharpest increase was between fiscal years 2008 and 2010. During this period, total public debt outstanding increased by about $800 million, and almost all of USVI’s public debt was in the form of bonds. Bonds issued by USVI’s primary government are backed either by (1) both a general obligation of the government and a gross receipts tax, or (2) an excise tax on rum produced in USVI. Bonds issued by component units are backed by their revenues. Approximately half of USVI’s bonded debt is backed by revenues generated from the excise tax placed on rum imports to the U.S. mainland. Both the primary government and component units issued notes and took out loans during this period. Most of USVI’s bonded debt outstanding is scheduled to mature in 2027 or afterward. USVI’s total public debt outstanding as a percentage of GDP doubled between fiscal years 2005 and 2015, growing from 34 percent to 72 percent.
The steepest increases were between 2008 and 2010, when total public debt outstanding as a percentage of GDP increased by 19 percentage points, and between 2011 and 2014, when it increased by 16 percentage points. Bonded debt outstanding was 63 percent of GDP in fiscal year 2015. (See figure 20.) Total public debt outstanding per capita also increased during this period. It ranged from about $13,063 per person in fiscal year 2005 to about $25,739 per person in fiscal year 2015. (See figure 21.) USVI’s general revenue showed almost no growth in the 10-year period between fiscal years 2005 and 2015. USVI’s general revenue declined from fiscal years 2008 to 2009 due to the 2008 recession and operating losses at the Hovensa oil refinery, and rebounded in fiscal year 2010 as the economy recovered. General revenue decreased again from fiscal year 2010 to 2011. Between fiscal years 2011 and 2014 revenue increased again. Despite the increase, the fiscal year 2015 general revenue of $919.4 million was only about $43 million greater than that collected 10 years prior. In contrast, USVI’s total revenue (i.e., general revenue and program revenue combined) grew slightly by 2 percent on average, per year, between fiscal years 2005 and 2015, from $1.6 billion to $1.9 billion. (See figure 22.) USVI has a statutory requirement that a team, composed of senior executives and legislative officials, meet at least twice a year to establish an official economic forecast of the territorial economy, including estimates of the following year’s revenue. Territory officials acknowledged that in recent years actual revenues have been less than had been estimated, citing both adverse economic conditions and litigation that had blocked the collection of property taxes for several years. These officials said that a new estimation methodology has been devised which uses a weighted average of the prior 5 years of actual revenue.
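The weighted-average estimation approach USVI officials described can be sketched as below. The revenue series and the weighting scheme are illustrative assumptions; only the FY2015 figure appears in the text, and the territory's actual weights are not specified.

```python
# Weighted average of the prior five years of actual general revenue,
# as USVI officials described. The revenue series and weights are
# illustrative assumptions; only the FY2015 figure is from the text.
actual_revenue = [850.0, 870.0, 890.0, 905.0, 919.4]  # prior 5 fiscal years, $ millions
weights = [0.10, 0.15, 0.20, 0.25, 0.30]              # heavier weight on recent years

assert abs(sum(weights) - 1.0) < 1e-9  # weights should sum to 1
forecast = sum(r * w for r, w in zip(actual_revenue, weights))
print(f"Next-year general revenue estimate: ${forecast:.1f} million")  # $895.6 million
```

Weighting recent years more heavily makes the estimate track the latest collections while smoothing out single-year swings of the kind described above.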
USVI’s net position for the primary government declined year over year from a negative $215.0 million as of fiscal year end 2008 to a negative $1.5 billion as of fiscal year end 2014, and continued to decline to a negative $3.7 billion as of fiscal year end 2015. USVI’s net position for the primary government as of fiscal year end 2014 is shown prior to restatement implementing GASB Statement No. 68. In fiscal year 2015, USVI implemented GASB Statement No. 68 and adjusted its beginning net position by $2.0 billion, resulting in a restated net position as of fiscal year end 2014 of a negative $3.5 billion. The implementation of GASB Statement No. 68 resulted in the territory recognizing previously unrecognized net pension liabilities and, therefore, a decline in ending net position in the year of recognition. USVI’s declining net position for its primary government reflects its deteriorating financial position. USVI’s total net position for the primary government and component units combined increased between fiscal year end 2005 and 2007; it then declined year over year from positive $490.9 million as of fiscal year end 2008 to negative $3.6 billion as of fiscal year end 2015. More than a third of USVI’s current bonded debt outstanding as of fiscal year 2015 was issued to fund government operating costs. Before that time, bonded debt issued on behalf of the primary government was used to refinance earlier bond issues; to fund infrastructure projects such as improvements to schools, public safety facilities, and transportation infrastructure; or to assist privately-owned industrial enterprises, specifically construction at the Cruzan and Diageo rum distilleries and payment of a portion of the costs of sewage and solid waste disposal at the Hovensa oil refinery. In the period following the recession of 2008, revenues declined and there were continuing demands for spending. 
In response, USVI issued debt for the purpose of financing regular government operating expenses. Between July 2010 and December 2014, USVI issued almost $850 million in bonds for this purpose with maturities ranging between 1 and 20 years. According to territory officials, several factors contributed to USVI’s increasing reliance on debt to fund government operations, including the recession of 2008, the 2012 closure of the Hovensa oil refinery, a decline in USVI’s share of worldwide rum sales, and a decline in visits from cruise ship passengers. According to a senior government official, the closure of the Hovensa refinery was particularly detrimental to the territory’s economy and resulted in the loss of 2,000 jobs on St. Croix and a significant decrease in revenue. As of April 2017, USVI’s unemployment rate was 10.3 percent. USVI officials cited several federal requirements that contributed to USVI’s need to issue debt. Because USVI’s tax system mirrors the federal tax code, officials noted that USVI is required to pay the earned income tax credit (EITC) to its residents but is not reimbursed for this by the federal government. In contrast, state governments do not pay EITC because it is a federal benefit administered through the federal tax code. EPA directives for improving landfills and water projects and federal banking regulations that treat branches of U.S. banks placed in USVI as non-U.S. banks—thereby discouraging large banks from having branches in USVI—were also cited as reasons that USVI has issued debt. USVI officials expressed confidence in the territory’s ability to repay public debt, but we found that large fiscal risks and exclusion from capital markets may hamper its ability to do so. USVI’s bonds are backed by the gross receipts tax on some individuals and entities doing business in USVI and by excise tax revenues collected by the federal government and remitted to USVI as required by statute. 
Officials said that revenues from the gross receipts tax and excise tax rebates—from which debt service payments are made—are monitored on a month-by-month basis. Also, officials cited as a protection against default the “lockbox” provisions that USVI has had contractually for some time and that were written into its statutes in 2016. According to these provisions, gross receipts tax and excise rebate revenue go directly to an escrow account in a New York bank, and the escrow agent makes debt service payments twice a year from the account; a year’s worth of payments is held in reserve at all times. USVI officials expressed confidence that these provisions make it difficult for USVI to default on its debt payments. However, in a recent statement, Moody’s rating service said that these security provisions have not been tested in a stress scenario where the government faces a lack of funds to provide basic services. This observation was part of a statement issued by Moody’s in late January 2017 in which it announced it had downgraded USVI’s matching fund bonds (those backed by excise tax rebates) to noninvestment grade. Other rating agencies expressed similar concerns. For example, Standard & Poor’s cited 1) the government’s fiscal distress, as evidenced by its significant structural imbalance and continued reliance on deficit financing to fund operations; 2) revenue backed bond issues that have exhibited either declining or flat growth absent tax rate increases and are levied on a limited and concentrated base; 3) adequate, but substantially reduced, debt service coverage; and 4) a limited economy, concentrated in rum production, tourism, and government. In late January 2017, USVI cancelled a new bond issuance it was attempting to market to provide additional financing for general government operations. 
The bond issuance was authorized by the USVI legislature in 2016, but according to a senior bank official involved in underwriting USVI bonds, delays in bringing the issuance to market, and the legislature’s delay in enacting so-called “sin taxes” on items such as beer, cigarettes, and liquor, reduced the chances of successfully marketing the bond issue to investors. By the time USVI made an effort to market the bonds in late 2016 and January 2017, the Puerto Rico debt crisis had increased investors’ concerns about USVI’s debt as well. The rating downgrades of existing USVI debt, while not the decisive factor according to the bank official, did reinforce existing skepticism on the part of potential investors. Ultimately, the early 2017 bond issuance was not adequately subscribed and the offer failed. USVI effectively lost market access to new debt even at high interest rates. In September 2016, the administration released its 5-year financial plan. The two major features of this plan were a reduction in government expenditures by limiting hiring and reducing non-personnel costs, and a proposal for increasing revenue through taxes on beer, rum, wine, brandy, sugar-laden carbonated beverages, and cigarettes, among other revenue generating measures. The legislature passed the tax increase bill, with some modification of the Governor’s proposal, in early March and the Governor signed it into law on March 22, 2017. In the 5-year financial plan, the administration said that adopting austerity and tax measures would eliminate future deficits, which otherwise would amount to more than $130 million for each fiscal year between 2017 and 2021. A senior USVI official expressed a belief that the level of consumption of cigarettes, for example, will remain at pretax levels despite the higher cost. However, due to elasticity of demand, an increase in the price of cigarettes could decrease cigarette consumption and therefore revenues. 
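The elasticity point above can be made concrete: if the official's assumption of pretax consumption levels is too optimistic, actual tax collections will fall short of the projection. All figures and elasticity values below are illustrative assumptions, not data from the report.

```python
def revenue_after_tax_increase(base_units, base_tax, new_tax, base_price, elasticity):
    """Approximate cigarette tax revenue after a tax increase, using a
    constant price elasticity of demand. All inputs are hypothetical."""
    new_price = base_price + (new_tax - base_tax)
    pct_price_change = (new_price - base_price) / base_price
    # Elasticity is negative: quantity demanded falls as price rises.
    new_units = base_units * (1 + elasticity * pct_price_change)
    return new_units * new_tax

# Assumed baseline: 1,000,000 packs sold, $1 tax per pack, $5 retail price.
# The official's assumption (consumption unchanged) implies $2,000,000.
projected = 1_000_000 * 2.0
# With even modestly elastic demand (-0.4), revenue falls short of that.
moderate = revenue_after_tax_increase(1_000_000, 1.0, 2.0, 5.0, -0.4)
# With highly elastic demand (-1.5), the shortfall is larger still.
elastic = revenue_after_tax_increase(1_000_000, 1.0, 2.0, 5.0, -1.5)
```

In this sketch the tax increase still raises revenue above the pretax level, but by less than projected; with sufficiently elastic demand the increase could reduce revenue outright, which is the risk the report notes.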
If the tax increases do not produce the anticipated level of revenue and USVI is not able to regain access to capital markets, the shortfall will place even more stress on the debt service arrangements currently in effect. Moreover, the recent measures do not address the fiscal risk presented by unfunded pension liabilities and OPEB for government employees. USVI reported an unfunded pension liability of over $3 billion, which was 83 percent of GDP in fiscal year 2015. According to an independent consulting firm’s August 2016 report prepared for the USVI Government Employees Retirement System, the retirement fund will become insolvent in 2023 unless financial resources are added and benefit levels are adjusted. Territory officials cited several reasons for the large unfunded pension and OPEB liabilities. These include recent legislation that resulted in more retirees eligible for pensions and a decline in the active USVI government workforce that narrowed the ratio of active workers to retirees from 6-to-1 in fiscal year 1982 to almost 1-to-1 in fiscal year 2015. In addition, officials told us that the most significant cause for the current condition of the retirement system is the primary government making contributions to the system below the amounts required by law. Some measures have been taken to address the retirement fund’s impending insolvency, and other steps have been recommended. According to territory officials, USVI law changed in 2005, resulting in increased required pension contributions from all newly hired employees except for judges and legislators. 
In 2013, a Pension Reform Task Force (Task Force) recommended legislation that would 1) increase government and employee contributions towards pension benefits, 2) raise contribution rates for senators and judges, 3) reduce current retiree benefits by 10 percent, 4) increase the early retirement age from 50 to 55 and the regular retirement age from 60 to 65, 5) limit cost of living increases, and 6) change the formula used to calculate benefits. In October 2015, the Legislature enacted and the Governor signed legislation that raised retirement ages for some employees, changed the basis for determining pension levels to career earnings, and allowed the retirement system to invest funds in lower-rated securities. This did not, however, address most of the Task Force recommendations. Territory officials told us that the administration will put forward additional pension reform proposals in the near future; however, it remains unclear what those reforms will entail and when they will take effect. Moreover, territory officials told us that since 2011 the government has paid less than half of actual post-employment benefit costs, leaving an unpaid current obligation of $357 million as of fiscal year 2015. The unfunded liability for post-employment benefits, projecting anticipated future costs, was most recently calculated in October 2013; at that time it was just over $1 billion. USVI’s pension and OPEB obligations are already contributing to the territory’s debt burden, and will likely continue to do so at an increasing rate. If unaddressed, they may place additional stress on the debt service arrangements currently in effect and hamper the territory’s ability to repay debt. We provided a draft of this report for review to the U.S. Departments of the Interior and Treasury. 
We also provided, to the governments of Puerto Rico, American Samoa, the Commonwealth of the Northern Mariana Islands (CNMI), Guam, and the United States Virgin Islands (USVI), portions of the draft that were relevant to them. We received written comments from each of the five territories’ governments, which are reprinted in appendixes II, III, IV, V, and VI, respectively. We also received technical comments from American Samoa, Guam, USVI, and Treasury, which we incorporated as appropriate. We did not receive any comments from the Department of the Interior. In the letter from the Governor of Guam, the territory raised some issues, which we subsequently discussed in depth with territory officials. Following these discussions, we made modifications to the draft to provide additional context by broadening our coverage of revenue for Guam and for other territories, as applicable. We provide additional information about changes that we made or did not make at the end of Appendix V. We will provide copies of this report to the Governor of each territory and the U.S. Secretaries of the Interior and Treasury. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have questions about this report, please contact Susan J. Irving at (202) 512-6806, or David Gootnick at (202) 512-3149. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Our objectives were, for each U.S. territory—Puerto Rico, American Samoa, the Commonwealth of the Northern Mariana Islands (CNMI), Guam and the U.S. 
Virgin Islands (USVI)—to describe: (1) trends in public debt and its composition between fiscal years 2005 and 2015, (2) trends in revenue and its composition between fiscal years 2005 and 2015, (3) the major reported drivers of the territory’s public debt, and (4) what is known about the ability of the territory to repay public debt. For the purposes of this report, total debt held by the public (public debt) refers to the sum of bonds payable and other debt payable as described in the audited financial statements included within the territories’ single audit reporting packages, hereinafter referred to as the single audit reports. Bonds payable are marketable bonded debt securities issued by territorial governments or their component units and held by investors outside those governments. Other debt payable may include marketable notes issued by territorial governments and held by investors outside those governments; non-marketable intragovernmental notes; notes held by local banks; federal loans; intragovernmental loans; and loans issued by local banks. Pension liabilities and other post-employment benefits (OPEB) are not included in the definition of total public debt but are considered and discussed in the sections of the report that describe the territories’ ability to repay their public debt. To describe trends in public debt and its composition for each territory, we reviewed the territories’ single audit reports. These single audits are conducted each year by independent accounting firms in accordance with government accounting standards. We obtained single audits for American Samoa, CNMI, Guam, and USVI for fiscal years 2005 through 2015. We also obtained and analyzed consolidated audited financial statements for Puerto Rico from the Commonwealth of Puerto Rico’s Treasury Department website for fiscal years 2005 through 2014. 
For each territory, we reviewed the independent auditor’s report corresponding to each single audit and noted the type of opinion that was expressed on the financial statements and accompanying note disclosures. With the exception of Puerto Rico, each of the territories received modified opinions from auditors on one or more of the single audit reports included in our analysis. We reviewed each of these opinions and determined that despite the modified opinions the data we obtained from each of the single audit reports were reliable for the purpose of describing trends in debt and revenue and their composition for the fiscal years included in our analysis. For each territory, we extracted information on public debt—specifically bonds, loans, and notes for both the primary government and component units—for each fiscal year and recorded the data on spreadsheets, which were then independently verified by other analysts. For American Samoa, CNMI, Guam, and USVI, we calculated debt per capita and debt as a percentage of nominal Gross Domestic Product (GDP) using nominal GDP and population data from the U.S. Department of Commerce’s Bureau of Economic Analysis. For Puerto Rico, we obtained data on Gross National Product (GNP) and nominal GDP from the Commonwealth of Puerto Rico Office of the Governor’s Planning Board and data on population from the U.S. Census Bureau. To identify trends in revenue and its composition for each territory, we obtained and recorded information from the single audit reports on general revenues. All tax revenues, including tax revenues that are dedicated to particular purposes, are reported in general revenues. Tax revenues represent the largest component of general revenues and include both derived tax revenues (resulting from assessments imposed on exchange transactions, such as income taxes and sales taxes) and imposed nonexchange revenues (resulting from assessments imposed on non-exchange transactions, such as property taxes and fines). 
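The debt per capita and debt-to-GDP measures described above are simple ratios. A minimal sketch follows, using USVI's reported fiscal year 2015 figures (total public debt of roughly $2.6 billion, debt-to-GDP of 72 percent, debt per capita of about $25,739); the GDP and population inputs are back-of-the-envelope values implied by those ratios, not figures from the report.

```python
def debt_to_gdp(total_debt, nominal_gdp):
    """Total public debt outstanding as a percentage of nominal GDP."""
    return 100.0 * total_debt / nominal_gdp

def debt_per_capita(total_debt, population):
    """Total public debt outstanding per person."""
    return total_debt / population

# Approximate USVI fiscal year 2015 inputs (assumed, for illustration):
total_debt = 2.6e9     # dollars, from the report
nominal_gdp = 3.6e9    # dollars; implies a ratio near 72 percent
population = 101_000   # implies roughly $25,700 per person
```

The same two functions apply to every territory in the report; only the data sources differ (Bureau of Economic Analysis for four territories, the Puerto Rico Planning Board and Census Bureau for Puerto Rico).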
General revenues also include other forms of revenue, such as unrestricted aid from other governments and investment earnings. Our analysis primarily focused on trends in general revenues because the territories’ public debt is either explicitly or implicitly backed by general revenues. We also included total revenue—general revenues and program revenues combined—in our analysis because it reflects revenue generated by the territories’ component units and could be used to service debt payments. In addition to general revenue levels, another measure of fiscal health is the net position for primary government activities, which represents the difference between the primary government’s assets (including the deferred outflows of resources) and the primary government’s liabilities (including the deferred inflows of resources). In other words, the net position for primary government activities reflects what the primary government would have left after satisfying its liabilities. A negative net position means that the primary government has more liabilities than assets. A decline in net position may be indicative of a deteriorating financial position. While our analysis primarily focuses on trends in the net position for the primary government, we also include certain information on trends in the total net position for the primary government and component units combined. To determine the major reported drivers of public debt and what is known about the territories’ ability to repay this debt, we interviewed officials from the territories’ governments, including officials from the Governors’ offices, departments of finance or treasury, and the agency responsible for issuing and marketing bonded debt. We also spoke to officials in territorial public audit offices. In addition, we interviewed representatives of the three rating agencies that provide credit ratings for the territories’ securities: Fitch, Moody’s, and Standard and Poor’s. 
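The net position measure described above, assets (including deferred outflows of resources) minus liabilities (including deferred inflows of resources), can be sketched as a one-line calculation. The component amounts below are hypothetical, chosen only so the result echoes the sign and scale of the figures reported for USVI.

```python
def net_position(assets, deferred_outflows, liabilities, deferred_inflows):
    """Net position for primary government activities: assets plus
    deferred outflows of resources, minus liabilities plus deferred
    inflows of resources. A negative result means the government has
    more liabilities than assets."""
    return (assets + deferred_outflows) - (liabilities + deferred_inflows)

# Hypothetical primary-government balances, in millions of dollars.
example = net_position(assets=2_500, deferred_outflows=300,
                       liabilities=6_200, deferred_inflows=300)
# A negative result like this one indicates liabilities exceed assets.
```

Tracked year over year, a steadily falling value of this measure is what the report treats as evidence of a deteriorating financial position.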
In addition, to determine what is known about the territories’ ability to repay public debt we analyzed common factors—identified through prior work, documents, and interviews with the three rating agencies—that indicate territories’ potential vulnerability to debt crises. These factors included 1) the extent to which territories consistently issued debt to fund general government operations, 2) the extent to which territories’ economies were vulnerable to shocks due to a heavy dependence on a single or limited industry, and 3) the extent to which territories faced large fiscal risks such as pension liabilities. We also interviewed officials from the Department of the Interior’s Office of Insular Affairs, which provides grant aid and technical assistance and support to the territories, and the Pacific and Virgin Islands Training Initiatives, which provides training and technical assistance on fiscal management to the Pacific territories and USVI, and directs the preparation of an annual report on the fiscal condition of these territories. In addition, we spoke with subject matter experts on territorial debt, officials from an investment bank involved in underwriting the territories’ bonds, and officials from the three rating agencies that rate the marketability of the territories’ bonds. We obtained and reviewed information on territorial bond issuances from fiscal years 2005 through 2015 from the Electronic Municipal Market Access (EMMA) database of the Municipal Securities Rulemaking Board, the primary regulator of the municipal securities market. We reviewed information from EMMA on bonds issued by the territories from fiscal years 2005 through 2015, including memoranda of offering for individual bond issuances. We conducted this performance audit from September 2016 to October 2017 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained for the purpose of addressing our audit objectives provides a reasonable basis for our findings and conclusions. The following are GAO’s comments on Guam’s letter that supplement the comments in the text. 1. Our responses to Guam’s technical comments are not corrections. After reviewing Guam’s comments, we expanded the information provided on revenue and net position for all 5 territories. For example, for Guam, we included on pages 41 and 42 of this report, additional information on revenue where we combine primary government revenue and component unit revenue. 2. Since our objective was to provide the most comprehensive metric of total public debt, it would have been incorrect for us to exclude public enterprise and revenue bond debt in our measure. 3. We do not compare the relative public debt burdens of the territories in this report. Further, pension liabilities are not included in our definition of public debt. Our definition of total public debt does include component unit debt, which Guam excludes from the calculations presented in its response. 4. On pages 41 and 42 of this report we include both a measure of primary government revenue, and a measure of primary government revenue and component unit revenue combined; an “apples-to- apples” comparison can be made to our total public debt figure, which includes component unit debt. 5. Our calculation of total public debt outstanding for Guam is the total of bonds payable and notes payable, both the current and noncurrent portions, and other debt as defined on page 8 of this report. 
Guam’s calculation of total public debt outstanding as shown in the table is all noncurrent liabilities except the net pension liability and results in a higher amount for fiscal year 2015 than our calculation. For bonds payable, our calculation includes both the current and noncurrent portions of bonds payable. Guam’s calculation of bonds payable as shown in the table only includes the noncurrent portion and results in a lower amount for fiscal year 2015 than our calculation. As a result of these differences, our calculation of bonded debt outstanding as a percentage of total public debt outstanding for fiscal year 2015 is higher than Guam’s calculation. 6. As noted on page 42 of this report, while revenue generally grew, Guam’s net position for the primary government fluctuated significantly between fiscal years 2005 and 2015. Since fiscal year end 2006, Guam’s net position for the primary government has been negative and trending downward. Guam’s total net position for the primary government and component units combined also fluctuated significantly. On pages 41 and 42 of this report, we explicitly note the increase in revenue; however, in the long term, significant financial risks may outweigh any given year’s revenue increase. 7. Based on our methodology, which includes component unit debt, Guam’s total public debt outstanding was $2.5 billion for fiscal year 2015. 8. We used total public debt outstanding, not solely tax-supported debt, to calculate the debt-to-GDP ratio for all 5 territories. As reported on page 39 of this report, Guam’s debt to GDP ratio is 44 percent for total public debt and 42 percent for bonded debt for fiscal year 2015. We do not rank the U.S. territories in this report. 9. The per capita amounts presented in the report are based on debt amounts from Guam’s fiscal year 2015 audit as reported in the single audit report. However, the debt amounts and population figure shown in the table differ from those used in our calculations. 
Total public debt and bonded public debt outstanding used in our per capita calculations are calculated as discussed in comment 5 above, which differ by about $7 million from the amounts cited in the table. In addition, the population figure used in our per capita calculations is on a fiscal year basis, which results in 161,500 for fiscal year 2015. 10. Pension liabilities are not included in our definition of public debt. The debt per capita numbers that we present in this report are based on total public debt. For Guam that figure for fiscal year 2015 was $2.5 billion. 11. We disagree with Guam’s comment that the presentation in this report is negative. The final section in the discussion of Guam notes both the elements that may contribute to continued economic growth in Guam and the vulnerabilities and risks to the future: a high total debt burden and vulnerability to economic changes in its tourism and military industries. In addition, we note that Guam has large pension and other post-employment benefits liabilities that may stress current debt service payment arrangements if anticipated savings from changes to the government pension system are not realized. In addition to the contacts named above, Tara Carter, Assistant Director; Emil Friberg, Assistant Director; Divya Bali, Analyst-in-Charge; and Steven Berke, Karen Cassidy, and Eddie Uyekawa made significant contributions to this report. Dawn Simpson, Director; Nicole Burkart, Assistant Director; and J. Mark Yoder provided accounting expertise. Also contributing to this report were Pedro Almoguera, Jeffrey Arkin, Ann Czapiewski, John Hussey, Heather Krause, Donna Miller, Amy Radovich, Justin Snover, and A.J. Stephens.
The United States has five territories: Puerto Rico, American Samoa, CNMI, Guam, and USVI. The territories, like U.S. states in some cases, borrow through financial markets. Puerto Rico in particular has amassed large amounts of debt, and defaulted on billions of dollars of debt payments. In response to the fiscal crisis in Puerto Rico, Congress enacted and the President signed the Puerto Rico Oversight, Management, and Economic Stability Act (PROMESA) in June of 2016, which established an Oversight Board with broad powers of budgetary and financial control over Puerto Rico and requires GAO to study fiscal issues in all five U.S. territories. In this report, for each territory for fiscal years 2005-2015, GAO examined (1) trends in public debt and its composition, (2) trends in revenue and its composition, (3) the major reported drivers of the territory's public debt, and (4) what is known about the ability of each territory to repay public debt. GAO analyzed the territories' single audit reports; interviewed officials from the territories' governments, ratings agencies, and subject matter experts; and reviewed documents and prior GAO work. Puerto Rico: Between fiscal years 2005 and 2014, the latest figures available, Puerto Rico's total public debt outstanding (public debt) grew from $39.2 billion to $67.8 billion, reaching 66 percent of Gross Domestic Product (GDP). Despite some revenue growth, Puerto Rico's net position was negative and declining during the period, reflecting its deteriorating financial position. Experts pointed to several factors as contributing to Puerto Rico's high debt levels, and in September 2016 Puerto Rico missed up to $1.5 billion in debt payments. The outcome of the ongoing debt restructuring process will determine future debt repayment. American Samoa: American Samoa's public debt more than doubled in fiscal year 2015 to $69.5 million, but remained small relative to its economy, with a debt to GDP ratio of 10.9 percent. 
American Samoa's debt was primarily used to fund infrastructure projects. Between fiscal years 2005 and 2015, revenues grew and the government's net position was positive and generally improving. GAO previously reported that American Samoa relies heavily on the tuna processing and canning industry. Disruptions in this industry could affect its ability to repay debt. Commonwealth of the Northern Mariana Islands (CNMI): CNMI's public debt declined from $251.7 million to $144.7 million between fiscal years 2005 and 2015, decreasing CNMI's debt to GDP ratio to 16 percent. Most of CNMI's debt was used to refinance prior debt and fund infrastructure projects. Despite revenue growth since fiscal year 2011, CNMI's net position was negative and generally declining during the period. GAO previously reported that labor shortages may affect GDP. This could impede CNMI's ability to repay debt in the future. Guam: Between fiscal years 2005 and 2015, Guam's public debt more than doubled from almost $1 billion to $2.5 billion, with a debt to GDP ratio of 44 percent for fiscal year 2015. Most of Guam's debt was used to comply with federal requirements and court orders. Revenue grew during this period, and net position fluctuated significantly, with a negative balance in fiscal year 2015. Despite recent and expected economic growth, GAO found that large unfunded pension and other post-employment benefit (OPEB) liabilities may present a risk. U.S. Virgin Islands (USVI): Between fiscal years 2005 and 2015, USVI's public debt nearly doubled, reaching $2.6 billion and a debt to GDP ratio of 72 percent. Since 2010, most of USVI's debt was used to fund general government operations. Revenue remained stagnant and net position was negative and declining during the period, reflecting a deteriorating financial position. 
While USVI holds a year's worth of debt service payments in reserve, GAO found that economic uncertainty and looming government pension fund insolvency by 2023 may hamper repayment. In early 2017, USVI was unable to access capital markets to issue new debt at favorable rates. Although the government adopted a financial plan intended to reduce expenditures and increase revenue, the plan does not address USVI's significant unfunded pension and OPEB liabilities and it is unclear whether the plan will produce the intended level of savings. GAO is not making recommendations in this report.
JWST is envisioned to be a large deployable space telescope, optimized for infrared observations, and the scientific successor to the aging Hubble Space Telescope. JWST is being designed for a 5-year mission to find the first stars, study planets in other solar systems to search for the building blocks of life elsewhere in the universe, and trace the evolution of galaxies from their beginning to their current formation. JWST is intended to operate in an orbit approximately 1.5 million kilometers—or 1 million miles—from the Earth. With a 6.5-meter primary mirror, JWST is expected to operate at about 100 times the sensitivity of the Hubble Space Telescope. JWST’s science instruments are designed to observe very faint infrared sources and therefore are required to operate at extremely cold temperatures. To help keep these instruments cold, a multi-layered tennis court-sized sunshield is being developed to protect the mirrors and instruments from the sun’s heat. The JWST project is divided into three major segments: the observatory segment, the ground segment, and the launch segment. When complete, the observatory segment of JWST is to include several elements (Optical Telescope Element (OTE), Integrated Science Instrument Module (ISIM), and spacecraft) and major subsystems (sunshield and cryocooler). The hardware configuration referred to as OTIS was created when the Optical Telescope Element and the Integrated Science Instrument Module were integrated. Additionally, JWST is dependent on software to deploy and control various components of the telescope, and to collect and transmit data back to Earth. The elements, major subsystems, and software are being developed through a mixture of NASA, contractor, and international partner efforts. See figure 1 for the elements and major subsystems of JWST and appendix 1 for more details, including a description of the elements, major subsystems, and JWST’s instruments. 
For the majority of work remaining, the JWST project is relying on two contractors: Northrop Grumman and the Association of Universities for Research in Astronomy’s Space Telescope Science Institute. Northrop Grumman plays the largest role, developing the sunshield, the Optical Telescope Element, the spacecraft, and the Mid-Infrared Instrument’s cryocooler, in addition to integrating and testing the observatory. Space Telescope Science Institute’s role includes soliciting and evaluating research proposals from the scientific community, and receiving and storing the scientific data collected, both of which are services that it currently provides for the Hubble Space Telescope. Additionally, the Institute is developing the ground system that manages and controls the telescope’s observations and will operate the observatory on behalf of NASA. JWST will be launched on an Ariane 5 rocket, provided by the European Space Agency. JWST depends on 22 deployment events—more than a typical science mission—to prepare the observatory for normal operations on orbit. For example, the sunshield and primary mirror are designed to fold and stow for launch and deploy once in space. Due to its large size, it is nearly impossible to perform deployment tests of the fully assembled observatory, so the verification of deployment elements is accomplished by a combination of lower level component tests in flight-simulated environments; ambient deployment tests for assembly, element, and observatory levels; and detailed analysis and simulations at various levels of assembly. We have previously found that complex development efforts like JWST face numerous risks and unforeseen technical challenges, which can often become apparent during integration and testing. To accommodate unanticipated challenges and manage risk, projects reserve extra time in their schedules, which is referred to as schedule reserve, and extra funds in their budgets, which is referred to as cost reserve. 
Schedule reserve is allocated to specific activities, elements, and major subsystems in the event of delays or to address unforeseen risks. Each JWST element and major subsystem has been allocated schedule reserve. When an element or major subsystem exhausts its schedule reserve, delays may begin to consume the schedule reserve of other elements or major subsystems whose activities cannot proceed until that prior work is finished. Cost reserves are additional funds within the project manager’s budget that can be used to address unanticipated issues for any element or major subsystem during development. For example, cost reserves can be used to buy additional materials to replace a component or, if a project needs to preserve schedule reserve, to accelerate work by adding shifts to expedite manufacturing. NASA’s Goddard Space Flight Center—the NASA center with responsibility for managing JWST—has issued procedures that establish the requirements for cost and schedule reserves. In addition to cost reserves held by the project manager, management reserves are funds held by the contractors that allow them to manage program risks and to address unanticipated cost increases throughout development. We have previously found that management reserves should contain 10 percent or more of the cost to complete a project and are generally used to address various issues tied to the contract’s scope. NASA’s cost-plus-award-fee contract with Northrop Grumman has spanned almost two decades, during which there have been significant variances in contractor performance. Cost-reimbursement contracts are suitable when uncertainties in the scope of work or cost of services prevent the use of contract types in which prices are fixed, known as fixed-price contracts. 
Award fee contracts provide contractors the opportunity to obtain monetary incentives for performance in designated areas identified in the award fee plan. Award fees may be used when key elements of performance cannot be defined objectively, and, as such, require the project officials’ judgment to assess contractor performance. For JWST’s contract with Northrop Grumman, these areas include cost, schedule, technical, and business management and are established in the contract’s performance evaluation plan. In December 2013, the JWST program and the contractor agreed to replace a $56 million on-orbit incentive—an incentive based on successful performance in space—with award fees. The award fees are to incentivize cost and schedule performance during development. This shift increased the available award fee for the entire contract to almost a quarter of a billion dollars. According to officials, restructuring the incentives gave NASA more flexibility to incentivize the contractor to prioritize cost and schedule performance over exceeding technical requirements. In December 2014, we found that NASA award fee letters for the award fee periods from February 2013 to March 2014 indicated that the contractor had been responsive to interim award fee period criteria provided by NASA, and contractor officials confirmed that they pay close attention to this guidance in prioritizing their work. For example, Northrop Grumman officials reported that they had made specific changes to improve communications in direct response to this guidance, which was validated by award fee letters from NASA. The JWST program has a history of significant schedule delays and increases to project costs, which resulted in replans in 2011 and 2018. Before 2011, early technical and management challenges, contractor performance issues, low levels of cost reserves, and poorly phased funding caused the JWST program to delay work. 
As a result, the program experienced schedule overruns, including launch delays, and cost growth. The JWST program underwent a replan in September 2011 and a rebaseline in November of that same year, and Congress placed an $8 billion cap on the project’s formulation and development costs. On the basis of the replan, NASA rebaselined JWST with a life-cycle cost estimate of $8.835 billion, which included additional money for operations and a planned launch in October 2018. Congress also required that NASA treat any cost increase above the cap according to procedures established for projects that exceed their development cost estimates by at least 30 percent. This process is known as a rebaseline. Congress must authorize continuation of the JWST program if formulation and development costs increase over the $8 billion cost cap. In June 2018, after a series of launch delay announcements due to technical and workmanship issues identified during spacecraft element integration, NASA notified Congress that it had again revised the JWST program’s cost and schedule estimates. NASA estimated that it now required an additional $828 million and 29 more months to complete the project beyond the estimates agreed to in the 2011 rebaseline. As of November 2018, NASA had funding to continue to execute the program and was waiting to see if Congress would authorize the program’s continuation and appropriate funds for the program in fiscal year 2019. Figure 2 shows the project’s history of changes to its cost or schedule and key findings from two external independent review teams and our prior work. As discussed above, various technical and workmanship errors drove some of the more recent delays. Examples of workmanship issues we found in the past include the following: In October 2015, the project reported that a piece of flight hardware for the sunshield’s mid-boom assembly was irreparably damaged during vacuum sealing in preparation for shipping. 
The damaged piece had to be remanufactured, which consumed 3 weeks of schedule reserve. In April 2017, a contractor technician applied too much voltage and irreparably damaged the spacecraft’s pressure transducers, components of the propulsion system that help monitor spacecraft fuel levels. The transducers had to be replaced and reattached in a complicated welding process. At the same time, Northrop Grumman also addressed several challenges with integrating sunshield hardware. These issues combined took up another 1.25 months of schedule reserve. In May 2017, some of the valves in the spacecraft propulsion system’s thruster modules were leaking beyond permissible levels. Northrop Grumman determined that the most likely cause was the use of an improper cleaning solution, and the thruster modules were returned to the vendor for investigation and refurbishment. Reattaching the refurbished modules was expected to be complete by February 2018, but was delayed by one month when a technician applied too much voltage to one of the components in a recently refurbished thruster module. NASA and Northrop Grumman reported that resolving the thruster module issue resulted in a 2-month delay to the project’s overall schedule. In October 2017, when conducting folding and deployment exercises on the sunshield, Northrop Grumman discovered several tears in the sunshield membrane layers. According to program officials, a workmanship error contributed to the tears. The tears resulted in another 2-month delay to the project’s overall schedule. In addition, some first-time efforts took longer than planned. For example, in fall 2017, the project determined that it would need to use up to 3 months of schedule reserve based upon lessons learned from the contractor’s initial sunshield folding operation. This first deployment, or unfolding, took 30 days longer than planned. The sunshield has since undergone another deployment, and will be deployed twice more before launch. 
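The schedule reserve consumption described above can be tallied with a short script. The following is an illustrative sketch only: the events and durations come from this section, while the week-to-month conversion and all variable names are assumptions.

```python
# Illustrative tally of the JWST schedule reserve consumption reported above.
# Events and durations are taken from the text; everything else is hypothetical.

MONTHS_PER_WEEK = 12 / 52  # rough conversion from weeks of reserve to months

events = [
    ("Oct 2015: sunshield mid-boom hardware damaged", 3 * MONTHS_PER_WEEK),
    ("Apr 2017: pressure transducers damaged / sunshield integration", 1.25),
    ("May 2017: thruster module valve leaks and refurbishment", 2.0),
    ("Oct 2017: sunshield membrane tears", 2.0),
    ("Fall 2017: initial sunshield folding lessons learned (up to)", 3.0),
]

total = sum(months for _, months in events)
for label, months in events:
    print(f"{label}: {months:.2f} months")
print(f"Total reserve consumed (approx.): {total:.2f} months")
```

The tally shows why the February 2018 finding that reserves had been consumed, noted later in this section, followed directly from this run of incidents.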
The IRB took into account these technical and workmanship errors, as well as other considerations, when it analyzed the project’s organizational and technical issues. The board’s final report, issued in May 2018, included 31 recommendations that addressed a range of factors. For example, the IRB recommended that the project:
- Conduct an audit to identify potential embedded design flaws—problems that have not been detected through analysis, inspection, or test activities and pose a significant risk to JWST schedule, cost, and mission success;
- Establish corrective actions to detect and correct human mistakes during integration and test;
- Establish a coherent, agreed-upon, and factual narrative on project status and communicate that status regularly to all relevant stakeholders; and
- Augment integration and test staff to ensure adequate long-term staffing and improve employee morale.
In its response to the IRB’s report, NASA stated that it accepted the report’s recommendations and had already begun implementing actions in response to many of them. Further, project officials told us that some of the actions were underway before the IRB completed its review. To develop a new schedule for JWST’s 2018 replan, NASA took into account the remaining integration and test work and added time to the schedule to address threats that were not yet mitigated. This includes 5.5 months to address an anomaly that occurred on the sunshield’s cover in 2018. The project also replenished its schedule reserves—which we found in February 2018 had been consumed—so that they now exceed the recommended levels. Both the project and IRB conducted schedule risk assessments that produced similar launch dates. The project relied on the replan schedule to determine its remaining costs because the workforce necessary to complete the observatory represents most of the remaining cost. Following is additional information on the schedule and cost considerations. 
Schedule: JWST’s revised launch readiness date of March 2021 reflects a consideration of the hardware integration and test challenges the project has experienced, including adding time to:
- Add snag guards for the membrane tensioning system—which helps deploy the sunshield and maintain its correct shape—to prevent excess cable from snagging;
- Repair tears in the sunshield membrane;
- Deploy, fold, and stow the sunshield; and
- Mitigate contractor schedule threats.
In addition, the project added extra time to the schedule to complete repairs to the membrane cover assembly, which did not perform as expected during acoustics testing in April 2018. The membrane cover assembly shown in figure 3 is used to cover the sunshield membrane when in the stowed position to provide thermal protection during launch. After the anomaly occurred, the project halted spacecraft element testing, investigated the anomaly, and found that the fasteners had come loose due to a design change made to prevent the fasteners from damaging the sunshield membrane. The design change caused the nuts to not lock properly. According to project officials, due to the design of the membrane cover assembly, the project was not able to conduct flight-like, stand-alone testing on the cover prior to spacecraft element testing. As a result, the project did not discover the design issue until the hardware came loose while installed on the spacecraft element. The project determined that the repairs would take approximately 5.5 months. The project’s replan also reflected schedule reserves above the level required by Goddard Space Flight Center policy, which would have been approximately 5 months at that time. The new schedule includes a total of 293 days, or 9.6 months, of schedule reserve leading up to its committed launch readiness date of March 2021. NASA approved a JWST launch date of March 2021, but the project and the contractor are working toward a launch date in November 2020. 
Figure 4 shows the project’s new schedule following the 2018 replan, including how the project distributed its schedule reserves through different integration and test activities. As part of its May 2018 study, the IRB reviewed the project’s schedule and recommended a launch date of March 2021, which was subsequently reflected in NASA’s new schedule for the program. In reviewing the project’s schedule, the IRB found that the project had robust scheduling practices for ensuring that the schedule represented a complete and dynamic network of tasks that could respond automatically to changes. This schedule also passed a standard health check with minimal errors, indicating that it was well constructed. However, the IRB noted that this schedule does not account for certain types of unknown risks to the program, such as integration and test errors, which can take many months to resolve, or the potential need to remove a science instrument from the observatory, which could have an impact of about 1 year. As a result, the program could experience additional delays if a risk of this magnitude is realized. Cost: The project’s new $9.7 billion life-cycle cost estimate is principally driven by the schedule extension, which requires retaining the contractor’s workforce longer than expected to complete integration and test. Specifically, the project determined that almost all of the hardware had been delivered and the remaining cost was predominantly the cost for the workforce necessary to complete and test the observatory. For the past 3 years, we have reported that Northrop Grumman’s ability to decrease its workforce was central to JWST’s capacity to meet its long-term cost commitments. However, Northrop Grumman’s actual workforce continued to exceed its projections. This was because it needed to maintain higher workforce levels due to technical challenges, including problems with spacecraft and sunshield integration and test. 
It also needed to keep specialized engineers available when needed during final assembly and test activities. In developing the cost estimate supporting the 2018 replan, the project used a Northrop Grumman workforce profile that is higher than previous projections because Northrop Grumman now plans to maintain personnel longer during integration and test. According to project officials, the planned reduction of Northrop Grumman’s workforce is now more gradual and conservative than the prior plan. For example, the Northrop Grumman workforce will not start to significantly decline until the observatory ships to the launch site, which is expected to occur in August 2020. As shown in figure 5, the JWST workforce assembling the observatory declines, while the government and contractor workforce necessary to manage and operate the observatory remains after the internal launch readiness date of November 2020. As seen in the figure, the Space Telescope Science Institute workforce—the contractor responsible for operating JWST—will remain generally flat between fiscal years 2021 and 2026, when it operates the observatory. The NASA civil service and support contractor workforce will remain relatively flat through the November 2020 launch date and then decline. In addition, the new cost estimate also took into account $61 million for implementing the IRB recommendations and mission success enhancements, funding for project cost reserves, and operations costs. In June 2018, the NASA associate administrator—who is the project’s decision authority—approved the project to proceed with its replan with a March 2021 launch date and $9.7 billion in life-cycle costs based on the Agency Program Management Council review and replan documents. The associate administrator did not require the project to conduct an updated Joint Cost and Schedule Confidence Level (JCL) analysis for this replan. 
A JCL is an integrated analysis of a project’s cost, schedule, risk, and uncertainty whose result indicates the probability that a project will meet its cost and schedule targets. NASA policy states that a JCL should be recalculated and approved as a part of the rebaselining approval process, but it is not required. In its replan decision memo, NASA’s associate administrator explained that he did not require the project to update the JCL because project costs are almost entirely related to the workforce and most of the remaining planned activities will be performed generally in sequence. Therefore, according to NASA’s associate administrator, the total cost would be driven almost entirely by the schedule because the workforce levels will remain the same through delivery of the observatory. Both the project and independent estimators used multiple schedule estimating methods to analyze the schedule for the remaining work, and NASA’s associate administrator said these analyses returned consistent, high-confidence launch dates. The project’s ability to execute to its new schedule will be tested as it progresses through the remainder of challenging integration and test work. The project has yet to complete three of five integration and test phases. The remaining phases include integration and test of OTIS, the spacecraft element, and the observatory. Our prior work has shown that integration and testing is the phase in which problems are most likely to be found and schedules tend to slip. For a uniquely complex project such as JWST, this risk is magnified as events start to become more sequential in nature. As a result, it will continue to become more difficult for the project to avoid schedule delays by mitigating issues in parallel. As of November 2018, the project is about a week behind its replanned schedule because repairs on the membrane cover assembly took longer than planned. 
Completing the membrane cover assembly repairs and returning the spacecraft to vibration testing was a key event for the project to demonstrate that it could execute to its new schedule. When the project developed its 2018 replanned schedule, it had planned to complete the membrane cover assembly repairs, reinstall the assembly onto the sunshield, and restart spacecraft element integration and test activities by November 6, 2018. The project allocated 4 weeks of schedule reserves specifically for these repairs. However, the membrane cover repairs proved more difficult than anticipated. For example, the program had to address unanticipated technical challenges on the membrane cover assemblies, including repairing tears and pin holes in the covers discovered after the covers were removed. The project also had to allot time to install bumpers—Kapton tubes—on the assembly to protect the composite material on a sunshield structure during launch. The project identified the need to add the bumpers during subassembly vibration testing. As a result, as of November 2018, the project had used about 4.5 weeks of schedule reserves to cover delays associated with these activities. The use of reserves beyond what the project had planned for the repairs pushed the restart of spacecraft element integration and test activities out about a week to November 14, 2018. Figure 6 compares the project’s initial membrane cover assembly schedule in June 2018 to the actual schedule in November 2018. While the project repaired the membrane cover assembly, it also used this time to conduct risk mitigation activities on OTIS. For example, the project worked to mitigate a design issue on the frill connections. The frill is composed of a single layer of blankets placed around the outside of the primary mirror to block stray light (see figure 7). 
A combination of modeling and inspections revealed that most of the frill sections did not have as much slack as expected at the near-absolute zero cryogenic temperatures of space. This caused shrinkage that put stress on the edges of the outer ring of mirrors, which could affect the stability of the optical mirror and image quality. The project loosened these outer connections by adding a ring to the connecting points. As of November 2018, project officials said they were in the process of verifying the fix through inspections. Examples of technical issues and risks that the project continues to face during the remaining phases of integration and test include: The project is working to mitigate a design issue on the sunshield membrane tensioning system—which helps deploy the sunshield and maintain its correct shape. In our February 2018 report, we found that Northrop Grumman was planning to modify the design of the membrane tensioning system after one of the sunshield’s six membrane tensioning systems experienced a snag when conducting folding and deployment exercises on the sunshield in October 2017. The project and Northrop Grumman determined that a design modification was necessary to fully mitigate the issue, which includes modifying clips used to progressively release the cable tension and adding guards to control the excess cable. The project identified a concern that the depressurization of trapped air in the folded sunshield membrane when the fairing separates to release the JWST observatory may overly stress the membrane material. The project is working with Arianespace—the company responsible for operating JWST’s launch vehicle—and experts at the Kennedy Space Center to resolve this concern. Officials estimated that a design solution would be in place in mid-2019. 
However, if the project determines that it needs to reinforce the membrane covers to survive excessive residual pressure as it works on this design solution, a multi-month schedule delay could occur. As of November 2018, the project has mitigated 21 of its 47 hardware and software risks to acceptable levels, and reviews these risks monthly for any changes that might affect the continued acceptability of the risk. Five of these 21 risks are related to the project’s more than 300 potential single point failures—several of which are related to the deployment of the sunshield. The project is actively working to mitigate the remaining 26 risks to acceptable levels or closure prior to launching. The project also has several first-time and challenging integration and test activities remaining. For example, the project must integrate OTIS and the completed spacecraft element and test the full observatory in the final integration phase, which includes another set of challenging environmental tests. See figure 8 for an image of OTIS and the spacecraft element prior to being integrated. As previously discussed, the project also has two remaining deployments of the sunshield, and prior deployments have taken longer than planned. To help mitigate the risks associated with the deployments, the project added additional time for deployments in the 2018 replanned schedule based on lessons learned from prior deployments. The two remaining deployments are to occur after spacecraft element integration and test and again after observatory integration and test. The JWST project office is required to evaluate whether the project can complete development within its revised cost and schedule commitments at its next major review—the system integration review—planned for August 2019. This review is to occur after the project has completed two major tasks—OTIS and spacecraft element integration and test. 
The review is to evaluate whether the project (1) is ready to enter observatory integration and test, and (2) can complete remaining project development with acceptable risk and within its cost and schedule constraints. NASA guidance does not require projects to conduct a JCL at this review. However, project officials said that they plan to conduct another schedule risk analysis in the future. They do not intend to complete a new JCL for the same reasons they did not complete one for the 2018 replan—because costs are almost entirely related to the workforce and can be derived from a schedule that takes into account known risk. While not required, conducting a JCL prior to the system integration review would inform NASA about the probability of meeting both its cost and schedule commitments. If the project proceeds with its plan to conduct only a schedule risk analysis, NASA would be provided only with an updated probability of meeting its schedule commitments. Our cost estimating best practices recommend that cost estimates be updated to reflect changes to a program and kept current as it moves through milestones and as new risks emerge. In addition, government and industry cost and schedule experts we spoke with noted that integration and testing is a critical time for a project when problems can develop. These experts told us that completing a JCL is a best practice for analyzing major risks at the most uncertain part of project execution. Conducting a JCL at system integration review—a review that occurs during the riskiest phase of development, the integration and test phase—would allow the project to update its assumptions of risk and uncertainty based on its experiences in OTIS and spacecraft element integration and test. The project could then determine how those updated assumptions affect overall cost and schedule for the JWST project. 
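To illustrate what a JCL adds over a schedule-only risk analysis, the sketch below runs a small Monte Carlo simulation with entirely hypothetical distributions, targets, and rates (none of these figures are JWST's, and this is not NASA's model). In this toy model, cost is driven largely by schedule, as the associate administrator's memo argues for JWST, so joint confidence tracks schedule confidence closely but can never exceed it.

```python
import random

# Hypothetical illustration of a Joint Cost and Schedule Confidence Level (JCL).
# All distributions, targets, and rates below are invented for this sketch.
random.seed(0)

SCHEDULE_TARGET_MONTHS = 29.0   # hypothetical months remaining to launch readiness
COST_TARGET_MILLIONS = 900.0    # hypothetical remaining cost target, $M
BURN_RATE = 28.0                # hypothetical workforce cost per month, $M

TRIALS = 100_000
joint = sched_only = 0
for _ in range(TRIALS):
    # Draw a schedule outcome from a triangular distribution (low, high, mode).
    months = random.triangular(24.0, 34.0, 27.0)
    # Cost is mostly workforce (schedule-driven) plus independent noise.
    cost = months * BURN_RATE + random.gauss(0.0, 40.0)
    if months <= SCHEDULE_TARGET_MONTHS:
        sched_only += 1
        if cost <= COST_TARGET_MILLIONS:
            joint += 1

print(f"Schedule-only confidence: {sched_only / TRIALS:.0%}")
print(f"Joint cost-and-schedule confidence: {joint / TRIALS:.0%}")
```

A real JCL models cost and schedule as a correlated network of tasks with risk inputs; the sketch shows only the core point that adding the cost dimension can lower, never raise, the reported confidence relative to a schedule-only analysis.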
As noted above, the project has many risks to mitigate, technical challenges to overcome, and challenging test events to complete, which could affect the project’s schedule and risk posture. Further, the project has an established history of significant cost growth and schedule delays. In its June 2018 letter notifying an appropriate congressional committee of its updated cost and schedule commitments, NASA acknowledged that recent cost growth for the project will likely impact other science missions. Conducting a JCL at system integration review would provide NASA and Congress with critical information for making informed resource decisions on the JWST project and its affordability within NASA’s portfolio of projects more broadly. NASA has taken steps to augment oversight of the contractor and project following the discovery of the embedded design flaws and workmanship errors that contributed to the project’s most recent schedule delays and cost increases. See table 1 for examples of changes NASA has made to contractor and project oversight—some of which NASA self-identified and others that were in response to IRB recommendations. The IRB made 31 recommendations that ranged from improving employee morale to improving security during transporting JWST to its launch site. NASA has also used award fees to try to incentivize Northrop Grumman to improve its performance. In a July 2018 hearing on the JWST program before the House Science, Space, and Technology Committee, Administrator Bridenstine stated that NASA had reduced the available award fee through commissioning by $28 million out of a total of about $60 million. Northrop Grumman also did not earn its full award fee in the two most recent periods of performance that NASA assessed. For the performance period of April 1, 2017 to September 30, 2017, Northrop Grumman earned approximately 56 percent of the available award fee. 
Reasons that NASA cited for its evaluation of award fees in this period included workmanship errors on the propulsion system, schedule delays, as well as issues with schedule execution, management, and quality control. For the period of October 1, 2017 to March 31, 2018, Northrop Grumman earned none of the available award fee. Northrop Grumman’s overall score was driven by an “unacceptable” rating in schedule and cost due to delays and in anticipation of exceeding the project’s $8 billion cost cap. Northrop Grumman received an “excellent” rating under the technical category, but the evaluation noted ongoing issues with quality controls, which resulted in delays. For example, the process steps for applying voltage to the spacecraft’s pressure transducers were not clear enough, which resulted in technician error and irreparable damage to the hardware. According to Northrop Grumman officials, the contractor has started to take action to try to improve its quality assurance processes. Officials described actions that ranged from rewriting hardware integration and test procedures to starting efforts to change aspects of the company’s culture that contributed to quality control issues. For example, in July 2018, Northrop Grumman initiated a JWST mission assurance culture change campaign to increase focus on product quality and process compliance. This effort includes having inspectors affirm by signature that they have personally inspected, verified, and confirmed that all aspects of an activity meet quality standards. According to the form instructions, if the inspector is uncertain on compliance or if instructions are unclear, workers are to halt work, investigate and assess the situation, and request help to resolve the situation. Project and Northrop Grumman officials provided an example of these changes working. 
During a manual deployment of a radiator panel, a Northrop Grumman employee discovered that a flap used as thermal protection for a radiator was installed incorrectly and reported the error. Northrop Grumman technicians found that this flap had been swapped with another flap in the process of moving them to be installed and corrected the problem before work proceeded. Further, NASA and Northrop Grumman are conducting audits to try to minimize the risk of failures during the remaining phases of integration and test. These audits are conducted on items that have not been fully tested, are in workmanship-sensitive areas, or have had a late design change. The first phase of the audit was completed in September 2018 and found no major design issues or hardware rework required. The project plans to audit other areas through at least spring 2019, but will add audits if needed. The JWST oversight structure includes a number of positions that could be responsible for ensuring that the recent augmentations to contractor and project oversight are sustained through launch (see table 2). In response to our review, NASA officials clarified that the project manager has sole responsibility for ensuring that these improvements are sustained through launch. Further, these officials stated that the project office is responsible for monitoring these changes at the project level and at Northrop Grumman. The project manager’s continued focus on these efforts will be important because: The project is implementing a wide span of improvement efforts, ranging from more on-site coverage at the contractor facility to cultural improvements, which will now need to be sustained for an additional 29 months. The project has had recurring issues with effective internal and external communication as well as defining key management and oversight responsibilities, both of which are important to sustaining oversight. 
For example, the Independent Comprehensive Review Panel identified communication problems—between the JWST project and Science Mission Directorate management as well as between NASA and Northrop Grumman—and found that the project’s governance structure lacked clear lines of authority and accountability. In December 2012, we found the JWST project had taken several steps to improve communication—such as instituting meetings that include various levels of NASA, contractor, and subcontractor management—but the IRB’s findings in 2018 indicate that communication and governance issues have resurfaced in some areas. For example, the IRB found that communication with key stakeholders, including the science community, Congress, and NASA leadership, has been variable and at times inconsistent. The project may encounter new schedule pressures as it proceeds through integration and test. A senior NASA official with expertise in workmanship issues told us that schedule pressure is a key reason for increased quality problems on projects. For example, this official said that companies tend to give experts leniency to operate without the burden of quality assurance paperwork when schedule pressures arise, which can lead to workmanship errors. While JWST project officials told us they do not view this as applicable to their project, this perspective on potential schedule pressures and workmanship merits continued attention, given the magnitude of the technical challenges and delays the project has faced. We will continue to monitor the project’s efforts at maintaining these oversight augmentations in future reviews, given that less than a year has passed since the project began implementing many of them. 
Moreover, the project may find that some actions will be required of officials outside the project, particularly since the communication problems identified by the IRB may well extend to headquarters’ interaction with stakeholders from the science community, industry, and the Congress. JWST is one of NASA’s most expensive and complex science projects, and NASA has invested considerable time and resources on it. The project first established its cost and schedule baseline in 2009. Since then, the project made progress by completing two of five phases of integration and test, but has also experienced significant cost growth and schedule delays. However, the project did not complete a JCL analysis as part of its second replan. Between now and its system integration review planned for August 2019, the JWST program will have to continue to address technical challenges and mitigate risks. Conducting a JCL would better inform decision makers on the status of the project as they determine whether the project can complete remaining project development with acceptable risk and within its cost and schedule constraints. Given the project is now on its third iteration of cost and schedule commitments, conducting a JCL is a small step that NASA can take to demonstrate it is on track to meet these new commitments. We are making the following recommendation to NASA: The NASA Administrator should direct the JWST project office to conduct a JCL prior to its system integration review. (Recommendation 1) We provided a draft of this report to NASA for comment. In written comments, NASA agreed with our recommendation. NASA expects to complete the JCL by September 2019, prior to the system integration review. The comments are reprinted in appendix II. NASA also provided technical comments, which have been addressed in the report, as appropriate. We are sending copies of this report to the appropriate congressional committees, the NASA Administrator, and other interested parties. 
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. In addition to the contact named above, Molly Traci (Assistant Director), Karen Richey (Assistant Director), Jay Tallon (Assistant Director), Brian Bothwell, Daniel Emirkhanian, Laura Greifner, Erin Kennedy, Jose Ramos, Sylvia Schatz, Roxanna Sun, and Alyssa Weir made key contributions to this report.
JWST, a large, deployable telescope, is one of NASA's most complex projects and top priorities. The project has delayed its planned launch three times since September 2017 due to problems discovered in testing. In June 2018, NASA approved new cost and schedule estimates for JWST. Since the project established its cost and schedule baselines in 2009, the project's costs have increased by 95 percent and the launch date has been moved back by 81 months. Conference Report No. 112-284, accompanying the Consolidated and Further Continuing Appropriations Act, 2012, included a provision for GAO to assess the project annually and report on its progress. This is the seventh report. This report assesses (1) the considerations NASA took into account when updating the project's cost and schedule commitments and (2) the extent to which NASA has taken steps to improve oversight and performance of JWST, among other issues. GAO reviewed relevant NASA policies, analyzed NASA and contractor data, and interviewed NASA and contractor officials. In June 2018, the National Aeronautics and Space Administration (NASA) revised the cost and schedule commitments for the James Webb Space Telescope (JWST) to reflect known technical challenges, as well as provide additional time to address unanticipated challenges. For example, the revised launch readiness date of March 2021 included 5.5 months to address a design issue for the cover of the sunshield (see image). The purpose of the sunshield is to protect the telescope's mirrors and instruments from the sun's heat. NASA found that hardware on the cover came loose during testing in April 2018. The new cost estimate of $9.7 billion is driven by the schedule extension, which requires keeping the contractor's workforce on board longer than expected. Before the project enters its final phase of integration and test, it must conduct a review to determine if it can launch within its cost and schedule commitments. 
As part of this review, the project is not required to update its joint cost and schedule confidence level analysis—an analysis that provides the probability the project can meet its cost and schedule commitments—but government and industry cost and schedule experts have found it is a best practice to do so. Such analysis would provide NASA officials with better information to support decisions on allocating resources, especially in light of the project's recent cost and schedule growth. NASA has taken steps to improve oversight and performance of JWST, and identified the JWST project manager as responsible for monitoring the continued implementation of these changes. Examples of recent changes include increasing on-site presence at the contractor facility and conducting comprehensive audits of design processes. Sustaining focus on these changes through launch will be important if schedule pressures arise later and because of past challenges with communications. GAO will follow up on the project's monitoring of these improvements in future reviews. GAO recommends NASA update the project's joint cost and schedule confidence level analysis. NASA concurred with the recommendation made in this report.
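The highlights figures above allow a back-of-envelope check: the excerpt does not state the 2009 baseline, but the reported 95 percent cost growth and the $9.7 billion estimate together imply one. A minimal Python sketch of that arithmetic (purely illustrative; the implied baseline is an inference, not a figure from the report):

```python
# Back-of-envelope check of the highlights figures (illustrative only;
# the 2009 baseline is not stated in this excerpt and is inferred here).
current_estimate = 9.7   # billions of dollars, June 2018 estimate
growth = 0.95            # 95 percent cost growth since the 2009 baseline

implied_baseline = current_estimate / (1 + growth)
print(f"implied 2009 baseline: ${implied_baseline:.2f} billion")
```

This yields an implied baseline of roughly $5 billion, consistent with the project later exceeding its $8 billion cost cap.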
The President issued two executive orders addressing border security and immigration enforcement on January 25, 2017. These orders direct executive branch agencies to implement a series of reporting, policy, and programmatic provisions to carry out the administration’s border security and immigration policies and priorities. Executive Order 13767 lays out key policies of the executive branch with regard to securing the southern border, preventing further unlawful entry into the United States, and repatriating removable foreign nationals. To support these purposes, the order directs DHS to, among other actions, produce a comprehensive study of the security of the southern border; issue new policy guidance regarding the appropriate and consistent use of detention of foreign nationals for violations of immigration law; plan, design, and construct a wall or other physical barriers along the southern border; and hire and onboard, as soon as practicable, 5,000 additional Border Patrol agents. Executive Order 13767 also directs DOJ to assign immigration judges to immigration detention facilities in order to conduct removal and other related proceedings. Executive Order 13768 focuses on immigration enforcement within the United States. Among other things, the order lays out the administration’s immigration enforcement priorities for removable foreign nationals; directs ICE to hire 10,000 additional immigration officers; states that, as permitted by law, it is the policy of the executive branch to empower state and local law enforcement officials to perform the functions of immigration officers; calls for weekly public reports on criminal actions committed by foreign nationals and any jurisdictions that do not honor ICE detainers with respect to such individuals; and terminates the Priority Enforcement Program while reinstituting Secure Communities. 
The order also directs DHS and DOJ to ensure that jurisdictions that willfully prohibit or otherwise restrict communication with DHS regarding immigration status information are not eligible to receive federal grants, except as determined necessary for law enforcement purposes. On March 6, 2017, the President issued Executive Order 13780. This order directed agencies to take various actions to improve the screening and vetting protocols and procedures associated with the visa-issuance process and the U.S. Refugee Admissions Program. Specifically, the order directed agencies to conduct a worldwide review to identify any additional information needed from each foreign country to adjudicate visas and other immigration benefits to ensure that individuals applying for such benefits are not a security or public-safety threat. The order also instituted visa entry restrictions for nationals from certain listed countries for a 90-day period; directed agencies to develop a uniform baseline for screening and vetting standards and procedures; and suspended the U.S. Refugee Admissions Program for 120 days in order to review refugee application and adjudication procedures. The order further directed DHS to expedite the completion and implementation of a biometric entry-exit tracking system for travelers to the United States. Implementation of Executive Order 13780 entry restrictions for visa travelers and refugees commenced on June 29, 2017, subject to a June 26 ruling of the U.S. Supreme Court prohibiting enforcement of such restrictions against foreign nationals with a credible claim of a bona fide relationship to a person or entity in the United States. The federal budget process provides the means for the President and Congress to make informed decisions between competing national needs and policies, allocate resources among federal agencies, and ensure laws are executed according to established priorities. 
The President generally submits the budget request for the upcoming fiscal year to Congress no later than the first Monday of February (e.g. the fiscal year 2019 budget request was submitted in February 2018). To ensure there is not a lapse in appropriations for one or more federal departments or agencies, regular appropriations bills must be enacted to fund the government before the expiration of the prior appropriations, which would typically be in effect through September 30 in a regular appropriations cycle. If these regular full-year appropriations bills are not enacted by the deadline, Congress must pass a continuing appropriation (or resolution) to temporarily fund government operations into the next fiscal year. For fiscal year 2017, multiple continuing appropriations were enacted to extend funding until the Consolidated Appropriations Act, 2017, was enacted in May 2017. At the time the President issued the executive orders in January and March of 2017, agencies were operating under a continuing appropriation which did not incorporate any funding explicitly for the administration’s immigration and border security priorities, such as hiring 5,000 additional Border Patrol agents. The administration sought additional funds to implement the executive orders through an out-of-cycle March 2017 budget amendment and supplemental appropriations request for the remainder of fiscal year 2017. In May 2017, Congress provided funding for selected priorities through the Consolidated Appropriations Act, 2017. The administration submitted additional funding requests related to the executive orders through the President’s fiscal year 2018 and 2019 budget requests. A number of continuing appropriations acts were enacted from September 2017 through February 2018, providing fiscal year 2018 funding at fiscal year 2017 levels through March 23, 2018. 
The Consolidated Appropriations Act, 2018, was signed into law on March 23, 2018, providing funding for government operations for the remainder of fiscal year 2018. Figure 1 below provides a timeline of executive order issuance and key milestones in the budget process from December 2016 through March 2018. DHS, DOJ, and State each play key roles in enforcing U.S. immigration law and securing U.S. borders. Key components and bureaus at the three agencies, and their general roles and responsibilities with regard to border security and immigration enforcement, are described in table 1. DHS, DOJ, and State issued reports, developed or revised policies, and took initial planning and programmatic actions in response to the executive orders. Each agency took a distinct approach to implementing the orders based on its organizational structure and the scope of its responsibilities. Each executive order established near-term reporting requirements for agencies, including updates on the status of their efforts, studies to inform planning and implementation, and reports for the public. According to officials, agencies focused part of their initial implementation efforts on meeting these reporting requirements. In addition, agencies developed and revised policies, initiated planning efforts, and made initial program changes (such as expanding or expediting programs) to reflect the administration’s priorities. DHS: DHS established an Executive Order Task Force (EOTF), which was responsible for coordinating and tracking initial component actions to implement the executive orders. The EOTF assembled an operational planning team with representatives from key DHS components, such as U.S. Customs and Border Protection (CBP) and ICE. The EOTF and the planning team inventoried tasks in the orders, assigned component responsibilities for tasks, and monitored the status of the tasks through an online tracking mechanism and weekly coordination meetings. 
Additionally, the EOTF coordinated and moved reports required by the orders through DHS. For example, Section 4 of Executive Order 13767 directed DHS to produce a comprehensive study of the security of the southern border. DHS completed and submitted this report to the White House on November 22, 2017, according to EOTF officials. DHS also publicly issued three Declined Detainer Outcome Reports pursuant to Section 9 of Executive Order 13768. Additionally, EOTF officials stated that, in 2017, DHS produced and submitted to the White House 90-day and 180-day reports on the progress of implementing Executive Orders 13767 and 13768. The Secretary of Homeland Security issued two memoranda establishing policy and providing guidance related to Executive Orders 13767 and 13768 in February 2017. One memorandum implemented Executive Order 13767 by outlining new policies designed to stem illegal entry into the United States and to facilitate the detection, apprehension, detention, and removal of foreign nationals seeking to unlawfully enter or remain in the United States. For example, the memorandum directed U.S. Citizenship and Immigration Services (USCIS), CBP, and ICE to ensure that appropriate guidance and training are provided to agency officials to ensure the proper exercise of parole in accordance with existing statute. The other memorandum implemented Executive Order 13768 and provided additional guidance with respect to the enforcement of immigration laws. For example, it terminated the Priority Enforcement Program, under which ICE prioritized the apprehension, detention, and removal of foreign nationals who posed threats to national security, public safety, or border security, including convicted felons; and restored the Secure Communities Program, pursuant to which ICE may also target for removal those charged, but not yet convicted, of criminal offenses, among others. Additionally, the memorandum reiterated DHS’s general enforcement priorities. 
ICE, CBP, and USCIS may allocate resources to prioritize enforcement activities as they deem appropriate, such as by prioritizing enforcement against convicted felons or gang members. DHS components subsequently issued additional guidance further directing efforts to implement the executive orders and apply the guidance from the memoranda. For example, ICE issued guidance to its legal program to review all cases previously administratively closed based on prosecutorial discretion. ICE’s new guidance requested its attorneys to determine whether the basis for closure remains appropriate under DHS’s new enforcement priorities. USCIS also reviewed its guidance for credible and reasonable fear determinations—the initial step for certain removable individuals to demonstrate they are eligible to be considered for particular forms of relief or protection from removal in immigration court. As a result, USCIS made select modifications pursuant to Executive Order 13767, including adding language related to evaluating an applicant’s credibility based on prior statements made to other DHS officials, such as CBP and ICE officers. DHS also initiated a number of planning and programmatic actions to implement the executive orders. In some cases DHS components expanded or enhanced existing regular, ongoing agency activities and programs in response to the orders. For example, in response to Executive Order 13768, ICE officials reported that they expanded the use of the existing Criminal Alien Program. In other instances, DHS components altered their activities consistent with the administration’s immigration priorities. 
For instance, in response to Executive Order 13768, the Secretary of Homeland Security directed ICE to terminate outreach or advocacy services to potentially removable foreign nationals, and reallocate all resources currently used for such purposes to a new office to assist victims of crimes allegedly perpetrated by removable foreign nationals (the Victims of Immigration Crime Engagement, or VOICE, office, established in April 2017). Additional examples of planning and programmatic actions that DHS took, or officials reported taking, in response to the executive orders are described in table 2. DOJ: Within DOJ, the Office of the Deputy Attorney General coordinated and oversaw DOJ’s initial implementation of key provisions in the executive orders, according to DOJ officials. Specifically, DOJ officials said that the Office of the Deputy Attorney General coordinated and collected information for executive order reporting requirements and participated in an interagency working group related to Executive Order 13780, and interagency meetings related to Executive Order 13767. However, DOJ components were responsible for implementing the provisions and ensuring that they met executive order requirements. In addition, DOJ assisted in the creation and issuance of various reports. For example, officials told us that DOJ provided data to State for a report on foreign assistance to the Mexican government, as required by Section 9 of Executive Order 13767. DOJ also jointly issued three reports with DHS in response to Executive Order 13768 Section 16, which included information regarding the immigration status of foreign-born individuals incarcerated under the supervision of the Federal Bureau of Prisons and in pre-trial detention in U.S. Marshals Service (USMS) custody. The Attorney General issued two memoranda providing policy and guidance related to Executive Orders 13767 and 13768 in April and May of 2017. 
The April 2017 memorandum contains guidance for federal prosecutors on prioritizing certain immigration-related criminal offenses. For example, the memorandum requires that federal prosecutors consider prosecution of foreign nationals who illegally re-enter the United States after prior removal, and prioritize defendants with criminal histories. The May 2017 memorandum addresses Executive Order 13768’s provision directing DOJ and DHS to ensure that jurisdictions willfully prohibiting immigration status-related communication with the federal government (referred to as “sanctuary jurisdictions”) are not eligible for federal grants. It requires jurisdictions to certify their compliance with 8 U.S.C. § 1373, under which a federal, state, or local government entity or official may not prohibit, or in any way restrict, the exchange of citizenship or immigration status information with DHS. Additionally, DOJ took a number of initial planning and programmatic steps to implement the executive orders. DOJ officials stated that some provisions outlined in the executive orders represent regular, ongoing agency activities and did not require any major changes to be implemented. For example, DOJ detailed Assistant United States Attorneys (AUSAs) and immigration judges to southern border districts and detention centers to assist in prosecutions and to conduct removal proceedings in response to the executive orders. However, while they expanded their efforts, DOJ officials said that detailing immigration judges and AUSAs to the border districts is a regular practice, and not a new function created by the executive orders. Examples of actions that DOJ took, or officials reported taking, in response to the executive orders are described in table 3. State: State’s Bureaus of Population, Refugees, and Migration and Consular Affairs led efforts to implement key provisions in Executive Order 13780. 
Several legal challenges and resulting federal court injunctions affected State’s implementation of Executive Order 13780 and at times curtailed specific provisions. Initial State actions included conducting reviews and contributing to reports required by the order. For instance, while State generally suspended refugee travel for 120 days, the department, in conjunction with DHS and the Office of the Director of National Intelligence, conducted a review to determine what, if any, additional procedures should be implemented in the U.S. Refugee Admissions Program. According to State officials, the agencies provided a joint memorandum to the President in October 2017 that contained recommendations regarding resumption of the program, specific changes to refugee processing, and further reviews and steps that the interagency group should take. Additionally, State worked with DHS and the Office of the Director of National Intelligence to conduct a worldwide review. This review identified any additional information that the United States may need from each foreign country to adjudicate visas and other immigration benefit applications and ensure that individuals seeking to enter the United States do not pose a threat to public safety or national security. In July 2017, upon completion of this review, DHS, in consultation with State and other interagency partners, issued a report to the President cataloguing information needed from each country and listing countries not providing adequate information. State also issued a number of policies and guidance in response to the executive orders; however, guidance on how to implement certain provisions often changed due to legal challenges. For example, the Bureau of Population, Refugees, and Migration issued 23 iterations of refugee travel restrictions guidance to overseas refugee processing centers in response to federal litigation and budgetary uncertainties. 
Similarly, the Secretary of State issued a number of cables to visa-issuing foreign posts on implementing travel restrictions for nationals of selected countries following court orders limiting the implementation of such restrictions. Executive Order 13780 contained several time-sensitive provisions directed to the Secretary of State. State focused on first addressing these provisions while working towards longer-term priorities outlined in the order. For instance, Executive Order 13780 Sections 2 and 6 established visa and refugee entry restrictions, which contained near-term timelines. State implemented these provisions, consistent with judicial decisions. Examples of planning and programmatic actions that State took, or officials reported taking, to implement Executive Order 13780 are described in table 4. For more information on specific planning or programmatic actions DHS, DOJ, and State have taken to implement the executive orders, see appendix I. The examples we provided for DHS, DOJ, and State represent initial actions and do not constitute an exhaustive list of actions that agencies have taken, or may take in the future, to fully implement the executive orders. Agency officials anticipate that implementation of the executive orders will be a multi-year endeavor comprising present and future reporting, planning, and other actions. For example, DOJ officials noted that many of the actions that they took to implement the orders will be ongoing and responsive to additional DHS actions. Specifically, DOJ bases the number of immigration judges and AUSAs detailed to the southern border districts on court caseloads driven by ICE. If ICE hires additional officers and attorneys and arrests and files charges of removability against more foreign nationals, then DOJ may need to staff additional judges and AUSAs to meet caseload needs. 
Existing Fiscal Year 2017 Resources: Many of the initial actions agencies and components took in response to the executive orders fit within their existing fiscal year 2017 budget framework and aligned with their established missions. At the time the executive orders were issued in January and March of 2017, federal agencies were operating under existing continuing appropriations pending enactment of fiscal year 2017 appropriations; therefore, the new administration’s border security and immigration priorities and policies had not yet been incorporated into the budget process. As a result, it is not always possible to disaggregate which fiscal year 2017 funds were used for implementation of the executive orders versus other agency activities. For example, while the orders call for a surge in hiring at CBP and ICE, these agencies regularly hire additional personnel to offset attrition or to meet budget hiring targets as part of their normal operations. We asked agencies to identify budgetary resources they used specifically to address the executive orders. In some cases agencies were able to quantify their expenditures; however, in other cases they could not. For example, according to DOJ officials, the Executive Office for Immigration Review, which conducts immigration court proceedings, spent close to $2.4 million in existing funds to surge approximately 40 immigration judge positions to detention centers and the southwest border from March through October 2017 in response to Executive Order 13768. DHS’s USCIS reported expending approximately $4.2 million detailing asylum officers to immigration detention facilities along the southern border from February 2017 through February 2018. Additionally, as a result of the 120-day suspension of refugee admissions, State cancelled airline tickets for previously approved refugee applicants, which resulted in nearly $2.4 million in cancellation and unused ticket fees. 
State officials noted that, aside from the ticket costs, other budgetary costs associated with implementing the order are difficult to disaggregate from other processing activities. For example, any budgetary costs associated with refugees who were admitted on a case-by-case basis were absorbed into overseas processing budgets. In some cases, agencies also identified cost savings or avoidances. For example, State reported a total cost avoidance of over $160 million in fiscal year 2017, partially as a result of admitting fewer refugees than originally planned under the prior administration. While the costs above were part of agencies’ normal operations, we identified one case where Congress approved a DHS request to reprogram $20 million from existing programs to fund the planning and design of new physical barriers along the border, including prototype design and construction. Specifically, CBP reprogrammed $15 million from funds originally requested for Mobile Video Surveillance System deployments and $5 million from a border fence replacement project in Naco, Arizona. Additionally, we identified another case where DHS shifted funds and notified Congress, but determined Congressional approval for reprogramming was not required. Specifically, in response to Executive Order 13768, the Secretary of Homeland Security directed ICE to reallocate any and all resources used to advocate on behalf of potentially removable foreign nationals (except as necessary to comply with a judicial order) to the new VOICE office. As part of this effort, ICE’s Office of the Principal Legal Advisor determined that the creation of the VOICE office fell within ICE’s authority to carry out routine or small reallocations of personnel or functions. According to officials at DHS, DOJ, and State, there were no additional requests to reprogram or transfer funds to implement the executive orders. 
DHS budget officials stated that any future requests from DHS components to reprogram or transfer funds would typically be considered at the midway point in the budget cycle. All three agencies indicated that they used existing personnel to implement the executive orders and, in some cases, a substantial amount of time was spent preparing reports, planning to implement provisions, and responding to changes or new developments in the executive orders. For example, USCIS officials noted that the agency devoted a significant number of manpower hours to aligning USCIS priorities to the executive orders. ICE’s Office of Human Capital established a dedicated executive order hiring team to plan for the hiring surge directed by Executive Order 13768. Additionally, officials at State told us that personnel were diverted from normal operations in order to implement executive order policy actions and that there were overtime costs associated with some provisions. In most cases, agencies did not specifically track or quantify the amount of time spent on these efforts; however, ICE’s Office of Human Capital tracked the amount of time spent on planning for the potential surge in ICE hiring in its human resource data system. According to ICE information, ICE personnel charged approximately 14,000 regular hours (the equivalent of 1,750 8-hour days) and 2,400 overtime hours to this effort from January 2017 through January 2018. Fiscal Year 2017 Request for Supplemental Appropriations: In March 2017, the President submitted a budget amendment along with a request for $3 billion in supplemental appropriations for DHS to implement the executive orders and address border protection activities. In May 2017, an additional appropriation of approximately $1.1 billion was provided in response to this request, some of which DHS used to fund actions to implement the orders. 
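The ICE time-tracking figures reported above reduce to simple arithmetic; a minimal Python sketch (illustrative only, using the figures as stated in the report) confirms the 8-hour-day equivalence:

```python
# Time ICE personnel charged to executive order hiring planning,
# January 2017 through January 2018 (figures from the report).
regular_hours = 14_000
overtime_hours = 2_400

eight_hour_days = regular_hours / 8           # GAO's day-equivalent conversion
total_hours = regular_hours + overtime_hours  # regular plus overtime

print(f"{eight_hour_days:.0f} eight-hour days")  # 1750 eight-hour days
print(f"{total_hours} total hours charged")      # 16400 total hours charged
```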
For example, CBP received $65.4 million for hiring and, according to CBP officials, used these funds to plan and prepare for the surge in Border Patrol agents directed by Executive Order 13767. As of January 2018, CBP had obligated $18.8 million and expended $14.1 million of the $65.4 million it received. Additionally, ICE received $147.9 million for custody operations. At the end of fiscal year 2017, ICE had obligated and expended nearly all—over 99.9 percent—of the funds it received. Fiscal Years 2018 and 2019 Budget Requests and Fiscal Year 2018 Appropriations: Agency officials anticipate additional costs to further implement the executive orders and expect that certain provisions will require a multi-year effort. According to DHS officials, the agency expects to incorporate executive order implementation into its annual strategic and budgetary planning processes. DHS officials also noted that additional future planning and funds will be needed to fully implement actions in the orders. Agencies plan to continue to use their base budgets as well as request additional funds as needed to carry out their mission. Examples of DHS and DOJ fiscal year 2018 budget requests and appropriations to implement executive order provisions are listed below. CBP requested $1.6 billion and in the Consolidated Appropriations Act, 2018, received approximately $1.3 billion to build new and replace existing sections of physical barriers along the southern border. CBP also projected out-year funding for construction along certain segments of the border through 2024. ICE requested $185.9 million for approximately 1,000 new immigration officers and 606 support staff. ICE’s fiscal year 2018 appropriation included $15.6 million to support the hiring of 65 additional investigative agents, as well as 70 attorneys and support staff. DOJ requested approximately $7.2 million to hire additional attorneys in support of the orders. 
According to DOJ officials, DOJ received sufficient funds in the fiscal year 2018 budget to meet the hiring goal for attorneys. DHS and DOJ also requested funds for fiscal year 2019 to implement executive order provisions, examples of which are listed below. ICE requested $571 million to hire 2,000 immigration officers (including 1,700 deportation officers and 300 criminal investigators) and 1,312 support staff (including attorneys). DOJ requested $1.1 million for 17 paralegal support positions to support the additional attorneys requested in the fiscal year 2018 request. DOJ also requested approximately $40 million to hire new immigration judges and their supporting staff, citing an increase of more than 25 percent in new cases brought forward by DHS over the course of fiscal year 2017. DHS and DOJ components that were not directly tasked with responsibilities in the executive orders have also begun to plan for potential effects as agencies implement the orders. For example, as CBP and ICE work to meet the hiring surge in the orders, USMS anticipates a likely increase in the number of individuals who are charged with criminal immigration offenses and detained pending trial, resulting in a corresponding increase in its workload. USMS developed a multi-year impact statement that projected possible effects on USMS prisoner operations, judicial security, and investigative operations. According to DOJ officials, these efforts may inform USMS's budget requests and future year planning. For example, for fiscal year 2018, USMS requested approximately $9 million to hire 40 USMS deputies to support the executive orders. For fiscal year 2019, USMS projected that the administration's policies to increase immigration enforcement and immigration-related prosecutions could result in an increase of nearly 19,000 prisoners between fiscal year 2017 and fiscal year 2019 and a corresponding budget increase of approximately $105 million for immigration expenses. 
In addition, officials at the Federal Law Enforcement Training Centers stated that they coordinated with Border Patrol and ICE to assess future training needs and project future resource requirements based on the hiring assumptions in the executive orders. For example, the Federal Law Enforcement Training Centers requested an increase of $29 million in fiscal year 2018 and $25.7 million in fiscal year 2019 for tuition and training requirements to implement the executive orders, among other funding requests. Appendix I includes additional information on funds DHS, DOJ, and State have obligated, expended, or shifted to implement provisions of the executive orders. We provided a draft of this report to DHS, DOJ, and State for review and comment. DHS provided written comments, which are reproduced in appendix III; DOJ and State did not provide written comments. In its written comments, DHS discussed resources and legislative authorities the department believes it needs to carry out executive order requirements. All three agencies provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Homeland Security, the Attorney General, and the Secretary of State. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix IV. 
This appendix contains summaries of initial actions that the Department of Homeland Security (DHS), Department of Justice (DOJ), and Department of State (State) took to implement selected programmatic provisions of the President’s executive orders on border security and immigration. These orders include Executive Order 13767, Border Security and Immigration Enforcement Improvements; Executive Order 13768, Enhancing Public Safety in the Interior of the United States; and Executive Order 13780, Protecting the Nation from Foreign Terrorist Entry into the United States. These summaries also contain overviews of budget information related to implementing the executive orders, including obligations, expenditures, and budget requests where available, among other things. Table 5 lists the summaries and the executive order provisions on which they focus. We reviewed the executive orders and placed each provision directed at DHS, DOJ, and State into one of three categories: (1) analyses and reports, (2) policies, and (3) programs. We defined the analyses and reports category as executive order provisions that direct agencies to review and analyze data, policies, processes, and operational mission areas and produce reports. We defined the policies category as executive order provisions that establish new or modify existing policies, guidance, or processes related to border security or immigration. We defined the programs category as tangible, measurable, and quantifiable executive order provisions that implement policies. We confirmed our categorization with each agency, particularly for the programs category, since it was sometimes ambiguous whether provisions would lead to actions that were tangible, measurable, and quantifiable. Specifically, we reviewed agency documentation, such as a DHS inventory of tasks related to the executive orders, and interviewed agency officials. 
In some cases, we moved policy provisions to the programs category if agency efforts to implement the policy were underway. We prepared summaries for each executive order provision or group of provisions we categorized as a program. For each program, we identified actions planned, completed, or underway at DHS, DOJ, and State as of March 2018 through reviewing documentation, interviewing agency officials, and submitting data collection instruments. For each program, we also collected available budgetary costs—specifically, any funds requested, appropriated, obligated, and expended for executive order implementation from January 2017 through March 2018. We reviewed publicly available budget requests, congressional budget justifications, public laws, and budgetary data from agencies' internal data systems. While we were able to identify certain funds directly attributed to the executive order provisions from these documents, it was not always possible to extract funds specifically meant for implementing the executive order provisions from more general budget increase requests, appropriations, or expenditures. To specifically identify funds used for the executive order provisions, we reviewed agency documentation, interviewed agency budget and program officials, and submitted written questions as necessary. In instances where we were unable to differentiate executive order provision funds from regular operating funds, we identified the larger account used for executive order funds and noted this distinction. We analyzed agency documentation on the policies, procedures, and processes for maintaining budgetary data and interviewed agency officials about their data collection practices to assess the reliability of these data. We determined that the data were sufficiently reliable for our purposes.

Action Overview
CBP has taken initial steps to plan, design, and construct new and replacement physical barriers on the southern border. 
For instance, CBP began the acquisition process for a Border Wall System Program, including developing plans to construct barrier segments and awarding eight task orders with a total value of over $3 million to design and construct barrier prototypes (four made from concrete and four made from non-concrete materials). CBP selected San Diego, California, as the first segment and plans to replace an existing 14 miles of primary and secondary barriers. DHS plans to use fiscal year 2017 funding for the replacement of the primary barrier, which it plans to rebuild to existing—as opposed to prototype—design standards. In January 2018, DHS leadership also approved cost, schedule, and performance goals for a second segment in the Rio Grande Valley in Texas, which will extend an existing barrier with 60 miles of new fencing. The Consolidated Appropriations Act, 2018, stated that fiscal year 2018 funds for primary pedestrian fencing are only available for "operationally effective designs deployed as of ," such as steel bollard fencing currently deployed in areas of the border. As of April 2018, CBP and DHS were evaluating what, if any, impact this direction will have on the department's plans, according to DHS officials. Additionally, DHS waived specific legal restrictions, such as environmental restrictions, in order to begin construction of barriers in the El Centro and San Diego Border Patrol sectors in California and the Santa Teresa, New Mexico, segment of the El Paso Border Patrol Sector. DHS also completed a categorical exclusion for replacement of a segment of existing barriers in El Paso, Texas.

Budget Overview
To fund the barrier prototypes, Congress approved a DHS request to reprogram $20 million in fiscal year 2017. Specifically: CBP reprogrammed $15 million from funds originally requested for Mobile Video Surveillance System deployments. 
The funds were originally part of the fiscal year 2015/2017 Border Security Fencing, Infrastructure, and Technology (BSFIT) Development and Deployment funding and were available due to a contract bid protest and delays associated with the Mobile Video Surveillance System Program. CBP also reprogrammed $5 million from funds originally intended for a fence replacement project in Naco, Arizona. The funds were part of fiscal year 2016 BSFIT Operations and Maintenance funding and were available as a result of unanticipated contract savings. The Naco Fence Replacement project will be completed within its original scope, according to CBP documentation. DHS also received an appropriation in fiscal year 2017 to replace existing fencing and to install new gates; and an appropriation in fiscal year 2018 for border barrier planning and design, and to replace existing fencing and build new barriers. As previously discussed, the Consolidated Appropriations Act, 2018, limited the use of funds provided for construction of new and replacement primary pedestrian fencing to previously deployed fencing designs. DHS has requested, but has not received, fiscal year 2019 funds for building new barriers. For more information regarding funding for future barrier construction projects along the southern border, see table 6. According to CBP documentation, the total cost to construct the Border Wall System Program over approximately 10 years is $18 billion. DHS headquarters conducted an independent cost estimate for the San Diego and Rio Grande Valley segments of the program, which CBP adopted as the program’s life cycle cost estimate. Acquisition and operations and maintenance costs for the Rio Grande Valley segment were separately described in other DHS documents and are shown in table 7 below. 
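As a quick arithmetic check, the two reprogramming actions described in the budget overview sum to the $20 million Congress approved for the barrier prototypes. The sketch below simply tallies the figures from the report.

```python
# Quick tally of the fiscal year 2017 reprogramming described above.
# All dollar figures come directly from the report.

reprogrammed = {
    "Mobile Video Surveillance System (BSFIT Development and Deployment)": 15_000_000,
    "Naco, Arizona fence replacement (BSFIT Operations and Maintenance)": 5_000_000,
}

total = sum(reprogrammed.values())
assert total == 20_000_000  # matches the $20 million Congress approved

print(f"Total reprogrammed for barrier prototypes: ${total:,}")
```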
Provision: Sections 5 and 6
Sections 5 and 6 pertain to detention facilities and detention of foreign nationals for violations of immigration law, pending the outcome of their proceedings or to facilitate removal. The order directs the Department of Homeland Security (DHS) to take immediate actions to construct, operate, or control facilities to detain foreign nationals at or near the southern border, and assign asylum officers to immigration detention facilities, among other things. Additionally, the order directs the Department of Justice (DOJ) to immediately assign immigration judges to immigration detention facilities.

DHS: U.S. Customs and Border Protection (CBP), U.S. Immigration and Customs Enforcement (ICE), U.S. Citizenship and Immigration Services (USCIS)
DOJ: Executive Office for Immigration Review (EOIR)

ICE and U.S. Border Patrol officials stated they consider custody determinations on a case-by-case basis. Additionally, officials from CBP's Office of Field Operations stated they inspect all applicants for admission in accordance with the Immigration and Nationality Act, as prescribed by the executive order and a February 2017 memorandum the Secretary of Homeland Security issued. ICE, through its Enforcement and Removal Operations directorate, manages the nation's immigration detention system, which houses foreign nationals detained while their immigration cases are pending or after being ordered removed from the country. DOJ's EOIR is responsible for conducting immigration court proceedings, appellate reviews, and administrative hearings, pursuant to U.S. immigration law and regulation. ICE initially intended to increase bed capacity at detention facilities in order to accommodate potential surges in apprehensions that could result from implementation of the executive order. According to ICE officials, ICE identified 1,100 additional beds available at detention facilities already in use. 
However, officials also stated that, as of February 2018, ICE has not needed to use these additional beds due to a decrease in the number of apprehensions. Additionally, ICE officials indicated no acquisition actions were needed because contracts and agreements are in place at existing detention facilities and additional beds are available for excess capacity. CBP and ICE are continuously monitoring bed space requirements based on migration volume. According to ICE officials, as of February 2018, ICE had no additional actions planned to increase bed capacity. DHS’s Office of Strategy, Policy and Plans convened a cross-component meeting to discuss detention standards, which govern the conditions of detainee confinement, according to DHS officials. ICE officials reported that ICE is currently re-writing its national detention standards (the standards applicable at most county jails housing immigration detainees). According to officials, the new standards are intended to make it easier for local jurisdictions to comply with standards without completely re-writing their existing policies to conform to ICE’s requirements. USCIS officials told us they began working with ICE to identify where additional asylum officers were needed based on workload needs and space availability as soon as the executive order was issued in January 2017. From February 2017 through February 2018, USCIS deployed between 30 and 64 asylum officers during any given week along the southern border and continues to do so in response to caseload needs. USCIS continues to monitor and periodically adjust asylum officer staffing requirements, according to USCIS officials. DOJ officials stated that DOJ components coordinated with ICE to identify removal caseloads along the southern border that were large enough to warrant additional immigration judges. 
According to DOJ officials, from March 2017 through October 2017, EOIR detailed approximately 40 immigration judge positions, both in person and by video teleconference, to 19 DHS detention facilities, including many along the southern border, in response to the executive order. DOJ officials further explained that as caseloads fluctuated, some of the details ended, some in-person details were converted to video teleconference, and some details were converted to permanent immigration judge positions. EOIR often details immigration judges for operational reasons; however, officials noted that the scale of this detail mobilization was larger because of the executive order. Fiscal Year 2017: Because Executive Orders 13767 and 13768 were issued during fiscal year 2017, DHS submitted a budget amendment and requested supplemental appropriations to address the needs of the department in support of executive order implementation. The request proposed funding to increase daily immigration detention capacity to 45,700 detention beds by the end of fiscal year 2017. The request stated that the detention capacity was necessary to implement the administration's immigration enforcement policies for removing foreign nationals illegally entering or residing in the United States. ICE: On May 5, 2017, ICE received a supplemental appropriation of $236.9 million for enforcement and removal operations, including $147.9 million for custody operations, $57.4 million for alternatives to detention, and $31.6 million for transportation and removal operations. According to ICE documentation, almost all of the funds from that additional appropriation were obligated and expended at the conclusion of fiscal year 2017, as shown in table 8. USCIS: USCIS documentation estimated that it expended at least $4.2 million detailing asylum officers to immigration detention facilities along the southern border from February 2017 through February 2018. 
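The components of ICE's fiscal year 2017 supplemental appropriation above can be cross-checked against the $236.9 million total. The sketch below uses only the figures stated in the report.

```python
# Cross-check of ICE's fiscal year 2017 supplemental appropriation for
# enforcement and removal operations, using the component amounts above
# (in millions of dollars).

components = {
    "Custody operations": 147.9,
    "Alternatives to detention": 57.4,
    "Transportation and removal operations": 31.6,
}

total = round(sum(components.values()), 1)  # round to avoid float artifacts
print(f"Total supplemental appropriation: ${total} million")  # $236.9 million
```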
Fiscal Year 2018: The President’s budget requested an additional $1.5 billion above the 2017 annualized continuing appropriations level, for expanded detention, transportation, and removal of foreign nationals who enter, or remain in, the United States, in violation of U.S. immigration law. As part of the $1.5 billion requested, the ICE congressional budget justification requested $1.2 billion in additional funds to support an average daily population (ADP) of detainees of 51,379—a 49 percent increase over fiscal year 2016 ADP (34,376). The request stated that Executive Order 13768 and subsequent department guidance were expected to drive increases in the ADP due to the increase in ICE law enforcement officers and an expected increase in the average length of stay at detention facilities. ICE also requested funds for transportation and alternatives to detention. In fiscal year 2018, ICE was appropriated $4.1 billion to support enforcement and removal operations. According to DHS officials, the Consolidated Appropriations Act, 2018, provides funds for an ADP of 40,520 total beds, 10,859 lower than requested. Fiscal Year 2019: The President’s budget requested $2.5 billion for detention and removal capacity. As part of the $2.5 billion requested, ICE’s congressional budget justification states $2.3 billion will support an ADP of 47,000. According to the ICE congressional budget justification, the number of beds will sustain the fiscal year 2017 ADP level (38,106) and provide additional detention capacity stemming from the continued implementation of Executive Order 13768. ICE also requested funds for transportation and alternatives to detention. Prior GAO Work: Our prior work on immigration detention examined ICE’s formulation of its budget request and cost estimate for detention resources. In April 2018, we found errors and inconsistencies in ICE’s calculations for its congressional budget justifications and bed rate model. 
Specifically, we found that ICE made errors in its budget justifications and underestimated the actual bed rate, and that its methods for estimating detention costs did not meet the characteristics of a reliable cost estimate. We also found ICE did not document its methodology for its projected ADP. We recommended that ICE assess and update its adult bed rate and ADP methodology and take steps to ensure that its budget estimating process fully addresses cost estimating best practices. DHS concurred with our recommendations and plans to take actions in response to them. Fiscal Year 2017: DOJ documentation showed it expended approximately $2.4 million detailing immigration judge positions to immigration detention facilities from March 2017 through October 2017, either through video teleconferencing or in person, to adjudicate removal proceedings. EOIR officials explained the funds used were unobligated balances carried over from a prior fiscal year. Fiscal Year 2018: For fiscal year 2018, DOJ requested an increase of $75 million to hire 75 additional immigration judge teams to enhance public safety and law enforcement. According to DOJ officials, the agency received sufficient funds in the fiscal year 2018 budget to meet this hiring goal. Fiscal Year 2019: The fiscal year 2019 President's budget also requests an increase of $40 million for 75 new immigration judge teams at EOIR and nearly $40 million for 338 new prosecuting attorneys at ICE to ensure immigration cases are heard expeditiously. According to the President's budget, these investments are critical to the prompt resolution of newly brought immigration charges and to reducing the backlog of 650,000 cases currently pending in the immigration courts. EOIR's fiscal year 2019 congressional budget justification includes a program increase totaling almost $65 million to provide funding for immigration judges and support staff, as well as information technology efforts. 
This increase supports initiatives that implement Presidential and Attorney General priority areas, among other things. USCIS has discretion to authorize parole for urgent humanitarian reasons or significant public benefit, which it uses to allow an individual who may be inadmissible or otherwise ineligible for admission to come to the United States for a temporary period. USCIS asylum officers adjudicate asylum applications filed with USCIS, and conduct credible and reasonable fear screenings to determine if certain removable foreign nationals may be eligible to seek particular forms of relief or protection in immigration court. Additional Funds Saved and Expended: According to USCIS officials, USCIS saved approximately $274,000 from not renewing contracts to administer the Central American Minors Parole Program. According to USCIS documentation, USCIS expended approximately $70,300 to deploy FDNS officers along the southern border from March 2017 to February 2018.

Action Overview
DHS has taken a number of actions to implement the executive order hiring provisions. Specifically, DHS requested and the Office of Personnel Management approved a number of changes to assist DHS and its components with the executive order hiring directives. These changes include granting CBP and ICE direct hire authority and a special salary rate for polygraphers, among others. DHS's Office of the Chief Human Capital Officer and DHS components' human capital offices also began additional hiring planning, such as refining component-level hiring plans, coordinating on potential joint hiring events, and targeting recruitment efforts at specific groups, such as military veterans. CBP and ICE have also taken the following additional actions: CBP: In November 2017, CBP awarded a contract not to exceed $297 million to Accenture Federal Services LLC to help with law enforcement hiring for all CBP components. 
The contract is structured so the contractor receives a set dollar amount for each law enforcement officer hired—80 percent for each final offer letter and 20 percent for each law enforcement officer who enters on duty. The contractor is to assist CBP in hiring 7,500 qualified agents and officers, including 5,000 Border Patrol agents, 2,000 CBP officers, and 500 Air and Marine Interdiction agents over 5 years. CBP expects Accenture to be fully operational and effectively provide surge hiring capacity by June 2018, according to CBP officials. ICE: According to ICE Office of Human Capital (OHC) officials, OHC is ensuring policies and procedures are in place so that ICE is ready to begin hiring additional immigration officers and support staff if funds are appropriated. In January 2018, ICE OHC also issued a contract solicitation for recruitment, market research, data analytics, marketing, hiring, and onboarding activities. ICE OHC sought to procure comprehensive hiring and recruitment services to assist ICE OHC in meeting the demands required to achieve the executive order's hiring goals and develop efficiencies in current OHC processes. ICE aimed to have a pricing structure similar to that of CBP's Accenture contract, according to the solicitation. Specifically, according to the solicitation, the yet-to-be-selected contractor would receive a set dollar amount for each frontline officer hired—80 percent for each preliminary offer letter and 20 percent for each frontline officer who enters on duty. The contractor would assist ICE in hiring 10,000 law enforcement agents, including 8,500 deportation officers and 1,500 criminal investigators. It would also assist in the hiring of approximately 6,500 support personnel positions. In May 2018, the contract solicitation was cancelled; however, the government anticipates re-soliciting the requirement in fiscal year 2019. 
According to the contract cancellation notice and an ICE OHC official, DHS cancelled the contract due to delays associated with the fiscal year 2018 budget and hiring timelines, as well as the limited number of additional ICE positions funded in the fiscal year 2018 budget. In the interim, ICE is partnering with the Office of Personnel Management to meet the executive order's hiring goals and develop efficiencies in current OHC processes, according to ICE officials. Because Executive Orders 13767 and 13768 were issued during fiscal year 2017, DHS submitted a budget amendment and requested supplemental appropriations to help address the needs of the department in support of executive order implementation. The request included funding for DHS agencies to begin building the administrative capacity necessary to recruit, hire, train, and equip the additional 5,000 Border Patrol agents and 10,000 ICE officers. The Federal Law Enforcement Training Centers (FLETC), which provides training to law enforcement professionals who protect the homeland, including any new ICE and CBP personnel hired as a result of the executive orders, also requested funds to support these efforts. On May 5, 2017, CBP received an additional appropriation of $65.4 million to improve hiring processes for Border Patrol agents, CBP officers, and Air and Marine Operations personnel, and for officer relocation enhancements. Of the $65.4 million appropriated in fiscal year 2017, CBP obligated $18.8 million and expended $14.1 million as of January 2018. While ICE also received additional funding for custody operations, alternatives to detention, and transportation and removal, it did not receive supplemental funds in fiscal year 2017 specifically for hiring. DHS also requested funds for CBP, ICE, and FLETC hiring and training in fiscal year 2018 and fiscal year 2019. For additional details, see table 9. 
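Both hiring-support contracts described above tie payment to two milestones per hire: 80 percent of a set dollar amount at the offer letter and the remaining 20 percent when the officer enters on duty. The sketch below illustrates that split; the per-hire dollar amount is hypothetical, since the report does not state one.

```python
# Illustrative sketch of the two-milestone payment structure described for the
# CBP and ICE hiring-support contracts. The 80/20 split comes from the report;
# the per-hire amount used in the example is hypothetical.

def milestone_payments(per_hire_amount: float) -> tuple[float, float]:
    """Return (offer-letter payment, entry-on-duty payment) for one hire."""
    offer_letter = per_hire_amount * 0.80
    entry_on_duty = per_hire_amount * 0.20
    return offer_letter, entry_on_duty

# Hypothetical example: a $10,000 per-hire amount.
offer, on_duty = milestone_payments(10_000)
print(offer, on_duty)  # 8000.0 2000.0
```

Note that under this structure the contractor is paid nothing for a candidate who never reaches the final offer letter, and forfeits the final 20 percent for a hire who never enters on duty.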
According to FLETC officials, the total average cost to provide basic law enforcement training varies by agency and position, as shown in table 10. FLETC officials noted their partners also provide additional training unique to their missions, which is not included in the costs below.

Action Overview
ICE officials reported expediting review of pending 287(g) requests and approved 46 additional state and local jurisdictions for the program from February 2017 through March 2018, bringing the total to 76 law enforcement agencies in 20 states. See figure 2 for a map of additional jurisdictions approved. Section 10 and Section 8 of Executive Orders 13767 and 13768, respectively, direct the Department of Homeland Security (DHS) to engage with state and local entities to enter into agreements under Section 287(g) of the Immigration and Nationality Act.

DHS: U.S. Immigration and Customs Enforcement (ICE)

The Illegal Immigration Reform and Immigrant Responsibility Act of 1996 added Section 287(g) to the Immigration and Nationality Act, which authorizes ICE to enter into agreements with state and local law enforcement agencies, permitting designated state and local officers to perform immigration law enforcement functions. According to ICE officials, ICE also conducted outreach with state and local officials and identified potential law enforcement partners with whom to enter into possible future 287(g) agreements. U.S. Customs and Border Protection (CBP) officials stated that they agreed to support ICE's program expansion efforts and provided hundreds of viable state and local law enforcement referrals to ICE to assist with this effort. For example, CBP reviewed data and conducted a gap analysis, including a survey, to identify potential law enforcement partners for future 287(g) memorandums of agreement. 
CBP officials further noted that they introduced new language into Operation Stonegarden grant guidance that allows the use of grant funding to support CBP-identified, 287(g) law enforcement operational activities. According to CBP and ICE officials, efforts to develop a 287(g) enforcement model that can be used for this purpose are pending. According to ICE officials, the agency is considering developing a program under which designated local law enforcement officers would be trained and authorized to serve and execute administrative warrants for individuals who are in violation of U.S. immigration laws at the time they are released from state criminal custody. ICE officials indicated that program participants would have limited authority under 287(g). For example, they would not interview individuals regarding nationality and removability, lodge detainers, or process individuals for removal. ICE has not yet finalized the program and it may evolve as ICE further develops the program, according to ICE officials. ICE is also leveraging an existing Basic Ordering Agreement, a procurement tool to expedite acquisition of a substantial, but presently unknown, quantity of supplies or services, according to ICE officials. A Basic Ordering Agreement is not a contract, but rather, is a written instrument of understanding, negotiated between ICE and state and local jurisdictions, to house detainees upon ICE’s issuance and their acceptance of an Immigration Detainer and either a Warrant for Arrest of Alien or Warrant of Removal. For any order placed under the agreement, ICE will reimburse the provider, such as a state or local jurisdiction, for up to 48 hours of detention, under applicable regulations. The rate will be fixed at $50.00 for up to 48 hours of detention. No payment will be made for any detention beyond 48 hours. 
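The reimbursement rule described for the Basic Ordering Agreement is simple: a flat $50.00 covers up to 48 hours of detention, and no payment is made for detention beyond 48 hours. The function below is an illustrative reading of that rule as stated in the report, not ICE's actual billing implementation.

```python
# Sketch of the reimbursement rule described for the Basic Ordering Agreement:
# a flat $50.00 rate for up to 48 hours of detention, with no payment for any
# detention beyond 48 hours. Illustrative only; not ICE's billing system.

FLAT_RATE = 50.00
COVERED_HOURS = 48

def boa_reimbursement(hours_detained: float) -> float:
    """Return the reimbursement owed for one detention under the stated rule."""
    if hours_detained <= 0:
        return 0.0
    # The flat rate applies whether detention lasts 1 hour or the full 48;
    # hours beyond the covered window add nothing.
    return FLAT_RATE

print(boa_reimbursement(12))  # 50.0
print(boa_reimbursement(48))  # 50.0
print(boa_reimbursement(72))  # 50.0 (no payment for the extra 24 hours)
```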
The Secretary of Homeland Security vested authority in CBP to accept state services to carry out certain immigration enforcement functions pursuant to Title 8, United States Code Section 1357(g). According to CBP officials, CBP also joined a 287(g) Program Advisory Board, which reviews and assesses ICE field office recommendations about pending 287(g) applications. According to ICE, participation in the 287(g) program is expected to expand further in fiscal years 2018 and 2019, with additional memorandums of agreement anticipated. In fiscal year 2018, ICE requested $24.3 million for ICE 287(g) program funding. According to the explanatory statement accompanying the Consolidated Appropriations Act, 2018, the 287(g) program was fully funded at the requested level. In fiscal year 2019, ICE requested $75.5 million for ICE 287(g) program funding.

Section 11 of Executive Order 13768 directs DOJ and the Department of Homeland Security (DHS) to develop and implement a program to ensure that adequate resources are devoted to prosecuting criminal immigration offenses, and to develop cooperative strategies to reduce the reach of transnational criminal organizations and violent crime. According to DOJ officials, southern border districts developed guidelines for prioritizing misdemeanor cases involving individuals illegally entering the United States for the first time. However, according to these officials, southern border districts developed these guidelines based on an initial high volume of apprehensions, and when apprehensions decreased the guidelines were no longer necessary and were never published.

DOJ: Executive Office for United States Attorneys (EOUSA)
DHS: Immigration and Customs Enforcement (ICE)

EOUSA provides executive and administrative support for United States Attorneys and Assistant United States Attorneys (AUSAs).
AUSAs conduct trial work, as prosecutors, in cases in which the United States is a party, including prosecution of criminal immigration offenses. EOUSA detailed AUSAs to the Western District of Texas and Arizona, and two AUSAs each to the Southern District of California, the District of New Mexico, and the Southern District of Texas, for a total of 12 details, according to DOJ officials. The first round of details lasted for 6 months, and EOUSA extended the details of one AUSA at each southern border district for an additional 6 months. DOJ officials told us that EOUSA will continue to evaluate the need for additional details along the southern border based on the needs of the districts, as determined by the number of DHS apprehensions. According to DOJ officials, implementation of these provisions is ongoing and will depend largely upon DHS executive order actions—for instance, as DHS hires more enforcement personnel, criminal immigration cases may increase, which could spur a need for more AUSAs. ICE litigates charges of removability against foreign nationals and conducts criminal investigations, including investigations of immigration fraud. The Secretary of Homeland Security released a memorandum with guidance on the enforcement of immigration laws in the United States on February 20, 2017. In response to this memorandum, ICE’s Office of the Principal Legal Advisor sent guidance to its attorneys directing them to prioritize legal services supporting the timely removal of foreign nationals in accordance with Executive Order 13768. The guidance directed ICE to review all cases previously administratively closed based on prosecutorial discretion to determine whether the basis for closure remains appropriate under DHS’s enforcement priorities.
The guidance also directed ICE to coordinate with the Executive Office for Immigration Review to ensure that foreign nationals charged as removable and who meet the enforcement priorities remain on active immigration court dockets and that their cases are completed as expeditiously as possible. In response to the executive orders, ICE Homeland Security Investigations officials stated that the agency began to focus more of its resources on the investigation and criminal prosecution of immigration fraud. ICE Homeland Security Investigations added five new Document and Benefit Fraud Task Forces throughout the nation and directed field offices to increase staffing of task forces. Additionally, ICE is in the process of combining five Benefit Fraud Units into an immigration fraud center—the National Lead Development Center—that will serve as a centralized entity referring cases to the task forces for enforcement action. A summary of DOJ budget increase requests, appropriations, and expenditures related to prosecution priorities in the executive orders that we identified can be found in table 11. The fiscal year 2018 President’s budget request included $19.3 million for 195 attorney positions in ICE’s Office of the Principal Legal Advisor. According to ICE officials, while the Consolidated Appropriations Act, 2018, included funds for 70 positions for the Homeland Security Investigations Law Division, it did not include funds for additional attorney positions for immigration litigation within the Office of the Principal Legal Advisor. The fiscal year 2019 President’s budget request included $39.7 million for additional attorney resources in ICE’s Office of the Principal Legal Advisor.

Provision: Sections 5 and 10

Sections 5 and 10 direct the Department of Homeland Security (DHS) to take action related to immigration enforcement. Specifically, Section 5 directs DHS to prioritize the removal of certain categories of removable foreign nationals.
Section 10 directs DHS to terminate the Priority Enforcement Program (PEP) and reinstitute Secure Communities, among other things.

DHS: U.S. Immigration and Customs Enforcement (ICE), U.S. Customs and Border Protection (CBP)

Under PEP (from 2015 to 2017), ICE issued a request for detainer (with probable cause of removability) or information or transfer, for a priority removable individual, such as one posing a threat to national security or public safety, including a foreign national convicted of a felony, among others, under DHS’s former tiered civil enforcement categories. Under Secure Communities, ICE may issue detainers for removable individuals charged, but not yet convicted, of criminal offenses, in addition to individuals subject to a final order of removal whether or not they have a criminal history. Pursuant to Executive Order 13768, the Secretary of Homeland Security terminated PEP and reinstituted the Secure Communities program. As such, DHS is no longer required to utilize a tiered approach to civil immigration enforcement with direction to dedicate resources to those deemed of highest priority. Instead, under Section 5 of the executive order, various categories of removable individuals are general priorities for removal, and DHS personnel may initiate enforcement actions against all removable persons they encounter. Further, the DHS memorandum implementing this executive order allows ICE, CBP, and USCIS to allocate resources to prioritize enforcement activities within these categories, such as by prioritizing enforcement against convicted felons or gang members. As part of this effort, ICE reported it reviewed policies, regulations, and forms relevant to enforcement priorities. ICE subsequently rescinded prior enforcement priority guidance and issued new guidance directing application of the new approach to immigration enforcement prioritization.
Additionally, ICE eliminated existing forms and created a new form to place detainers on foreign nationals who have been arrested on local criminal charges and for whom ICE possesses probable cause to believe that they are removable from the United States, so that ICE can take custody of such individuals upon release. According to ICE officials, more than 43,300 convicted criminal aliens have been identified and removed through Secure Communities from January 25, 2017 through the end of fiscal year 2017. Pursuant to Executive Order 13768 and in accordance with the Secretary of Homeland Security’s memorandum entitled, Enforcement of the Immigration Laws to Serve the National Interest, ICE’s Enforcement and Removal Operations (ERO) expanded the use of the Criminal Alien Program (CAP) by increasing the use of Criminal Alien Program Surge Enforcement Team (CAPSET) operations, traditional CAP Surge operations, and the Institutional Hearing Program. Specifically, ICE took the following actions:

- ICE ERO conducted four CAPSET operations in Louisiana, Georgia, and California in fiscal year 2017, resulting in a total of 386 encounters, 275 detainers, and 261 charging documents issued, according to ICE documentation.

- ICE ERO field offices conducted CAP Surge operations, which concluded in March 2017. According to ICE documentation, the operations collectively resulted in 2,061 encounters, 668 arrests, 1,307 detainers issued, and 614 charging documents issued.

- ICE, along with the Department of Justice’s Executive Office for Immigration Review and the Federal Bureau of Prisons, expanded the number of Institutional Hearing Program sites by nine, from 12 to 21. As of January 22, 2018, five of the nine Institutional Hearing Program expansion sites were operational.
ICE officials reported that ICE also detailed over 30 percent more officers (79 officers) to support Community Shield efforts, an international law enforcement initiative to combat the growth and proliferation of transnational criminal street gangs, prison gangs, and outlaw motorcycle gangs throughout the United States. According to ICE officials, CAP used existing resources in fiscal year 2017 to support the efforts required by Executive Order 13768. ICE also requested funds in fiscal years 2018 and 2019 for CAP. Specifically, ICE stated in its fiscal year 2018 and 2019 congressional budget justifications that CAP performs its duties in accordance with immigration enforcement priorities defined by Executive Order 13768. In fiscal year 2018, ICE requested $412.1 million for CAP. The Consolidated Appropriations Act, 2018, funded $319.4 million for CAP, $92.6 million less than requested.

Section 9 directs the Department of Justice (DOJ) and the Department of Homeland Security (DHS) to ensure that jurisdictions in willful noncompliance with 8 U.S.C. § 1373 (section 1373) are ineligible to receive federal grants. The section also directs DOJ to take appropriate enforcement action against any entity that violates section 1373, or which has in effect a policy, statute, or practice that prevents or hinders the enforcement of federal law. Before the executive order was issued, DOJ conducted a compliance review of certain jurisdictions relative to 8 U.S.C. § 1373 and issued a report in May 2016 finding that 10 jurisdictions raised compliance concerns. In response, DOJ placed a special condition on certain fiscal year 2016 grant awards, requiring recipients to submit an assessment of their compliance with section 1373. In November 2017, as part of the section 1373 compliance effort predating Executive Order 13768, DOJ sent letters to 29 jurisdictions expressing concern that they may not be in compliance with section 1373, and requesting responses regarding compliance.
In January 2018, DOJ sent 23 follow-up demand letters to jurisdictions seeking further documents to determine whether they are unlawfully restricting information sharing by their law enforcement officers with federal immigration authorities, and stating that failure to respond will result in records being subpoenaed. The Attorney General determined that Section 9 will be applied solely to DOJ or DHS federal grants for jurisdictions willfully refusing to comply with section 1373. Under section 1373, a federal, state, or local government entity or official may not prohibit, or in any way restrict, the exchange of information regarding citizenship or immigration status with DHS. ICE developed weekly Declined Detainer Outcome Reports detailing jurisdictions with the highest volume of declined detainers and a list of sample crimes suspected or determined to have been committed by released individuals. According to ICE officials, ICE identified data processing errors and incorrect detainer information and is working to correct these issues. ICE officials noted that they temporarily suspended the reports, and have not yet determined a specific time frame for future publications. DHS reviewed all DHS grant programs to determine which programs could be conditioned to require compliance with section 1373 and plans to provide this information to the Office of Management and Budget, according to DHS officials. DOJ has not obligated, expended, or requested any additional funds to implement Executive Order 13768, section 9(a). The fiscal year 2019 President’s budget proposed to amend the Illegal Immigration Reform and Immigrant Responsibility Act of 1996 to condition DHS and DOJ grants and cooperative agreements on state and local governments’ cooperation with immigration enforcement.
Section 2 directed multiple agencies, including the Department of State (State) and Department of Homeland Security (DHS), to conduct a worldwide review to identify any additional information needed from each foreign country to adjudicate immigration benefit applications and ensure that individuals applying for a visa or other immigration benefit are not a security or public safety threat. It also directed the agencies to send a report of the findings of the worldwide review to the President. This section further established visa entry restrictions applicable to foreign nationals from Iran, Libya, Somalia, Sudan, Syria, and Yemen for a 90-day period. It also stated that agencies, including State and DHS, could continue to submit additional countries for inclusion in visa entry restrictions. Section 5 required agencies, including State, DHS, and the Department of Justice (DOJ), to develop a uniform baseline for screening and vetting to identify individuals seeking to enter the United States on a fraudulent basis or who support terrorism or otherwise pose a danger to national security or public safety. Agencies assessed foreign governments’ information-sharing and identity-management practices based on the criteria identified above. In July 2017, State directed its posts to inform their respective host governments of the new information-sharing criteria and request that host governments provide the required information or develop a plan to do so. CA directed posts to engage more intensively with countries whose information-sharing and identity-management practices were preliminarily deemed “inadequate” or “at risk” and to submit an assessment of mitigating factors or specific interests that should be considered in the deliberations regarding any travel restrictions. According to officials, State and its posts will continue to engage with foreign countries to address information-sharing and identity-management deficiencies.
State: Bureau of Consular Affairs (CA), DHS, and DOJ

CA provides consular services in reviewing and adjudicating visa applications for those seeking to enter the United States. DHS adjudicates visa petitions, and DHS and DOJ also play roles in screening and vetting applicants. DHS and DOJ, along with State, are responsible for implementing the enhanced screening and vetting protocols established under the executive order. State implemented the executive order’s 90-day entry restrictions from June 29, 2017 through September 24, 2017. During the implementation period, if an applicant was found ineligible for a visa on other grounds unrelated to the executive order, such as prior criminal activity or immigration violations, the applicant would be refused the visa on those grounds, according to State officials. If the applicant was found to be otherwise eligible for the visa and did not qualify for an exemption or a waiver under the executive order, he or she would be refused on the basis of the executive order. CA sent several cables to posts with guidance on implementing the 90-day travel restriction, including operational guidance and updated guidance following court decisions. CA also offered trainings to consular posts on implementation of the order. A series of legal challenges ultimately led to the June 26, 2017 Supreme Court decision prohibiting enforcement of entry restrictions against foreign nationals who could credibly claim a bona fide relationship with a person or entity in the United States. On September 24, 2017, pursuant to section 2(e) of Executive Order 13780, the President issued Presidential Proclamation 9645, which established conditional restrictions on U.S. entry for certain categories of nationals from Chad, Iran, Libya, North Korea, Syria, Venezuela, Yemen, and Somalia, for an indefinite period.
According to State officials, State, DHS, DOJ, and other agencies formed a working group and developed a uniform baseline for screening and vetting standards and procedures to ensure ineligible individuals are not permitted to enter the United States, and are implementing the new requirements. The working group conducted a review of the visa screening and vetting process and established uniform standards for (1) applications, (2) interviews, and (3) system security checks, including biographic and biometric checks. According to State officials, for applications, the group identified data elements against which applicants are to be screened and vetted. For interviews, the working group established a requirement for all applicants to undergo a baseline uniform national security and public safety interview. The working group modeled its interview baseline on elements of the refugee screening interview. As of June 2017, State collected most of the data elements online for immigrant and nonimmigrant visas, according to State officials. The President issued a memorandum on February 6, 2018, directing DHS, in coordination with State, DOJ, and the Office of the Director of National Intelligence, to establish a national vetting center to coordinate agency vetting efforts to identify individuals who pose a threat to national security, border security, homeland security, and public safety. The National Vetting Center will be housed in DHS and will leverage the capabilities of the U.S. intelligence community to identify, and prevent entry of, individuals who may pose a threat to national security. On February 14, 2018, the Secretary of Homeland Security appointed a director for the National Vetting Center. The Secretary also delegated authorities of the National Vetting Center to the Commissioner of U.S. Customs and Border Protection.
State officials said that personnel worked overtime to implement Section 2 and the subsequent Presidential Proclamation, but did not identify monetary costs or budget increases associated with implementation. DHS also dedicated several full-time staff positions to developing and implementing enhanced screening and vetting protocols, and DHS employees worked overtime to implement these provisions, according to officials. Section 6 directed the Department of State (State) to suspend travel of refugees seeking to enter the United States, and the Department of Homeland Security (DHS) to suspend adjudications on refugee applications, for 120 days. Section 6 further ordered that during the 120-day period, State, together with DHS and the Office of the Director of National Intelligence, review the refugee application and adjudication process to identify and implement additional procedures to ensure that refugees seeking entry into the United States under the United States Refugee Admissions Program (USRAP) do not pose a threat to U.S. security and welfare. This section also capped annual refugee admissions at 50,000 in fiscal year 2017. State generally suspended travel of refugees into the United States from June 29, 2017 through October 24, 2017. State coordinated with DHS, the Office of the Director of National Intelligence, and other security vetting agencies on the 120-day review of the USRAP application and adjudication process to determine what additional procedures should be used to ensure that individuals seeking admission as refugees do not pose a threat to the security and welfare of the United States, according to State officials. Upon completion of the review, the agencies submitted a joint memorandum to the President. The United States admitted 53,716 refugees in fiscal year 2017, according to State officials.
Throughout fiscal year 2017, State issued guidance that steered the refugee admissions program toward different refugee arrival goals at different times, due to court decisions and budget considerations. Prior to the issuance of Executive Order 13769, which, after largely being blocked nationwide by a federal court injunction, was revoked and replaced by Executive Order 13780, State’s Bureau of Population, Refugees, and Migration (PRM) operated at the rate of 110,000 refugees per year. After the issuance of Executive Orders 13769 and 13780, PRM officials noted that at times, State made no bookings for refugee arrivals, and at other times made bookings based on 50,000 arrivals as well as 110,000 arrivals. A series of legal challenges and resulting court injunctions culminated in the June 26, 2017, Supreme Court order limiting State’s implementation of this section to prospective refugees without bona fide ties to the United States. These legal challenges, together with budget uncertainties, meant that State could not implement the refugee travel suspension or the 50,000-person admissions cap on the timeline set in the executive order. Figure 3 below shows key milestones related to this section of the order. The USRAP resettles refugees to the United States in accordance with a refugee admission ceiling set by the President each year. PRM is responsible for coordinating and managing the USRAP. USCIS is responsible for adjudicating refugee applications. According to USCIS officials, USCIS is implementing new requirements and vetting procedures for refugees. For example, these officials stated that USCIS is accessing more detailed biographical information earlier in the vetting process. Additionally, these officials noted that USCIS’s Fraud Detection and National Security unit is conducting additional reviews of applicants, including checking social media and other information against various databases. USCIS officials further noted that USCIS’s International Operations office sent guidance to the field that established the logistical requirements of the new procedures.
As of April 2018, USCIS was finalizing further guidance and training officers for the enhanced review and vetting procedures, according to USCIS officials. State officials said that State and DHS executed four categories of exemptions during the 120-day USRAP suspension: a Congolese woman with a life-threatening illness and her family; 29 unaccompanied refugee minors; 17 Yezidis and other religious minorities in northern Iraq who had been victims of ISIS; and 53 individuals on Nauru and Manus Islands.

State extended Consular Fellows’ Limited Non-Career Appointments by 12 months. In October 2017, State approved extending offers for follow-on 60-month Limited Non-Career Appointments to Consular Fellows who complete a successful initial 60-month appointment. State officials noted the first officer to accept a follow-on appointment was sworn in during April 2018. CA and State’s Bureau of Human Resources updated the CA Limited Non-Career Appointments handbook to include an implementation plan for extending such appointments and, according to officials, for providing language training outside the applicant’s area of core linguistic ability. Consular Fellows serve in U.S. embassies and consulates overseas and primarily adjudicate visa applications for foreign nationals. The Visa Interview Waiver Program formerly waived in-person interviews for certain categories of visa applicants. In early 2017, State streamlined the application process for Consular Fellows and realigned resources to expedite their security clearance process, according to CA officials. From February 2017 through February 2018, State hired 134 new Consular Fellows, according to CA officials. Additionally, State officials said that they expect to hire 120 more Consular Fellows for the remainder of fiscal year 2018. In August 2017, the Foreign Service Institute created a 12-week Spanish Language program for Consular Fellows who received certain scores on the Spanish language exam, according to CA officials.
Eleven Consular Fellows completed the program in January 2018 and 20 more are expected to complete the program in July 2018, according to CA officials. As of January 2018, five Consular Fellows were being trained in a language outside their core linguistic ability, according to CA officials. While these actions were taken to support implementation of the executive order, CA officials also told us that hiring Consular Fellows has been a State priority for some time. CA officials said that the bureau has hired an increasing number of Consular Fellows to meet worldwide visa demand since 2012, and that providing consular services is one of State’s highest priorities, as well as a national security imperative. According to CA officials, because the Consular Fellows program is entirely funded by non-appropriated consular fees, subject to fluctuating demand for passports and visas, the expansion of the program did not have appropriations impacts. However, officials did provide per-unit costs associated with aspects of expanding the Consular Fellows program. For example, Consular Fellows’ salaries range from approximately $48,000 to approximately $98,000, and Foreign Service Institute language courses last from 24 to 36 weeks, at a cost of $1,700 per week, per student.

Executive Orders 13767 (Border Security and Immigration Enforcement Improvements), 13768 (Enhancing Public Safety in the Interior of the United States), and 13780 (Protecting the Nation from Foreign Terrorist Entry into the United States) include reporting requirements for the Department of Homeland Security (DHS), the Department of State (State), and the Department of Justice (DOJ). Table 13 lists completed reports as of April 2018, according to DHS, State, and DOJ officials.

In addition to the contact named above, Taylor Matheson (Assistant Director), Sarah Turpin (Analyst-in-Charge), Isabel Band, and Kelsey Hawley made key contributions to this report, along with David Alexander, Eric Hauswirth, Sasan J.
“Jon” Najmi, Kevin Reeves, and Adam Vogt.
|
In January and March 2017, the President issued a series of executive orders related to border security and immigration. The orders direct federal agencies to take a broad range of actions with potential resource implications. For example, Executive Order 13767 instructs DHS to construct a wall or other physical barriers along the U.S. southern border and to hire an additional 5,000 U.S. Border Patrol agents. Executive Order 13768 instructs federal agencies, including DHS and DOJ, to ensure that U.S. immigration law is enforced against all removable individuals and directs ICE to hire an additional 10,000 immigration officers. Executive Order 13780 directs agencies to develop a uniform baseline for screening and vetting standards and procedures, and established nationality-based entry restrictions with respect to visa travelers for a 90-day period and refugees for 120 days. GAO was asked to review agencies' implementation of the executive orders and related spending. This report addresses (1) actions DHS, DOJ, and State have taken, or plan to take, to implement provisions of the executive orders; and (2) resources to implement provisions of the executive orders, particularly funds DHS, DOJ, and State have obligated, expended, or shifted. GAO reviewed agency planning, tracking, and guidance documents related to the orders, as well as budget requests, appropriations acts, and internal budget information. GAO also interviewed agency officials regarding actions and budgetary costs associated with implementing the orders. The Departments of Homeland Security (DHS), Justice (DOJ), and State issued internal and public reports such as studies and progress updates, developed or revised policies, and took initial planning and programmatic actions to implement Executive Orders 13767, 13768, and 13780. For example: DHS's U.S.
Customs and Border Protection (CBP) started the acquisition process for a Border Wall System Program and issued task orders to design and construct barrier prototypes. In November 2017, CBP awarded a contract worth up to $297 million to help with hiring 5,000 U.S. Border Patrol agents, 2,000 CBP officers, and 500 Air and Marine Operations agents. DOJ issued memoranda providing guidance for federal prosecutors on prioritizing certain immigration-related criminal offenses. Additionally, from March through October 2017, DOJ detailed approximately 40 immigration judge positions to detention centers and to the southern border to conduct removal and other related proceedings, according to DOJ officials. State participated in an interagency working group to develop uniform standards related to the adjudication of visa applications, interviews, and system security checks. State also implemented visa and refugee entry restrictions in accordance with the Supreme Court's June 26, 2017, ruling. Agency officials anticipate that implementing the executive orders will be a multi-year endeavor comprising additional reporting, planning, and other actions. DHS, DOJ, and State used existing fiscal year 2017 resources to support initial executive order actions that fit within their established mission areas. GAO found that it was not always possible to disaggregate which fiscal year 2017 funds were used for implementation of the orders versus other agency activities. All three agencies indicated that they used existing personnel to implement the orders and, in some cases, these efforts took substantial time. For example, according to ICE data, personnel spent about 14,000 regular hours (the equivalent of 1,750 8-hour days) and 2,400 overtime hours planning for the ICE hiring surge from January 2017 through January 2018. In March 2017, the President submitted a budget amendment along with a request for $3 billion in supplemental appropriations for DHS to implement the orders. 
In May 2017, DHS received an appropriation of just over $1.1 billion, some of which DHS used to fund actions to implement the orders. For example, CBP received $65 million for hiring and, according to CBP officials, used these funds to plan and prepare for the surge in U.S. Border Patrol agents. As of January 2018, CBP had obligated $18.8 million of the $65 million. Agencies plan to continue to use their base budgets and request additional funds as needed to carry out their missions and implement the orders. For example, for fiscal year 2018, CBP requested approximately $1.6 billion and received (in March 2018) approximately $1.3 billion to build new and replace existing sections of physical barriers along the southern border. For fiscal year 2019, ICE requested $571 million to hire 2,000 immigration officers and DOJ requested approximately $40 million to hire new immigration judges and supporting staff.
|
IT systems supporting federal agencies and our nation’s critical infrastructures are inherently at risk. These systems are highly complex and dynamic, technologically diverse, and often geographically dispersed. This complexity increases the difficulty in identifying, managing, and protecting the numerous operating systems, applications, and devices comprising the systems and networks. Compounding the risk, federal systems and networks are also often interconnected with other internal and external systems and networks, including the Internet. This increases the number of avenues of attack and expands their attack surface. As systems become more integrated, cyber threats will pose an increasing risk to national security, economic well-being, and public health and safety. Advancements in technology, such as data analytics software for searching and collecting information, have also made it easier for individuals and organizations to correlate data (including PII) and track it across large and numerous databases. For example, social media has been used as a mass communication tool where PII can be gathered in vast amounts. In addition, ubiquitous Internet and cellular connectivity makes it easier to track individuals by allowing easy access to information pinpointing their locations. These advances—combined with the increasing sophistication of hackers and others with malicious intent, and the extent to which both federal agencies and private companies collect sensitive information about individuals—have increased the risk of PII being exposed and compromised. Cybersecurity incidents continue to impact entities across various critical infrastructure sectors. For example, in its 2018 annual data breach investigations report, Verizon reported that 53,308 security incidents and 2,216 data breaches were identified across 65 countries in the 12 months since its prior report. 
Further, the report noted that cybercriminals can often compromise a system in a matter of minutes, or even seconds, but that it can take an organization significantly longer to discover the breach. Specifically, the report stated that nearly 90 percent of the reported breaches occurred within minutes, while nearly 70 percent went undiscovered for months.

These concerns are further highlighted by the number of information security incidents reported by federal executive branch civilian agencies to DHS's U.S. Computer Emergency Readiness Team (US-CERT). In its 2018 annual report to Congress, mandated by the Federal Information Security Modernization Act (FISMA), the Office of Management and Budget (OMB) reported that agencies reported 35,277 such incidents for fiscal year 2017. These incidents include, for example, web-based attacks, phishing, and the loss or theft of computing equipment.

Different types of incidents merit different response strategies. However, if an agency cannot identify the threat vector (or avenue of attack), it can be difficult for that agency to define specific handling procedures for responding to the incident and to take actions to minimize similar future attacks. In this regard, incidents with a threat vector categorized as "other" (which includes avenues of attack that are unidentified) made up 31 percent of the incidents reported to US-CERT. Figure 1 shows the percentage of incidents reported across each of the nine threat vector categories for fiscal year 2017, as reported by OMB.

These incidents and others like them can pose a serious challenge to economic, national, and personal privacy and security. The following examples highlight the impact of such incidents:

- In March 2018, the Mayor of Atlanta, Georgia reported that the city was victimized by a ransomware cyberattack. As a result, city government officials stated that customers were not able to access multiple applications used to pay bills or access court-related information. In response to the attack, the officials noted that they were working with numerous private and governmental partners, including DHS, to assess what occurred and determine how best to protect the city from future attacks.

- In March 2018, the Department of Justice reported that it had indicted nine Iranians for conducting a massive cybersecurity theft campaign on behalf of the Islamic Revolutionary Guard Corps. According to the department, the nine Iranians allegedly stole more than 31 terabytes of documents and data from more than 140 American universities, 30 U.S. companies, and five federal government agencies, among other entities.

- In March 2018, a joint alert from DHS and the Federal Bureau of Investigation (FBI) stated that, since at least March 2016, Russian government actors had targeted the systems of multiple U.S. government entities and critical infrastructure sectors. Specifically, the alert stated that Russian government actors had affected multiple organizations in the energy, nuclear, water, aviation, construction, and critical manufacturing sectors.

- In July 2017, a breach at Equifax resulted in the loss of PII for an estimated 148 million U.S. consumers. According to Equifax, the hackers accessed people's names, Social Security numbers (SSN), birth dates, addresses and, in some instances, driver's license numbers.

- In April 2017, the Commissioner of the Internal Revenue Service (IRS) testified that the IRS had disabled its data retrieval tool in early March 2017 after becoming concerned about the misuse of taxpayer data. Specifically, the agency suspected that PII obtained outside the agency's tax system was used to access the agency's online federal student aid application in an attempt to secure tax information through the data retrieval tool. In April 2017, the agency began notifying taxpayers who could have been affected by the breach.

- In June 2015, OPM reported that an intrusion into its systems had affected the personnel records of about 4.2 million current and former federal employees. Then, in July 2015, the agency reported that a separate, but related, incident had compromised its systems and the files related to background investigations for 21.5 million individuals. In total, OPM estimated that 22.1 million individuals had some form of PII stolen, with 3.6 million victims of both breaches.

Safeguarding federal IT systems and the systems that support critical infrastructures has been a long-standing concern of GAO. Due to increasing cyber-based threats and the persistent nature of information security vulnerabilities, we have designated information security as a government-wide high-risk area since 1997. In 2003, we expanded this high-risk area to include the protection of critical cyber infrastructure, highlighting the need to manage critical infrastructure protection activities that enhance the security of the cyber and physical public and private infrastructures essential to national security, national economic security, and/or national public health and safety. We further expanded the information security high-risk area in 2015 to include protecting the privacy of PII. Since then, advances in technology have enhanced the ability of government and private sector entities to collect and process extensive amounts of PII, posing challenges to ensuring the privacy of such information. In addition, high-profile PII breaches at commercial entities, such as Equifax, have heightened concerns that personal privacy is not being adequately protected.
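To make the threat-vector statistics cited above concrete, the share of each category is simply its count divided by the total incidents reported. The sketch below is illustrative only: the nine category names follow OMB's fiscal year 2017 FISMA report, but the per-category counts are hypothetical stand-ins chosen so that the total matches the 35,277 reported incidents and the "other" category matches the cited 31 percent figure.

```python
# Illustrative sketch: deriving per-category shares of reported incidents.
# Category names follow OMB's FY 2017 FISMA report; the counts below are
# HYPOTHETICAL, not the actual reported breakdown.
from collections import Counter

incident_counts = Counter({
    "Other (unidentified vector)": 10936,
    "Improper Usage":               6000,
    "E-mail/Phishing":              5000,
    "Loss or Theft of Equipment":   4000,
    "Web":                          3500,
    "Attrition":                    2000,
    "External/Removable Media":     1500,
    "Multiple Attack Vectors":      1500,
    "Impersonation":                 841,
})

total = sum(incident_counts.values())  # 35,277 incidents in FY 2017
shares = {vector: round(100 * count / total, 1)
          for vector, count in incident_counts.items()}

# Print categories from largest to smallest share.
for vector, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{vector:30s} {share:5.1f}%")
```

Because each share is rounded independently, the printed percentages may not sum to exactly 100; official tabulations typically note this.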
Our experience has shown that the key elements needed to make progress toward being removed from the High-Risk List are top-level attention by the administration and agency leaders, grounded in the five criteria for removal, as well as any needed congressional action. The five criteria for removal, which we identified in November 2000, are as follows:

- Leadership Commitment. The agency has demonstrated strong commitment and top leadership support.

- Capacity. The agency has the capacity (i.e., people and resources) to resolve the risk(s).

- Action Plan. A corrective action plan exists that defines the root cause and solutions, and provides for substantially completing corrective measures, including steps necessary to implement the solutions we recommended.

- Monitoring. A program has been instituted to monitor and independently validate the effectiveness and sustainability of corrective measures.

- Demonstrated Progress. The agency is able to demonstrate progress in implementing corrective measures and in resolving the high-risk area.

These five criteria form a road map for efforts to improve and ultimately address high-risk issues. Addressing some of the criteria leads to progress, while satisfying all of the criteria is central to removal from the list. Figure 2 shows the five criteria and illustrative actions agencies have taken to address them. Importantly, the actions listed are not stand-alone efforts taken in isolation from other actions to address high-risk issues; actions taken under one criterion may be important to meeting other criteria as well. For example, top leadership can demonstrate its commitment by establishing a corrective action plan that includes long-term priorities and goals to address the high-risk issue and by using data to gauge progress, actions that are also vital to the monitoring criterion.
As we reported in the February 2017 high-risk report, the federal government's efforts to address information security deficiencies had fully met one of the five criteria for removal from the High-Risk List (leadership commitment) and partially met the other four, as shown in figure 3. We plan to update our assessment of this high-risk area against the five criteria in February 2019.

Based on our prior work, we have identified four major cybersecurity challenges: (1) establishing a comprehensive cybersecurity strategy and performing effective oversight, (2) securing federal systems and information, (3) protecting cyber critical infrastructure, and (4) protecting privacy and sensitive data. To address these challenges, we have identified 10 critical actions that the federal government and other entities need to take (see figure 4). The four challenges and the 10 actions needed to address them are summarized below.

The federal government has been challenged in establishing a comprehensive cybersecurity strategy and in performing effective oversight as called for by federal law and policy. Specifically, we have previously reported that the federal government has faced challenges in establishing a comprehensive strategy to provide a framework for how the United States will engage both domestically and internationally on cybersecurity-related matters. We have also reported on challenges in performing oversight, including monitoring the global supply chain, ensuring a highly skilled cyber workforce, and addressing risks associated with emerging technologies. The federal government can take four key actions to improve the nation's strategic approach to, and oversight of, cybersecurity.

Develop and execute a more comprehensive federal strategy for national cybersecurity and global cyberspace.
In February 2013, we reported that the government had issued a variety of strategy-related documents addressing priorities for enhancing cybersecurity within the federal government, as well as for encouraging improvements in the cybersecurity of critical infrastructure within the private sector; however, no overarching cybersecurity strategy had been developed that articulated priority actions, assigned responsibilities for performing them, and set time frames for their completion. Accordingly, we recommended that the White House Cybersecurity Coordinator in the Executive Office of the President develop an overarching federal cybersecurity strategy that included all key elements of the desirable characteristics of a national strategy, including, among other things, milestones and performance measures for major activities to address stated priorities; the cost and resources needed to accomplish stated priorities; and the specific roles and responsibilities of federal organizations related to the strategy's stated priorities.

In response to our recommendation, in October 2015, the Director of OMB and the Federal Chief Information Officer issued a Cybersecurity Strategy and Implementation Plan for the Federal Civilian Government. The plan directed a series of actions to improve capabilities for identifying and detecting vulnerabilities and threats, enhance protections of government assets and information, and further develop robust response and recovery capabilities to ensure readiness and resilience when incidents inevitably occur. The plan also identified key milestones for major activities, resources needed to accomplish milestones, and specific roles and responsibilities of federal organizations related to the strategy's milestones.

Since that time, the executive branch has made progress toward outlining a federal strategy for confronting cyber threats. Table 1 identifies these recent efforts and describes their related contents.
These efforts provide a good foundation toward establishing a more comprehensive strategy, but more effort is needed to address all of the desirable characteristics of a national strategy that we recommended. The recently issued executive branch strategy documents did not include key elements of desirable characteristics that can enhance the usefulness of a national strategy as guidance for decision makers in allocating resources, defining policies, and helping to ensure accountability. Specifically:

- Milestones and performance measures to gauge results were generally not included in the strategy documents. For example, although the DHS Cybersecurity Strategy stated that its implementation would be assessed on an annual basis, it did not describe the milestones and performance measures for tracking the effectiveness of the activities intended to meet the stated goals (e.g., protecting critical infrastructure and responding effectively to cyber incidents). Without such performance measures, DHS will lack a means to ensure that the goals and objectives discussed in the document are accomplished and that responsible parties are held accountable. According to officials from DHS's Office of Cybersecurity and Communications, the department is developing a plan for implementing the DHS Cybersecurity Strategy and expects to issue the plan by mid-August 2018. The officials stated that the plan is expected to identify milestones, roles, and responsibilities across DHS to inform the prioritization of future efforts.

- The strategy documents generally did not include information regarding the resources needed to carry out the goals and objectives. For example, although the DHS Cybersecurity Strategy identified a variety of actions the agency planned to take to perform its cybersecurity mission, it did not articulate the resources needed to carry out those actions and requirements. Without information on the specific resources needed, federal agencies may not be positioned to allocate such resources and investments and, therefore, may be hindered in their ability to meet national priorities.

- Most of the strategy documents lacked clearly defined roles and responsibilities for key agencies, such as DHS, DOD, and OMB, which contribute substantially to the nation's cybersecurity programs. For example, although the National Security Strategy discusses multiple priority actions needed to address the nation's cybersecurity challenges (e.g., building defensible government networks and deterring and disrupting malicious cyber actors), it does not describe the roles, responsibilities, or expected coordination of any specific federal agencies, including DHS, DOD, or OMB, or of other non-federal entities needed to carry out those actions. Without this information, the federal government may not be able to foster effective coordination, particularly where there is overlap in responsibilities, or hold agencies accountable for carrying out planned activities.

Ultimately, a more clearly defined, coordinated, and comprehensive approach to planning and executing an overall strategy would likely lead to significant progress in furthering strategic goals and lessening persistent weaknesses.

Mitigate global supply chain risks. The global, geographically dispersed nature of the producers and suppliers of IT products is a growing concern. We have previously reported on potential issues associated with the IT supply chain and risks originating from foreign-manufactured equipment. For example, in July 2017, we reported that the Department of State had relied on certain device manufacturers, software developers, and contractor support whose suppliers were reported to be headquartered in cyber-threat nations (e.g., China and Russia).
We further pointed out that the reliance on complex, global IT supply chains introduces multiple risks to federal agencies, including insertion of counterfeits, tampering, or installation of malicious software or hardware. Earlier this month, we testified that if such global IT supply chain risks are realized, they could jeopardize the confidentiality, integrity, and availability of federal information systems. Thus, the potential exists for serious adverse impact on an agency’s operations, assets, and employees. These factors highlight the importance and urgency of federal agencies appropriately assessing, managing, and monitoring IT supply chain risk as part of their agencywide information security programs. Address cybersecurity workforce management challenges. The federal government faces challenges in ensuring that the nation’s cybersecurity workforce has the appropriate skills. For example, in June 2018, we reported on federal efforts to implement the requirements of the Federal Cybersecurity Workforce Assessment Act of 2015. We determined that most of the Chief Financial Officers (CFO) Act agencies had not fully implemented all statutory requirements, such as developing procedures for assigning codes to cybersecurity positions. Further, we have previously reported that DHS and DOD had not addressed cybersecurity workforce management requirements set forth in federal laws. In addition, we have reported in the last 2 years that federal agencies (1) had not identified and closed cybersecurity skills gaps, (2) had been challenged with recruiting and retaining qualified staff, and (3) had difficulty navigating the federal hiring process. A recent executive branch report also discussed challenges associated with the cybersecurity workforce. 
Specifically, in response to Executive Order 13800, the Department of Commerce and DHS led an interagency working group exploring how to support the growth and sustainment of future cybersecurity employees in the public and private sectors. In May 2018, the departments issued a report that identified key findings, including the following:

- The U.S. cybersecurity workforce needs immediate and sustained improvements.
- The pool of cybersecurity candidates needs to be expanded through retraining and by increasing the participation of women, minorities, and veterans.
- A shortage exists of cybersecurity teachers at the primary and secondary levels, faculty in higher education, and training instructors.
- Comprehensive and reliable data about cybersecurity workforce position needs and education and training programs are lacking.

The report also included recommendations and proposed actions to address the findings, including that the private and public sectors should (1) align education and training with employers' cybersecurity workforce needs by applying the National Initiative for Cybersecurity Education Cybersecurity Workforce Framework; (2) develop cybersecurity career model paths; and (3) establish a clearinghouse of information on cybersecurity education, training, and workforce development programs and initiatives.

In addition, in June 2018, the executive branch issued a government reform plan and reorganization recommendations that included, among other things, proposals for solving the federal cybersecurity workforce shortage. In particular, the plan notes that the administration intends to prioritize and accelerate ongoing efforts to reform the way that the federal government recruits, evaluates, selects, pays, and places cyber talent across the enterprise.
The plan further states that, by the end of the first quarter of fiscal year 2019, all CFO Act agencies, in coordination with DHS and OMB, are to develop a critical list of vacancies across their organizations. Subsequently, OMB and DHS are to analyze these lists and work with OPM to develop a government-wide approach to identifying or recruiting new employees or reskilling existing employees. Regarding cybersecurity training, the plan notes that OMB is to consult with DHS to standardize training for cybersecurity employees and to work to develop an enterprise-wide training process for government cybersecurity employees.

Ensure the security of emerging technologies. As the devices used in daily life become increasingly integrated with technology, the risk to sensitive data and PII also grows. Over the last several years, we have reported on weaknesses in addressing vulnerabilities associated with emerging technologies, including the following:

- IoT devices, such as fitness trackers, cameras, and thermostats, that continuously collect and process information are potentially vulnerable to cyber-attacks.
- IoT devices acquired and used by DOD employees, or that DOD itself acquires (e.g., smartphones), may increase security risks to the department.
- Vehicles are potentially susceptible to cyber-attack through technology such as Bluetooth.
- The cybersecurity impacts of artificial intelligence are largely unknown.
- Advances in cryptocurrencies and blockchain technologies introduce new risks.

Executive branch agencies have also highlighted the challenges associated with ensuring the security of emerging technologies. Specifically, in May 2018, in response to Executive Order 13800, the Department of Commerce and DHS issued a report on the opportunities and challenges in reducing the botnet threat. The opportunities and challenges are centered on six principal themes, including the global nature of automated, distributed attacks; effective tools; and awareness and education.
The report also provides recommended actions, including that federal agencies should increase their understanding of what software components have been incorporated into acquired products and establish a public campaign to support awareness of IoT security. In our previously discussed reports related to this cybersecurity challenge, we made a total of 50 recommendations to federal agencies to address the weaknesses identified. As of July 2018, 48 recommendations had not been implemented. These outstanding recommendations include 8 priority recommendations, meaning that we believe that they warrant priority attention from heads of key departments and agencies. These priority recommendations include addressing weaknesses associated with, among other things, agency-specific cybersecurity workforce challenges and agency responsibilities for supporting mitigation of vehicle network attacks. Until our recommendations are fully implemented, federal agencies may be limited in their ability to provide effective oversight of critical government-wide initiatives, address challenges with cybersecurity workforce management, and better ensure the security of emerging technologies. In addition to our prior work related to the federal government’s efforts to establish key strategy documents and implement effective oversight, we also have several ongoing reviews related to this challenge. 
These include reviews of:

- the CFO Act agencies' efforts to submit complete and reliable baseline assessment reports of their cybersecurity workforces;
- the extent to which DOD has established training standards for cyber mission force personnel, and the efforts the department has made to achieve its goal of a trained cyber mission force;
- selected agencies' ability to implement cloud service technologies and the notable benefits this might have for agencies; and
- the federal approach and strategy to securing agency information systems, including federal intrusion detection and prevention capabilities and the intrusion assessment plan.

The federal government has been challenged in securing federal systems and information. Specifically, we have reported that federal agencies have experienced challenges in implementing government-wide cybersecurity initiatives, addressing weaknesses in their information systems, and responding to cyber incidents on their systems. This is particularly concerning given that the emergence of increasingly sophisticated threats and the continuous reporting of cyber incidents underscore the continuing and urgent need for effective information security. As such, it is important that federal agencies take appropriate steps to better ensure they have effectively implemented programs to protect their information and systems. We have identified three actions that agencies can take.

Improve implementation of government-wide cybersecurity initiatives. Specifically, in January 2016, we reported that DHS had not ensured that the National Cybersecurity Protection System (NCPS) fully satisfied all intended system objectives related to intrusion detection and prevention, information sharing, and analytics. In addition, in February 2017, we reported that the DHS National Cybersecurity and Communications Integration Center's (NCCIC) functions were not being performed in adherence with the principles set forth in federal laws.
We noted that, although NCCIC was sharing information about cyber threats in the way it should, the center did not have metrics to measure whether the information was timely, relevant, and actionable, as prescribed by law.

Address weaknesses in federal information security programs. We have previously identified a number of weaknesses in agencies' protection of their information and information systems. For example, over the past 2 years, we have reported that:

- most of the 24 agencies covered by the CFO Act had weaknesses in each of the five major categories of information system controls (i.e., access controls, configuration management controls, segregation of duties, contingency planning, and agency-wide security management);
- three agencies (the Securities and Exchange Commission, the Federal Deposit Insurance Corporation, and the Food and Drug Administration) had not effectively implemented aspects of their information security programs, which resulted in weaknesses in those agencies' security controls;
- information security weaknesses in selected high-impact systems at four agencies (the National Aeronautics and Space Administration, the Nuclear Regulatory Commission, OPM, and the Department of Veterans Affairs) were cited as a key reason that the agencies had not effectively implemented elements of their information security programs;
- DOD's process for monitoring the implementation of cybersecurity guidance had weaknesses that resulted in the closure of certain tasks (such as completing cyber risk assessments) before they were fully implemented; and
- agencies had not fully defined the role of their Chief Information Security Officers, as required by FISMA.

We also recently testified that, although the government had acted to protect federal information systems, additional work was needed to improve agency security programs and cyber capabilities.
In particular, we noted that further efforts were needed by agencies to implement our prior recommendations in order to strengthen their information security programs and technical controls over their computer networks and systems.

Enhance the federal response to cyber incidents. We have reported that certain agencies have had weaknesses in responding to cyber incidents. For example:

- as of August 2017, OPM had not fully implemented controls to address deficiencies identified as a result of its 2015 cyber incidents;
- DOD had not identified the National Guard's cyber capabilities (e.g., computer network defense teams) or addressed challenges in its exercises;
- as of April 2016, DOD had not identified, clarified, or implemented all components of its support of civil authorities during cyber incidents; and
- as of January 2016, DHS's NCPS had limited capabilities for detecting and preventing intrusions, conducting analytics, and sharing information.

In the public versions of the reports previously discussed for this challenge area, we made a total of 101 recommendations to federal agencies to address the weaknesses identified. As of July 2018, 61 recommendations had not been implemented. These outstanding recommendations include 14 priority recommendations to address weaknesses associated with, among other things, the information security programs at the National Aeronautics and Space Administration, OPM, and the Securities and Exchange Commission. Until these recommendations are implemented, these federal agencies will be limited in their ability to ensure the effectiveness of their programs for protecting information and systems.

In addition to our prior work, we also have several ongoing reviews related to the federal government's efforts to protect its information and systems.
These include reviews of:

- Federal Risk and Authorization Management Program (FedRAMP) implementation, including an assessment of the implementation of the program's authorization process for protecting federal data in cloud environments;
- the Equifax data breach, including an assessment of federal oversight of credit reporting agencies' collection, use, and protection of consumer PII;
- the Federal Communications Commission's Electronic Comment Filing System security, including a review of the agency's detection of and response to a May 2017 incident that reportedly impacted the system;
- DOD's efforts to improve the cybersecurity of its major weapon systems;
- DOD's whistleblower program, including an assessment of the policies, procedures, and controls related to the access and storage of sensitive and classified information needed for the program;
- IRS's efforts to (1) implement security controls and the agency's information security program, (2) authenticate taxpayers, and (3) secure tax information; and
- federal intrusion detection and prevention capabilities.

The federal government has been challenged in working with the private sector to protect critical infrastructure. This infrastructure includes both public and private systems vital to national security and other essential functions, such as providing the services that underpin American society. The cybersecurity threat to these systems continues to grow, and because a breach of critical infrastructure could have national security implications, greater efforts are needed to ensure that it is protected; federal agencies also hold millions of sensitive records that must be safeguarded. To help address this issue, NIST developed the cybersecurity framework, a voluntary set of cybersecurity standards and procedures for industry to adopt as a means of taking a risk-based approach to managing cybersecurity. However, additional action is needed to strengthen the federal role in protecting critical infrastructure.
Specifically, we have reported on other critical infrastructure protection issues that need to be addressed. For example:

- Entities within the 16 critical infrastructure sectors reported encountering four challenges to adopting the cybersecurity framework, such as being limited in their ability to commit the necessary resources toward framework adoption and not having the necessary knowledge and skills to effectively implement the framework.
- Major challenges existed to securing the electricity grid against cyber threats. These challenges included monitoring implementation of cybersecurity standards, ensuring that security features are built into smart grid systems, and establishing metrics for cybersecurity.
- DHS and other agencies needed to enhance cybersecurity in the maritime environment. Specifically, DHS did not include cyber risks in its existing risk assessments, nor did it address cyber risks in guidance for port security plans.
- Sector-specific agencies were not properly tracking their sectors' progress in cybersecurity or establishing metrics to measure that progress.
- DOD and the Federal Aviation Administration identified a variety of operations and physical security risks that could adversely affect DOD missions.

We made a total of 19 recommendations to federal agencies to address these weaknesses and others, including, for example, 9 recommendations to 9 sector-specific agencies to develop methods for determining the level and type of cybersecurity framework adoption across their respective sectors. As of July 2018, none of the 19 recommendations had been implemented. Until these recommendations are implemented, the federal government will continue to be challenged in fulfilling its role in protecting the nation's critical infrastructure.
In addition to our prior work related to the federal government's efforts to protect critical infrastructure, we also have several ongoing reviews focusing on:

- the physical and cybersecurity risks to the pipelines across the country responsible for transmitting oil, natural gas, and other hazardous liquids;
- the cybersecurity risks to the electric grid; and
- the privatization of utilities at DOD installations.

The federal government has been challenged in protecting privacy and sensitive data. Advances in technology, including powerful search technology and data analytics software, have made it easy to correlate information about individuals across large and numerous databases, which have become very inexpensive to maintain. In addition, ubiquitous Internet connectivity has facilitated sophisticated tracking of individuals and their activities through mobile devices such as smartphones and fitness trackers.

Given that access to data is so pervasive, personal privacy hinges on ensuring that databases of PII maintained by government agencies, or on their behalf, are protected both from inappropriate access (i.e., data breaches) and from inappropriate use (i.e., use for purposes not originally specified when the information was collected). Likewise, the private sector trend of collecting extensive and detailed information about individuals needs appropriate limits. The vast number of individuals potentially affected by data breaches at federal agencies and private sector entities in recent years heightens concerns that PII is not being properly protected.

Federal agencies should take two types of actions to address this challenge area. In addition, we have previously proposed two matters for congressional consideration aimed at better protecting PII.

Improve federal efforts to protect privacy and sensitive data. We have issued several reports noting that agencies had deficiencies in protecting privacy and sensitive data that needed to be addressed.
For example:
- The Department of Health and Human Services' (HHS) Centers for Medicare and Medicaid Services (CMS) and external entities were at risk of compromising Medicare beneficiary data due to a lack of guidance and proper oversight.
- The Department of Education's Office of Federal Student Aid had not properly overseen its school partners' records or information security programs.
- HHS had not fully addressed key security elements in its guidance for protecting the security and privacy of electronic health information.
- CMS had not fully protected the privacy of users' data on state-based marketplaces.
- Poor planning and ineffective monitoring had resulted in the unsuccessful implementation of government initiatives aimed at eliminating the unnecessary collection, use, and display of SSNs.
Appropriately limit the collection and use of personal information and ensure that it is obtained with appropriate knowledge or consent. We have issued a series of reports highlighting key concerns in this area. For example:
- The emergence of IoT devices can facilitate the collection of information about individuals without their knowledge or consent.
- Federal laws applicable to smartphone tracking applications have not generally been well enforced.
- The FBI had not fully ensured privacy and accuracy in its use of face recognition technology.
We have previously suggested that Congress consider amending laws such as the Privacy Act of 1974 and the E-Government Act of 2002 because they may not consistently protect PII. Specifically, we found that while these laws and related guidance set minimum requirements for agencies, they may not consistently protect PII in all circumstances of its collection and use throughout the federal government and may not fully adhere to key privacy principles. However, revisions to the Privacy Act and the E-Government Act have not yet been enacted. 
Further, we suggested that Congress consider strengthening the consumer privacy framework and reviewing issues such as the adequacy of consumers' ability to access, correct, and control their personal information, as well as privacy controls related to new technologies such as web tracking and mobile devices. However, these suggested changes have not yet been enacted. We also made a total of 29 recommendations to federal agencies to address the weaknesses identified. As of July 2018, 28 of these recommendations had not been implemented, including 6 priority recommendations to address weaknesses associated with, among other things, publishing privacy impact assessments and improving the accuracy of the FBI's face recognition services. Until these recommendations are implemented, federal agencies will be challenged in their ability to protect privacy and sensitive data and to ensure that their collection and use are appropriately limited. In addition to our prior work, we have several ongoing reviews related to protecting privacy and sensitive data. 
These include reviews of:
- IRS's taxpayer authentication efforts, including the steps the agency is taking to monitor and improve its authentication methods;
- the extent to which the Department of Education's Office of Federal Student Aid's policies and procedures for overseeing non-school partners' protection of federal student aid data align with federal requirements and guidance;
- data security issues related to credit reporting agencies, including a review of the causes and impacts of the August 2017 Equifax data breach;
- the extent to which Equifax assessed, responded to, and recovered from its August 2017 data breach;
- federal agencies' efforts to remove PII from shared cyber threat indicators; and
- how the federal government has overseen Internet privacy, including the roles of the Federal Communications Commission and the Federal Trade Commission, and the strengths and weaknesses of the current oversight authorities.
In summary, since 2010, we have made over 3,000 recommendations to agencies aimed at addressing the four cybersecurity challenges. Nevertheless, many agencies continue to be challenged in safeguarding their information systems and information, in part because many of these recommendations have not been implemented. Of the roughly 3,000 recommendations made since 2010, nearly 1,000 had not been implemented as of July 2018. We have also designated 35 of these as priority recommendations, and as of July 2018, 31 had not been implemented. The federal government and the nation's critical infrastructure depend on IT systems and electronic data, which makes them highly vulnerable to a wide and evolving array of cyber-based threats. Securing these systems and data is vital to the nation's security, prosperity, and well-being. Nevertheless, the security over these systems and data is inconsistent, and urgent actions are needed to address ongoing cybersecurity and privacy challenges. 
Specifically, the federal government needs to implement a more comprehensive cybersecurity strategy and improve its oversight, including maintaining a qualified cybersecurity workforce; address security weaknesses in federal systems and information and enhance cyber incident response efforts; bolster the protection of cyber critical infrastructure; and prioritize efforts to protect individuals' privacy and PII. Until our recommendations are implemented and actions are taken to address the four challenges we identified, the federal government, the national critical infrastructure, and the personal information of U.S. citizens will be increasingly susceptible to the multitude of cyber-related threats that exist. Chairmen Meadows and Hurd, Ranking Members Connolly and Kelly, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. Questions about this testimony can be directed to Nick Marinos, Director, Cybersecurity and Data Protection Issues, at (202) 512-9342 or marinosn@gao.gov; and Gregory C. Wilshusen, Director, Information Security Issues, at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Jon Ticehurst, Assistant Director; Kush K. Malhotra, Analyst-In-Charge; Chris Businsky; Alan Daigle; Rebecca Eyler; Chaz Hubbard; David Plocher; Bradley Roach; Sukhjoot Singh; Di'Mond Spencer; and Umesh Thakkar. Information Security: Supply Chain Risks Affecting Federal Agencies. GAO-18-667T. Washington, D.C.: July 12, 2018. Information Technology: Continued Implementation of High-Risk Recommendations Is Needed to Better Manage Acquisitions, Operations, and Cybersecurity. GAO-18-566T. Washington, D.C.: May 23, 2018. 
Electronic Health Information: CMS Oversight of Medicare Beneficiary Data Security Needs Improvement. GAO-18-210. Washington, D.C.: April 5, 2018. Technology Assessment: Artificial Intelligence, Emerging Opportunities, Challenges, and Implications. GAO-18-142SP. Washington, D.C.: March 28, 2018. GAO Strategic Plan 2018-2023: Trends Affecting Government and Society. GAO-18-396SP. Washington, D.C.: February 22, 2018. Critical Infrastructure Protection: Additional Actions are Essential for Assessing Cybersecurity Framework Adoption. GAO-18-211. Washington, D.C.: February 15, 2018. Cybersecurity Workforce: Urgent Need for DHS to Take Actions to Identify Its Position and Critical Skill Requirements. GAO-18-175. Washington, D.C.: February 6, 2018. Homeland Defense: Urgent Need for DOD and FAA to Address Risks and Improve Planning for Technology That Tracks Military Aircraft. GAO-18-177. Washington, D.C.: January 18, 2018. Federal Student Aid: Better Program Management and Oversight of Postsecondary Schools Needed to Protect Student Information. GAO-18-121. Washington, D.C.: December 15, 2017. Defense Civil Support: DOD Needs to Address Cyber Incident Training Requirements. GAO-18-47. Washington, D.C.: November 30, 2017. Federal Information Security: Weaknesses Continue to Indicate Need for Effective Implementation of Policies and Practices. GAO-17-549. Washington, D.C.: September 28, 2017. Information Security: OPM Has Improved Controls, but Further Efforts Are Needed. GAO-17-614. Washington, D.C.: August 3, 2017. Defense Cybersecurity: DOD’s Monitoring of Progress in Implementing Cyber Strategies Can Be Strengthened. GAO-17-512. Washington, D.C.: August 1, 2017. State Department Telecommunications: Information on Vendors and Cyber-Threat Nations. GAO-17-688R. Washington, D.C.: July 27, 2017. Internet of Things: Enhanced Assessments and Guidance Are Needed to Address Security Risks in DOD. GAO-17-668. Washington, D.C.: July 27, 2017. 
Information Security: SEC Improved Control of Financial Systems but Needs to Take Additional Actions. GAO-17-469. Washington, D.C.: July 27, 2017. Information Security: Control Deficiencies Continue to Limit IRS’s Effectiveness in Protecting Sensitive Financial and Taxpayer Data. GAO-17-395. Washington, D.C.: July 26, 2017. Social Security Numbers: OMB Actions Needed to Strengthen Federal Efforts to Limit Identity Theft Risks by Reducing Collection, Use, and Display. GAO-17-553. Washington, D.C.: July 25, 2017. Information Security: FDIC Needs to Improve Controls over Financial Systems and Information. GAO-17-436. Washington, D.C.: May 31, 2017. Technology Assessment: Internet of Things: Status and Implications of an Increasingly Connected World. GAO-17-75. Washington, D.C.: May 15, 2017. Cybersecurity: DHS’s National Integration Center Generally Performs Required Functions but Needs to Evaluate Its Activities More Completely. GAO-17-163. Washington, D.C.: February 1, 2017. High-Risk Series: An Update. GAO-17-317. Washington, D.C.: February 2017. IT Workforce: Key Practices Help Ensure Strong Integrated Program Teams; Selected Departments Need to Assess Skill Gaps. GAO-17-8. Washington, D.C.: November 30, 2016. Electronic Health Information: HHS Needs to Strengthen Security and Privacy Guidance and Oversight. GAO-16-771. Washington, D.C.: September 26, 2016. Defense Civil Support: DOD Needs to Identify National Guard’s Cyber Capabilities and Address Challenges in Its Exercises. GAO-16-574. Washington, D.C.: September 6, 2016. Information Security: FDA Needs to Rectify Control Weaknesses That Place Industry and Public Health Data at Risk. GAO-16-513. Washington, D.C.: August 30, 2016. Federal Chief Information Security Officers: Opportunities Exist to Improve Roles and Address Challenges to Authority. GAO-16-686. Washington, D.C.: August 26, 2016. Federal Hiring: OPM Needs to Improve Management and Oversight of Hiring Authorities. GAO-16-521. 
Washington, D.C.: August 2, 2016. Information Security: Agencies Need to Improve Controls over Selected High-Impact Systems. GAO-16-501. Washington, D.C.: May 18, 2016. Face Recognition Technology: FBI Should Better Ensure Privacy and Accuracy. GAO-16-267. Washington, D.C.: May 16, 2016. Smartphone Data: Information and Issues Regarding Surreptitious Tracking Apps That Can Facilitate Stalking. GAO-16-317. Washington, D.C.: May 9, 2016. Vehicle Cybersecurity: DOT and Industry Have Efforts Under Way, but DOT Needs to Define Its Role in Responding to a Real-world Attack. GAO-16-350. Washington, D.C.: April 25, 2016. Civil Support: DOD Needs to Clarify Its Roles and Responsibilities for Defense Support of Civil Authorities during Cyber Incidents. GAO-16-332. Washington, D.C.: April 4, 2016. Healthcare.gov: Actions Needed to Enhance Information Security and Privacy Controls. GAO-16-265. Washington, D.C.: March 23, 2016. Information Security: DHS Needs to Enhance Capabilities, Improve Planning, and Support Greater Adoption of Its National Cybersecurity Protection System. GAO-16-294. Washington, D.C.: January 28, 2016. Critical Infrastructure Protection: Sector-Specific Agencies Need to Better Measure Cybersecurity Progress. GAO-16-79. Washington, D.C.: November 19, 2015. Critical Infrastructure Protection: Cybersecurity of the Nation’s Electricity Grid Requires Continued Attention. GAO-16-174T. Washington, D.C.: October 21, 2015. Maritime Critical Infrastructure Protection: DHS Needs to Enhance Efforts to Address Port Cybersecurity. GAO-16-116T. Washington, D.C.: October 8, 2015. Cybersecurity: National Strategy, Roles, and Responsibilities Need to Be Better Defined and More Effectively Implemented. GAO-13-187. Washington, D.C.: February 14, 2014. Information Resellers: Consumer Privacy Framework Needs to Reflect Changes in Technology and the Marketplace. GAO-13-663. Washington, D.C.: September 25, 2013. 
Cyberspace: United States Faces Challenges in Addressing Global Cybersecurity and Governance. GAO-10-606. Washington, D.C.: July 2, 2010. Privacy: Alternatives Exist for Enhancing Protection of Personally Identifiable Information. GAO-08-536. Washington, D.C.: May 19, 2008. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Federal agencies and the nation's critical infrastructures—such as energy, transportation systems, communications, and financial services—are dependent on information technology systems to carry out operations. The security of these systems and the data they use is vital to public confidence and national security, prosperity, and well-being. The risks to these systems are increasing as security threats evolve and become more sophisticated. GAO first designated information security as a government-wide high-risk area in 1997. This was expanded to include protecting cyber critical infrastructure in 2003 and protecting the privacy of personally identifiable information in 2015. GAO was asked to update its information security high-risk area. To do so, GAO identified the actions the federal government and other entities need to take to address cybersecurity challenges. GAO primarily reviewed prior work issued since the start of fiscal year 2016 related to privacy, critical federal functions, and cybersecurity incidents, among other areas. GAO also reviewed recent cybersecurity policy and strategy documents, as well as information security industry reports of recent cyberattacks and security breaches. GAO has identified four major cybersecurity challenges and 10 critical actions that the federal government and other entities need to take to address them. GAO continues to designate information security as a government-wide high-risk area due to increasing cyber-based threats and the persistent nature of security vulnerabilities. GAO has made over 3,000 recommendations to agencies aimed at addressing cybersecurity shortcomings in each of these action areas, including protecting cyber critical infrastructure, managing the cybersecurity workforce, and responding to cybersecurity incidents. Although many recommendations have been addressed, about 1,000 have not yet been implemented. 
Until these shortcomings are addressed, federal agencies' information and systems will be increasingly susceptible to the multitude of cyber-related threats that exist.
FEMA’s mission is to help people before, during, and after disasters. It assists those affected by emergencies and disasters by meeting immediate needs (e.g., ice, water, food, and temporary housing) and providing financial assistance grants for damage to personal or public property. FEMA also provides non-disaster assistance grants to improve the nation’s preparedness, readiness, and resilience to all hazards. FEMA accomplishes a large part of its mission by awarding grants to state, local, and tribal governments and nongovernmental entities to help communities prevent, prepare for, protect against, mitigate the effects of, respond to, and recover from disasters and terrorist attacks. As previously mentioned, for fiscal years 2005 through 2014, the agency obligated about $104.5 billion in disaster relief grants. In addition, as of April 2018, the four major disasters in 2017—hurricanes Harvey, Irma, and Maria; and the California wildfires—had resulted in over $22 billion in FEMA grants. The current FEMA grants management environment is highly complex, with many stakeholders, IT systems, and users. Specifically, this environment comprises 45 active disaster and non-disaster grant programs, which are grouped into 12 distinct grant categories. For example, one program in the Preparedness: Fire category is the Assistance to Firefighters Grants (AFG) program, which provides grants to fire departments, nonaffiliated emergency medical service organizations, and state fire training academies to support firefighting and emergency response needs. As another example, the Housing Assistance grant program is in the Recovery Assistance for Individuals category and provides financial assistance to individuals and households in geographical areas that have been declared an emergency or major disaster by the President. Table 1 lists FEMA’s non-disaster and disaster-based grant categories. 
According to FEMA, the processes for managing these different types of grants vary because the grant programs were developed independently under at least 18 separate authorizing laws enacted over a 62-year period (from 1947 through 2009). The various laws call for different administrative and reporting requirements. For example, the Robert T. Stafford Disaster Relief and Emergency Assistance Act, as amended, established the statutory authority for 11 of the grant programs, such as the administration of Public Assistance and Individual Assistance grant programs after a presidentially declared disaster. The act also requires the FEMA Administrator to submit an annual report to the President and Congress covering FEMA’s expenditures, contributions, work, and accomplishments. As another example, the National Dam Safety Program Act established one of the grant programs, aimed at providing financial assistance to improve dam safety. Key stakeholders in modernizing the IT grants management environment include the internal FEMA officials who review, approve, and monitor the grants awarded, such as grant specialists, program analysts, and supervisors. FEMA has estimated that it will need to support about 5,000 simultaneous internal users of its grants management systems. Other users include the grant recipients that apply for, receive, and submit reports on their grant awards; these are considered the external system users. These grant recipients can include individuals, states, local governments, Indian tribes, institutions of higher education, and nonprofit organizations. FEMA has estimated that there are hundreds of thousands of external users of its grants systems. The administration of the many different grant programs is distributed across four divisions within FEMA’s organizational structure. Figure 1 provides an overview of FEMA’s organizational structure and the divisions that are responsible for administering grants. 
Within three of the four divisions—Resilience, United States Fire Administration, and Office of Response and Recovery—16 different grant program offices are collectively responsible for administering the 45 grant programs. The fourth division consists of 10 regional offices that help administer grants within their designated geographical regions. For example, the Office of Response and Recovery division oversees three different offices that administer 13 grant programs that are largely related to providing assistance in response to presidentially declared disasters. Figure 2 shows the number of grant programs administered by each of the four divisions’ grant program and regional offices. In addition, appendix II lists the names of the 45 grant programs. FEMA’s OCIO is responsible for developing, enhancing, and maintaining the agency’s IT systems, and for increasing efficiencies and cooperation across the entire organization. However, we and the DHS Office of Inspector General (OIG) have previously reported that the grant programs and regional offices develop information systems independent of the OCIO and that this has contributed to the agency’s disparate IT environment. We and the DHS OIG have reported that this disparate IT environment was due, in part, to FEMA’s decentralized IT budget and acquisition practices. For example, from fiscal years 2010 through 2015, the OCIO’s budget represented about one-third of the agency’s IT budget, with the grant program offices accounting for the remaining two-thirds of that budget. In February 2018, the OIG found that FEMA had shown limited progress in improving its IT management and that many of the issues reported in prior audits remained unchanged. As such, the OIG initiated a more comprehensive audit of the agency’s IT management that is ongoing. FEMA has identified 10 primary legacy IT systems that support its grants management activities. 
According to the agency, most of these systems were developed to support specific grant programs or grant categories. Table 2 summarizes the 10 primary legacy systems. According to FEMA officials, the 10 primary grant systems are all in operation (several have been for decades) and are not interoperable. As a result, individual grant programs and regional offices have independently developed workarounds intended to address capability gaps in the primary systems. FEMA officials stated that while these workarounds have helped the agency partially address those gaps, they are often nonstandardized processes that introduce the potential for information security risks and errors. This environment has contributed to labor-intensive manual processes and an increased burden for grant recipients. The disparate systems have also led to poor information sharing and reporting capabilities, as well as difficulty reconciling financial data. The DHS OIG and we have previously highlighted challenges with FEMA’s past attempts to modernize its grants management systems. For example:
- In December 2006, the DHS OIG reported that EMMIE, an effort to modernize FEMA’s grants management systems and provide a single grants processing solution, was being developed without a clear understanding and definition of the future solution. The report also identified the need to ensure crosscutting participation from headquarters, regions, and states in developing and maintaining a complete, documented set of FEMA business and system requirements.
- In April 2016, we found weaknesses in FEMA’s development of the EMMIE system. For example, we noted that the system was implemented without sufficient documentation of system requirements, an acquisition strategy, an up-to-date cost estimate and schedule, the total amount spent to develop the system, or a systems integration plan. 
In response to our findings and related recommendations, FEMA took action to address these issues. For example, the agency implemented a requirements management process that, among other things, provided guidance to programs on analyzing requirements to ensure that they are complete and verifiable. We reported in November 2017 that EMMIE lacked the ability to collect information on all pre-award activities; as a result, agency officials said that they and applicants used ad hoc reports and personal tracking documents to manage and monitor the progress of grant applications. FEMA officials added that applicants often struggled to access the system and that the system was not user friendly. Due to EMMIE’s shortfalls, the agency had to develop another system in 2017 to supplement EMMIE with additional grant tracking and case management capabilities. FEMA initiated GMM in 2015, in part because of EMMIE’s failed attempt to modernize the agency’s grants management environment. The program is intended to modernize and streamline the agency’s grants management environment. To help streamline the agency’s grants management processes, the program established a standard framework intended to represent a common grants management lifecycle. The framework consists of five sequential phases—pre-award, award, post-award, closeout, and post-closeout—along with a sixth phase dedicated to continuous grant program management activities, such as analyzing data and producing reports on grant awards and managing IT systems. FEMA also established 43 distinct business functions associated with these six lifecycle phases. Figure 3 shows the general activities that may occur in each grant lifecycle phase; the specific activities depend on the type of grant being administered (i.e., disaster versus non-disaster). GMM is expected to be implemented within the complex IT environment that currently exists at FEMA. 
For example, the program is intended to replace the 10 legacy grants management systems, and potentially many additional subsystems, with a single IT system. Each of the 10 legacy systems was developed with its own database(s) and with no standardization of the grants management data and, according to FEMA officials, this legacy data has grown significantly over time. Accordingly, FEMA will need to migrate, analyze, and standardize the grants management data before transitioning it to GMM. The agency awarded a contract in June 2016 to support the data migration efforts for GMM. The agency also implemented a data staging environment in October 2017 to migrate the legacy data and identify opportunities to improve the quality of the data. Further, the GMM system is expected to interface with a total of 38 other systems. These include 19 systems external to DHS (e.g., those provided by commercial entities or other federal government agencies) and 19 systems internal to DHS or FEMA. Some of the internal FEMA systems are undergoing their own modernization efforts and will need to be coordinated with GMM, such as the agency’s financial management systems, national flood insurance systems, and enterprise data warehouses. For example, FEMA’s Financial Systems Modernization Program was originally expected to deliver a new financial system in time to interface with GMM. However, the financial modernization has been delayed until after GMM is to be fully implemented; thus, GMM will instead need to interface with the legacy financial system. As a result, GMM is in the process of removing one of its key performance parameters in the acquisition program baseline related to financial systems interoperability and timeliness of data exchanged. In May 2017, DHS approved the acquisition program baseline for GMM. 
The baseline estimated the total lifecycle costs to be about $251 million, initial operational capability to be achieved by September 2019, and full operational capability to be achieved by September 2020. FEMA intends to develop and deploy its own software applications for GMM using a combination of commercial-off-the-shelf software, open source software, and custom developed code. The agency plans to rely on an Agile software development approach. According to FEMA planning documentation, the agency plans to fully deliver GMM by September 2020 over eight Agile development increments. Agile development is a type of incremental development, which calls for the rapid delivery of software in small, short increments. Many organizations, especially in the federal government, are accustomed to using a waterfall software development model. This type of model typically consists of long, sequential phases and differs significantly from the Agile development approach. We have previously reported that DHS has sought to establish Agile software development as the preferred method for acquiring and delivering IT capabilities. However, the department has not yet completed critical actions necessary to update its guidance, policies, and practices for Agile programs in areas such as developing lifecycle cost estimates, managing IT requirements, testing and evaluation, oversight at key decision points, and ensuring cybersecurity. (See appendix III for more details on the Agile software development approach.) FEMA’s acquisition approach includes using contract support to assist with the development and deployment efforts. The agency selected a public cloud environment to host the computing infrastructure. In addition, from March through July 2017, the agency used a short-term contract aimed at developing prototypes of GMM functionality for grant tracking and monitoring, case management of disaster survivors, grant reporting, and grant closeout. 
The agency planned to award a second development contract by December 2017 to complete the GMM system (beyond the prototypes) and to begin this work in September 2018. However, due to delays in awarding the second contract to develop the complete GMM system, in January 2018 the program extended the scope and time frames of the initial short-term prototype contract for an additional year to develop the first increment of the GMM system—referred to as the AFG pilot. On August 31, 2018, FEMA awarded the second development contract, which is intended to deliver the remaining functionality beyond the AFG pilot (i.e., increments 2 through 8). FEMA officials subsequently issued a 90-day planning task order for the Agile development contractor to define the work that needs to be done to deliver GMM and the level of effort needed to accomplish that work. However, the planning task order was paused after a bid protest was filed with GAO in September 2018. According to FEMA officials, they resumed work on the planning task order after the bid protest was withdrawn by the protester on November 20, 2018, and then the work was paused again during the partial government shutdown from December 22, 2018, through January 25, 2019. FEMA began working on the AFG pilot—GMM’s first increment—in January 2018. This increment was intended to pilot GMM’s use of Agile development methods to replace core functionality for the AFG system (i.e., one of the 10 legacy systems). This system supports three preparedness/fire-related grant programs—the Assistance to Firefighters Grants Program, the Fire Prevention and Safety Grant Program, and the Staffing for Adequate Fire and Emergency Response Grant Program. According to FEMA officials, the AFG system was selected as the first system to be replaced because it is costly to maintain and the DHS OIG had identified cybersecurity concerns with the system. 
Among the 43 GMM business functions discussed earlier in this report, FEMA officials specified 19 functions to be delivered in the AFG pilot. Figure 4 shows the planned time frames for delivering the AFG pilot in increment 1 (which consisted of four 3-month Agile development sub-increments), as of August 2018. As of August 2018, the program was working on sub-increment 1C of the pilot. In September 2018, GMM deployed its first set of functionality to a total of 19 AFG users—seven of the 169 total internal AFG users, and 12 of the more than 153,000 external AFG users. The functionality supported four of the 19 business functions that are related to the closeout of grants (i.e., the process by which all applicable administrative actions and all required work to award a grant have been completed). This functionality included tasks such as evaluation of final financial reports submitted by grant recipients and final reconciliation of finances (e.g., final disbursement to recipients and return of unobligated federal funds). According to FEMA officials, closeout functionality was selected first for deployment because it was the most costly component of the legacy AFG system to maintain, as it is an entirely manual and labor-intensive process. The remaining AFG functionality and remaining AFG users are to be deployed by the end of the AFG pilot. The GMM program is executed by a program management office, which is overseen by a program manager and program executive. This office is responsible for directing the day-to-day operations and ensuring completion of GMM program goals and objectives. The program office resides within the Office of Response and Recovery, which is headed by an Associate Administrator who reports to the FEMA Administrator. In addition, the GMM program executive (who is also the Regional Administrator for FEMA Region IX) reports directly to the FEMA Administrator. 
GMM is designated as a level 2 major acquisition, which means that it is subject to oversight by the DHS acquisition review board. The board is chaired by the DHS Under Secretary for Management and is made up of executive-level members, such as the DHS Chief Information Officer. The acquisition review board serves as the departmental executive board that decides whether to approve GMM through key acquisition milestones and reviews the program's progress and its compliance with approved documentation every 6 months. The board approved the acquisition program baseline for GMM in May 2017 (i.e., with estimated costs of about $251 million and full operational capability to be achieved by September 2020). In addition, the program is reviewed on a monthly basis by FEMA's Grants Management Executive Steering Group. This group is chaired by the Deputy Administrator of FEMA. Further, DHS's Financial Systems Modernization Executive Steering Committee, chaired by the DHS Chief Financial Officer, meets monthly and is to provide guidance, oversight, and support to GMM. For government organizations, including FEMA, cybersecurity is a key element in maintaining the public trust. Inadequately protected systems may be vulnerable to insider threats. Such systems are also vulnerable to the risk of intrusion by individuals or groups with malicious intent who could unlawfully access the systems to obtain sensitive information, disrupt operations, or launch attacks against other computer systems and networks. Moreover, cyber-based threats to federal information systems are evolving and growing. Accordingly, we designated cybersecurity as a government-wide high-risk area 22 years ago, in 1997, and it has since remained on our high-risk list. Federal law and guidance specify requirements for protecting federal information and information systems.
The Federal Information Security Modernization Act (FISMA) of 2014 requires executive branch agencies to develop, document, and implement an agency-wide cybersecurity program to provide security for the information and information systems that support operations and assets of the agency. The act also tasks NIST with developing, for systems other than those for national security, standards and guidelines to be used by all agencies to establish minimum cybersecurity requirements for information and information systems based on their level of cybersecurity risk. Accordingly, NIST developed a risk management framework of standards and guidelines for agencies to follow in developing cybersecurity programs. The framework addresses broad cybersecurity and risk management activities, including categorizing the system’s impact level; selecting, implementing, and assessing security controls; authorizing the system to operate (based on progress in remediating control weaknesses and an assessment of residual risk); and monitoring the efficacy of controls on an ongoing basis. Figure 5 provides an overview of this framework. Prior DHS OIG assessments, such as the annual evaluation of DHS’s cybersecurity program, have identified issues with FEMA’s cybersecurity practices. For example, in 2016, the OIG reported that FEMA was operating 111 systems without an authorization to operate. In addition, the agency had not created any corrective action plans for 11 of the systems that were classified as “Secret” or “Top Secret,” thus limiting its ability to ensure that all identified cybersecurity weaknesses were mitigated in a timely manner. The OIG further reported that, for several years, FEMA was consistently below DHS’s 90 percent target for remediating corrective action plans, with scores ranging from 73 to 84 percent. 
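The framework's broad activities can be viewed as an ordered checklist through which each system moves, with authorization hinging on remediation progress and residual risk. The sketch below is purely illustrative: the step names paraphrase the NIST framework's activities described above, while the `SystemAssessment` record and the authorization threshold are hypothetical, not FEMA's or NIST's actual implementation.

```python
from dataclasses import dataclass

# Broad risk management activities from the NIST framework, in the
# order a system moves through them (paraphrased; see NIST guidance).
RMF_STEPS = [
    "categorize system impact level",
    "select security controls",
    "implement security controls",
    "assess security controls",
    "authorize system to operate",
    "monitor controls on an ongoing basis",
]

@dataclass
class SystemAssessment:
    """Hypothetical record of where a system stands in the framework."""
    name: str
    completed_steps: int = 0           # number of RMF steps completed so far
    open_weaknesses: int = 0           # control weaknesses awaiting remediation
    residual_risk_acceptable: bool = False

def next_step(system: SystemAssessment) -> str:
    """Return the next framework activity the system must perform."""
    return RMF_STEPS[min(system.completed_steps, len(RMF_STEPS) - 1)]

def may_authorize(system: SystemAssessment, max_open_weaknesses: int = 0) -> bool:
    """Authorization to operate depends on progress in remediating control
    weaknesses and an assessment of residual risk (threshold illustrative)."""
    return (system.open_weaknesses <= max_open_weaknesses
            and system.residual_risk_acceptable)
```

Under this sketch, a system that has completed the assessment step but still carries unremediated weaknesses would not be granted an authorization to operate, mirroring the gap the OIG reported for FEMA systems operating without one.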
Further, the OIG reported that FEMA had a significant number of open corrective action plans (18,654) and that most of these plans did not contain sufficient information to address identified weaknesses. In 2017, the OIG reported that FEMA had made progress in addressing security weaknesses. For example, it reported that the agency had reduced the number of systems it was operating without an authorization to operate from 111 to 15 systems. According to GAO’s Business Process Reengineering Assessment Guide and the Software Engineering Institute’s Capability Maturity Model Integration® for Development, successful business process reengineering can enable agencies to replace their inefficient and outmoded processes with streamlined processes that can more effectively serve the needs of the public and significantly reduce costs and improve performance. Many times, new IT systems are implemented to support these improved business processes. Thus, effective management of IT requirements is critical for ensuring the successful design, development, and delivery of such new systems. These leading practices state that effective business process reengineering and IT requirements management involve, among other things, (1) ensuring strong executive leadership support for process reengineering; (2) assessing the current and target business environment and business performance goals; (3) establishing plans for implementing new business processes; (4) establishing clear, prioritized, and traceable IT requirements; (5) tracking progress in delivering IT requirements; and (6) incorporating input from end user stakeholders. Among these six selected leading practices for reengineering business processes and managing IT requirements, FEMA fully implemented four and partially implemented two of them for its GMM program. 
For example, the agency ensured strong senior leadership commitment to changing the way it manages its grants, took steps to assess and document its business environment and performance goals, defined initial IT requirements for GMM, took recent actions to better track progress in delivering planned IT requirements, and incorporated input from end user stakeholders. In addition, FEMA had begun planning for business process reengineering; however, it had not finalized plans for transitioning users to the new business processes. Further, while GMM took steps to establish clearly defined and prioritized IT requirements, key requirements were not always traceable. Table 3 summarizes the extent to which FEMA implemented the selected leading practices. According to GAO’s Business Process Reengineering Assessment Guide, the most critical factor for engaging in a reengineering effort is having strong executive leadership support to establish credibility regarding the seriousness of the effort and to maintain the momentum as the agency faces potentially extensive changes to its organizational structure and values. Without such leadership, even the best process design may fail to be accepted and implemented. Agencies should also ensure that there is ongoing executive support (e.g., executive steering committee meetings headed by the agency leader) to oversee the reengineering effort from start to finish. FEMA senior leadership consistently demonstrated its commitment and support for streamlining the agency’s grants management business processes and provided ongoing executive support. For example, one of the Administrator’s top priorities highlighted in FEMA’s 2014 through 2022 strategic plans was to strengthen grants management through innovative systems and business processes to rapidly and effectively deliver the agency’s mission. In accordance with this strategic priority, FEMA initiated GMM with the intent to streamline and modernize grants management across the agency. 
In addition, FEMA established the Grants Management Executive Steering Group in September 2015. This group is responsible for transforming the agency's grants management capabilities through its evaluation, prioritization, and oversight of grants management modernization programs, such as GMM. The group's membership consists of FEMA senior leaders from across the agency's program and business support areas, such as FEMA regions, Individual Assistance, Public Assistance, Preparedness, Office of the Chief Financial Officer, Office of Chief Counsel, OCIO, and the Office of Policy and Program Analysis. As part of its ongoing commitment to reengineering grants management processes, the group meets monthly to review GMM's updates, risks, and action items, as well as the program's budget, schedule, and acquisition activities. For example, the group reviewed the status of key acquisition activities and program milestones, such as the follow-on award for the pilot contractor and the program's initial operational capability date. The group also reviewed GMM's program risks, such as data migration challenges (discussed later in this report) and delays in the Agile development contract award. With this continuous executive involvement, FEMA is better positioned to maintain momentum for reengineering the new grants management business processes that the GMM system is intended to support. GAO's Business Process Reengineering Assessment Guide states that agencies undergoing business process reengineering should develop a common understanding of the current environment by documenting existing core business processes to show how the processes work and how they are interconnected. The agencies should then develop a deeper understanding of the target environment by modeling the workflow of each target business process in enough detail to provide a common understanding of exactly what will be changed and who will be affected by a future solution.
Agencies should also assess the performance of their current major business processes to identify problem areas that need to be changed or eliminated and to set realistically achievable, customer-oriented, and measurable business performance improvement goals. FEMA has taken steps to document the current and target grants management business processes. Specifically:

The agency took steps to develop a common understanding of its grants management processes by documenting each of the 12 grant categories. For example, in 2016 and 2017, the agency conducted several nationwide user outreach sessions with representatives from FEMA headquarters, the 10 regional offices, and state and local grant recipients to discuss the grant categories and the current grants management business environment. In addition, FEMA's Office of Chief Counsel developed a Grants Management Manual in January 2018 that outlined the authorizing laws, regulations, and agency policies for all of its grant programs. According to the Grants Management Executive Steering Group, the manual is intended to promote standardized grants management procedures across the agency. Additionally, the group expects grant program and regional offices to assess the manual against their own practices, make updates as needed, and ensure that their staff are properly informed and trained.

FEMA also documented target grants management business process workflows for 18 of the 19 business functions that were notionally planned to be developed and deployed in the AFG pilot by December 2018. However, the program experienced delays in developing the AFG pilot (discussed later in this report) and, thus, deferred defining the remaining business function until the program gets closer to developing that function, which is now planned for August 2019.

In addition, FEMA established measurable business performance goals for GMM that are aimed at addressing problem areas and improving grants management processes.
Specifically, the agency established 14 business performance goals and associated thresholds in an October 2017 acquisition program baseline addendum, as well as 126 performance metrics for all 43 of the target grants management business functions in its March 2017 test and evaluation master plan. According to FEMA, the 14 business performance goals are intended to represent essential outcomes that will indicate whether GMM has successfully met critical, business-focused mission needs. GMM performance goals include areas such as improvements in the satisfaction level of users with GMM compared to the legacy systems and improvements in the timeliness of grant award processing. For example, one of GMM’s goals is to get at least 40 percent of users surveyed to agree or strongly agree that their grants management business processes are easier to accomplish with GMM, compared to the legacy systems. Program officials stated that they plan to work with the Agile development contractor to refine their performance goals and target thresholds, develop a plan for collecting the data and calculating the metrics, and establish a performance baseline with the legacy systems. Program officials also stated that they plan to complete these steps by September 2019—GMM’s initial operational capability date—which is when they are required to begin reporting these metrics to the DHS acquisition review board. According to GAO’s Business Process Reengineering Assessment Guide, agencies undergoing business process reengineering should (1) establish an overall plan to guide the effort (commonly referred to as an organizational change management plan) and (2) provide a common understanding for stakeholders of what to expect and how to plan for process changes. 
Agencies should develop the plan at the beginning of the reengineering effort and provide specific details on upcoming process changes, such as critical milestones and deliverables for an orderly transition, roles and responsibilities for change management activities, reengineering goals, skills and resource needs, key barriers to change, communication expectations, training, and any staff redeployments or reductions-in-force. The agency should develop and begin implementing its change management plan ahead of introducing new processes to ensure sufficient support among stakeholders for the reengineered processes. While FEMA has begun planning its business process reengineering activities, it has not finalized its plans or established time frames for their completion. Specifically, as of September 2018, program officials were in the process of drafting an organizational change management plan that is intended to establish an approach for preparing grants management stakeholders for upcoming changes. According to FEMA, this document is intended to help avoid uncertainty and confusion among stakeholders as changes are made to the agency’s grant programs, and ensure successful adoption of new business processes, strategies, and technologies. As discussed previously in this report, the transition to GMM will involve changes to FEMA’s disparate grants management processes that are managed by many different stakeholders across the agency. Program officials acknowledged that change management is the biggest challenge they face in implementing GMM and said they had begun taking several actions intended to support the agency’s change management activities. For example, program officials reported in October 2018 that they had recently created an executive-level working group intended to address FEMA’s policy challenges related to the standardization of grants management processes. 
Additionally, program officials reported that they planned to: (1) hire additional support staff focused on coordinating grants change management activities; and (2) pursue regional office outreach to encourage broad support among GMM's decentralized stakeholders, such as state, local, tribal, and territorial partners. However, despite these actions, the officials were unable to provide time frames for completing the organizational change management plan or the additional actions. Until the plan and actions are complete, the program lacks assurance that it will have sufficient support among stakeholders for the reengineered processes. In addition, GMM did not establish plans and time frames for the activities that needed to take place prior to, during, and after the transition from the legacy AFG to GMM. Instead, program officials stated that they had worked collaboratively with the legacy AFG program and planned these details informally by discussing them in various communications, such as emails and meetings. However, this informal planning approach is not a repeatable process, which is essential for this program because FEMA plans to transition many sets of functionality to many different users during the lifecycle of the program. Program officials acknowledged that for future transitions they will need more repeatable transition planning and stated that they intend to establish such plans, but did not provide a time frame for when such changes would be made. Until FEMA develops a repeatable process, with established time frames for communicating the transition details to its customers prior to each transition, the agency risks that the transition from the legacy systems to GMM will not occur as intended. It also increases the risk that stakeholders will not support the implementation of reengineered grants management processes. Leading practices for software development efforts state that IT requirements are to be clearly defined and prioritized.
This includes, among other things, maintaining bidirectional traceability as the requirements evolve, to ensure there are no inconsistencies among program plans and requirements. In addition, programs using Agile software development are to maintain a product vision, or roadmap, to guide the planning of major program milestones and provide a high-level view of planned requirements. Programs should also maintain a prioritized list (referred to as a backlog) of narrowly defined requirements (referred to as lower-level requirements) that are to be delivered. Programs should maintain this backlog with the product owner to ensure the program is always working on the highest priority requirements that will deliver the most value to the users. The GMM program established clearly defined and prioritized requirements and maintained bidirectional traceability among the various levels of requirements:

Grant lifecycle phases: In its Concept of Operations document, the program established six grants management lifecycle phases that represent the highest level of GMM's requirements, through which it derives lower-level requirements.

Business functions: The Concept of Operations document also identifies the next level of GMM requirements—the 43 business functions that describe how FEMA officials, grant recipients, and other stakeholders are to manage grants. According to program officials, the 43 business functions are to be refined, prioritized, and delivered to GMM customers iteratively. Further, for the AFG pilot, the GMM program office prioritized 19 business functions with the product owner and planned the development of these functions in a roadmap.

Epics: GMM's business functions are decomposed into epics, which represent smaller portions of functionality that can be developed over multiple increments. According to program officials, GMM intends to develop, refine, and prioritize the epics iteratively. As of August 2018, the program had developed 67 epics in the program backlog. An example of one of the epics for the AFG pilot is to prepare and submit grant closeout materials.

User stories: The epics are decomposed into user stories, which convey the customers' requirements at the smallest and most discrete unit of work that must be done within a single sprint to create working software. GMM develops, refines, and prioritizes the user stories iteratively. As of August 2018, the program had developed 1,118 user stories in the backlog. An example of a user story is "As an external user, I can log in with a username and password."

Figure 6 provides an example of how GMM's different levels of requirements are decomposed. Nevertheless, while we found requirements to be traceable at the sprint level (i.e., epics and user stories), traceability of requirements at the increment level (i.e., business functions) was inconsistent among different requirements planning documents. Specifically, the capabilities and constraints document shows that five business functions are planned to be developed within sub-increment 1A, whereas the other key planning document—the roadmap for the AFG pilot—showed one of those five functions as planned for sub-increment 1B. In addition, the capabilities and constraints document shows that nine business functions are planned to be developed within sub-increment 1B, but the roadmap showed one of those nine functions as planned for sub-increment 1C. Program officials stated that they decided to defer these functions to later sub-increments due to unexpected technical difficulties encountered when developing functionality, as well as reprioritization of functions with the product owners. While the officials updated the roadmap to reflect the deferred functionality, they did not update the capabilities and constraints document to maintain traceability between these two important requirements planning documents.
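The kind of increment-level mismatch described above lends itself to a mechanical check: compare each planning document's assignment of business functions to sub-increments and flag any disagreement. A minimal sketch, using invented function names and assignments (these are illustrative, not GMM's actual requirements):

```python
def traceability_gaps(doc_a: dict, doc_b: dict) -> dict:
    """Compare two {business_function: sub_increment} mappings and
    return the functions whose planned sub-increment disagrees,
    mapped to the (doc_a, doc_b) pair of placements."""
    gaps = {}
    for function in doc_a.keys() | doc_b.keys():
        a, b = doc_a.get(function), doc_b.get(function)
        if a != b:
            gaps[function] = (a, b)
    return gaps

# Hypothetical excerpts of the two planning documents.
capabilities_and_constraints = {
    "award management": "1A",
    "grant closeout": "1A",
    "final reconciliation": "1B",
}
afg_roadmap = {
    "award management": "1A",
    "grant closeout": "1B",   # deferred in the roadmap only
    "final reconciliation": "1B",
}
```

Running `traceability_gaps(capabilities_and_constraints, afg_roadmap)` on this invented data would surface "grant closeout" as planned for sub-increment 1A in one document and 1B in the other, which is precisely the inconsistency the program failed to reconcile when it updated only the roadmap.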
Program officials stated that they learned during the AFG pilot that the use of a capabilities and constraints document for increment-level scope planning was not ideal and that they intended to change the process for how they documented planned requirements for future increments. However, program officials did not provide a time frame for when this change would be made. Until the program makes this change and then ensures it maintains traceability of increment-level requirements between requirements planning documents, it will continue to risk confusion among stakeholders about what is to be delivered. In addition, until recently, GMM's planning documents were missing up-to-date information regarding when most of the legacy systems would be transitioned to GMM. Specifically, while the program's planning documents (including the GMM roadmap) provided key milestones for the entire lifecycle of the program and high-level capabilities to be delivered in the AFG pilot, these documents lacked up-to-date time frames for when FEMA planned to transition the nine remaining legacy systems. For example, in May 2017, GMM drafted notional time frames for transitioning the legacy systems, including plans for AFG to be the seventh system replaced by GMM. However, in December 2017, the program decided to reprioritize the legacy systems so that AFG would be replaced first—yet this major change was not reflected in the program's roadmap. Moreover, while AFG program officials were informed of the decision to transition the AFG program first, in June 2018 officials from other grant programs told us that they had not been informed of when their systems were to be replaced. As a result, these programs were uncertain about when they should start planning for their respective transitions. In August 2018, GMM program officials acknowledged that they were delayed in deciding the sequencing order for the legacy system transitions.
Program officials stated that the delay was due to their need to factor the Agile development contractor's perspective into these decisions; yet, at that time, the contract award had been delayed by approximately 8 months. Subsequently, in October 2018, program officials identified tentative time frames for transitioning the remaining legacy systems. Program officials stated that they determined the tentative time frames for transitioning the legacy systems based on key factors, such as mission need, cost, security vulnerabilities, and technical obsolescence, and that they had shared these new time frames with grant program officials. The officials also stated that, once the Agile contractor begins contract performance, they expect to be able to validate the contractor's capacity and finalize these time frames by obtaining approval from the Grants Management Executive Steering Group. By taking steps to update and communicate these important time frames, FEMA should be better positioned to ensure that each of the grant programs is prepared for transitioning to GMM. According to leading practices, Agile programs should track their progress in delivering planned IT requirements within a sprint (i.e., short iterations that produce working software). Given that sprints are very short cycles of development (e.g., 2 weeks), the efficiency of completing planned work within a sprint relies on a disciplined approach that includes using a fixed pace, referred to as the sprint cadence, that provides a consistent and predictable development routine. A disciplined approach also includes identifying by the start of a sprint which user stories will be developed, developing those stories to completion (e.g., fully tested and demonstrated to, and accepted by, the product owner), and tracking completion progress of those stories.
Progress should be communicated to relevant stakeholders and used by the development teams to better understand their capacity to develop stories, continuously improve on their processes, and forecast how long it will take to deliver all remaining capabilities. The GMM program did not effectively track progress in delivering IT requirements during the first nine sprints, which occurred from January to June 2018. These gaps in requirements tracking contributed to delays in delivering the 19 AFG business functions, which were originally planned for December 2018 and are now deferred to August 2019. However, beginning in July 2018, in response to our ongoing review, the program took steps to improve in these areas. Specifically, GMM did not communicate the status of its Agile development progress to program stakeholders, such as the grant programs, the regional offices, and the development teams, during most of the first nine sprints. Program officials acknowledged that they should use metrics to track development progress and, in July 2018, they began reporting metrics to program stakeholders. For example, they began collecting and providing data on the number of stories planned and delivered, estimated capacity for development teams, and the number of days spent working on the sprint, as part of the program's weekly status reports to program stakeholders, such as product owners. Rather than using a fixed, predictable sprint cadence, GMM allowed a variable development cadence, meaning that sprint durations varied from 1 to 4 weeks throughout the first nine sprints. Program officials noted that they had experimented with the use of a variable cadence to allow more time to complete complex technical work. Program officials stated that they realized that varying the sprints was not effective and, in July 2018 for sprint 10, they reverted to a fixed, 2-week cadence.
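Metrics like those the program began reporting in July 2018 can be computed directly from per-sprint records. The sketch below uses invented sprint data; in practice these figures would come from the team's backlog tool, and the normalization by sprint length shows why a variable cadence makes raw sprint-to-sprint comparisons misleading:

```python
from dataclasses import dataclass

@dataclass
class Sprint:
    number: int
    weeks: int              # sprint duration; a fixed cadence keeps this constant
    planned_stories: int    # user stories committed at sprint start
    delivered_stories: int  # fully tested, demonstrated, and accepted

def completion_rate(s: Sprint) -> float:
    """Share of committed stories actually delivered in the sprint."""
    return s.delivered_stories / s.planned_stories

def weekly_velocity(s: Sprint) -> float:
    """Delivered stories per week, so that sprints of different lengths
    (as in GMM's variable-cadence period) remain comparable."""
    return s.delivered_stories / s.weeks

# Hypothetical history: a long variable-cadence sprint, then a fixed
# 2-week sprint after the July 2018 change. Figures are invented.
history = [Sprint(8, 4, 28, 21), Sprint(10, 2, 20, 18)]
```

On this invented data, the 4-week sprint delivers more stories in absolute terms, but the 2-week sprint has both a higher completion rate and a higher weekly velocity, which is the kind of signal a fixed cadence makes visible.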
GMM added a significant amount of scope during its first nine sprints, after the development work had already begun. For example, the program committed to 28 user stories at the beginning of sprint 8, and then nearly doubled the work by adding 25 additional stories in the middle of the sprint. Program officials cited multiple reasons for adding more stories, including that an insufficient number of stories had been defined in the backlog when the sprint began, the realization that planned stories were too large and needed to be decomposed into smaller stories, and the realization that other work would be needed in addition to what was originally planned. Program officials recognized that, by the start of a sprint, the requirements should be sufficiently defined, such that they are ready for development without requiring major changes during the sprint. The program made recent improvements in sprints 11 and 12, which had only five stories added after the start of a sprint. By taking these steps to establish consistency among sprints, the program has better positioned itself to more effectively monitor and manage the remaining IT development work. In addition, this improvement in consistency should help the program avoid future deferments of functionality. Leading practices state that programs should regularly collaborate with, and collect input from, relevant stakeholders; monitor the status of stakeholder involvement; incorporate stakeholder input; and measure how well stakeholders' needs are being met. For Agile programs, it is especially important to track user satisfaction to determine how well the program has met stakeholders' needs. Consistent stakeholder participation ensures that the program meets its stakeholders' needs.
FEMA implemented this practice through several means, such as stakeholder outreach activities; development of a strategic communications plan; and continuous monitoring, solicitation, and recording of stakeholder involvement and feedback. For example, the agency conducted nationwide outreach sessions from January 2016 through August 2017 and began conducting additional outreach sessions in April 2018. These outreach sessions involved hundreds of representatives from FEMA headquarters, the 10 regional offices, and state and local grant recipients to collect information on the current grants management environment and opportunities for streamlining grants management processes. FEMA also held oversight and stakeholder outreach activities and actively solicited and recorded feedback from its stakeholders on a regular basis. For example, GMM regularly verified with users that the new functionality met their IT requirements, as part of the Agile development cycle. Additionally, we observed several GMM biweekly requirements validation sessions where the program's stakeholders were involved and provided feedback as part of the requirements development and refinement process. In addition, FEMA identified GMM stakeholders and tracked its engagement with these stakeholders using a stakeholder register. The agency also defined processes for how the GMM program is to collaborate with its stakeholders in a stakeholder communication plan and Agile development team agreement. Also, while several officials from the selected grant programs and regional offices that we interviewed indicated that the program could improve in communicating its plans for GMM and incorporating stakeholder input, most of the representatives from these offices stated that GMM is doing well at interacting with its stakeholders.
Finally, in October 2018, program officials reported that they had recently begun measuring user satisfaction by conducting surveys and interviews with users who have used the new functionality within GMM. The program's outreach activities, collection of stakeholder input, and measurement of user satisfaction demonstrate that the program is taking the appropriate steps to incorporate stakeholder input. Reliable cost estimates are critical for successfully delivering IT programs. Such estimates provide the basis for informed decision making, realistic budget formulation, meaningful progress measurement, and accountability for results. GAO's Cost Estimating and Assessment Guide defines leading practices related to the following four characteristics of a high-quality, reliable estimate:

Comprehensive. The estimate accounts for all possible costs associated with a program, is structured in sufficient detail to ensure that costs are neither omitted nor double counted, and documents all cost-influencing assumptions.

Well-documented. Supporting documentation explains the process, sources, and methods used to create the estimate; contains the underlying data used to develop the estimate; and is adequately reviewed and approved by management.

Accurate. The estimate is not overly conservative or optimistic, is based on an assessment of the costs most likely to be incurred, and is regularly updated so that it always reflects the program's current status.

Credible. The estimate discusses any limitations of the analysis due to uncertainty or sensitivity surrounding data or assumptions; its results are cross-checked; and an independent cost estimate is conducted by a group outside the acquiring organization to determine whether other estimating methods produce similar results.

In May 2017, DHS approved GMM's lifecycle cost estimate of about $251 million for fiscal years 2015 through 2030.
We found this initial estimate to be reliable because it fully or substantially addressed all the characteristics associated with a reliable cost estimate. For example, the estimate comprehensively included government and contractor costs, all elements of the program’s work breakdown structure, and all phases of the system lifecycle; and was aligned with the program’s technical documentation at the time the estimate was developed. GMM also fully documented the key assumptions, data sources, estimating methodology, and calculations for the estimate. Further, the program conducted a risk assessment and sensitivity analysis, and DHS conducted an independent assessment of the cost estimate to validate the accuracy and credibility of the cost estimate. However, key assumptions that FEMA made about the program changed soon after DHS approved the cost estimate in May 2017. Thus, the initial cost estimate no longer reflects the current approach for the program. For example, key assumptions about the program that changed include: Change in the technical approach: The initial cost estimate assumed that GMM would implement a software-as-a-service model, meaning that FEMA would rely on a service provider to deliver software applications and the underlying infrastructure to run them. However, in December 2017, the program instead decided to implement an infrastructure-as-a-service model, meaning that FEMA would develop and deploy its own software application and rely on a service provider to deliver and manage the computing infrastructure (e.g., servers, software, storage, and network equipment). According to program officials, this decision was made after learning from the Agile prototypes that the infrastructure-as-a-service model would allow GMM to develop the system in a more flexible environment. 
Increase in the number of system development personnel: A key factor with Agile development is the number of development teams (each consisting of experts in software development, testing, and cybersecurity) that are operating concurrently and producing separate portions of software functionality. Program officials initially assumed that they would need three to four concurrent Agile development teams, but subsequently realized that they would instead need to expend more resources to achieve GMM’s original completion date. Specifically, program officials now expect they will need to at least double, and potentially triple, the number of concurrent development teams to meet GMM’s original target dates. Significant delays and complexities with data migration: In 2016 and 2017, GMM experienced various technical challenges in its effort to transfer legacy system data to a data staging platform. This data transfer effort needed to be done to standardize the data before eventually migrating the data to GMM. These challenges resulted in significant delays and cost increases. Program officials reported that, by February 2018—at least 9 months later than planned—all legacy data had been transferred to a data staging platform so that FEMA officials could begin analyzing and standardizing the data prior to migrating it into GMM. FEMA officials reported that they anticipated the cost estimate to increase, and for this increase to be high enough to breach the $251 million threshold set in GMM’s May 2017 acquisition program baseline. Thus, consistent with DHS’s acquisition guidance, the program informed the DHS acquisition review board of this anticipated breach. The board declared that the program was in a cost breach status, as of September 12, 2018. As of October 2018, program officials stated that they were in the process of revising the cost estimate to reflect the changes in the program and to incorporate actual costs. 
In addition, the officials stated that the program was applying a new cost estimating methodology tailored for Agile programs that DHS’s Cost Analysis Division had been developing. In December 2018, program officials stated that they had completed the revised cost estimate but it was still undergoing departmental approval. Establishing an updated cost estimate should help FEMA better understand the expected costs to deliver GMM under the program’s current approach and time frames. The success of an IT program depends, in part, on having an integrated and reliable master schedule that defines when the program’s set of work activities and milestone events are to occur, how long they will take, and how they are related to one another. Among other things, a reliable schedule provides a roadmap for systematic execution of an IT program and the means by which to gauge progress, identify and address potential problems, and promote accountability. GAO’s Schedule Assessment Guide defines leading practices related to the following four characteristics that are vital to having a reliable integrated master schedule. Comprehensive. A comprehensive schedule reflects all activities for both the government and its contractors that are necessary to accomplish a program’s objectives, as defined in the program’s work breakdown structure. The schedule also includes the labor, materials, and overhead needed to do the work and depicts when those resources are needed and when they will be available. It realistically reflects how long each activity will take and allows for discrete progress measurement. Well-constructed. A schedule is well-constructed if all of its activities are logically sequenced with the most straightforward logic possible. Unusual or complicated logic techniques are used judiciously and justified in the schedule documentation. 
The schedule’s critical path represents a true model of the activities that drive the program’s earliest completion date, and total float accurately depicts schedule flexibility. Credible. A schedule that is credible is horizontally traceable—that is, it reflects the order of events necessary to achieve aggregated products or outcomes. It is also vertically traceable—that is, activities in varying levels of the schedule map to one another, and key dates presented to management in periodic briefings are consistent with the schedule. Data about risks are used to predict a level of confidence in meeting the program’s completion date. The level of necessary schedule contingency and high-priority risks are identified by conducting a robust schedule risk analysis. Controlled. A schedule is controlled if it is updated regularly by trained schedulers using actual progress and logic to realistically forecast dates for program activities. It is compared to a designated baseline schedule to measure, monitor, and report the program’s progress. The baseline schedule is accompanied by a baseline document that explains the overall approach to the program, defines ground rules and assumptions, and describes the unique features of the schedule. The baseline schedule and current schedule are subject to a configuration management control process. GMM’s schedule was unreliable because it minimally addressed three characteristics—comprehensive, credible, and controlled—and did not address the fourth characteristic of a reliable schedule—well-constructed. One of the most significant issues was that the program’s fast-approaching final delivery date of September 2020 was not informed by a realistic assessment of GMM development activities; rather, it was an imposed, unsubstantiated date. Table 4 summarizes our assessment of GMM’s schedule.
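The critical-path and total-float concepts described above can be illustrated with a minimal critical path method sketch. The activities, durations, and dependencies below are hypothetical examples, not drawn from GMM’s actual schedule:

```python
# Minimal critical path method (CPM) sketch. Each activity has a duration
# and a list of predecessors. Total float is the slack an activity has
# before it delays the project finish date; zero-float activities form
# the critical path that drives the earliest completion date.

def cpm(activities):
    # Forward pass: earliest start/finish (assumes the dict is listed in
    # topological order, i.e., predecessors appear before successors).
    es, ef = {}, {}
    for name, (dur, preds) in activities.items():
        es[name] = max((ef[p] for p in preds), default=0)
        ef[name] = es[name] + dur
    finish = max(ef.values())
    # Backward pass: latest start/finish.
    ls, lf = {}, {}
    for name in reversed(list(activities)):
        succs = [s for s, (_, p) in activities.items() if name in p]
        lf[name] = min((ls[s] for s in succs), default=finish)
        ls[name] = lf[name] - activities[name][0]
    total_float = {n: ls[n] - es[n] for n in activities}
    critical = [n for n, f in total_float.items() if f == 0]
    return finish, total_float, critical

# Hypothetical activities: (duration in weeks, predecessors).
acts = {
    "design":  (4, []),
    "develop": (6, ["design"]),
    "migrate": (3, ["design"]),
    "test":    (2, ["develop", "migrate"]),
}
finish, floats, critical = cpm(acts)
print(finish, critical)  # 12 ['design', 'develop', 'test']
```

In this example, "migrate" has 3 weeks of float, so a modest delay there would not move the finish date, while any delay to a zero-float activity pushes out the earliest completion date, which is why an imposed delivery date that ignores the critical path cannot be relied on.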
In discussing the reasons for the shortfalls in these practices, program officials stated that they had been uncertain about the level of rigor that should be applied to the GMM schedule, given their use of Agile development. However, leading practices state that program schedules should meet all the scheduling practices, regardless of whether a program is using Agile development. As discussed earlier in this report, GMM has already experienced significant schedule delays. For example, the legacy data migration effort, the AFG pilot, and the Agile development contract have been delayed. Program officials also stated that the delay in awarding and starting the Agile contract has delayed other important activities, such as establishing time frames for transitioning legacy systems. A more robust schedule could have helped FEMA predict the impact of delays on remaining activities and identify which activities appeared most critical so that the program could ensure that any risks in delaying those activities were properly mitigated. In response to our review and findings, program officials recognized the need to continually enhance their schedule practices to improve the management and communication of program activities. As a result, in August 2018, the officials stated that they planned to add a master scheduler to the team to improve the program’s schedule practices and ensure that all of the areas of concern we identified are adequately addressed. In October 2018, the officials reported that they had recently added two master schedulers to GMM. According to the statement of objectives, the Agile contractor is expected to develop an integrated master schedule soon after it begins performance. However, program officials stated that GMM is schedule-driven—due to the Executive Steering Group’s expectation that the solution will be delivered by September 2020. 
The officials added that, if GMM encounters challenges in meeting this time frame, the program plans to seek additional resources to allow it to meet the 2020 target. GMM’s schedule-driven approach has already led to an increase in estimated costs and resources. For example, as previously mentioned, the program has determined that, to meet its original target dates, GMM needs to at least double, and possibly triple, the number of concurrent Agile development teams. In addition, we have previously reported that schedule pressure on federal IT programs can lead to omissions and skipping of key activities, especially system testing. In August 2018, program officials acknowledged that September 2020 may not be feasible and that the overall completion time frames established in the acquisition program baseline may eventually need to be rebaselined. Without a robust schedule to forecast whether FEMA’s aggressive delivery goal for GMM is realistic to achieve, leadership will be limited in its ability to make informed decisions on what additional increases in cost or reductions in scope might be needed to fully deliver the system. NIST’s risk management framework establishes standards and guidelines for agencies to follow in developing cybersecurity programs. Agencies are expected to use this framework to achieve more secure information and information systems through the implementation of appropriate risk mitigation strategies and by performing activities that ensure that necessary security controls are integrated into agencies’ processes. The framework addresses broad cybersecurity and risk management activities, which include the following: Categorize the system: Programs are to categorize systems by identifying the types of information used, selecting a potential impact level (e.g., low, moderate, or high), and assigning a category based on the highest level of impact to the system’s confidentiality, integrity, and availability, if the system was compromised. 
Programs are also to document a description of the information system and its boundaries and should register the system with appropriate program management offices. System categorization is documented in a system security plan. Select and implement security controls: Programs are to determine protective measures, or security controls, to be implemented based on the system categorization results. These security controls are documented in a system security plan. For example, control areas include access controls, incident response, security assessment and authorization, identification and authentication, and configuration management. Once controls are identified, programs are to determine planned implementation actions for each of the designated controls. These implementation actions are also specified in the system security plan. Assess security controls: Programs are to develop, review, and approve a security assessment plan. The purpose of the security assessment plan approval is to establish the appropriate expectations for the security control assessment. Programs are to also perform a security control assessment by evaluating the security controls in accordance with the procedures defined in the security assessment plan, in order to determine the extent to which the controls were implemented correctly. The output of this process is intended to produce a security assessment report to document the issues, findings, and recommendations. Programs are to conduct initial remediation actions on security controls and reassess those security controls, as appropriate. Obtain an authorization to operate the system: Programs are to obtain security authorization approval in order to operate a system. Resolving weaknesses and vulnerabilities identified during testing is an important step leading up to achieving an authorization to operate. Programs are to establish corrective action plans to address any deficiencies in cybersecurity policies, procedures, and practices. 
DHS guidance also states that corrective action plans must be developed for every weakness identified during a security control assessment and within a security assessment report. Monitor security controls on an ongoing basis: Programs are to monitor their security controls on an ongoing basis after deployment, including determining the security impact of proposed or actual changes to the information system and assessing the security controls in accordance with a monitoring strategy that determines the frequency of monitoring the controls. For the GMM program’s engineering and test environment, which went live in February 2018, FEMA fully addressed three of the five key cybersecurity practices in NIST’s risk management framework and partially addressed two of the practices. Specifically, FEMA categorized GMM’s environment based on security risk, selected and implemented security controls, and monitored security controls on an ongoing basis. However, the agency partially addressed the areas of assessing security controls and obtaining an authorization to operate the system. Table 5 provides a summary of the extent to which FEMA addressed NIST’s key cybersecurity practices for GMM’s engineering and test environment. Consistent with NIST’s framework, GMM categorized the security risk of its engineering and test environment and identified it as a moderate-impact environment. A moderate-impact environment is one where the loss of confidentiality, integrity, or availability could be expected to have a serious adverse effect on organizational operations, organizational assets, or individuals. GMM completed the following steps leading to this categorization: The program documented in its System Security Plan the various types of data and information that the environment will collect, process, and store, such as conducting technology research, building or enhancing technology, and maintaining IT networks.
The program established three information types and assigned security levels of low, moderate, or high impact in the areas of confidentiality, availability, and integrity. A low-impact security level was assigned to two information types: (1) conducting technology research and (2) building or enhancing technology; and a moderate-impact security level was assigned to the third information type: maintaining IT networks. The engineering and test environment was categorized as an overall moderate-impact system, based on the highest security impact level assignment. GMM documented a description of the environment, including a diagram depicting the system’s boundaries, which illustrates, among other things, databases and firewalls. GMM properly registered its engineering and test environment with FEMA’s Chief Information Officer, Chief Financial Officer, and acting Chief Information Security Officer. By conducting the security categorization process, GMM has taken steps that should ensure that the appropriate security controls are selected for the program’s engineering and test environment. Consistent with NIST’s framework and the system categorization results, GMM appropriately determined which security controls to implement and planned actions for implementing those controls in its System Security Plan for the engineering and test environment. For example, the program utilized NIST guidance to select standard controls for a system categorized with a moderate-impact security level. These control areas include, for example, access controls, risk assessment, incident response, identification and authentication, and configuration management. Further, the program documented its planned actions to implement each control in its System Security Plan. For example, GMM documented that the program plans to implement its Incident Response Testing control by participating in an agency-wide exercise and unannounced vulnerability scans.
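The highest-impact-level (or "high-water mark") categorization described above can be sketched as follows. The information types and impact levels mirror those reported for GMM’s engineering and test environment, but the per-objective assignments are simplified and the function is illustrative, not FEMA’s actual tooling:

```python
# Sketch of security categorization by highest impact level: the overall
# system category is the highest impact assigned to any information type
# in any security objective (confidentiality, integrity, availability).

LEVELS = {"low": 1, "moderate": 2, "high": 3}

def categorize(info_types):
    """Return the overall system impact category for a mapping of
    information types to per-objective impact levels."""
    highest = "low"
    for impacts in info_types.values():
        for level in impacts.values():
            if LEVELS[level] > LEVELS[highest]:
                highest = level
    return highest

# Simplified rendering of the three GMM information types; actual
# per-objective assignments may differ.
gmm_env = {
    "conducting technology research":
        {"confidentiality": "low", "integrity": "low", "availability": "low"},
    "building or enhancing technology":
        {"confidentiality": "low", "integrity": "low", "availability": "low"},
    "maintaining IT networks":
        {"confidentiality": "moderate", "integrity": "moderate",
         "availability": "moderate"},
}

print(categorize(gmm_env))  # moderate
```

A single moderate-impact information type is enough to make the whole environment moderate-impact, which in turn drives the baseline set of security controls the program must select.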
Similarly, GMM documented that the program plans to implement its Contingency Plan Testing control by testing the contingency plan annually, reviewing the test results, and preparing after-action reports. By selecting and planning for the implementation of security controls, GMM has taken steps to mitigate its security risks and protect the confidentiality, integrity, and availability of the information system. Consistent with NIST’s framework, in January 2018, GMM program officials developed a security assessment plan for the engineering and test environment. According to GMM program officials, this plan was reviewed by the security assessment team. However, the security assessment plan lacked essential details. Specifically, while the plan included the general process for evaluating the environment’s security controls, the planned assessment procedures for all 964 security controls were not sufficiently defined. In particular, GMM program officials copied example assessment procedures from NIST guidance and inserted them into the program’s security assessment documentation for all of its 964 controls, without making further adjustments to explain the steps that should be taken specific to GMM. Table 6 shows an example of a security assessment procedure copied from the NIST guidance that should have been further adjusted for GMM. In addition, the actual assessment procedures that the GMM assessors used to evaluate the security controls were not documented. Instead, the program only documented whether each control passed or failed each test. GMM program officials stated that the planned assessment procedures are based on an agency template that was exported from a DHS compliance tool, and that FEMA security officials have been instructed by the DHS OCIO not to tailor or make any adjustments to the template language.
However, the assessment procedures outlined in NIST’s guidance are to serve as a starting point for organizations preparing their program-specific assessments. According to NIST, organizations are expected to select and tailor their assessment procedures for each security control from NIST’s list of suggested assessment options (e.g., review, analyze, or inspect policies, procedures, and related documentation). DHS OCIO officials stated that, consistent with NIST’s guidance, they expect that components will ensure they are in compliance with the minimum standards and will also add details and additional rigor, as appropriate, to tailor the planned security assessment procedures to fit their unique missions or needs. In November 2018, in response to our audit, DHS OCIO officials stated that they were meeting with FEMA OCIO officials to understand why they did not document the planned and actual assessment procedures performed by the assessors for GMM. Until FEMA ensures that detailed planned evaluation methods and actual evaluation procedures specific to GMM are defined, the program risks assessing security controls incorrectly, having controls that do not work as intended, and producing undesirable outcomes with respect to meeting the security requirements. In addition, the security assessment plan was not approved by FEMA’s OCIO before proceeding with the security assessment. Program officials stated that approval was not required for the security assessment plan prior to the development of the security assessment report. However, NIST guidance states that the purpose of the security assessment plan approval is to establish the appropriate expectations for the security control assessment. By not getting the security assessment plan approved by FEMA’s OCIO before security assessment reviews were conducted, GMM risks inconsistencies between the plan and the organization’s security objectives.
Finally, consistent with NIST guidance, GMM performed a security assessment in December 2017 of the engineering and test environment’s controls, which identified 36 vulnerabilities (23 critical- and high-impact vulnerabilities and 13 medium- and low-impact vulnerabilities). The program also documented these vulnerabilities and associated findings and recommendations in a security assessment report. GMM conducted initial remediation actions (i.e., remediation of vulnerabilities that should be corrected immediately) for 12 of the critical- and high-impact vulnerabilities, and a reassessment of those security controls confirmed that they were resolved by January 2018. The remaining 11 critical- and high-impact vulnerabilities and 13 medium- and low-impact vulnerabilities were to be addressed by corrective action plans as part of the authorization to operate process, which is discussed in the next section. The authorization to operate GMM’s engineering and test environment was granted on February 5, 2018. Among other things, this decision was based on the important stipulation that the remaining 11 critical- and high-impact vulnerabilities, which were associated with a multifactor authentication capability, would be addressed within 45 days, or by March 22, 2018. However, the program did not meet this deadline and, instead, approximately 2 months after this deadline passed, obtained a waiver to remediate these vulnerabilities by May 9, 2019. Program officials stated that they worked with FEMA OCIO officials to attempt to address these vulnerabilities by the initial deadline, but they were unsuccessful in finding a viable solution. Therefore, GMM program officials developed a waiver at the recommendation of the OCIO to provide additional time to develop a viable solution.
However, a multifactor authentication capability is essential to verifying that users are who they claim to be before they are granted access to the GMM engineering and test environment, reducing the risk of harmful actors accessing the system. In addition, as of September 2018, the program had not established corrective action plans for the 13 medium- and low-impact vulnerabilities. Program officials stated that they do not typically address low-impact vulnerabilities; however, this is in conflict with DHS guidance that specifies that corrective action plans must be developed for every weakness identified during a security control assessment and within a security assessment report. In response to our audit, in October 2018, GMM program officials developed these remaining corrective action plans. The plans indicated that these vulnerabilities were to be fully addressed by January 2019 and April 2019. While the program eventually took corrective actions in response to our audit by developing the missing plans, the GMM program initially failed to follow DHS’s guidance on preparing corrective action plans for all security vulnerabilities. Until GMM consistently follows DHS’s guidance, it will be difficult for FEMA to determine the extent to which GMM’s security weaknesses identified during its security control assessments are remediated. Additionally, as we have reported at other agencies, vulnerabilities can be indicators of more significant underlying issues and, thus, without appropriate management attention or prompt remediation, GMM is at risk of unnecessarily exposing the program to potential exploits. Moreover, GMM was required to assess all untested controls by March 7, 2018, or no later than 30 days after the approval of the authorization to operate; however, it did not meet this deadline. Specifically, we found that, by October 2018, FEMA had not fully tested 190 security controls in the GMM engineering and test environment.
These controls were related to areas such as security incident handling and allocation of resources required to protect an information system. In response to our findings, in October 2018, GMM program officials reported that they had since fully tested 27 controls and partially tested the remaining 163 controls. Program officials stated that testing of the 163 controls is a shared responsibility between GMM and other parties (e.g., the cloud service provider). They added that GMM had completed its portion of the testing but was in the process of verifying the completion of testing by other parties. Program officials stated that the untested controls were not addressed sooner, in part, because of errors resulting from configuration changes in the program’s compliance tool during a system upgrade, which have now been resolved. Until GMM ensures that all security controls have been tested, it remains at an increased risk of exposing programs to potential exploits. Consistent with the NIST framework, GMM established methods for assessing and monitoring security controls to be conducted after an authorization to operate has been approved. GMM has tailored its cybersecurity policies and practices for monitoring its controls to take into account the frequent and iterative pace with which system functionality is continuously being introduced into the GMM environment. Specifically, the GMM program established a process for assessing security impact changes to the system and conducting reauthorizations to operate within the rapid Agile delivery environment. As part of this process, GMM embedded cybersecurity experts on each Agile development team so that they are involved early and can impact security considerations from the beginning of requirements development through testing and deployment of system functionality. 
In addition, the process involves important steps for ensuring that the system moves from development to completion, while producing a secure and reliable system. For example, it includes procedures for creating, reviewing, and testing new system functionality. As the new system functionality is integrated with existing system functionality, it is to undergo automated testing and security scans in order to ensure that the integrity of the security of the system has not been compromised. Further, an automated process is to deploy the code if it passes all security scans, code tests, and code quality checks. GMM’s process for conducting a reauthorization to operate within the rapid delivery Agile development environment is to follow FEMA guidance that states that all high-level changes made to a FEMA IT system must receive approval from both a change advisory board and the FEMA Chief Information Officer. The board and FEMA Chief Information Officer are to focus their review and approval on scheduled releases and epics (i.e., collections of user stories). Additionally, the Information System Security Officer is to review each planned user story and, if it is determined that the proposed changes may impact the integrity of the authorization, the Information System Security Officer is to work with the development team to begin the process of updating the system authorization. Finally, GMM uses automated tools to track the frequency in which security controls are assessed and to ensure that required scanning data are received by FEMA for reporting purposes. Program officials stated that, in the absence of department-level and agency-level guidance, they have coordinated with DHS and FEMA OCIO officials to ensure that these officials are in agreement with GMM’s approach to continuous monitoring. 
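The automated deployment gate described above, in which code is deployed only if it passes all security scans, code tests, and code quality checks, can be sketched as follows. The check names and pass criteria are hypothetical, not GMM’s actual pipeline:

```python
# Sketch of an automated deployment gate: a build is deployed only when
# every configured check (security scans, tests, quality gates) passes.

def run_checks(build, checks):
    """Run each named check against the build; return (passed, failures)."""
    failures = [name for name, check in checks.items() if not check(build)]
    return (not failures, failures)

def deploy_if_clean(build, checks, deploy):
    """Deploy the build only if all checks pass; otherwise report blockers."""
    passed, failures = run_checks(build, checks)
    if passed:
        deploy(build)
        return "deployed"
    return "blocked: " + ", ".join(failures)

# Hypothetical checks standing in for security scans, automated tests,
# and code quality gates; a real pipeline would invoke external tools.
checks = {
    "security_scan": lambda b: b.get("vulnerabilities", 0) == 0,
    "unit_tests":    lambda b: b.get("tests_failed", 0) == 0,
    "code_quality":  lambda b: b.get("lint_errors", 0) == 0,
}

deployed = []
result = deploy_if_clean(
    {"vulnerabilities": 0, "tests_failed": 0, "lint_errors": 0},
    checks, deployed.append)
print(result)  # deployed
```

A build with any open finding is blocked rather than deployed, which is the property that lets security checks run at the pace of frequent Agile releases without a manual review of every change.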
By having monitoring control policies and procedures in place, FEMA management is positioned to more effectively prioritize and plan its risk response to current threats and vulnerabilities for the GMM program. Given FEMA’s highly complex grants management environment, with its many stakeholders, IT systems, and internal and external users, implementing leading practices for business process reengineering and IT requirements management is critical for success. FEMA has taken many positive steps, including ensuring executive leadership support for business process reengineering, documenting the agency’s grants management processes and performance improvement goals, defining initial IT requirements for the program, incorporating input from end user stakeholders into the development and implementation process, and taking recent actions to improve its delivery of planned IT requirements. Nevertheless, until the GMM program finalizes plans and time frames for implementing its organizational change management actions, plans and communicates system transition activities, and maintains clear traceability of IT requirements, FEMA will be limited in its ability to provide streamlined grants management processes and effectively deliver a modernized IT system to meet the needs of its large range of users. While GMM’s initial cost estimate was reliable, key assumptions about the program have changed since the initial estimate was developed and, therefore, the estimate no longer reflects the program’s current approach. The forthcoming updated cost estimate is expected to better reflect this approach. However, the program’s schedule to fully deliver GMM by September 2020 is unreliable, aggressive, and unrealistic. The delays the program has experienced to date further compound GMM’s schedule issues.
Without a robust schedule that has been informed by a realistic assessment of GMM’s development activities, leadership will be limited in its ability to make informed decisions on what additional increases in cost or reductions in scope might be needed to achieve its goals. Further, FEMA’s implementation of cybersecurity practices for GMM in the areas of system categorization, security control selection and implementation, and ongoing monitoring will help the program. However, GMM lacked essential details for evaluating security controls, did not approve the security assessment plan before proceeding with the security assessment, did not follow DHS’s guidance to develop corrective action plans for all security vulnerabilities, and did not fully test all security controls. As a result, the GMM engineering and test environment remains at an increased risk of exploitation. We are making eight recommendations to FEMA: The FEMA Administrator should ensure that the GMM program management office finalizes the organizational change management plan and time frames for implementing change management actions. (Recommendation 1) The FEMA Administrator should ensure that the GMM program management office plans and communicates its detailed transition activities to its affected customers before they transition to GMM and undergo significant changes to their processes. (Recommendation 2) The FEMA Administrator should ensure that the GMM program management office implements its planned changes to its processes for documenting requirements for future increments and ensures it maintains traceability among key IT requirements documents. (Recommendation 3) The FEMA Administrator should ensure that the GMM program management office updates the program schedule to address the leading practices for a reliable schedule identified in this report.
(Recommendation 4) The FEMA Administrator should ensure that the FEMA OCIO defines sufficiently detailed planned evaluation methods and actual evaluation methods for assessing security controls. (Recommendation 5) The FEMA Administrator should ensure that the FEMA OCIO approves a security assessment plan before security assessment reviews are conducted. (Recommendation 6) The FEMA Administrator should ensure that the GMM program management office follows DHS guidance on preparing corrective action plans for all security vulnerabilities. (Recommendation 7) The FEMA Administrator should ensure that the GMM program management office fully tests all of its security controls for the system. (Recommendation 8) DHS provided written comments on a draft of this report, which are reprinted in appendix IV. In its comments, the department concurred with all eight of our recommendations and provided estimated completion dates for implementing each of them. For example, with regard to recommendation 4, the department stated that FEMA plans to update the GMM program schedule to address the leading practices for a reliable schedule by April 30, 2019. In addition, for recommendation 7, the department stated that FEMA plans to ensure that corrective action plans are prepared by July 31, 2019, to address all identified security vulnerabilities for GMM. If implemented effectively, the actions that FEMA plans to take in response to the recommendations should address the weaknesses we identified. We also received technical comments from DHS and FEMA officials, which we incorporated, as appropriate. We are sending copies of this report to the Secretary of Homeland Security and interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4456 or harriscc@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Our objectives were to (1) determine the extent to which the Federal Emergency Management Agency (FEMA) is implementing leading practices for reengineering its grants management business processes and incorporating business needs into Grants Management Modernization (GMM) information technology (IT) requirements; (2) assess the reliability of the program’s estimated costs and schedule; and (3) determine the extent to which FEMA is addressing key cybersecurity practices for GMM. To address the first objective, we reviewed GAO’s Business Process Reengineering Assessment Guide and Software Engineering Institute’s Capability Maturity Model for Integration for Development to identify practices associated with business process reengineering and IT requirements management. We then selected six areas that, in our professional judgment, represented foundational practices that were of particular importance to the successful implementation of an IT modernization effort that is using Agile development processes. We also selected the practices that were most relevant based on where GMM was in the system development lifecycle and we discussed the practice areas with FEMA officials. 
The practices are:

1. Ensuring executive leadership support for process reengineering.
2. Assessing the current and target business environment and business performance goals.
3. Establishing plans for implementing new business processes.
4. Establishing clear, prioritized, and traceable IT requirements.
5. Tracking progress in delivering IT requirements.
6. Incorporating input from end user stakeholders.

We also reviewed selected chapters of GAO’s draft Agile Assessment Guide (Version 6A), which is intended to establish a consistent framework based on best practices that can be used across the federal government for developing, implementing, managing, and evaluating agencies’ IT investments that rely on Agile methods. To develop this guide, GAO worked closely with Agile experts in the public and private sector; some chapters of the guide are considered more mature because they have been reviewed by the expert panel. We reviewed these chapters to ensure that our expectations for how FEMA should apply the six practices for business process reengineering and IT requirements management are appropriate for an Agile program and are consistent with the draft guidance that is under development. Additionally, since Agile development programs may use different terminology to describe their software development processes, the Agile terms used in this report (e.g., increment, sprint, epic, etc.) are specific to the GMM program. We obtained and analyzed FEMA grants management modernization documentation, such as current and target grants management business processes, acquisition program baseline, operational requirements document, concept of operations, requirements analyses workbooks, Grants Management Executive Steering Group artifacts, stakeholder outreach artifacts, Agile increment- and sprint-level planning and development artifacts, and the requirements backlog. We assessed the program documentation against the selected practices to determine the extent to which the agency had implemented them.
We then assessed each practice area as:

fully implemented—FEMA provided complete evidence that showed it fully implemented the practice area;

partially implemented—FEMA provided evidence that showed it partially implemented the practice area; or

not implemented—FEMA did not provide evidence that showed it implemented any of the practice area.

Additionally, we observed Agile increment and sprint development activities at GMM facilities in Washington, D.C. We also observed a demonstration of how the program manages its lower level requirements (i.e., user stories and epics) and maintains traceability of the requirements using an automated tool at GMM facilities in Washington, D.C. We also interviewed FEMA officials, including the GMM Program Executive, GMM Program Manager, GMM Business Transformation Team Lead, and Product Owner regarding their efforts to streamline grants management business processes, collect and incorporate stakeholder input, and manage GMM’s requirements. In addition, we interviewed FEMA officials from four out of 16 grant program offices and two out of 10 regional offices to obtain contextual information and illustrative examples of FEMA’s efforts to reengineer grants management business processes and collect business requirements for GMM. Specifically, we selected the four grant program offices based on a range of grant programs managed, legacy systems used, and the amount of grant funding awarded. We also sought to select a cross section of different characteristics, such as selecting larger grant program offices, as well as smaller offices. In addition, we ensured that our selection included the Assistance to Firefighters Grants (AFG) program office because officials in this office represent the first GMM users and, therefore, are more actively involved with the program’s Agile development practices. Based on these factors, we selected: Public Assistance Division, Individual Assistance Division, AFG, and National Fire Academy.
Additionally, the four selected grant program offices are responsible for 16 of the total 45 grant programs and are users of five of the nine primary legacy IT systems. The four selected grant program offices also represent about 68 percent of the total grant funding awarded by FEMA from fiscal years 2005 through 2016. We selected two regional offices based on (1) the largest amount of total FEMA grant funding for fiscal years 2005 through 2016—Region 6 located in Denton, Texas; and (2) the highest percentage of AFG funding compared to the office’s total grant funding awarded from fiscal years 2005 through 2016—Region 5 located in Chicago, Illinois. To assess the reliability of data from the program’s automated IT requirements management tool, we interviewed knowledgeable officials about the quality control procedures used by the program to assure accuracy and completeness of the data. We also compared the data to other relevant program documentation on GMM requirements. We determined that the data used were sufficiently reliable for the purpose of evaluating GMM’s practices for managing IT requirements. For our second objective, to assess the reliability of GMM’s estimated costs and schedule, we reviewed documentation on GMM’s May 2017 lifecycle cost estimate and on the program’s schedule, dated May 2018. To assess the reliability of the May 2017 lifecycle cost estimate, we evaluated documentation supporting the estimate, such as the cost estimating model, the report on GMM’s Cost Estimating Baseline Document and Life Cycle Cost Estimate, and briefings provided to the Department of Homeland Security (DHS) and FEMA management regarding the cost estimate. We assessed the cost estimating methodologies, assumptions, and results against leading practices for developing a comprehensive, accurate, well-documented, and credible cost estimate, identified in GAO’s Cost Estimating and Assessment Guide. 
We also interviewed program officials responsible for developing and reviewing the cost estimate to understand their methodology, data, and approach for developing the estimate. We found that the cost data were sufficiently reliable. To assess the reliability of the May 2018 GMM program schedule, we evaluated documentation supporting the schedule, such as the integrated master schedule, acquisition program baseline, and Agile artifacts. We assessed the schedule documentation against leading practices for developing a comprehensive, well-constructed, credible, and controlled schedule, identified in GAO’s Schedule Assessment Guide. We also interviewed GMM program officials responsible for developing and managing the program schedule to understand their practices for creating and maintaining the schedule. We noted in our report the instances where the quality of the schedule data impacted the reliability of the program’s schedule. For both the cost estimate and program schedule, we assessed each leading practice as:

fully addressed—FEMA provided complete evidence that showed it implemented the entire practice area;

substantially addressed—FEMA provided evidence that showed it implemented more than half of the practice area;

partially addressed—FEMA provided evidence that showed it implemented about half of the practice area;

minimally addressed—FEMA provided evidence that showed it implemented less than half of the practice area; or

not addressed—FEMA did not provide evidence that showed it implemented any of the practice area.

Finally, we provided FEMA with draft versions of our detailed analyses of the GMM cost estimate and schedule. This was done to verify that the information on which we based our findings was complete, accurate, and up-to-date.
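The five-level scale above maps how much of a practice area the evidence supports to a qualitative rating. A minimal sketch of that mapping, assuming an illustrative numeric band of 0.4 to 0.6 for "about half" (the report describes the levels only qualitatively, so the thresholds here are our own assumption):

```python
def rate_practice(evidenced_share: float) -> str:
    """Map the share of a practice area supported by evidence (0.0 to 1.0)
    to the five-level scale described above. The numeric band used for
    "about half" (0.4 to 0.6) is an illustrative assumption."""
    if evidenced_share >= 1.0:
        return "fully addressed"
    if evidenced_share > 0.6:
        return "substantially addressed"  # more than half
    if evidenced_share >= 0.4:
        return "partially addressed"      # about half
    if evidenced_share > 0.0:
        return "minimally addressed"      # less than half
    return "not addressed"                # no evidence provided

# A practice area with evidence for 7 of 10 elements:
print(rate_practice(7 / 10))
```

Under these assumed thresholds, a practice area with evidence for 7 of 10 elements would rate as substantially addressed.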
Regarding our third objective, to determine the extent to which FEMA is addressing key cybersecurity practices for GMM, we reviewed documentation regarding DHS and FEMA cybersecurity policies and guidance, and FEMA’s authorization to operate for the program’s engineering and test environment. We evaluated the documentation against all six cybersecurity practices identified in the National Institute of Standards and Technology’s (NIST) Risk Management Framework. While NIST’s Risk Management Framework identifies six total practices, for reporting purposes, we combined two interrelated practices—selection of security controls and implementation of security controls—into a single practice. The resulting five practices were: categorizing the system based on security risk, selecting and implementing security controls, assessing security controls, obtaining an authorization to operate the system, and monitoring security controls on an ongoing basis. We obtained and analyzed key artifacts supporting the program’s efforts to address these risk management practices, including the program’s System Security Plan, the Security Assessment Plan and Report, Authorization to Operate documentation, and the program’s continuous monitoring documentation. We also interviewed officials from the GMM program office and FEMA’s Office of the Chief Information Officer, such as the GMM Security Engineering Lead, GMM Information System Security Officer, and FEMA’s Acting Chief Information Security Officer, regarding their efforts to assess, document, and review security controls for GMM. We assessed the evidence against the five practices to determine the extent to which the agency had addressed them. 
We then assessed each practice area as:

fully addressed—FEMA provided complete evidence that showed it fully implemented the practice area;

partially addressed—FEMA provided evidence that showed it partially implemented the practice area; or

not addressed—FEMA did not provide evidence that showed it implemented any of the practice area.

To assess the reliability of data from the program’s automated security controls management tool, we interviewed knowledgeable officials about the quality control procedures used by the program to assure accuracy and completeness of the data. We also compared the data to other relevant program documentation on GMM security controls for the engineering and test environment. We found that some of the security controls data we examined were sufficiently reliable for the purpose of evaluating FEMA’s cybersecurity practices for GMM, and we noted in our report the instances where the accuracy of the data impacted the program’s ability to address key cybersecurity practices. We conducted this performance audit from December 2017 to April 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Federal Emergency Management Agency (FEMA) awards many different types of grants to state, local, and tribal governments and nongovernmental entities. These grants are to help communities prevent, prepare for, protect against, mitigate the effects of, respond to, and recover from disasters and terrorist attacks. Agile software development is a type of incremental development that calls for the rapid delivery of software in small, short increments.
The use of an incremental approach is consistent with the Office of Management and Budget’s guidance as specified in its information technology (IT) Reform Plan, as well as the legislation commonly referred to as the Federal Information Technology Acquisition Reform Act. Many organizations, especially in the federal government, are accustomed to using a waterfall software development model, which typically consists of long, sequential phases, and differs significantly from the Agile development approach. Agile practices integrate planning, design, development, and testing into an iterative lifecycle to deliver software early and often. Figure 7 provides a depiction of software development using the Agile approach, as compared to a waterfall approach. The frequent iterations of Agile development are intended to effectively measure progress, reduce technical and programmatic risk, and respond to stakeholder feedback by changing IT requirements more quickly than traditional methods allow. Despite these intended benefits, organizations adopting Agile must overcome challenges in making significant changes to how they are accustomed to developing software. The significant differences between Agile and waterfall development impact how IT programs are planned, implemented, and monitored in terms of cost, schedule, and scope. For example, in waterfall development, significant effort is devoted upfront to document detailed plans and all IT requirements for the entire scope of work at the beginning of the program, and cost and schedule can be varied to complete that work. However, for Agile programs the precise details are unknown upfront, so initial planning of cost, scope, and timing would be conducted at a high level, and then supplemented with more specific plans for each iteration.
While cost and schedule are set for each iteration, requirements for each iteration (or increment) can be variable as they are learned over time and revised to reflect experiences from completed iterations and to accommodate changing priorities of the end users. The differences in these two software development approaches are shown in figure 8. As figure 8 suggests, the benefit of using traditional program management practices, such as establishing a cost estimate or a robust schedule, is not obvious. However, unlike a theoretical environment, many government programs may not have the autonomy to manage completely flexible scope, as they must deliver certain minimal specifications with the cost and schedule provided. In those cases, it is vital for the team to understand and differentiate the IT requirements that are “must haves” from the “nice to haves” early in the planning effort. This would help facilitate delivery of the “must have” requirements first, thereby providing users with the greatest benefits as soon as possible. In addition to the contact named above, the following staff made key contributions to this report: Shannin G. O’Neill (Assistant Director), Jeanne Sung (Analyst in Charge), Andrew Beggs, Rebecca Eyler, Kendrick Johnson, Thomas J. Johnson, Jason Lee, Jennifer Leotta, and Melissa Melvin.
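The "must haves" first planning described in this appendix can be sketched as a simple iteration-planning routine for a program with fixed per-iteration cost and schedule. The requirement names, story-point sizes, and capacity below are hypothetical illustrations, not GMM artifacts:

```python
# Illustrative sketch: fill a fixed-capacity Agile iteration with
# "must have" requirements before any "nice to have" items.
# Backlog entries are (name, story_points, must_have) tuples; all
# names and sizes here are hypothetical.
def plan_iteration(backlog, capacity_points):
    selected, remaining = [], capacity_points
    # Sorting on "not must_have" puts every must-have first (stable sort
    # preserves the backlog's priority order within each group).
    for name, points, must_have in sorted(backlog, key=lambda r: not r[2]):
        if points <= remaining:
            selected.append(name)
            remaining -= points
    return selected

backlog = [
    ("submit application", 5, True),    # must have
    ("award letter export", 3, False),  # nice to have
    ("review workflow", 8, True),       # must have
    ("custom dashboards", 5, False),    # nice to have
]
print(plan_iteration(backlog, 16))
```

With a 16-point capacity, both must-haves are scheduled before any nice-to-have is considered, which is the essence of delivering the greatest benefits to users first.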
|
FEMA, a component of DHS, annually awards billions of dollars in grants to help communities prepare for, mitigate the effects of, and recover from major disasters. However, FEMA's complex IT environment supporting grants management consists of many disparate systems. In 2008, the agency attempted to modernize these systems but experienced significant challenges. In 2015, FEMA initiated a new endeavor (the GMM program) aimed at streamlining and modernizing the grants management IT environment. GAO was asked to review the GMM program. GAO's objectives were to (1) determine the extent to which FEMA is implementing leading practices for reengineering its grants management processes and incorporating needs into IT requirements; (2) assess the reliability of the program's estimated costs and schedule; and (3) determine the extent to which FEMA is addressing key cybersecurity practices. GAO compared program documentation to leading practices for process reengineering and requirements management, cost and schedule estimation, and cybersecurity risk management, as established by the Software Engineering Institute, National Institute of Standards and Technology, and GAO. Of six important leading practices for effective business process reengineering and information technology (IT) requirements management, the Federal Emergency Management Agency (FEMA) fully implemented four and partially implemented two for the Grants Management Modernization (GMM) program (see table). Specifically, FEMA ensured senior leadership commitment, took steps to assess its business environment and performance goals, took recent actions to track progress in delivering IT requirements, and incorporated input from end user stakeholders. However, FEMA has not yet fully established plans for implementing new business processes or established complete traceability of IT requirements. 
Until FEMA fully implements the remaining two practices, it risks delivering an IT solution that does not fully modernize FEMA's grants management systems. While GMM's initial May 2017 cost estimate of about $251 million was generally consistent with leading practices for a reliable, high-quality estimate, it no longer reflects current assumptions about the program. FEMA officials stated in December 2018 that they had completed a revised cost estimate, but it was undergoing departmental approval. GMM's program schedule was inconsistent with leading practices; of particular concern was that the program's final delivery date of September 2020 was not informed by a realistic assessment of GMM development activities, but rather was determined by imposing an unsubstantiated delivery date. Developing sound cost and schedule estimates is necessary to ensure that FEMA has a clear understanding of program risks. Of five key cybersecurity practices, FEMA fully addressed three and partially addressed two for GMM. Specifically, it categorized GMM's system based on security risk, selected and implemented security controls, and monitored security controls on an ongoing basis. However, the program had not initially established corrective action plans for 13 medium- and low-risk vulnerabilities. This conflicts with the Department of Homeland Security's (DHS) guidance that specifies that corrective action plans must be developed for every weakness identified. Until FEMA, among other things, ensures that the program consistently follows the department's guidance on preparing corrective action plans for all security vulnerabilities, GMM's system will remain at increased risk of exploitation. GAO is making eight recommendations to FEMA to implement leading practices related to reengineering processes, managing requirements, scheduling, and implementing cybersecurity. DHS concurred with all recommendations and provided estimated dates for implementing each of them.
|
OPA amended the Clean Water Act and established provisions expanding and consolidating the federal government’s authority to prevent and respond to oil spills. This includes providing the federal government with the authority to perform cleanup immediately after a spill using federal resources, monitor the response efforts of the spiller, or direct the spiller’s cleanup activities. OPA also established a “polluter pays” system, placing the primary burden of liability and costs of oil spills on the responsible party for the vessel or facility from which oil is discharged. Under this system, the responsible party assumes, up to a specified limit, the burden of paying for spill costs, including both removal costs (for cleaning up the spill) and damage claims (for restoring the environment and paying compensation to parties economically harmed by the spill). OPA authorized the use of the Oil Spill Liability Trust Fund to fund up to $1 billion per spill incident for pollution removal costs and damages resulting from oil spills and mitigation of a substantial threat of an oil spill in navigable U.S. waters when a responsible party cannot or does not pay for the cleanup. After the Deepwater Horizon oil spill, the Resources and Ecosystems Sustainability, Tourist Opportunities, and Revived Economies of the Gulf Coast States Act of 2012 (RESTORE Act) established a new trust fund for programs, projects, and activities that restore and protect the environment and economy of the Gulf Coast region, as well as the RESTORE Council, which is to summarize its activities for each calendar year in an annual report to Congress. In addition, NOAA finalized regulations in 1996 for assessing natural resource damages resulting from a discharge or substantial threat of a discharge of oil.
The NRDA regulations recognize that OPA provides for designating federal, state, and tribal officials as natural resource trustees and authorizes them to make claims against the parties responsible for the injuries. The regulations define injury as an observable or measurable adverse change in a natural resource or impairment of a natural resource service (15 C.F.R. § 990.11). Under the NRDA regulations, a trustee council’s work usually occurs in three phases: (1) a pre-assessment phase, (2) a restoration planning phase, and (3) a restoration implementation phase. During the pre-assessment phase, the trustees are to determine whether they have jurisdiction to pursue restoration. In the restoration planning phase, the trustees are to evaluate information on potential injuries and use that information to determine the need for, type of, and scale of restoration. Finally, the restoration implementation phase covers the process for carrying out restoration. For both of the spills discussed in this report, federal and state trustees entered into legal settlements with responsible parties to resolve certain claims. The Exxon Valdez Trustee Council is in the restoration implementation phase, while the Deepwater Horizon Trustee Council is in both the restoration planning and implementation phases. The National Oil and Hazardous Substances Pollution Contingency Plan, commonly known as the National Contingency Plan, contains the federal government’s framework and operative requirements for preparing for and responding to discharges of oil and releases of hazardous substances, pollutants, and contaminants. It establishes that federal oil spill response authority is determined by the location of the spill: the Coast Guard has response authority in the U.S. coastal zone, and EPA covers the inland zone. In addition, NOAA is to provide scientific analysis and consultation during oil spill response activities in the coastal zones.
The Exxon Valdez oil spill in Alaska’s Prince William Sound in 1989 contaminated portions of national wildlife refuges, national and state parks, a national forest, and a state game sanctuary—killing or injuring thousands of sea birds, marine mammals, and fish and disrupting the ecosystem in its path. In October 1991, the U.S. District Court for the District of Alaska approved a civil settlement and criminal plea agreement among Exxon, the federal government, and the state of Alaska for recovery of natural resource damages resulting from the oil spill. Exxon agreed to pay $900 million in civil claims in 11 annual payments and $125 million to resolve various criminal charges. In August 1991, the federal government and the state of Alaska signed a memorandum of agreement and consent decree to act as co-trustees in collecting and using natural resource damage payments from the spill. The 1991 memorandum states that all decisions related to injury assessment, restoration activities, or other use of the natural resource damage payments are to be made by unanimous agreement of the trustees. According to the memorandum, the trustees are to use the natural resource damage payments to restore, replace, rehabilitate, enhance, or acquire the equivalent of the natural resources injured as a result of the oil spill and the reduced or lost services provided by such resources. The memorandum also recognized that EPA was designated to coordinate restoration activities on behalf of the federal government. In 1992, the trustees established the Exxon Valdez Trustee Council to ensure coordination and cooperation in restoring the natural resources injured, lost, or destroyed by the spill. 
In 1994, the Exxon Valdez Trustee Council prepared a restoration plan for use of the funds, which consisted of five categories: (1) general restoration; (2) habitat protection and acquisition; (3) monitoring and research; (4) restoration reserve; and (5) public information, science management, and administration. The restoration plan noted that in addition to restoring natural resources, funds may be used to restore reduced or lost services (including human uses) from injured natural resources, which includes subsistence, commercial fishing, recreation, and tourism services. The Exxon Valdez Trustee Council is advised by members of the public and a panel of scientists, and its Executive Director manages the day-to-day administrative functions. The Exxon Valdez Trustee Council has published documents that are on the council’s public website, such as the Injured Resources and Services list (current as of 2014), lingering oil updates (current as of 2016), annual reports (current as of 2018), and annual project work plans (current as of 2018). The Deepwater Horizon oil spill in the Gulf of Mexico in 2010 resulted in the tragic loss of 11 lives and a devastating environmental impact and affected the livelihoods of thousands of Gulf Coast citizens and businesses. In April 2016, BP, the federal government, and the five Gulf Coast states agreed to a settlement resolving multiple claims for federal civil penalties and natural resource damages related to the spill totaling up to $14.9 billion. 
Under the terms of the consent decree for the settlement, BP must pay up to $8.8 billion in natural resource damages under OPA, which includes $1 billion BP previously committed to pay for early restoration projects, and up to $700 million to address injuries that were unknown to the trustees as of July 2, 2015, including for any associated Natural Resource Damage assessment and planning activities, or to adapt, enhance, supplement, or replace restoration projects or approaches that the trustees initially selected. BP is to make these payments into the Deepwater Horizon Oil Spill Natural Resource Damages Fund managed by the Department of the Interior (Interior), to be used jointly by the federal and state trustees of the Deepwater Horizon Trustee Council for restoration of injured or lost natural resources. Two additional, separate restoration funds are to receive money from the BP civil and criminal penalties: (1) the Gulf Coast Restoration Trust Fund established under the RESTORE Act is to receive 80 percent of the $5.5 billion Clean Water Act civil penalty paid by BP to support environmental restoration and economic recovery projects in the Gulf Coast region and (2) the Gulf Environmental Benefit Fund managed by the nonprofit National Fish and Wildlife Foundation is to receive $2.394 billion in criminal penalties. For more information on the amount and distribution of the BP civil and criminal payments, see figure 1. Prior to reaching the settlement in 2016, BP signed an agreement in April 2011 to provide $1 billion toward early restoration projects in the Gulf of Mexico to address injuries to natural resources caused by the spill. Early restoration projects may be developed prior to the completion of the injury assessment, which can take months or years to complete. Payments by BP for early restoration projects are counted towards its liability for the $8.8 billion in natural resource damages resulting from the spill. 
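The allocations above involve only simple arithmetic on the figures stated in this report; a short script (amounts in billions of dollars, taken from the report text) makes the 80 percent computation and the effect of the $1 billion early restoration credit explicit:

```python
# Tally the major Deepwater Horizon settlement flows described above.
# All figures come from the report text; amounts are in billions of dollars.
nrd_total = 8.8            # natural resource damages under OPA ("up to")
early_restoration = 1.0    # early restoration payments credited against the $8.8 billion
cwa_civil_penalty = 5.5    # Clean Water Act civil penalty paid by BP
restore_share = 0.80       # RESTORE Act share of the civil penalty
nfwf_criminal = 2.394      # criminal penalties to the Gulf Environmental Benefit Fund

restore_trust_fund = restore_share * cwa_civil_penalty   # 80 percent of $5.5 billion
remaining_nrd_liability = nrd_total - early_restoration  # NRD left after the credit

print(f"Gulf Coast Restoration Trust Fund: ${restore_trust_fund:.1f} billion")
print(f"Remaining NRD liability after early restoration credit: ${remaining_nrd_liability:.1f} billion")
print(f"Gulf Environmental Benefit Fund (criminal penalties): ${nfwf_criminal:.3f} billion")
```

This yields a $4.4 billion RESTORE Act trust fund share, consistent with the 80 percent allocation described above.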
The designated trustees are to administer these payments for natural resources, according to OPA. The designated trustees include federal officials from Interior, NOAA, the U.S. Department of Agriculture, and EPA, as well as state officials from the five Gulf States that were affected by the spill—Alabama, Florida, Louisiana, Mississippi, and Texas. In February 2016, the Deepwater Horizon Trustee Council finalized the Programmatic Damage Assessment and Restoration Plan (programmatic restoration plan) that provided the council’s injury assessment and proposed a framework for identifying and developing project-specific restoration plans. The five goals of the programmatic restoration plan are to (1) restore and conserve habitat; (2) restore water quality; (3) replenish and protect living coastal and marine resources; (4) provide and enhance recreational opportunities; and (5) provide for monitoring, adaptive management, and administrative oversight to support restoration implementation. According to the 2016 programmatic restoration plan, the Deepwater Horizon Trustee Council is to coordinate with other Deepwater Horizon restoration programs, such as those funded by the RESTORE Act, the National Fish and Wildlife Foundation, and other entities. The 2016 programmatic restoration plan established Trustee Implementation Groups for each of the seven designated restoration areas—one for each of the five Gulf States, the Region-Wide implementation group, and the Open Ocean implementation group. Each trustee implementation group is to plan, decide on, and implement restoration activities, including monitoring and adaptive management, for the funding that the consent decree allocated to its restoration area. Federal trustees serve in all the trustee implementation groups, and state trustees serve on the Region-Wide implementation group and the trustee implementation groups for their states; decisions are to be made by consensus. 
The Deepwater Horizon Trustee Council is to coordinate the work of the trustee implementation groups by establishing standard procedures and practices to ensure consistency in developing and implementing restoration activities. OPA created the interagency committee to provide a comprehensive, coordinated federal oil pollution research program and promote cooperation with industry, universities, research institutions, state governments, and other nations through information sharing, coordinated planning, and joint funding of projects. It also designated member agencies and authorized the President to designate other federal agencies as members of the interagency committee. As of November 2018, the interagency committee consisted of 15 federal members representing independent agencies, departments, and department components. OPA directs that a representative from the Coast Guard serve as the chair, and the interagency committee charter designates that a representative from NOAA, EPA, or the Bureau of Safety and Environmental Enforcement (BSEE) serve as the vice-chair and that the committee’s Executive Director provide staff support. The interagency committee’s charter notes that it shall meet at least semi-annually or at the decision of the chair. According to OPA, the chair’s duties include reporting biennially to Congress on the interagency committee’s activities related to oil pollution research, development, and demonstration programs. OPA also required the interagency committee to prepare and submit a research and technology plan, which has been updated periodically. In September 2015, the interagency committee released the research and technology plan for fiscal years 2015 through 2021. This research and technology plan updates the interagency committee’s 1992 plan, revised in 1997, and provides a new baseline of the nation’s oil pollution research needs. 
The plan is primarily directed at federal agencies with responsibilities for conducting or funding such research, but it can also serve as a research planning guide for nonfederal stakeholders such as industry, academia, state governments, research institutions, and other nations, according to interagency committee documents. The 2015 research and technology plan established a common language and planning framework to enable researchers and interested parties to identify and track research in four classes or categories that represent general groupings of oil spill research:

Prevention: Research that supports developing practices and technologies designed to predict, reduce, or eliminate the likelihood of discharges or minimize the volume of oil discharged into the environment.

Preparedness: Research that supports the activities, programs, and systems developed prior to an oil spill to improve the planning, decision-making, and management processes needed for responding to and recovering from oil spills.

Response: Research that supports techniques and technologies that address the immediate and short-term effects of an oil spill and encompasses all activities involved in containing, cleaning up, treating, and disposing of oil to (1) maintain the safety of human life, (2) stabilize a situation to preclude further damage, and (3) minimize adverse environmental and socioeconomic effects.

Injury assessment and restoration: Research that involves collecting and analyzing information to (1) evaluate the nature and extent of environmental, human health, and socioeconomic injuries resulting from an incident; (2) determine the actions needed to restore natural resources and their services to pre-spill conditions; and (3) make the environment and public whole after interim losses.
Following the Exxon Valdez and Deepwater Horizon oil spills, federal and state trustees formed trustee councils and have used the restoration trust funds to authorize money for activities in accordance with approved restoration plans. The Exxon Valdez Trustee Council has largely completed restoration work, authorizing approximately $985 million, roughly 86 percent of the restoration trust fund, primarily for habitat protection and for general restoration, research, and monitoring activities. As a result of these restoration activities and natural recovery, the majority of the injured natural resources and human services in the spill area have recovered or are recovering, according to the council’s assessment. However, the Exxon Valdez Trustee Council continues to monitor the lack of recovery of Pacific herring and the presence of lingering oil in the spill area. The Deepwater Horizon Trustee Council is completing early restoration work and initial post-settlement restoration planning. It has authorized approximately $1.1 billion for restoration activities, roughly 13 percent of the restoration trust fund, and spent $368 million, roughly 5 percent of the restoration trust fund, primarily on habitat protection and enhancing recreation, such as building boat ramps and other recreational facilities. Exxon’s payments to the restoration trust fund totaled approximately $900 million, and interest earnings, as of January 2016, totaled $247 million. From 1992 to 2018, the Exxon Valdez Trustee Council authorized the expenditure of approximately $985 million, or 86 percent of the roughly $1.15 billion in principal plus interest in the restoration trust fund, primarily on habitat protection ($445 million) and general restoration, research, and monitoring of injured natural resources ($234 million).
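The 86 percent figure follows directly from the reported principal and interest. As a minimal arithmetic sketch (figures are taken from this section; the rounding is ours):

```python
# Exxon Valdez restoration trust fund: check the reported spending share.
# Reported figures: ~$900 million in payments from Exxon, ~$247 million in
# interest earnings (as of January 2016), ~$985 million authorized 1992-2018.
principal = 900_000_000
interest = 247_000_000
fund_total = principal + interest      # roughly $1.15 billion
authorized = 985_000_000

print(f"fund total:  ${fund_total / 1e9:.2f} billion")
print(f"share spent: {authorized / fund_total:.0%}")  # roughly 86 percent
```

Because interest continued to accrue after January 2016, these figures are approximate rather than an exact reconciliation of the fund balance.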
The remaining unspent restoration trust fund balance as of January 2018 was $210 million, split evenly between the habitat investment subaccount for future habitat protection activities and the research investment subaccount for future general restoration activities (see fig. 2). According to the Exxon Valdez Trustee Council, as of January 2018, it had spent approximately $445 million to protect and enhance habitat, including acquiring 628,000 acres of lands and interest in lands. As outlined in the trustee council’s 1994 restoration plan, the habitat program is intended to minimize further injury to resources and services and allow recovery to continue with the least interference by authorizing funds for federal and state resource agencies to acquire title or conservation easements on ecologically valuable lands. For example, in 2017 the Exxon Valdez Trustee Council authorized about $5.5 million to acquire a conservation easement on 1,060 acres at the northeastern end of Kodiak Island in the Gulf of Alaska, known as Termination Point. The trustee council authorized funds for this acquisition to (1) protect the property from timber logging and development and (2) provide habitat and feeding areas for marine birds injured by the spill, such as marbled murrelets and pigeon guillemots. According to the Exxon Valdez Trustee Council, habitat acquisitions prevent additional injury to species during recovery, promote restoration of spill-affected resources and services, and are the primary tool for acquiring equivalent resources harmed by the spill. The habitat program also supports habitat enhancement projects, which, according to the Exxon Valdez Trustee Council, aim to repair human-caused harm to natural resources, their habitats, and the services they provide to humans.
For example, the trustee council authorized $2.2 million to the Alaska Department of Natural Resources to stabilize stream bank vegetation and install elevated steel walkways to provide less-damaging access to the Kenai River, a popular fishing destination. The Exxon Valdez Trustee Council has spent roughly $234 million from October 1992 to January 2018 on hundreds of general restoration, monitoring, and research activities. As outlined in the 1994 restoration plan, general restoration includes activities that manipulate the environment, manage human use, and reduce marine pollution. Research and monitoring activities also provide information on the status and condition of resources and services, including (1) whether they are recovering, (2) whether restoration activities are successful, and (3) factors that may be constraining recovery, according to the 1994 plan. For example, since 2012, the trustee council has authorized money for a program called Gulf Watch Alaska that provides long-term monitoring data on the status of environmental conditions—such as water temperature and salinity—and the marine and nearshore ecosystems. Gulf Watch Alaska provides data to federal, state, and tribal agencies, as well as the public, that inform resource conservation programs and aid in the management of species injured by the spill. According to the trustee council, its expenditures for research projects have resulted in hundreds of peer-reviewed scientific studies and increased knowledge about the marine environment that benefits the injured resources. The Exxon Valdez Trustee Council has spent roughly $89 million from October 1992 to January 2018 on administration, science management, and public information. According to the 1994 restoration plan, expenditures under this category cover the cost to (1) prepare work plans, (2) negotiate habitat purchases, (3) provide independent scientific review, (4) involve the public, and (5) operate the restoration program.
Although the Exxon Valdez Trustee Council set a target of 5 percent for administrative costs in the 1994 restoration plan, according to a written statement that the trustee council provided, administrative costs averaged around 6 percent from 1994 through 2001. The trustees and council staff we interviewed told us that in hindsight the 5 percent target was unrealistic because it did not reflect actual administrative costs at the time; some such costs were instead included in project budgets or absorbed by federal and state agencies. Therefore, in 2012, the Exxon Valdez Trustee Council changed the way it accounted for administrative costs and has since included these costs in the administrative budget. According to the trustee council, under the new accounting policy, administrative costs were recalculated and estimated at around 19 percent for the period from 2002 through 2018. The remaining $210 million Exxon Valdez restoration trust fund balance is held by the Alaska Department of Revenue in two interest-bearing subaccounts. As of January 2018, the research subaccount and the habitat subaccount each held approximately $105 million. In the 1994 restoration plan, the Exxon Valdez Trustee Council established the need for a restoration reserve to ensure that restoration activities could continue to be supported after the final annual payments from the Exxon Corporation were received in September 2001. According to the 1994 restoration plan, the trustee council planned to set aside $12 million per year for a period of 9 years into the restoration reserve, totaling $108 million plus interest. In 1999, the Exxon Valdez Trustee Council resolved to transfer the estimated remaining balance of $170 million to the restoration reserve and split the money into two subaccounts. Since 2002, the trustee council has made allocations for its annual work plans and ongoing habitat acquisition from these accounts.
In 2010, the trustee council established a 20-year strategic plan to spend the remaining trust funds using four 5-year incremental work plans. In November 2010, the trustee council issued a call for project proposals for the first 5-year work plan, covering fiscal years 2012 through 2016. Although the Exxon Valdez Trustee Council solicited proposals on a 5-year cycle, it has authorized money for each project annually. In a written statement, the trustee council also stated that it continues to pursue and acquire from willing sellers remaining parcels of land that prior studies have identified as high-priority habitat. According to the Exxon Valdez Trustee Council’s long-term spending scenario, both subaccounts are expected to be depleted by 2032, or earlier depending on market performance. According to the Exxon Valdez Trustee Council’s 2014 restoration plan update—its most recent assessment of injured resources and services—all but 5 of the 32 natural resources and human services identified as injured by the spill have recovered, are recovering, or are very likely recovered. In the 1994 restoration plan, the trustee council established a list of resources and services that suffered injuries from the spill and developed specific, measurable recovery objectives for each injured resource and service. The Exxon Valdez Trustee Council has periodically assessed the status of those resources, most recently in 2014. As of the 2014 assessment, the following 4 resources were listed as not recovering: (1) marbled murrelets, (2) Pacific herring, (3) pigeon guillemots, and (4) one group of killer whales. In addition, the recovery of Kittlitz’s murrelets was listed as unknown. According to the Exxon Valdez Trustee Council, the status of these resources in 2018 is largely similar to their status in 2014, except that one population of pigeon guillemots has likely increased as a result of a predator-control project that the council supported.
However, the overall status of this species has not been determined. In a written statement, the trustees stated that the trustee council plans to initiate its next assessment of injured resources in late 2018. The Exxon Valdez Trustee Council remains particularly concerned about the health of the Pacific herring population and the presence of lingering oil. According to the trustee council’s 2014 restoration plan update, Pacific herring is an ecologically and commercially important species that, in addition to being fished for human consumption, is a source of food for various marine species. The assessment noted that a combination of factors, including disease, predation, and poor recruitment of additional fish to the stock through growth or migration, appears to have contributed to the continued suppression of herring populations. As a result, the herring fishery has been closed for 23 of the 29 years since the oil spill and has not met the trustee council’s recovery objective. To address these concerns, the trustee council plans to authorize additional money for ongoing Pacific herring research and monitoring through the anticipated end date for the fund in fiscal year 2032, for an estimated total cost of roughly $23 million over 20 years. The Exxon Valdez Trustee Council also has concerns regarding the presence of lingering oil in the spill area. According to a March 2016 report for the trustee council, approximately 27,000 gallons of lightly weathered oil from the Exxon Valdez spill remains, located along almost 22 miles of shoreline at a small number of subsurface sites where oxygen and nutrients are at levels too low to support microbial degradation. In May 2018, we accompanied researchers working with the trustee council to the spill area and observed the excavation of three pits that revealed lingering oil roughly 6 inches below the surface of the beach, as captured in figure 3.
According to the researchers, oil previously recovered from this location was identified as belonging to the Exxon Valdez oil spill. Evidence of exposure to lingering oil was observed as recently as 2009 in a variety of marine species, including sea otters and harlequin ducks, according to the 2016 lingering oil report. The report also noted that the most recent studies show that the sea otter and harlequin duck populations have recovered and that lingering oil is no longer causing ecological damage. Further, studies demonstrated that minimally intrusive remediation of the oil would only be effective at a small number of sites, according to the 2016 report. Therefore, although the trustee council has decided not to pursue remediation of the oil, it stated that it has authorized money for projects to study the effects of oil and lingering oil totaling over $16 million and will continue to monitor the oil to document its physical and chemical changes over time. The Exxon Valdez Trustee Council expects that lingering oil will persist for decades; however, its representatives said that the evidence indicates that there are no current biological effects of the oil. The Exxon Valdez Trustee Council’s priorities for future spending are outlined in the 2014 restoration plan update, and in addition to long-term herring research and lingering oil, the priorities include long-term monitoring of marine conditions and injured resources, shorter-term harbor restoration projects, and habitat protection. Since the federal and state governments reached a final settlement with BP in 2016 and the Deepwater Horizon Trustee Council finalized a programmatic restoration plan, four trustee implementation groups have issued initial independent restoration plans. Specifically, the Alabama, Louisiana, Mississippi, and Texas trustee implementation groups have issued initial restoration plans. 
According to the Deepwater Horizon Trustee Council, the trustee implementation groups covering Florida, Open Ocean, and Region-Wide restoration are in the midst of a multiyear planning effort and anticipate issuing initial restoration plans in 2019 or later. The trustee implementation groups are responsible for developing and approving restoration plans and resolutions, which, when approved, authorize money to be spent on restoration projects. This process includes soliciting project ideas, submitting proposed plans for public comment, and ensuring compliance with applicable laws and regulations, such as the National Environmental Policy Act. According to the trustee council, there is no specific timetable for approving future restoration plans, as plans are approved on an ongoing basis—typically for several projects at a time. The four completed restoration plans, together with early restoration spending and other activities, including planning and administrative efforts, account for all authorizations made by the Deepwater Horizon Trustee Council as of December 31, 2017, according to NOAA—the agency that manages the system the trustee council uses for financial reporting. As shown in figure 4, these authorizations total approximately $1.1 billion, or 13 percent, of the $8.1 billion restoration trust fund, spread across five goals. The Deepwater Horizon Trustee Council has authorized roughly $460 million for habitat protection—about 10 percent of the almost $4.7 billion ordered for this use by the settlement. According to the 2016 programmatic restoration plan, habitat protection includes both conservation acquisition and habitat enhancement, such as creating, restoring, or enhancing coastal wetlands. For example, during the first phase of early restoration in 2012, the trustee council authorized $14.4 million to the Louisiana Coastal Protection and Restoration Authority to create 104 acres of new brackish marsh at Lake Hermitage in Barataria Bay, Louisiana.
The project involved dredging sediment and planting native marsh vegetation to restore marsh habitat damaged by the spill. The project is currently in the monitoring phase. As of the end of 2017, the Deepwater Horizon Trustee Council had approved 34 habitat protection projects, many of which were still in progress as of December 2017. The initial results of these projects include the restoration of over 4,000 acres of habitat and the creation of over 40 artificial reefs, according to a written statement by the federal trustees. The trustee council has authorized roughly $349 million to enhance recreational use—about 83 percent of the almost $420 million ordered for this use by the settlement. According to the 2016 programmatic restoration plan, enhancing recreational use includes acquiring land along the coast, building improved or new infrastructure, and improving navigation for on-water recreation. For example, during the first phase of early restoration in 2012, the Deepwater Horizon Trustee Council authorized approximately $5.3 million to the Florida Department of Environmental Protection to repair and construct boat ramps in Pensacola Bay and Perdido Bay, Florida. Construction was completed in 2016, and the project is currently in the monitoring and operations and maintenance phase. As of the end of 2017, the Deepwater Horizon Trustee Council had approved 43 projects to enhance recreational use, many of which were still in progress as of December 2017. These projects have provided new or enhanced facilities, such as pavilions, picnic areas, and boat ramps, according to a written statement by the federal trustees. The Deepwater Horizon Trustee Council has authorized roughly $218 million to restore coastal and marine wildlife—about 12 percent of the almost $1.8 billion ordered for this use by the settlement, primarily for birds ($108 million), sea turtles ($50 million), oysters ($38 million), and fish ($20 million). 
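The goal-by-goal shares reported so far follow from the authorized and settlement-ordered amounts. As a minimal sketch (dollar amounts in millions, taken from this section; the rounding is ours):

```python
# Deepwater Horizon NRD fund: authorized amounts vs. amounts ordered by
# the 2016 settlement for the three restoration goals discussed so far
# (dollars in millions, as reported in this section).
goals = {
    "habitat protection":          (460, 4_700),
    "recreational use":            (349, 420),
    "coastal and marine wildlife": (218, 1_800),
}
for goal, (authorized, ordered) in goals.items():
    share = authorized / ordered
    print(f"{goal}: {share:.0%} of settlement-ordered funds authorized")
```

The formatted percentages (10, 83, and 12 percent) match the figures reported above; the same arithmetic applies to the remaining goals discussed below.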
According to the 2016 programmatic restoration plan, restoring coastal and marine wildlife includes activities that restore the resources, such as fish, sea turtles, and deep coral communities, which contribute to a productive, biologically diverse, and resilient ecosystem. For example, during the first phase of early restoration in 2012, the trustee council authorized $11 million to the Mississippi Department of Environmental Quality to deploy a mixture of oyster shells, limestone, and concrete on 1,430 acres in waters off Hancock and Harrison Counties in Mississippi. This material, when placed in oyster spawning areas, provides a surface for free-swimming oyster larvae to attach and grow into oysters. The project is currently in the monitoring and operations and maintenance phase. As of the end of 2017, the Deepwater Horizon Trustee Council had approved 32 projects to restore coastal and marine wildlife. Although the trustee council authorized millions of dollars to restore coastal and marine wildlife, it authorized 1 percent or less of the funds ordered by the settlement for sturgeon, marine mammals, submerged aquatic vegetation, and other seafloor species—such as corals. According to the 2016 consent decree, the Open Ocean implementation group is responsible for authorizing the majority of the restoration funds for these types of wildlife, but that trustee implementation group has not yet completed its initial restoration plan. According to NOAA, the complexity of restoring several of these resources necessitated additional preplanning and restoration technique development prior to considering specific restoration projects. The trustee implementation group is developing two restoration plans that will include projects for birds and sturgeon, as well as for sea turtles, fish, marine mammals, and corals, according to a Deepwater Horizon Trustee Council press release.
The trustee council released the first draft plan for public comment in October 2018, and plans to release the second plan in early 2019. In August 2017, the Deepwater Horizon Trustee Council announced that the Louisiana implementation group was soliciting project ideas to fund the restoration of submerged aquatic vegetation, among other types, to include in a future restoration plan but has not yet submitted such a plan for public review. Roughly $27 million has been authorized for administrative oversight and monitoring activities, or about 3 percent of the almost $810 million that the settlement ordered for this use. The majority of the funding ($25 million) was for administrative oversight activities, and the balance was for monitoring. According to the 2016 programmatic restoration plan, administrative oversight includes the costs for trustees to guide project selection, implementation, and adaptive management. For the state trustees, all administrative costs are covered by their respective trustee implementation groups, and for federal trustees, all administrative costs are covered by the Open Ocean implementation group. For example, during the postsettlement phase, the trustee council authorized approximately $6.6 million to Interior for (1) participation on the trustee council; (2) restoration planning, plan development, and coordination with other trustees; (3) environmental compliance reviews; (4) technical assistance; and (5) financial management, among other uses. As of the end of 2017, the Deepwater Horizon Trustee Council had approved nine administrative oversight and monitoring projects, which remained ongoing as of December 31, 2017. The results of the trustee council’s activities in this area so far include the completion of a monitoring and adaptive management manual and its standard operating procedures. 
The Deepwater Horizon Trustee Council has authorized $4 million to restore water quality—about 1 percent of the $410 million that the settlement ordered for this use. According to the 2016 programmatic restoration plan, restoring water quality includes both reducing nonpoint nutrient pollution to coastal watersheds and improving water quality in Florida through efforts such as stormwater control and erosion control. As of the end of 2017, the Deepwater Horizon Trustee Council had approved two nonpoint nutrient reduction projects to address excessive nutrient loads in Gulf waters but no water quality projects in Florida. For example, in 2017, the Deepwater Horizon Trustee Council authorized approximately $224,000 to conduct restoration planning to develop, draft, and finalize a restoration plan addressing nonpoint nutrient reduction, among other goals. The trustee council has authorized little funding to date for this restoration goal, in part because the Florida implementation group has not yet completed its first postsettlement restoration plan. In September 2017, the trustee council announced that the Florida implementation group was reviewing water quality project ideas for its initial restoration plan, and it released a draft of the plan for public comment in September 2018. According to the Deepwater Horizon Trustee Council, the final plan will be released in January 2019. Nine of the interagency committee member agencies funded over 100 oil spill research projects per year from fiscal years 2011 through 2017, for a total cost of about $200 million; however, we found that the interagency committee did not coordinate its research with some key entities. More specifically, approximately half of the interagency committee members said internal coordination on such research improved during this time, but the committee may not have included all relevant agencies, and we found that the committee did not coordinate with relevant trustee councils.
During fiscal years 2011 through 2017, 9 of the 15 interagency committee member agencies funded oil spill research projects, spending about $200 million on this research, based on our review of data from the member agencies. These nine agencies were the Bureau of Ocean Energy Management (BOEM), BSEE, the Coast Guard, the Department of Energy, EPA, NASA, NOAA, the Pipeline and Hazardous Materials Safety Administration, and the U.S. Arctic Research Commission. One of these agencies—BSEE—spent about $84 million, or about 40 percent of the total amount spent by all nine agencies (see table 1). In March 2011, we reported that during fiscal years 2000 through 2010, seven interagency committee member agencies spent about $163 million on oil pollution research, according to officials from those agencies. Since we last reported on the interagency committee, three additional agencies told us that they also fund oil spill research—the Department of Energy, BSEE, and the U.S. Arctic Research Commission—while the U.S. Navy told us that it no longer funds oil spill research projects. According to agency officials, the nine interagency committee member agencies funded from 100 to 200 research projects annually from fiscal years 2011 through 2017. These nine agencies reported funding research projects in one or more of the interagency committee’s four oil spill research categories: prevention, preparedness, response, and injury assessment and restoration (see table 2). We reported in March 2011 that federal agencies conducted oil pollution research but that the interagency committee had taken limited actions to foster the communication and coordination of this research among member agencies and nonfederal stakeholders. More specifically, we noted that member agencies were not consistently represented on the interagency committee and that interested nonfederal stakeholders reported limited contact with the interagency committee.
We recommended, among other things, that the Commandant of the Coast Guard direct the chair of the interagency committee, in coordination with member agencies, to establish a more systematic process to identify and consult with key nonfederal stakeholders. Officials from 8 of the 15 member agencies said they believe that the interagency committee’s coordination efforts have improved since the Deepwater Horizon oil spill in 2010. In response to our recommendation on coordination with nonfederal stakeholders, we found that members consistently attend major oil spill conferences and workshops. In addition, we observed that the interagency committee invites outside speakers and researchers to its meetings to update the membership on ongoing research activities in academia, industry, and the government. The committee charter calls for meetings at least semiannually, but since fiscal year 2011 the interagency committee has held quarterly meetings with member agencies as well as meetings with outside groups of knowledgeable stakeholders. At the meetings, member agencies have the opportunity to present information on oil spill research they are conducting, share information about upcoming research conferences, and listen to presentations by outside groups. According to member agency officials, some of the benefits of the interagency committee’s improved coordination efforts include a reduction in research redundancies, increased understanding of the broader oil spill research community, the facilitation of relationships, the identification of research gaps, and the ability to leverage resources. U.S. Navy officials said that the interagency committee facilitated communication between member agencies that use the Navy’s equipment for research purposes. As a result of discussions that took place at an interagency committee meeting, the Navy offered the use of a hydraulic power unit to the Coast Guard for hydraulic testing in Arctic conditions in Alaska.
Officials from a few of the member agencies, including the Coast Guard, BSEE, EPA, and NOAA, told us that they collaborate on oil spill-related research efforts with other member agencies of the interagency committee. In addition, the release of the 2015-2021 research and technology plan provides a new baseline for research, including 150 priority oil pollution research needs within 25 research areas. According to the research and technology plan, future updates will reflect advancements in oil pollution technology and changing research needs by capitalizing on the unique roles and responsibilities of each member agency. According to officials from one member agency, the revised research and technology plan has helped member agencies coordinate with one another to leverage funding and expertise. Member agencies also cooperate with nonfederal research entities on research needs and activities. The interagency committee has demonstrated key practices that strengthen coordination, such as agreeing on common terminology and priorities for oil spill research in its revised research and technology plan. However, the committee could enhance coordination by ensuring that relevant participants have been included—another key practice. Under OPA, certain federal agencies are members of the interagency committee, but member agencies may choose which office or official represents them at meetings and coordinates with other members on committee-related work. Officials from 6 of the 15 member agencies told us that their particular research efforts are not the focus of interagency committee meetings, and therefore the committee’s ability to coordinate their research efforts is less valuable to them. For example, NASA officials said the office representing their agency at meetings is not involved in oil spill research, but other offices within their agency fund or conduct relevant research.
In addition, 7 of the 15 officials we interviewed from member agencies suggested that other federal agencies could be relevant to the committee’s research efforts. For example, officials we interviewed from several member agencies suggested including the U.S. Geological Survey (USGS) as a full member because of its relevant research and mapping expertise. According to committee documents, the interagency committee considered adding USGS in 2015 but has not made a decision on USGS’s membership. The Commandant of the Coast Guard, in his or her capacity as chair of the interagency committee, has been delegated authority to appoint additional agencies to the committee as appropriate. A leading practice for collaboration calls for interagency groups to ensure that all relevant participants have been included in collaborative efforts. According to this leading practice, participants should have the appropriate knowledge, skills, and abilities to contribute to the outcomes of the collaborative effort. However, interagency committee member agency officials said the committee has not systematically reviewed its membership to determine which offices within current member agencies are the most relevant to its mission and whether adding other federal agencies as members would be beneficial. By systematically reviewing its membership to determine whether any additional agencies should be involved in coordinating oil spill research and whether the most appropriate offices within member agencies are represented, the interagency committee could improve its ability to coordinate research among federal agencies. In addition, agency officials knowledgeable about the work of the NRDA trustee councils are not the same officials representing their agency as members on the interagency committee. The research and technology plan notes that the interagency committee’s injury assessment and restoration research is intended to support the NRDA process. 
However, the NRDA trustees who manage the restoration funds for the Exxon Valdez and Deepwater Horizon oil spills told us that they have not coordinated or communicated on oil spill research or restoration efforts with the interagency committee; therefore, they would not have been involved with developing the research and technology plan. In addition, some trustee council members told us that they were not even aware that the interagency committee existed. Under OPA, one of the interagency committee’s responsibilities is to coordinate with federal agencies and external entities on an oil pollution research program that includes methods to restore and rehabilitate natural resources damaged by oil spills. As previously discussed, the NRDA trustee councils are charged with assessing natural resource damages for the natural resources under their trusteeship and developing and implementing plans for restoration efforts. The research that the interagency committee members fund includes research on restoration that could be pertinent to the work of the NRDA trustee councils. For example, following the oil spill in 2010, the Deepwater Horizon Trustee Council evaluated baseline conditions for several different representative species, such as sea turtles and Gulf sturgeon, to quantify the extent of injury as part of the restoration planning process that OPA regulations required. Some interagency committee member agencies, such as NOAA and BOEM, fund research on baseline data that could inform the NRDA trustee councils’ injury assessment work. In turn, the NRDA trustee councils’ work could also inform the interagency committee’s coordination of future oil spill research by, for example, identifying gaps in research as identified and prioritized in updates to the research and technology plan. 
By coordinating with the NRDA trustee councils, the interagency committee could ensure that its research informs and supports the councils’ damage assessment and restoration efforts and better leverages members’ resources. According to the literature we reviewed, environmental differences between the Gulf of Mexico and Arctic regions, as well as factors such as the type of oil, influence the potential effectiveness of various oil spill response techniques. In each region, environmental conditions, such as water and air temperature, water movement, and salinity, influence how effective oil spill response techniques can be. Further, according to the literature we reviewed, these conditions determine which response techniques are appropriate. Environmental conditions, such as ocean water and air temperature, can influence the effectiveness of natural oil removal through evaporation or biodegradation. These processes may occur more quickly in warmer climates, such as in the Gulf of Mexico. In the event of an oil spill, communities of microbes can bloom to respond to the new supply of oil. According to a 2011 report from the American Academy of Microbiology, these microbes can biodegrade up to 90 percent of some light crude oil, but the largest and most complex molecules, such as the ones that make up road asphalt, are not significantly biodegradable. A 2016 study found that higher temperatures lead to increased biodegradation, and increased salinity had a small positive impact on crude oil removal. However, the American Academy of Microbiology report also states that while microbes can biodegrade oil over time, the process may not be fast enough to prevent ecological damage. Therefore, immediate containment or physical removal of the oil is an important first response. The effectiveness of oil removal is also influenced by water conditions, which are determined by wind, waves, and currents. 
According to literature we reviewed, winds and currents can make it more difficult to remove the oil, increasing the likelihood of the oil spill affecting larger areas and additional plant and animal populations. Further, high seas and rough waters can make some response techniques less effective. According to a 2017 study that estimates the effect of environmental conditions on deploying oil spill response techniques in the Arctic Ocean, most response techniques are not suitable during Arctic winters, between November and June. Literature we reviewed also shows that other factors influence the effectiveness of response techniques, including oil type, oil thickness, and the location and depth of oil spill events. Light crude oil typically evaporates and biodegrades more quickly than heavy crude oil, which is more viscous. However, if the oil slick is too thin, it becomes difficult to contain and limits response options. Oil spilled in a remote location, such as the place where the Exxon Valdez oil spill occurred, may complicate response efforts because equipment and personnel are far away and may not be able to respond within the window of opportunity before the oil spreads. According to Coast Guard officials, during an oil spill response, various response techniques are used to minimize the negative effects on the water surface, water column, and shorelines, each with different applications, advantages, disadvantages, and risks. The response techniques we reviewed are:

Mechanical recovery in the marine environment uses a variety of containment booms, barriers, and skimmers, as well as natural and synthetic absorbent materials to capture and store the spilled oil until it can be disposed of properly.

In-situ burning, meaning in-place burning, is the process of igniting and burning oil slicks in a controlled environment. 
Dispersants are chemicals that can mitigate the immediate damage caused by oil at the surface and help accelerate the natural removal of the spilled oil. Dispersants work similarly to dish soap, breaking up the oil into small droplets that can more easily spread through the water.

The advantage of mechanical recovery is that it physically removes the oil from the water, minimizing the negative effects of the oil. Mechanical recovery can be used to safely remove oil where other methods might cause health risks or environmental damage, according to a 2013 report published by the National Academies Press. However, mechanical recovery has limitations in some conditions. If the oil slick is thin, it is difficult to achieve a significant rate of recovery, and a large amount of equipment is required to concentrate the slick so it is thick enough to be collected. According to literature we reviewed, mechanical recovery is less effective during inclement weather or high seas because the oil spreads and can emulsify in these conditions and is difficult to contain. Low temperatures and the presence of ice also make it challenging to achieve high recovery rates, and mechanical recovery becomes increasingly ineffective as wave heights increase, according to literature we reviewed. Furthermore, the process of recovering the oil is labor- and cost-intensive, and recovery can be delayed if the equipment is not readily available. Mechanical recovery is especially challenging to implement quickly when spills occur in remote areas, such as with Exxon Valdez, or where the oil is traveling quickly and broadly, such as with Deepwater Horizon. For example, according to a 1999 EPA report, skimmers were not readily available during the first 24 hours following the Exxon Valdez oil spill, repairs to damaged skimmers were time-consuming, and continued inclement weather slowed down the recovery efforts. 
In addition, a disadvantage of mechanical recovery is that temporary storage for large amounts of oil is frequently needed and recovered oil is generally brought back to the shore for disposal, according to Interior officials. Because of the resources required to physically remove the oil, it is difficult to recover a large percentage of the spilled oil through mechanical recovery in large oil spills. According to two studies and an agency document we reviewed, in-situ burning can be a highly effective technique for eliminating spilled oil from the sea surface. In response to the Deepwater Horizon oil spill, roughly 5 to 6 percent of all of the spilled oil was burned, about double the amount of oil removed with skimmers, according to a 2013 National Academies Press report. The primary advantage of in-situ burning is its efficiency. In ideal conditions, this method can quickly eliminate spilled oil. According to several reports we reviewed, in optimal conditions, in-situ burning can eliminate up to 90 percent of the spilled oil contained for burning with a relatively minimal investment of equipment or manpower. Literature we reviewed suggests that it is especially suited for response in Arctic conditions, particularly in ice-covered water where logistics and environmental conditions may preclude other options and where the ice can act as a natural barrier to help keep the oil slick thick enough to burn. However, in-situ burning also has its disadvantages. Burning has a narrow window of opportunity, and if the approval process takes longer than it takes to prepare for the burn, the opportunity for using in-situ burning may be lost, according to a NOAA document. Similar to mechanical recovery, burning can only be used if the oil slick is a certain thickness and when waves, wind, and currents are not too strong. In-situ burning becomes increasingly difficult in strong winds or with waves over 3 feet tall. 
A second disadvantage is that the burn residue caused by in-situ burning may have negative effects on ocean life, though studies we reviewed differed on this matter. According to a 2014 National Academies Press report about oil spills in the U.S. Arctic environment, a series of studies in the 1990s found that burn residues have little to no impact on oceanic organisms. However, a 2015 review on burn residues from in-situ burning in Arctic waters concluded that not enough research has been done on the side effects of burn residue from in-situ burning. According to NOAA officials, another disadvantage of in-situ burning is that the soot from inefficient combustion can result in unsightly and unhealthy particulates that may affect any downwind populations before the smoke dissipates. According to Coast Guard officials, chemical dispersants are typically used in conjunction with mechanical means and are considered when offshore mechanical methods are recognized as inadequate because of the spill volume, the geographical extent of the slicks, or specific on-scene environmental conditions. According to the literature we reviewed, an advantage of dispersants is their versatility. Dispersants are not as limited by environmental conditions as other response techniques, and they can be applied on surface or underwater environments. Further, dispersants can be applied through a variety of mechanisms. For example, they can be applied on oil slicks at the water’s surface by boats, planes, or helicopters. Dispersants can also be used below the surface, through subsea injection at the site of the spill, as was applied in response to the Deepwater Horizon oil spill. However, the literature suggests that the effectiveness of dispersants depends on many factors, such as the type of oil, type of dispersant used, and sea and weather conditions. 
According to Coast Guard officials, the decision to use dispersants is made after careful consideration of the location of the spill, type of oil spilled, seasonal resources at risk, and the environmental conditions at the time, as these factors influence the effectiveness and practicality of using dispersants, as well as the advisability of the tactic in the face of other options and risks. These officials also noted that dispersants are rarely used in the United States, but in certain situations, where mechanical means such as booming and skimming may not be effective, dispersants may be considered. In addition to uncertainty about their effectiveness, the environmental risks associated with dispersants are also uncertain. One 2014 study states that while dispersants were thought to undergo rapid degradation in the water column, there was evidence that the dispersants remained on Gulf of Mexico beaches almost 4 years after the Deepwater Horizon oil spill. During the Deepwater Horizon oil spill, responders applied over 1.8 million gallons of chemical dispersants to the spilled oil—an unprecedented volume in the United States. It was the first major oil spill to use dispersants on such a large scale, and approximately 42 percent of these dispersants were applied subsea in the first operational subsea application of this technique. According to Coast Guard officials, the toxicity and long-term effects of large-scale application of dispersants on the ecology of marine life are unknown. According to literature we reviewed, there is evidence that chemically dispersed oil and some dispersant compounds may be toxic to some marine life, especially those in early life stages. Coast Guard officials also said that continued monitoring and further review of scientific research should improve the understanding of the impact of dispersants on mitigating the effects of oil spills as well as their overall environmental impact. 
Following initial response and cleanup efforts, restoration activities related to a significant offshore oil spill, such as those from Exxon Valdez or Deepwater Horizon, can endure for decades. Federal agencies of the interagency committee conduct and fund research projects related to preventing, preparing for, responding to, and restoring the environment after oil spills. The interagency committee has improved the coordination of federal oil spill research efforts since the Deepwater Horizon oil spill in 2010. However, the interagency committee has not systematically reviewed its membership to determine which offices within current member agencies are the most relevant to its mission and whether adding other federal agencies as members would be beneficial. By systematically reviewing its membership to determine whether any additional agencies should be involved in coordinating oil spill research and whether the most appropriate offices within member agencies are represented, the interagency committee could improve its ability to coordinate research among federal agencies. In addition, the interagency committee does not coordinate with the NRDA trustee councils that manage the large restoration funds and monitor the restoration of damaged resources after a specific spill, such as the Exxon Valdez and Deepwater Horizon oil spills. Coordinating with the NRDA trustee councils could help ensure that the interagency committee’s oil spill research program effectively supports the councils’ damage assessment and restoration efforts, improve knowledge sharing between the groups, and better leverage its members’ oil spill research resources. We are making the following two recommendations to the Commandant of the U.S. 
Coast Guard should direct the chair of the Interagency Coordinating Committee on Oil Pollution Research, in coordination with member agencies, to systematically review its membership to determine whether any additional agencies should be involved in coordinating oil spill research and whether the most appropriate offices within member agencies are represented. (Recommendation 1) The Commandant of the U.S. Coast Guard should direct the chair of the Interagency Coordinating Committee on Oil Pollution Research, in coordination with member agencies, to coordinate with the relevant Natural Resource Damage Assessment trustee councils to help ensure that the interagency committee’s research informs and supports the councils’ damage assessment and restoration efforts. (Recommendation 2) We provided our draft report to the Department of Agriculture, Department of Commerce, Department of Defense, Department of Energy, Department of Homeland Security, Department of the Interior, Department of Transportation, Environmental Protection Agency, National Aeronautics and Space Administration, and U.S. Arctic Research Commission for review and comment. In comments reprinted in appendix II, the Department of Homeland Security concurred with our recommendations. In addition, the departments of Commerce, Homeland Security, Interior, and EPA provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of the report to the appropriate congressional committees; the Secretaries of Agriculture, Commerce, Defense, Energy, Homeland Security, the Interior, and Transportation; the Administrators of EPA and NASA; the Executive Director of the U.S. Arctic Research Commission; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report examines (1) how the Natural Resource Damage Assessment (NRDA) trustee councils have used the restoration trust funds for the Exxon Valdez and Deepwater Horizon oil spills and the status of the restoration efforts; (2) the status of the Interagency Coordinating Committee on Oil Pollution Research’s (interagency committee) oil spill research efforts and how coordination of such efforts has changed since we last reported on it in March 2011; and (3) what literature suggests about the effectiveness of various oil spill response techniques in the Arctic and the Gulf of Mexico. To examine how the NRDA trustee councils used the restoration funds from the Exxon Valdez oil spill (from October 1992 to January 2018) and the Deepwater Horizon oil spill (from April 2012 to December 2017) for restoration and the status of the restoration efforts, we obtained data from each trustee council on the amount of funds (1) ordered by the settlement for each restoration type; (2) authorized by the trustees for, but not yet spent on, restoration activities (authorizations); (3) spent on restoration activities (expenditures); and (4) not yet authorized for restoration activities (remaining balance) through calendar year 2017 for Deepwater Horizon and through January 31, 2018, for Exxon Valdez. To assess the reliability of the financial data, we reviewed related budget documentation; interviewed knowledgeable council staff about how fund balances are recorded and reported; reviewed the totals for obvious errors and inconsistencies; and reviewed internal control documents, such as a database manual and standard operating procedures. 
We determined that the data were sufficiently reliable for the purposes of our report. We examined the approved restoration plans (1994 restoration plan and 2014 restoration plan update for the Exxon Valdez oil spill, and the 2016 programmatic damage assessment and restoration plan for the Deepwater Horizon oil spill) and, when available, annual reports on restoration activities (1994 through 2018 annual reports for the Exxon Valdez Oil Spill Trustee Council (Exxon Valdez Trustee Council) and 2016 and 2017 annual financial reports for the Deepwater Horizon Natural Resource Damage Assessment Trustee Council (Deepwater Horizon Trustee Council)). We also reviewed project reports and scientific studies that the trustee councils funded to gain a better understanding of the status of restoration of injured natural resources, restoration priorities, activities, and progress made by the trustee councils. We reviewed laws and regulations that provide the legal authority for federal agencies to intervene and respond after an oil spill, such as the Oil Pollution Act of 1990 (OPA), the Clean Water Act, and NRDA regulations. We met with officials from the Exxon Valdez Trustee Council to discuss the distribution of settlement money for restoration purposes after the Exxon Valdez oil spill, and with officials from the Deepwater Horizon Trustee Council, Gulf Coast Ecosystem Restoration Council (RESTORE Council), and the National Fish and Wildlife Foundation to discuss the distribution of settlement money for restoration purposes after the Deepwater Horizon oil spill. Additionally, in May 2018, we traveled to multiple locations in the former spill area in Alaska to observe the extent of restoration efforts and ongoing issues. Along with researchers sent by the Exxon Valdez Trustee Council, we excavated three pits that revealed lingering oil about 6 inches below the surface of the beach on Eleanor Island in Prince William Sound. 
These researchers told us that oil previously uncovered at this location had been linked to the Exxon Valdez oil spill. In addition to fieldwork in Alaska, in November 2017 and February 2018, we attended public meetings in Alabama and Louisiana to learn about restoration plans for the Gulf States. To examine the status of the interagency committee’s federal oil spill research efforts and how coordination of such efforts has changed since we last reported on it in March 2011, we requested funding data and project information on oil spill research from all 15 member agencies of the interagency committee. We received data from the 9 member agencies that reported funding oil spill research projects from fiscal years 2011 through 2017. These 9 agencies provided data on agency expenditures on oil spill research and the research category of any projects funded. We assessed the reliability of the data by reviewing related documentation, interviewing knowledgeable agency officials, and reviewing agency internal controls for each of the 9 member agencies that provided us data about the steps they take to maintain this information. We determined that in most cases the data were sufficiently reliable for the purposes of our report. However, we chose not to provide the National Oceanic and Atmospheric Administration’s (NOAA) agency expenditures for oil spill research because NOAA officials were unable to provide reliable data on the actual amount the agency spent on such research during the time period we requested. In addition, some agency officials we interviewed raised the concern that their agencies do not track oil spill research funding and therefore the information they provided on expenditures for such research may not include all relevant efforts that could inform oil spill prevention, preparedness, response, and restoration. 
We also interviewed officials from the 15 member agencies to learn about each agency’s oil spill research efforts and participation in and coordination through the interagency committee, and compared their coordination practices to one of our federal leading practices for collaboration for interagency groups to evaluate the interagency committee’s efforts to coordinate such research. We chose to focus on the collaboration practice pertaining to participants because it appeared to be the most challenging for the interagency committee based on the findings of our previous March 2011 report, the actions taken by the interagency committee to address our recommendations from that report, and our own findings from our research for this report. In addition, we reviewed the 2013 interagency committee charter, the committee’s most recent biennial reports to Congress covering fiscal years 2008 through 2017, and the committee’s third multiyear research and technology plan for fiscal years 2015 through 2021; attended two committee meetings; and reviewed minutes of eight past meetings. We also reviewed OPA’s provisions that established and govern the interagency committee’s coordination efforts and membership, as well as various related executive documents. To examine what literature suggests about the effectiveness of various oil spill response techniques in the Arctic and the Gulf of Mexico, we conducted a literature search for studies and articles that analyzed and summarized the effectiveness of various oil spill response techniques in those regions. We identified existing literature from 1989 (the year of the Exxon Valdez oil spill) to March 2018 by searching various databases, such as Scopus and ProQuest. We chose to focus on three primary response techniques—mechanical recovery, in-situ burning, and the use of dispersants—used to clean up after offshore oil spills according to knowledgeable stakeholders and the literature we reviewed. 
The database search produced over 800 results. Our subject matter expert helped the team narrow this list to 50 results, of which we relied on 16 studies and articles that we determined were most relevant to our research objective of determining the effectiveness of various oil spill response techniques in the Arctic and the Gulf of Mexico. We excluded literature that was too narrowly focused for the scope of our review, and we considered literature published recently, generally within the past 10 years, to be more relevant. We supplemented the list of studies from these databases with literature from the Congressional Research Service, the National Academies Press, the Environmental Protection Agency (EPA), NOAA, the American Academy of Microbiology, the Arctic Oil Spill Response Joint Industry Programme, and our previous report on oil dispersants. In total, we relied upon 22 literature results to inform the findings of our objective. For a complete list of the literature, see the bibliography. We shared our summary of the literature search findings with agency officials representing some of the interagency committee member agencies. The following agencies responded with comments and we included their perspectives where relevant: the Department of the Interior, EPA, NOAA, and the U.S. Coast Guard. We did not independently evaluate the effectiveness of these response techniques. We conducted this performance audit from July 2017 to January 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the contact named above, Christine Kehr (Assistant Director), Amy Ward-Meier (Analyst-in-Charge), Colleen Candrl, Nirmal Chaudhary, Juan Garay, Cindy Gilbert, Matt Hunter, Jessica Lewis, Joe Maher, Greg Marchand, Kimberly (Kim) McGatlin, Cynthia Norris, Travis Schwartz, Sheryl Stein, Sara Sullivan, Vasiliki (Kiki) Theodoropoulos, Matthew Valenta, Sarah Veale, and Dan Will made key contributions to this report. We reviewed literature to examine what it suggests about the effectiveness of various oil spill response techniques in the Arctic and the Gulf of Mexico. This bibliography contains citations for the studies and articles that contributed to these findings. American Academy of Microbiology, Microbes and Oil Spills FAQ (Washington, D.C.: 2011). Arctic Oil Spill Response Technology Joint Industry Programme, Synthesis Report, D. Dickens-DF Dickens Associates, LLC (May 3, 2017). Belore, Randy C., Ken Trudel, Joseph V. Mullin, and Alan Guarino. “Large-scale Cold Water Dispersant Effectiveness Experiments with Alaskan Crude Oils and Corexit 9500 and 9527 Dispersants.” Marine Pollution Bulletin, vol. 58 (2009): 118-128. Boufadel, Michel C., Xiaolong Geng, and Jeff Short. “Bioremediation of the Exxon Valdez Oil in Prince William Sound Beaches.” Marine Pollution Bulletin, vol. 113 (2016): 156-164. Brakstad, Odd G., Trond Nordtug, and Mimmi Throne-Holst. “Biodegradation of Dispersed Macondo Oil in Seawater at Low Temperature and Different Oil Droplet Sizes.” Marine Pollution Bulletin, vol. 93 (2015): 144-152. Committee on Responding to Oil Spills in the U.S. Arctic Marine Environment; Ocean Studies Board; Polar Research Board; Division on Earth and Life Studies; Marine Board; Transportation Research Board; National Research Council, Responding to Oil Spills in the U.S. Arctic Marine Environment. National Academies Press (US) (Washington, D.C.: 2014). 
Committee on the Effects of the Deepwater Horizon Mississippi Canyon-252 Oil Spill on Ecosystem Services in the Gulf of Mexico, Ocean Studies Board, Division on Earth and Life Studies, National Research Council, An Ecosystem Services Approach to Assessing the Impacts of the Deepwater Horizon Oil Spill in the Gulf of Mexico. National Academies Press (US) (Washington, D.C.: December 20, 2013). Corn, Lynne M., and Claudia Copeland, The Deepwater Horizon Oil Spill: Coastal Wetland and Wildlife Impacts and Response. Congressional Research Service (July 7, 2010). Environmental Protection Agency, Office of Emergency and Remedial Response, Understanding Oil Spills and Oil Spill Response, EPA 540-K-99-007 (December 1999). Fletcher, Sierra, Tim Robertson, Bretwood Higman, and Elise DeCola. Estimating Impact of Environmental Conditions on Deployment of Marine Oil Spill Response Tactics in the U.S. Arctic Ocean, proceedings of the Fortieth AMOP Technical Seminar. Ottawa: Environment and Climate Change Canada, 2017, 246-264. Fritt-Rasmussen, Janne, Susse Wegeberg, and Kim Gustavson, “Review on Burn Residues from In Situ Burning of Oil Spills in Relation to Arctic Waters.” Water Air Soil Pollution, vol. 226 (2015). GAO, Oil Dispersants: Additional Research Needed, Particularly on Subsurface and Arctic Applications, GAO-12-585 (Washington, D.C.: May 30, 2012). Naseri, M., and J. Barabady, Safety and Reliability: Methodology and Applications—Performance of Skimmers in the Arctic Offshore Oil Spills. London: Taylor & Francis Group, 2015, 607-614. National Oceanic and Atmospheric Administration, Oil Spill - Behavior, Response and Planning: Open-water Response Strategies: In-situ Burning (August 1997). Nedwed, Tim, Tom Coolbaugh, and Amy Tidwell. Subsea Dispersant Use during the Deepwater Horizon Incident, proceedings of the Thirty-Fifth AMOP Technical Seminar on Environmental Contamination and Response. Vancouver, BC, Canada: ExxonMobil Upstream Research Company, 2012, 506-518. 
Nyankson, Emmanuel, Dylan Rodene, and Ram B. Gupta. “Advancements in Crude Oil Spill Remediation Research After the Deepwater Horizon Oil Spill.” Water Air Soil Pollution (2016). Rahsepar, Shokouh, Martijn P.J. Smit, Albertinka J. Murk, Huub H.M. Rijnaarts, and Alette A.M. Langenhoff. “Chemical Dispersants: Oil Biodegradation Friend or Foe?” Marine Pollution Bulletin, vol. 108 (2016): 113-119. Ramseur, Jonathan L., Oil Spills: Background and Governance. Congressional Research Service (Sept 15, 2017). Sharma, Priyamvada, and Silke Schiewer. “Assessment of Crude Oil Biodegradation in Arctic Seashore Sediments: Effects of Temperature, Salinity, and Crude Oil Concentration.” Environmental Science and Pollution Research (2016): 14881-14888. Shi, X., P.W. Bellino, A. Simeoni, and A.S. Rangwala. “Experimental Study of Burning Behavior of Large-scale Crude Oil Fires in Ice Cavities.” Fire Safety Journal, vol. 79 (2016): 91-99. United States Coast Guard, On Scene Coordinator Report Deepwater Horizon Oil Spill, (September 2011). White, Helen K., Shelby L. Lyons, Sarah J. Harrison, David M. Findley, Yina Liu, and Elizabeth B. Kujawinski. “Long-Term Persistence of Dispersants Following the Deepwater Horizon Oil Spill.” Environmental Science & Technology Letters (2014): 295-299.
The Exxon Valdez and Deepwater Horizon oil spills are two of the largest offshore oil spills in U.S. history, causing long-lasting damage to marine and coastal resources. OPA includes provisions to prevent and respond to such oil spills by authorizing (1) federal-state trustee councils that manage billions of dollars from legal settlements and (2) an interagency committee to coordinate oil pollution research, among other things. GAO was asked to review the federal government's response, restoration, and research efforts after the Exxon Valdez and Deepwater Horizon oil spills. This report examines, among other things, (1) how the trustee councils have used the restoration trust funds and the status of restoration and (2) the interagency committee's coordination of oil spill research efforts. GAO reviewed the councils' plans for the funds and how they were used, federal funding of oil spill research by member agencies, and key laws. Also, GAO evaluated the coordination of such efforts against a leading collaboration practice. GAO interviewed members of the trustee councils and the interagency committee. The trustee councils, composed of federal and state members, have used portions of the restoration trust funds from the Exxon Valdez and Deepwater Horizon oil spill settlements to restore natural resources. From October 1992 to January 2018, the Exxon Valdez Oil Spill Trustee Council used about 86 percent of the fund's roughly $1 billion, primarily on habitat protection and restoration of damaged natural resources. According to the council, all but 5 of the 32 natural resources and human services identified as damaged by the spill have recovered or are recovering. The health of Pacific herring is one example of a resource that has not yet recovered. Further, the presence of lingering oil remains a concern almost 30 years after the spill. 
In May 2018, GAO accompanied trustee council researchers to the spill area and observed the excavation of three pits that revealed lingering oil roughly 6 inches below the surface of the beach, as captured in the photo below. The Deepwater Horizon Natural Resource Damage Assessment Trustee Council finalized a programmatic restoration plan in 2016; four trustee implementation groups have since issued initial restoration plans for designated restoration areas, and three anticipate issuing restoration plans in 2019 or later. From April 2012 to December 2017, the council used 13 percent of the at least $8.1 billion restoration trust fund, mostly on habitat protection, enhancing recreation, and marine wildlife and fishery restoration. The Oil Pollution Act of 1990 (OPA), which was enacted after the Exxon Valdez spill in 1989, established the Interagency Coordinating Committee on Oil Pollution Research (interagency committee) to coordinate oil pollution research among federal agencies and with relevant external entities, among other things. However, according to the trustee council members that manage the restoration trust funds, the committee does not coordinate with the trustee councils, and some members were not aware that the interagency committee existed. The research of the member agencies could be relevant to the trustee councils' work on restoration. By coordinating directly with the trustee councils, the interagency committee could ensure better knowledge sharing between groups and leverage its member agencies' resources to inform and support the work of the councils. GAO recommends, among other things, that the interagency committee coordinate with the trustee councils to support their work and research needs. The agency agreed with GAO's recommendations.
In providing health care services to veterans, clinicians at VAMCs use RME, such as endoscopes and surgical instruments, which must be reprocessed between uses. Reprocessing covers a wide range of instruments and has become increasingly complex. VHA has developed policies that VAMCs are required to follow to help ensure that RME is reprocessed correctly. In addition, VHA policy requires that VHA and VISNs oversee VAMCs' reprocessing of RME and that VAMCs report incidents involving improperly reprocessed RME. According to reports from RME professional associations, the complexity of RME reprocessing has increased as the complexity of medical instruments has increased. While at one time reprocessing surgical and dental instruments such as scalpels and retractors might have been the bulk of an SPS program's tasks, SPS programs are now responsible for reprocessing complex instruments such as endoscopes. Reprocessing these instruments is a detailed and time-consuming process, and their increasing complexity requires a corresponding increase in the skills and time required to safely reprocess them. (See figure 1 for an example of steps that can be required for endoscope reprocessing.) Within VHA, the National Program Office for Sterile Processing, under the VHA Deputy Under Secretary of Health for Operations and Management, is responsible for developing RME reprocessing policies. It is also responsible for ensuring that VISNs and their respective VAMCs adhere to those policies. Each of the 18 VISNs is responsible for ensuring adherence to VHA's RME policies at the VAMCs within its region. In turn, each of the 170 VAMCs is responsible for implementing VHA's policies related to RME. Within each VAMC, the SPS department is primarily responsible for reprocessing RME, which is used by clinicians in the operating room and other clinical service lines, such as the dental and gastroenterology services. (See fig. 2.)
Additionally, the SPS department collaborates with other VAMC departments such as the Environmental Management and Engineering Services on variables that affect RME reprocessing, such as the climate where RME is reprocessed. In March 2016 VHA issued Directive 1116(2)—a comprehensive policy outlining requirements for SPS programs and for overseeing RME reprocessing efforts. SPS program operation requirements. To help ensure that VAMCs are reprocessing RME correctly, VHA policy establishes various requirements for the SPS programs in VAMCs to follow, such as a requirement that SPS staff monitor sterilizers to ensure that they are functioning properly, use personal protective equipment when performing reprocessing activities, separate dirty and clean RME, and maintain environmental controls. For example, VAMCs are required to maintain certain temperature, humidity, and air flow standards in areas where RME is reprocessed and stored. Additionally, in order to ensure that RME is reprocessed in accordance with manufacturers’ guidelines, VAMCs are required to assess staff on their competence in following the related reprocessing steps. Oversight requirements. To help ensure that VAMCs are adhering to VHA’s RME policies, VHA requires inspections, reports on incidents of improperly reprocessed RME, and corrective action plans for both non- adherent inspection results and incidents of improperly reprocessed RME. Inspections. VISNs are required to conduct annual inspections at each VAMC within their VISN and to report their inspection results to the VHA National Program Office for Sterile Processing. The VISN inspections are a key oversight tool for regularly assessing adherence to RME policies in the SPS, gastroenterology, and dental areas within VAMCs and use a standardized inspection checklist known as the SPS Inspection Tool. According to VHA officials, VHA developed the SPS Inspection Tool and generally updates it annually. 
The most recent fiscal year 2017 SPS Inspection Tool contained 148 requirements. Examples of requirements include those regarding proper storage of RME and following manufacturers’ instructions when reprocessing RME. Although VAMCs are also required to conduct annual self-inspections using the SPS Inspection Tool and report the results to VHA, the VISN annual inspections are a separate and important level of oversight. Finally, according to VHA officials, while not a formal policy, VHA’s National Program Office for Sterile Processing also inspects each VAMC at least once every 3 years. VHA requires VISNs and VAMCs to conduct their own inspections even in years when VHA also conducts inspections. Incident Reports. VHA collects incident reports or “issue briefs” generated by VAMCs on incidents involving RME to help determine the extent to which VAMCs are adhering to RME policies, among other things. VHA requires VAMCs to report significant clinical incidents or outcomes involving RME that negatively affect groups or a cohort of veterans in an issue brief. According to a VHA official, when VAMC staff report incidents involving RME to their facility leadership, these officials should follow VHA guidance to determine which incidents, if any, should be reported in an issue brief to the VAMC’s VISN. Similarly, VISN officials, in turn, are responsible for determining whether an incident should be reported in an issue brief to VHA. Corrective Action Plans. Corrective action plans—which detail an approach for addressing any areas of policy non-adherence identified in inspections or incidents identified in issue briefs—are required at both the VISN and VAMC levels. Specifically, both VISNs and VAMCs are required to develop corrective action plans for any deficiencies identified through their inspections, and VAMCs are required to develop corrective action plans for incidents identified in issue briefs. 
According to a VHA official, VISNs and VAMCs are not required to send corrective action plans from inspections to VHA; however, VAMCs must send their corrective action plans to the VISN and must also send any plans related to issue briefs to VHA. Further, according to a VHA official, although VHA does not monitor either the VAMC or the VISN corrective action plans from inspections, it does expect VISN officials to inform it of any critical issues that VISNs believe warrant VHA attention. For example, VHA officials would expect VISNs to report instances when RME issues result in the cancellation of procedures for multiple patients or when the VISN discovers a VAMC lacks documentation of RME reprocessing competency assessments for a large number of its SPS staff. A number of recent reports have identified several RME-related issues at VAMCs, including non-adherence to RME policies. The issues have ranged from improperly reprocessed RME being used on patients to the cancellation of medical procedures due to a lack of available RME. For example: In March 2018, the VA Office of Inspector General released a report describing problems identified at the Washington, D.C. VAMC, some of which were RME-related. For example, the office determined that ineffective sterile processing contributed to procedure delays due to unavailable RME. The report included specific recommendations, such as ensuring there are clearly defined and effective procedures for replacing missing or broken instruments and implementing a quality assurance program to verify the cleanliness, functionality, and completeness of instrument sets before they are used in clinical areas. The VAMC Director agreed with those recommendations. In fiscal year 2017, the VA Office of Inspector General reviewed 29 VAMCs and issued reports for each in response to several RME-related complaints received through its reporting hotline.
The office identified issues such as staff failure to perform quality control testing on endoscopes or document their competency assessments of SPS staff in employee files. Many of the reports included specific recommendations, such as performing quality control testing on all endoscopes and ensuring SPS staff are assessed for competency at orientation and annually for the types of RME they reprocess. The VAMC Directors agreed with those recommendations. In 2016, the VA Office of the Medical Inspector released a report that substantiated allegations that SPS practices led to the delivery of RME with bioburden, debris, or both to the operating room. The report included specific recommendations, such as reeducating SPS staff on proper SPS standards and ensuring that all training and assessments of RME reprocessing competency of SPS staff are completed as required. The VAMC Director agreed with those recommendations. In 2011, we released a report on VA RME that found issues with RME reprocessing. We found, for example, that VHA did not provide specific guidance on the types of RME that require device-specific training and that the guidance VHA did provide on RME reprocessing training was conflicting. We issued several recommendations for improvement, which VA has implemented. VHA has not ensured that it has complete information from the annual inspections VISNs conduct—a key oversight tool providing the most current VA-wide information on adherence to RME policies—and therefore does not have reasonable assurance that VAMCs are following RME policies intended to ensure veterans are receiving safe care. For fiscal year 2017, we determined that VHA should have had records of 144 VISN SPS inspection reports to have assurance that all required VISN SPS inspections had been conducted. However, our review shows that as of February 2018, VHA had 105 VISN SPS inspection reports and was missing 39, or more than one quarter of the required inspection reports. 
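The size of the reporting gap can be confirmed with simple arithmetic; the sketch below uses only the counts stated in the text (144 required fiscal year 2017 inspection reports, 105 on file as of February 2018):

```python
# Inspection-report gap, using the counts stated in the report.
required = 144   # VISN SPS inspection reports VHA should have had
on_file = 105    # reports VHA actually had as of February 2018

missing = required - on_file
share_missing = missing / required

print(f"Missing reports: {missing} ({share_missing:.0%} of those required)")
# The report characterizes this as "more than one quarter" of the
# required inspection reports.
assert missing == 39 and share_missing > 0.25
```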
We also determined that there were two VISNs from which VHA did not have any fiscal year 2017 reports. For the missing SPS inspection reports, VISN officials suggested several reasons why the inspections were either not conducted or were conducted but the reports were not submitted to VHA. For example, officials from one of the VISNs from which VHA had no SPS inspection reports told us that VISN management staffing vacancies prevented it from conducting all of its inspections. An official from the other VISN from which VHA had no SPS inspection reports provided evidence that it had conducted all but one of the inspections, but the official told us the VISN did not submit reports because it has yet to receive information from VHA regarding VISN inspection outcomes, common findings, or best practices and therefore sees no value in submitting them. VISNs provided us with evidence showing that they conducted 27 of the 39 inspections that were missing from VHA's data. We analyzed these 27 reports to identify the information about non-adherence to RME policy requirements that VHA does not have from these missing VISN inspections. We determined that the 10 requirements with the most non-adherence were related to quality, training, and environmental issues, among other things, with the extent of non-adherence ranging from 19 to 38 percent. For example, there were 19 and 26 percent non-adherence rates to the requirements that instrument and equipment levels be sufficient to meet workloads and that a process be in place to ensure staff receive make-up/repeat training, respectively. (See Appendix I.) We also found that variation in SPS Inspection Tools and related guidance from VHA resulted in incomplete inspection results for the gastroenterology and dental areas. VHA provided VISNs with three different SPS Inspection Tools throughout the course of fiscal year 2017.
Although VHA guidance stated otherwise, only the third SPS Inspection Tool—which was used during the second half of the fiscal year—contained requirements specific to the gastroenterology and dental areas. A VHA Central Office official told us the office had not been aware that it did not have all of the VISN inspection reports until it took steps to respond to our data request. The official told us VHA granted VISNs a 3-month extension for fiscal year 2017—meaning that VISNs had until the end of December 2017 to submit their inspection results—and had granted similar extensions for at least the past 4 fiscal years as well. For all of those years, the VHA official told us that the office did not have all VISN inspection reports, even after granting extensions. As a result, VHA did not have assurance that all of the inspections had been conducted. When asked why VHA had not been aware that it did not have all VISN SPS inspection reports, a VHA official said that the office has largely relied on the VISNs to ensure complete inspection result reporting because it has not had the resources to dedicate to monitoring inspections. The official told us that VHA has asked for and recently received approval to hire a data analyst who could potentially be responsible for monitoring the VISN inspection reports. VHA's lack of complete information from inspection results is inconsistent with standards for internal control in the federal government regarding monitoring and information, which state that management should establish and operate monitoring activities and use quality information to achieve the entity's objectives. Without such controls, VHA lacks reasonable assurance that VAMCs are following RME policies designed to ensure that veterans are receiving safe care. We also found that VHA does not consistently share information, particularly inspection results, with VISNs and VAMCs, and that VISNs and VAMCs would like more of this information.
Specifically, about two-thirds of VISN and VAMC officials told us that sharing information on the common issues identified in the inspections of other VAMCs, as well as potential solutions developed to address these issues, would allow VAMCs to be proactive in strengthening their adherence to RME policies and ensuring patient safety. For example, a VAMC official told us that there were problems with equipment designed to sterilize heat- and moisture-sensitive devices, and seeing how other VAMCs addressed the problem was helpful for their VAMC. Further, officials from some VISNs said VHA cited their VAMCs for issues that had been found at other facilities and, had the VAMCs been aware of the issues beforehand, they could have corrected or improved their processes earlier. When asked about sharing inspection results and other information, VHA Central Office officials told us the office does not analyze or share information from VISN inspections because of a lack of resources. A VHA official told us that the office does create an internal report of common issues identified through the third of VAMCs it inspects each year, but the office does not share this report with VISNs and VAMCs because it lacks the resources needed to prepare reports that are detailed enough to be understood correctly by recipients. According to this official, VHA has occasionally shared information it has identified on common inspection issues through newsletters, national calls, and trainings; however, officials at close to half of the VISNs and VAMCs we spoke with said that they rarely or never receive this information. For example, officials from one VISN told us they recall only one or two instances where VHA sent a summary of the top five RME-related issues found during inspections.
Insufficient sharing of information is inconsistent with standards for internal control in the federal government regarding communication, which state that management should internally communicate the necessary quality information to achieve the entity's objectives. Until this sharing becomes a regular practice, VHA is missing an opportunity to help ensure adherence to its RME policies, which are intended to ensure that veterans receive safe care. According to interviews with officials from all of the VISNs and selected VAMCs, the top five challenges VAMCs face in operating their SPS programs relate to meeting certain RME policy requirements and to addressing SPS workforce needs. In particular, officials told us that VAMCs have challenges (1) meeting two RME policy requirements related to climate control monitoring and a reprocessing transportation deadline, and (2) addressing SPS workforce needs related to lengthy hiring timeframes, the need for consistent overtime, and limited pay and professional growth. (See Table 1.) Regarding the challenges VAMCs face in meeting RME policy requirements, the majority of VISN and selected VAMC officials interviewed reported experiencing challenges adhering to two requirements from VHA Directive 1116(2), issued in 2016. Climate control monitoring requirement. Officials reported that meeting the climate control monitoring requirement related to airflow and humidity is challenging for their VAMCs. Under the requirement, VAMCs must monitor the humidity and airflow in facility areas where RME is reprocessed and stored in order to ensure that humidity levels do not exceed a certain threshold and thereby allow the growth of microorganisms. According to almost all VISN officials, meeting the requirement is a challenge for some, if not all, of their VAMCs, and in particular for older VAMCs that lack proper ventilation systems. We also found some instances of non-adherence on this issue in the group of VISN inspection reports we reviewed.
In a September 2017 memorandum, VHA relaxed the requirement (e.g., adjusted the thresholds). Additionally, according to a VHA official, VHA wants to renovate all outdated VAMC heating, ventilation, and air conditioning systems to help VAMCs meet the requirement. Further, according to VHA officials, VHA also allows VAMCs to apply for a waiver exempting them from having to meet this requirement if they have an action plan in place that shows they are working toward meeting the requirement. Reprocessing transportation deadline requirement. Officials reported that meeting the reprocessing transportation deadline was also challenging for their VAMCs. Under the requirement, used RME must be transported to the location where it will be reprocessed within 4 hours of use to prevent bioburden or debris from drying on the instrument and causing challenges with reprocessing. Officials reported this requirement as particularly challenging for VAMCs that must transport their RME to another facility for cleaning, such as community-based outpatient clinics in rural areas that must transport their RME to their VAMC's SPS department. We also found some instances of non-adherence on this issue in the group of VISN inspection reports we reviewed. In June 2016, VHA issued a memorandum allowing the use of a pre-cleaning spray solution that, if used, allows offsite facilities such as community-based outpatient clinics to transport RME within 12 hours instead of the required 4 hours. VHA has made some adjustments to these requirements, although some officials told us the requirements remain difficult to meet. Specifically, over half of the VISN officials reported that the climate control monitoring requirement continues to be a challenge for their VAMCs.
Further, some of the officials told us that meeting the 12-hour reprocessing transportation requirement using the pre-cleaning spray was still challenging due to the distance between clinics and their VAMC's SPS department; as a result, some facilities have decided to use disposable medical equipment that does not require reprocessing, avoiding this requirement completely. When we shared this information with a VHA official, the official stated that providing general information on how all facilities can meet the climate control monitoring requirement is impossible due to the uniqueness of each facility and that VHA has no further plans to adjust the reprocessing transportation deadline requirement. However, these challenges remain, and some officials have expressed frustration with the limited support they have received from VHA. In September 2017, we recommended that VHA establish a mechanism by which program offices systematically obtain feedback from VISNs and VAMCs on national policy after implementation and take the appropriate actions. Our findings provide further evidence of the need for VA to address this recommendation. Regarding the challenges VAMCs face in meeting SPS workforce needs, almost all of the 18 VISN officials and officials from the three selected VAMCs reported experiencing challenges related to lengthy hiring timeframes, the need for consistent overtime, and limited pay and professional growth. According to officials, these challenges result in SPS programs having difficulty maintaining sufficient staffing levels. Lengthy hiring timeframes. Officials reported that the lengthy hiring process for SPS staff creates challenges in maintaining a sufficient SPS workforce. For example, officials from one VISN estimated that on average it can take 3 to 4 months for a person to be hired.
Officials from a few other VISNs noted that not only does the lengthy hiring process create challenges in recruiting qualified candidates (because they accept other positions where they can be more quickly employed), but that it also results in long periods of time when SPS programs are short-staffed. Need for overtime. Officials reported that needing their SPS staff to work overtime is a challenge. Specifically, 16 of the 18 VISN officials stated that there is a need for staff at their VAMCs to work overtime either “all, most, or some of the time.” Further, officials from one VISN told us their VAMCs have used overtime to meet the increased workload required to implement VHA’s RME policies; one official noted that the overtime has led to dissatisfaction and retention issues among SPS staff. Limited pay and professional growth. Officials identified limited pay and professional growth associated with the current pay grade as the biggest SPS workforce challenge. Almost all officials stated that the current pay grade limits the pay and potential for professional growth for the two main SPS positions—medical supply technicians, who are responsible for reprocessing RME, and SPS Chiefs, who have supervisory responsibility. Specifically, the relatively low maximum allowable pay discourages staff from accepting or staying in positions and the current pay grade does not create a career path for SPS medical supply technicians to grow within the SPS department. Officials from one VISN told us that all VAMCs in their VISN have lost SPS staff due to the low pay grade for both positions. VHA officials said a proposed increase in the pay grade for SPS staff has been drafted; however, they do not know when or if it will be made effective. Further, according to officials with knowledge of the proposed changes, the changes could still be insufficient to recruit and retain SPS staff with the necessary skills and experience. 
Some VISN and VAMC officials told us that difficulties maintaining sufficient SPS staff levels have in some instances adversely affected patients' access to care and increased the potential for reprocessing errors that could affect patient safety. According to these officials, staffing challenges can affect access to care when facilities have to limit or delay care—such as surgeries—because there are not enough staff available to reprocess all the necessary RME. An official at one VAMC told us that their SPS staff must review available RME daily to determine whether scheduled surgeries or other procedures can proceed. Further, among the 18 operating room nurse managers who responded to our inquiries, 15 indicated they have experienced operating room delays because of RME issues. In addition, some VISN and VAMC officials told us staffing challenges can potentially have an impact on patient safety because, when SPS staffing is not sufficient, mistakes are more likely to occur. For example, officials told us that if SPS staffing levels are low, particularly if they are low for an extended period of time, there is an increased chance RME will be improperly reprocessed and, if used on a patient, will put that patient's safety at risk. A 2018 VA Office of Inspector General report on the Washington, D.C. VAMC found that consistent SPS understaffing was a factor in SPS staff not being available to meet providers' need for reprocessed RME; according to the report, "veterans were put at risk because important supplies and instruments were not consistently available in patient care areas." While VHA is aware of these workforce challenges cited by VISN and VAMC officials, it has not studied SPS staffing at VAMCs. As a result, it does not know whether or to what extent the workforce challenges VISNs and VAMCs report adversely affect VAMCs' ability to effectively operate their SPS programs and ensure safe care for veterans.
A National Program Office for Sterile Processing official indicated that while the office might have access to some of the necessary data from VAMC SPS departments, it does not have all the necessary data or staff needed to assess SPS staffing levels. Furthermore, the official added, conducting such a study would not be the responsibility of her office. Officials from the Workforce Management and Consulting Office said VHA is considering a study of SPS staffing, given the results of the VA Office of Inspector General's 2018 review that identified high vacancy rates as a contributing factor to the challenges with the SPS program at the Washington, D.C. VAMC. However, VHA does not have definitive plans to complete this type of study or a timeframe for when the decision will be made. Until the study is conducted and actions are taken based on the study, as appropriate, VHA will not have addressed a potential risk to its SPS programs. This is inconsistent with standards for internal control in the federal government for risk assessment, which state that management should identify, analyze, and respond to risks related to achieving defined objectives. Without examining SPS workforce needs, and taking action based on this assessment, as appropriate, VHA lacks reasonable assurance that its approach to SPS staffing helps ensure veterans' access to care and safety. The proper reprocessing of surgical instruments and other RME used in medical procedures is critical for ensuring veterans' access to safe care. We have previously found that VA had not provided enough guidance to ensure SPS staff were reprocessing RME correctly; in 2016, VA issued Directive 1116(2), with requirements for the SPS program. While this was a good step, our current review shows that VHA needs to strengthen its oversight of VAMCs' adherence to these requirements.
VHA has not ensured that it has complete information from inspections of VAMCs, nor does VHA consistently share inspection results and other information that could help VAMCs meet the requirements. Without analysis of complete information from inspections and consistent sharing of this information, VHA does not have reasonable assurance that VAMCs are following all RME policies, and VHA is missing an opportunity to strengthen VAMCs’ adherence to RME requirements. Furthermore, officials from some VISNs and selected VAMCs report challenges meeting two RME policy requirements—the climate control and the reprocessing transportation deadline requirements. If VHA implements a recommendation we made in 2017 for the agency to obtain feedback from VISNs and VAMCs on their efforts to implement VHA policies and take the appropriate actions, it could help with these challenges. Additionally, while nearly all of the officials from the 18 VISNs and selected VAMCs interviewed reported challenges maintaining a sufficient SPS workforce, VHA does not know whether the current SPS workforce addresses VAMCs’ SPS workforce needs. VHA officials say that VHA is considering studying its SPS workforce; however, it has not done so or announced a timeframe for doing so. Until it conducts such a study, VHA will not know whether or to what extent reported SPS workforce challenges adversely affect the ability of VAMCs to effectively operate their SPS programs and ensure access to safe care for veterans. We are making the following three recommendations to VHA: The Under Secretary of Health should ensure all RME inspections are being conducted and reported as required and that the inspection results VHA has are complete. (Recommendation 1) The Under Secretary of Health should consistently analyze and share top common RME inspection findings and possible solutions with VISNs and VAMCs. 
(Recommendation 2) The Under Secretary of Health should examine the SPS workforce needs and take action based on this assessment, as appropriate. (Recommendation 3) We provided a draft of this report to VA for comment. In its written comments, which are provided in appendix III, VA concurred with our recommendations. In its comments, VA acknowledged the need for complete RME inspection information, stating that VHA will establish an oversight process for reviewing and monitoring findings from site inspections and for reporting this information to VHA leadership. Further, VA noted that VHA will analyze data from RME inspections and share findings and possible solutions with VISNs and VAMCs via a written briefing that will be published on VHA’s website and discussed during educational sessions and national calls. VA also noted that VHA has an interdisciplinary work group that has identified actions it can take to address SPS workforce needs, including implementing an enhanced market-based approach for determining pay levels and developing a staffing model so VAMCs can determine what staffing levels they need to more effectively operate their SPS programs. VA expects VHA to complete all of these actions by July 2019 or earlier. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Secretary of Veterans Affairs. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Sharon M. Silas at (202) 512-7114 or silass@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. 
Our review of the 27 fiscal year 2017 inspections of VAMCs conducted by Veterans Integrated Service Networks (VISN) for which VHA did not have inspection reports identified a number of common reusable medical equipment (RME) issues among the selected VAMCs. The top 10 are listed in table 2 below. Our review of the Veterans Health Administration (VHA) summary of issue briefs for fiscal years 2015 through 2017 identified three major categories of issues related to RME. See table 3 below for the percentage of all issue briefs that fell into each of these three categories. In addition to the contact named above, Karin Wallestad (Assistant Director), Teresa Tam (Analyst-in-Charge), Kenisha Cantrell, Michael Zose, and Krister Friday made major contributions to this report. Also contributing were Kaitlin Farquharson, Diona Martyn, and Muriel Brown.
|
VHA operates one of the largest health care delivery systems in the nation, serving over 9 million enrolled veterans. In providing health care services to veterans, VAMCs use RME, which must be reprocessed—that is, cleaned, disinfected, or sterilized—between uses. Improper reprocessing of RME can negatively affect patient care. To help ensure the safety of veterans, VHA policy establishes requirements VAMCs must follow when reprocessing RME and requires a number of related oversight efforts. GAO was asked to review VHA's reprocessing of RME. This report examines (1) VHA's oversight of VAMCs' adherence to RME policies and (2) challenges VAMCs face in operating their Sterile Processing Services programs, and any efforts by VHA to address these challenges. GAO reviewed relevant VHA documents, including RME policies and VISN inspection results for fiscal year 2017. GAO interviewed officials from VHA, all 18 VISNs, and four VAMCs, selected based on geographic variation, VAMC complexity, and data on operating room delays. GAO examined VHA's oversight in the context of federal internal control standards on communication, monitoring, and information. GAO found that the Department of Veterans Affairs' (VA) Veterans Health Administration (VHA) does not have reasonable assurance that VA Medical Centers (VAMC) are following policies related to reprocessing reusable medical equipment (RME). Reprocessing involves cleaning, sterilizing, and storing surgical instruments and other RME, such as endoscopes. VHA has not ensured that all VAMCs' RME inspections have been conducted because it has incomplete information from the annual inspections by Veterans Integrated Service Networks (VISN), which oversee VAMCs. For fiscal year 2017, VHA did not have 39 of the 144 VISN reports from the VISNs' inspections of their VAMCs' Sterile Processing Services departments.
VISNs were able to provide GAO with evidence that they had conducted 27 of the 39 missing inspections; top areas of non-adherence in these inspections were related to quality and training, among other things. Although VHA has ultimate oversight responsibility, a VHA official told GAO that VHA had not been aware it lacked complete inspection results because it has largely relied on the VISNs to ensure complete inspection result reporting. Without analyzing and sharing complete information from inspections, VHA does not have assurance that its VAMCs are following RME policies designed to ensure that veterans receive safe care. GAO also found that VAMCs face challenges operating their Sterile Processing Services programs—notably, addressing workforce needs. Almost all of the officials from all 18 VISNs and selected VAMCs GAO interviewed reported Sterile Processing Services workforce challenges, such as lengthy hiring timeframes and limited pay and professional growth potential. According to officials, these challenges result in programs having difficulty maintaining sufficient staffing. VHA officials told GAO that VHA is considering studying Sterile Processing Services staffing at VAMCs, although VHA does not have definitive plans to do so. VHA's Sterile Processing Services workforce challenges pose a potential risk to VAMCs' ability to ensure access to sterilized medical equipment, and VHA's failure to address this risk is inconsistent with standards for internal control in the federal government. Until VHA examines these workforce needs, VHA will not know whether or to what extent the reported challenges adversely affect VAMCs' ability to effectively operate their Sterile Processing Services programs and ensure access to safe care for veterans.
GAO is making three recommendations to VHA, including that it ensure all RME inspections are being conducted and complete results reported, and that it examine Sterile Processing Services workforce needs and make adjustments, as appropriate. VA concurred with these recommendations.
|
Federal agencies and our nation’s critical infrastructures—such as energy, transportation systems, communications networks, and financial services—are dependent on computerized (cyber) information systems and electronic data to process, maintain, and report essential information, and to operate and control physical processes. Virtually all federal operations are supported by computer systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions and account for their resources without these information assets. Hence, the security of these systems and data is vital to public confidence and the nation’s safety, prosperity, and well-being. Ineffective security controls to protect these systems and data could have a significant impact on a broad array of government operations and assets. Yet, computer networks and systems used by federal agencies are often riddled with security vulnerabilities—both known and unknown. These systems are often interconnected with other internal and external systems and networks, including the Internet, thereby increasing the number of avenues of attack and expanding their attack surface. Furthermore, safeguarding federal computer systems has been a long-standing concern. This year marks the 21st anniversary of when GAO first designated information security as a government-wide high-risk area in 1997. We expanded this high-risk area to include safeguarding the systems supporting our nation’s critical infrastructure in 2003 and protecting the privacy of personally identifiable information in 2015. Over the last several years, we have made about 2,500 recommendations to agencies aimed at improving the security of federal systems and information. These recommendations identified actions for agencies to take to strengthen their information security programs and technical controls over their computer networks and systems.
Nevertheless, many agencies continue to be challenged in safeguarding their information systems and information, in part because they have not implemented many of these recommendations. As of March 2018, about 885 of our prior information security-related recommendations had not been implemented. DHS has broad authorities to improve and promote cybersecurity of federal and private-sector networks. The federal laws and policies that underpin these authorities include the following: The Federal Information Security Modernization Act (FISMA) of 2014 clarified and expanded DHS’s responsibilities for assisting with the implementation of, and overseeing, information security at federal agencies. These responsibilities include requirements to: develop, issue, and oversee agencies’ implementation of binding operational directives, including directives for incident reporting, contents of annual agency reports, and other operational requirements; monitor agencies’ implementation of information security policies and practices; and provide operational and technical assistance to agencies, including by operating the federal information security incident center, deploying technology to continuously diagnose and mitigate threats, and conducting threat and vulnerability assessments of systems. The Homeland Security Cybersecurity Workforce Assessment Act of 2014, among other things, requires DHS to assess its cybersecurity workforce. In this regard, the Secretary of Homeland Security is to identify all positions in DHS that perform cybersecurity functions and to identify cybersecurity work categories and specialty areas of critical need.
The National Cybersecurity Protection Act of 2014 codified the role of the National Cybersecurity and Communications Integration Center (NCCIC)—a center established by DHS in 2009—as the federal civilian interface for sharing information concerning cybersecurity risks, incidents, analysis, and warnings to federal and non-federal entities, including owners and operators of information systems supporting critical infrastructure. The Cybersecurity Act of 2015, among other things, sets forth authority for enhancing the sharing of cybersecurity-related information among federal and non-federal entities. The act gives DHS’s NCCIC responsibility for implementing this information sharing authority. The act also requires DHS to: Jointly develop with other specified agencies, and submit to Congress, procedures for sharing federal cybersecurity threat information and defensive measures with federal and non-federal entities. Deploy, operate, and maintain capabilities to prevent and detect cybersecurity risks in network traffic traveling to or from an agency’s information system. DHS is to make these capabilities available for use by any agency. In addition, the act requires DHS to improve intrusion detection and prevention capabilities, as appropriate, by regularly deploying new technologies and modifying existing technologies. Long-standing federal policy, as promulgated by a presidential policy directive, executive orders, and the National Infrastructure Protection Plan, has designated DHS as a lead federal agency for coordinating, assisting, and sharing information with the private sector to protect critical infrastructure from cyber threats. We have reviewed several federal programs and activities implemented by DHS that are intended to mitigate cybersecurity risk for the computer systems and networks supporting federal operations and our nation’s critical infrastructure.
These programs and activities include deploying the National Cybersecurity Protection System, providing continuous diagnostic and mitigation services, issuing binding operational directives, sharing information through the National Cybersecurity and Communications Integration Center, promoting adoption of a cybersecurity framework, and assisting private-sector partners with cyber risk mitigation activities. We also examined DHS’s efforts to assess its cybersecurity workforce. DHS has made important progress in implementing these programs and activities. However, the department needs to take additional actions to ensure that it successfully mitigates cybersecurity risks on federal and private-sector computer systems and networks. DHS is responsible for operating its National Cybersecurity Protection System (NCPS), operationally known as EINSTEIN. NCPS is intended to provide intrusion detection and prevention capabilities to entities across the federal government. It also is intended to provide DHS with capabilities to detect malicious traffic traversing federal agencies’ computer networks, prevent intrusions, and support data analytics and information sharing. In January 2016, we reported that the NCPS was partially, but not fully, meeting most of its four stated system objectives: Intrusion detection: We noted that NCPS provided DHS with a limited ability to detect potentially malicious activity entering and exiting computer networks at federal agencies. Specifically, NCPS compared network traffic to known patterns of malicious data, or “signatures,” but did not detect deviations from predefined baselines of normal network behavior. In addition, the system did not monitor several types of network traffic, and its “signatures” did not address threats that exploited many common security vulnerabilities and, thus, was not effective in detecting certain types of malicious traffic.
Intrusion prevention: The capability of NCPS to prevent intrusions (e.g., blocking an e-mail determined to be malicious) was limited to the types of network traffic that it monitored. For example, the intrusion prevention function monitored and blocked e-mail. However, it did not address malicious content from other types of network traffic. Analytics: NCPS supports a variety of data analytical tools, including a centralized platform for aggregating data and a capability for analyzing the characteristics of malicious code. In addition, DHS had further enhancements to this capability planned through 2018. Information sharing: DHS had not developed most of the planned functionality for NCPS’s information-sharing capability, and requirements had only recently been approved. Moreover, we noted that agencies and DHS did not always agree about whether notifications of potentially malicious activity had been sent or received, and agencies had mixed views about the usefulness of these notifications. Further, DHS did not always solicit—and agencies did not always provide—feedback on the notifications. We recommended that DHS take nine actions to enhance NCPS’s capabilities for meeting its objectives, better define requirements for future capabilities, and develop network routing guidance. The department agreed with our recommendations; however, as of April 2018, it had not fully implemented 8 of the 9 recommendations. As part of a review mandated by the Federal Cybersecurity Enhancement Act of 2015, we are currently examining DHS’s efforts to improve its intrusion detection and prevention capabilities. The Continuous Diagnostics and Mitigation (CDM) program was established to provide federal agencies with tools and services that have the intended capability to automate network monitoring, correlate and analyze security-related information, and enhance risk-based decision making at agency and government-wide levels. 
These tools include sensors that perform automated scans or searches for known cyber vulnerabilities, the results of which can feed into a dashboard that alerts network managers and enables the agency to allocate resources based on the risk. DHS, in partnership with and through the General Services Administration, established a government-wide acquisition vehicle for acquiring CDM capabilities and tools. The CDM blanket purchase agreement is available to federal, state, local, and tribal government entities for acquiring these capabilities. There are three phases of CDM implementation, and the dates for implementing Phase 2 and Phase 3 appear to be slipping: Phase 1: This phase involves deploying products to automate hardware and software asset management, configuration settings, and common vulnerability management capabilities. According to the Cybersecurity Strategy and Implementation Plan, DHS purchased Phase 1 tools and integration services for all participating agencies in fiscal year 2015. Phase 2: This phase intends to address privilege management and infrastructure integrity by allowing agencies to monitor users on their networks and to detect whether users are engaging in unauthorized activity. According to the Cybersecurity Strategy and Implementation Plan, DHS was to provide agencies with additional Phase 2 capabilities throughout fiscal year 2016, with the full suite of CDM Phase 2 capabilities delivered by the end of that fiscal year. However, according to the Office of Management and Budget’s (OMB) FISMA Annual Report to Congress for Fiscal Year 2017, the CDM program began deploying Phase 2 tools and sensors during fiscal year 2017. Phase 3: According to DHS, this phase is intended to address boundary protection and event management throughout the security life cycle. It focuses on detecting unusual activity inside agency networks and alerting security personnel.
The agency had planned to provide 97 percent of federal agencies the services they need for CDM Phase 3 in fiscal year 2017. However, according to OMB’s FISMA report for fiscal year 2017, the CDM program will continue to incorporate additional capabilities, including Phase 3, in fiscal year 2018. In May 2016, we reported that most of the 18 agencies covered by the CFO Act that had high-impact systems were in the early stages of implementing CDM. All 17 of the civilian agencies that we surveyed indicated they had developed their own strategy for information security continuous monitoring. Additionally, according to the survey responses, 14 of the 17 civilian agencies had deployed products to automate hardware and software asset management, configuration settings, and common vulnerability management. Further, more than half of these agencies noted that they had leveraged products/tools provided through the General Services Administration’s acquisition vehicle. However, only 2 of the 17 agencies reported that they had completed installation of agency and bureau/component-level dashboards and monitored attributes of authorized users operating in their agency’s computing environment. Agencies noted that expediting the implementation of the CDM phases could be of benefit to them in further protecting their high-impact systems. Subsequently, in March 2017, we reported that the effective implementation of the CDM tools and capabilities can assist agencies in overcoming the challenges of securing their information systems and information. We noted that our audits often identify insecure configurations, unpatched or unsupported software, and other vulnerabilities in agency systems. Thus, the tools and capabilities available under the CDM program, when effectively used by agencies, can help them to diagnose and mitigate vulnerabilities to their systems.
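The risk-based alerting concept behind the CDM dashboards (sensor scan results aggregated and prioritized so that network managers can direct remediation resources to the greatest risks first) can be sketched in a few lines of Python. This is an illustrative sketch only, not the CDM program's actual tooling; the `Finding` schema, the weighting in `risk_score`, and the sample asset names are all hypothetical, chosen simply to show the pattern.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    """One result from an automated sensor scan (hypothetical schema)."""
    asset: str     # host or application the sensor scanned
    cve: str       # identifier of the known vulnerability detected
    cvss: float    # severity score, 0.0 (low) to 10.0 (critical)
    exposed: bool  # whether the asset is reachable from outside the network


def risk_score(f: Finding) -> float:
    # Weight severity more heavily when the asset is externally exposed,
    # so remediation effort goes to the riskiest findings first.
    return f.cvss * (2.0 if f.exposed else 1.0)


def dashboard(findings: list, top: int = 3) -> list:
    """Return the highest-risk findings, as a dashboard would surface them."""
    return sorted(findings, key=risk_score, reverse=True)[:top]


if __name__ == "__main__":
    scan_results = [
        Finding("mail-gw", "CVE-2017-0144", 8.1, exposed=True),
        Finding("hr-db", "CVE-2016-6662", 9.8, exposed=False),
        Finding("web-01", "CVE-2017-5638", 10.0, exposed=True),
    ]
    # Highest-risk finding prints first.
    for f in dashboard(scan_results):
        print(f"{f.asset}: {f.cve} (risk {risk_score(f):.1f})")
```

The design choice the sketch illustrates is the one the testimony describes: sensors only produce raw findings; the value comes from correlating them into a single ranked view so that resource allocation is driven by risk rather than by the order in which vulnerabilities happen to be detected.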
We reported that, by continuing to make these tools and capabilities available to federal agencies, DHS can also have additional assurance that agencies are better positioned to protect their information systems and information. Beyond the NCPS and CDM programs, DHS also provides a number of services that could help agencies protect their information systems. Such services include, but are not limited to: US-CERT monthly operational bulletins, which are intended to provide senior federal government information security officials and staff with actionable information to improve their organization’s cybersecurity posture based on incidents observed, reported, or acted on by DHS and US-CERT. CyberStat reviews, which are in-depth sessions attended by National Security Staff, as well as officials from OMB, DHS, and an agency to discuss that agency’s cybersecurity posture and opportunities for collaboration. According to OMB, these interviews are face-to-face, evidence-based meetings intended to ensure agencies are accountable for their cybersecurity posture. The sessions are intended to assist the agencies in developing focused strategies for improving their information security posture in areas where there are challenges. DHS Red and Blue Team exercises that are intended to provide services to agencies for testing their systems with regard to potential attacks. A Red Team emulates a potential adversary’s attack or exploitation capabilities against an agency’s cybersecurity posture. The Blue Team defends an agency’s information systems when the Red Team attacks, typically as part of an operational exercise conducted according to rules established and monitored by a neutral group. In May 2016, we reported that, although participation in these services varied among the 18 agencies we surveyed, most of those that chose to participate reported that they generally found these services to be useful in aiding the cybersecurity protection of their high-impact systems. 
Specifically, 15 of 18 agencies reported that they participated in US-CERT monthly operational bulletins, and most said they found the service very or somewhat useful. All 18 agencies reported that they participated in the CyberStat reviews, and most said they found the service very or somewhat useful. Nine of 18 agencies reported that they participated in DHS’s Red/Blue team exercises, and most said they found the exercises to be very or somewhat useful. Half of the 18 agencies in our survey reported that they wanted an expansion of federal initiatives and services to help protect their high-impact systems. For example, these agencies noted that expediting the implementation of CDM phases, sharing threat intelligence information, and sharing attack vectors could be of benefit to them in further protecting their high-impact systems. We believe that by continuing to make these services available to agencies, DHS will be better able to assist agencies in strengthening the security of their information systems. FISMA authorizes DHS to develop and issue binding operational directives to federal agencies and oversee their implementation. The directives are compulsory and require agencies to take specific actions that are intended to safeguard federal information and information systems from a known threat, vulnerability, or risk. In September 2017, we reported that DHS had developed and issued four binding operational directives as of July 2017, instructing agencies to: mitigate critical vulnerabilities discovered by DHS’s NCCIC through its scanning of agencies’ Internet-accessible systems; participate in risk and vulnerability assessments as well as DHS security architecture assessments conducted on agencies’ high-value assets; address several urgent vulnerabilities in network infrastructure devices identified in an NCCIC analysis report within 45 days of the directive’s issuance; and report cyber incidents and comply with annual FISMA reporting requirements.
Since July 2017, DHS has issued two additional binding operational directives instructing agencies to: identify and remove the presence of any information security products developed by AO Kaspersky Lab on their information systems and discontinue the use of such products; and enhance e-mail security by, among other things, removing certain insecure protocols, and ensure that public-facing websites provide services through a secure connection. We plan to initiate work later this year to identify and assess DHS’s process for developing and overseeing agencies’ implementation of binding operational directives. In February 2017, we reported that NCCIC had taken steps to perform each of its 11 statutorily required cybersecurity functions, such as being a federal civilian interface for sharing cybersecurity-related information with federal and nonfederal entities. NCCIC managed several programs that provided data used in developing 43 products and services that the center made available to its customers in the private sector; federal, state, local, tribal and territorial government entities; and other partner organizations. For example, NCCIC issued indicator bulletins, which could contain information related to cyber threat indicators, defensive measures, and cybersecurity risks and incidents, and helped to fulfill its function to coordinate the sharing of such information across the government. Respondents to a survey that we administered to NCCIC’s customers varied in their reported use of NCCIC’s products but had generally favorable views of the center’s activities. The National Cybersecurity Protection Act also required NCCIC to carry out its functions in accordance with nine implementing principles, to the extent practicable. However, as we reported, the extent to which NCCIC adhered to the nine principles when performing the functions was unclear because the center had not yet determined the applicability of the principles to all 11 functions.
It also had not established metrics and methods by which to evaluate its performance against the principles. We also identified several impediments to NCCIC performing its cybersecurity functions more efficiently. For example, the center did not have a centralized system for tracking security incidents and, as a result, could not produce a report on the status of all incidents reported to the center. In addition, the center did not keep current and reliable customer information and was unable to demonstrate that it had contact information for all owners and operators of the most critical cyber-dependent infrastructure assets. We made nine recommendations to DHS for enhancing the effectiveness and efficiency of NCCIC. Among other activities, these recommendations called for the department to determine the applicability of the implementing principles; establish metrics and methods for evaluating performance; and address identified impediments. DHS agreed with the recommendations; however, as of April 2018, all nine recommendations remained unimplemented. An executive order issued by the President in February 2013 (E.O. 13636) states that sector-specific agencies (SSA), which include DHS, are to review the National Institute of Standards and Technology Framework for Improving Critical Infrastructure Cybersecurity (cybersecurity framework) and, if necessary, develop implementation guidance or supplemental materials to address sector-specific risks and operating environments. In February 2014, DHS launched the Critical Infrastructure Cyber Community Voluntary Program to assist in enhancing critical infrastructure cybersecurity and to encourage adoption of the framework across the critical infrastructure sectors. In addition, DHS, as the SSA and co-SSA for 10 critical infrastructure sectors, had developed framework implementation guidance for some of the sectors it leads.
Nevertheless, we reported weaknesses in DHS’s efforts to promote the use of the framework across the sectors and within the sectors it leads. Specifically, in December 2015, we reported that DHS did not measure the effectiveness of the Critical Infrastructure Cyber Community Voluntary Program in encouraging use of the cybersecurity framework. In addition, DHS and GSA, which are the co-SSAs for the government facilities sector, had yet to determine if sector implementation guidance should be developed for the government facilities sector. Further, in February 2018, we reported that none of the SSAs, including DHS, had measured the cybersecurity framework’s implementation by entities within their respective sectors, in accordance with the nation’s plan for national critical infrastructure protection efforts. We made two recommendations to DHS to better facilitate adoption of the cybersecurity framework across the critical infrastructure sectors and within the government facilities sector. We also recommended that DHS develop methods for determining the level and type of framework adoption by entities across its respective sectors. DHS concurred with the three recommendations. As of April 2018, only the recommendation related to the government facilities sector has been implemented.
Nevertheless, we identified deficiencies in critical infrastructure partners’ efforts to collaborate to monitor progress towards improving cybersecurity within the sectors. Specifically, the SSAs for 12 sectors, including DHS for 8 sectors, had not developed metrics to measure and report on the effectiveness of their cyber risk mitigation activities or their sectors’ cybersecurity posture. This was because, among other reasons, the SSAs rely on their private-sector partners to voluntarily share information needed to measure efforts. We made two recommendations to DHS—one recommendation based on its role as the SSA for 8 sectors and one recommendation based on its role as the co-SSA for 1 sector—to collaborate with sector partners to develop performance metrics and determine how to overcome challenges to reporting the results of their cyber risk mitigation activities. DHS concurred with the two recommendations. As of April 2018, DHS has not demonstrated that it has implemented these recommendations. In February 2018, we reported that DHS had taken actions to identify, categorize, and assign employment codes to its cybersecurity positions, as required by the Homeland Security Cybersecurity Workforce Assessment Act of 2014. However, its actions had not been timely and complete. For example, DHS had not met statutorily defined deadlines for completing actions to identify and assign codes to cybersecurity positions or ensured that its procedures to identify, categorize, and code its cybersecurity positions addressed vacant positions, as required by the act. The department also had not (1) identified the individual within each DHS component agency who was responsible for leading and overseeing the identification and coding of the component’s cybersecurity positions or (2) reviewed the components’ procedures for consistency with departmental guidance. 
In addition, DHS had not yet completed its efforts to identify all of the department’s cybersecurity positions and accurately assign codes to all filled and vacant cybersecurity positions. In August 2017, DHS reported to the Congress that it had coded 95 percent of the department’s identified cybersecurity positions. However, we determined that the department had, at that time, coded approximately 79 percent of the positions. DHS overstated the percentage of coded positions primarily because it excluded vacant positions, even though the act required the department to report such positions. Further, although DHS had taken steps to identify its workforce capability gaps, it had not identified or reported to the Congress on its department-wide cybersecurity critical needs that align with specialty areas. The department also had not annually reported its cybersecurity critical needs to the Office of Personnel Management (OPM), as required, and it had not developed plans with clearly defined time frames for doing so. We recommended that DHS take six actions, including ensuring that its cybersecurity workforce procedures identify position vacancies and responsibilities; reported workforce data are complete and accurate; and plans for reporting on critical needs are developed. DHS concurred with the six recommendations and stated that it plans to take actions to address them by June 2018. In conclusion, DHS is unique among federal civilian agencies in that it is responsible for improving and promoting the cybersecurity of not only its own internal computer systems and networks but also those of other federal agencies and the private-sector owners and operators of critical infrastructure. Consistent with its statutory authorities and responsibilities under federal policy, the department has acted to assist federal agencies and private-sector partners in bolstering their cybersecurity capabilities.
However, the effectiveness of DHS’s activities has been limited or not clearly understood because of shortcomings with its programs and a lack of useful performance measures. DHS needs to enhance its capabilities; expedite delivery of services; continue to provide guidance and assistance to federal agencies and private-sector partners; and establish useful performance metrics to assess the effectiveness of its cybersecurity-related activities. In addition, developing and maintaining a qualified cybersecurity workforce needs to be a priority for the department. Until it fully and effectively performs its cybersecurity authorities and responsibilities, DHS’s ability to improve and promote the cybersecurity of federal and private-sector networks will be limited. Chairman Johnson, Ranking Member McCaskill, and Members of the Committee, this concludes my statement. I would be pleased to respond to your questions. If you or your staffs have any questions about this testimony, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Nabajyoti Barkakati, Chris Currie, Larry Crosland, Tammi Kalugdan, David Plocher, Di’Mond Spencer, and Priscilla Smith.

GAO, Critical Infrastructure Protection: Additional Actions Are Essential for Assessing Cybersecurity Framework Adoption, GAO-18-211 (Washington, D.C.: Feb. 15, 2018).

GAO, Cybersecurity Workforce: Urgent Need for DHS to Take Actions to Identify Its Position and Critical Skill Requirements, GAO-18-175 (Washington, D.C.: Feb. 6, 2018).

GAO, Federal Information Security: Weaknesses Continue to Indicate Need for Effective Implementation of Policies and Practices, GAO-17-549 (Washington, D.C.: Sept. 28, 2017).

GAO, Cybersecurity: Federal Efforts Are Under Way That May Address Workforce Challenges, GAO-17-533T (Washington, D.C.: Apr. 4, 2017).

GAO, Information Security: DHS Needs to Continue to Advance Initiatives to Protect Federal Systems, GAO-17-518T (Washington, D.C.: Mar. 28, 2017).

GAO, High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others, GAO-17-317 (Washington, D.C.: Feb. 15, 2017).

GAO, Cybersecurity: Actions Needed to Strengthen U.S. Capabilities, GAO-17-440T (Washington, D.C.: Feb. 14, 2017).

GAO, Cybersecurity: DHS’s National Integration Center Generally Performs Required Functions but Needs to Evaluate Its Activities More Completely, GAO-17-163 (Washington, D.C.: Feb. 1, 2017).

GAO, Information Security: DHS Needs to Enhance Capabilities, Improve Planning, and Support Greater Adoption of Its National Cybersecurity Protection System, GAO-16-294 (Washington, D.C.: Jan. 28, 2016).

GAO, Critical Infrastructure Protection: Measures Needed to Assess Agencies’ Promotion of the Cybersecurity Framework, GAO-16-152 (Washington, D.C.: Dec. 17, 2015).

GAO, Critical Infrastructure Protection: Sector-Specific Agencies Need to Better Measure Cybersecurity Progress, GAO-16-79 (Washington, D.C.: Nov. 19, 2015).

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The emergence of increasingly sophisticated threats and continuous reporting of cyber incidents underscores the continuing and urgent need for effective information security. GAO first designated information security as a government-wide high-risk area in 1997. GAO expanded the high-risk area to include the protection of cyber critical infrastructure in 2003 and protecting the privacy of personally identifiable information in 2015. Federal law and policy provide DHS with broad authorities to improve and promote cybersecurity. DHS plays a key role in strengthening the cybersecurity posture of the federal government and promoting cybersecurity of systems supporting the nation's critical infrastructures. This statement highlights GAO's work related to federal programs implemented by DHS that are intended to improve federal cybersecurity and cybersecurity over systems supporting critical infrastructure. In preparing this statement, GAO relied on a body of work issued since fiscal year 2016 that highlighted, among other programs, DHS's NCPS, national integration center activities, and cybersecurity workforce assessment efforts.

In recent years, the Department of Homeland Security (DHS) has acted to improve and promote the cybersecurity of federal and private-sector computer systems and networks, but further improvements are needed. Specifically, consistent with its statutory authorities, DHS has made important progress in implementing programs and activities that are intended to mitigate cybersecurity risks on the computer systems and networks supporting federal operations and our nation's critical infrastructure. 
For example, the department has: issued cybersecurity-related binding operational directives to federal agencies; served as the federal-civilian interface for sharing cybersecurity-related information with federal and nonfederal entities; and promoted use of the Framework for Improving Critical Infrastructure Cybersecurity.

Nevertheless, the department has not taken sufficient actions to ensure that it successfully mitigates cybersecurity risks on federal and private-sector computer systems and networks. For example, GAO reported in 2016 that DHS's National Cybersecurity Protection System (NCPS) had only partially met its stated system objectives of detecting and preventing intrusions, analyzing malicious content, and sharing information. GAO recommended that DHS enhance capabilities, improve planning, and support greater adoption of NCPS. In addition, although the department's National Cybersecurity and Communications Integration Center generally performed required functions such as collecting and sharing cybersecurity-related information with federal and nonfederal entities, GAO reported in 2017 that the center needed to evaluate its activities more completely. For example, the extent to which the center had performed its required functions in accordance with statutorily defined implementing principles was unclear, in part, because the center had not established metrics and methods by which to evaluate its performance against the principles. Further, in its role as the lead federal agency for collaborating with eight critical infrastructure sectors, including the communications and dams sectors, DHS had not developed metrics to measure and report on the effectiveness of its cyber risk mitigation activities or on the cybersecurity posture of the eight sectors. GAO reported in 2018 that DHS had taken steps to assess its cybersecurity workforce; however, it had not identified all of its cybersecurity positions and critical skill requirements. 
Until DHS fully and effectively implements its cybersecurity authorities and responsibilities, the department's ability to improve and promote the cybersecurity of federal and private-sector networks will be limited. Since fiscal year 2016, GAO has made 29 recommendations to DHS to enhance the capabilities of NCPS, establish metrics and methods for evaluating performance, and fully assess its cybersecurity workforce, among other things. As of April 2018, DHS had not demonstrated that it had fully implemented most of the recommendations.
GAO’s Standards for Internal Control in the Federal Government state that federal agencies—such as DOD—must demonstrate a commitment to training, mentoring, retaining, and selecting competent individuals, which would include program managers. These standards explain that federal agencies like DOD should provide training that enables individuals to develop competencies appropriate for key roles, reinforces standards of conduct, and can be tailored based on the needs of the role; mentor individuals by providing guidance on their performance based on standards of conduct and expectations of competence; retain individuals by providing incentives to motivate and reinforce expected levels of performance and desired conduct; and select individuals for key roles by conducting procedures to determine whether a particular candidate fits the organization’s needs and has the competence for the proposed role. The Project Management Institute, as well as four companies that we included in this review, have also identified these activities as critical for developing program managers. Program managers for DOD’s 78 major defense acquisition programs, along with program executive officers, their respective deputies, and program managers for certain non-major programs, occupy what DOD refers to as program management key leadership positions. There were 446 program management key leadership positions at the end of fiscal year 2016. They are in turn part of a broader program management career field, which numbers approximately 17,000 civilian and military personnel. The Air Force typically brings its future program managers for major defense acquisition programs into the career field early in their careers, and then provides training and experiences to prepare them for the role. In contrast, the Army and Navy typically bring their future program managers into the career field later in their careers and from other fields, such as engineering. 
As shown in table 1, at the end of fiscal year 2016, most program manager positions for major defense acquisition programs were held by military personnel. According to military service officials, when a military officer fills a program manager position, a civilian usually fills the deputy program manager position for that program and vice versa. Overarching guidance, training, and oversight for the defense acquisition workforce is provided centrally by DOD in the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, which includes Human Capital Initiatives and the Defense Acquisition University. Other officials and organizations that play key roles include the Defense Acquisition Functional leader for program management, who is responsible for establishing a competency model that reflects the knowledge and skills required to be successful in the career field, as well as position descriptions, requirements for key leadership positions, certification standards, and continuous learning activities; the Directors for Acquisition Career Management in each of the military services, who serve as key advisors for policy, coordination, implementation, and oversight of acquisition workforce programs within their services; and acquisition commands and program executive offices within each military service, which work together to manage acquisition programs and initiatives to improve the workforce. Over the last decade, Congress has passed several laws aimed at bolstering the acquisition workforce and specifically the program management career field. Provisions have included requiring DOD to develop a comprehensive strategy for enhancing the role of program managers, provide advancement opportunities for military personnel, and establish training programs for the acquisition workforce. 
Congress also established the Defense Acquisition Workforce Development Fund (DAWDF) in 2008 to provide funds for the recruitment, training, and retention of DOD acquisition personnel. Since the establishment of DAWDF, DOD has obligated more than $3.5 billion in DAWDF funds for these purposes. Of the more than $440 million in DAWDF funds obligated in fiscal year 2016, almost $12 million was obligated for the program management career field: $0.4 million was obligated for recruitment, $10.5 million was obligated for training, and $0.9 million was obligated for retention and recognition. Additional funds supported the salaries of 33 people hired into the career field during fiscal year 2016. To bolster the number of civilian personnel that could be selected for a program manager position, the National Defense Authorization Act for Fiscal Year 2018 requires DOD to implement a civilian program manager development program. The act states that the plan for such a program shall include consideration of qualifications, training, assignments and rotations, and retention benefits, among other things. We identified 10 practices, across four distinct areas, used by leading organizations to develop program manager talent based on our extensive review of Project Management Institute documents and discussions with AstraZeneca, Boeing, DXC Technology, and Rio Tinto. These four areas correspond to the internal control standards discussed previously. Program managers at these companies share similar basic responsibilities with DOD program managers, including overseeing the development and production of goods and services in a timely and cost-effective manner. 
As shown in figure 1 below, leading organizations: provide a mix of formal and informal training opportunities focused on sharing knowledge and providing experiences that prepare people for program management; offer mentoring opportunities to guide people along career paths; use a mix of financial and nonfinancial incentives to retain high performers; and select program managers based on identification of high-potential talent and then assign program managers based on program needs. Boeing representatives noted that by using a combination of these practices, over the past 15 years, their program managers have primarily left positions due to promotion or retirement. Rio Tinto representatives noted that in a challenging environment for finding suitable external talent, they have been able to use these practices to successfully develop most of the talent they need internally. DXC Technology representatives noted that these practices enabled their program managers to receive better feedback and address skill gaps. An AstraZeneca representative noted that these practices have made it easier for people to get the range of experiences they need to move into leadership positions. The Project Management Institute identifies training as the most common component of development. Leading organizations we spoke with use venues like training classes to share knowledge and experiences. These organizations also expand people’s knowledge and experience by encouraging rotation of talent across organizational boundaries. Leading organizations also provide access to on-the-job learning opportunities and repositories of best practices and lessons learned. Examples of practices used by commercial companies we spoke with are described below. Practice #1—Training classes that allow program managers to share experiences: Boeing representatives told us that the company sends employees aspiring to be program managers to a 5-day, in-residence program manager workshop. 
Attendees simulate challenging program management scenarios and get exposure to senior executives who discuss best practices and share experiences. They are expected to make decisions quickly, and play different roles throughout the simulation so they can gain a better understanding of the consequences of their decisions. Similarly, DXC Technology holds multiday workshops for program managers where they participate in role-playing scenarios in which they have to react to a given situation that a program manager could face. One of the key benefits of the workshop noted by DXC Technology representatives is that they receive individual feedback on areas for improvement. Practice #2—Rotational assignments: According to Boeing representatives, the company selects high-performing midcareer employees interested in program management for a 2-year rotation program in which they take leadership roles and solve difficult challenges facing a part of the business. These could be internal assignments within an individual’s current business unit, or external assignments that cross organizational boundaries, for example, between Boeing’s commercial, defense, and services businesses. Boeing representatives noted this as a valuable leadership opportunity for the people involved, which helps drive change in the organizations to which they are assigned. In order to expand people’s capabilities and give them a broader perspective on the business, AstraZeneca regularly notifies its workforce—via a monthly newsletter and an online portal—of rotational opportunities lasting 6 months to a year. These rotations could be within an individual’s business unit, or in a different location or part of the business. Practice #3—On-the-job learning and information repositories: Rio Tinto representatives told us that the company has managers from one project participate in reviews and events for other projects in order to transfer knowledge. 
For example, a manager from a mining operation based in one country might visit a mining operation in another country to share ideas. Rio Tinto also retains the formal reviews that take place at the end of each project, as well as the lessons learned by the team itself, in an accessible document management system. Similarly, AstraZeneca uses online collaboration software to house project information that might help others. It has also established a community of practice and networking groups to share knowledge, and provides people moving into management positions a checklist of tasks and meetings to complete within their first 6 months. Boeing representatives told us that one way the company provides on-the-job training and support to program managers is by temporarily bringing in experts with prior experience to participate in a wide variety of activities across all types of programs. These activities include verifying designs and proactively identifying and resolving challenges such as manufacturing problems. The Project Management Institute identifies mentoring as a way of encouraging and supporting people. Leading organizations we spoke with have programs in place to facilitate mentor and mentee relationships. They expect senior people to serve as mentors. The organizations we spoke with also mentor employees by laying out the career paths they might need to follow to achieve the highest levels of program management within the organization. Examples of practices used by commercial companies we spoke with are described below. Practice #4—Mentoring programs with senior leader involvement: According to Boeing representatives, the company offers voluntary mentoring programs—both formal and informal—at different points throughout an employee’s career cycle, including the early stages. 
Depending on the career goals of an individual, Boeing offers both mentors and sponsors, who are senior leaders who nominate people—especially high performers—for specific opportunities. At Boeing, there is an expectation that senior leaders will be involved in mentoring. For example, midcareer program managers can be matched with executives based on the preferences of the two parties. Relationships are reevaluated annually. Through these relationships, mentees get exposure to critical decisions, as well as other parts of the business. Rio Tinto representatives told us that the company has a formal mentoring program targeted at high-potential talent that partners people with senior leaders, including those from different departments. Senior leaders at Rio Tinto are expected to participate in long-term career development discussions for people two levels below them. The company also provides senior executives and other lower-level managers access to external coaches who focus more on leadership than technical company matters. Practice #5—Career paths that describe skills needed to advance: According to DXC Technology representatives, the company has documented a program management career path that details the skills needed to be a program manager. The company annually identifies the developmental needs of employees, who can then take steps such as moving to another program to gain the required experience to address any gaps. This helps management make decisions that benefit both the individual and the company. Boeing representatives told us that the company has developed a general career path for many of its career fields, including program management, and encourages people to develop the skills they need by gaining experience in different career fields and business units. 
Boeing program managers we met with described the range of experiences they had within the company that equipped them for their roles, such as working on different kinds of aircraft and in technical and business functions. Leading practices identified by us and the Project Management Institute suggest that a combination of financial and nonfinancial incentives can be used to retain high performers. For example, leading organizations we spoke with offer student loan repayments and financing of higher education in compensation packages as financial incentives. They also provide monetary awards to recognize excellence in job performance and contributions to organizational goals. Nonfinancial incentives could include senior leadership recognizing strong performance in program management and emphasizing the idea that program management is prestigious, challenging, and key to business success. Examples of practices used by commercial companies we spoke with are described below. Practice #6—Financial rewards for good performance: Rio Tinto representatives told us that the company offers incentives that are based on performance. The company includes pay raises linked to annual performance ratings, which are determined by the extent to which a program manager meets objectives including cost and schedule goals. According to Boeing representatives, the company annually assesses program managers based on technical and financial performance measures and employee feedback. These assessments help determine annual salary increases and bonuses. Practice #7—Education subsidies: Boeing offers tuition assistance to all people after they have been at the company for at least 1 year. This can support degree programs, professional certificates, and individual courses in fields of study at over 270 colleges and universities. Boeing representatives noted that this has helped foster a high degree of loyalty from people. 
Practice #8—Recognition: Boeing representatives told us that program managers for major programs hold a high level of responsibility and accountability. When program managers are successful at running effective programs, they are often moved to larger and more complex programs with much greater responsibility. AstraZeneca announces recognition for program achievements such as meeting delivery targets via e-mail and at town hall meetings, and significant achievements can also be recognized through nomination for annual company-wide awards. The Project Management Institute emphasizes the importance of identifying top talent and future high performers for key roles. Leading practices for selecting program managers are rooted in the identification of high-potential talent and the alignment of that talent with program needs. Leading organizations we spoke with engage senior management in identifying high-performing people and monitoring their job assignments, performance, and career progression. They also select program managers with the blend of skills, experience, knowledge, and expertise required to be effective within a particular program environment. Examples of practices used by commercial companies we spoke with are described below. Practice #9—Identification of high-potential talent by senior leaders: Rio Tinto representatives told us that senior leaders at the company annually assess the potential and performance of its people and then classify them in one of nine categories that include those who need additional experiences and developmental opportunities, those in the right role and at the right level who need to be kept engaged, and those considered high potential who need challenging opportunities. AstraZeneca identifies and keeps track of high-potential people through annual talent assessments addressing each person’s strengths and gaps, as well as potential roles, development actions, and associated time frames. 
The assessments also include an individual’s professional aspirations. According to Boeing representatives, the company uses its succession planning process to identify a pool of qualified people able to step into executive and program manager positions, including those who are ready to step into a role immediately, and those who need some additional development. Practice #10—Assignment based on skills, experiences, and program needs: According to DXC Technology representatives, the company assigns program managers to roles based on a review of their demonstrated management and subject matter competencies. For example, an individual is evaluated on experience such as managing programs of a certain size or level of complexity, as well as the outcomes they achieved on those programs in terms of cost, schedule, and client feedback. An individual is also evaluated on whether he or she has the specific skills needed to manage a particular program, such as those related to data migration or software application design. Boeing representatives told us that the company takes into account a wide variety of factors when assigning a program manager to a program. Factors could include the size, dollar value, and complexity of a program, as well as the developmental needs of a program manager. Our analysis of the practices used by the military services to train, mentor, retain, and select program managers for major defense acquisition programs shows a mix in the level of alignment with the leading practices. We based our analysis on a review of DOD, military service, and relevant sub-component documentation on training, mentoring, retaining, and selecting program managers, including policies, guidance, strategic plans, curricula, online portals, and acquisition workforce data. Table 2 provides our assessment of the alignment of military service practices with the 10 leading practices. 
Practices used by each of the military services align extensively with 4 of the 10 leading practices. For 5 of the 10, practices used by at least one of the military services do not align extensively with leading practices, and for the remaining practice related to financial rewards for good performance, none of the services’ practices align extensively. We discussed these assessments with each military service Director for Acquisition Career Management, and they generally agreed with our assessments. Military service practices align extensively with four of the leading practices, as shown in table 3 below. For the first practice, alignment is largely the result of steps taken by DOD to comply with the Defense Acquisition Workforce Improvement Act, enacted as part of the National Defense Authorization Act for Fiscal Year 1991. This legislation set forth education, training, and experience requirements that program managers must meet prior to being assigned to a major defense acquisition program or significant non-major defense acquisition program. All four practices that have extensive alignment reflect a combination of DOD-wide initiatives and approaches unique to the military services. The following summarizes our assessment of these practices. Practice #1—Training classes that allow program managers to share experiences: DOD provides centralized training that brings together current and prospective program managers to strengthen their skill sets and share their experiences. The Defense Acquisition University has developed a training curriculum of courses that people must complete—in conjunction with experience and education standards—to be certified as ready to take on increasingly challenging assignments. The highest level courses required for program managers incorporate simulations, case studies, senior agency and industry speakers, and team projects to strengthen participants’ analytical, critical thinking, and decision-making skills. 
According to a Defense Acquisition University official, each year approximately 350 people attend these courses. According to the military services’ Directors for Acquisition Career Management, all current major defense acquisition program managers met their certification requirements. The military services have also developed their own training for program managers that brings peers together and addresses service-specific issues. For example, the Navy has established program management colleges at its largest systems commands. These colleges teach curricula specific to Navy processes. The Navy also provides approximately 200 program managers each year with training courses focused on understanding commercial industry and managing relationships with contractors. These classes, offered through business schools, are taught by academic faculty, senior naval officials, and private sector executives and focus on factors program managers need to be aware of to understand industry behavior and decision-making. According to DOD’s acquisition workforce strategic plan for fiscal years 2016 through 2021, the department intends to improve the type of training it provides program managers, the timing of when courses are provided, and the delivery method. The plan also noted DOD’s intent to strengthen qualification requirements for program management positions by further developing the list of proficiencies associated with certifications, including leadership skills for all levels and technical skills needed by those in the “beginner” and “intermediate” level program management positions. In September 2016, the defense acquisition functional leader for program management finalized and issued this list. Practice #3—On-the-job learning and information repositories: Each of the services provides its own unique on-the-job training or repositories to share lessons learned from acquisition programs. 
The Air Force provides people in the program management career field with detailed task lists that support on-the-job learning along their career paths. For example, people are encouraged to demonstrate competence in areas such as schedule management. The Army has developed an online portal that houses lessons learned from acquisition programs that were documented around program milestones or upon termination. Users can view and search lessons submitted by others, participate in discussion forums, and reference acquisition case histories. The portal contains over 800 lessons learned, with over 400 relating specifically to program management. The Navy has created a series of physical “war rooms” that display materials on the evolution and organization of the Navy, the service’s acquisition history, how to manage a major program, the unique challenges of ship building, and case studies. The Navy hosts a 5-day training program for program managers in these rooms in order to transfer lessons learned from previous acquisition programs. The Defense Acquisition University has also established an online program management community of practice that houses a range of tools and documents that communicate lessons learned. Practice #8—Recognition: DOD leadership acknowledges the challenges and importance of program management by designating the most senior positions in the career field—including program managers— as key leadership positions. These positions require a significant level of authority commensurate with the responsibility and accountability for acquisition program success. Based on our analysis of DOD acquisition workforce data, while the program management career field represents just over 10 percent of the overall acquisition workforce, it accounts for almost 40 percent of key leadership positions. Senior leadership in each of the services also provides their own types of recognition for good performance in program management. 
For example, each service has an annual award recognizing high-performing program managers. In addition, program management is an award category for the DOD-wide Defense Acquisition Workforce Individual Achievement Award, which includes recognition for winners at an awards ceremony held at the Pentagon. Practice #10—Assignment based on skills, experiences, and program needs: All of the services evaluate the skills and experiences of candidates for program manager roles and ensure they have the required qualifications. As part of their processes for filling these roles, the services take note of specific needs associated with a program. In the Army, civilian and military personnel apply each year and are competitively selected by a board of senior Army acquisition leaders who use instructions from the Secretary of the Army to select the best qualified individuals. Once selected by the board, the Army uses another process to match the skills and experience of the individual to those required by the program manager position based on factors such as functional, technical, and educational experience. In the Navy, civilian and military personnel apply and compete for specific programs. As part of the documentation of candidate selection, the Navy requires a description of how the candidate's skills align with the current status of the program. The Air Force designates whether a program will have a military or civilian program manager in advance. The senior official who approves program manager selections considers program needs along with individual qualifications and functional requirements. In addition, the military services consult with the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics on the selection of program managers for those programs where that office is the decision authority. For five of the leading practices, at least one of the military services' practices does not align extensively, as shown in table 4 below. 
The following summarizes our assessment of instances in which one or two military services may be using a leading practice, but not all three services. We also identify examples of military service actions that could serve as a model for meeting those leading practices. Practice #2—Rotational assignments: Each of the services provides civilian and military program management personnel with opportunities to rotate internally among other units or functions. However, while the military services have identified external rotations with industry as a way to gain valuable experience and improve people's business acumen, practices in this area vary. For example:
The Air Force has an external industry rotation program that is open to both civilian and military personnel. In total, about seven military and civilian program management personnel participate in this program each year, according to the Air Force Director for Acquisition Career Management.
The Army's external industry rotation program is open only to military personnel, and approximately 11 program management personnel participate each year, according to the Army Director for Acquisition Career Management. The Director also noted that some local Army organizations send civilian personnel on industry rotations, but was not aware of participation by civilian personnel in the program management career field.
The Navy uses the Secretary of Defense Executive Fellows program to provide experience with commercial industry. This program is open to participants from all the military services. Until 2017, participation in the program was restricted to only military personnel. Over the past 5 years, between two and five Navy military acquisition personnel per year participated in the program, according to the Navy Director for Acquisition Career Management. 
The Directors for Acquisition Career Management noted two inherent difficulties with sending civilians on potentially year-long industry rotations: the participant's organizational unit would need to fund the participant's travel costs, and it would also need to find people to perform the participant's duties in their absence. The Air Force's industry rotation program avoids the travel cost problem by finding civilians opportunities with local companies. In addition, the program is targeted at more junior personnel than the programs used by either the Army or Navy, reducing the difficulty of filling their position while they are on a rotation. As a result of the focus on military personnel participating in industry rotations, civilian personnel in the Army and Navy miss an opportunity to improve their business acumen and gain valuable experience that would better prepare them for program manager roles. Both services could benefit from considering the approaches taken by the Air Force. Practice #4—Mentoring programs with senior leader involvement: Each of the services offers some kind of voluntary mentoring program. However, only the Air Force and Army have a documented expectation that senior civilian and military personnel serve as mentors. The Navy provides a range of mentoring resources, but only has a documented expectation that senior military personnel serve as mentors. The Navy Director for Acquisition Career Management agrees that this expectation is not documented for civilians, but believes that senior civilian leaders in program management are aware that mentoring is a responsibility. However, because it is not documented, some senior civilian leaders might not be aware of this expectation. Practice #5—Career paths that describe skills needed to advance: Each of the services has outlined the steps people need to take to become program managers and provided opportunities for both civilians and military personnel to advance to these and even higher-level positions. 
However, the descriptions of the skills people should obtain to advance along the various career paths are inconsistent among the services. The Air Force includes the skills and competencies people need to achieve specific career goals in the competency-based task lists previously discussed as a tool to support on-the-job learning. The task lists are the same for civilian and military personnel. The Army describes the skills and competencies civilians need to advance via a one-page roadmap. While there is a one-page roadmap for military personnel, it does not discuss or link to skills and competencies. The online version of the civilian roadmap includes direct links to an existing DOD tool that people can use to identify and address gaps in their experience and capture demonstrated experience in a wide range of program management competencies, such as stakeholder management. People and their supervisors are encouraged to use this tool to develop individual career development plans. The tool also provides a common set of standards that organizations can use to mitigate skill gaps through hiring or using developmental opportunities. The Navy’s systems command responsible for delivering and supporting aircraft provides a career roadmap for the program management career field, as well as detailed descriptions of the different levels of skills and competencies needed to advance. However, the systems command responsible for delivering and supporting ships does not have a formal career roadmap. Both Army and Navy Directors for Acquisition Career Management are aware of these inconsistencies, and are working to put approaches in place in fiscal year 2018 to address them and ensure that key groups in the program management career field are not missing important information about skills they should develop. 
Practice #7—Education subsidies: All the services offer tuition assistance to military and civilian personnel to further their education, which has helped increase the percentage of program management personnel with a graduate degree from 46 percent in fiscal year 2008 to 57 percent in fiscal year 2016. The services also offer student loan repayments, but use them for different purposes. The Army and Navy use DAWDF-funded student loan repayments—and the requirement that recipients sign an agreement to serve for 3 years—as a retention tool for program management personnel. However, the Air Force only uses these repayments as a recruiting tool, despite the fact that they can be used for both recruitment and retention. This decision stems from the results of a 2016 study the Air Force commissioned from the RAND Corporation that found limited utility in offering retention bonuses as a tool to retain talent. The Director for Acquisition Career Management told us that the Air Force is scaling back its use of all financial retention incentives and prefers to use student loan repayments as a recruiting tool. The service agreement therefore only covers the early part of someone’s career with the Air Force, instead of being a way to drive retention of more senior personnel. Prior GAO work has found that financial retention incentives are among the most effective flexibilities that agencies have for managing their workforce, and that insufficient use of existing flexibilities can significantly hinder the ability of agencies to retain and manage personnel. Practice #9—Identification of high-potential talent by senior leaders: The Army regularly and systematically involves senior management in identifying high-potential program management talent among civilian and military personnel. 
It requires senior managers to annually evaluate the leadership potential of all civilian acquisition personnel at midcareer or above, and the Army's annual evaluation for all military officers assesses their potential for positions of greater responsibility. The Air Force has a similar process for military personnel, but not civilians. The onus is on civilian personnel to nominate themselves for development programs and resources, rather than being identified and guided toward those opportunities by senior leaders. The Navy only identifies high-potential military and civilian talent on an informal basis, which varies across the service. The Air Force and Navy risk overlooking high-potential talent as a result of their approaches. The Directors for Acquisition Career Management for both services acknowledge the ad hoc nature of their practices, and are looking into steps they could take in fiscal year 2018 to more systematically identify high-potential talent. None of the military services' practices align extensively with leading practices for providing financial rewards for good performance, as shown in table 5 below. Commercial companies have more flexibility than DOD to financially reward good performance. They are not subject to the legal restrictions on compensation that federal agencies must consider, and can offer types of compensation, such as stock options, that federal agencies cannot. Despite this, DOD has mechanisms to financially reward high-performing people. However, these incentives are either unavailable to all program management personnel because of the various pay systems used by DOD, or are underutilized by the military services. For example, military and civilian personnel are compensated under different systems. Military pay and allowances are delineated in Title 37 of the U.S. Code, and while there are provisions for retention bonuses that would cover acquisition officers, there are none that reward high performance. 
Most DOD civilian personnel, on the other hand, are covered by the General Schedule classification, a pay system that is used in many agencies across the federal government. For the most part, people in this pay system receive set pay increases as long as their performance is at an acceptable level. The military services also have the option to convert civilian personnel to the Civilian Acquisition Workforce Personnel Demonstration Project, known as AcqDemo, where people, including those in the program management career field, have the opportunity to earn varying levels of pay increases or bonuses based on their performance. The military services' use of AcqDemo varies. According to AcqDemo data collected by DOD's Human Capital Initiatives office, as of the end of fiscal year 2016, approximately 64 percent of the Army's civilian program management workforce was covered by the system. Army officials told us that the level of coverage has increased since then, and that organizations containing the remaining eligible workforce are considering participation in fiscal year 2018. Furthermore, officials told us that all Army program managers are covered by AcqDemo. However, only 38 percent of the Navy's civilian program management workforce was covered by the system, and 29 percent of the Air Force's. According to the AcqDemo program manager and the Air Force and Navy Directors for Acquisition Career Management, organizations are hesitant to extend coverage because they are apprehensive about whether what is currently a demonstration program will become permanent and about the time it takes management to reach formal agreement with local bargaining units. The greater coverage of AcqDemo across the Army's civilian program management workforce compared to the Air Force and Navy suggests that these two services may have opportunities to learn lessons from the Army's experience. Congress recently took actions that could address some of the concerns about AcqDemo. 
The National Defense Authorization Act for Fiscal Year 2018, for example, extends the authorized timeline for AcqDemo use from December 31, 2020 to December 31, 2023, and increases the total number of people who may participate in the program at any one time from 120,000 to 130,000. As of February 2017, a total of approximately 36,000 people across DOD were participating in AcqDemo. The military services can also use DAWDF funding to recognize high-performing civilian personnel, but have made only limited use of this funding for program management personnel. The Directors for Acquisition Career Management reported the following awards between fiscal years 2008 and 2017:
The Air Force awarded $5,000 to one recipient in fiscal year 2017.
The Army awarded a total of $70,000 to 351 recipients on one team in fiscal year 2015.
The Navy awarded a total of $10,000 to seven recipients between fiscal years 2008 and 2017.
Requests for DAWDF funds are left to the discretion of acquisition commands. According to the military services' Directors for Acquisition Career Management, local commanders are not frequently requesting DAWDF funds for program management recognition awards. One director stated that this was because they want to avoid the perception of treating civilian personnel differently from military personnel. As a result, the military services are missing an opportunity to financially reward good performance and potentially losing talented civilians by not using all available retention tools. The Army Director stated that Army organizations have also used other financial performance incentives, such as spot awards for civilian program management personnel that are not funded by DAWDF. This director also noted that government-wide budgetary limitations for individual monetary awards have reduced the flexibility to offer rewards for performance. 
The National Defense Authorization Act for Fiscal Year 2018 requires DOD to commission a review of military and civilian program manager incentives, including a financial incentive structure to reward program managers for delivering capabilities on budget and on time. This represents an opportunity for DOD to identify and begin to address concerns about the equitable treatment of civilian and military program management personnel. The military services recognize that they need skilled program managers to develop acquisition programs and have taken steps to develop that top-notch talent. Of note, DOD has developed a solid training regimen and established minimum training, experience, and education requirements for people to manage acquisitions of various dollar thresholds. The services have also established repositories that share lessons learned and provide on-the-job learning opportunities to supplement the formal training. Yet, when compared to leading practices, we found that several practices used by the military services for training, mentoring, retaining, and selecting people for program manager positions could be improved. For instance, the Air Force has practices that extensively align with all leading practices for training and mentoring, but we identified some practices for retaining and selecting program managers that do not. We assessed the Army as having practices that extensively align with all leading practices for selecting program managers, but identified some practices for training, mentoring, and retaining program managers that do not. We assessed the Navy as having practices that do not extensively align with leading practices in each of the areas of training, mentoring, retaining, and selecting program managers. 
In nearly all cases, the military services could improve their practices by learning from ideas and initiatives being used by another military service or by commercial companies and ensuring that civilian and military personnel have similar opportunities to develop. While commercial companies have more flexibility in providing financial incentives to their program managers, the military services could make greater use of financial mechanisms provided by Congress—such as DAWDF and AcqDemo—to reward high-performing civilian personnel. DOD also has an opportunity to identify for Congress any concerns about the equitable treatment of civilian and military program management personnel when it comes to rewarding good performance. Taking these actions could encourage high-potential talent to remain in the program management career field and strengthen the next generation of program managers. We are making a total of eight recommendations, including three to the Air Force, two to the Army, and three to the Navy. Specifically:
The Secretary of the Air Force should take steps to address areas of civilian and military program manager retention and selection that do not align extensively with leading practices. This could include using approaches already used by the other military services or commercial companies. (Recommendation 1)
The Secretary of the Air Force should make greater use of existing financial mechanisms such as DAWDF to recognize high performers. (Recommendation 2)
The Secretary of the Air Force should identify lessons learned from the Army's experience in extending coverage of AcqDemo across the civilian program management workforce. (Recommendation 3)
The Secretary of the Army should take steps to address areas of civilian and military program manager training, mentoring, and retention that do not align extensively with leading practices. This could include using approaches already used by the other military services or commercial companies. 
(Recommendation 4)
The Secretary of the Army should make greater use of existing financial mechanisms such as DAWDF to recognize high performers. (Recommendation 5)
The Secretary of the Navy should take steps to address areas of civilian and military program manager training, mentoring, retention, and selection that do not align extensively with leading practices. This could include using approaches already used by the other military services or commercial companies. (Recommendation 6)
The Secretary of the Navy should make greater use of existing financial mechanisms such as DAWDF to recognize high performers. (Recommendation 7)
The Secretary of the Navy should identify lessons learned from the Army's experience in extending coverage of AcqDemo across the civilian program management workforce. (Recommendation 8)
We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix II, DOD concurred with our eight recommendations and in some cases identified ongoing efforts among the military services to address the recommendations and increase alignment with leading practices. In addition, DOD noted the importance of addressing restrictions on how it can reward and retain military personnel, and requested that this issue be included in an ongoing study of DOD workforce incentives. DOD also stated that some of its recent accomplishments and improvements were not mentioned in the report. For example, DOD noted that representatives from the program management community meet regularly to discuss and share lessons learned and best practices. Recent accomplishments include updated competencies, career tracking and development tools, and improvements to classroom and online training. Our report recognizes the progress made by DOD in these areas and highlights some specific examples. We also agree that there is a broader range of efforts underway to enhance the development of program managers. 
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; and the Secretaries of the Air Force, Army, and Navy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report addresses (1) how leading organizations train, mentor, retain, and select program managers and (2) the extent to which military service practices for training, mentoring, retaining, and selecting program managers align with those of leading organizations. To identify how leading organizations train, mentor, retain, and select program managers, we first reviewed GAO’s Standards for Internal Control in the Federal Government to identify criteria regarding the controls that federal agencies such as the Department of Defense (DOD) should have in place to manage talent. To identify leading practices for implementing these internal control standards, we first reviewed key documentation, including relevant legislation and prior GAO reports related to program management. We also reviewed prior GAO reports on managing the federal workforce, and in particular those reports that addressed retention mechanisms. We obtained and reviewed documentation from the Project Management Institute, a not-for-profit association that provides global standards for project and program management, related to program management and managing talent. 
We also worked with the Project Management Institute to identify suitable companies for us to approach to learn about leading practices, based on their membership in the Project Management Institute's Global Executive Council, and insights from Project Management Institute representatives regarding these companies' practices for training, mentoring, retaining, or selecting program managers. We spoke with or visited these companies, and where possible, companies provided relevant documentation to support their examples. The selected companies were the following:
AstraZeneca is a biopharmaceutical company that focuses on the discovery, development, and commercialization of prescription medicines. AstraZeneca reported total revenues of $23 billion in 2016.
Boeing Company is a global aerospace company and manufacturer of commercial airplanes and defense, space, and security platforms and systems. Boeing reported total revenues of $94.6 billion in 2016.
DXC Technology is an end-to-end information technology services company. Created by the merger of CSC and the Enterprise Services business of Hewlett Packard Enterprise, DXC Technology serves nearly 6,000 private and public sector clients across 70 countries, delivering next-generation information technology services and solutions.
Rio Tinto is a metal and minerals mining company that finds, mines, processes, and markets mineral resources including iron ore, aluminum, copper, diamonds, and energy. Rio Tinto reported total revenues of $33.8 billion in 2016.
Based on our review of Project Management Institute documentation and prior GAO reports, as well as our discussions with commercial companies, we identified a set of leading practices for training, mentoring, retaining, and selecting program managers. We shared this set of leading practices with Project Management Institute representatives and made adjustments based on their feedback. 
To identify the extent to which military service practices align with those of leading organizations, we analyzed DOD, military service, and relevant sub-component documentation on training, mentoring, retaining, and selecting program managers for DOD's current portfolio of 78 major defense acquisition programs as defined in our most recent assessment of the portfolio. We also interviewed the following DOD and military service organizations during our review:
Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, Office of Human Capital Initiatives.
Office of the Under Secretary of Defense for Personnel and Readiness, Office of the Defense Civilian Personnel Advisory Service.
Office of the Assistant Secretary of Defense for Acquisition.
Defense Acquisition University.
Department of the Air Force Director for Acquisition Career Management.
Department of the Army Director for Acquisition Career Management.
Department of the Navy Director for Acquisition Career Management.
4th Estate Director for Acquisition Career Management.
Naval Air Systems Command.
Naval Sea Systems Command.
We also interviewed a former Assistant Secretary of the Army and Deputy Assistant Secretary of the Air Force with expertise in defense acquisition. We used pertinent documentation and information from interviews with officials to assess the extent to which each of the services' practices aligned with leading practices. Specifically, we assigned ratings for three levels of alignment. Extensive alignment means that the service's practice contains all of the elements of the leading practice and is not limited to a subset of the population. Partial alignment means that the service's practice contains some, but not all, elements of the leading practice, or is limited to a subset of the population, such as military or civilian personnel only, or a particular organization within the service. 
Little to no alignment means that the service's practice contains minimal or no elements of the leading practice. The following is a list of elements for each practice:
1. Training classes that allow program managers to share experiences: Training classes that involve current or prospective program managers and that allow for knowledge and experience sharing.
2. Rotational assignments: Internal and external—that is, industry—rotational assignments available to military and civilian personnel.
3. On-the-job learning and information repositories: Resources that provide access to guidance on how to perform program management activities and learn from past program management experiences.
4. Mentoring programs with senior leader involvement: Existence of programs that facilitate mentor-mentee relationships and expectation that senior personnel serve as mentors.
5. Career paths that describe skills needed to advance: Documentation for military and civilian personnel of skills needed at different stages of career path(s) to becoming a program manager.
6. Financial rewards for good performance: Consistent use of DAWDF to fund recognition awards for 1 percent or more of civilian program management personnel and AcqDemo coverage of a majority of the civilian program management workforce.
7. Education subsidies: Tuition assistance for further education and use of DAWDF-funded student loan repayments as a retention—versus recruitment—tool.
8. Recognition: Senior-level recognition of prestige and challenging nature of program manager role and of good performance in the role.
9. Identification of high-potential talent by senior leaders: Processes for senior leaders to assess military and civilian program management personnel and identify those considered high potential.
10. Assignment based on skills, experiences, and program needs: Program manager selection processes that assess candidate skills and experiences and specific needs of a program. 
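Although GAO's assessment was qualitative, the three-level rating scheme reduces to a simple decision rule. The sketch below is purely illustrative: the function, its argument names, and the simplification of "minimal or no elements" to zero elements are our own paraphrase of the criteria above, not GAO's assessment instrument.

```python
def rate_alignment(elements_present: int, total_elements: int,
                   limited_to_subset: bool) -> str:
    """Illustrative paraphrase of the report's three rating levels.

    - Extensive: all elements present and not limited to a subset of
      the population (e.g., military-only, or one organization).
    - Partial: some but not all elements, or all elements but limited
      to a subset of the population.
    - Little to no: minimal or no elements (simplified here to zero).
    """
    if elements_present == total_elements and not limited_to_subset:
        return "extensive"
    if elements_present > 0:
        return "partial"
    return "little to no"

# Under these criteria, a practice with every element that covers only
# military personnel still rates as partial, not extensive.
print(rate_alignment(5, 5, True))   # partial
```

This captures why, for example, a mentoring expectation documented only for military personnel was assessed as partial alignment even if all other elements were present.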
One analyst performed the initial assessment for each service, and the supporting evidence was then reviewed by the Assistant Director, with any disagreement discussed and resolved as a team. These discussions also informed requests for more information and documentation from each of the services. Assessments were updated based on what was provided by the services. We also reviewed the military services' practices for approaches that one or more services had adopted that aligned with leading practices, and that could potentially be adopted by the other services to improve their alignment. We shared our assessments with the military service Directors for Acquisition Career Management to give them the opportunity to note additional approaches or initiatives that might inform our assessments, and incorporated their input as appropriate. We reviewed data from DataMart, DOD's acquisition workforce database, on the composition of the acquisition workforce and the program management career field as of the end of fiscal year 2016, including the extent of coverage of the Civilian Acquisition Workforce Personnel Demonstration Project (AcqDemo). To assess the reliability of DOD's DataMart data, we (1) reviewed existing information about the data and the system that produced them, (2) interviewed knowledgeable agency officials, and (3) reviewed written answers to questions about the system's data reliability, including data collection and entry, underlying data sources, and use of internal controls. We determined that the data were sufficiently reliable for the purposes of our reporting objectives. We conducted this performance audit from August 2016 to February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Michael J. Sullivan, (202) 512-4841 or sullivanm@gao.gov. In addition to the contact named above, Cheryl Andrew (Assistant Director), Emily Bond, Robert Bullock, Lorraine Ettaro, Kurt Gurka, Ruben Gzirian, Ashley Rawson, Lucas Scarasso, and Robin Wilson made key contributions to this report.
The Department of Defense's (DOD) major acquisition programs continue to experience cost and schedule overruns. GAO previously found that selecting skilled program managers is a key factor to achieving successful program outcomes. DOD relies on military and civilian program managers to deliver its most expensive new weapon systems, meaning its approach to training, mentoring, retaining, and selecting program managers is critical. House Report 114-537 included a provision for GAO to review the career paths, development, and incentives for program managers. This report addresses how leading organizations train, mentor, retain, and ultimately select program managers; and the extent to which military service practices align with those leading practices. To conduct this work, GAO identified leading practices documented in prior work and by the Project Management Institute, and interviewed commercial companies identified by the Institute as leaders in this field. GAO also analyzed military service practices for developing program managers and compared those to leading practices. Leading organizations use 10 key practices to train, mentor, retain, and ultimately select skilled program managers. GAO found that military service practices for developing program managers align extensively with four of the leading practices, as shown in the table below. At least one military service's practices do not align extensively with five of the leading practices, as shown in the table below. For the remaining leading practice, none of the military services' practices align extensively, as shown in the table below. Military service officials generally agreed with the assessments. More consistent alignment with leading practices—adapted for military and civilian personnel as appropriate and including greater use of existing financial rewards—would enhance the services' ability to manage acquisition programs. 
GAO is making eight recommendations, including that the military services improve practices that do not align extensively with leading practices and make greater use of existing financial rewards for good performance. DOD concurred with the recommendations.