
The Commuter Assistance Program evaluation manual


Material Information

Title:
The Commuter Assistance Program evaluation manual
Physical Description:
106, 22 p. : ill. ; 28 cm.
Language:
English
Creator:
Florida -- Dept. of Transportation. -- Public Transit Office
University of South Florida -- Center for Urban Transportation Research
Publisher:
University of South Florida, Center for Urban Transportation Research
Place of Publication:
Tampa
Publication Date:

Subjects

Subjects / Keywords:
Commuting -- Planning -- Handbooks, manuals, etc -- Florida   ( lcsh )
Ridesharing -- Evaluation -- Handbooks, manuals, etc -- Florida   ( lcsh )
Employer-sponsored transportation -- Evaluation -- Handbooks, manuals, etc -- Florida   ( lcsh )
Local transit -- Evaluation -- Handbooks, manuals, etc -- Florida   ( lcsh )
Genre:
non-fiction   ( marcgt )

Notes

Additional Physical Form:
Also available online.
Statement of Responsibility:
prepared by Center for Urban Transportation Research, University of South Florida ; prepared for Florida Department of Transportation, Public Transit Office.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 029924261
oclc - 37421434
usfldc doi - C01-00164
usfldc handle - c1.164
System ID:
SFS0032272:00001




Full Text

PAGE 1

The Commuter Assistance Program Evaluation Manual

Prepared for:
Florida Department of Transportation
Public Transit Office
605 Suwannee Street, MS 26
Tallahassee, Florida 32399-0450

Prepared by:
Center for Urban Transportation Research
University of South Florida
4202 E. Fowler Avenue, CUT 100
Tampa, FL 33620-5375
(813) 974-3120

Project Staff
Daniel Rudge
Francis Cleland
Philip Winters

The opinions, findings, and conclusions expressed in this publication are those of the authors and not necessarily those of the State of Florida Department of Transportation. This document was prepared in cooperation with the State of Florida Department of Transportation.

PAGE 2

Table of Contents

Chapter One: Introduction ... 1

Chapter Two: Performance Measures ... 3
  Introduction ... 3
  Section A - Required Performance Measures ... 3
    Required Performance Measures ... 5
    Definitions of Required Performance Measures ... 7
  Section B - District Optional Performance Measures ... 11
    District Optional Performance Measures ... 12
    Definitions of District Optional Evaluation Measures ... 13
  Section C - Other Performance Measures ... 15
    Goal 1 - Increase public awareness ... 16
    Goal 1 - Increase public awareness ... 17
    Goal 2 - Increase productivity of roadway system ... 19
    Goal 3 - Decrease traffic congestion ... 21
    Goal 4 - Improve air quality ... 25
    Goal 5 - Conserve energy resources ... 27
    Goal 6 - Improve mobility - Carpools ... 29
    Goal 6 - Improve mobility - Vanpools ... 31
    Goal 6 - Improve mobility - Non-motorized (Bicycle & Pedestrian) ... 33
    Goal 6 - Improve mobility - Transit ... 35
    Goal 7 - Reduce costs of auto ownership ... 37
    Goal 8 - Improve economic viability ... 39
    Goal 9 - Increase customer inquiry ... 41
    Goal 10 - Promote trial use ... 43
    Goal 11 - Facilitate arrangement of pools ... 45
    Goal 12 - Reinforce use of commute alternatives ... 47
    Goal 13 - Develop commuter assistance constituency ... 49
  Section D - Determining Appropriate Performance Measures ... 51
    Selecting Performance Measures ... 51
    An example methodology for measuring overall program effectiveness and changes in productivity ... 51
    Productivity Matrix ... 53

Chapter Three: Evaluation Types ... 54
  Introduction ... 54
  Types of Evaluation ... 54

PAGE 3

    Needs Assessment ... 54
    Summative Evaluation ... 55
    Formative or Process Evaluation ... 55
    Multipurpose evaluations ... 56
  Market Research and Surveying ... 56
  Purposes of doing market research surveys ... 56
    Attribute Testing ... 57
    Analysis of Users ... 58
    Customer Satisfaction studies ... 58
    Studies of Decision-Making Methods ... 59
    Market Sizing and/or Forecasting ... 59

Chapter Four: Survey Methodologies ... 61
  Types of Surveys ... 61
    Focus groups ... 61
    Written/mail surveys ... 61
    Personal interviews ... 62
    Panels ... 63
  Issues in Sampling ... 65
    Definition of target population ... 65
    Proper representation ... 65
    Ensuring proper representation ... 66
    Evaluating surveys for proper representation ... 68
    Sampling efficiency ... 69
    Sample sizes ... 70
    Sample sources ... 71
  Summary ... 72

Chapter Five: Understanding Statistics ... 73
  Introduction ... 73
  Statistics ... 73
  Confidence levels and confidence intervals ... 73
  Proportions ... 74
  Means ... 76
  Table of typical confidence interval sizes at 95% confidence level ... 78
  Determination and analysis of differences for significance ... 79
  Significant differences for proportions ... 80
  Significant differences for means ... 81
  Statistically significant differences versus meaningful differences ... 81

PAGE 4

Chapter Six: Survey Planning and Budgeting ... 83
  Introduction ... 83
  Survey Timing ... 83
  Seasonality ... 83
  Frequency ... 83
  Timing evaluation results for planning and budgeting purposes ... 84
  Budgeting ... 84
  Planning Survey Projects ... 84
    Step 1: Identify decisions to be made ... 85
    Step 2: Hypothesis generation ... 86
    Step 3: Identification of data needed to prove or disprove hypotheses ... 87
      Identifying Data Needs ... 87
      The Importance of Control Groups ... 88
      The Concept of Sampling ... 89
    Step 4: Identifying information sources ... 89
    Step 5: Determining budget available and the best way to use it ... 90

Chapter Seven: Communicating Evaluation Findings ... 92
  Introduction ... 92
  Getting to Know Your Audience ... 92
  Who is the audience for a CAP evaluation report and what do they want to know? ... 92
  When is the best time to conduct an evaluation? ... 95
  Documenting Evaluation Findings ... 96

Appendix A: Sample Database Member Survey ... 98
Appendix B: Sample Completed Rideshare Database Survey ... 102
Appendix C: Commuter Assistance Program Procedures ... 106

PAGE 5

Chapter One
Introduction

The Florida Commuter Assistance Program (CAP) is an important and integral part of the Florida Department of Transportation's (FDOT) program to meet transportation needs in the State of Florida. Specifically, a statewide network of CAP offices was developed to offer travel choices to Florida's commuters. The official FDOT procedures describe the Florida Commuter Assistance Program as follows:

"Coordinated use of existing transportation resources can provide a responsive, low cost alternative for alleviating urban highway congestion, improving air quality and reducing the need for costly highway improvements. The commuter assistance program focuses on the single occupant commuter trip, which is the greatest cause of peak hour highway congestion. A coordinated effort to provide alternatives to these commuters using existing or low cost resources can be beneficial to the development of public transit statewide, the attainment of the Department's program objectives for meeting the transportation needs of the disadvantaged, and the Department's priority efforts to relieve traffic congestion, improve air quality, and to assure energy conservation."

As part of its efforts to ensure that Florida's transportation needs are addressed, FDOT has specific program requirements for each FDOT District Office and each CAP office. These requirements include establishing specific and achievable program objectives, a listing of tasks to undertake and key activities to perform, reporting on each project's performance (including written reports), and measurable goals and objectives with milestones to determine progress in stated emphasis areas. All of these requirements are intended to provide the Department with a tool to evaluate how well CAP offices are meeting FDOT priority efforts to relieve traffic congestion, improve air quality, and assure energy conservation.

This manual was developed to assist Florida's Commuter Assistance Programs (CAPs) in their efforts to measure and evaluate their performance. As such, this manual focuses on providing the information a CAP needs to devise and conduct its own evaluation program. It also provides guidance on how to report the results of that evaluation so that key CAP funders, elected officials, and the general public can understand and appreciate the efforts of the CAP in addressing traffic congestion, air quality, and mobility concerns. For ease of use, this manual has been divided into chapters covering specific areas of evaluation. These are:

PAGE 6

Chapter Two focuses on the performance measures that a CAP can use to evaluate program progress and record achievements. Included in this chapter are definitions for FDOT required performance measures, FDOT optional performance measures, and a set of other performance measures that a CAP could use to measure effectiveness and/or report progress. Also included are tables which can be used by a CAP to report results and to track progress.

Chapter Three examines the different types of evaluation that a CAP office may undertake to measure performance and/or progress. Included are descriptions of techniques such as needs assessments, formative evaluation, summative evaluation, and others. Each is described to help the CAP office determine what evaluation is most appropriate to accomplish evaluation objectives.

Chapter Four discusses the different types of survey methodologies that can be used by a CAP office. These include a variety of data collection methods, such as focus groups and mail surveys, as well as sampling considerations.

Chapter Five serves as an introduction to basic statistics. It is intended to provide a working knowledge of statistical principles that can affect a CAP evaluation. The focus is on such items as confidence intervals, statistical differences, and other important characteristics that influence the quality and reliability of a CAP evaluation program and its results.

Chapter Six addresses survey planning and budgeting. It provides guidance on when evaluation should be conducted (i.e., season, frequency), examines externalities that may influence the survey, and discusses budgeting issues that must be considered when designing a survey. The chapter also provides guidance on survey costs.

Chapter Seven deals with how evaluation findings can be communicated to those who need to know. This includes a discussion of who needs to know what and when, how to communicate findings, and how to compare CAP findings with other programs.

As each CAP begins to design its own evaluations, it should keep in mind that everyone who examines the evaluation results will bring different expectations and experiences to the review. For example, an MPO may seek to determine how well the CAP is achieving regional transportation objectives. Funders will seek to ensure that funds are being spent in a cost-effective manner. To address each of these different expectations, the CAP must carefully design an evaluation that takes these viewpoints into consideration. This manual provides guidance on important considerations that lead to successful CAP evaluations.

PAGE 7

Chapter Two
Performance Measures

INTRODUCTION

This chapter focuses on the performance measures available to Florida Commuter Assistance Program (CAP) offices to determine program progress and/or effectiveness. The performance measures are divided into three sections: required performance measures, District optional performance measures, and other performance measures.

As the name suggests, required performance measures are those that the Florida Department of Transportation (FDOT) Central Office has mandated that all CAP offices in Florida track and report on at least an annual basis. These performance measures are specified on pages 8-9 of the Commuter Assistance Program procedures dated May 5, 1997. District optional performance measures are those that FDOT has determined are appropriate for some of the CAP programs and, at CAP and FDOT District option, can be reported to show progress and/or performance. Other performance measures are those that can help a CAP illustrate the effectiveness of its programs in meeting program or regional objectives.

SECTION A - REQUIRED PERFORMANCE MEASURES

The FDOT required performance measures are:

1. Number of commuters requesting assistance
2. Number of commuters switching modes
3. Number of vans in service (where applicable)
4. Number of vehicle trips eliminated
5. Vehicle miles eliminated
6. Employer contacts
7. Major accomplishments
8. Parking spots saved/parking needs reduced
9. Commuter costs saved

The following tables have been developed to assist the Commuter Assistance Agencies in Florida in tracking their performance relative to FDOT requirements. The tables are constructed with six columns to help the CAP collect, analyze, and disseminate the results of the performance measures. The first column describes actions that the CAP agencies take to achieve program goals, or potential activities that could be incorporated to achieve the goal. The second column includes the performance measures that are required by FDOT. The third column is used if benchmarks or actual results are available for each performance measure. These benchmarks/results could be taken from survey responses, from past commuter assistance program evaluation reports, or from data available from other similar CAP programs. The fourth column lists the source for evaluating achievement of the performance measure (i.e.,

PAGE 8

database survey). The fifth column can be used by the commuter assistance program to select targets to achieve for each of the performance measures. The sixth column can be used by CAP staff to explain why the selected targets have been set.

Following each of the tables, a brief description of each performance measure is included, along with the method to be used to collect the necessary information. Where appropriate, the formula for calculating the performance measure is included.

Because some of the required performance measures require the CAP to survey their database, a sample survey has been included as Appendix A. This survey provides the basic framework needed to collect all necessary information. The CAP can use this survey, develop one of their own, or use this one as a basis for a more comprehensive survey instrument. Appendix B provides a sample completed survey to show how one database member might answer the survey questions. For assistance in developing surveys, contact the TDM Clearinghouse at the Center for Urban Transportation Research at (813) 974-9813 or SUNCOM 574-9813.

PAGE 9

Required Performance Measures
Columns: Action | Performance Measures | Benchmark/Results | Source | Targets | Contributing Factors (Benchmark/Results, Targets, and Contributing Factors to be completed by the CAP)

Actions:
RA 1.1 Provide info to commuters about commute alternatives
RA 1.2 Develop matching system
RA 1.3 Contract for and/or provide vans for commuting purposes
RA 1.4 Develop marketing program to: a) promote carpooling; b) promote vanpooling; c) promote transit use; d) promote walk/bike

Performance Measures (Source):
RP1 Number of commuters requesting assistance (Collected by CAP)
RP2 Number of commuters switching modes (Survey)
RP3 Number of vans in service (Collected by CAP)
RP4 Number of vehicle trips eliminated (Survey)
RP5 Vehicle miles eliminated (Survey)

PAGE 10

Required Performance Measures (continued)
Columns: Action | Performance Measures | Benchmark/Results | Source | Targets | Contributing Factors

Actions:
RA 1.5 Develop employer outreach program

Performance Measures (Source):
RP6 Employer contacts (Collected by CAP)
RP7 Major accomplishments (Collected by CAP)
RP8 Parking spots saved/parking needs reduced (Survey, based on vehicle trips reduced)
RP9 Commuter costs saved (Survey, based on vehicle miles eliminated)

PAGE 11

Definitions of Required Performance Measures

RP1 Number of commuters requesting assistance
This is the number of people that request assistance of some sort, including:
  Carpool matchlist
  Vanpool matchlist or formation assistance
  Transit route and/or schedule information
  Telecommuting information
  Bicycle route and/or locker/rack information
The CAP offices would track the number of requests received and may want to track requests by type. The information would be reported as part of quarterly and annual progress reports.

RP2 Number of commuters switching modes
This is the number of people that actually use the information you provide to change from their SOV mode to carpooling, vanpooling, transit use, telecommuting, walking, and/or bicycling. This information can be gathered by doing a sample survey of commuters assisted on a monthly basis, by either phone or mail. Every month, contact a random sample of the commuters assisted the previous month to see how many actually used the information provided. Extrapolate survey results to estimate the total. It is recommended that actual data be used where available.

RP3 Number of vans in service (where applicable)
This measure represents the actual number of commuter vans on the road and/or the number of vanpoolers. These numbers would be collected and reported by the CAP office.

RP4 Number of vehicle trips eliminated
This performance measure is calculated by using follow-up survey data or actual data. To calculate, complete the following steps. (Appendix B is a completed sample survey that was used to develop the example shown with each step, in this case a CAP customer who chose vanpooling.)

1. If the answer to Question 6 is not 1, 2, or 3, then the total vehicle trips reduced is zero. Go on to the next survey.

Example: Answer is 2 -- continue.

PAGE 12

2. Calculate the total vehicle trips reduced by carpooling after using the agency:

(Question 9 + Question 13) x (Question 10 + Question 14 - 1) / (Question 10 + Question 14) x (Question 11 + Question 12) x 2 trips/day x 49 weeks/year

Example: (0 days/week + 0 days/week) x (0 + 0 - 1) / (0 + 0) x (0 months + 0 months = 0 years) x 2 trips/day x 49 weeks/year = 0

Questions 11 and 12 should be converted into years, UP TO 1 YEAR MAXIMUM, by dividing days by 245, weeks by 49, and months by 12. Since this is an annual measurement, IN NO CASE should the sum of Questions 11 and 12 be greater than 1.

3. Calculate the total vehicle trips reduced by vanpooling after using the agency:

(Question 17 + Question 21) x (Question 18 + Question 22 - 1) / (Question 18 + Question 22) x (Question 19 + Question 20) x 2 trips/day x 49 weeks/year

Example: (5 days/week + 0 days/week) x (8 + 0 - 1) / (8 + 0) x (8 months = .67 years) x 2 trips/day x 49 weeks/year = 35/8 days/week x .67 years x 2 trips/day x 49 weeks/year = 287.3 trips

Questions 19 and 20 should be converted into years, UP TO 1 YEAR MAXIMUM, by dividing days by 245, weeks by 49, and months by 12. Since this is an annual measurement, IN NO CASE should the sum of Questions 19 and 20 be greater than 1.

4. Calculate the total vehicle trips reduced through transit use after contacting the agency:

(Question 25 + Question 28) x (Question 26 + Question 27) x 2 trips/day x 49 weeks/year

Example: (0 days/week + 0 days/week) x (0 months + 0 months) x 2 trips/day x 49 weeks/year = 0 trips

PAGE 13

Questions 26 and 27 should be converted into years, UP TO 1 YEAR MAXIMUM, by dividing days by 245, weeks by 49, and months by 12. Since this is an annual measurement, IN NO CASE should the sum of Questions 26 and 27 be greater than 1.

5. Calculate the total vehicle trips reduced through increase in other means:

(Question 32 + Question 35) x (Question 33 + Question 34) x 2 trips/day x 49 weeks/year

Example: (0 days/week + 0 days/week) x (0 months + 0 months) x 2 trips/day x 49 weeks/year = 0 trips

Questions 33 and 34 should be converted into years, UP TO 1 YEAR MAXIMUM, by dividing days by 245, weeks by 49, and months by 12. Since this is an annual measurement, IN NO CASE should the sum of Questions 33 and 34 be greater than 1.

6. Sum the results of Steps 2 through 5 to determine the total number of trips reduced after contact with the agency.

Example: Sum = 287.3 trips

To calculate the trips reduced for the entire database:

7. Calculate: (sum of the vehicle trips reduced for all the surveys) x (size of rideshare database / number of surveys completed with members of the rideshare database).

RP5 Vehicle miles eliminated
This performance measure is calculated by using follow-up survey data. To calculate, complete the following steps (refer to Appendix B for the sample completed survey that was used to develop the example):

1. Determine the vehicle trips reduced for each survey as described above. (Remember that this should be 0 if the answer to Question 6 is not 1, 2, or 3.)

Example: Answer is 2 -- continue.

PAGE 14

2. Multiply the result from Step 1 by Question 2 for each survey.

Example: 287.3 trips x 10 miles = 2,873 miles

To calculate VMT reduced for the entire database:

3. Calculate: (sum of the vehicle miles reduced for all the surveys) x (size of rideshare database / number of surveys completed with members of the rideshare database).

RP6 Employer contacts
Report the number of employer contacts by the following categories:
  Number contacted by letter/fax
  Number contacted by phone
  Number contacted in person
  Number of follow-up calls or visits
When reporting, include the number of employees at each site. These figures will be tracked and collected by the CAP staff.

RP7 Major accomplishments
This performance measure is a listing of all major CAP programs and/or initiatives and the accomplishments of these projects/initiatives. These may include:
  New transit services initiated/improved
  Educational programs initiated
  Transportation planning initiatives
  Guaranteed Ride Home projects initiated
  Other implementation activities
This information would be tracked and collected by CAP staff.

RP8 Parking spots saved/parking needs reduced
This performance measure is calculated by determining the number of people using alternative modes at each employment site. It can also be calculated by taking the number of vehicle trips reduced from a database survey and dividing by 2 trips per day and by 245 working days per year.

RP9 Commuter costs saved
This performance measure is calculated by multiplying vehicle miles eliminated by the average cost per mile (AAA uses $.448 per mile; the federal government and the State of Florida use $.29 per mile).
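To make the RP4, RP5, RP8, and RP9 arithmetic above easier to reuse, the short Python sketch below condenses it into a few functions and reproduces the Appendix B vanpool example. The variable names (days_per_week, pool_size, and so on) are illustrative stand-ins for the survey question numbers, and the 10-mile trip length is the Appendix B example value; this is a minimal sketch, not part of the FDOT procedures.

```python
# Minimal sketch of the RP4/RP5/RP8/RP9 arithmetic described above. Variable
# names are illustrative stand-ins for the survey question numbers.

TRIPS_PER_DAY = 2
WEEKS_PER_YEAR = 49
WORK_DAYS_PER_YEAR = 245
COST_PER_MILE = 0.448          # AAA figure cited under RP9 ($.29 is the state rate)


def pool_trips_reduced(days_per_week, pool_size, duration_years):
    """Annual vehicle trips reduced by one carpool/vanpool respondent (RP4 steps 2-3)."""
    duration_years = min(duration_years, 1.0)       # cap duration at one year
    if pool_size < 2:
        return 0.0
    return (days_per_week * (pool_size - 1) / pool_size
            * duration_years * TRIPS_PER_DAY * WEEKS_PER_YEAR)


def transit_trips_reduced(days_per_week, duration_years):
    """Annual vehicle trips reduced by transit or other means (RP4 steps 4-5)."""
    return days_per_week * min(duration_years, 1.0) * TRIPS_PER_DAY * WEEKS_PER_YEAR


def extrapolate(per_survey_totals, database_size):
    """RP4 step 7 / RP5 step 3: scale surveyed totals up to the whole database."""
    return sum(per_survey_totals) * database_size / len(per_survey_totals)


# Appendix B example: 5 days/week in an 8-person vanpool for 8 months (.67 years).
trips = pool_trips_reduced(days_per_week=5, pool_size=8, duration_years=0.67)
vmt = trips * 10                                             # RP5: 10-mile one-way trip
parking_spots = trips / TRIPS_PER_DAY / WORK_DAYS_PER_YEAR   # RP8
commuter_costs = vmt * COST_PER_MILE                         # RP9
print(round(trips, 1), round(vmt))                           # about 287.3 trips, 2873 miles
```

The extrapolate helper mirrors the database-wide scaling in RP4 step 7 and RP5 step 3: average the per-survey totals and multiply by the size of the rideshare database.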

PAGE 15

SECTION B - DISTRICT OPTIONAL PERFORMANCE MEASURES

The FDOT defined District optional performance measures are:

1. Gasoline saved
2. Emissions reduced
3. Information materials distributed
4. Special events
5. Media/community relations

The following tables have been developed to assist the Commuter Assistance Agencies in Florida in tracking their performance relative to FDOT District optional performance measures. The tables are constructed with six columns to help the CAP collect, analyze, and disseminate the results of the performance measures. The first column describes actions that the CAP agencies take to achieve program goals, or potential activities that could be incorporated to achieve the goal. The second column includes the District optional performance measures. The third column is used if benchmarks or actual results are available for each performance measure. These benchmarks/results could be taken from survey responses, from past commuter assistance program evaluation reports, or from data available from other similar CAP programs. The fourth column lists the source for evaluating achievement of the performance measure (i.e., database survey). The fifth column can be used by the commuter assistance program to select targets to achieve for each of the performance measures. The sixth column can be used by CAP staff to explain why the selected targets have been set.

Following each of the tables, a brief description of each performance measure is included, along with the method to be used to collect the necessary information. Where appropriate, the formula for calculating the performance measure is included.

Because some of the performance measures require the CAP to survey their database, a sample survey has been included as Appendix A. This survey provides the basic framework needed to collect all necessary information. The CAP can use this survey, develop one of their own, or use this one as a basis for a more comprehensive survey instrument. Appendix B provides a sample completed survey to show how one database member might answer the survey questions. For assistance in developing surveys, contact the TDM Clearinghouse at the Center for Urban Transportation Research.

PAGE 16

District Optional Performance Measures
Columns: Action | Performance Measures | Benchmark/Results | Source | Targets | Contributing Factors

Actions:
OA 1.1 Promote/develop alternative transportation programs.
OA 1.2 Develop and conduct a community outreach/promotional campaign.

Performance Measures (Source):
OP1 Gasoline saved (Survey data calculation)
OP2 Emissions reduction (Survey data calculation)
OP3 Information materials distributed (Collected by CAP)
OP4 Special events (Collected by CAP)
OP5 Media/community relations (Collected by CAP)

PAGE 17

Definitions of District Optional Evaluation Measures

OP1 Gasoline saved
This performance measure is calculated by dividing vehicle miles eliminated by the average miles per gallon figure from AAA. For the 1996 model year, average MPG was 20.3.

OP2 Emissions reduction
This performance measure is calculated by multiplying vehicle miles eliminated by the emission factors for the CAP service area. Emission factors are available from the Department of Environmental Regulation for ozone, carbon monoxide (CO), and nitrogen oxides (NOx). For 1996 the average passenger car emitted:
  4.7 grams/mile of ozone
  23 grams/mile of CO
  1.6 grams/mile of NOx

OP3 Information materials distributed
This performance measure details the number and type of informational materials distributed by the CAP. Informational materials may include but are not limited to:
  Brochures
  Information packets
  Posters
  Surveys
This information would be tracked and reported by the CAP staff.

OP4 Special events
This performance measure reports the number and type of special events conducted by the CAP staff to promote and/or encourage commute alternative use. Special events may include but are not limited to:
  Transportation Days
  Commuter Fairs
  Special Promotions
This information would be collected and tracked by CAP staff.

OP5 Media/community relations
This performance measure tracks CAP staff efforts in informing the media and general public about CAP activities and programs. Categories may include but are not limited to:
  Number of PSAs aired
  Number of newspaper articles

PAGE 18

  Number of news stories
  Number of magazine articles
This information would be tracked and reported by CAP staff.
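As a quick illustration of the OP1 and OP2 definitions above, the sketch below assumes the vehicle miles eliminated figure (RP5) has already been computed and applies the 1996 factors quoted in those definitions. It is a hedged example, not part of the FDOT procedures.

```python
# Sketch of OP1 (gasoline saved) and OP2 (emissions reduced), assuming the
# vehicle miles eliminated figure (RP5) is already available.

MPG = 20.3                       # AAA average miles per gallon, 1996 model year
EMISSION_GRAMS_PER_MILE = {      # 1996 per-mile factors quoted under OP2
    "ozone": 4.7,
    "carbon monoxide": 23.0,
    "nitrogen oxides": 1.6,
}


def gasoline_saved(vmt_eliminated):
    """OP1: gallons of gasoline saved (miles divided by miles per gallon)."""
    return vmt_eliminated / MPG


def emissions_reduced(vmt_eliminated):
    """OP2: grams of each pollutant avoided."""
    return {pollutant: rate * vmt_eliminated
            for pollutant, rate in EMISSION_GRAMS_PER_MILE.items()}


# Example: the 2,873 vehicle miles eliminated from the Appendix B calculation.
print(round(gasoline_saved(2873), 1))    # about 141.5 gallons
print(emissions_reduced(2873))           # grams avoided, by pollutant
```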

PAGE 19

SECTION C - OTHER PERFORMANCE MEASURES

The performance measures in this section have been developed to allow a CAP the flexibility to tailor an evaluation program that closely matches program goals and objectives. They have also been developed to measure CAP effects on markets and groups, like employers and the general public, that are directly or indirectly influenced by CAP efforts. The performance measures can be used to develop a more complete profile of the direct and indirect effects of CAP program activities on commuter mode choice. For example, the performance measures in this section can be used to determine whether advertising campaigns influenced members of the general public to try carpooling without ever contacting the CAP office for assistance. To assist the CAP in selecting appropriate measures from this section, some of the FDOT required and optional performance measures have been repeated under the appropriate goals. This provides the CAP with a mechanism to find performance measures that can help develop a complete picture of CAP efforts.

The following tables have been developed to assist the Commuter Assistance Agencies in Florida in tracking their performance relative to their own stated goals or to regional transportation goals. The tables are constructed by using a potential generic CAP or regional transportation goal as the major section heading, with six columns to help achieve the goal. The first column describes actions that the CAP agencies take to achieve the goal, or potential activities that could be incorporated to achieve the goal. The second column includes performance measures that can be used to track how well the agencies are doing in achieving the goal. The third column is used if benchmarks or actual results are available for each performance measure. These benchmarks could be taken from survey responses, from past commuter assistance program evaluation reports, or from data available from other similar CAP programs. The fourth column lists the source for evaluating achievement of the performance measure (i.e., database survey). The fifth column can be used by the commuter assistance program to select targets to achieve for each of the performance measures. The sixth column can be used by CAP staff to explain why the selected targets have been set.

Following each of the goal tables, a brief description of each performance measure is included, along with the method to be used to collect the necessary information. Where appropriate, the formula for calculating the performance measure is included.

PAGE 20

Goal 1 - Increase public awareness
Columns: Action | Performance Measures | Benchmark/Results | Source | Targets | Contributing Factors

Actions:
A1.1 Develop coordinated, consistent marketing program.
A1.2 Develop employer outreach marketing materials on TDM strategies.
A1.3 Plan and conduct kick-off events with employers.
A1.4 Provide technical assistance in establishing employer programs.
A1.5 Establish employer outreach campaign to appoint Employee Transportation Coordinators (ETCs) to involve employers in mobility programs.
A1.6 Host ETC training program.

Performance Measures (Source):
P1.1 % awareness of Commuter Assistance among employers (at all aware / highly aware) (Business survey)
P1.2 Number of first presentations made to employers (Collected by CAP)
P1.3 Number of follow-up presentations made to employers (Collected by CAP)
P1.4 % of employers with TDM programs (Business survey)

PAGE 22

Definitions of Performance Measures for Goal One

P1.1 % awareness among employers
A measure taken from a business survey that asks whether businesses are aware of the commuter assistance program.

P1.2 Number of first presentations made to employers
This measure examines how many presentations were made about rideshare services to area employers. It represents initial presentations to employers who have shown an interest in commuter assistance program services. This data would be collected through quarterly reports and year-end evaluation reports.

P1.3 Number of follow-up presentations made to employers
This measure examines the number of second, third, and fourth presentations made to businesses in the CAP service area. This data would be collected from quarterly reports and evaluation reports submitted.

P1.4 % employers with TDM programs
This performance measure represents those employers who have designated an employee transportation coordinator or offer one of the following: compressed work weeks, work-at-home options, preferential parking, parking shuttles, guaranteed ride home programs, or bus or pool subsidies. Data for this measure would come from a business survey.

P1.5 % aided awareness of Commuter Assistance or Commuter Assistance Number among commuters
This measure examines commuter awareness of the CAP agency and/or recognition of the telephone number commuters can call to receive assistance. This measure would be collected from the results of the general public survey.

P1.6 Number of customer inquiries
The number of customers who contacted the commuter assistance program during the review period. This measure would be tracked internally by the CAP.

P1.7 % awareness of CAP promotional materials
This measure examines the general public's awareness of any CAP promotional materials, including highway signs, TV and radio ads, etc. This measure would be collected through the general public survey.

PAGE 23

Goal 2 - Increase productivity of roadway system
Columns: Action | Performance Measures | Benchmark/Results | Source | Targets | Contributing Factors

Actions:
A2.1 Attend and participate in MPO meetings to provide input and guide CAP activities.
A2.2 Develop long range vision, goals, and objectives for CAP that are consistent with area-wide transportation network goals and programs.
A2.3 Target MPO selected corridors and roadways for intensive rideshare marketing programs.

Performance Measures (Source):
P2.1 % of TIP projects related to TDM (Collected by CAP)
P2.2 % of TIP budget spent on TDM related projects (Collected by CAP)
P2.3 % increase in average vehicle occupancy (Benchmark: current AVO; General public and database surveys)
P2.4 % reduction in vehicle miles of travel from 100% SOV among: 1. Database members 2. General public (Surveys)
P2.5 % reduction in vehicle trips from 100% SOV among: 1. Database members 2. General public (Surveys)

PAGE 24

Definitions of Performance Measures for Goal Two

P2.1 % of TIP projects related to TDM
This measure would be calculated by CAP agencies based upon the number of Transportation Improvement Program (TIP) projects related to TDM in local plans versus the total number of TIP projects.

P2.2 % of TIP budget spent on TDM related projects
This measure would be calculated by local rideshare agencies based upon the total value of TDM related TIP projects versus the total value of all TIP projects.

P2.3 % increase in average vehicle occupancy
This measure would examine the increase in vehicle occupancy from one evaluation period to the next. In the table, the baseline figure will be used to help the commuter assistance program calculate the percent change. The measure would be taken from a general public survey and a database survey.

P2.4 % reduction in vehicle miles of travel
This measures the percent difference between actual VMT and the VMT that would occur if all commuters used an SOV for work trips. The calculation would be done once for database members and once for the general public. To calculate:

[(total trips in alternative mode per week) x (duration of alternative mode use) x ((passengers - 1) / passengers) x (49 weeks per year) x (miles per trip)]
divided by
[(total trips per week) x (49 weeks per year) x (miles per trip)]

P2.5 % reduction in vehicle trips
This performance measure would be calculated by taking the total number of trips taken versus the total number of trips that would have been taken assuming all alternative mode users formerly drove alone. The percent reduction figure is derived from a database member survey and the general public survey. To calculate:

[(total trips in alternative mode per week) x (duration of alternative mode use) x ((passengers - 1) / passengers) x (49 weeks per year)]
divided by
[(total trips per week) x (49 weeks per year)]
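The P2.4 and P2.5 ratios can be expressed as two small functions. The sketch below is a minimal illustration that assumes per-respondent survey values are already available as plain numbers; the example figures are invented for demonstration.

```python
# Sketch of the P2.4 and P2.5 percent-reduction ratios for one respondent.

WEEKS_PER_YEAR = 49


def pct_vmt_reduction(alt_trips_per_week, duration_years, passengers,
                      total_trips_per_week, miles_per_trip):
    """P2.4: percent reduction in VMT versus an all-SOV baseline."""
    reduced = (alt_trips_per_week * duration_years * (passengers - 1) / passengers
               * WEEKS_PER_YEAR * miles_per_trip)
    baseline = total_trips_per_week * WEEKS_PER_YEAR * miles_per_trip
    return 100.0 * reduced / baseline


def pct_trip_reduction(alt_trips_per_week, duration_years, passengers,
                       total_trips_per_week):
    """P2.5: percent reduction in vehicle trips versus an all-SOV baseline."""
    reduced = (alt_trips_per_week * duration_years * (passengers - 1) / passengers
               * WEEKS_PER_YEAR)
    baseline = total_trips_per_week * WEEKS_PER_YEAR
    return 100.0 * reduced / baseline


# Example: all 10 weekly commute trips made in a 3-person carpool for a full year.
print(round(pct_trip_reduction(10, 1.0, 3, 10), 1))       # 66.7 (% of vehicle trips removed)
print(round(pct_vmt_reduction(10, 1.0, 3, 10, 12.0), 1))  # 66.7 (same ratio when trip length is fixed)
```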

PAGE 25

Goal 3 - Decrease traffic congestion
Columns: Action | Performance Measures | Benchmark/Results | Source | Targets | Contributing Factors

Actions:
A3.1 Decrease the number of vehicles at activity centers/along corridors.
A3.2 Increase the use of commute alternatives among commuters at activity centers/along target corridors.

Performance Measures (Source):
P3.1 % of work trips using alternative mode among: 1. Database members 2. Commuters (Surveys)
P3.2 Number of peak period vehicles per 100 employees (Business surveys)
P3.3 VMT reduced for: General public; Database members (Surveys)
P3.4 Vehicle trips reduced for: General public; Database members (Surveys)

PAGE 26

Goal 3 - Decrease traffic congestion (continued)
Columns: Action | Performance Measures | Benchmark/Results | Source | Targets | Contributing Factors

Actions:
A3.3 Develop information on compressed work weeks and flexible work hour programs.
A3.4 Conduct workshop on alternative work hour programs for human resource managers.

Performance Measures (Source):
P3.5 % employers with compressed work week programs among: 1. All employers 2. Targeted employers (Business surveys)
P3.6 % employees working a compressed work week among: 1. All employers 2. Targeted employers (Business surveys)
P3.7 % employers with flextime programs among: 1. All employers 2. Targeted employers (Business surveys)
P3.8 % employees working a flexible work schedule among: 1. All employers 2. Targeted employers (Business surveys)

PAGE 27

Definitions of Performance Measures for Goal Three

P3.1 % of work trips using alternative mode
This performance measure would be calculated by taking the total number of trips made by alternative modes (carpool, vanpool, transit, walk, and bike) and dividing by the total number of trips. The figure would be calculated both for database members and from surveys of the general public.

P3.2 Number of peak period vehicles per 100 employees
This measure can be calculated by CAP agencies by dividing 100 by the average vehicle occupancy at a worksite. This measure should be used wherever the commuter assistance program is conducting an employer-based campaign. A short calculation sketch for P3.1 and P3.2 follows these definitions.

P3.3 VMT reduced
This is a performance measure taken from both a general public survey and a database member survey. It is calculated by taking the VMT reduced per commuter and multiplying by the number of commuters. The formula for calculating this measure is given under the Definitions of Required Performance Measures section beginning on Page Seven.

P3.4 Vehicle trips reduced
This is a performance measure taken from both a rideshare database member survey and a general public survey. It is calculated by taking the vehicle trips reduced per commuter (respondent) and multiplying by the number of commuters. The formula for calculating this measure is given under the Definitions of Required Performance Measures section beginning on Page Seven.

P3.5 % employers with compressed work week programs
The percentage of businesses offering a compressed work week schedule, as determined by a business survey. Included would be figures for all surveyed employers and those targeted by the CAP. Importance would be determined by CAP focus; in other words, does the CAP provide technical assistance to specific employers, or simply market the concept?

P3.6 % of employees working a compressed work week schedule
A performance measure taken from a business survey; the figure reported represents the % of employees actually participating in a compressed work week program, as reported by the employer. Included would be figures for all employees and for those specifically targeted by the CAP.

PAGE 28

P3.7 % employers with flextime programs
The percentage of businesses offering a flextime schedule, as reported in a business survey. Included would be figures for all employers and those targeted by the CAP.

P3.8 % of employees working a flextime schedule
A performance measure from a business survey; the figure reported by employers would represent the % of employees actually participating in a flextime program. Included would be figures for all employees and for those who work at targeted employers.
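As referenced under P3.2, the sketch below illustrates one way to compute P3.1 and P3.2. It assumes the P3.2 definition is read as 100 divided by the worksite's average vehicle occupancy; the input numbers are invented for demonstration.

```python
# Hedged sketch of P3.1 and P3.2; inputs are illustrative.

def pct_work_trips_alternative(alt_mode_trips, total_trips):
    """P3.1: share of work trips made by carpool, vanpool, transit, walk, or bike."""
    return 100.0 * alt_mode_trips / total_trips


def peak_vehicles_per_100_employees(average_vehicle_occupancy):
    """P3.2: peak period vehicles per 100 employees, taken as 100 / AVO."""
    return 100.0 / average_vehicle_occupancy


print(round(pct_work_trips_alternative(120, 400), 1))   # 30.0% of trips by alternative modes
print(round(peak_vehicles_per_100_employees(1.25), 1))  # 80.0 vehicles per 100 employees
```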

PAGE 29

Goal 4 - Improve air quality
Columns: Action | Performance Measures | Benchmark/Results | Source | Targets | Contributing Factors

Actions:
A4.1 Form carpools.
A4.2 Increase vanpools.
A4.3 Increase transit use.
A4.4 Increase non-motorized mode usage.

Performance Measures (Source):
P4.1 Tons carbon monoxide reduced (Database survey)
P4.2 Tons ozone pollutants reduced (Database survey)
P4.3 Tons of nitrogen oxide reduced (Database survey)
P4.4 Pollution reductions by mode: Carpool; Vanpool; Transit; Non-motorized (Database survey)

PAGE 30

Definitions of Performance Measures for Goal Four

P4.1 Tons of carbon monoxide reduced
Using the results of the VMT calculation, CO reduced is derived by: (23 grams per mile) x (miles reduced per commuter) x (# of commuters) / (908,000 grams per ton). This is an FDOT optional performance measure.

P4.2 Tons of ozone pollutants reduced
Using the results of the VMT calculation, ozone reductions are derived by: (4.7 grams per mile) x (miles reduced per commuter) x (# of commuters) / (908,000 grams per ton). This is an FDOT optional performance measure.

P4.3 Tons of nitrogen oxide reduced
Using the results of the VMT calculation, nitrogen oxide reductions are derived by: (1.6 grams per mile) x (miles reduced per commuter) x (# of commuters) / (908,000 grams per ton). This is an FDOT optional performance measure.

P4.4 Pollution reductions by mode
Uses the above calculations, except that reductions are based on VMT reduced by mode.
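The tons-per-pollutant formulas above differ only in their per-mile factor, so they can be collapsed into one function. The sketch below uses the factors and the 908,000 grams-per-ton divisor quoted above; the commuter counts in the example are invented.

```python
# Sketch of P4.1-P4.3: tons of pollutant avoided across all commuters.

GRAMS_PER_TON = 908_000
GRAMS_PER_MILE = {"carbon monoxide": 23.0, "ozone": 4.7, "nitrogen oxides": 1.6}


def tons_reduced(pollutant, miles_reduced_per_commuter, number_of_commuters):
    """Tons of one pollutant avoided, given per-commuter miles reduced."""
    grams = GRAMS_PER_MILE[pollutant] * miles_reduced_per_commuter * number_of_commuters
    return grams / GRAMS_PER_TON


# Example: 2,000 commuters each eliminating 1,500 vehicle miles per year.
for pollutant in GRAMS_PER_MILE:
    print(pollutant, round(tons_reduced(pollutant, 1500, 2000), 2))
```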

PAGE 31

Goal 5 - Conserve energy resources
Columns: Action | Performance Measures | Benchmark/Results | Source | Targets | Contributing Factors

Actions:
A5.1 Develop materials on telecommuting.
A5.2 Hold a workshop with companies on telecommuting.
A5.3 Promote alternative mode use.

Performance Measures (Source):
P5.1 % employers with telecommuting program (Business survey)
P5.2 % targeted employers with telecommuting program (Business survey)
P5.3 % employees in a telecommuting arrangement (Business survey)
P5.4 % employees at targeted companies in a telecommuting arrangement (Business survey)
P5.5 % reduction in vehicle miles of travel among: 1. Database members 2. General public (Surveys)
P5.6 Gallons of gasoline saved by alternate mode users among: 1. Database members 2. General public (Surveys)

PAGE 32

Definitions of Performance Measures for Goal Five

P5.1 % employers with a telecommuting program
Taken from a business survey, the percentage of employers who offer a telecommuting option to their employees.

P5.2 % of targeted employers with a telecommuting program
Taken from a business survey, the percentage of businesses that work directly with the CAP or are located within a CAP-targeted activity center and offer a telecommuting option to some of their employees.

P5.3 % of employees in a telecommuting arrangement
Taken from a business survey, the % of employees who have taken a telecommuting option, as reported by employers.

P5.4 % of employees at targeted companies in a telecommuting arrangement
Taken from a business survey, the % of employees at targeted companies who have taken a telecommuting option, as reported by employers.

P5.5 % reduction in vehicle miles of travel
This measures the percent difference between actual VMT and the VMT that would occur if all commuters used an SOV for work trips. The calculation is done once for database members and once for the general public.

P5.6 Gallons of gasoline saved by alternate mode users
Derived by taking the VMT reduction calculation and dividing by the average miles per gallon figure for passenger vehicles as reported by the American Automobile Association (currently 20.3 mpg). The figure is derived for database members and for the general public from statistics taken from the database member and general public surveys, respectively. Gallons of gasoline saved by database members is an FDOT optional performance measure.

PAGE 33

Goal 6 - Improve mobility - Carpools
Columns: Action | Performance Measures | Benchmark/Results | Source | Targets | Contributing Factors

Actions:
A6.1 Seek to improve carpool matching program operated by CAP.
A6.2 Customize brochure on options with survey form.
A6.3 Develop "Guide on How to Form a Carpool."

Performance Measures (Source):
P6.1 Number of persons registered (Collected by CAP)
P6.2 Number of persons placed in carpools (Collected by CAP)
P6.3 Duration of existing carpools (Database survey)
P6.4 % of trips done by carpool and vanpool (Database survey)

PAGE 34

Definitions of Performance Measures for Goal Six - Carpools

P6.1 Number of persons registered
The total number of persons who are registered in the commuter assistance program database. This number will be developed by the commuter assistance agencies as part of their performance measures.

P6.2 Number of persons placed in carpools
The total number of persons placed into carpools. This would be collected and disseminated as part of the quarterly performance report.

P6.3 Duration of existing carpools
The average length of time that current poolers have been in their pooling arrangement. This figure is taken from a database member survey.

P6.4 % of trips done by carpool/vanpool
The percentage of all work trips done by carpool and vanpool. This figure is taken from a database member survey and/or a general public survey.

PAGE 35

Goal 6 - Improve mobility - Vanpools
Columns: Action | Performance Measures | Benchmark/Results | Source | Targets | Contributing Factors

Actions:
A6.4 Meet with representatives of transit agencies to strengthen vanpool programs.
A6.5 Make arrangements to obtain vans through purchase or lease (e.g., VPSI).
A6.6 Develop fare structure, arrange for maintenance, prepare marketing materials, and introduce program.
A6.7 Develop "New Start" assistance program to subsidize the cost of 4 empty seats for four months.
A6.8 Hold presentations with groups of employees who live over 20 miles away from work.

Performance Measures (Source):
P6.5 Number of vanpools (Collected by CAP)
P6.6 Number of vanpool riders (Collected by CAP)
P6.7 Number of vanpool presentations (Collected by CAP)
P6.8 Number of vans in service (Collected by CAP)

PAGE 36

Definitions of Performance Measures for Goal Six - Vanpools

P6.5 Number of vanpools formed
For this performance measure, the CAP agencies would report the total number of vanpools formed during the review period.

P6.6 Number of vanpool riders
For this performance measure, the CAP agencies would report the total number of vanpoolers as part of their quarterly performance reports.

P6.7 Number of vanpool meetings
For this performance measure, the CAP agencies would report the total number of vanpool meetings held as part of their quarterly performance reports.

P6.8 Number of vans in service
This is an FDOT required performance measure. The CAP agencies would report the number of commuter vans on the road as part of their quarterly performance reports.

PAGE 37

Goal 6 - Improve mobility - Non-motorized (Bicycle & Pedestrian)
Columns: Action | Performance Measures | Benchmark/Results | Source | Targets | Contributing Factors

Actions:
A6.9 Develop a program to encourage employers to offer incentives and support for bicycle and pedestrian programs.
A6.10 Meet with area bike coordinators and obtain marketing materials for distribution through employers.
A6.11 Meet with employers to discuss plans.

Performance Measures (Source):
P6.8 % employers with bike racks/lockers (Business survey)
P6.9 % employers with shower/storage facilities (Business survey)
P6.10 % commuters who walk or bike to work (General public survey)

PAGE 38

Definitions of Performance Measures for Goal Six - Non-motorized

P6.8 % employers with bike racks/lockers
This measure would be taken from a business survey. It represents the percentage of businesses that state that they have bike racks and/or lockers at the worksite.

P6.9 % employers with showers/storage facilities
This measure represents the percentage of employers who offer showers and storage facilities to their employees at the worksite. The figures would be taken from a business survey.

P6.10 % commuters who walk or bicycle to get to work
This measure would be taken from a general public survey and/or database survey. It is the percentage of commuters who use bicycles or walk to work.

PAGE 39

Goal 6 - Improve mobility - Transit
Columns: Action | Performance Measures | Benchmark/Results | Source | Targets | Contributing Factors

Actions:
A6.12 Increase the number of employers offering transit subsidies to employees.
A6.13 Increase the number of employers selling transit passes to employees.
A6.14 Encourage/promote the use of Park n Ride lots as a pick-up/drop-off point for pools and/or accessing transit.

Performance Measures (Source):
P6.11 % employers purchasing transit passes (Business survey)
P6.12 Number of passes sold (Collected by CAP)
P6.13 % commuters purchasing transit passes (Surveys)
P6.14 % employers with transit subsidy programs (Business survey)
P6.15 Park n Ride lot utilization rates (Collected by CAP or FDOT)

PAGE 40

Definitions of Performance Measures for Goal Six - Transit

P6.11 % of employers selling transit passes
This is a potential question on future rideshare surveys conducted among area businesses. It represents the percentage of local employers that sell discount transit passes to their employees.

P6.12 Number of passes sold
This measure would track the number of discount transit passes sold on behalf of the local transit agencies by the CAP agencies.

P6.13 % of commuters purchasing transit passes
This is a potential performance measure that would be collected in a database member and general public survey. The measure would represent the percentage of survey respondents who purchase transit passes for commuting to work via mass transit vehicles.

P6.14 % of employers with transit subsidy programs
This is a performance measure taken from a survey of businesses. It would represent the percentage of local employers who indicated that they provide financial subsidies to employees traveling on transit vehicles.

P6.15 Park n Ride lot utilization rates
This information is currently not tracked by CAP agencies. It represents the percentage of parking spaces being used at local park n ride facilities.

PAGE 41

Goal 7 - Reduce Costs of Auto Ownership
Columns: Action | Performance Measures | Benchmark/Results | Source | Targets | Contributing Factors

Actions:
A7.1 Develop Commuter Assistance marketing campaign based on reduced costs.
A7.2 Implement marketing campaign.

Performance Measures (Source):
P7.1 Gasoline costs savings (Database and general public surveys)
P7.2 Auto maintenance savings (Database and general public surveys)
P7.3 Commuter costs saved (Survey data calculation)

PAGE 42

Definitions of Performance Measures for Goal Seven

P7.1 Gasoline costs savings
This performance measure estimates the cost savings accrued from not having to purchase gasoline. It is calculated by taking the VMT reduction figure and multiplying by the gallons used per mile by the average automobile and the cost per gallon of gasoline (VMT x gallons/mile x cost/gallon). Average MPG for 1996 was 20.3, and cost per gallon figures are available from local AAA offices.

P7.2 Auto maintenance savings
For this performance measure, the savings are calculated by taking the VMT reduction figure and multiplying by the maintenance cost of an automobile per mile (VMT x maintenance cost/mile). Maintenance costs are included in the AAA cost per mile figure and are generally about 10-15 cents per mile.

P7.3 Commuter costs saved
This performance measure is calculated by multiplying vehicle miles eliminated by the average cost per mile to operate an automobile (AAA uses $.448 per mile; the federal government and State of Florida use $.29 per mile).
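The three Goal Seven measures can all be derived from a single VMT-reduced figure. The sketch below uses the AAA figures cited above; the gasoline price and the maintenance midpoint are illustrative assumptions, not values from the manual.

```python
# Sketch of P7.1-P7.3. Gas price and maintenance rate are assumed values.

MPG = 20.3                      # AAA average MPG for 1996
GAS_PRICE_PER_GALLON = 1.25     # assumed; substitute the current local AAA figure
MAINTENANCE_PER_MILE = 0.125    # assumed midpoint of the 10-15 cents/mile range
COST_PER_MILE = 0.448           # AAA cost per mile ($.29 for the state rate)


def gasoline_cost_savings(vmt_reduced):
    """P7.1: dollars not spent on gasoline (VMT x gallons/mile x cost/gallon)."""
    return vmt_reduced / MPG * GAS_PRICE_PER_GALLON


def maintenance_savings(vmt_reduced):
    """P7.2: dollars not spent on maintenance (VMT x maintenance cost/mile)."""
    return vmt_reduced * MAINTENANCE_PER_MILE


def commuter_costs_saved(vmt_reduced):
    """P7.3: total commuter cost savings (VMT x full cost per mile)."""
    return vmt_reduced * COST_PER_MILE


vmt = 2873   # vehicle miles eliminated from the Appendix B example
print(round(gasoline_cost_savings(vmt), 2),
      round(maintenance_savings(vmt), 2),
      round(commuter_costs_saved(vmt), 2))   # gasoline, maintenance, and total savings in dollars
```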

PAGE 43

Goal 8 - Improve Economic Viability
Columns: Action | Performance Measures | Benchmark | Source | Targets | Significance Rating

Actions:
A8.1 Provide travel choices.
A8.2 Provide cost-effective services.

Performance Measures (Source):
P8.1 Number of parking spaces saved (Database survey)
P8.2 Cost per trip provided, direct influence and total influence (Direct: / Total:) (Database survey)

PAGE 44

Definitions of Performance Measures for Goal Eight

P8.1 Number of parking spaces saved
This is an FDOT required performance measure. It is calculated by taking the vehicle trips reduced figure from the database survey and dividing by 2 trips per day and by 245 working days.

P8.2 Cost per trip provided (direct and total)
This is a performance measure that is calculated by using the results of the database member survey. The information needed to calculate the cost per trip provided (direct) is:

1. Total carpool and vanpool trips provided per commuter -- the same measure as trips reduced EXCEPT that the size of the pool is not taken into account.
2. Database size.
3. Influence rate per trip for carpool and vanpool -- the number of poolers that say their mode choice was influenced by commuter assistance, weighted by the number of trips taken.
4. Annual budget -- the budget of the commuter assistance program.

To calculate:

annual budget / [(total carpool and vanpool trips provided per commuter) x (database size) x (influence rate)]

Calculating the cost per trip provided (total) assumes that all database members who are in a pooling arrangement were, in some way, influenced by the commuter assistance program. The information needed to calculate the cost per trip provided (total) is:

1. Total carpool and vanpool trips provided per commuter -- the same measure as trips reduced EXCEPT that the size of the pool is not taken into account.
2. Database size.
3. Annual budget -- the budget of the Commuter Assistance Program.

To calculate:

annual budget / [(total carpool and vanpool trips provided per commuter) x (database size)]
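The two P8.2 formulas above can be written as a pair of one-line functions. The sketch below is illustrative only; the budget, trip, database, and influence figures in the example are invented.

```python
# Sketch of the P8.2 cost-per-trip calculations; example inputs are invented.

def cost_per_trip_direct(annual_budget, pool_trips_per_commuter,
                         database_size, influence_rate):
    """Cost per trip counting only trips the CAP directly influenced."""
    return annual_budget / (pool_trips_per_commuter * database_size * influence_rate)


def cost_per_trip_total(annual_budget, pool_trips_per_commuter, database_size):
    """Cost per trip assuming every pooling database member was influenced."""
    return annual_budget / (pool_trips_per_commuter * database_size)


# Example: a $150,000 annual budget, an average of 120 carpool/vanpool trips
# provided per database member per year, a 5,000-person database, and 40% of
# poolers crediting the CAP for their mode choice.
print(cost_per_trip_direct(150_000, 120, 5_000, 0.4))   # 0.625 dollars per trip
print(cost_per_trip_total(150_000, 120, 5_000))         # 0.25 dollars per trip
```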

PAGE 45

Goal 9 - Increase Customer Inquiry
Columns: Action | Performance Measures | Benchmark/Results | Source | Targets | Contributing Factors

Actions:
A9.1 Develop marketing campaign aimed at reducing costs/congestion.

Performance Measures (Source):
P9.1 Number of customer inquiries (Collected by CAP)
P9.2 Number of applications processed (Collected by CAP)
P9.3 % of employers wanting assistance from Commuter Assistance (Business survey)


Definitions of Performance Measures for Goal Nine

P9.1 Number of customer inquiries
The number of customers who contacted the commuter assistance program during the review period. This measure will be tracked internally by the CAP agencies.

P9.2 Number of applications processed
This is a performance measure that represents the total number of applications received and processed by the CAP agencies during the review period.

P9.3 % of employers wanting assistance from Commuter Assistance
This is a performance measure taken from a business survey. It represents the percent of businesses responding that stated they would like to be contacted by a CAP agency about establishing an employer TDM program.

42


Goal 10 - Promote Trial Use

Actions:
A10.1 Develop marketing campaign to encourage use of alternative modes
A10.2 Provide rideshare information on request to local residents

Performance Measures (Source):
P10.1 % ever tried alternate mode (General public and database surveys)
P10.2 % of general public trying alternate mode based on advertising (General public survey)
P10.3 % of database trying alternative mode based on CAP info (Database survey)
P10.4 % of general public attempting to contact Commuter Assistance (General public survey)

Benchmark/Results, Targets, Contributing Factor: (left blank)

43


Definitions of Performance Measures for Goal Ten

P10.1 % ever tried alternate mode
This performance measure would be taken from both a general public survey and a database member survey. It represents the percentage of respondents that said they tried using a commute alternative at some point in time to commute to and from work.

P10.2 % of general public trying alternate mode based on advertising
This performance measure is taken from the general public survey. It represents the percent of respondents who said that they tried a commute alternative after hearing/seeing commuter assistance program advertisements.

P10.3 % of database trying alternative mode based on Commuter Assistance info
This performance measure is taken from a database member survey. It represents the percentage of respondents who stated that they tried a commute alternative after obtaining information from the Commuter Assistance Program.

P10.4 % of general public attempting to contact Commuter Assistance
This performance measure would be taken from a general public survey. It represents the percent of respondents who stated that they had tried to contact the CAP agencies for information.

44


Goal 11 - Facilitate Arrangement of Pools

Actions:
A11.1 Hold zip code meetings at employment sites
A11.2 Make introductory calls to potential matched poolers

Performance Measures (Source):
P11.1 Number of zip code meetings held (Collected by CAP)
P11.2 Number of introductory calls made (Collected by CAP)
P11.3 % database receiving pooling tips (Database survey)
P11.4 % database receiving GRH info (Database survey)
P11.5 % database receiving matching info (Database survey)
P11.6 % database using matchlist to try and form a pool (Database survey)
P11.7 Satisfaction with Commuter Assistance among database members (Database survey)
P11.8 % database who would recommend Commuter Assistance to others (Database survey)

Benchmark/Results, Targets, Contributing Factor: (left blank)

45


Definitions of Performance Measures for Goal Eleven

P11.1 Number of zip code meetings held
This performance measure would be tracked by the CAP. It represents the number of meetings held at employment sites to introduce matched employees residing in the same zip code.

P11.2 Number of introductory calls made
This performance measure represents the efforts of the CAP agencies in making formation inquiry calls on behalf of database members that have been matched. This measure would be collected by the commuter assistance agencies.

P11.3 % database members receiving pooling tips
This measure would be taken from a database member survey. It represents the percent of respondents who stated they had received pooling tips from the commuter assistance program.

P11.4 % database members receiving GRH info
This measure would be taken from a database member survey. It represents the percent of respondents who stated they received guaranteed ride home program information from the CAP.

P11.5 % database members receiving matching info
This measure would be taken from a database member survey. It represents the percent of respondents who stated they had received matching information from the CAP.

P11.6 % of database using the matchlist to try and form a pool
This measure would be taken from a database member survey. It represents the percent of respondents who reported trying to make contacts with others on their matchlist to try and form a pool.

P11.7 Satisfaction with Commuter Assistance among database members
This is a performance measure representing the satisfaction database members have with services provided by the CAP agencies. Respondents would rate agencies on a 1 to 10 scale.

P11.8 % of database members who would recommend Commuter Assistance to others
This is a performance measure that would be taken from the database member survey. It represents the percentage of database members who would definitely recommend commuter assistance to others.

46


Goal 12 - Reinforce Use of Commute Alternatives

Actions:
A12.1 Provide GRH program
A12.2 Develop follow-up system

Performance Measures (Source):
P12.1 Number of GRH rides provided (Collected by CAP)
P12.2 Number registered for GRH (Collected by CAP)
P12.3 % of database provided with GRH info (Database member survey)
P12.4 % of database members receiving follow-up contacts (Database member survey)
P12.5 % of employers providing incentives (Business survey)
P12.6 % of employers providing GRH (Business survey)
P12.7 % of employers w/ETCs (Business survey)
P12.8 % 12 mo.+ database members using commute alternative (Database member survey)

Benchmark/Results, Targets, Contributing Factor: (left blank)

47


Definitions of Performance Measures for Goal Twelve

P12.1 Number of GRH rides provided
This is a performance measure that would be tracked by the CAP agencies. It represents the total number of guaranteed ride home rides provided during the review period.

P12.2 Number registered for GRH
This is a performance measure that would be collected and tracked by the CAP agencies. It represents the total number of persons that have registered for the guaranteed ride home program.

P12.3 % of database provided with GRH info
This measure would be taken from a database survey. It represents the percent of respondents from the entire database that stated they had received guaranteed ride home program information.

P12.4 % of database members receiving follow-up contacts
This measure would be taken from a database member survey. It represents the percent of respondents who reported that they had been contacted by the commuter assistance program as a follow-up to materials that had been sent by commuter assistance.

P12.5 % of employers providing incentives
This performance measure would be taken from a business survey. It represents the percent of employers responding that they offered financial subsidies to employees who regularly used the transit system to commute to work.

P12.6 % of employers providing GRH
This is a performance measure taken from a business survey. It represents the percent of employers who reported offering their own guaranteed ride home program to their employees.

P12.7 % of employers w/ETCs
This is a performance measure taken from a business survey. It represents the percent of employers who reported designating their own employee transportation coordinator to assist their employees in finding commute alternatives.

P12.8 % 12 mo.+ database members using commute alternative
This is a performance measure taken from a database member survey. The measure represents the percent of database members whose entry date in the database is greater than 12 months and who report that they are still using a commute alternative.

48


Goal 13 - Develop Commuter Assistance Constituency

Actions:
A13.1 Develop system to track and resolve complaints
A13.2 Develop system to obtain Commuter Assistance service user testimonials

Performance Measures (Source):
P13.1 Number of complaints (Collected by CAP)
P13.2 Complaints resolved (Collected by CAP)
P13.3 Number of testimonials received (Collected by CAP)
P13.4 Employer effectiveness rating of Commuter Assistance (Business survey)
P13.5 Database member effectiveness rating for CAP (Database member survey)
P13.6 % of database members who would recommend Commuter Assistance to others (Database member survey)

Benchmark/Results, Targets, Contributing Factor: (left blank)

49


Definitions of Performance Measures for Goal Thirteen

P13.1 Number of complaints
This is a potential performance measure for the CAP agencies. The CAP agencies would collect the number of complaints they received in regards to their services.

P13.2 Complaints resolved
This is a potential performance measure that would be collected and tracked by the CAP agencies. The measure would count the number of complaints resolved by the commuter assistance program to the customer's satisfaction.

P13.3 Number of testimonials received
This is a potential performance measure. The measure would be collected by the CAP agencies and would represent the number of testimonials and written recommendations made on behalf of the commuter assistance program.

P13.4 Employer effectiveness rating of commuter assistance
This is a performance measure taken from a business survey. It represents the rating given by employers on the effectiveness of services provided by the CAP agencies. The rating scale is from 1 to 10.

P13.5 Satisfaction with the commuter assistance program among database members
This is a performance measure taken from a database member survey. It represents the satisfaction rating given by respondents on the services provided by the CAP agencies. Respondents would be asked to rate the agencies on a scale of 1 to 10.

P13.6 % of database members who would recommend commuter assistance to others
This is a performance measure taken from a database member survey. It represents the percentage of database members who would definitely recommend the commuter assistance program services to others.

50


SECTION D - DETERMINING APPROPRIATE PERFORMANCE MEASURES

The CAP office should meet with their local FDOT District representative to select which performance measures will be used to evaluate the program. At a minimum, all required performance measures must be included. At CAP and/or FDOT option, performance measures taken from the optional performance measures section and from the other performance measures section may be included.

Selecting Performance Measures

When selecting performance measures, the CAP and FDOT District offices should consider:
What performance measures can be used to monitor progress in achieving stated program goals and objectives?
What performance measures can be used to improve program performance or customer service?
What performance measures help highlight program accomplishments?
What CAP programs are important and are not measured through the required performance measures?
What new initiatives or programs have been added since the last evaluation that should be measured?
Does the available evaluation budget allow us to conduct other surveys besides the database survey? (See Chapter Six of the CAP Evaluation Manual for budget considerations.)

Assistance in selecting appropriate performance measures, and in developing survey questions to collect the data needed to assess performance, is available from the TDM Clearinghouse located at the Center for Urban Transportation Research (CUTR) at the University of South Florida.

An example methodology for measuring overall program effectiveness and changes in productivity

One of the challenges in evaluating the performance of TDM programs across programs and over time is the diversity of goals and objectives as well as different emphasis areas. The evaluation should help CAPs enhance their performance through focus on dual, results-oriented goals:

51


1. delivery of ever-improving value to customers, resulting in greater use of alternatives to the single occupant vehicle by commuters; and
2. improvement of overall CAP operational performance (e.g., lower cost per person served).

The selection of products and services, performance measures, and organizational structure usually depends upon many factors such as the service area, the CAP's stage of development, and employee capabilities. The CAP, in cooperation with their key stakeholders, should select which objectives and performance measures best describe its mission and accomplishments. A successful evaluation will use procedures that determine one or more of the following: (1) the extent to which the program has achieved its stated objectives (e.g., increases in Average Vehicle Occupancy); (2) the extent to which the accomplishment of the objectives can be attributed to the program (direct and indirect effects); (3) the degree of consistency of program implementation to plan (relationship of planned activities to actual activities); and (4) the relationship of different tasks to the effectiveness of the program (productivity).

The following CAP Productivity Index summarizes the CAP's operational performance. Once the information is collected on performance, awareness and customer satisfaction, the next challenge is how to summarize these diverse factors to give an overall assessment of the program, track progress and revise objectives. Using the attached "Productivity Matrix" for the key performance measures or ratios, one can quantify the total impact of the performance measures. Referring to the attached table, the first shaded line would be the actual results of the CAP. The shaded blocks scattered below reflect nearly the same value. The ranges of values shown are for illustrative purposes only and should be established for each CAP. Level 0 represents the lowest value recorded for the criterion ratio over a recent period of time in which normal operating conditions existed; nominally, the worst ratio reading that might be expected. Level 3 represents operating results indicative of performance proficiency at the time the rating scale is established. The highest level, Level 10, is a realistic estimate of results that can be attained in the foreseeable future (e.g., 3 years) with essentially the same resources that are now available. This could be the benchmark of the industry's best.

By looking up the corresponding "Performance Score" on a scale of 0 to 10 to the right, the CAP can gauge how well the program is doing on that factor. Each score is noted in the shaded line near the bottom of the table. By assigning weights to each factor, the program can recognize those items thought to contribute most to the CAP's individual program. These weights might be determined by the CAP and/or FDOT. The total "Performance Indicator" score reflects the combined, weighted score of each factor. Changes in this score from period to period will recognize changes in productivity.

52


PRODUCTIVITY MATRIX (Example only)

Criterion         Quantity          Quantity      Quality             Awareness      Number      TOTAL
                  (# Veh Trips      (# Vans in    (Customer           (% heard       of ETCs
                  Reduced)          Service)      Satisfaction        of CAP)
                                                  Rating)

CURRENT VALUE     55,232 per yr     6 vans        82% somewhat        50% heard      32 ETCs
                                                  to very satisfied   of CAP

Level 10          80,000            20            100%                95%            65
Level 9           75,000            18             97%                90%            60
Level 8           70,000            16             94%                85%            55
Level 7           65,000            14             91%                80%            50
Level 6           60,000            12             88%                75%            45
Level 5           55,000            10             85%                70%            40
Level 4           50,000             8             82%                65%            35
Level 3           45,000             6             79%                60%            30
Level 2           40,000             4             76%                55%            25
Level 1           35,000             2             73%                50%            20
Level 0           30,000             0             70%                45%            15

SCORE             5                 3              4                  1              3
Weight            20%               10%            30%                10%            30%
Weighted Score    1.0               0.3            1.2                0.1            0.9         3.5

Change in Productivity = ((Total weighted score / 3) - 1) x 100% = 16.7%
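As a check on the arithmetic, the example matrix above can be reproduced with a few lines of code. The scores and weights below are the ones shown in the example table; a real CAP would substitute its own.

```python
# A minimal sketch of the Productivity Matrix roll-up, using the example values.

scores = {                       # performance score (0-10) read off the matrix
    "vehicle_trips_reduced": 5,
    "vans_in_service": 3,
    "customer_satisfaction": 4,
    "awareness": 1,
    "number_of_etcs": 3,
}
weights = {                      # weights assigned by the CAP and/or FDOT
    "vehicle_trips_reduced": 0.20,
    "vans_in_service": 0.10,
    "customer_satisfaction": 0.30,
    "awareness": 0.10,
    "number_of_etcs": 0.30,
}

# Performance Indicator: the combined, weighted score of each factor
total_weighted_score = sum(scores[k] * weights[k] for k in scores)

# Change in productivity relative to the Level 3 baseline:
# ((total weighted score / 3) - 1) x 100%
change_in_productivity = (total_weighted_score / 3 - 1) * 100

print(f"Total weighted score:   {total_weighted_score:.1f}")     # 3.5 in the example
print(f"Change in productivity: {change_in_productivity:.1f}%")  # about 16.7% in the example
```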


Chapter Three
Evaluation Types

INTRODUCTION

In order to conduct an effective evaluation, it is necessary to understand what the evaluation is supposed to accomplish. A useful typology of evaluations has been drawn from The Evaluator's Handbook, published by the Center for the Study of Evaluation at UCLA.

TYPES OF EVALUATION

Three basic types of evaluation exist:
- Needs Assessment
- Summative Evaluation
- Formative (Process) Evaluation

Each of these evaluations uses different types of evaluation tools, including planning or goal-setting meetings, examination of existing data or performance measures, and market surveys. The implementation of each of these tools is described later in the CAP Evaluation Manual. The three types of evaluations are described in detail below:

Needs Assessment

A Needs Assessment is conducted when the program being evaluated is attempting to determine its goals and objectives. At some point in the organization's life, preferably close to the beginning, organizational goals and objectives must be set. The market that the organization is going to serve, and the needs of that market that will be filled by the organization, must be clearly identified. Needs Assessments are also called for when the organization perceives that significant change is taking place in its market, either due to new technologies, new patterns of behavior, or other major changes that impact the organization, the way it does business, or the needs that the organization is attempting to meet.

Needs assessments typically use one or more of the following evaluation tools:
- Surveys to profile the market, including:
  a) Quantifiable (usually telephone, mail, or panel) surveys to determine size and needs, and to identify and profile the market segments for targeting
  b) Focus groups to better understand the specific needs being served
- Overview of the organization's current capabilities - if applicable (i.e., if the needs assessment is occurring after the organization exists rather than as an initial step in the development of the organization)
- Identification/flowcharting of the organization's current processes - if applicable
- Strategic Planning sessions with upper management

54


Summative Evaluation

A "Summative" evaluation is one in which the effectiveness of the organization is examined in relation to its goals and objectives. Has the organization met its goals? Is it worth the money that is being spent on it? How well are organizational processes performing? Many elements are used in these types of evaluations: financial records, records of sales or transactions (in the case of CAPs, records of matches requested and performed, growth of the matchlist database, etc.), examination of performance measures data, and survey research on the market served - often including customer satisfaction surveys. The intent of a summative evaluation is essentially to grade the performance of an organization.

Summative evaluations typically use one or more of the following evaluation tools:
- Surveys of the served market, including:
  a) Quantifiable (usually telephone, mail, or panel) surveys to determine the impact of the organization on the market's behaviors (use of carpools, etc.) and/or to determine the organization's customer satisfaction levels
  b) Focus groups to better understand the specific problems customers have with the organization, usually done after a quantifiable study
- Examination of organizational data (i.e., accounting, marketing, and other performance)

Formative or Process Evaluation

A Formative Evaluation differs from a Summative Evaluation in that its purpose is to analyze organizational processes and suggest improvements to those processes to better serve the organization's goals, as opposed to merely grading their current effectiveness. The purpose of these evaluations is not so much to find new directions or objectives for the organization to meet as to fine-tune the methods currently used in meeting objectives. If there is reasonable doubt that the processes are even coming close to meeting objectives, a summative evaluation of those processes (with the purpose of determining whether or not to continue the activity) may be called for. If there is reasonable doubt that the goals which the process is designed to meet are appropriate, a needs assessment may be called for.

One purpose of conducting a formative evaluation would be to examine the organization's processes as a whole. A second purpose might be to compare how processes are carried out in different parts of the organization, such as at different sites. It is not uncommon to discover that two commuter assistance programs operating under a single umbrella, theoretically with the same set of procedures and guidelines, have entirely different ways of handling their customers.

Formative evaluations typically use one or more of the following evaluation tools:
- Surveys of the served market, including:
  a) Quantifiable (usually telephone or mail) surveys to determine customer satisfaction with processes, market behaviors, and how processes can be better designed to mesh with those behaviors.

55


- Focus groups/Personal interviews to better understand how customers use the organization's product or service and the specific needs being served
- Flowcharting of the organization's current processes
- Interviews with employees who carry out the organizational processes being evaluated

Multi-purpose evaluations

Many evaluations are conducted for multiple purposes, particularly for both summative and formative purposes. For instance, it is quite common for a survey of an organization's customers to contain elements that both grade the organization on its current performance (summative evaluation) and that inquire into customer opinions about how service can be improved, either implicitly (through customer grading of various organizational processes - low grades need improvement) or explicitly. This is an acceptable, and in many cases desirable, procedure as long as the elements of the evaluation that are being conducted for summative versus formative purposes are clearly delineated.

MARKET RESEARCH AND SURVEYING

This chapter will provide the reader with a brief background on market research and surveying techniques and practices, and how they can be integrated into effective evaluations. It is intended to familiarize the reader with the concepts, terms and options available in the field of market and survey research. This chapter is intended to provide the reader with enough knowledge to manage and oversee survey research projects. However, just as a manual on TDM strategies would not in itself provide a reader with the knowledge to form and operate a Commuter Assistance Program, this chapter does not in itself provide the tools and knowledge necessary to conduct research projects entirely on one's own. Such abilities are gained with years of classroom instruction and field experience.

Purposes of doing market research surveys

Market research surveys are designed to answer questions about the attitudes and behaviors of a specific group of people (a "market"), and to provide quantitative estimates of the prevalence of such behaviors and attitudes in the subject population. Market research can be viewed primarily as a means of reducing uncertainty - going from a "guess based on my own experience" to an informed estimate based on interviewing a representative sample of the market in question. A research project can improve an estimate from "I'm pretty sure that somewhere between 20% and 50% of the population has ever actually tried carpooling" to "There is a 90% chance that somewhere between 25% and 30% of the population has ever actually tried carpooling."

Surveys are a tremendous aid in conducting any of the three types of evaluations discussed earlier: needs assessments, summative evaluations, and formative evaluations. They provide greater understanding of how your customers use your products, what they think about them, and what other products or services they may want that you may be able to provide to them.

56


Market research projects generally take one of the following forms, described below:

- Attribute Testing, which determines what facets or characteristics of a product or service are more or less appealing to a target market. An example of this type of study could be a study on competing airlines: who has better seating, baggage handling, more courteous service, better on-time performance, better prices, etc.

- Analysis of users, which provides demographic/psychographic profiles of a target market, often also comparing those profiles to profiles of a different market or of an overall population. This type of study is often used to direct resources in media selection for advertising/promotional campaigns. An example of this type of study could be a comparison of the demographics of carpoolers versus people who drive alone to work.

- Satisfaction surveys, which gauge the level of satisfaction of product or service users, and often are also structured to suggest areas where improvement would be most beneficial. An evaluation of a CAP by its members will generally take this form. For that reason, this type of study will be discussed at length in this manual.

- Studies of decision-making methods, which investigate how members of a target market make decisions, including what factors are used to make decisions and their relative importance to a decision. An example would be a study of mode choice.

- Market sizing and/or forecasting, which attempts to estimate how many people in a target market make use of a product or service and how much of that product they use. An example of this type of study could be an attempt to estimate how a CAP's activities translate into a reduction of Vehicle Miles Traveled.

Whatever the results or findings from a market research study, there are two things that market research never does. Research never makes a decision; it merely provides better information for you to make decisions. Research never guarantees success; it merely reduces the amount of uncertainty in the information you have.

Attribute Testing

The purpose of attribute testing is usually to determine what types of characteristics a product or service should have, and the relative importance of allocating resources to the development, maintenance, or improvement of those characteristics. Respondents are typically asked to rank, rate, or otherwise compare various attributes as to their importance, desirability, value, and so forth. If a rating is used, it is often done on a numerical scale such as 1-5, 1-7, 1-10, etc.

Other types of studies attempt to determine how a product is perceived in terms of its attributes. One such approach, called Multi-Dimensional Scaling, has respondents rate competing products or services in terms of their similarity and then uses mathematical modeling to help identify what attributes of the products respondents are using to make their comparisons. For example, a survey might have a respondent rate mode choices in terms of their similarity (driving alone versus carpooling versus biking, etc.).

57


Other techniques have respondents rate each product on a series of attributes and create graphical comparisons of the products based on those ratings. For example, a survey might have respondents rate carpooling, riding the bus, etc. on convenience, cost, efficiency, and so forth. These types of analyses are often useful in identifying and understanding how consumers or potential consumers view competing alternatives, and how perceptions might need to be changed in order to create greater acceptance of a particular alternative.

Analysis of Users

This is a classical type of analysis that usually involves asking respondents about their habits, attitudes, and demographic characteristics (age, income, education, and so forth) and then creates profiles of different groups. Often this is done to identify what types of people are most likely to use a product. This then allows the researcher to try to market the product more actively to those types of people on the basis that it is more attractive to them, or conversely they may try to reposition the product and target it to the types of people who are not using it. It all depends on the purpose of the research and the objectives of the organization that is marketing the product.

Customer Satisfaction studies

As market growth began to level off and competition for the consumer dollar increased to fierce levels in the latter part of the 1980s and early '90s, customer satisfaction studies grew rapidly in popularity, acceptance, and use. Companies focused more efforts on retaining existing customers as it was discovered that retention was nearly always more efficient and profitable than market expansion and stealing market share from competitors.

Satisfaction studies take on a variety of forms. One of the most common is to measure overall satisfaction with a product or service and also to measure satisfaction with a number of the product's components. For a consumer product such as toothpaste, this might include satisfaction with taste, cleaning ability, cavity prevention, and so forth. For a service, components might include reliability, courtesy of employees, timeliness, and value. Other types of studies measure satisfaction with a large number of different services provided by organizations.

In some cases, statistical models are built that determine the relationship of attribute ratings to overall satisfaction. This can show either what the most important determinants of satisfaction are, or alternatively what elements are most important in explaining the difference between satisfied and unsatisfied customers. The differences here are subtle but extremely important. For instance, in the case of airlines, the most important attribute in customer satisfaction may well be safety, but since airlines are generally safe (look at the number of accidents compared to, say, roadway accidents), perceptions of safety rarely determine whether or not a customer is satisfied. Other characteristics, such as on-time performance, courtesy of employees, and so forth, become more critical.

58
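One way to build the kind of statistical model described above is an ordinary least-squares fit of overall satisfaction on the attribute ratings; the coefficients then suggest which attributes move overall satisfaction most. This is only one possible modeling approach, and the ratings in the sketch below are invented purely for illustration.

```python
# A hedged sketch: relate attribute ratings to overall satisfaction with a
# simple least-squares fit. The data are invented illustration values.
import numpy as np

# Rows = respondents; columns = ratings (1-10) for reliability, courtesy, timeliness, value
attributes = np.array([
    [8, 9, 7, 6],
    [5, 6, 4, 5],
    [9, 8, 9, 8],
    [4, 5, 3, 4],
    [7, 7, 8, 6],
    [6, 8, 5, 7],
])
overall = np.array([8, 5, 9, 3, 7, 6])   # overall satisfaction ratings (1-10)

# Add an intercept column and solve the least-squares problem
X = np.column_stack([np.ones(len(overall)), attributes])
coefs, *_ = np.linalg.lstsq(X, overall, rcond=None)

labels = ["intercept", "reliability", "courtesy", "timeliness", "value"]
for name, c in zip(labels, coefs):
    print(f"{name:12s} {c:+.2f}")   # larger coefficients suggest stronger drivers
```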


Studies of Decision-Making Methods

This is an area that has been heavily used in transportation research. A large number of studies have been done to discover the relative importance of various mode choice determinants (or travel characteristics), including in-vehicle and out-of-vehicle time by mode, perceived costs, parking availability, and so forth. One of the most common approaches is called Discrete Choice Analysis, which is used either with existing data on mode choices (such as census data), or with structured surveys that present respondents with hypothetical situations and ask them to choose a mode given the characteristics of each situation. From the mode chosen and the levels of the characteristics (high parking costs, low parking costs, short travel distance, long travel distance, and so on), the importance of each of the characteristics can be estimated.

Market Sizing and/or Forecasting

This is an extremely common application of survey research, used in many consumer goods and service industries. Respondents are asked to estimate how much of a product or service they use or would use. The sample is then weighted to replicate the make-up of the population in question, and average usage rates are calculated. Finally, these average usage rates are applied to the entire population to determine a total market size or market potential.

An example of this type of application could be VMT reduced by getting people to carpool. The population would be surveyed as to their intent to carpool, given some incentives and/or activities that the CAP in the area might undertake. The percentage that would carpool is then reweighted to replicate the make-up of the entire population (if necessary; a well-designed random sampling procedure should just about perfectly replicate the population), and the percentage is then applied to the population size and known travel characteristics. From these calculations, overall VMT reduced by forming carpools can be estimated.

This type of procedure has some major limitations. The estimation usually requires respondents to predict entire patterns of behavior over long periods of time (as opposed to merely stating a preference for one product or service over another, or committing to a one-time trial of a product without long-term implications, which is the form most reliable product/service tests take). Sophisticated demand estimation techniques for products such as consumer goods often use either full-scale test markets or laboratory-based "shops" which allow for observation of behavior and a full representation of the entire choice experience. This type of approach is impossible to apply to carpooling estimation. Carpool estimation also has a relatively rare drawback in that carpooling is seen as a public boon and is considered socially responsible and desirable. Therefore, respondents are likely to respond that they will carpool when polled as part of a public inquiry, even though their actual behavior will often not follow suit.

59
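A simplified version of this market-sizing calculation is sketched below. All inputs (population size, stated intent to carpool, commute length, carpool occupancy) are hypothetical, and, as the preceding paragraph notes, stated intent to carpool usually overstates actual behavior.

```python
# Illustrative market-sizing sketch: translate survey responses, weighted up
# to the population, into an estimate of VMT reduced by new carpools.
# Every number below is hypothetical.

commuting_population = 400_000      # commuters in the service area
pct_intending_to_carpool = 0.05     # survey share saying they would carpool (likely optimistic)
avg_one_way_commute_miles = 14.6
working_days_per_year = 245
avg_carpool_size = 2.2              # persons per carpool vehicle

new_carpoolers = commuting_population * pct_intending_to_carpool

# Annual drive-alone mileage per commuter: two trips per day over the work year.
vmt_per_person_per_year = 2 * avg_one_way_commute_miles * working_days_per_year

# Only the shared portion of each carpooler's mileage is actually eliminated:
# a pool of size k removes (1 - 1/k) of each member's former drive-alone VMT.
vmt_reduced = new_carpoolers * vmt_per_person_per_year * (1 - 1 / avg_carpool_size)

print(f"Estimated annual VMT reduced: {vmt_reduced:,.0f} miles")
```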


Nonetheless, surveying is often the only way of producing a reliable estimate of potential commuting behavior changes. The limitations noted above should be considered when estimations and forecasting are undertaken, but it should also be kept in mind that an estimate with limitations can be a valuable addition to subjective data and prior experience in other, possibly very different, areas.

60


Chapter Four
Survey Methodologies

TYPES OF SURVEYS

There are a number of different types of surveys, each of which has unique characteristics and limitations. The choice of survey method is dependent on the objectives involved in doing the project and the budget available. The main types of surveys are:
- Focus Groups
- Written/Mail surveys
- Telephone Surveys
- Personal interviews
- Panels

A short discussion of each of the approaches follows:

Focus groups

Focus groups are an excellent alternative if only a very general feel of public interest or support for a particular subject is required, and the researcher wishes to determine which issues of great impact to the community will surface. Because of the small sample sizes involved, this process will not allow for a quantitative estimate of public support, nor will it determine the relative importance of issues raised or topics discussed. Typically two to four focus groups will be held. Cost will vary from $3,000 to $6,000 per topic, depending on the number of focus groups held, the complexity of questions, and other time-related factors.

Reports from focus groups may contain references such as "75 percent of the people in the group were in favor of...." This type of statement is very misleading, since it implies that the percentage can be applied to the general public. It is best to avoid using numerical results if at all possible in such reports, and to concentrate on the qualitative aspects of the results - issues raised and discussed, features of products or services that come up during the session, and so on.

Written/mail surveys

Written and mail surveys are usually the lowest cost alternative available for quantitative estimation. The surveys allow for a relatively large amount of data to be collected from each respondent. However, the format of the questions should be kept simple. Difficult, complex survey formats will usually cause frustration in respondents and low response rates, thereby compromising the sample and possibly rendering it unrepresentative of the population. Also, written surveys are often subject to low response rates, further compromising projectability. Certain techniques (such as obtaining databases of names and addresses and including

61


incentives) can help to improve response rates at higher costs. Finally, written surveys usually take over a month to collect the necessary data.

Costs will vary greatly depending on the level of projectability the researcher is attempting to obtain. To provide a single, reliable estimate for an area, a minimum sample size of 250-300 is recommended. In cases where an independent estimate is required for several segments of the population (such as geographic areas, income levels, etc.), required sample sizes can increase greatly. Usually, if a "general idea" is required for sub-segments and an accurate estimate for the population as a whole, a sample size of 125-150 per segment is sufficient. The cost for this type of approach can vary from $5,000 to $10,000 and up, depending on the sample size and type required.

Telephone surveys

Telephone surveys have the advantage of rapidly providing quantitative estimates. Telephone surveys also tend to have higher response rates than mail surveys, which increases their level of proper representation and projectability. The major drawback of telephone surveying is the cost involved. Furthermore, the amount of data and complexity of responses that a respondent can provide is limited; half-hour phone interviews are not recommended. Concepts presented need to be fairly simple and straightforward.

As with mail surveys, costs will vary greatly depending on the level of projectability the researcher is attempting to obtain. To provide a single, reliable estimate for an area, a minimum sample size of 250-300 is recommended. In cases where an independent estimate is required for several segments of the population (such as geographic areas, income levels, etc.), required sample sizes can increase greatly. Usually, if a "general idea" is required for sub-segments and an accurate estimate for the population as a whole, a sample size of 125-150 per segment is sufficient. The cost for this type of approach can vary from $7,500 to $25,000 and up, depending on the sample size and type required and the length of the interview.

Personal interviews

Personal interviews are the best alternative when complicated survey formats are required and detailed information needs to be provided to respondents. This is the only alternative that would have any chance of providing an estimate of transit demand. However, even this approach would suffer from some of the limitations noted above. Costs for this type of interview tend to be extremely high if a quantitative estimate is required, since the usual purpose of using this type of interview is to present fairly complex information to

62


potential respondents, and to be able to judge the nuances of response. This requires rather skilled (and relatively expensive) interviewers, and also often involves travel expenses. If the only intent of the personal interview is to be able to present information, a mail/phone approach can sometimes be used at lower cost.

Panels

Panels are used when the objective is to track behaviors and changes in behavior over an extended period of time. Panels also provide convenient samples for testing new ideas in product or service development. Classic examples of panel research include the Nielsen rating panels and a national purchase panel run by the NPD Group, which tracks purchases of a large number of different consumer goods. Panel research can be very expensive, particularly if the panel approach is used for a single product or service. Usually panels are most useful when a number of different product or service areas are being covered, as in the NPD panel.

A table summarizing each of these approaches follows:

63


FOCUS GROUPS          WRITTEN/MAIL SURVEYS          TELEPHONE SURVEYS          PERSONAL INTERVIEWS          PANELS

Description: Focus groups - 8-10 people discuss topics ...

64
PAGE 69

ISSUES IN SAMPLING Many of the issues involved in proper sampling have been touched on in the above sections. 1bis section will deal with each of the issues in more depth Tbe question of sample sizes will be briefly introduced and will covered in detail in the statistics section which directly follows this section. Certain key elements that must be included in any sampling plan: Defm i tion of target population Issues in proper representation a) how to ensure proper representation quotaS & screeners random selection reweighting b) how to evaluate bow well the sample represents the population Sampling efficiency Sample size Sample sources Defmition of target popu lation 1bis issue has been touched on in the section on hypothesis generation. Usually, the hypotheses that are being tested will defme the target population, at least in a broad sense. The key is to define the target population in such a way that each respondent provides meaningful information. Even if the hypotheses do make clear the population that will be surveyed, this item should be restated when the sampling plan is being developed, to ensure that there are no misunderstandings or misinterpretations. Pro p er representatio n I Because most surveys are con ducted on a sample of the population rather than the full population, it is vital that the sample selected properly represent the population Imagine, for in s tance, if a survey of potential carpoolers were only conducted among households that had three or more cars, rather than among a sample of the population that more closely represented the e n tire population. It is very probable that this sampl e would have very low intentions of carpooling, since car availability is a major fac tor in determining mode choice. 1bis would lead the researchers to draw erroneous conc l usions about the prospects of developing carpools among the population. 65

PAGE 70

How does a researc her go about ensuring proper represen tation; and evaluating completed s urveys to check for proper representat ion? E nsuring proper represe ntation Ensuring proper repre senta t ion can be done in several ways The steps to be take n include: I Ident ify ke y variables to serve as indicators 2. Include measuremeniS of those variables in the surveys 3 Devise a random selection process 4 In some cases require that the sample meet quotas on in dicator variables. 5. Weighting results I ldentifrint: keY variables The researcher and the research sponsor should identify those variables that will most lik ely impact attitudes and behaviors being measured Thi s is done through a combination of historical data sources (if available) and u sing the expertise o f the parties involved to determine the most important variables. Usually one check s on a limited set of variab les, sa y five or six These can typically include age, income, gender presence of children, and so forth It is important that there be an independent source that measures those variables U sually when the entire population of an area is being surveyed, census data serves as a good c he ck on major demographic variables Breakdowns of census data or tables in the U. S. Statistical abstraciS can also serve as good check s when segments of a population are being surveyed. When the target sample is from an extremely specific database (for instan c e a ridesharing database ), d a ta must either be culled directly from the database or from historical surveys of that database if available. 2, lpdudine m easurements of the indica)or v ariables Clearly if a variab l e is to be used as an indicator of proper representation, that variable must be included somewhere in the data collectio n proces s. Standard demographics ar e typically pan of any surveying effort since demographics often impact attitudes and behaviors and are therefore extrem ely usefu l in extrapolating results gleaned form a survey to the entire population. Any other variables chosen as indicators, such as number of automobiles, type of housing, and so on, should have a specific question in the survey to collect that data item. 3. Devise a random selection process The most common way of ensuring a repre se ntative sampling of any given target population is through a random sampling process In t elephone based surveys, this is often accomplished through a technique known as random-digit-dialing Commercial services will obtain a list of aJJ working phone e x changes, devise a sample of random numbers fitting those exchanges, e liminate exchanges having a high incidence of business/gov ernmen t telephone numbers, and then use the resulting list as a basis for the sample. This type of list will be most effective if it is further randomized by placing the telephone 66

PAGE 71

numbers in random order. Because of the relatively large number of unlisted telephone numbers, a random selection process from a published phone book can create bias by eliminating unlisted numbers (which often belong to people with higher incomes) from the sampling universe. When sampling from databases is involved, there are several possible random selection procedures Ideally, the sample will be totally random. The process involved in creating a totally random sample involves: determining the sample base necessary determining the ratio of sample needed to total database size using a random number generator to create numbers between 0 and I, and applying those numbers to each database record, and selecting as sample all those whose assigned random number falls below the ratio of sample needed to database size. A second, less ideal but more commonly used method, is to create an nth-record sample, where the ratio of database size to sample needed is determined, rounded down, and every nth record is selected, where n is equal to the ratio of database size to sample needed. This method is acceptable when the database is not organized with some sort of regular order bias (such as all database requests sorted by day of the week received). It should be noted that sample base size, that is, the sample that is drawn to meet the needs of the survey, is usually much larger than the actual required sample size. The reason for this is that there are a large number of non-working phone numbers and/or bad addresses in databases, and that a large percentage of people may not respond to the surveys. A ratio of I 0 for sample base to desired completed surveys is not uncommon. 4. !lsin2 QUotas on indicator variablesAnother way of essentially forcing a sample to be representative of the population is to set quotas on some or all indicator variables. This is often used in selecting samples for focus groups, and often used on variables such as male/female ratio and minimum age (often 18 or older) for telephone surveys Using quotas requires that the indicator variables be identified up front in a portion of the survey called a screener. For instance, if a survey were to have quotas set on gender, age, income, and presence of cb.ildren, where a certain distribution in each of those categories was required, those questions would be the first asked in the survey. Interviewing would take place for each category desired until the quota was filled, and then people meeting the filled-quota description would no longer be interviewed. A modified form of this approach can be used in mail surveys, but only if many more returns are received than need to be used. The quota variables will be checked on the surveys as they are returned, and as each quota is filled, no more surveys fitting in to that quota will be used. Ideally, this would be done by waiting until a pre-set cutoff date was reached, processing all of the surveys received up to that date, and then randomly selecting surveys to be used for each quota. More commonly, however, quotas will be filled in the order in which the surveys are received. It 67

PAGE 72

should be noted that this technique is not often used with mail surveys, except to eliminate returns that don't fit the target population at all. Mail surveys more conunonly use weighting techniques to adjust for sample returns as described in the next section 5. sur:ycy resultsSurvey results are commonly weighted so that indicator variables will match up with independent source data. For instance, if a survey returned has only a 15% distribution of respondents 3 or more cars, and it is known that the target population has 25% (say from census data), then the survey results can be mathematically re-weighted to match the 25% figure. When this is done, all of the responses from the 3 + car group are re-weigbted, not just the indicator variables. All of their opinions and attitudes are made more prominent. As an analog y, if you are seeking a medical opinion, and you get one from a doctor who got out of medical school last week and one from a doctor who has been in pract ice for 1 0 years, you could reasonably consider all of the statements made by the experienced doctor as being more important to your final decision, on the basis of his/her years of experience. The same principle applies in reweighti.ng survey results. A critical factor in weighting survey results is that you have sufficient sample size within the group you are reweighting, particularly if you are making their opinions more prominent. If you had 5 responses from people with 3 or more cars and were to weight them as importantly as I 00 responses from other people, you run a severe risk of having unrepresentative results Your confidence in the responses given by the group to be re-weighted should be fairly high. The section on sample size, as well as the section on statistics will explain the concept of confidence in greater detail. As a ride of thumb, it is probably unwise to re-weigh! responses from a group with less than 75 responde nts. Evaluating surveys for proper representation Once the data have been collected you will have a distribution of responses on the indicator variables, such as percent male and female, percent in various income brackets, and so forth. In some cases, you may have an average (or mean) value as a check (such as mean number of vehicles, mean number of people per household, etc.). T ypically, however indicator variables are evaluated in the form of distribut ions. Checking the responses for proper representation essentially involves making statistical tests on the distributions This section will provide a very general outline of what you are looking for when conducting the tests The mechanics of conducting the tests will be described in the statistics section. Two types of tests are conunonly conducted on distributions. These tests are a variation on the stan dard t test and what is called a chi-square test. 68

PAGE 73

The first, in which you compare the percentage of people who fall in a certain category in the survey responses to th e percentage that fall in that category in the independent sample (such as the census) is a variation on a standard statistical test called at test The !test is designed to be used to compare means. However in this case, each category can be considered as a yes/no response (for example, if25% have 3 or more cars, we can trea t this as the response to the question "do you have 3 or more cars?" where 25% said yes and 75% said no), and can be essentially treated as a numerical response of I or 0. The proportion can then be compared either to h i storical data or census data, treated in the same fashion, through this t e st. The mechanic s of the tes t are described in the statistics section. The chi square test examines the entire distribution of responses simultan e ously, as opposed to comparing category-to category, and gives back a result that i ndicates whether the distributions are (statistically) significantly differen t or not. Thus this test could be applied simultaneously to the percentage of people saying they had no cars, I car, 2 cars, and 3 or more cars, to determine if the entire distribution were different. Alternatively, it can applied in the same manner as described for the t test (as a series of yes/no responses), in which case the chi-square test is equivalent to the variation on the t test. Again, the notion of sta t istical significance will be dealt with in detail in the statistics section. Means can also be compared to ensure representativeness, although th is is done much more infrequently. The reason that distributions are used more often to check how whether a sample is representative is that data is the checks are usually done on demographics, which are more typically collected in categorical form rather than in exact numbers Sampling efficiency Collecting data from respondents costs money, and the more data is collected, the more money it costs Another major cost factor is inefficiency i n sampling, where for example you set up quotas and then contact a large number of people who don't fit in the quotas. It costs time and money just to check whether or not potential respondents fit into quotas. Usually research dollars arc tight, and i t is more than worthwhile to do everything possible to ensure that the sample base i s as efficient as possible Sampling efficiency can be achieved in many ways Simple examples could be: If a sample of working commuters is desired it would be wise not to send surveys (or make telephone calls) to communities that are largely populated by retirees. If a sample of people who live in, say, S t. Petersburg, Florida is desired, all phone exchange known to be wholly in Clearwater (or Seminole, or Largo, etc ) should be eliminated Commercial databases sometimes contain demographic data that c an be used. For instance, a survey of commuters drawn from a demographic database could be restricted to those aged 18-54, if age data is available on the database. 69

PAGE 74

For efficiency purposes i f the data is no t available in advance and a screener mus t be used the screening section should clearly be the first part of the survey, so that non-qualifying respondents won't be interviewed (and thus cost money), only to determine towards the end of the surv ey that they don't qualify. Sample The issue of how many returned surveys are required is fairly complex. Some fairly advanced statistics are involved The key issue that the research sponsor needs to determine is the level of uncertainty that is acceptable in the results. As mentioned earlier, there is always a chance that the survey will not exactly represent the opinions of the population even if a completely correct random selection procedure is used. This was demonstrated with the example of the deck of cards, where we could randomly select 20 cards from the deck and had to estimate (from the cards we drew) what percentage of the cards in the deck were black and what percentage were red. It is conceivable that we would randomly select 20 red cards and no black ones. Survey results are usually presented as a single specific result, such as "25% of the population has 3 or more cars." To be completely accurate the result might be presented in the following way: There is a 95% chance that the between 22% and 28% (25% +/3%)ofthe population has 3 or more cars. There is a 90% chance that between 23% and 27% (25% +/ 2%) of the population has 3 or more cars. There is an 80% chance that. ... and so on There are two elements involved in the uncertainty about survey results one is a range of result s that the "true" result falls in (known as the confidence interval), and the other is the percent chance that the result falls into that range (known as the confidence level) Given a certain samp le size that is randomly selected from a population, for any given result either a percentage or an averagea confidence level and confidence interval can be calculated. The level and the interval are interdependent; that is the size of the interval depends on the magnitude of the level. For any given result, there is an interval corresponding to an 80"/o confidence level, a different (and larger) interval corresponding to a 90% confidence level, a third (and still larger) interval corresponding to a 95% confidence level, and so forth. One common misconception is that in order to get a reliable sample, it is necessary to survey a certain percentage of the population. The fact of the matter is that confidence levels and intervals are can be calculated completely independently from the size of the total target population. Should you happen to s urvey a large percentage of a population ( say, 10% or more), a factor can be applied that increases the level of confidence. But the basic calculation (presented in the section on statist ics) provides a minimum level of confidence (and confidence interval) independent from the size of the total target population. 70

PAGE 75

The notion of confidence intervals and levels also demonstrates why focus groups are not a rel i able source of quantitative information such as percentages Suppose there are 12 people in a focus group, and eight of them happen to agree on something. It is not uncommon for focus groups to report that a large majority" or even "two-thirds of the "market" agrees on something Application of the confidence interval formula (which really shouldn't be used for such small samples anyway) would show that the true result, at a 95% confidence level, was anywhere between 41% and 95% which might not indicate a "large majority" or even a majority at all. What the research sponsor needs to decide for the key results coming from the survey is what size of interval at what level are acceptable U sually, the confidence level is determined first (e g "I want to be 900/o confident that all the results ... "), and then the acceptable interval is determined (" ... are within 3 percentage points or less of the true values.'') This decision is then evaluated (using statistics to be presented in the statistics section) for a 50% result, and the desired sample size can then be determined. The nature of the confidence interval is that it is at its maximum size when a 50% results occurs. Sample sources There are a large number of potential sources to obtain sample addresses or telephone numbers whose use depends on the objectives of the survey. These include : Databases of, for example, rideshare club members Commercially available databases drawn from magazine subscription lists, sweepstakes entries, telephone directories, etc. These databases can have a surprisingly large number of names matched to addresses and telephone numbers Telephone numbers derived from a random-digits process, which is available from a large number of commercial suppliers Databases of business addresses and phone numbers are also available from similar sources. The choice of which database to use depends primarily on: the objectives of the project and the hypotheses being tested the extent to which the database covers the target population defined by the objectives and hypotheses. Beware of using databases that are convenient and close at hand but may represent a biased sub-sample of your true target population. For instance, a rideshare database clearly does liQl represent all caxpoolers. the expected incidence or hit rate" expected from the database for efficiency purposes, which is important but must not override the cautions noted just above. 71


Sample sources

There are a large number of potential sources from which to obtain sample addresses or telephone numbers, whose use depends on the objectives of the survey. These include:

• Databases of, for example, rideshare club members
• Commercially available databases drawn from magazine subscription lists, sweepstakes entries, telephone directories, etc. These databases can have a surprisingly large number of names matched to addresses and telephone numbers
• Telephone numbers derived from a random-digit process, which is available from a large number of commercial suppliers
• Databases of business addresses and phone numbers, which are also available from similar sources

The choice of which database to use depends primarily on:

• the objectives of the project and the hypotheses being tested
• the extent to which the database covers the target population defined by the objectives and hypotheses. Beware of using databases that are convenient and close at hand but may represent a biased sub-sample of your true target population. For instance, a rideshare database clearly does not represent all carpoolers.
• the expected incidence or "hit rate" expected from the database, which matters for efficiency purposes but must not override the cautions noted just above.

Summary

If all of the above steps are taken, including:

• properly defined target population;
• random selection process;
• checking for proper representation, reweighting if applicable;
• correct sample size drawn; and
• correct source chosen for the sample;

then the survey should produce reliable information. How useful that information is will depend largely on how well the survey instrument is designed to collect that information. While this manual will not attempt to instruct the reader on how to write surveys (which is a skill gained through years of practice and experience), there are a few general rules that a research sponsor should ensure are followed. Those guidelines are presented in the next section.


Chapter Five
Understanding Statistics

INTRODUCTION

It has been established earlier in this manual that survey research is an effective way to collect information to help evaluate Commuter Assistance Programs. The surveys can produce:

• baseline or benchmark data to which future results will be compared
• results to compare against baseline data
• information about the marketplace which can be used to redirect resources

It should be noted that a survey of a sample from a population, rather than a census of the population, carries inherent uncertainty. To illustrate the issue, take a deck of cards as an example. Suppose we could randomly select 20 cards from the deck and had to estimate (from the cards we drew) what percentage of the cards in the deck were black and what percentage were red. It is conceivable, albeit unlikely, that we would randomly select 20 red cards and no black ones. We would then be forced to conclude (incorrectly, of course) that all of the cards were red.

STATISTICS

The question that this section will answer is: how much uncertainty arises from a given sampling procedure, and how are results analyzed in light of that uncertainty?

Confidence levels and confidence intervals

Two statistical concepts are used to describe the uncertainty arising from a sample:

• Confidence levels, which are a measure of the probability that the "true" result lies within a certain range. (The "true" result is the result we would have obtained if we had sampled the entire population rather than just a portion of it.)
• Confidence intervals, which describe the size of the range mentioned above.

The confidence levels and confidence intervals are dependent on one another. Any given result has a confidence interval associated with a 95% level of confidence, a different (and smaller) interval associated with a 90% level of confidence, another associated with an 80% level of confidence, and so on. For any given sample, the confidence interval and its associated confidence level can be determined through certain statistical formulas. The formulas may appear daunting at first, but they are really quite simple to use.

There are several different types of formulas. This section concentrates on the two types used most frequently in survey research:

• those relating to results reported as proportions (such as, "25% of the population carpools at least once per week")


• those relating to results reported as means or averages (such as, "the average commute distance in the area is 14.6 miles")

While it is not vital for a research sponsor to be able to calculate confidence intervals and perform significance tests, it is a good idea to understand where intervals come from, how tests are performed, and what the resulting values mean. This chapter will present the information necessary to make the relevant calculations, and will follow with a table of fairly typical results that should allow the reader to get a general idea of what sort of confidence intervals to expect from data.

Proportions

Given a sample size and a result in the form of a proportion, the confidence interval associated with any given confidence level can be determined. The first step is to determine the standard error of the percentage of the result. In some cases this value has been established from prior research (such as the census). If the value of the standard error is not known (which is frequently the case), it can be estimated by the following formula:

standard error = √( p × (1 − p) / n )

where:
n = size of the sample
p = sample proportion

The standard error is then multiplied by a factor, the value of which is dependent on the confidence level we wish to achieve. Some commonly used values are:

Confidence Level    Factor Value
80%                 1.282
90%                 1.645
95%                 1.960
99%                 2.326

These values are valid as long as the associated sample sizes are relatively large (over 30 respondents or thereabouts). The resulting figure is then added to the survey result to determine the upper limit of the confidence interval, and also subtracted from the survey result to determine the lower limit of the confidence interval.


Using the example mentioned above, suppose a sample of 200 respondents yields the result that 25% (or 50 respondents) carpool at least once per week. Our estimate for how the entire population behaves would then be calculated as follows:

standard error = √( (0.25)(1 − 0.25) / 200 ) = 0.031

The confidence interval associated with each confidence level is then calculated:

Confidence Level    Factor Value    Standard Error    Confidence Interval
80%                 1.282           0.031             0.040
90%                 1.645           0.031             0.051
95%                 1.960           0.031             0.061
99%                 2.326           0.031             0.072

We can say, therefore, that we are 80% confident (or, to be more precise, there is an 80% probability) that the proportion of the population that carpools once per week lies between (0.25 − 0.04 = 0.21, or) 21% and (0.25 + 0.04 = 0.29, or) 29%. This also implies that there is a 20% chance that the proportion of the population that carpools once per week lies either between 0% and 21% or between 29% and 100%. We can furthermore assume that the percent chance of the population's proportion lying in the lower range is equal to the probability of the proportion lying in the upper range, meaning there is a 10% chance of the result being between 0% and 21%, and a 10% chance of the result lying between 29% and 100%.

We are 95% confident (or there is a 95% probability) that the population's result lies between (0.25 − 0.061 = 0.189, or) 18.9% and (0.25 + 0.061 = 0.311, or) 31.1%, and, as in the example above, we know that there is an equal chance of the result lying above or below those limits, so there is a 2.5% chance that the result is between 0% and 18.9%, and a 2.5% chance that the result is between 31.1% and 100%.
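For readers who prefer to check these figures by machine, the following Python sketch reproduces the calculation above. The function name is illustrative; the arithmetic is exactly the standard-error and factor calculation just described.

```python
import math

Z_FACTORS = {80: 1.282, 90: 1.645, 95: 1.960, 99: 2.326}

def proportion_interval(p, n, confidence_level=95):
    """Confidence interval for a reported proportion p from n respondents."""
    standard_error = math.sqrt(p * (1.0 - p) / n)
    margin = Z_FACTORS[confidence_level] * standard_error
    return p - margin, p + margin

# The example above: 25% of 200 respondents carpool at least once per week.
low, high = proportion_interval(0.25, 200, 95)
print(f"standard error = {math.sqrt(0.25 * 0.75 / 200):.3f}")  # about 0.031
print(f"95% interval: {low:.3f} to {high:.3f}")                # about 0.190 to 0.310
```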


In cases where a significant percentage of the entire target population was surveyed, a factor is applied which increases our confidence in the results. Since the notion of statistical confidence is based on the idea that we might not have surveyed a truly representative sample due to purely random circumstances, it follows that our confidence will increase when we survey a larger percentage of the population, to the point where we are 100% confident if we have in fact surveyed the entire population. This becomes particularly relevant when we sample, for example, rideshare member databases, which might have 800 members of which we might survey 250 or so.

The factor is calculated by the following formula:

factor = (Total Target Population Size) / ((Total Target Population Size) − Sample Size + 1)

The factor is then multiplied by the actual sample size of the survey and yields what is called the effective sample size. This effective sample size, rather than the actual sample size, should be used in all calculations where confidence intervals and analysis of differences require a sample size element. You will notice from the formula that, unless the sample size is a reasonably large fraction of the target population size, the factor will be virtually equal to 1.

Means

The procedure for determining confidence levels and confidence intervals for results involving a mean value is almost identical to determining levels and intervals for proportions. The only difference is how the standard error is estimated. Again, the value of the standard error may have been established from prior research. If the value of the standard deviation is not known (which is frequently the case), it can be estimated by the following calculation.

For each observation in the data, calculate:

(Result − Mean of all results)²

and average these squared differences across all observations. This is known as the variance of the sample. (In the case of a proportion, it is equivalent to Percentage × (1 − Percentage).)

Then continue by taking the square root of the variance. This is the estimate of the standard deviation of the population, and is used in cases where a prior value has not been established. (It is equivalent to √( Percentage × (1 − Percentage) ).)

Next, divide the standard deviation by the square root of the sample size:

Standard Deviation / √(Sample Size)


This is the standard error of the mean. It is instructive to note that the standard deviation is almost exactly equal to the average difference between each response and the mean value.

The standard error is then multiplied by a factor, the value of which is dependent on the confidence level we wish to achieve. Some commonly used values are:

Confidence Level    Factor Value
80%                 1.282
90%                 1.645
95%                 1.960
99%                 2.326

NOTE: This type of calculation does make one major assumption that was not discussed in the section on percentages. The observed values should be approximately normally distributed, which is to say there should be about half the results above the mean and half below the mean, and there should be more results close to the mean than there are far from the mean. A curve of the results should be bell-shaped. If the results do not follow this pattern, for instance if there is a huge mass of results between 0 and the mean and then fewer, more spread out results above the mean, this type of calculation is inappropriate. Generally, survey results from larger surveys will follow the assumption of normal distribution. However, it is important to check the results to ensure that this is the case. Particularly with smaller surveys (50 or fewer respondents), the assumption may be violated.

The resulting figure is then added to the survey result to determine the upper limit of the confidence interval, and also subtracted from the survey result to determine the lower limit of the confidence interval.

Using the example mentioned above, suppose a sample of 200 respondents yields the result that the average commute distance is 14.6 miles, and the variance turns out to be 256. Our estimate for the standard deviation of the population would then be calculated as follows:

√256 = 16


The standard error would be:

16 / √200 = 1.13

The confidence interval associated with each confidence level is then calculated:

Confidence Level    Factor Value    Standard Error    Confidence Interval
80%                 1.282           1.13              1.45
90%                 1.645           1.13              1.86
95%                 1.960           1.13              2.21
99%                 2.326           1.13              2.63

We can say, therefore, that we are 80% confident (or, to be more precise, there is an 80% probability) that the true average commute distance of the population lies between (14.6 − 1.45 =) 13.15 miles and (14.6 + 1.45 =) 16.05 miles. We are 95% confident (or there is a 95% probability) that the population's result lies between (14.6 − 2.21 =) 12.39 miles and (14.6 + 2.21 =) 16.81 miles.
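The same worked example can be verified with a short Python sketch. The mean_interval function is illustrative only; it estimates the variance from raw responses exactly as described above, while the lines below it simply re-run the 14.6-mile example from its summary figures.

```python
import math

Z_FACTORS = {80: 1.282, 90: 1.645, 95: 1.960, 99: 2.326}

def mean_interval(responses, confidence_level=95):
    """Confidence interval for the mean of a list of numeric survey responses."""
    n = len(responses)
    mean = sum(responses) / n
    variance = sum((r - mean) ** 2 for r in responses) / n   # average squared difference
    standard_error = math.sqrt(variance) / math.sqrt(n)
    margin = Z_FACTORS[confidence_level] * standard_error
    return mean - margin, mean + margin

# Checking the worked example directly: n = 200, variance = 256, mean = 14.6 miles.
standard_error = math.sqrt(256) / math.sqrt(200)        # about 1.13
low = 14.6 - 1.960 * standard_error                     # about 12.4 miles
high = 14.6 + 1.960 * standard_error                    # about 16.8 miles
print(round(standard_error, 2), round(low, 1), round(high, 1))
```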


Table of typical confidence interval sizes at 95% confidence level

Below is a table of typical confidence intervals for means and proportions. The 95% level has been chosen since it is one of the most widely used confidence levels. The proportions that have been chosen are 10%, 25%, and 50%; the means are on 5-point and 10-point scales with fairly typical standard deviations (which, as was mentioned earlier, are pretty much equivalent to the average difference between each response and the overall mean value).

Keep in mind when using this table that the sample size refers to all respondents answering the question, not necessarily the sample size for the entire project. Some surveys will ask questions of only a portion of the respondents (for instance, "how many people are in your carpool?" obviously will only be asked of people who do carpool). Keep in mind that this table also assumes a normal (i.e., bell-shaped) distribution, which is particularly prone to be violated when small sample sizes are used.

Sample    10% proportion    25% proportion    50% proportion    5-point scale       10-point scale
Size      confidence        confidence        confidence        (avg. diff. from    (avg. diff. from
          interval          interval          interval          mean = 0.8)         mean = 2.2)
50        1.3%              2.7%              3.5%              0.11                0.31
100       0.9%              1.9%              2.5%              0.08                0.22
150       0.7%              1.5%              2.0%              0.07                0.18
200       0.6%              1.3%              1.8%              0.06                0.16
250       0.6%              1.2%              1.6%              0.05                0.14
300       0.5%              1.1%              1.4%              0.05                0.13
500       0.4%              0.8%              1.1%              0.04                0.10
1,000     0.3%              0.6%              0.8%              0.03                0.07
1,500     0.2%              0.5%              0.6%              0.02                0.06

Determination and analysis of differences for significance

The previous section demonstrated that there is uncertainty about any result that comes from a sample. The "true" result of the target population that was sampled may not be the same as the result that was obtained from the sample. Statistics allows us to know the probable range in which that true result falls.

Now suppose this concept is taken one step further. Suppose we survey two different populations, or even one population at two different times, and obtain two results. There will be uncertainty about each of these results, as demonstrated in the previous section. Since we're uncertain about the first result, and uncertain about the second result, the sample results could have come out differently even if both populations had the same "true" result.

For example, suppose we sample one population at two different times and determine the percentage of commuters who carpooled at least once per week. Suppose in the first sampling we obtained a result at a 95% confidence level of 25% +/- 6.1%, and in the second we obtained a result of 28% +/- 6.1%. Even though the samples yielded different results, the "true" result could have been 26% in both cases; or it could have been 24% in both cases, or 30%.

If we obtain two results from independent samples, how do we know if the "true" results that they represent are different? The answer comes from an extension of the concept of confidence intervals and confidence levels.


If it is possible to determine the percent chance that the "true" result lies within a certain range (for example, in the first of the two carpool results we know that there is a 95% chance that the result lies between 18.9% and 31.1%, a 2.5% chance that the result lies between 0% and 18.9%, and a 2.5% chance that the result lies between 31.1% and 100%, and we know the analogous ranges for the second result), then it should be possible to determine what the chance is that both results lie within a certain range for any given confidence level. If we can do that, we can determine what our confidence level is that the "true" results represented by the results of the sample are in fact different. That, in a nutshell, is the concept of statistically significant differences. The rest is applying the appropriate formulas.

Significant differences for proportions

It is not particularly important for research sponsors to comprehend the mathematics behind testing for statistically significant differences. An understanding of the discussion above is quite sufficient. However, for the more mathematically-minded readers, the formulas are presented. Given two proportion results from two independent samples, the procedure to determine whether or not the proportions are statistically significantly different is:

1. Calculate the value of d:

d = ((Sample size 1 × Result 1) + (Sample size 2 × Result 2)) / (Sample size 1 + Sample size 2)

2. Calculate the value of the following formula:

(Result 1 − Result 2) / √( d × (1 − d) × (Sample size 1 + Sample size 2) / (Sample size 1 × Sample size 2) )

3. Compare this result to the following table:

If the formula value is at least    The confidence level that the results are different is:
1.282                               80%
1.645                               90%
1.96                                95%
2.57                                99%
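A short Python sketch of the two steps above, using the two carpooling results discussed earlier (25% and 28%, each from 200 respondents), is shown below. The function name is illustrative; the comparison values are the ones in the table.

```python
import math

def proportion_difference_statistic(p1, n1, p2, n2):
    """Test statistic for the difference between two independent proportions,
    following the two steps above."""
    d = (n1 * p1 + n2 * p2) / (n1 + n2)                    # step 1: pooled proportion
    se = math.sqrt(d * (1 - d) * (n1 + n2) / (n1 * n2))    # step 2: denominator
    return abs(p1 - p2) / se                               # magnitude compared to the table

# Carpooling measured twice: 25% of 200 respondents, then 28% of 200 respondents.
value = proportion_difference_statistic(0.25, 200, 0.28, 200)
print(round(value, 2))   # about 0.68
```

The value of about 0.68 falls below 1.282, so we cannot even be 80% confident that the two "true" proportions differ, which is consistent with the overlapping confidence intervals noted earlier.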


Significant differences for means

The method for testing for significant differences between mean results follows the same general pattern as the test for proportions:

1. Calculate the variance for each of the two sample results (the average of (Result − Mean of all results)², as described earlier).

2. Calculate the value of the following formula:

(Result 1 − Result 2) / √( Variance 1 / Sample size 1 + Variance 2 / Sample size 2 )

3. Compare this result to the following table:

If the formula value is at least    The confidence level that the results are significantly different is:
1.282                               80%
1.645                               90%
1.96                                95%
2.57                                99%
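The analogous calculation for means can be sketched the same way in Python. The figures for the second survey (a 16.1-mile average with the same variance and sample size) are hypothetical and are used only to illustrate the arithmetic.

```python
import math

def mean_difference_statistic(mean1, variance1, n1, mean2, variance2, n2):
    """Test statistic for the difference between two independent means,
    following the steps above."""
    se = math.sqrt(variance1 / n1 + variance2 / n2)
    return abs(mean1 - mean2) / se

# Average commute distance from two independent surveys:
# 14.6 miles (variance 256, n = 200) versus a hypothetical 16.1 miles (variance 256, n = 200).
value = mean_difference_statistic(14.6, 256, 200, 16.1, 256, 200)
print(round(value, 2))   # about 0.94 -- below 1.282, so not significant even at 80%
```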


Statistically significant differences versus meaningful differences

It is easy to get carried away making calculations of statistical significance of differences, and to lose sight of whether or not those differences are meaningful. Particularly confusing is the question, "is that difference significant?" when what the question really means is, "is that difference meaningful?" The answer may very well be, "The difference is statistically significant, but it isn't meaningful." For instance, we might discover that left-handed drivers who ride in carpools drink 1.2 cups of coffee each morning, whereas right-handed drivers who ride in carpools drink 2.8 cups of coffee each morning. Given a reasonable sample size and low variance, this might very well constitute a statistically significant difference. However, while Maxwell House might decide this difference is meaningful, it is doubtful that most CAP managers would find any use for it.

While the above example is admittedly a bit flippant, it demonstrates clearly the difference between significant differences and meaningful differences. This leads back to the discussion at the beginning of the section on formulation of hypotheses. The concepts of confidence intervals, confidence levels, and statistically significant differences allow you to design experiments and test hypotheses that you have made about the population. When the confirmation or denial of the hypotheses leads to re-allocation of resources and effort, the survey has performed its function effectively.


Chapter Six
Survey Planning and Budgeting

INTRODUCTION

This chapter will focus on decisions a CAP will have to make before conducting an evaluation. Specifically, the focus of the chapter will be on how to plan and fund an evaluation. While this sounds simple enough, many of the considerations discussed below can have a profound impact on survey costs and data reliability.

SURVEY TIMING

Timing can be a key issue in conducting surveys and can have a significant impact on results if not properly controlled for. In the cable television industry, for example, it is important not to conduct customer satisfaction surveys immediately after rate increases are announced. Employee satisfaction studies are usually not conducted immediately after reviews and/or pay increase announcements for similar reasons. Attitudes towards use of commute alternatives can be affected by prevailing weather patterns, such as extreme heat (or, in the case of northern areas, extreme cold). Some elements of timing to be considered when planning surveys include:

Seasonality

Seasonality can be a major issue in survey results, particularly in an area like Florida where there is a high influx of seasonal residents with predictable impacts on traffic levels. Studies evaluating the perceived (or actual) level of congestion will be significantly affected by the season in which they are conducted. It is not always possible to conduct surveys at "ideal" times, nor is it always possible to determine what an "ideal" time may be. The best approach is usually to do as much as possible to ensure that prevailing conditions are similar when a follow-up survey is conducted. For instance, doing an initial "congestion perception" study during low season, implementing some congestion reduction measures, and then following up during high season would be methodologically poor, and would probably lead to the conclusion that the policies implemented had actually increased rather than decreased congestion.

Frequency

Survey frequency is another issue that must be dealt with. The available budget is usually a major issue in determining potential survey frequency. Budgets seldom allow for tracking surveys to be conducted more than once a year (if that). In cases where seasonality may be an issue (see above), you may want to consider spreading your interview process throughout the year rather than doing all of the interviews at once. This allows


for calculation of a rolling average once you have conducted enough interviews to get a baseline, and may give you fairly up-to-the-minute insight into any new situations that may affect your customers or whoever else you are surveying. However, this approach generally involves more expense, particularly if you are having your surveys updated every time you conduct them.

Timing evaluation results for planning and budgeting purposes

Evaluation results are typically desired for year-end evaluations and new-year planning purposes. In order to effectively integrate the results of the evaluations into the planning process, the survey must be conducted reasonably far in advance of the planning period. Suggested advance times to start planning the surveys are:

Type of Survey                          Advance Time to Start
Focus Groups                            2 Months
Mail Surveys                            4 Months
Written, hand-distributed surveys       2 Months
Telephone Surveys                       3 Months
Personal Interviews                     6-8 Months
Panels                                  N/A, since this is generally an ongoing process

Budgeting

The primary decision made when budgeting for a survey is the determination of sample size. The concept of how sample size affects the precision of results has been discussed previously. The question that a research sponsor must answer is: how much is the extra precision and certainty from the larger sample size worth?

As a rule of thumb, to get a "quick and dirty" estimate for a population, a sample size of at least 150-200 should be considered. This allows for a wide range of uncertainty, but generally gives a fair idea of the population's attitude. For a good, solid estimate of the tendencies of a population, sample sizes of 400 or more respondents should be considered. Often a sample size of 400 or so may be used to establish benchmarks, and then 200 additional interviews are used as follow-ups to gauge whether there has been any change since the initial study was done.

PLANNING SURVEY PROJECTS

Probably the single most important step in planning any research project is the initial planning step. The survey must meet the data needs of the evaluation that you are conducting.


If the project is poorly planned in the initial stages, there is virtually no chance that it will result in useful data and meaningful, valuable changes in policy and operations.

The most effective way to plan a research project is to take a rigorous, scientifically based approach. Ideally, this type of project will be approached as if it were a measurement of a natural phenomenon, as in chemistry, biology, or physics. The basis of the research should be the same as in those sciences. Research design should follow the classic process of hypothesis, experiment, and conclusion. Fortunately for researchers, the types of problems encountered don't demand the analytical complexity of problems in the sciences, but they do demand proper planning and design.

There are five essential elements that any research sponsor must have firmly in mind when initially organizing a research project:

• Given the evaluation being conducted, what decisions will be made with the results of the survey? Or, alternatively, how will current operations, policies, and resource allocations be changed based on the survey findings?
• Given the decisions that are being made with the research, what is (are) the specific hypothesis (hypotheses) that is (are) being tested by the research?
• What are the pieces of data that need to be determined in order to prove or disprove the hypothesis, and in what form should they be measured? Furthermore, since a sampling process is involved, how confident do we need to be of the results? Is it sufficient for the results to be within 5%, 10%, 50%?
• What are the best sources of information? Does data already exist that answers this question? If not, where is the best place to look for it? If surveying is involved, who are the best people to ask questions of and collect data from?
• How much budget is available to conduct the research?

Each of these areas will be discussed in more detail below.

Step 1: Identify decisions to be made

The evaluation selection process should be a key step in identifying the decisions that are to be made. These decisions should be made explicit at the beginning of the project. This step is unfortunately often omitted from the research process. Even if the evaluator has determined that they will conduct a needs assessment, it is easy to get into trouble by setting vague objectives such as "I want to know what my rideshare database members' demographics are." This approach often leads to faulty research design. Often the managers assume that the personnel in charge of actually conducting the research have the same perception of the project's goals, only to find out when the data comes back that some elements were left out or misinterpreted. Or the research sponsor will assume that he or she understands the process so well that the step of specifying the decisions can be skipped, and the sponsor needs only to ask for specific data elements. This is a serious mistake: the sponsor often discovers new data elements are needed that could easily have been identified if the planned decisions had been made explicit.


The sponsor should always ask for information by specifying the decisions to be made, and never merely ask for data. A research sponsor doesn't want to "know the demographics" just to know them. They want to evaluate specific portions of, or processes within, their organization, or perhaps want to determine which specific actions are required to make the program more effective: whether new marketing campaigns are needed; whether the entire spectrum of the area's population is being served, and if not, which groups are underserved, why, and whether resources should be allocated to target those groups; and so forth. A simple profile of demographics may or may not provide the data necessary to make those decisions. But if the decisions that are going to be made are known in advance of the design of data-collection instruments and procedures, efficient and correct instruments, sampling plans, and analytical tools can be identified and put to use.

This point cannot be reiterated too many times. A large number of research projects, possibly even a majority, suffer from a lack of pre-planning and identification of decisions to be made, sometimes to the extent that the entire effort ends up being useless or misleading.

It should be noted that in cases where decisions have been made and will not be changed, due to commitments, regulatory requirements, etc., it is wasteful to spend research dollars to show whether the decision is right or wrong. The research should be directed towards decisions that have not been made and will be made more effectively with additional information at hand.

The decisions that will be made based on the survey results should be explicitly identified by the research sponsor. Will resources be re-allocated, and if so, how? If the project is evaluative, how will the evaluation be used to improve operations, policies, and procedures, and specifically which operations, policies, and/or procedures are being evaluated? All of this information should be laid out on paper as the first step. Following completion of this effort, the next step is to generate the hypotheses to be tested by the research project.

Step 2: Hypothesis generation

Any experiment in any discipline must test a hypothesis. A research project is an experiment like any other; it should test and either confirm or reject a specific hypothesis (or multiple hypotheses). The hypothesis should take the form of a direct statement, as in "Carpoolers have a significantly different set of demographics than people who drive alone," or "75% of all rideshare database members have a high level of satisfaction with the ridematching service, 'high' being defined as 8, 9, or 10 on a 1-10 scale."

The research sponsor should identify the decisions to be made by the evaluation (Step 1 above). Then the research sponsor and the research project manager should work together on generating the hypotheses that, when tested, will provide the sponsor with the information needed for the decisions to be made. The following elements must be present in any sound hypothesis:

• The measurement that is being made and tested (such as a percentage, or an average rating)


• The scale that the measurement is being made on (for example, the minimum threshold level where a numerical scale is involved, or the actual statements used in categorical scales)
• The source or target population from which the information will be drawn (such as "rideshare database members" or "all commuters" or "residents of the 5-county area")

If, for example, a re-allocation of resources to target groups that are under-represented in a ridesharing database (compared to the service area's population) is the decision under consideration, one might generate the following hypotheses:

1. The demographics of the ridesharing database are significantly different than the commuter population of the area, specifically in terms of: income, age, race, gender, presence of children under age 6. (The list might be lengthened, or some elements might be dropped. But the hypothesis should be explicit.)

2. Those demographic groups that are under-represented in the database have a certain minimum threshold interest in carpooling. The minimum threshold interest should also be made explicit: e.g., 20% of the commuters in the area who are in these groups say they are "somewhat or very" interested in carpooling at least once per week on a regular basis. Or one might hypothesize that their interest level is not significantly different than the interest level of the demographic groups that are over-represented in the database.

3. One might also generate a hypothesis about the media that would be most useful to use to reach this population. However, it is also quite possible that few media are available (perhaps just direct mail and newspapers) within the budgets allowed, so that regardless of what the research finds, the same approach will be taken. As mentioned above, it is a waste of time and money to identify and collect data for a decision that has already been made and cannot be changed.

The hypothesis should be specific, and should be a direct statement that will either be confirmed or denied by the research. Vague statements like "Rideshare database members are satisfied with the service provided to them" are not useful or effective hypotheses, because they leave open to interpretation exactly what "satisfied" means. Does this refer to every database member? Does it refer to an average level of satisfaction, and if so, how is "satisfaction" defined? A better statement would be, "75% of all rideshare database members will say that they are very satisfied (or will rate their satisfaction at least an 8 on a 10-point scale, if a numerical scale will be used) with the ridematching service provided to them."

Step 3: Identification of data needed to prove or disprove hypotheses

Identifying Data Needs

Many research sponsors and research project managers begin their evaluation process at this step, and call it "determining what we need to know." Sometimes this even takes the form of writing survey questions and specifying response patterns (scales, categories, etc.) without first specifying the type of evaluation being done, what processes or parts of the organization are


being evaluated, the decisions to be made with the research, the hypotheses being tested, or the data needed to test the hypotheses, thus greatly compounding the potential for error. As we have seen, it is impossible to effectively determine data needs without having explicit hypotheses. And it should be clear that survey questions should definitely not be written before data needs are determined.

When the hypotheses have been generated, identifying the data needed is actually quite straightforward. By reviewing the hypotheses used above as examples, it is clear that respondent demographics and stated intentions or interests will be included on the questionnaire. It is likely that other hypotheses will have been generated in the planning process as well.

When the data needed have been properly identified, it is usually also fairly straightforward for a survey research professional to create the actual survey questions and response scales and/or categories to be used. While it is certainly appropriate for a research sponsor (and presumably this sponsor is not an experienced survey research professional) to review and comment on a questionnaire, it is not advisable for a non-professional to formulate the actual questionnaire. Issues of response bias, question order bias, skip pattern complexity, response choice formatting and design, types and formats of data needed for certain statistical tests and modeling procedures, standard response scaling used in particular types of questions, etc., are all important in questionnaire design but are not issues that most research sponsors are familiar with or need to be familiar with.

The Importance of Control Groups

One key concept that is often ignored in evaluations of program effectiveness, particularly where there is a question of what the impact of a program has been, is the notion of a control group. A control group is a population that is exactly (or as close to exactly as reasonably possible) like the group on which you are measuring the effects of the program, except that it has not been exposed to the program. The measured behavior (such as the percentage of people carpooling) should be measured both for the experimental group and the control group to determine what the effectiveness of the program has been.

Many experiments skip the step of having a control group by assuming that a control group would have experienced no change in behavior, and thus any measured change in the experimental group is due to the program. This approach can lead to very erroneous conclusions. A major decrease in the price of gasoline, for instance, may reduce the number of people carpooling in the population. If the group that was exposed to the program shows a very small increase in carpooling, it may be concluded that the program was ineffective. However, if it was also known that carpooling within a control group actually dropped by 15%-20% due to the decrease in gasoline prices, a different conclusion might very well be reached.

Due to cost constraints, it is sometimes impossible to conduct a research project with an appropriate control group. Other data sources, such as census data, may have to serve as a


surrogate for data from a true control group. It is extremely important, however, to understand the notion of a control group and how results from the control group may impact conclusions reached from research data.

The Concept of Sampling

Usually a research project will involve conducting tests on a sample of the population rather than every member of the population. This occurs because few research sponsors can afford to sample every member of a target population. When this happens, statistical uncertainty is created in the results based on whether the sample accurately represents the population. This is not a question of proper sample design procedures; it is a fact of the sampling process. To illustrate the issue, take a deck of cards as an example. Suppose we could randomly select 20 cards from the deck and had to estimate (from the cards we drew) what percentage of the cards in the deck were black and what percentage were red. It is conceivable, albeit unlikely, that we would randomly select 20 red cards and no black ones. We would then be forced to conclude (incorrectly, of course) that all of the cards were red.

Statistical procedures exist that identify what the probability is of having made an error in sampling, and how large that error might be. What must be determined before an experiment is undertaken that involves sampling is what level of potential error will be tolerated. This is usually based on the importance and economic ramifications of the decision being made with the research results. This issue was discussed at length in the sections of this manual covering sampling and statistics.

Step 4: Identifying information sources

There are a number of possible sources for information. To determine demographics, for example, there is a wealth of free data available from the U.S. Census. This includes the standard population and housing surveys. In addition, the census releases other, more customizable products, such as the Public Use Microdata Samples (PUMS), which allow the user to create customized cross-tabulations of any census long form data from a 1% sample of all census long forms returned.

Many Commuter Assistance Programs have a number of evaluative tools available from their own records. These include match rates, number of vans in service, number of companies contacted, number of commuters in the database, and so forth. Traffic count data, available from local Department of Transportation offices, can also be useful in evaluations and analysis.

In many cases, however, there will be a particular hypothesis that simply can't be proven or disproved by publicly available information, particularly when subjective evaluations (such as satisfaction ratings, ratings of agency responsiveness, and so on) are required. When that situation arises, survey research can provide the means for answering many of these questions.


It is therefore imperative that the evaluation planner carefully review all available sources before beginning the survey process.

In a survey research project, it is crucial to ask the right questions. That will be accomplished by carefully following the steps outlined above. It is equally important, however, to ask those questions of the right people. Identifying those people is the crucial first step in developing the sampling strategy.

Suppose we determine that we want to estimate the interest level in carpooling among commuters who are not currently in our ridesharing database, as shown in some of the examples above. No matter what questions we ask, we aren't going to get a good estimate by interviewing retirees. The goal of the sampling plan should be to identify commuters and interview them, and only them. Data from other groups, such as retirees or vacationing families, will not provide data that will help to prove or disprove our hypotheses.

The hypothesis or hypotheses should always give an indication of where to draw the sample from. The hypotheses given above specifically mention "carpoolers" and "rideshare database members." As mentioned earlier, a sound hypothesis should always contain the source, or target population, from which the information will come. If the hypothesis is properly constructed, determining the correct population should not be difficult. Actually obtaining responses from people in those groups, and verifying that your respondents did belong to those groups, may be more of a challenge.

If no available sources exist to pre-identify the people you are contacting as belonging to your target population, it may be necessary to include an identification question (often called a screener) in your survey instrument. The screener is essentially a question that verifies the identity of the respondent in relation to the target population. Many surveys have quotas for males and females, for example. Often a research sponsor wishes only to obtain survey responses from adults (18 or older, or 21 or older). If, as in the case above, one only wants to collect data from commuters, a question very early in the survey would ask something like, "Do you commute to work at least three times per week?" to verify that the respondent was in fact in the target population.

Even when a database identifies a person as a member of a target population, it is often a good idea to verify the information through use of a screener. Sometimes databases are out of date or have errors in the entry of data. Using a screener can avoid unnecessary expenditure of usually scarce research dollars on unwanted responses.

Step 5: Determining budget available and the best way to use it

There is often very little leeway in how much budget is available to conduct research. Budget constraints are a very important factor in determining research directions. Limitations on expenditures may eliminate the possibility of conducting certain types of research, or may so limit the number of survey responses you can obtain as to make the information gained of little value.


Some objectives may have to be recast in the light of budget realities, particularly in terms of the confidence levels the research sponsor is willing to accept from the data. These considerations must be weighed as the sampling and interviewing plan progresses.

Different types of surveys are available at varying levels of cost. To some extent the surveys meet different types of objectives, and some survey formats are incompatible with certain objectives. For instance, those with limited budgets may be tempted to use focus groups to prove or disprove quantitative hypotheses (such as, "50% or more of commuters favor HOV lanes over toll roads"). Unfortunately, focus groups are not designed to handle quantitative issues.
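As a rough way of connecting budget to precision when weighing these options, the Python sketch below estimates how many completed surveys a given interviewing budget buys and what confidence interval results at a 50% answer (the widest case). The dollar figures are hypothetical; actual costs per completed survey vary widely by survey type and by market.

```python
import math

def affordable_precision(budget, cost_per_complete, confidence_factor=1.960):
    """How many completed surveys a budget buys, and the resulting
    confidence interval at a 50% result (the widest case)."""
    n = int(budget // cost_per_complete)
    interval = confidence_factor * math.sqrt(0.5 * 0.5 / n)
    return n, interval

# Hypothetical figures: $8,000 of interviewing budget at $20 per completed survey.
n, interval = affordable_precision(8000, 20)
print(n, round(interval * 100, 1))   # 400 completes, about +/- 4.9% at 95%
```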


Chapter Seven
Communicating Evaluation Findings

INTRODUCTION

While a CAP can take every precaution and devise a nearly flawless evaluation methodology, the value is lost if the CAP cannot effectively communicate the results of their efforts. This chapter will focus on ways in which the Commuter Assistance Programs in Florida can communicate evaluation findings to a variety of audiences.

GETTING TO KNOW YOUR AUDIENCE

To develop an effective evaluation report, the CAPs must first understand who their audience is, what information will be of interest to them, and when the information should be available to satisfy that audience's needs.

Who is the audience for a CAP evaluation report and what do they want to know?

Although the audiences for a CAP evaluation report will differ by CAP, a number of groups with interest in the CAP can be identified. These include the following:

• Funders
• CAP Staff
• CAP Program Directors
• Board of Directors
• Media
• Service Providers
• Politicians
• Clients
• Community Groups
• Other interested parties

Each of these audiences has specific needs from an evaluation. It is up to the CAP to identify what those needs are and to ensure that the information of interest is provided in the evaluation report. Each of these audiences is discussed below.

Funders - An important audience for CAP evaluation reports. This group will want to ensure that the money provided is being used wisely to achieve identified goals. Prior to beginning an evaluation, the CAP should contact its funders to determine what the specific expectations of the CAP program are, and develop an evaluation that answers those questions.

CAP Staff - This is an important audience for CAP evaluation reports because this group is the one that will be most affected by the results. CAP staff can use the evaluation to streamline efforts,


to clarify the customer service focus, and to correlate efforts with the achievement of the CAP mission and goals.

CAP Program Director - The evaluation should help the director determine if the current focus and efforts are achieving desired results. An effective evaluation will help the director refine efforts and target new actions that can help achieve stated goals.

Board of Directors - The evaluation is important to the Board because it helps them determine if their guidance and policy directions are effective in meeting program goals. The evaluation will also help in determining future Board roles.

Media - The media will want two things from an evaluation. They will be interested to see if the CAP is meeting its objectives, and they will want anecdotal information that can be used in developing newspaper copy. If the anecdotal information is good, the media will develop articles that can be an excellent source of program promotion.

Service Providers - Third-party providers, such as taxi companies for guaranteed ride home, can use CAP evaluation results to improve the services provided on behalf of the CAP. Many of these service providers have specific internal customer service and/or satisfaction goals that they want to achieve. The CAP evaluation can help them define their success.

Politicians - The CAP evaluation can help the politician determine if the needs of constituents are being addressed. The evaluation can also serve as an educational/promotional opportunity because it can provide the politician with information about CAP activities and services. Ultimately, the evaluation can serve as a decision-making tool.

Clients - Customers of the CAP are interested in learning about changes in services and how these changes can affect them. They may also be interested in learning how their actions have contributed to the community and/or program success.

Community Groups - Many community groups will be interested in learning what services of the CAP can be beneficial for their success. They may also be looking for ways in which their group and the CAP can work together collectively to achieve common goals. Finally, the community groups may also view the evaluation in the context of comparing their achievements with those of the CAP. This can be especially true if the CAP is a private non-profit that may be competing for funding.

However, when developing an evaluation for a particular set of audiences, the CAP should keep in mind several important considerations. According to Morris, Fitz-Gibbon, and Freeman in "How to Communicate Evaluation Findings," these considerations are:

• Different users want different information, even to answer the same question. A funding agency may accept only valid and reliable test data to prove that a staff training program


has been effective, while the personnel participating in the training program would find anecdotal reports and responses from interviews or questionnaires to be the most valid and believable evidence of program effects. Other audiences might require both kinds of information.

• Some users do not know what they need. In programs where evaluations are mandated by legal requirements, for example, evaluation clients or program staff may see the assessment simply as a trial to be endured, not necessarily as a process that will lead to useful information and enlightened decisions. If the users are not willing to commit to some criteria for measuring success before the evaluation starts, it is highly unlikely that they will accept or use your final recommendations. Formative evaluators consistently face the task of helping clients define not only program objectives, but also specific evaluation information needs.

• Some users expect the evaluation to support a specific point of view. They have already made up their minds about the strengths and weaknesses of the program, and they expect that the evaluation will only confirm their opinions. The results of the evaluation may very well not support their preconceptions. So it is vital that the evaluator identify these opinions early on so that he or she can anticipate potential controversies and design reporting procedures which take them into account. Alerting users to discrepancies between their assumptions and the findings as they emerge, rather than solely in a final report, will make the users more receptive. In fact, an effective evaluation report will contain no surprises, especially with respect to central issues. All of the major questions will have been discussed with program personnel and decision makers from the very beginning, well before the final reporting stage. If the evaluation does not bring these issues to light early, the evaluator loses credibility.

• For some users, the information needs change during the course of the evaluation. It is not at all uncommon, when a formative evaluation is well under way, for the users to identify new information they would like to have. Some trainers, for example, might mention that the computer operators in a pilot training program seem to be learning a new data processing system, but the operators have developed a strong dislike for the system. You might change your evaluation plans to include some attitude measures. Although you cannot constantly alter evaluation plans, try to reserve some small portion of your resources to meet requirements for unexpected information that crops up during program implementation.

"How to Communicate Evaluation Findings," by Lynn Lyons Morris, Carol Taylor Fitz-Gibbon, and Marie E. Freeman, Center for the Study of Evaluation, University of California, Los Angeles, CA, pp. 14-15.


As the CAP develops its evaluation, it needs to be aware of these issues and plan accordingly. In most cases, the CAP office will have to decide how best to meet the needs of its primary audience, and develop its evaluation program to meet those needs.

When is the best time to conduct an evaluation?

The simple answer to this question is to say when it will be most useful. The better answer would be to say whenever the evaluation can be used to improve services and the effectiveness of the CAP. In reality, if an evaluation is to be used by all of the potential audiences listed above, then the CAPs would have to continuously evaluate their success. Such an evaluation schedule is impossible, so the CAP should prioritize the most important audiences and complete evaluations to coincide with prioritized needs.

Even then, the CAP may need to make some important decisions. For example, if the purpose of the evaluation is to improve service to justify increased funding, then it stands to reason that the evaluation should be completed to coincide with funding cycles. However, budgets are developed after plans and programs have been determined. This often occurs six months before funding is determined. If the evaluation cannot be used to make improvements to service, or used to determine what services should be offered, then the evaluation may be completed too late to justify increased funding levels that reflect new services.

The following agencies should be contacted in your area to determine when budget and funding decisions are made and when the CAP should be prepared to make its pitch for funds:

• Metropolitan Planning Organization
• Florida Department of Transportation District Office
• Local City, County Governments
• Transit agency
• Private foundations

With the exception of private foundations, most of the agencies listed above will be on one of two funding cycles: the fiscal year cycle or the calendar year cycle. Most fiscal year cycles run July 1-June 30, although federal programs begin a new fiscal year on October 1. As the name implies, calendar year cycles run January 1-December 31. For private foundations, the exact timing of funding decisions varies greatly, and the same foundation may make funding decisions multiple times during the year. For example, the Energy Foundation meets three times a year to review proposals for funding decisions, and requires that materials and proposals be submitted at least eight weeks in advance.

Regardless of who is providing the funds for the CAP, all will probably require an evaluation of efforts. When these evaluation results are due (as well as what will be evaluated and how) should be determined when the grant is provided. If an evaluation measure is to be tracked


internally by the CAP (i.e., number of inquiries about CAP services), the monitoring and/or evaluation should be continuous. This can be especially beneficial if funds are received from FDOT sources, which generally require that the CAP include quarterly reports of progress. Again, these requirements will be spelled out when the grant is provided.

Documenting Evaluation Findings

Once evaluations are complete, the CAP must decide how best to convey the results of the evaluation. This is a crucial step that must not be overlooked. A well-designed and carefully managed evaluation can be wasted if the results are not presented in a clear and understandable format. It is also important to remember the potential audiences for the evaluation results and what reporting format will be most useful to meet their needs.

The CAP should also be aware that documenting results of evaluations can also be done verbally. For example, the CAP may be called upon to make a presentation to the County Commissioners on the results of the evaluation. The presentation may be the first exposure the Commission has to the results, and how the results are presented could go a long way in obtaining funding. If the CAP evaluation draws media attention, the results may be broadcast on the radio or television, two mediums of communication in which written documentation will not be used.

While most CAP offices will commonly be required to disseminate evaluation results in technical reports and/or quarterly progress reports, other forms of communication will typically be used. A list of potential communication mediums for evaluation results includes:

• Technical Report
• Executive Summary
• Brochures
• Press Releases
• Trade Journal Article
• Memorandum
• Public Workshop
• Conference/Seminar Presentation
• Face-To-Face Discussion

Of the audiences for a CAP evaluation report, funders, board members, and CAP staff will have the most interest in a full technical report. Since two of these three audiences have other duties besides CAP oversight, the technical report should be clear and concise, as well as technically credible. A well-written technical report will become a reference manual for this audience.

For politicians, the media, community groups, and clients, the preferred written document will be the executive summary. Even funders and staff will use the executive summary for their own needs. Therefore, the executive summary can be the most important document the CAP will write to disseminate evaluation findings. The summary should be brief, highlight the most


important findings of the evaluation, and report the major recommendations of the analysis. Strong support graphics that depict the most important results can be beneficial in the executive summary.

The other communication mediums listed serve specific audience needs. How the CAP chooses to handle the evaluation findings will dictate which of these mediums will be used and how they will be used. To strengthen these types of reports, the CAP office should try to determine what evaluation findings are the most important to the audience and focus on preparing a report that best meets that need.

Finally, while the form of communication is important, the CAP must focus its attention on the content of the document. The CAP should:

• Tie together evaluation findings with the stated program goals, objectives, and mission of the CAP;
• Compare results to the implementation plan and the progress made;
• Describe what effects changes in program offerings have had on service;
• Describe CAP efficiency;
• Examine program strengths and weaknesses;
• Describe what problems have arisen, or what trends have changed, that may have an impact on results;
• Describe what changes or actions are recommended.

Other important items to consider in the report are:

• Relate information provided to necessary actions
• Make the report credible
• Give the audience what it needs, but don't overdo it
• Present an attractive and readable document
• Put the most important results first
• Highlight the successes and most important information

The key for most CAP offices is to look at the evaluation and evaluation report as a powerful tool. If the tool is used effectively, it can show the diligence of CAP efforts, the impact the CAP has on meeting community goals and service needs, and the importance of the CAP in solving local and regional problems. A properly planned and well-documented evaluation can be an excellent medium for promoting the CAP and increasing the community's awareness of the important role the CAP plays in Florida municipalities.

APPENDIX A
Sample Database Member Survey

CAP Evaluation Rideshare Database Survey

Good evening. My name is ________ and I am with a market research company. This evening we are conducting a short survey on commuting in the (Insert area name here) area. We are not attempting to sell you anything; we are only interested in your opinions. (Ask to speak to person named on sample sheet; repeat intro if necessary)

1. How many days per week do you commute to work? ___ (If 0, TERMINATE)

2. And about how far is your commute to work, in miles? ___

3. Have you ever heard of (Insert name of ridesharing organization here)?
1-Yes   2-No (Go to END)   9-Don't Know/Refused

4. Have you ever contacted (Insert name of ridesharing organization here) for carpool or vanpool information, or not?
1-Yes   2-No (Go to END)   9-Don't Know/Refused

5. Did (Insert name of ridesharing organization here) provide you with carpool, vanpool, or transit information or assistance, or not?
1-Yes   2-No (Go to END)   9-Don't Know/Refused

6. To what extent did the information or assistance provided by (Insert name of ridesharing organization here) influence the way you commute to work? Did it:
1-Have a great deal of influence   2-Have a moderate influence   3-Have a slight influence   4-or have no influence at all

7. Did you ever carpool after you received the information, or not?
1-Yes   2-No (Skip to Q. 15)   9-Don't Know/Refused

8. Are you still carpooling to work?
1-Yes   2-No (Skip to Q. 12)   9-Don't Know/Refused

9. About how many days per week are you carpooling? ___ (Enter 0 if question is skipped)

10. About how many people are usually in your carpool, including the driver? ___ (Enter 0 if question is skipped)

11. About how long have you been carpooling? ___ Days ___ Weeks ___ Months ___ Years
[SKIP TO Q. 15]

12. About how long were you in your carpool? ___ Days ___ Weeks ___ Months ___ Years

13. How many days per week were you carpooling? ___ (Enter 0 if question is skipped)

14. About how many people were usually in your carpool, including the driver? ___ (Enter 0 if question is skipped)

15. Did you ever vanpool to work after you received the information, or not?
1-Yes   2-No (Skip to Q. 23)   9-Don't Know/Refused

16. Are you still vanpooling to work?
1-Yes   2-No (Skip to Q. 20)   9-Don't Know/Refused

17. About how many days per week are you vanpooling? ___ (Enter 0 if question is skipped)

18. About how many people are usually in your vanpool, including the driver? ___ (Enter 0 if question is skipped)

19. About how long have you been vanpooling? ___ Days ___ Weeks ___ Months ___ Years
[SKIP TO Q. 23]

20. About how long were you in your vanpool? ___ Days ___ Weeks ___ Months ___ Years

21. How many days per week were you vanpooling? ___ (Enter 0 if question is skipped)

22. About how many people were usually in your vanpool, including the driver? ___ (Enter 0 if question is skipped)

23. Did you ever ride the bus to work after you received the information, or not?
1-Yes   2-No (Skip to Q. 29)   9-Don't Know/Refused

24. Are you still riding the bus to work?
1-Yes   2-No (Skip to Q. 27)   9-Don't Know/Refused

25. About how many days per week are you riding the bus to work? ___ (Enter 0 if question is skipped)

26. About how long have you been riding the bus to work? ___ Days ___ Weeks ___ Months ___ Years
[SKIP TO Q. 29]

27. About how long were you riding the bus to work? ___ Days ___ Weeks ___ Months ___ Years

28. About how many days per week were you riding the bus to work? ___ (Enter 0 if question is skipped)

29. Is there any other way you used to get to work since you received the information?
1-Yes   2-No (Go to END)   9-Don't Know/Refused

30. And how were you getting to work? (Specify: __________)

31. And are you still getting to work by (INSERT ANSWER TO Q. 30)?
1-Yes   2-No (Skip to Q. 34)   9-Don't Know/Refused

32. About how many days per week are you (INSERT ANSWER TO Q. 30)? ___ (Enter 0 if question is skipped)

33. About how long have you been (INSERT ANSWER TO Q. 30)? ___ Days ___ Weeks ___ Months ___ Years
[GO TO END]

34. About how long were you getting to work by (INSERT ANSWER TO Q. 30)? ___ Days ___ Weeks ___ Months ___ Years

35. About how many days per week were you getting to work by (INSERT ANSWER TO Q. 30)? ___ (Enter 0 if question is skipped)

END: Thank you very much for your cooperation in this survey. Good night.
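CAPs that key completed interviews into a rideshare database may find it useful to check the survey's skip patterns automatically before a record is saved. The short Python sketch below is only one possible illustration; the SurveyResponse class and its field names are assumptions made for this example and are not part of the survey instrument.

    from dataclasses import dataclass, field
    from typing import Dict

    # Response codes used throughout the instrument: 1 = Yes, 2 = No, 9 = Don't Know/Refused.
    YES, NO, DK = 1, 2, 9

    @dataclass
    class SurveyResponse:
        # Illustrative field names for a handful of the 35 questions.
        q1_commute_days: int
        q2_commute_miles: float
        q7_ever_carpooled: int = NO
        q9_carpool_days: int = 0        # 0 when the question was skipped
        q15_ever_vanpooled: int = NO
        q17_vanpool_days: int = 0
        q23_ever_rode_bus: int = NO
        q25_bus_days: int = 0
        notes: Dict[str, str] = field(default_factory=dict)

        def check_skip_logic(self):
            # Return a list of skip-pattern violations; an empty list means the record is consistent.
            problems = []
            if self.q1_commute_days == 0:
                problems.append("Q1 is 0: the interview should have been terminated.")
            if self.q7_ever_carpooled != YES and self.q9_carpool_days != 0:
                problems.append("Q9 should be 0 when Q7 is not Yes (skip to Q15).")
            if self.q15_ever_vanpooled != YES and self.q17_vanpool_days != 0:
                problems.append("Q17 should be 0 when Q15 is not Yes (skip to Q23).")
            if self.q23_ever_rode_bus != YES and self.q25_bus_days != 0:
                problems.append("Q25 should be 0 when Q23 is not Yes (skip to Q29).")
            return problems

    # Loosely mirrors the completed sample in Appendix B: 5 commute days, a 10-mile
    # commute, no carpooling, and vanpooling 5 days per week.
    record = SurveyResponse(q1_commute_days=5, q2_commute_miles=10,
                            q7_ever_carpooled=NO, q15_ever_vanpooled=YES,
                            q17_vanpool_days=5)
    print(record.check_skip_logic())    # [] -> internally consistent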

APPENDIX B
Sample Completed Rideshare Database Survey

CAP Evaluation Rideshare Database Survey
Sample Completed Survey

Good evening. My name is ________ and I am with a market research company. This evening we are conducting a short survey on commuting in the (Insert area name here) area. We are not attempting to sell you anything; we are only interested in your opinions. (Ask to speak to person named on sample sheet; repeat intro if necessary)

1. How many days per week do you commute to work? _5_ (If 0, TERMINATE)

2. And about how far is your commute to work, in miles? _10_

3. Have you ever heard of (Insert name of ridesharing organization here)?
1-Yes   2-No (Go to END)   9-Don't Know/Refused

4. Have you ever contacted (Insert name of ridesharing organization here) for carpool or vanpool information, or not?
1-Yes   2-No (Go to END)   9-Don't Know/Refused

5. Did (Insert name of ridesharing organization here) provide you with carpool, vanpool, or transit information or assistance, or not?
1-Yes   2-No (Go to END)   9-Don't Know/Refused

6. To what extent did the information or assistance provided by (Insert name of ridesharing organization here) influence the way you commute to work? Did it:
1-Have a great deal of influence   2-Have a moderate influence   3-Have a slight influence   4-or have no influence at all

7. Did you ever carpool after you received the information, or not?
1-Yes   2-No (Skip to Q. 15)   9-Don't Know/Refused

8. Are you still carpooling to work?
1-Yes   2-No (Skip to Q. 12)   9-Don't Know/Refused

9. About how many days per week are you carpooling? _0_ (Enter 0 if question is skipped) (SKIPPED)

10. About how many people are usually in your carpool, including the driver? _0_ (Enter 0 if question is skipped) (SKIPPED)

11. About how long have you been carpooling? ___ Days ___ Weeks ___ Months ___ Years
[SKIP TO Q. 15]

12. About how long were you in your carpool? ___ Days ___ Weeks ___ Months ___ Years

13. How many days per week were you carpooling? _0_ (Enter 0 if question is skipped) (SKIPPED)

14. About how many people were usually in your carpool, including the driver? _0_ (Enter 0 if question is skipped) (SKIPPED)

15. Did you ever vanpool to work after you received the information, or not?
1-Yes   2-No (Skip to Q. 23)   9-Don't Know/Refused

16. Are you still vanpooling to work?
1-Yes   2-No (Skip to Q. 20)   9-Don't Know/Refused

17. About how many days per week are you vanpooling? _5_ (Enter 0 if question is skipped)

18. About how many people are usually in your vanpool, including the driver? _8_ (Enter 0 if question is skipped)

19. About how long have you been vanpooling? ___ Days ___ Weeks _5_ Months ___ Years
[SKIP TO Q. 23]

20. About how long were you in your vanpool? ___ Days ___ Weeks ___ Months ___ Years

21. How many days per week were you vanpooling? _0_ (Enter 0 if question is skipped) (SKIPPED)

22. About how many people were usually in your vanpool, including the driver? _0_ (Enter 0 if question is skipped) (SKIPPED)

23. Did you ever ride the bus to work after you received the information, or not?
1-Yes   2-No (Skip to Q. 29)   9-Don't Know/Refused

24. Are you still riding the bus to work?
1-Yes   2-No (Skip to Q. 27)   9-Don't Know/Refused

25. About how many days per week are you riding the bus to work? _0_ (Enter 0 if question is skipped) (SKIPPED)

26. About how long have you been riding the bus to work? ___ Days ___ Weeks ___ Months ___ Years
[SKIP TO Q. 29]

27. About how long were you riding the bus to work? ___ Days ___ Weeks ___ Months ___ Years

28. About how many days per week were you riding the bus to work? _0_ (Enter 0 if question is skipped) (SKIPPED)

29. Is there any other way you used to get to work since you received the information?
1-Yes   2-No (Go to END)   9-Don't Know/Refused

30. And how were you getting to work? (Specify: __________)

31. And are you still getting to work by (INSERT ANSWER TO Q. 30)?
1-Yes   2-No (Skip to Q. 34)   9-Don't Know/Refused

32. About how many days per week are you (INSERT ANSWER TO Q. 30)? _0_ (Enter 0 if question is skipped) (SKIPPED)

33. About how long have you been (INSERT ANSWER TO Q. 30)? ___ Days ___ Weeks ___ Months ___ Years
[GO TO END]

34. About how long were you getting to work by (INSERT ANSWER TO Q. 30)? ___ Days ___ Weeks ___ Months ___ Years

35. About how many days per week were you getting to work by (INSERT ANSWER TO Q. 30)? _0_ (Enter 0 if question is skipped) (SKIPPED)

END: Thank you very much for your cooperation in this survey. Good night.

APPENDIX C
Commuter Assistance Program Procedures

Approved: Ben G. Watts, P.E., Secretary
Effective: May 5, 1997
Office: Transit
Topic No.: 725-030-008-d

COMMUTER ASSISTANCE PROGRAM

PURPOSE:

To establish procedures for the implementation of the Department's Commuter Assistance Program and develop a foundation for public/private partnerships to foster the delivery of employer-based transportation demand management (TDM) strategies.

AUTHORITY:

Chapters 187 and 341, Florida Statutes.

SCOPE:

The requirements or processes related to this procedure affect the State Public Transportation Office, District Public Transportation Offices and State Funded Programs.

DEFINITIONS:

Agency Annual Work Plan - An annual written plan submitted by agencies requesting state participation in local ridesharing projects or Transportation Management Associations/Transportation Management Organizations. This plan identifies project goals, objectives and related project information, and serves in evaluating the project's progress.

Annual Survey - An annual survey administered by regional or local commuter assistance services. The survey is used to verify monitoring and reporting data.

Central Office - For the purposes of this procedure, the Department of Transportation, Public Transit Office and/or staff.

District Office - For the purposes of this procedure, the Department of Transportation, District Public Transportation Office and/or staff.


Statewide Commuter Assistance Annual Report - A report compiled by the Central Office detailing Commuter Assistance activities statewide. This report will include all the data and monitoring compliance figures provided by the projects to the District Offices. This report will be included in the Public Transit Report for the Transportation Commission.

Telecommuting - A work arrangement whereby selected employees are allowed to perform the normal duties and responsibilities of their positions through the use of computers or telecommunications, at home or at an alternative worksite other than the employees' usual place of work.

Transportation Demand Management (TDM) Strategies - A set of measures designed to reduce the number of trips made by single occupant vehicles and enhance the regional mobility of all citizens. These strategies can include but are not limited to: traditional ridesharing (carpooling and vanpooling); encouragement and enhancement of public transportation; encouragement of alternative work hours (flextime, compressed work week, etc.); encouragement of non-motorized transportation (bicycle and pedestrian modes); development and implementation of shuttle services; encouragement of priority or preferential parking for ridesharers; encouragement, facilitation and distribution of discounted transit passes; and fostering of telecommuting programs.

TDM Clearinghouse - A service of the Department, currently operated by the Center for Urban Transportation Research, which provides technical support for the Department, local governments and emerging TMAs. Services include but are not limited to: strategic planning assistance, evaluation and survey assistance, training, the TDM Resource Center and the TDM newsletter. The Central Office has monitoring and fiscal responsibilities for the clearinghouse. Requests will be coordinated through the District Office prior to approval.

Transportation Management Associations/Transportation Management Organizations (TMAs/TMOs) - The terms Transportation Management Associations and Transportation Management Organizations have been used interchangeably; for the purposes of this procedure the acronym TMA will be used. TMAs are public/private partnerships formed so that employers, developers, building owners, and government entities can work collectively to establish policies, programs and services to address local transportation problems. TMAs realize their potential in addressing traffic congestion, air quality, and in some instances, employment issues through the use of TDM strategies. TMAs are established within a limited geographical area to address the transportation management needs of their members. TMAs are expected to obtain private sector financing in addition to public funding.

GENERAL:

Coordinated use of existing transportation resources can provide a responsive, low cost alternative for alleviating urban highway congestion, improving air quality and thereby reducing the need for costly highway improvements. The Commuter Assistance Program focuses on the single occupant commuter trip, which is the greatest cause of peak hour highway congestion. A coordinated effort to provide alternatives to these commuters, using existing or low cost resources, can be beneficial to the development of public transit statewide and the Department's priority efforts to relieve traffic congestion, improve air quality and assure energy conservation.

The State's Commuter Assistance Program encourages a public/private partnership to provide brokerage services to employers and individuals for: carpools, vanpools, buspools, express bus service, subscription transit service, group taxi services, heavy and light rail, and other systems which are designed to increase vehicle occupancy. The program encourages the use of transportation demand management strategies including: employee trip reduction planning, Transportation Management Associations, alternative work hour programs, telecommuting, parking management, and bicycle and pedestrian programs.

PROGRAM MANAGEMENT AND IMPLEMENTATION

(1) CENTRAL OFFICE responsibilities shall include:

(a) Maintaining continuing communication with the District Offices on matters regarding the Commuter Assistance Program.

(b) Developing and maintaining program policies and procedures.

(c) Monitoring compliance with established procedures.

(d) Providing training and technical support to Districts and local programs as required.

(e) Staying current on national and international methods for promotion of commuter alternatives and transportation demand management, and providing this information to the Districts.

(f) Providing any necessary support for demonstration projects that are statewide or regional in scope or require staffing in excess of district capabilities.

(g) Assuring the coordination and implementation of support programs (Transit Corridor and Park and Ride).

(h) Compiling data provided by the Districts into the Statewide Commuter Assistance Annual Report.

(i) Providing the latest transit trend and performance measurements.

(2) DISTRICT OFFICE responsibilities shall include:

(a) Maintaining communication with the Central Office on program status and implementation.

(b) Establishing and maintaining communications with local public and private organizations to advise them of the availability of Department financial and technical assistance programs for commuter and transportation demand management.

(c) Establishing specific and achievable program objectives for the District based upon input from local and regional programs. The District Work Plan provides the framework and direction for the commuter assistance activities funded by the District.

(d) Assuring the provision of technical assistance in the development of commuter assistance services.

(e) Providing and managing grants to local agencies and the private sector for the implementation of Commuter Assistance Projects. This includes ensuring that grantees or contractors comply with JPA or contract requirements, and that the requirements of this procedure are included in the JPA or contract.

(f) Ensuring that appropriate application of commuter alternatives furthers the development of public transportation projects in the Districts and the inclusion of private transportation providers.

(g) Performing a quarterly review of each agency's progress to determine the effective implementation of the Agency Annual Work Plan. Modifications to the Agency Annual Work Plan will be documented.

(h) Preparing a District Quarterly Local or Regional Commuter Assistance Service Report summarizing each agency's progress in the implementation of the Agency Annual Work Plans. The report will include the written quarterly reports submitted by the agencies detailing successes, mandatory reporting measures, problems and future plans. These reports are due in the Central Office by the end of the month immediately following the close of each calendar quarter. Reporting quarters are January - March, April - June, July - September, and October - December. Reports from established TMAs may be submitted twice annually, at the end of the 2nd and 4th quarters.

(i) Participating, as appropriate, on the Boards of Directors of private non-profit TMAs and Regional Commuter Services corporations.

(j) Development of the Annual District Work Plan, including project funding needs for the next five years, and assuring that the commitment of Department funds is consistent with the established production schedule.

(3) Issues not specifically mentioned in this procedure, nor with statewide implications, are left to the discretion of the individual District.

PROCEDURE

Commuter Assistance Projects shall be programmed by the Districts in coordination with the Central Office, the appropriate MPO, local agencies and the private sector to ensure statewide programming to optimize available funding sources.

(1) ELIGIBLE PROJECT COSTS

(a) Program administration and operational costs including: salaries, marketing materials, advertising, computerized matching, reporting and other project related costs.

(b) Computer hardware and software necessary to establish trip matching services, where not redundant or sharing could be a more efficient use of equipment.

(c) Specialized demonstration projects of statewide or regional impact designed to demonstrate innovative approaches to commuter assistance.

(d) Other capital purchases for the accomplishment of program objectives.

(e) Other operating expenses for the accomplishment of program objectives, such as a Guaranteed Ride Home Project or vanpool administration.

(2) ELIGIBLE GRANT RECIPIENTS

Local governments or their designees, including Metropolitan Planning Organizations, Regional Planning Councils, Transportation Authorities, or Community Transportation Coordinators designated pursuant to Chapter 427, Florida Statutes, are eligible recipients of matching grants. Although funds may be used to administer these projects within local government, recipients should be encouraged to consider subcontracting services to the private sector. Grants may be made to private organizations pursuant to Chapter 617, Florida Statutes.

(3) FUND PARTICIPATION

(a) Funding for this program will be allocated to the Districts based on a statewide assessment of Commuter Assistance Program need. Allocation requests identified in the Annual District Work Plan will be given first priority.

(b) The Department is authorized to fund up to 100 percent of the eligible costs of commuter assistance projects which are determined by the District to be regional in scope and application or statewide in nature.

(c) The Department's participation in a local project cannot exceed the amount of local participation.

(d) State funding participation in FTA funded projects shall be at the level defined in Chapter 341, Florida Statutes.

(e) The Department's participation in Federal Highway Administration funded projects shall be at the levels required for the particular highway system fund involved according to Chapter 339.08(2), Florida Statutes.

(f) Specific match rates are identified in the Work Program Instructions.

(4) WORK PLANS

Each District shall develop an annual work plan for its District Commuter Assistance Program. This plan will detail program goals and objectives for the period October 1 through September 30. The District work plan shall identify annual program goals and emphasis areas, targets for regional and local commuter assistance services, and targets for TMAs. It will also include a five year funding needs projection. Plans shall be submitted to the Central Office by October 1 of each year.

(5) PROJECT TYPES

(a) Regional or Local Commuter Services operated by government agencies, transit operators or private contractors under contract to the Department shall be administered in the following manner:

1. Each agency shall submit an annual work plan consistent with Department and regional goals. The work plan will be incorporated as a "Special Consideration of the Department" in all JPAs, and shall include, at a minimum:

a. an organization chart identifying all personnel funded by this project
b. measurable program goals and objectives with milestones to determine progress in stated emphasis areas consistent with District work plans
c. a marketing plan identifying market penetration and client service targets
d. an annual project budget identifying expenses and revenues by source

2. All commuter assistance service agencies receiving state funding will be required to monitor and report to the District Office the following data each calendar quarter:

a. number of commuters requesting assistance
b. number of commuters switched from single occupant vehicle
c. number of vans in service (where applicable)

d. number of vehicle trips eliminated
e. number of vehicle miles eliminated
f. number of employer contacts and employers participating

Definitions for each reporting category are provided in Attachment A.

3. Regional and local commuter assistance service programs shall administer an annual survey to collect and verify data for reporting requirements. This requirement may be waived by the District if the agency can show statistically accurate follow-up compiled in a monthly or quarterly manner. Requests to waive this requirement will be reviewed by the Central Office. The survey may be accomplished in-house or contracted out and must not have a sample error greater than 3% and a confidence level no less than 95%. Refer to the survey guidelines in Attachment A.

4. All projects shall be programmed in accordance with the latest Work Program Instructions and in compliance with the provisions of Chapter 341, Florida Statutes, as follows:

a. If the local eligible recipient has taken action to secure or designate federal funds as a funding source for a project, the appropriate federal match ratio applies.

b. If the Central Office has indicated on a project-by-project basis that other funds (e.g., Transit Corridor) can be reasonably anticipated for the project, the appropriate match ratio associated with such funds shall apply.

c. If the project is regional in scope and no regional financing mechanism exists, the project is eligible to be programmed at up to 100% state participation.

(b) Transportation Management Associations operated as public/private partnerships:

1. Funding may be provided to TMAs organized as private non-profit corporations, in cooperation with local government, that are established according to local comprehensive plans, other locally adopted plans or regional commuter assistance program goals.

2. State start-up funds may be granted in the following ratio: 50% - first year, 40% - second year, 30% - third year. In the fourth year or longer, TMAs will be eligible for continued funding at the lesser of $50,000 or 25% of their total budget, provided they are meeting the performance criteria outlined in their existing JPA. Board member in-kind contributions may count toward local match requirements. However, in-kind contributions must have the prior approval of the District Office. Districts may use 49 CFR 18.24 et seq. as guidance in determining allowable in-kind contributions. Variation from these levels is permitted with prior consultation with the Central Office.

3. Grants supporting TMAs may be made directly to the incorporated organization or to the appropriate local governmental agency for pass-through to the TMA following the current JPA procedure. TMAs receiving these grants shall include the Department as an ex officio member of the Board of Directors during the period of the grant.

4. To be eligible for state funding, a TMA must provide the Department with a detailed Agency Annual Work Plan, articles of incorporation as a private not-for-profit body, bylaws, geographical boundaries, trip management goals, a financing plan, an institutional structure, and potential membership estimates. Future year work plans will be required. A TMA shall utilize the Department's TMA Self Evaluation program on an annual basis. Results of the evaluation will be reported to the District Office annually. Records of services received from the regional commuter assistance program should be maintained. A summary of these activities shall be included with the quarterly reports provided to the District Office. The District will determine information requirements for the quarterly reports.

5. No TMA will be funded unless its Agency Annual Work Plan has been approved by the District Office as consistent with regional commuter assistance program plans, MPO transportation plans, local comprehensive plans and regional strategic policy plans.

6. Funds granted to TMAs under this program are for administrative, planning, marketing and operational purposes only. The Department will not participate in the acquisition of computerized ride matching capabilities unless this service is not available through a regional or local commuter assistance program.

7. Special projects and operations (shuttles, vanpools, guaranteed ride home programs, transit discounts, etc.) may be funded at a 50% state ratio for established TMAs (over three years old).

(6) PROJECT FILES

The District shall maintain the official project files, which, at a minimum, shall include or have readily accessible:

(a) All Joint Participation Agreements and/or Contracts and a copy of any amendments or supplements thereto.

(b) A copy of each invoice presented for payment.

(c) Quarterly reports from the grant/contract recipient.

(d) Documentation of District quarterly on-site visits and annual evaluations.

(e) An inventory of all capital acquisitions including description, state participation, current location, and cost when acquired.

(f) All pertinent correspondence regarding the project.

(g) A copy of the agency annual audit (report) performed in accordance with the Public Transportation JPA Procedure, No. 725-000-005, and the Recipient/Subrecipient Single Audit Procedure, No. 450-021-001.

(7) TRAINING

The basic TDM training is mandatory for all Department CAP managers and CAP agency directors. Additionally, the State Commuter Assistance Office periodically offers training classes which provide the most recent technical assistance and program information available.

(8) FORM ACCESS:

There are no required forms associated with this procedure.

ATTACHMENT A
EVALUATION MEASURE DEFINITIONS

Number of commuters requesting assistance

This is the number of people requesting assistance of some sort, including:
Carpool matchlist
Vanpool matchlist or formation assistance
Transit route and/or schedule information
Telecommuting information
Bicycle and/or locker/rack information

Number of commuters switching modes

This is the number of people that actually use the information you provide to change from their current SOV mode to carpooling, vanpooling, transit use, telecommuting, walking and/or bicycling. This information can be gathered by doing a sample survey of commuters assisted, on a monthly basis, by either phone or mail. Every month, contact a random sample of the commuters assisted the previous month to see how many actually used the information you provided. Extrapolate the survey results to estimate the total. It is recommended that actual data be used where available.

Number of vans in service (where applicable)

Report the number of commuter vans on the road and/or the number of vanpoolers.

Number of vehicle trips eliminated

Using the follow-up survey data or actual data, multiply the frequency of alternative mode use by the estimated number of commuters using a shared mode or telecommuting.

Vehicle miles eliminated

Using follow-up survey data, take the average trip length times the frequency of use times the number of formations.

Employer contacts

When reporting, include the number of employees at each site. Report the number of employer contacts by the following categories:
Number contacted by letter/fax
Number contacted by phone
Number contacted in person
Number of follow-up calls or visits

Major accomplishments

New Transit Services Initiated/Improved
Education Programs Initiated
Transportation Planning Initiatives
Guaranteed Ride Home Projects Initiated
Other Implementation Activities

Parking spots saved/parking needs reduced

Determined by the number of people using alternative modes at each employment site.

Costs saved

Multiply vehicle miles eliminated by the average cost per mile. AAA is a good source for the average cost per mile.

DISTRICT OPTIONAL EVALUATION DEFINITIONS

Gasoline saved

Divide vehicle miles eliminated by the average miles per gallon figure from AAA.

Emissions reduction

Multiply vehicle miles eliminated by the emission factors for your area. Emission factors are available from the Department of Environmental Protection.

Information distributed

Categories may include but are not limited to:
Brochures
Information packets
Posters
Surveys

Special events

Categories may include but are not limited to:
Transportation Days
Commuter Fairs
Special Promotions

Media/Community relations

Categories may include but are not limited to:
Number of PSAs shown
Number of newspaper articles
Number of news stories
Number of magazine articles
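These definitions reduce to simple arithmetic. The Python sketch below is a minimal illustration of how an agency might compute the measures from its follow-up data; the function names and numeric inputs are assumptions chosen only for illustration, the round-trip factor of 2 follows the worked example in Attachment B, and the cost, fuel-economy, and emission factors shown are placeholders for the AAA and Department of Environmental Protection figures referenced above.

    def trips_eliminated(commuters_switched, days_per_week):
        # One round trip (2 one-way vehicle trips) is removed for each day the
        # alternative mode is used, following the worked example in Attachment B.
        return commuters_switched * days_per_week * 2

    def miles_eliminated(trips, avg_one_way_miles):
        # Vehicle miles eliminated = eliminated one-way trips x average trip length.
        return trips * avg_one_way_miles

    def costs_saved(vmt, cost_per_mile):
        return vmt * cost_per_mile

    def gasoline_saved(vmt, avg_miles_per_gallon):
        return vmt / avg_miles_per_gallon

    def emissions_reduced(vmt, grams_per_mile):
        return vmt * grams_per_mile

    # Placeholder inputs: 25 commuters switched, 4 days per week, 12-mile one-way trips.
    trips = trips_eliminated(25, 4)              # 200 trips per week
    vmt = miles_eliminated(trips, 12)            # 2,400 miles per week
    print(costs_saved(vmt, 0.40))                # 960.0 dollars per week
    print(gasoline_saved(vmt, 20))               # 120.0 gallons per week
    print(emissions_reduced(vmt, 1.0))           # 2,400 grams per week (placeholder factor)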

SURVEY

This is meant to be a guide for agencies choosing to administer an in-house, annual survey.

Samples

Random samples are those in which everyone has an equal chance, or probability, of being chosen. The assumption is that the people who are selected are believed to be just like those who are not. Types of random sampling techniques include: simple random sampling, stratified random sampling and simple random cluster sampling.

Sample Size

Once the sampling methodology has been decided upon, a sample size may be determined. Three issues must be addressed when determining sample size: sampling error (the degree of precision desired), stratification (the examination of subsegments of the population), and confidence levels (the degree of certainty with which the sample is representative of the population).

Sampling Error

The degree of precision in a survey sample can be determined by calculating the standard error. Specifically, as the sample size increases, the standard error associated with that sample decreases. The issue of precision with a survey sample is an important one.

Stratification

In stratified sampling, the surveyor draws a sample with a pattern of important characteristics that is the same as the population's. If 50 percent of employees in the target area drive alone to work while 10 percent carpool, then the sample should have the same distribution of modes.

Confidence Levels

The confidence level indicates the degree to which the researcher is confident that the sample is representative. Frequently, the 95 percent confidence level is chosen, meaning that there is a 95 percent chance that the sample and the population will look alike, and a 5 percent chance that they will not.

Example

The following example illustrates the process of determining sample size. Suppose a new TMA wants to determine the mode split for employees in its area. Census data for the region suggests that the carpool rate is 15 percent. The confidence level was chosen to be 95 percent and the standard error 2.5 percent. The following equation is used:

N = (p)(1 - p) / (te/z)²

N = unadjusted sample size
p = estimated proportion or incidence of cases
te = tolerable error
z = the standard score of a given confidence level

A new statistic used in this calculation is "tolerable error" (te), which is defined as the standard error times the z value for a 95 percent confidence interval. Given that p = 0.15, z = 1.96, and the standard error = 0.025, te = 0.05. Thus:

N = (0.15)(1 - 0.15) / (0.05/1.96)²
N = 196

To adjust for the population, the following equation is used:

N' = N / (1 + (N/P))

N' = adjusted sample size
N = initial sample size (calculated above)
P = target population

For this scenario, if the target population in the study area is 5,000, then:

N' = 196 / (1 + (196/5000))
N' = 188

Finally, the sample size is determined by accounting for the anticipated response rate. Many researchers report results with a 30 percent response rate; therefore, this example will also anticipate the same.

n = N' / X

n = final sample size
N' = adjusted sample size
X = anticipated response rate

Given this equation, the final sample size for this example is:

n = 188 / 0.30
n = 629

Therefore, in order to determine the mode split for its area, the new TMA must distribute 629 surveys to employees of its members. If the TMA is using simple random sampling, it would randomly choose 629 names from its database. If the TMA wants to use the stratified random sampling technique, the above process should be repeated for each organization. This will allow the TMA to construct a profile of each employer in its area that is statistically significant, and will ensure a statistically significant sample for the entire region as well.
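A minimal Python sketch of the calculation walked through above, using the example's inputs; the function name is illustrative, and it reproduces the example's result of about 629 surveys when the intermediate values are carried unrounded.

    import math

    def required_sample_size(p, z, tolerable_error, population, response_rate):
        # Sample size for estimating a proportion, following the steps shown above.
        n_unadjusted = p * (1 - p) / (tolerable_error / z) ** 2
        n_adjusted = n_unadjusted / (1 + n_unadjusted / population)   # finite-population adjustment
        return math.ceil(n_adjusted / response_rate)                  # surveys to distribute

    # Inputs from the example: p = 0.15, z = 1.96 (95 percent confidence),
    # tolerable error rounded to 0.05, population of 5,000, 30 percent response rate.
    print(required_sample_size(0.15, 1.96, 0.05, 5000, 0.30))         # 629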

ATTACHMENT B
EVALUATION MEASURE REPORTING GUIDANCE

This is an example of how an agency could go about compiling the data needed for the reports it submits to the Department. This is meant to be an example, not a prescribed format; however, calculations must be based on known, real data and be mathematically correct. In our example the agency will be called ICAP (Imaginary Commuter Assistance Program).

Number of commuters requesting assistance

ICAP reports the following for Month X:
100 carpool matchlists processed
5 new vanpool clients

Number of commuters switching modes

ICAP sends mailback cards to all 100 clients requesting carpool matchlists. All the information needed from the vanpoolers is available in their fare payment and registration records. 25 mailback cards are returned by carpoolers, with 5 clients reporting that they are carpooling.

5 / 100 = 5%

Phone calls are made to the remaining 75 carpool clients. Of those, ICAP reaches 30 and finds out 5 more clients are carpooling.

(5 + 5) / 100 = 10%

Number of vans in service

ICAP has 20 vans currently in service.

Number of vehicle trips eliminated

The average frequency of carpooling reported on the mailback cards was 3 days a week. The frequency of the vanpoolers is 5 days a week.

10 x 3 x 2 = 60 trips eliminated by carpoolers/week
5 x 5 x 2 = 50 trips eliminated by vanpoolers/week

Vehicle miles eliminated

The average carpool trip distance is 10 miles one way. The average vanpool distance is 35 miles one way.

10 x 60 = 600 miles eliminated/week
35 x 50 = 1,750 miles eliminated/week

To get the total number eliminated for the report, multiply by the number of weeks in the reporting period.

Employer contacts

ICAP reports the following contacts:
13 employers contacted by letter
10 employers contacted by phone
5 employers visited in person

Major accomplishments

ICAP expanded the guaranteed ride home program to include 3 new employers.

Parking spots saved/parking needs reduced

15 spots saved this month

Commuter costs saved

The AAA average cost per mile for ICAP's service region is $.40.

$.40 x 600 = $240.00 saved/week by carpoolers
$.40 x 1,750 = $700.00 saved/week by vanpoolers
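The ICAP figures above reduce to a few lines of arithmetic; the Python sketch below simply reproduces them and is an illustration only (ICAP is the imaginary agency defined at the start of this attachment).

    # Follow-up results for ICAP's reporting month, taken from the example above.
    matchlists_sent = 100
    carpoolers_found = 5 + 5                             # mailback cards plus phone follow-up
    placement_rate = carpoolers_found / matchlists_sent  # 0.10 -> 10%

    # Trips eliminated per week (commuters x days per week x 2 one-way trips).
    carpool_trips = carpoolers_found * 3 * 2             # 60
    vanpool_trips = 5 * 5 * 2                            # 50 (5 vanpool clients, 5 days/week)

    # Vehicle miles eliminated per week (one-way distance x trips).
    carpool_miles = 10 * carpool_trips                   # 600
    vanpool_miles = 35 * vanpool_trips                   # 1,750

    # Commuter costs saved per week at AAA's $0.40 per mile.
    print(placement_rate, carpool_trips, vanpool_trips)  # 0.1 60 50
    print(carpool_miles, vanpool_miles)                  # 600 1750
    print(0.40 * carpool_miles, 0.40 * vanpool_miles)    # 240.0 700.0

To produce totals for a quarterly report, the weekly figures would then be multiplied by the number of weeks in the reporting period, as noted above.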

ATTACHMENT C

NAME OF PROJECT
WPI NUMBER
STATE PROJECT NUMBER
CONTRACT NUMBER

QUARTERLY REPORT FOR JANUARY 1, 1997 - MARCH 31, 1997

I. INTRODUCTION/BACKGROUND
II. GOALS AND OBJECTIVES MET
III. ACTIVITIES/DOCUMENTATION
IV. MEASURABLE GOALS/OTHER STATISTICS

IV. MEASURABLE GOALS/OTHER STATISTICS

MEASURABLE GOALS (one column for each reported measure, with a row per reporting agency and a TOTALS row):
# of Commuters Requesting Assistance
# of Commuters Switched from SOV
# of Vans in Service (if applicable)
# of Vehicle Trips Eliminated
# of Vehicle Miles Eliminated
# of Employer Contacts/Participating

TOTALS

COMPARISON TO LAST REPORTING PERIOD: (REPORT INCREASES/DECREASES WITH EXPLANATIONS)
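The table above asks for totals across reporting agencies and a comparison to the last reporting period. The Python sketch below shows one way a District might tabulate those figures; the agency names and values are hypothetical and are used only to illustrate the calculation.

    # Per-agency figures for the current quarter and statewide totals for the previous
    # quarter; the measures follow the table's column headings.
    MEASURES = ["commuters requesting assistance", "commuters switched from SOV",
                "vans in service", "vehicle trips eliminated",
                "vehicle miles eliminated", "employer contacts/participating"]

    current = {"Agency A": [300, 30, 12, 1800, 21000, 25],
               "Agency B": [150, 12, 4, 700, 9000, 10]}
    previous_totals = [400, 35, 15, 2300, 27000, 30]

    totals = [sum(agency[i] for agency in current.values()) for i in range(len(MEASURES))]

    for name, now, before in zip(MEASURES, totals, previous_totals):
        change = (now - before) / before * 100
        print(f"{name}: {now} ({change:+.1f}% vs. last reporting period)")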

