
Evaluation of first-year Florida MPO transit capacity and quality of service reports


Material Information

Title:
Evaluation of first-year Florida MPO transit capacity and quality of service reports
Alternate Title:
Evaluation of first-year Florida Metropolitan Planning Organization transit capacity and quality of service reports
Physical Description:
ii, 45 leaves ; 28 cm.
Language:
English
Creator:
Perk, Victoria A
Thompson, Brenda J
Foreman, Chandra
United States -- Dept. of Transportation. -- Research and Special Programs Administration
Florida -- Dept. of Transportation
National Center for Transit Research (U.S.)
University of South Florida -- Center for Urban Transportation Research
Publisher:
National Center for Transit Research, Center for Urban Transportation Research, University of South Florida
Available through the National Technical Information Service
Place of Publication:
Tampa, Fla
Springfield, VA
Publication Date:
2001
Subjects

Subjects / Keywords:
Local transit -- Evaluation -- Florida   ( lcsh )
Local transit -- Planning -- Evaluation -- Florida   ( lcsh )
Genre:
technical report   ( marcgt )
non-fiction   ( marcgt )

Notes

Additional Physical Form:
Also available online.
Funding:
Performed for the U.S. Dept. of Transportation Research and Special Programs Administration and Florida Dept. of Transportation under contract no. DTRS98-G-0032.
Statement of Responsibility:
Victoria Perk, Brenda Thompson, Chandra Foreman.
General Note:
"December 2001."

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001930719
oclc - 49347445
usfldc doi - C01-00218
usfldc handle - c1.218
System ID:
SFS0032312:00001




Full Text

PAGE 1

Evaluation of First-Year Florida MPO Transit Capacity and Quality of Service Reports December 2001

PAGE 2

TECHNICAL REPORT STANDARD TITLE PAGE

1. Report No.: NCTR-473-02
2. Government Accession No.:
3. Recipient's Catalog No.:
4. Title and Subtitle: Evaluation of First-Year Florida MPO Transit Capacity and Quality of Service Reports
5. Report Date: December 2001
6. Performing Organization Code:
7. Author(s): Perk, Victoria; Thompson, Brenda; and Foreman, Chandra
8. Performing Organization Report No.:
9. Performing Organization Name and Address: National Center for Transit Research (NCTR), University of South Florida, CUT 100, 4202 East Fowler Avenue, Tampa, FL 33620
10. Work Unit No.:
11. Contract or Grant No.: DTRS98-G-0032
12. Sponsoring Agency Name and Address: Office of Research and Special Programs, U.S. Department of Transportation, Washington, DC 20590; Florida DOT, 605 Suwannee Street, Tallahassee, Florida 32399
13. Type of Report and Period Covered:
14. Sponsoring Agency Code:
15. Supplementary Notes: Supported by a grant from the USDOT Research and Special Programs Administration and the Florida Department of Transportation
16. Abstract: As an application of the transit quality of service framework presented in the First Edition of the Transit Capacity and Quality of Service Manual (TCQSM), the Florida Department of Transportation (FDOT) required all MPOs in the state where fixed-route service operates to analyze those services based on the measures identified in the TCQSM: service frequency, hours of service, service coverage, passenger loading, reliability (on-time performance/headway adherence), and transit versus auto travel time. This first-year evaluation compiles the analyses provided by the participating MPOs and provides an assessment of the performance of the transit systems. In addition, the process used by the MPOs and transit systems to evaluate their services is evaluated, and possible refinements to the process are suggested for future years based on the first-time experiences of the MPOs. This evaluation serves as a model for other areas in the country interested in applying the customer-oriented assessment of transit based on the TCQSM.
17. Key Words: Public transit, MPO, performance, transit capacity and quality of service
18. Distribution Statement: Available to the public through the National Technical Information Service (NTIS), 5285 Port Royal Road, Springfield, VA 22161, (703) 487-4650, http://www.ntis.gov/, and through the NCTR web site at http://www.nctr.usf.edu/.
19. Security Classif. (of this report): Unclassified
20. Security Classif. (of this page): Unclassified
21. No. of Pages: 45

Form DOT F 1700.7 (8-72)

PAGE 3

State of Florida Department of Transportation
Public Transit Office
605 Suwannee Street
Tallahassee, FL 32399-0450
(850) 414-4500

Project Manager: Tara Bartee, Planning Administrator

National Center for Transit Research
Center for Urban Transportation Research
University of South Florida
4202 E. Fowler Avenue, CUT 100
Tampa, FL 33620-5375
(813) 974-3120

Project Director: Dennis Hinebaugh, Transit Program Director
Project Manager: Victoria A. Perk, Research Associate
Project Staff: Brenda J. Thompson, Research Associate; Chandra Foreman, Research Associate

The opinions, findings, and conclusions expressed in this publication are those of the authors and not necessarily those of the U.S. Department of Transportation or the State of Florida Department of Transportation.

PAGE 4

TABLE OF CONTENTS

LIST OF TABLES ... ii
INTRODUCTION ... 1
FDOT TRANSIT QUALITY OF SERVICE INITIATIVE ... 2
PARTICIPATING AGENCY EXPERIENCES ... 5
  Training Issues ... 5
  Costs and Funding ... 6
  MPO and Transit Agency Partnerships ... 7
  Selection of O-D Pairs and Scheduling ... 8
  Data Collected from APC Units ... 9
  Time of Year, Window of Time, and Collection Days ... 9
  A.M. versus P.M. Peak Periods ... 10
  Route Evaluation versus Trip Evaluation ... 11
  Data Collection and Sample Size ... 11
  Spreadsheet Issues ... 11
  Travel Time Issues ... 12
  Scoring A-F ... 12
  Public Image Concerns ... 14
  Purpose and Value of TCQS ... 14
EVALUATION OF FIRST-YEAR TCQS REPORTS ... 15
  Review of the Process ... 15
  Review of the Statewide Results ... 19
CONCLUSIONS AND RECOMMENDATIONS ... 37

PAGE 5

LIST OF TABLES

Table 1: TQOS Measures ... 3
Table 2: Transit Systems with Fewer than 50 Peak Vehicles - Small Systems ... 19
Table 3: Transit Systems with 50 Peak Vehicles or Greater - Large Systems ... 20
Table 4: Service Frequency QOS Thresholds ... 20
Table 5: Service Frequency QOS Results - Statewide ... 21
Table 6: Service Frequency QOS Results - Small Systems ... 22
Table 7: Service Frequency QOS Results - Large Systems ... 22
Table 8: Hours of Service QOS Thresholds ... 23
Table 9: Hours of Service QOS Results - Statewide ... 24
Table 10: Hours of Service QOS Results - Small Systems ... 25
Table 11: Hours of Service QOS Results - Large Systems ... 25
Table 12: Service Coverage QOS Thresholds ... 26
Table 13: Service Coverage QOS Results - Statewide ... 27
Table 14: Service Coverage QOS Results - Small Systems ... 27
Table 15: Service Coverage QOS Results - Large Systems ... 28
Table 16: Passenger Loading QOS Thresholds ... 28
Table 17: Passenger Loading QOS Results - Statewide ... 29
Table 18: Passenger Loading QOS Results - Small Systems ... 30
Table 19: Passenger Loading QOS Results - Large Systems ... 30
Table 20: Reliability QOS Thresholds ... 31
Table 21: Reliability QOS Results - Statewide ... 32
Table 22: Reliability QOS Results - Small Systems ... 32
Table 23: Reliability QOS Results - Large Systems ... 32
Table 24: Transit versus Auto Travel Time QOS Thresholds ... 33
Table 25: Transit versus Auto Travel Time QOS Results - Statewide ... 34
Table 26: Transit versus Auto Travel Time QOS Results - Small Systems ... 34
Table 27: Transit versus Auto Travel Time QOS Results - Large Systems ... 35
Table 28: Service Coverage QOS Results - Summary ... 35
Table 29: TCQS Results - Summary of All O-D Pairs, Service Frequency, Hours of Service, and Travel Time QOS ... 36
Table 30: TCQS Results - Summary of Top 15 O-D Pairs, Service Frequency, Hours of Service, and Travel Time QOS ... 36
Table 31: TCQS Results - Summary of Top 15 O-D Pairs, Passenger Loading and Reliability QOS ... 36

PAGE 6

EVALUATION OF FIRST-YEAR FLORIDA MPO TRANSIT CAPACITY AND QUALITY OF SERVICE REPORTS

INTRODUCTION

The Florida Department of Transportation (FDOT) is interested in the application of the new transit quality of service framework as presented in the First Edition of the Transit Capacity and Quality of Service Manual (TCQSM). This framework is seen as a tool to augment the systematic evaluation of transit systems performed by FDOT. The goal is to provide a benchmark evaluation of transit systems within a specific time period such that the performance of the systems can be assessed from the transit users' point of view. FDOT required that the Florida Metropolitan Planning Organizations (MPOs) where fixed-route transit service operates coordinate an effort to evaluate those services within their respective regions with respect to the six transit service measures identified in the TCQSM.

This project, an evaluation of these first-year Florida MPO transit capacity and quality of service (TCQS) reports, involved the collection, compilation, and analysis of these reports as gathered from the MPOs. The reports were examined to make preliminary assessments of the overall performance of the transit systems in the state in terms of these new measures. While individual system results are not presented in this report, the six transit quality of service (TQOS) measures are presented in aggregate form for the state as a whole.

More than a general analysis and presentation of the results is contained in this report; this project also sought to evaluate the process undertaken by the transit systems and MPOs in completing this effort. It is understood that the results, i.e., the values of the six TCQS measures for the transit systems, from this first-time endeavor might not be as meaningful as results obtained in future attempts. This process was new for everyone involved, and several issues arose involving the reporting instructions, methodologies, and time frames for analysis; these issues, discussed in this report, impeded the achievement of optimal results for most agencies. It is FDOT's intention to discover and implement the data collection and reporting methodologies that will lead to the best and most valid TCQS results with the minimum effort on the part of the participating transit systems and MPOs.

Any problems or inconsistencies found in gathering and reporting the required data are summarized in this report. Possible remedies and improvements to the process for subsequent years, based on the experiences of this first year, are also provided in the form of a series of recommendations.

PAGE 7

The purpose of the TCQS measures is to establish a means of evaluating the quality of transit service, from the users' perspective, that can be comparable to the level of service measures used for roadways, which are also designed from the perspective of the user (i.e., level of congestion). It is the hope of FDOT that the routine implementation of this procedure in the future will lead to increased investment in transit services throughout the state by prompting the allocation of resources toward the improvement of transit services with poor TCQS measures, similar to the response when roadways are deemed to perform poorly. This is the first known statewide use of this new customer-oriented transit performance evaluation procedure, and the results of this project will be beneficial to other DOTs, MPOs, and transit systems throughout the country that may be interested in the application of the performance measures found in the TCQSM.

The following section of this report contains background information on the six transit quality of service measures found in the TCQSM and the evaluation process itself. Then, the statewide TCQS results from the first-year effort, in terms of these measures, are presented in detail. Another section identifies problems and difficulties that were encountered in this first-year process based on the experiences of the participating agencies. Finally, recommendations are included that suggest remedies, refinements, and improvements to the process so that, in the future, the most beneficial and robust results will be obtained for the state.

FDOT TRANSIT QUALITY OF SERVICE INITIATIVE

FDOT required, as part of the FY 2000-2001 Florida State Planning Emphasis Area, that Florida MPOs where fixed-route transit exists organize an effort to evaluate those fixed-route services in terms of the six transit quality of service measures in the TCQSM. The evaluation was to be incorporated into each MPO's FY 2000-2001 Technical Work Program. The six transit quality of service (TQOS) measures evaluated are:

- service coverage;
- service frequency;
- hours of service;
- transit travel time versus auto travel time;
- passenger loading; and
- reliability (on-time performance or headway adherence).

PAGE 8

The TQOS framework, as presented in the First Edition of the TCQSM, focuses on transit service availability and comfort and convenience from the users' point of view, and culminates in the six measures previously listed. The first three measures, service coverage, service frequency, and hours of service, relate to the availability of transit service to the user. The measures of travel time (transit versus auto), passenger loading, and reliability are associated with the comfort and convenience of the service to the transit user. Each of the six TQOS measures is expressed on a scale from "A" to "F," similar to roadway level of service measures, with "A" denoting the best quality of service and "F" representing the worst quality of service. Table 1 gives a brief description of the definitions of the measures. Further detail is provided later in this report.

Table 1: TQOS Measures
Service Coverage: percent of transit-supportive area served
Service Frequency: headway, in minutes
Hours of Service: hours transit service is available per day
Travel Time (transit vs. auto): travel time difference, in minutes, between transit and auto for the same O-D pair
Passenger Loading: degree of vehicle crowding; space available per passenger
Reliability: comparison of actual versus scheduled arrival times

Each measure, except service coverage, was to be applied to a typical weekday p.m. peak period. Service coverage was evaluated for the typical weekday. A typical weekday was defined as Tuesday, Wednesday, or Thursday, and the p.m. peak period was defined from 4:00 p.m. to 6:00 p.m. The p.m. peak was chosen to mirror the p.m. peak period analysis procedures identified in FDOT's Level of Service Handbook for highways. The TCQS evaluations were to be conducted in March 2001, with a final report from each MPO area due to FDOT by July 1, 2001.

The evaluation process began with the selection of major activity centers in each study area. Large areas with populations of 200,000 or more were to select at least 10 activity centers, while smaller areas with populations of less than 200,000 were to select at least six activity centers. These major activity centers were to contain a balance of trip origins and destinations (although, as discussed later in this report, many more destinations than origins were ultimately selected, which may have impacted these first-year results) and were not to be locations necessarily best served by transit nor best served by automobile. The objective was to choose activity centers where demand is high for people in the community to travel to and travel from. Guidelines were provided for the areas to aid in the selection of the activity centers. Once the activity centers were chosen, trip pairs were developed from each activity center to all of the other activity centers, thus producing at least 90 origin-destination (O-D) trip pairs for the large areas and at least 30 O-D pairs for the small areas.
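To make the "A" through "F" scale concrete, the short sketch below maps a route's scheduled p.m.-peak headway to a Service Frequency QOS grade. The threshold values used here are the First Edition TCQSM service frequency bands as commonly cited (under 10 minutes for "A," less than hourly service for "F"); they are stated as an assumption, and Table 4 of this report gives the thresholds actually applied in the evaluation.

    def service_frequency_qos(headway_min):
        """Map a scheduled headway in minutes to a QOS grade.

        Thresholds are assumed from the First Edition TCQSM; Table 4 of
        this report lists the values used in the Florida evaluation.
        """
        if headway_min < 10:
            return "A"  # frequent enough to arrive without consulting a schedule
        elif headway_min <= 14:
            return "B"
        elif headway_min <= 20:
            return "C"
        elif headway_min <= 30:
            return "D"
        elif headway_min <= 60:
            return "E"
        return "F"      # service less often than hourly

    # Example: a route running every 45 minutes in the p.m. peak grades "E".
    print(service_frequency_qos(45))

The same threshold-lookup pattern applies to the other five measures, each with its own bands.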

PAGE 9

It was necessary to collect additional information on the O-D pairs to assist with the calculation of the TQOS measures. From the local travel demand model, total trip demand (auto and transit), measured in trips per hour, was generated for each O-D pair. In addition, the fastest route, or combination of routes, connecting each O-D pair during the weekday p.m. peak period was determined, as was the number of opportunities to travel per hour from the given origin to the given destination.

The measures for Service Frequency Quality of Service (QOS), Hours of Service QOS, and transit travel times (for use in the Travel Time QOS measure) were developed using existing transit route maps and schedules produced by the individual transit systems for the public. Auto travel times, necessary to complete the process for calculating the Travel Time QOS measure, were derived from the local travel demand model (a short sketch of this comparison appears at the end of this section).

Passenger loading and reliability data were required to be measured for only the 15 O-D pairs with the highest travel demands based on travel demand model results. Measurements on these trip pairs were to be made at the maximum load point for trips departing the origin between 4:00 p.m. and 6:00 p.m. If a passenger would need to transfer from one transit route to another to complete the trip, data were collected for only the first leg of the trip. Reliability information was to be recorded using the arrival time of the vehicle at the maximum load point. Passenger Loading QOS was calculated using Automatic Passenger Counter (APC) data for two transit systems, while all others used field measurements. Reliability QOS could be calculated using Automatic Vehicle Location (AVL) data, but all participating systems in this endeavor, except two, used field measurements. For both passenger loading and reliability, either 10 observations or three days of peak observations should have been made, whichever is greater.

Service Coverage QOS most easily could be determined by utilizing GIS technology. However, if GIS software was not available to an area, a manual method, described in the Agency Reporting Guide, could be applied. Two of the participants in this evaluation utilized the manual technique for measuring service coverage. Data on population, households, and employment were needed by geographical unit, such as Traffic Analysis Zone (TAZ) or census block group. Transit stop or route locations were also necessary. While the reporting agencies were to indicate the type of data used and the year that the data represented, there was no specification as to exactly which data or which year should be used. This makes sense, since various areas around the state may have different types of data more easily available or more recent than others. However, the various data used by the participating agencies did not facilitate a consistent aggregation of the Service Coverage QOS for the state as a whole. This issue is discussed further later in this report.
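As a companion to the measure definitions above, here is a minimal sketch of the Travel Time QOS comparison: the scheduled transit time for the fastest route or combination of routes is compared with the modeled auto time for the same O-D pair, and the difference in minutes is graded. The minute bands are an assumption based on the First Edition TCQSM (Table 24 of this report gives the values actually used), and the example times are hypothetical.

    def travel_time_qos(transit_min, auto_min):
        """Grade the transit-minus-auto door-to-door time difference.

        Bands are assumed from the First Edition TCQSM; Table 24 of this
        report lists the thresholds applied in the evaluation.
        """
        diff = transit_min - auto_min  # positive: transit is slower than driving
        if diff <= 0:
            return "A"  # transit as fast as, or faster than, the auto trip
        elif diff <= 15:
            return "B"
        elif diff <= 30:
            return "C"
        elif diff <= 45:
            return "D"
        elif diff <= 60:
            return "E"
        return "F"

    # Hypothetical O-D pair: 52 minutes by the fastest scheduled route
    # combination versus 20 minutes from the travel demand model.
    print(travel_time_qos(52, 20))  # prints "D"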

PAGE 10

Florida MPOs and transit systems that were to participate in this effort received a comprehensive Agency Reporting Guide prepared by Kittelson & Associates, Inc., for FDOT. In addition, three training courses were held in November 2000, prior to the start of the evaluation in March 2001. The Agency Reporting Guide included detailed instructions on how to use the provided "Transit QOS Reporting Worksheet" prepared in Microsoft Excel format. In addition to completing the information in the evaluation spreadsheets, each participating MPO was to provide a brief written report summarizing the process and results in its respective area. While all data collection was to occur in March 2001, the MPOs' individual reports were due to FDOT no later than July 1, 2001. Kittelson and FDOT staff made themselves available to assist participating agencies throughout the process.

PARTICIPATING AGENCY EXPERIENCES

As part of the evaluation of the first-year Florida TCQS reports, CUTR contacted representatives from each of the participating agencies, including appropriate MPO and transit system staff, to obtain a review of each individual area's experience with this process of measuring transit quality of service. The discussion guide that was used for these informal interviews is included in Appendix A of this report. For each MPO that conducted the TCQS process, CUTR was able to contact at least one transit system representative and an MPO representative. In areas that used a consulting firm to perform the analysis, CUTR was able, in a few cases, to speak with a consultant who was directly involved in the process.

Since this was the first time that the MPOs and transit systems applied the process from the TCQSM in evaluating their fixed-route services, inevitably there were problems, difficulties, questions, and other issues that arose during the process of collecting and reporting the required data. The informal telephone interviews that were conducted with representatives of the participating agencies are summarized herein.

Training Issues

FDOT provided educational TCQS resources to the MPOs and transit agencies. As discussed previously, training classes were offered before the TCQS initiative began. Transit planners, MPOs, and consultants were invited to attend one of the three classes offered. Those who attended believe that the class was indeed helpful. Some planners were not able to attend one of the scheduled training courses. They cited other commitments, not enough advance notice of

PAGE 11

the class by FDOT, and long travel distances as the reasons for not participating. Hence, these representatives had to rely on the written material for guidance.

An Agency Reporting Guide was distributed to provide additional insight. Five agencies reported that they had some difficulties utilizing the guide. These individuals concluded that the guide was difficult to follow and not as informative as it needed to be. For instance, according to an agency planner who was not able to attend one of the classes, "A training course may have given me another perspective. I just used the manual. I didn't understand why we had to collect certain data." Another said, "The directions in the manual were not simple to follow; I had to search for definitions and directions." Finally, some had specific issues related to their system's operations. During the actual collection and reporting process, some questions arose that were not addressed in either the training class or the Agency Reporting Guide. When questions arose, most relied on FDOT for advice. Although all seemed pleased with the correspondence from FDOT, a few representatives from transit systems were disappointed with the technical assistance offered.

Costs and Funding

The costs of undertaking the TCQS evaluation depended upon several factors, including transit system size. The participants cited a range of costs from $3,900 to $50,000. One system, which was able to utilize data from APCs and did not hire an outside consultant, reported the costs as "negligible." Costs included travel to training classes, labor, consultant costs, and materials. Labor to collect data was the largest expense reported. Consequently, systems that were able to use volunteers, bus drivers, or temporary employees to survey trips could substantially lower their costs. Usually, the more modes a transit system had to survey, the more costly the project. In particular, surveying a heavy rail mode was the most costly since all passengers had to be counted on board six cars in 40 seconds. Hence, labor costs rose as transit system size increased. Also, systems that had a major activity center, such as a mall, near a transfer center or super stop encountered a cost savings since data for several trips could usually be collected at one time.

Transit systems utilized a variety of labor options for surveying. To keep costs lower, transit agencies employed volunteers, such as Boy Scout troops, employees' children, and interns. Also, four transit agencies depended upon bus drivers to collect ridership and on-time performance data. Consultants were used by a few agencies. Many times the TCQS tasks were added to a prior contract between consultants or researchers and the transit system. Moreover,

PAGE 12

in-house labor was used, such as full-time planning staff and agency management, to schedule trips, collect data, and report data. Two agencies stated that they used in-house labor for surveying tasks because the bidding process was too lengthy. These agencies reported that their costs increased because some mid-management employees received overtime compensation. In contrast, others used in-house staff because consultants and temporary employees were too costly. Those utilizing in-house staff mostly complained that other tasks were compromised, including National Transit Database (NTD) surveying, special projects, and day-to-day operational oversight.

MPO and Transit Agency Partnerships

For the most part, MPOs and transit agencies worked together to complete the TCQS evaluation. Usually, MPOs provided modeling and geographical analysis while transit agencies were in charge of surveying. In fact, one MPO representative stated that she "worked together with transit to get it done and actually enjoyed it." Another MPO participant discussed a problem in which "everybody was trying to pass the buck" during the beginning of the process. However, after communicating, they all resolved the issue. Also, an MPO participant related the need to be a "watchdog" over the transit agency to make sure figures were reported correctly. He monitored the effort by spot-checking and participating in the data collection process to ensure the legitimacy of the data.

A few transit agencies reported significant participation difficulties with their MPOs. According to one system, its MPO did not accept the responsibility of the report until the very end of the time frame available for this evaluation. The transit system representative was disappointed that the MPO never went to a training class even after several invitation letters, that the MPO balked at running the local travel demand model, and that the MPO did not begin to help until late March, when additional staffing was no longer crucial. This transit agency also complained that the report was not as valuable as it could have been for obtaining more funds from its commission, since the agency is viewed as a biased party. This individual clearly did not understand that the goal is to only provide a statewide summary at this time; individual results may be used internally to look for areas of improvement, but it is not necessary to forward the report to any local governing body.

While there were a few instances of noncooperation between MPOs and transit systems, overall the two local entities worked well together. In some areas, MPO staff performed a larger share of the work involved, while in other areas the transit agencies shouldered the greater responsibility. It is anticipated that the division of tasks between the MPOs and transit systems will vary from area to area, depending upon the relationship between the two, staff interest,

PAGE 13

and staff expertise. However, FDOT intends for this to be an MPO effort and, as such, MPOs must be made well aware that the TCQS evaluation is primarily their responsibility.

Selection of O-D Pairs and Scheduling

Participants discussed the benefits of evaluating trips between activity center pairs. For instance, one consultant said, "The evaluation of trip pairs, while some might find it cumbersome, is probably the best way to go to achieve the purpose of this study." In addition, an MPO representative stated, "The selection of the activity centers is one of the most helpful parts of the study - not how good we can make the system look, but see if this tells us anything and what improvements can be made."

The selection of activity centers was usually not difficult, albeit some had suggestions for improvement. The purpose of the selection process was to select activity centers based on where people are really traveling. However, it was mentioned that the selection process could be biased to make the system look good. If pairs are not selected based on the demand for the origins and destinations, interviewees believe that the data could hamper meaningful decisions. In contrast, if the selection was representative, interviewees stated the benefits of determining how well transit currently serves those demands.

The methodology to determine origin-destination pairs was discussed by a few participants. One small system related the difficulties it had with choosing 10 to 15 O-D pairs since it is located in a more rural area. In fact, two participants decided to change their method of selecting which O-D pairs would end up in the top 15 for sampling. Both discounted the use of collecting data on two-way trips. According to one of these participants, his methodology did not include two-way trips (i.e., both directions of travel for an O-D pair) because he "intuitively used traffic patterns to determine that the CBD to downtown is a stronger movement than the airport to downtown, consequently addressing evening peak period demand and opening up other movements that were examined in order to give a better distribution of trips." Also, a private consulting firm chose to survey the 15 highest travel demand O-D pairs between different activity centers, because the demand estimate from one activity center to another was the same as in the opposite direction. Moreover, this consultant stated that there should be a balance of origins and destinations when selecting activity centers, including selecting residences as origins and all others as destinations.
PAGE 14

Data Collected from APC Units

Two transit systems were able to use APC data to acquire information. While one was able to simply alter procedures already in place, the other had some difficulties because its APC units were newly installed. The new automated counters malfunctioned, causing some passenger loading information to not make sense. Hence, an extension was necessary to obtain better data. Additional time was necessary to learn how to use the APCs and extract meaningful data from them. Once staff became familiar with the units, this transit agency's representative believes, the process would be much smoother.

Both transit systems saved substantial labor surveying costs by using APCs, albeit scheduling proved to be challenging for the interlined routes. Those vehicles that had the APC equipment had to be rearranged to make the randomly selected trips. Two weeks were needed to assign the appropriate bus (bus with APC) to the appropriate route for 10 units of 10 activity centers. Moreover, due to data extraction difficulties, both would like to collect data by route or month instead of by trip. Overall, APCs proved to be useful and efficient once employees were knowledgeable on how to use them.

Time of Year, Window of Time, and Collection Days

Participants were asked their preference for the time of year to conduct this evaluation. Two planners mentioned that the TCQS project should not be done annually, forecasting that the information will change little in one year's time and that the project will continue to be a financial burden. In contrast, three planners mentioned the need to sample during different seasons throughout the year. They described seasonal fluctuations that are not being captured in the current data collection process. Finally, ridership fluctuations occur within the month as well as between months. Hence, one participant believes a data collection system that considers that the beginning of the month usually has the highest ridership, whereas the end of the month has the lowest, would be practical.

Representatives considered workload (i.e., NTD reporting, special events), traffic patterns, and peak ridership seasons in order to determine their seasonal preferences. In fact, many of the planners determined ideal surveying months by their agency's ridership peaks. A system in northeastern Florida mentioned October because it was the transit agency's highest ridership month. May through June was mentioned by systems in Florida's panhandle that experience peak ridership in the summer. In contrast, several planners specifically mentioned that surveying during the summer would not be best since ridership tends to drop, especially in July. February, March, or April was chosen by peninsular transit systems to capture seasonal residents, students, and Spring Break visitors. In summary, seasonal ridership changes affect peak ridership figures, so systems would like to choose a month that best suits their particular fluctuations.

PAGE 15

This methodology would be appropriate if the goal of the TCQS evaluation is to show an aggregate view of how transit systems perform in their peak seasons.

All but one of the transit agency representatives agreed that the time frame of one month was too short. The systems with hourly frequencies had the greatest difficulties in collecting the minimum number of samples, whereas those with high frequencies had the least. They cited agency coordination, interlining issues, scheduling difficulties, collection errors, special events, and labor shortages as reasons to lengthen the period of time for collecting data. Some said that they were not able to survey all pairs in a week. Also, one stated the importance of gathering a larger sample size. Suggested time frames ranged from 45 days to 12 weeks. Flexibility to choose the sampling weeks within a certain period of time also was suggested by a few transit system representatives.

Typically, the data collection days are Tuesday through Thursday. However, 10 of the interviewed planners would like to at least add Mondays as an additional survey day, since three days of the week do not provide enough opportunity for them to evaluate pairs. Also, three participants from smaller systems mentioned that adding Fridays would be beneficial for them because those days are fairly typical in terms of weekday ridership and would not cause data discrepancies. Only two participants did not like the idea of adding collection days. They mentioned holiday and flex-time issues as well as consistency with the data collection procedures for auto travel.

A.M. versus P.M. Peak Periods

There was not a consensus among the respondents when asked for their preferred peak period. Although two would like to measure both a.m. and p.m. peaks, all other participants responded differently. One planner indicated that peaks measure professional workers' traveling patterns rather than those of shift workers who may work unorthodox hours. Most preferred measuring during the a.m. peak because they believe it would produce better results, since ridership is typically higher and more concentrated than in the p.m. peak. In fact, one planner stated that the p.m. peak period "would not be as conclusive because people tend to leave work at varying times and destinations typically vary due to errands." In addition, another system planner mentioned that the a.m. peak is better due to a higher amount of college student ridership. Finally, a few transit system representatives believe that their p.m. peak begins at 3:00 p.m. and lasts as long as 7:00 p.m., while the sampling time frame is from 4:00 p.m. to 6:00 p.m.

PAGE 16

Route Evaluation versus Trip Evaluation

When asked whether the TCQS evaluation should focus on routes or O-D pairs, all but one of the transit agency respondents indicated that they would like to survey and report an entire route. These respondents suggested that evaluating routes would make it easier to conduct the surveys, extract the data, and report to constituents. In contrast, a consultant, an MPO representative, and a large transit authority respondent stated the need to collect trip information in order to find out whether transit meets the demands of where people want to go. As the consultant said, "So what if a route has good frequency and convenient hours of service if the route does not get people where they want and need to go. Trip evaluations will help agencies see what is important."

Finally, a few other participants suggested that the maximum load point on the first leg of the trip (if a transfer was needed to complete a trip, only the first leg of the trip needed to be observed for the loading and reliability information) may not be the maximum load point for the whole trip. Hence, the "whole picture" cannot be seen if measurements are taken only on the first leg. One proposal to fix this problem included averaging the frequency and hours information for the routes involved in the trip and then taking the measurements on the route segment that does have the maximum load point, which may not be on the first leg.

Data Collection and Sample Size

Several transit planners would like to have the opportunity to use larger sample sizes. Citations of wasted resources were linked ...
PAGE 17

Spreadsheet Issues

... score for a major activity center that does not have transit service. A consultant who helped to develop the spreadsheet would like to add a column to account for congestion in subsequent reviews. Also, the database management program, Transit Level of Service (TLOS), was not usable for a few of the larger transit systems' data sets. Finally, two agencies responded that the Microsoft Excel format was difficult to utilize in Lotus, rendering automated features of the parent Excel file useless. Consequently, a suggestion was made to use a standalone program, similar to ART-PLAN.

Travel Time Issues

There were issues with the comparison of bus and automobile travel times. First, comparing transit travel time during the peak with average daily automobile travel time was deemed unfair, since actual bus travel times should be evaluated relative to similarly congested conditions on area roadways. Complaints that theoretical automobile results were compared with field data included: "it is impossible to estimate and compare the number of trips that are made by automobile during the peak period since the FSUTMS model only reports daily travel demand between the TAZs." Also, bus travel times are based upon schedules while automobile trips are not, rendering an "apples and oranges" comparison because schedules do not take into account congestion. Accordingly, this comparison "makes results look worse when they were quite normal." Second, the route taken by a bus from one activity center to another may not be the same route an automobile driver would use. Third, according to a few participants, FSUTMS understated auto travel time values. For instance, trips by auto during the peak were projected to take half the time they would realistically take, and the seasonal adjustment factors were programmed to be too lenient for busy shopping districts and drawbridge allowances. Travel demand models differ from area to area and produce results in varying forms; this must be considered when evaluating the individual area results on an aggregate statewide basis.

Scoring A-F

All system representatives were disappointed with the failing grades they received, although the measures were not unexpected in most cases. The systems were rated on an alphabetical grading scale (A to F) that is similar to the grading scale for roadways. This scoring system was too difficult to pass, according to several transit planners. A few stated that their service is of good quality and meets the needs of their customers even though they received failing grades (Fs). In fact, one participant stated, "you have to be perfect to get a B or C on the evaluation." It is true that one aspect of transit service that is not addressed by the TCQS framework is the real uniqueness of each individual area and its transit service, and whether the service on the streets is congruent with community goals for transit.

PAGE 18

On-time performance ratings often were mentioned as unrealistic. Interviewees discussed many issues, including schedules that are not designed for peak service, recording arrival times rather than departure times, buses stuck in road congestion from poor highway LOS, and utilizing the time points in schedules rather than the stops in between. Recommendations were given to alleviate some of these on-time performance grading issues. Since "everyone's five minutes late in the peak," there should be a longer on-time performance grace period. In fact, one transit planner stated that there should be a sliding scale. He thought that it was unfair that "30 minutes late counts the same as six minutes late" when "often a six-minute delay can be made up later in the run." In addition, many of the maximum load points occur at transfer centers, where many buses meet for timed transfers, and recovery time is often built into schedules at these transfer centers. However, if a bus arrived at a transfer center more than five minutes early but left on schedule, it was still considered to not be on-time according to the guidelines given for measuring on-time performance. (A sketch of this grading logic appears at the end of this section.)

Other suggestions to enhance the TCQS scoring framework also were made. The issues encompassed a wide range of topics: seating capacity, transit-supportive areas, and unique route and service structures. One participant noted that a passenger's decision to ride would not be affected if he or she had to stand during a short trip, yet the scoring system for passenger loading does not seem to account for the fact that some might not mind standing for short trips. This individual also thought that riding a bus with 12 square feet per passenger or one with 6 square feet per passenger would not matter to his customers. A more suburban transit system declared that some of the standards, such as 24-hour service, were not applicable to its mostly elderly customers. Moreover, according to one planner, his system did not get credit for an area that is not transit-supportive when measuring by TAZs, but it would get credit if he measured using census tracts (although census tracts are, of course, larger units). Another stated that his system did not get credit for the service that exists in non-transit-supportive areas; however, this individual did not understand that the purpose of the service coverage information is to focus on coverage of transit-supportive areas only. Also, another system had difficulty choosing a TAZ when some activity centers encompassed many TAZs (i.e., one downtown consisted of 17 TAZs).

Agencies with unique routing and service structures encountered difficulties. Those agencies that have to serve long but narrow geographical areas (i.e., 75 miles) had problems achieving high QOS measures. Service that included hub-and-spoke routing structures was hurt by the scoring system, since transfers were needed to get from one activity center to another. Also, a system that serves the eastern and western parts of its county separately, except for one connector route, believes that it was misrepresented when comparing the commuters' park-and-ride activity centers in the western portion of the county to the activity centers near the beach on the eastern side that are utilized by tourists. Hence, comparing the activity centers in the east separately from the activity centers in the west would fit the transit customer patterns better. This system's MPO planner thought that an evaluation of the core service area only would be more representative.
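To illustrate the on-time rules described above, the sketch below computes an on-time percentage from a set of arrival deviations at the maximum load point and maps it to a Reliability QOS grade. The five-minute lateness window and the treatment of arrivals more than five minutes early as not on-time follow the guidelines as the interviewees describe them; the percentage bands are an assumption based on the First Edition TCQSM (Table 20 of this report has the values actually used), and the observations are hypothetical.

    def on_time_share(deviations_min):
        """Share of observations arriving within the on-time window.

        Deviations are minutes late (negative = early). Per the guidelines
        described above, a bus more than five minutes early counts as not
        on-time, as does one more than five minutes late.
        """
        on_time = sum(1 for d in deviations_min if -5 <= d <= 5)
        return on_time / len(deviations_min)

    def reliability_qos(share):
        """Map an on-time share to a grade; bands assumed from the TCQSM."""
        if share >= 0.975:
            return "A"
        elif share >= 0.95:
            return "B"
        elif share >= 0.90:
            return "C"
        elif share >= 0.85:
            return "D"
        elif share >= 0.80:
            return "E"
        return "F"

    # Ten hypothetical observations, in minutes late (negative = early):
    obs = [2, 4, 6, -1, 0, 3, 5, 8, 1, 2]
    print(reliability_qos(on_time_share(obs)))  # 8 of 10 on-time, prints "E"

With bands this tight, even two late trips out of ten drop a route to "E," which is consistent with the planners' complaint that the scale is hard to pass.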

PAGE 19

Public Image Concerns

Three transit system representatives voiced significant public image concerns. They are concerned that once the report is generated at the local level, "any reporter can come in and ask to see it." Hence, the report is open to wrong interpretations by public leaders and the media. Realizing that the state's purpose is to aggregate the individual transit systems' data into statewide figures, these agencies are asking for help from the state in explaining the purpose of the TCQS evaluation to pressuring public groups who may utilize the information incorrectly and thereby cause negative public opinion followed by fewer funding opportunities. One system representative did mention correspondence and useful guidance from FDOT when facing this issue. She also stated that including information about handling requests for information in the upcoming training classes may be useful to other system planners.

Purpose and Value of TCQS

Participants were asked their perception of the purpose and value of the TCQS effort. There were mixed sentiments. Two planners considered the comparison between auto and transit travel time to be useful. Moreover, according to one consulting firm, one of the most valuable parts of the evaluation was the infrastructure/amenities analysis. Many participants found the selection of activity centers to be a very useful endeavor that set the stage for measuring how well the existing transit services meet the travel demands of the area. These participants agreed that this process has the potential to provide results that would be useful not only to FDOT for its purposes but also to the individual area.

Most agree that the state's purpose, to gain attention and funding for transit, is good. However, many also stated that the current process is not very valuable since it is costly and not helpful to transit agencies. A common suggestion was to integrate the TDP and TCQS processes as much as possible. Most transit system and MPO representatives complained that they already knew the outcomes of this project. Smaller systems emphasized that it took a lot of time and did not tell them anything new, since they can tell what is going on from day to day and are able to make adjustments on a daily basis. Larger systems also complained that the report was not useful since it replicates surveying and evaluations that are already being done for other reasons, such as TDPs. Consultants argued that, while the evaluation quantified a lot of intuitive factors, the failing grades did not help to generate solutions and did not give direction for future transit services.

PAGE 20

It is important to note that these comments were collected before any of the participants had a chance to see the statewide results of this analysis. Most of those interviewed by CUTR for follow-up purposes had seen some of the initial results and had more positive reactions to this effort and its purpose.

EVALUATION OF FIRST-YEAR TCQS REPORTS

This section addresses the process undertaken by the participating agencies and also analyzes the resulting TQOS measures. CUTR collected and reviewed each of the reports submitted by the MPOs for the purpose of providing an overall assessment of how well the MPOs and transit systems conducted the evaluation. In addition, all of the data submitted by the MPOs and transit systems using the standardized electronic spreadsheets were compiled into a format that can be used in the development of a statewide report to provide an overall evaluation of the performance of Florida's transit systems based on the six TQOS measures.

Review of the Process

All but one of Florida's MPOs where fixed-route transit systems exist, and that were required to participate in this effort, did so and submitted a report to FDOT. This resulted in 17 MPO TCQS reports representing 18 fixed-route transit systems. All but six were received at FDOT by July 1, 2001. Three others were submitted later in July, and the remaining three were submitted after July. Each participating agency submitted a completed spreadsheet. While a few submitted only the completed spreadsheets, others prepared additional written materials ranging from a simple letter or memorandum to detailed bound reports. The transit agencies represented in this evaluation are listed on the following page:

PAGE 21

Broward County Transit
Escambia County Area Transit
Gainesville Regional Transit System
Hillsborough Area Regional Transit Authority
Jacksonville Transportation Authority
Lakeland Area Mass Transit District*
Lee County Transit
Lynx (Central Florida Regional Transit Authority)
Manatee County Area Transit**
Miami-Dade Transit
Ocala/Marion MPO (SunTran)
Palm Beach County Transportation Agency
Pasco County Public Transportation
Sarasota County Area Transit**
Space Coast Area Transit
Tallahassee Transit
Volusia County dba VOTRAN
Winter Haven Area Transit*

*Lakeland's and Winter Haven's services were evaluated together by the Polk TPO.
**Manatee's and Sarasota's services were evaluated separately by the Sarasota-Manatee MPO.

All of the participating agencies selected at least the minimum number of activity centers. Five transit systems were evaluated using more than the minimum number of selected activity centers for their respective area populations. Although guidance was given to assist the areas in selecting major activity centers, the resulting choices were heavily represented by typical destinations (as opposed to typical origins).

Auto travel times between the O-D pairs were to be determined by local travel demand model output. As mentioned previously, several participants indicated that these travel times derived from FSUTMS were suspect. While some systems thought that the times were overstated, most believed they were significantly understated. The issue of comparability between the theoretically estimated auto travel times and the transit travel times recorded from actual transit schedules was a contentious one. In addition to the apparent "apples to oranges" comparisons, it was noted that FSUTMS measures the times between the centers of TAZs, while the transit times are calculated from point to point.

The compilation and reporting of the service frequency, hours of service, and transit travel time information for the O-D trip pairs was relatively straightforward, with the relevant information being readily available from published transit maps and schedules. Complications tended to arise with the collection of the loading and reliability data for the top 15 O-D pairs, and stemmed from both the determination of the top 15 pairs and the methods applied to collect the pertinent information once the top 15 pairs were selected.

PAGE 22

Two of the six TQOS measures, Passenger Loading QOS and Reliability QOS, were to be measured using field observations or data from APCs or AVL equipment (two systems in this evaluation used data from APCs and AVL; all others used manual observations for both loading and reliability). The 15 O-D pairs with the highest travel demands, as estimated by the local travel demand model, were to be selected for sampling of the passenger loading and reliability data. The objective was to measure how well transit serves the trips with the highest travel demands in the area.

There is no guidance in the reporting materials provided to the agencies concerning under what conditions, if any, one or more of the top 15 O-D pairs should be removed and substituted with other pairs. It might seem that, if both directions of an O-D pair appeared in the top 15 (e.g., CBD to Airport and Airport to CBD), one should be eliminated. However, this is not necessarily true. In most cases, the travel demand model estimated different levels of travel demand for each direction of an O-D pair. If the purpose of this TCQS effort is to measure the capacity and quality of service of transit in serving the origins and destinations with the highest demands, then none of the top 15 should be eliminated. However, one participant did remove O-D pairs from the top 15 if the pair traveling in the opposite direction was also included in the top 15 (although the travel demand figures were different). The O-D pairs with the next highest levels of travel demand were then substituted for these eliminated pairs, also ensuring that at least one trip including each of the 10 activity centers was surveyed. In addition, this participant applied some reasoning as to which direction the trips were ultimately surveyed. For example, for a trip with high travel demand from a residential area to an employment center, the reverse trip was actually surveyed, since the journey from the employment area to the residential area would represent the p.m. peak travel pattern, despite the fact that the O-D pair in this direction was not one of the top 15 trips.

Two participants did not collect data on 15 trips; one used the 10 trips with the highest travel demands (they were instructed to do so in error) and another used 7 trips, 6 of which departed from the same origin and none of which were in the top 15 O-D pairs in terms of travel demands. Four other participants also substituted other O-D pairs for trips in the top 15. In one case, this was due to problems with APCs. Another participant collected information for eight of the top 15 trips and substituted seven others, and one other collected data for seven of the top 15 and substituted eight other trips, including the trip with the lowest travel demand of all 90 O-D pairs. At least one of these participants wished to evaluate at least one trip representing each of the activity centers and so "overrode" the top 15 trip selection based on the estimated travel demands. Finally, one participant sampled only four trips out of the top 15, substituting 11 others. In this case, the reason may be that this particular participant did not survey any trips that did not begin and end between 4:00 p.m. and 6:00 p.m., and so may have substituted other trips that could meet this criterion.


Two participants did not collect data on 15 trips; one used the 10 trips with the highest travel demands (they were instructed to do so in error) and another used 7 trips, 6 of which departed from the same origin and none of which were in the top 15 O-D pairs in terms of travel demands. Four other participants also substituted other O-D pairs for trips in the top 15. In one case, this was due to problems with APCs. Another participant collected information for eight of the top 15 trips and substituted seven others, and one other collected data for seven of the top 15 and substituted eight other trips, including the trip with the lowest travel demand of all 90 O-D pairs. At least one of these participants wished to evaluate at least one trip representing each of the activity centers and so "overrode" the top 15 trip selection based on the estimated travel demands. Finally, one participant only sampled four trips out of the top 15, substituting 11 others. In this case, the reason may be that this particular participant did not survey any trips that did not begin and end between 4:00 p.m. and 6:00 p.m., and so may have substituted other trips that could meet this criterion.

In another case, for a few O-D pairs, zero hours of service and zero travel opportunities per hour were noted, but other data on those trips were included (except loading and reliability). This may have been because this particular system did not take any measurements on trips that could not be completed between 4:00 p.m. and 6:00 p.m., although the Agency Reporting Guide states that measurements should be taken on trips departing the origin between 4:00 p.m. and 6:00 p.m. For four of another system's top 15 trips, zero travel opportunities per hour were reported, along with "n/a" for transit travel time; yet other data were reported, including loading and reliability information. The reasons for these apparent discrepancies remain unclear.

For Passenger Loading QOS, eight participants collected fewer than the minimum sample size of 10 observations, with one system having only two or three observations for each of the top 15 trips. In some cases, fewer than 10 observations were made for all 15 trips, while in other cases, some of the 15 trips had fewer than 10 observations and the remaining trips had more than 10. Only four participants sampled fewer than 10 trip occurrences for the Reliability QOS measure. Some of the participants that had fewer than 10 observations for the Loading QOS had more than 10 observations for the Reliability QOS. This may stem from the fact that the section of the Agency Reporting Guide dealing with passenger loading did not explicitly state that a minimum of 10 observations was necessary for the Loading QOS, while the minimum was specifically mentioned in the section regarding the Reliability QOS.
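A simple completeness check run before reporting would catch such shortfalls. The following is a sketch only, with an invented data layout; it is not part of the Agency Reporting Guide.

    MIN_OBSERVATIONS = 10

    def short_samples(observations):
        """observations: dict mapping a trip label to its list of field observations."""
        return {trip: len(obs) for trip, obs in observations.items()
                if len(obs) < MIN_OBSERVATIONS}

    # Illustrative: one trip meets the minimum, one falls short.
    obs = {"CBD -> Airport": [1.2] * 12, "Mall -> Hospital": [0.8] * 3}
    print(short_samples(obs))  # {'Mall -> Hospital': 3}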


Service Coverage QOS could be calculated using either GIS or a manual method. Two participants in this evaluation calculated their service coverages manually. It would be difficult to use the individual participants' results for this measure in a statewide analysis of transit service coverage due to the fact that different types and different years of data were used. Data from the years 1990, 1995, 1996, 1997, 1998, 1999, and 2000 were utilized (three participants had year 2000 data available for use in this analysis). As 2000 data become more widely available in the coming year, all participants should be able to use the most recent data, thus facilitating a valid analysis of statewide transit service coverage.

The next section presents the statewide results of this first-year Transit Capacity and Quality of Service Evaluation. Results and references to individual participants in this evaluation are purposefully omitted since the objective is to examine quality of service on an aggregate statewide basis.

Review of the Statewide Results

In an effort to provide a benchmark evaluation of transit systems within a similar time period, the overall performance of Florida transit systems was assessed using measures of current performance. Specifically, transit performance was evaluated using the six TQOS measures identified in this report. These measures included: service frequency, hours of service, service coverage, passenger loading, reliability, and transit versus auto travel time. As stated previously, these measures were evaluated to determine the level of service from the users' or riders' points of view and are represented by a scale of "A" through "F," with "A" representing the best service from the passenger's point of view, and "F" representing the worst service. All of the measures, with the exception of service coverage, were applied by all of the transit systems for a typical weekday afternoon peak period during March 2001.

The following sections contain a description of each measure and the procedures used to accomplish the evaluation. In addition, the results of the evaluation are presented on a statewide level as well as by transit system size. To avoid comparing systems of widely varying sizes, the analysis is presented separately for transit systems that operate fewer than 50 peak vehicles and transit systems that operate 50 peak vehicles or greater. In this evaluation, there are 10 transit systems with fewer than 50 peak vehicles and 7 transit systems with 50 peak vehicles or greater. It should be noted that the systems in Lakeland and Winter Haven were evaluated together as one system. The groups are presented in Tables 2 and 3.

Table 2: Transit Systems with Fewer than 50 Peak Vehicles - Small Systems

System Name                                                      FY 2000 Peak Vehicles
Ocala/Marion MPO (SunTran)                                       5
Pasco County Public Transportation (PCPT)                        11
Manatee County Area Transit (MCAT)                               12
Space Coast Area Transit (SCAT-Brevard County)                   17
Sarasota County Area Transit (SCAT-Sarasota County)              28
Lakeland Area Mass Transit District/Winter Haven Area Transit    31
  (Citrus Connection/WHAT)
Escambia County Area Transit (ECAT)                              33
Lee County Transit (LeeTran)                                     43
Tallahassee Transit (TalTran)                                    44
County of Volusia dba VOTRAN                                     46


Table 3: Transit Systems with 50 Peak Vehicles or Greater - Large Systems

System Name                                                      FY 2000 Peak Vehicles
Regional Transit System (RTS-Gainesville)                        58
Palm Beach County Transportation Agency (PalmTran)               125
Jacksonville Transportation Authority (JTA)                      157
Hillsborough Area Regional Transit Authority (HART)              162
Central Florida Regional Transit Authority                       175
  (Lynx-Orange, Seminole, and Osceola Counties)
Broward County Mass Transit Division (BCT)                       230
Miami-Dade Transit (MDT)                                         625

Service Frequency QOS

An important measure in determining the quality of transit service from the perspective of the user is the frequency of scheduled service. The service frequency level of service is a measure of scheduled fixed-route and rail service and is usually measured either by headway or number of vehicles per hour. This measure is one of the most relied upon when determining customer satisfaction, and improving frequency is often considered when transit systems wish to strengthen their core ridership and attract new riders. While transit-dependent riders often have to adjust to the prevailing schedules, it is very difficult to attract discretionary riders out of their automobiles with infrequent service. According to the TCQSM, the designated service frequency measure for urban scheduled service is headway. The relevant thresholds are shown in Table 4.

Table 4: Service Frequency QOS Thresholds

QOS  Headway (min.)  Vehicles per Hour  Qualitative Threshold
A    <10             >6                 Passengers do not need schedules
B    10-14           5                  Frequent service; passengers consult schedules
C    15-20           3-4                Maximum desirable time to wait if bus/train missed
D    21-30           2                  Service unattractive to choice riders
E    31-60           1                  Service available once during hour
F    >60             <1                 Service unattractive to all riders

According to the standards, QOS A implies that transit vehicles arrive frequently enough that passengers need not refer to a route schedule to determine when the next vehicle will arrive. At the other end of the spectrum, headways greater than 60 minutes earn QOS F, a frequency that is unattractive to all riders, regardless of their level of dependency on transit service.
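In code form, the Table 4 thresholds amount to a banded lookup on headway. The sketch below is illustrative only; the function name and form are ours, not the TCQSM's.

    def service_frequency_qos(headway_min):
        """Map a scheduled headway in minutes to the Service Frequency QOS letter (Table 4)."""
        if headway_min < 10:
            return "A"  # passengers do not need schedules
        if headway_min <= 14:
            return "B"
        if headway_min <= 20:
            return "C"
        if headway_min <= 30:
            return "D"
        if headway_min <= 60:
            return "E"  # service available once during the hour
        return "F"      # unattractive to all riders

    print(service_frequency_qos(60))  # 'E', the most common statewide result below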


Results - Statewide

Table 5 illustrates the statewide frequency level of service results. As shown in the table, nearly half of the trip pairs involved in the evaluation (48.4 percent) received a Service Frequency QOS E, meaning that service was available only once during the hour for 681 origin-destination pairs or trips. Nearly 11 percent of the total trips evaluated perform at a frequency of QOS F.

Table 5: Service Frequency QOS Results - Statewide

QOS    Number of Total Trips  Percent of Total Trips
A      15                     1.1%
B      17                     1.2%
C      179                    12.7%
D      362                    25.7%
E      681                    48.4%
F      152                    10.8%
Total  1,406                  100.0%

Results - Transit Systems with Fewer than 50 Peak Vehicles

The Service Frequency QOS for the systems with fewer than 50 peak vehicles is presented in Table 6. Over half of the evaluated trips for the smaller systems (fewer than 50 peak vehicles) are available to passengers only once during the hour. Just as with the statewide frequency evaluations, QOS E represents more of the smaller systems' trips than any other Service Frequency QOS. None of the evaluated trips for the smaller systems earned a frequency QOS A. This is not surprising, as many smaller systems do not have the resources to provide the type of service that allows passengers to ride without consulting a schedule.


Table 6: Service Frequency QOS Results - Small Systems

QOS    Number of Trips  Percent of Trips  Portion of Total Statewide Trips
A      0                0.0%              0.0%
B      3                0.4%              17.6%
C      48               5.8%              26.8%
D      201              24.4%             55.5%
E      490              59.5%             72.0%
F      82               10.0%             53.9%
Total  824              100.0%

Results - Transit Systems with 50 Peak Vehicles or Greater

The frequency QOS distribution for this group of transit agencies is presented in Table 7. As shown in the table, most of the evaluated trips by the larger systems performed at a frequency QOS E, which mirrors the statewide distribution. However, the trips by the larger systems account for only 28 percent of the 681 statewide trips earning a QOS E. This is somewhat expected, as larger systems tend to have more resources with which to increase frequency. The larger systems accounted for 100 percent of the trips on which service is so frequent that passengers would not need to consult a schedule (QOS A). However, this only accounted for 2.6 percent of the total trips by the larger systems. Instead, most of the trips evaluated earned a QOS D or worse (72.5 percent), implying that many larger transit systems are providing service at frequencies that are unattractive to most riders.

Table 7: Service Frequency QOS Results - Large Systems

QOS    Number of Trips  Percent of Trips  Portion of Total Statewide Trips
A      15               2.6%              100.0%
B      14               2.4%              82.3%
C      131              22.5%             73.2%
D      161              27.7%             44.4%
E      191              32.8%             28.0%
F      70               12.0%             46.1%
Total  582              100.0%


Hours of Service QOS

Another important criterion for determining transit service convenience and frequency from the perspective of the passenger is the hours of operation. Survey efforts, such as those conducted in Transit 2020, suggest that a major complaint of existing users regarding transit service is limited hours of operation. For those passengers who must depend on transit service, inconvenient hours of operation generally require that they adjust their activities and schedules to utilize the service. Those users who have alternative means of travel may simply opt not to use transit when the hours of service are limited.

In the TCQS evaluation, the Hours of Service QOS is a measure of the number of scheduled hours of operation for fixed-route and rail service in a 24-hour period. The number of hours that a transit system operates daily is usually indicative of the type of service and has a significant impact on the kind of users that are attracted to the service. Specifically, the evaluation required that each participating agency or MPO determine the earliest and latest departure times in the day at which several origin-destination trips could be made. The hours between these times were used to determine transit hours of service. Table 8 illustrates the designated Hours of Service QOS thresholds from the TCQSM.

Table 8: Hours of Service QOS Thresholds

QOS  Hours per Day  Type of Service Provided
A    19-24          Night or "owl" service
B    17-18          Late evening service
C    14-16          Early evening service
D    12-13          Daytime service
E    4-11           Peak hour service/limited midday
F    0-3            Very limited or no service

These standards were established under the assumption that passengers find transit systems that operate well beyond the typical work day hours more attractive. Those systems that provide early morning and late night service are considered most convenient from the point of view of the passenger and, consequently, earn a QOS A. On the other hand, the lowest-rated level of service is reserved for those systems that provide very limited service to no service at all.
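As with service frequency, the Table 8 bands reduce to a small lookup. This sketch is our illustration, assuming whole hours of service per day.

    def hours_of_service_qos(hours_per_day):
        """Map daily hours of service to the Hours of Service QOS letter (Table 8)."""
        bands = [(19, "A"), (17, "B"), (14, "C"), (12, "D"), (4, "E")]
        for floor, grade in bands:
            if hours_per_day >= floor:
                return grade
        return "F"  # 0-3 hours: very limited or no service

    print(hours_of_service_qos(13))  # 'D' -- daytime service only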


Results - Statewide

Table 9 shows the statewide distribution of Hours of Service QOS results. As the table indicates, the largest share of the trips involved in the evaluation (28.9 percent) represents systems in which daytime service is typical (QOS D). Nearly 40 percent of the total trips evaluated represent systems that provide some type of evening or night service on at least some routes (QOS A, B, or C).

Table 9: Hours of Service QOS Results - Statewide

QOS    Number of Total Trips  Percent of Total Trips
A      88                     6.3%
B      159                    11.3%
C      297                    21.1%
D      407                    28.9%
E      278                    19.8%
F      177                    12.6%
Total  1,406                  100.0%

Results - Transit Systems with Fewer than 50 Peak Vehicles

The Hours of Service QOS for this group of transit agencies is presented in Table 10. The table illustrates that most of the evaluated trips by smaller systems are made on routes that operate 13 hours per day or less. Similar to the Service Frequency QOS, the ability of smaller agencies to operate at hours well beyond the typical workday may be severely limited by the resources available, as such expansion in hours is a labor-intensive investment.

Although the table does not reveal it, all six of the trips that earned a QOS A (night or "owl" service provided) for smaller systems were provided by one system. This particular system provides, with the exception of the six trips provided at late night or owl hours, peak service only. While the late night hours are an anomaly among smaller systems, the significant provision of peak-hour-only service on particular routes is common. Table 10 shows that 29.6 percent of small system trips are available during peak hour/limited midday service only, and 17.1 percent of small system trips are available three hours of the day or less.


Table 10: Hours of Service QOS Results - Small Systems

QOS    Number of Trips  Percent of Trips  Portion of Total Statewide Trips
A      6                0.7%              6.8%
B      2                0.2%              1.3%
C      142              17.2%             47.8%
D      289              35.1%             71.0%
E      244              29.6%             87.8%
F      141              17.1%             79.7%
Total  824              100.0%

Results - Transit Systems with 50 Peak Vehicles or Greater

The evaluated trips representing the larger systems fared much better with regard to the number of hours during which transit service is provided. As Table 11 demonstrates, the number of trips was distributed fairly evenly among QOS B, C, and D (12-18 hours per day), with a slight edge in the number of trips available 14-16 hours per day. Very few of the large system trips that were evaluated (12 percent) had very limited or peak hour service only.

Table 11: Hours of Service QOS Results - Large Systems

QOS    Number of Trips  Percent of Trips  Portion of Total Statewide Trips
A      82               14.1%             93.2%
B      157              27.0%             98.7%
C      155              26.6%             52.2%
D      118              20.3%             29.0%
E      34               5.8%              12.2%
F      36               6.2%              20.3%
Total  582              100.0%

Service Coverage QOS

Another good indicator of passengers' satisfaction with transit service is whether the bus provides service to the areas where they want to go. Generally, those areas of high population and employment densities are good candidates for transit service. The quality of service measure for service coverage is the percent of the transit-supportive area served for each particular system. For this evaluation, an area is considered transit-supportive if it has a minimum population and/or employment density to support at least hourly service. A density of three houses per acre or four employees per acre is required, and the area must be within walking distance of transit service (within one-quarter mile of a bus stop or one-half mile of a rail or busway station).
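A simplified sketch of this computation follows, assuming each analysis zone carries housing and employment densities and a served/not-served flag. The actual evaluations used GIS buffers or manual measurement; for brevity this version counts zones rather than weighting by acreage, and the data layout is invented.

    def coverage_percent(zones):
        """zones: list of dicts with keys 'hh_per_acre', 'emp_per_acre', 'served'."""
        supportive = [z for z in zones
                      if z["hh_per_acre"] >= 3 or z["emp_per_acre"] >= 4]
        if not supportive:
            return None  # no transit-supportive area to evaluate
        served = sum(1 for z in supportive if z["served"])
        return 100.0 * served / len(supportive)

    # Illustrative zones: both are transit-supportive, one lies within walking distance.
    zones = [{"hh_per_acre": 4, "emp_per_acre": 0, "served": True},
             {"hh_per_acre": 5, "emp_per_acre": 2, "served": False}]
    print(coverage_percent(zones))  # 50.0, graded per Table 12 below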


Units of measurement could include any defined geographic area such as a quarter section, census tract or block group, or TAZ. Table 12 illustrates the Service Coverage QOS thresholds found in the TCQSM used to evaluate the transit systems.

Table 12: Service Coverage QOS Thresholds

QOS  % Transit-Supportive Area Covered
A    90.0-100.0
B    80.0-89.9
C    70.0-79.9
D    60.0-69.9
E    50.0-59.9
F    <50.0

Results - Statewide

Table 13 shows the distribution of all of the systems in the state along the Service Coverage QOS spectrum. Five of the 17 participating agencies (29.4 percent) represent QOS F, i.e., have service coverage of less than 50 percent of the transit-supportive area. While this suggests that several transit agencies are not providing access to areas having sufficient population or employment activity to warrant some type of transit service, Table 13 also indicates that nearly 65 percent of the agencies have service coverage in at least 60 percent of their transit-supportive areas (at least QOS D). Interestingly, no system, small or large, has a Service Coverage QOS B.

It is important to note the difficulty in interpreting the statewide service coverage information. Participating areas used different methodologies (GIS versus manual calculations) and varying years of data (from 1990 to 2000 and nearly every year in between) to determine Service Coverage QOS. With the continuing release of Census 2000 data, the analysis of this measure should be more meaningful with the next effort.


Table 13: Service Coverage QOS Results - Statewide

QOS    Number of Systems  Percent of All Systems
A      4                  23.5%
B      0                  0.0%
C      4                  23.5%
D      3                  17.6%
E      1                  5.9%
F      5                  29.4%
Total  17                 100.0%

Results - Transit Systems with Fewer than 50 Peak Vehicles

The smaller systems seemed to fare better than the larger systems for the Service Coverage QOS measure. This could be partly due to the fact that smaller systems may have fewer transit-supportive areas to be covered. As Table 14 shows, the systems are distributed fairly evenly among the QOS categories, with the exception of QOS B.

Table 14: Service Coverage QOS Results - Small Systems

QOS    Number of Systems  Percent of Small Systems  Percent of All Systems
A      2                  20.0%                     50.0%
B      0                  0.0%                      0.0%
C      3                  30.0%                     75.0%
D      1                  10.0%                     33.3%
E      1                  10.0%                     100.0%
F      3                  30.0%                     60.0%
Total  10                 100.0%

Results - Transit Systems with 50 Peak Vehicles or Greater

Table 15 illustrates that service coverage of the larger systems is relatively consistent across the QOS A through F ratings. In fact, QOS A (coverage of 90.0 to 100.0 percent of the transit-supportive area), QOS D (coverage of 60.0 to 69.9 percent of the transit-supportive area), and QOS F (coverage of less than 50.0 percent of the transit-supportive area) each accounted for two of the seven large systems' results (28.6 percent each).


Table 15: Service Coverage QOS Results - Large Systems

QOS    Number of Systems  Percent of Large Systems  Percent of All Systems
A      2                  28.6%                     50.0%
B      0                  0.0%                      0.0%
C      1                  14.3%                     25.0%
D      2                  28.6%                     66.7%
E      0                  0.0%                      0.0%
F      2                  28.6%                     40.0%
Total  7                  100.0%

Passenger Loading QOS

While a crowded bus signals tremendous ridership, in the case of this evaluation a crowded bus represents an undesirable situation for the transit passenger. Passenger loading denotes the degree of crowding on a transit vehicle and is defined by the load factor, which is the amount of space available per passenger on the vehicle. Table 16 identifies the passenger loading measure thresholds for both bus and rail as identified in the TCQSM. Note, however, that none of Florida's systems uses the rail thresholds, except for Miami-Dade Transit.

Table 16: Passenger Loading QOS Thresholds

       Bus                      Rail
QOS    Sq Ft/Pass  Pass/Seat    Sq Ft/Pass  Pass/Seat  Qualitative Threshold
A      >12.9       0.00-0.50    >19.9       0.00-0.50  No passenger needs to sit next to another
B      8.6-12.9    0.51-0.75    14.0-19.9   0.51-0.75  Passengers can choose where to sit
C      6.5-8.5     0.76-1.00    10.2-13.9   0.76-1.00  All passengers can sit
D      5.4-6.4     1.01-1.25    5.4-10.1    1.01-2.00  Comfortable standee load (for design)
E      4.3-5.3     1.26-1.50    3.2-5.3     2.01-3.00  Maximum schedule load
F      <4.3        >1.50        <3.2        >3.00      Crush loads

For many of the agencies conducting this evaluation, it was counterintuitive that empty transit vehicles warranted the better QOS. However, loading from the perspective of the transit rider and loading from the perspective of the transit provider represent two very different views. Since this evaluation represents the perspective of the passenger, QOS A indicates that there are so many seats available on the transit vehicle that passengers can choose where to sit and do not need to sit next to any other passenger(s). This condition exists until the vehicle is half full. At the other end of the spectrum, QOS F represents those situations where all seats are occupied and there are at least half that many more passengers standing in the transit vehicle.
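Using the passengers-per-seat form of the load factor, the bus thresholds in Table 16 reduce to the following sketch (our illustration; the observation values are invented).

    def loading_qos_bus(passengers, seats):
        """Grade a bus load observation against the Table 16 passengers-per-seat bands."""
        load_factor = passengers / seats
        if load_factor <= 0.50:
            return "A"  # no passenger needs to sit next to another
        if load_factor <= 0.75:
            return "B"
        if load_factor <= 1.00:
            return "C"  # all passengers can sit
        if load_factor <= 1.25:
            return "D"
        if load_factor <= 1.50:
            return "E"  # maximum schedule load
        return "F"      # crush load

    print(loading_qos_bus(passengers=28, seats=40))  # 'B' -- riders can still choose seats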


Each transit agency was only required to calculate the Passenger Loading QOS for the 15 O-D pairs with the highest travel demands. Measurements were to be taken at the maximum load point. If the transit trip from the origin to the destination required a transfer, measurements were taken at the maximum load point of the first segment only. Two of the agencies were able to utilize APC units; however, most of the participants collected this information manually.

Results - Statewide

According to Table 17, most of the trips on which passenger loading was measured were on transit vehicles that had sufficient seat availability such that passengers could be selective when choosing their seats. As shown in the table, QOS A far exceeded the other levels of service with 83.9 percent of the trips of highest demand. Very few of the agencies' top trips were crowded enough to earn QOS F. In fact, the seven trips with QOS F came from one transit agency.

Table 17: Passenger Loading QOS Results - Statewide*

QOS    Determinant (Pass/Seat)  Number of Top Trips  Percent of Top Trips
A      0.00-0.50                177                  83.9%
B      0.51-0.75                19                   9.0%
C      0.76-1.00                3                    1.4%
D      1.01-1.25/1.01-2.00**    0                    0.0%
E      1.26-1.50/2.01-3.00**    5                    2.4%
F      >1.50/>3.00**            7                    3.3%
Total                           211                  100.0%

* Loading data were collected on the top 15 trips in most cases. Some agencies collected these data for fewer than the 15 trips with the highest travel demands, and other trips were substituted. This analysis focuses on the 15 trips with the highest travel demands only; the total number of trips analyzed for this purpose is 211.
** Denotes measurements for a rail vehicle, as opposed to a bus.

Results - Transit Systems with Fewer than 50 Peak Vehicles

Nearly all of the evaluated trips (122 of 129, or 94.6 percent) for small systems had 0.50 passengers per seat or less (QOS A). The remaining trips (seven, or 5.4 percent), which earned QOS B (five) and QOS E (two), were from two transit agencies. While this is not surprising, it indicates that none of the small systems is having problems specific to overcrowding on buses. Table 18 shows the distribution of the Passenger Loading QOS results for transit systems with fewer than 50 peak vehicles.


Table 18: Passenger Loading QOS Results - Small Systems

QOS    Determinant (Pass/Seat)  Number of Top Trips  Percent of Top Trips
A      0.00-0.50                122                  94.6%
B      0.51-0.75                5                    3.9%
C      0.76-1.00                0                    0.0%
D      1.01-1.25                0                    0.0%
E      1.26-1.50                2                    1.6%
F      >1.50                    0                    0.0%
Total                           129                  100.0%

Results - Transit Systems with 50 Peak Vehicles or Greater

The distribution of Passenger Loading QOS results for the larger systems, as Table 19 indicates, is more dispersed among the various levels. While the majority of the top trips for the larger systems were on vehicles with loads that allowed passengers to choose where they would like to sit (84.2 percent, QOS A and B), there also were trips that earned QOS C, E, and F.

Table 19: Passenger Loading QOS Results - Large Systems

QOS    Determinant (Pass/Seat)  Number of Top Trips  Percent of Top Trips
A      0.00-0.50                55                   67.1%
B      0.51-0.75                14                   17.1%
C      0.76-1.00                3                    3.7%
D      1.01-1.25/1.01-2.00*     0                    0.0%
E      1.26-1.50/2.01-3.00*     3                    3.7%
F      >1.50/>3.00*             7                    8.5%
Total                           82                   100.0%

* Denotes measurements for a rail vehicle, as opposed to a bus.

Reliability QOS

The reliability quality of service measure reflects a comparison of actual versus scheduled arrival times of transit vehicles at stops or stations that reflect the maximum load point of the first segment required to take the trip via transit (assuming a transfer is involved; if not, the maximum load point of the route was used for the appropriate direction of travel). On-time performance is a critical factor when evaluating a transit system, as it is indicative of the degree to which passengers can depend on the service. On-time performance is defined as the arrival of the transit vehicle within five minutes of the printed time on the schedule. Trips that are either earlier or more than five minutes later than the time on the schedule are identified as not being on time.
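Under that definition, the on-time percentage for a trip can be computed from schedule deviations as sketched below (our illustration; the observations are invented, with deviation measured in minutes, observed minus scheduled).

    def on_time_percent(deviations_min):
        """Share of observations arriving no earlier than scheduled and at most 5 minutes late."""
        on_time = sum(1 for d in deviations_min if 0 <= d <= 5)
        return 100.0 * on_time / len(deviations_min)

    # Ten observations for one O-D pair; negative values are early arrivals.
    deviations = [0, 2, 7, -1, 4, 12, 3, 5, 1, 6]
    print(on_time_percent(deviations))  # 60.0 -- QOS F per Table 20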


Table 20 identifies the Reliability QOS thresholds as defined in the TCQSM.

Table 20: Reliability QOS Thresholds

QOS  On-Time Percentage  Result
A    97.5-100.0%         1 late transit vehicle per month
B    95.0-97.4%          2 late transit vehicles per month
C    90.0-94.9%          1 late transit vehicle per week
D    85.0-89.9%          More than one late transit vehicle per week
E    80.0-84.9%          1 late transit vehicle per direction per week
F    <80.0%              More than two late transit vehicles per week

An agency receiving a QOS A has been able to maintain its schedule at least 97.5 percent of the time. Understandably, this is the most desirable situation for a passenger who uses the transit service. At the other end of the spectrum, an agency that maintains an on-time rate of less than 80 percent will have a difficult time attracting discretionary transit riders and will risk greatly inconveniencing its core riders. For example, with Reliability QOS E or F, a passenger making a work trip will be late at least one day per week.

Results - Statewide

The distribution of statewide Reliability QOS results reveals that, while the majority of the top trips made reflect poor reliability (55.5 percent received QOS F), there are several trips for which the agencies have been able to maintain a high level of on-time performance (20.4 percent received QOS A). Table 21 outlines these results.


Table 21: Reliability QOS Results - Statewide

QOS    Number of Top Trips  Percent of Top Trips
A      43                   20.4%
B      1                    0.5%
C      22                   10.4%
D      3                    1.4%
E      25                   11.8%
F      117                  55.5%
Total  211                  100.0%

Results - Transit Systems with Fewer than 50 Peak Vehicles

Table 22 indicates that the distribution of Reliability QOS results for the smaller systems mirrors the statewide results in that a clear majority of the trips evaluated exhibited on-time performance levels of less than 80 percent. The small systems appear to have been better able to maintain their on-time performance, however, which is partly evident by the fact that 32 of the 43 statewide trips earning QOS A are from small systems.

Table 22: Reliability QOS Results - Small Systems

QOS    Number of Top Trips  Percent of Top Trips
A      32                   24.8%
B      0                    0.0%
C      14                   10.9%
D      0                    0.0%
E      17                   13.2%
F      66                   51.2%
Total  129                  100.0%

Results - Transit Systems with 50 Peak Vehicles or Greater

Table 23: Reliability QOS Results - Large Systems

QOS    Number of Top Trips  Percent of Top Trips
A      11                   13.4%
B      1                    1.2%
C      8                    9.8%
D      3                    3.7%
E      8                    9.8%
F      51                   62.2%
Total  82                   100.0%

Transit versus Auto Travel Time

The remaining service measure compares the travel time between selected origins and destinations using both transit schedules and model-derived estimates for automobile travel. For those passengers who have alternatives to transit, particularly an automobile, travel time is almost always a factor in deciding which to use. For this evaluation, transit versus auto travel time is determined by measuring the total difference in travel time from an origin to a destination between transit and auto. Issues regarding the disparity in the methods of determining these travel times have been discussed in this report. Many of the participants in this first-time evaluation expressed concern over the model estimates being used to calculate auto travel time, with some indicating that the only valid technique would be to determine auto travel times by actually test-driving an auto on the trip on the same day(s) that transit measurements are taken. Table 24 identifies the thresholds for this measure as presented in the TCQSM.

Table 24: Transit versus Auto Travel Time QOS Thresholds

QOS  Travel Time Difference (minutes)  Threshold
A    <=0                               As fast or faster by transit than by auto
B    1-15                              About as fast by transit as by auto
C    16-30                             Tolerable choice for riders
D    31-45                             Round trip at least an hour longer by transit
E    46-60                             Tedious for all riders; may be best possible in small cities
F    >60                               Unacceptable to most riders
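The grading itself is a banded lookup on the transit-minus-auto difference, as in this sketch (our illustration; the times are invented, in minutes).

    def travel_time_qos(transit_min, auto_min):
        """Grade the transit-minus-auto travel time difference against Table 24."""
        difference = transit_min - auto_min
        if difference <= 0:
            return "A"  # as fast or faster by transit
        if difference <= 15:
            return "B"
        if difference <= 30:
            return "C"
        if difference <= 45:
            return "D"
        if difference <= 60:
            return "E"
        return "F"      # unacceptable to most riders

    print(travel_time_qos(transit_min=48, auto_min=20))  # 'C' -- a tolerable choice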


Results - Statewide

According to the results identified in Table 25, nearly 30 percent of the trips evaluated for travel time would be considered unacceptable to most riders. While nearly five percent of trips evaluated were determined to be as fast or faster by transit than by auto (QOS A), the remaining trips appear to be distributed fairly evenly among the QOS scores of B, C, D, and E.

Table 25: Transit vs. Auto Travel Time QOS - Statewide

QOS    Number of Total Trips  Percent of Total Trips
A      69                     4.9%
B      281                    20.0%
C      240                    17.1%
D      203                    14.4%
E      193                    13.7%
F      420                    29.9%
Total  1,406                  100.0%

Results - Transit Systems with Fewer than 50 Peak Vehicles

According to the results included in Table 26, patrons of smaller systems are more likely to be able to take one of the evaluated trips just as fast on transit as by auto (QOS A or B). However, over 35 percent of the trips are estimated to take more than 45 minutes longer by transit than by auto (QOS E or F).

Table 26: Transit vs. Auto Travel Time QOS - Small Systems

QOS    Number of Total Trips  Percent of Total Trips
A      59                     7.2%
B      211                    25.6%
C      156                    18.9%
D      107                    13.0%
E      102                    12.4%
F      189                    22.9%
Total  824                    100.0%


Results - Transit Systems with 50 Peak Vehicles or Greater

Clearly, the majority of the evaluated trips taken on larger systems had travel times that would be considered unacceptable by most passengers. Just as reliability is affected by traffic congestion, travel time is affected in the same manner. Consequently, it is not surprising that many of the larger transit systems experience difficulty in maintaining faster travel times. Nearly 14 percent of the trips are about as fast by transit as by auto, or faster (QOS A or B), according to Table 27. In contrast, more than 55 percent of the trips are estimated to take more than 45 minutes longer by transit than by auto (QOS E or F).

Table 27: Transit vs. Auto Travel Time QOS - Large Systems

QOS    Number of Total Trips  Percent of Total Trips
A      10                     1.7%
B      70                     12.0%
C      84                     14.4%
D      96                     16.5%
E      91                     15.6%
F      231                    39.7%
Total  582                    100.0%

Summary

The information presented above comprises a statewide evaluation of Florida's transit systems in terms of the TCQS framework. Table 28 below summarizes the Service Coverage QOS information for the state as a whole, larger systems, and smaller systems. Tables 29 through 31 similarly summarize the remaining QOS measures, which deal with trips between specific O-D pairs representing major activity centers within individual agencies' areas.

Table 28: Service Coverage QOS Results Summary

QOS    Statewide             Small Systems (<50 Peak Vehs)  Large Systems (50+ Peak Vehs)
A      4 systems (23.5%)     2 systems (20.0%)              2 systems (28.6%)
B      0 systems (0.0%)      0 systems (0.0%)               0 systems (0.0%)
C      4 systems (23.5%)     3 systems (30.0%)              1 system (14.3%)
D      3 systems (17.6%)     1 system (10.0%)               2 systems (28.6%)
E      1 system (5.9%)       1 system (10.0%)               0 systems (0.0%)
F      5 systems (29.4%)     3 systems (30.0%)              2 systems (28.6%)
Total  17 systems (100.0%)   10 systems (100.0%)            7 systems (100.0%)


Table 29: TCQS Results - Summary of All O-D Pairs: Service Frequency, Hours of Service, and Travel Time QOS

       Service Frequency QOS     Hours of Service QOS      Travel Time QOS
QOS    State   Small   Large     State   Small   Large     State   Small   Large
A      1.1%    0.0%    2.6%      6.3%    0.7%    14.1%     4.9%    7.2%    1.7%
B      1.2%    0.4%    2.4%      11.3%   0.2%    27.0%     20.0%   25.6%   12.0%
C      12.7%   5.8%    22.5%     21.1%   17.2%   26.6%     17.1%   18.9%   14.4%
D      25.7%   24.4%   27.7%     28.9%   35.1%   20.3%     14.4%   13.0%   16.5%
E      48.4%   59.5%   32.8%     19.8%   29.6%   5.8%      13.7%   12.4%   15.6%
F      10.8%   10.0%   12.0%     12.6%   17.1%   6.2%      29.9%   22.9%   39.7%

NOTE: "State" denotes total O-D pairs; "Small" is <50 peak vehicle systems; "Large" is 50+ peak vehicle systems.

Table 30: TCQS Results - Summary of Top 15 O-D Pairs: Service Frequency, Hours of Service, and Travel Time QOS

       Service Frequency QOS     Hours of Service QOS      Travel Time QOS
QOS    State   Small   Large     State   Small   Large     State   Small   Large
A      0.8%    0.0%    1.9%      7.9%    0.7%    17.1%     9.1%    11.7%   5.7%
B      2.1%    0.7%    3.8%      12.4%   1.5%    26.7%     31.8%   41.6%   19.0%
C      21.5%   16.8%   27.6%     26.0%   27.0%   24.8%     18.2%   14.6%   22.9%
D      28.5%   32.1%   23.8%     29.8%   38.7%   18.1%     14.5%   11.7%   18.1%
E      40.9%   47.4%   32.4%     20.2%   29.9%   7.6%      8.7%    9.5%    7.6%
F      6.2%    2.9%    10.5%     3.7%    2.2%    5.7%      17.8%   10.9%   26.7%

NOTE: "State" denotes total O-D pairs; "Small" is <50 peak vehicle systems; "Large" is 50+ peak vehicle systems.

Table 31: TCQS Results - Summary of Top 15 O-D Pairs: Passenger Loading and Reliability QOS

       Passenger Loading QOS     Reliability QOS
QOS    State   Small   Large     State   Small   Large
A      83.9%   94.6%   67.1%     20.4%   24.8%   13.4%
B      9.0%    3.9%    17.1%     0.5%    0.0%    1.2%
C      1.4%    0.0%    3.7%      10.4%   10.9%   9.8%
D      0.0%    0.0%    0.0%      1.4%    0.0%    3.7%
E      2.4%    1.6%    3.7%      11.8%   13.2%   9.8%
F      3.3%    0.0%    8.5%      55.5%   51.2%   62.2%

NOTE: "State" denotes total O-D pairs; "Small" is <50 peak vehicle systems; "Large" is 50+ peak vehicle systems.


CONCLUSIONS AND RECOMMENDATIONS

The first-year Transit Capacity and Quality of Service Evaluation proved to be a valuable learning experience for everyone involved: participating MPOs and transit systems, FDOT, and the consultants and researchers who assisted in this effort. Undoubtedly, the developers of this evaluation process also can learn from this first statewide application of their work. While a detailed presentation of the statewide TQOS results is included in this report, significant emphasis also is placed on the process of the evaluation itself, since it is understood that this year's results are not as robust as those that would be obtained in future efforts when those involved gain a greater understanding of the process and methodologies.

Exhaustive interviews were conducted with representatives of each participating agency (MPOs, transit systems, and consultants) to obtain insight as to how the process unfolded in each individual area and to identify obstacles, problems, and issues that arose during the conduct of the evaluation. In addition, while only statewide results are presented in this report, the results from individual areas have been analyzed separately, not only to see if the outcomes matched expectations but to ascertain how the process was applied in each area. It is important that each participating area follow the same established procedures and apply the TQOS measures consistently to ensure a valid statewide representation of transit capacity and quality of service. This section summarizes the major issues that surfaced during the course of the evaluation and presents a series of recommendations that will help ensure the most valid results from this process in the future.

1 - The minimum number of activity centers for the large and small areas (10 and 6, respectively) seemed adequate for this evaluation. Regarding the selection process, some participants indicated that they looked at this task as a useful exercise to determine the centers with the highest travel demands (where people are coming from and going to) and to see how well transit serves those centers. However, some indicated that the temptation would be very strong to select centers of activity already well served by the existing transit system so as to demonstrate better QOS measures. While FDOT is aware that QOS measures are not expected to be very strong statewide, at the local level, some individual agencies feel the need to look after their own interests and present their transit system in the most positive light for fear of local media gaining access to and misinterpreting the purpose and results of the TCQS evaluation. As a result, it is recommended that activity centers be reselected for each evaluation, as appropriate, to reflect new growth and travel patterns in the area. Also, it might be best for the transit systems to have less involvement in the selection of those activity centers; the MPO or an objective party should oversee the selection process. A balance between origins and destinations should also be achieved.


2 - From the O-D pairs derived according to the activity centers, the local travel demand model provides estimates of total travel demand (auto and transit) for each trip, which are then ranked. Passenger Loading and Reliability QOS measures are then applied to the 15 O-D pairs with the highest travel demands. While this seems straightforward, most of the participants in this evaluation experienced moderate to extreme difficulty in determining the travel times between the activity centers (models will measure between the centers of TAZs, not point-to-point) and expressed discontent that theoretically estimated travel demands and travel times were being compared to actual transit loads and travel times. Some participants believe that, although it would be labor-intensive, the best way to do the comparisons would be to take field observations on auto travel between the activity centers, i.e., drive the trip in an auto on the same day(s) the transit observation(s) are made. This may be unrealistic, especially given the fact that no additional resources are provided to the MPOs to conduct this evaluation, but it would allow true "apples to apples" comparisons. It may be feasible to drive a few of the trips to determine how closely the model results represent actual conditions.

The travel demand models in each individual area often use different years' data, are updated on different schedules, and provide results in varying forms. Individual areas need to be aware of exactly what information the local model is providing, and must be sure that any peak and seasonal factors are applied as appropriate. Since this evaluation is primarily the MPOs' responsibility, MPO staff must take the lead in working with the travel demand model to provide the necessary information for the evaluation. In addition, as resource availability allows, participants should drive some of the trips between activity centers on the same day(s) that transit observations are made to test the validity of the model results.
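Such a spot check could be summarized as a percent deviation of the model estimate from the driven time, as sketched below (our illustration; the trip labels and times are invented).

    def model_deviation_percent(model_min, driven_min):
        """Percent deviation of the model's auto time from the observed driven time."""
        return 100.0 * (model_min - driven_min) / driven_min

    # (model estimate, driven time) in minutes for a few spot-checked trips.
    checks = {"Residential NW -> CBD": (18.0, 26.5), "CBD -> Airport": (18.0, 17.2)}
    for trip, (model, driven) in checks.items():
        # A negative value means the model understates the auto travel time.
        print(trip, round(model_deviation_percent(model, driven), 1))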


3 - Another issue deals with the selection of the top 15 trips and the occurrence of both directions of movement between two activity centers in the top 15 trips. Movement between residential areas and employment centers differs from movement between other pairs of destinations, such as between an airport and a CBD or between a mall and a hospital, for example. In the first case, it might be expected that the same number of persons who travel from a residential area to an employment center would make the return trip. Directional travel demands are not always as predictable for other pairs. Nonetheless, participants were to rank all of their O-D pairs and select the 15 pairs with the highest travel demands for further measurements on passenger loading and reliability. The Agency Reporting Guide did not include any information regarding the logical application of this technique.

Passenger Loading and Reliability QOS results in this report represent only the top 15 trips; others were excluded. However, several participants analyzed the resulting top 15 O-D pairs and tweaked them, either to remove one pair's direction of movement if both were included or to be sure that all activity centers were represented. The latter reason may deal more with the initial selection of activity centers, if trips between some were not recognized in the top 15 pairs. Otherwise, if the top 15 trips include both directions of movement between one or more pairs, and the travel demand results are not identical, then both trips should be included in the final analysis. To ensure consistency across agencies in the state, all participants should analyze their top 15 O-D pairs according to the model results. Further exploration is needed on this issue, and it should be addressed in any future update of the Agency Reporting Guide.

4 - The time frame for the QOS measurements is the p.m. peak period, as defined from 4:00 p.m. to 6:00 p.m. This ensures consistency with roadway traffic measurements. However, suppose one of the trips among an agency's top 15 is from a residential area to a center of employment, such as an industrial park or CBD. It is logical to assume that this trip has the highest travel demands in the a.m. peak as opposed to the p.m. peak. In such a case, the travel demand model is forecasting all-day demands without accounting for time period. One participant, which had such a trip, measured the reverse direction in the p.m. peak. This makes intuitive sense, but this problem would be eliminated if measurements could be taken in the a.m. peak, as well. Several agencies liked the idea of including the a.m. peak, noting that it is a tighter, more concentrated period. People tend to leave for work or school during the time between 7:00 a.m. and 9:00 a.m., but the afternoon peaks tend to be much more spread out, sometimes from 3:00 p.m. to 7:00 p.m. in some areas. This is due to workers leaving their jobs early to run errands or later to take advantage of employers' flex-time opportunities, and students who are often finished with classes in the early afternoon. It is recommended that participants analyze the resulting top 15 O-D pairs and determine the time period (a.m. versus p.m. peak) during which the individual trips would be expected to have the higher travel demands. Then, the trip could be measured during the appropriate time period. If no valid determination can be made, then the measurement should default to the p.m. peak period.

5 - The thresholds for passenger loading are determined using the square footage available per passenger or the number of passengers per seat on the vehicle. This accounts for standee loads and, understandably from the riders' point of view, the more crowded the vehicle, the lower the QOS measure. However, one's level of comfort with a crowded vehicle or even a standee load is usually inversely proportional to the length of the trip. To be certain, there are some individuals who would never be comfortable standing for any length of time due to physical conditions. However, for most individuals, standing for a shorter length of time can be acceptable. Further exploration is needed to determine


whether the length of the relevant trip or segment can be incorporated into the measurements for Passenger Loading QOS.

6 - For the purposes of obtaining passenger loading and reliability information, if a transit trip between activity centers necessitated a transfer, measurements were to be taken at the maximum load point of the first segment required for the trip. Many participants believe that this methodology did not result in a meaningful representation of the entire trip, especially if a needed transfer point was located close to the trip origin. It seems that this issue can be remedied with little additional effort on the part of those involved in the evaluations. For transit trips that include one or more transfers, it is recommended that service frequency and hours of service information be averaged over the routes required to accomplish the trips. The maximum load point along the entire trip should be determined, and then measurements for passenger loading and reliability taken on the segment that represents that maximum load point, which may not be on the first leg of the trip.
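In code terms, the recommended handling might look like the following sketch (our illustration; the leg records and values are invented).

    def trip_summary(legs):
        """legs: list of dicts with keys 'headway_min', 'hours_per_day', 'max_load'."""
        n = len(legs)
        avg_headway = sum(leg["headway_min"] for leg in legs) / n
        avg_hours = sum(leg["hours_per_day"] for leg in legs) / n
        # Survey loading and reliability on the leg carrying the trip's maximum load.
        measure_leg = max(range(n), key=lambda i: legs[i]["max_load"])
        return avg_headway, avg_hours, measure_leg

    legs = [{"headway_min": 30, "hours_per_day": 14, "max_load": 18},
            {"headway_min": 60, "hours_per_day": 12, "max_load": 27}]
    print(trip_summary(legs))  # (45.0, 13.0, 1): measure on the second leg, not the first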


7 - Each participating agency was to collect the field observations for passenger loading and reliability during a one-month time frame in March 2001. Several participants believe that this window of time is too short. For some, especially the systems with less service frequency, it was difficult to obtain the minimum number of observations within the given time frame due to issues with staffing. Systems with less frequent service could collect fewer observations per day and thus needed staff in the field for more days than the systems with higher peak frequencies. Particularly for those agencies that conducted the evaluation in-house, additional staff time out in the field for this purpose meant additional costs, either in terms of overtime payments or in terms of less attention to other necessary tasks regularly assigned to the staff. Another issue was that, if trip observations were missed or collected incorrectly, there were often little or no other opportunities to collect the information as required in the Agency Reporting Guide. Some participants faced with this type of situation went ahead and collected the data outside the March 2001 period, while others substituted other O-D pairs outside the top 15 or simply completed the evaluation using the fewer number of observations. While the need to resample trips due to data collection error will undoubtedly decline as the TCQS process is refined and familiarity among the participants improves, it is recommended that the time frame for collecting passenger loading and reliability data be increased from the current four-week time period to a six- or eight-week period.

8 - Yet another issue to consider regarding the time frame for collecting the data pertains to the varying peak months experienced by agencies in various geographic locations throughout the state. While locations in central and south Florida tend to experience peak travel demands and ridership between February and April, locations in northern Florida, including the panhandle, tend to experience spikes during the summer (June, for example). It is understood that one of the intentions of this first-year effort was to examine a "snapshot" of transit performance across Florida. However, it was also the intention to measure typical weekday transit performance in the p.m. peak of the peak travel time. If it is indeed the case that FDOT wishes to measure the performance of its transit systems during peak ridership months, then participants should be able to choose the four-, six-, or eight-week time frame based upon individual agency ridership variations. Participating agencies would need to provide evidence that the selected time frame represents the ridership peak. In this case, the statewide results would be presented in terms of how well Florida's transit systems perform, overall, during their peak periods. The statewide results could be compiled at the end of a calendar or fiscal year to allow time for each participant to complete the evaluation.

9 - If the recommendation to widen the window of time for the collection of field data is implemented to allow for a six- or eight-week time frame, then the requirement to collect the information only on Tuesdays, Wednesdays, or Thursdays should stand. These three days are representative of the "typical" weekday and are the days used for conducting weekday transit surveys as well as collecting traffic information. However, given the shorter four-week time period, it may be feasible to allow some systems to collect data on Mondays, as well. Several of the participants representing smaller areas indicated that their weekday ridership tends to be very flat, and that there is no statistical variation Mondays through Thursdays (and sometimes even Mondays through Fridays). These smaller areas also tend to have the less frequent transit services, necessitating additional data collection days to acquire the minimum number of observations. It is recommended that, if a participating agency can show valid data to prove that Monday ridership is not statistically different from ridership on Tuesdays, Wednesdays, or Thursdays, then that participant should be able to use Mondays to collect field data. Data collection on Fridays would not be allowable for any of the participants.
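One way an agency might document that equivalence is a simple two-sample test on daily boardings, sketched below with SciPy and invented ridership figures; the choice of test and significance level would be up to the agency and FDOT.

    from scipy import stats

    monday = [4120, 4210, 3980, 4165]  # Monday system-wide boardings (invented)
    tue_thu = [4190, 4075, 4230, 4110, 4055, 4260,
               4140, 4095, 4180, 4200, 4125, 4160]  # Tuesday-Thursday boardings

    t_stat, p_value = stats.ttest_ind(monday, tue_thu, equal_var=False)
    print(round(p_value, 2))  # a large p-value is consistent with "no Monday difference"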


10 - The reliability, or on-time performance, measurements caused problems for many of the participants. Overall, the Reliability QOS results were poor statewide. For all of the participating agencies, 55.5 percent of the observed top 15 O-D pairs were designated QOS F, which translates to on-time performance of less than 80 percent. Approximately 51 percent of the smaller systems' trips and 62 percent of the larger systems' trips reported Reliability QOS F. Reasons for these results offered by participating agencies were numerous. Most indicated that transit schedules are not written for the peak periods and that "everyone" is always late during the peaks. Some observed that, for a bus system operating in mixed traffic right-of-way, poor roadway LOS resulted in poor Reliability QOS, since the transit vehicles must negotiate the congested traffic conditions. Also, it is often the case that a maximum load point, where measurements should be taken, occurs at a transfer center, where several buses may meet for timed transfers. Many times the transit vehicles wait for each other, so if one runs late, they all will be late. Another issue arises when recovery time is built into a schedule: a vehicle may arrive at a transfer center that represents a maximum load point more than five minutes early (designating it as not on-time), but will leave on schedule. Finally, participants noted that being 30 minutes late is counted the same as being 6 minutes late. Most of the time, a vehicle that is just a few minutes late can make up the time later in the run. However, if transit vehicles are routinely 15, 30, or more minutes late, then that can indicate a more serious problem.

The results of the Reliability QOS measures should lead to a closer look at a system's schedules to be sure they are realistic for peak conditions. Peak versus off-peak scheduling might be considered by some. In addition, the driver of a vehicle that is running late to a timed transfer point should be encouraged to radio ahead to the other vehicles waiting at the facility (or inform dispatch) so that only those who will be receiving transfers from the late route will wait while the others go ahead to keep the schedule (though practiced routinely by several systems, this is an internal procedure issue for each transit system, as are any existing service standards or guidelines regarding on-time performance).

Closer examination is needed of the threshold definitions of the Reliability QOS measure. A sliding scale may be appropriate, so that a worse QOS level is associated with a greater number of minutes late. Or, an average score can be developed from the field observations to account for how many times the vehicle was late and by how many minutes.
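One possible form of such an average score is sketched below; the weighting is our invention, not a TCQSM definition, and the observations are invented.

    def lateness_score(deviations_min):
        """Average minutes late beyond the 5-minute allowance; early/on-time count as zero."""
        excess = [max(0.0, d - 5) for d in deviations_min]
        return sum(excess) / len(deviations_min)

    print(lateness_score([0, 2, 7, 30, 4]))  # 5.4 -- one 30-minute delay dominates the score

A score of this kind would distinguish a route that is routinely 6 minutes late from one that is occasionally 30 minutes late, a distinction the current pass/fail definition cannot make.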


Furthermore, while transit service running "hot" (early) is never desirable, and a system's existing service guidelines or standards may include a definition of on-time performance that penalizes service that arrives or leaves too early, it is not clear that such a definition should apply for the TCQS evaluation. An analysis of the results shows that most trips that were not "on time" were late, and those that arrived too early were at transfer centers and did not leave before the scheduled time. It is certainly not desirable from the passengers' standpoint to have transit service that leaves early from a designated stop. However, the measurements taken at the maximum load points for the purposes of the TCQS evaluation usually occurred at transfer points, major activity centers, and/or places where recovery time was built into the schedule. Transit vehicles running hot during their runs will most likely get back on schedule once they arrive at the major activity center or transfer point (i.e., maximum load point). In summary, the TCQS evaluation focuses on trips that both originate and arrive at a major activity center, where it is very unlikely that a transit vehicle will depart ahead of schedule.

11 - Some participants in this evaluation speculated that the application of the TCQS measures to route segments between activity centers does not present a complete representation of the transit service. As described earlier in this report, participants shared their views concerning the evaluation of trip segments versus entire routes, and some believe that by using the TCQS measures to evaluate whole routes, the results would be easier to understand and would render a more accurate portrayal of system performance. It is believed by some that it is unreasonable to try to "fit" transit into a model meant for auto travel. However, other participants realize that, to best evaluate how well transit serves the trips with the highest travel demands, it is necessary to evaluate the trip itself. As such, this report recommends that the evaluation of the transit trips between major O-D pairs continue in subsequent evaluations.

12 - Results from the Service Coverage QOS analysis were difficult to interpret on a statewide level. This was due to the fact that different methodologies and data representing various years were used by the participating agencies. Only two participants used the manual method for calculating the percentage of the transit-supportive area served by transit, as opposed to the GIS method. It is hoped that these two participants will be able to take advantage of GIS capabilities in future evaluations. Regarding the data used, with the continuing release of Census 2000 data, each participant should be able to use year 2000 data for the next TCQS evaluation (only three used 2000 data in this evaluation), thus enabling easier aggregation of the results. Participating agencies should all use data representing the same year, and FDOT may even consider standardizing the geographic unit used in calculating service coverage. If every participant uses the same method and data from the same year, a valid estimation of the percentage of the transit-supportive area in the state served by public transit will be obtainable.

13 - Training courses and materials were provided to the participating agencies in advance of this first-time effort. Training courses were held in November 2000, four months prior to the start of the evaluation. In addition, an Agency Reporting Guide, prepared by Kittelson and Associates, Inc., for FDOT, was provided to the participants. Staff from Kittelson and FDOT also made themselves widely available to assist the agencies in the preparation and conduct of the evaluation. Many participants had little trouble with the process and/or were pleased with the support provided. The training courses were, by all accounts, extremely helpful. However, some areas were unable to send representatives to the training, and those participants tended to encounter more difficulties during the


evaluation, and some areas were not pleased with the level of assistance. Some of the results also indicate that further training is needed, and there are some clarifications that should be made in the written materials provided to the participants. Examples of misinterpretations of the information in the Agency Reporting Guide included the idea that a p.m. peak trip could only be included in the field observations if it could be completed by 6:00 p.m. (trips departing the origin between 4:00 p.m. and 6:00 p.m. should be counted, even if they arrive at the destination past 6:00 p.m.) and the assumption that fewer than a minimum of 10 observations could be collected for the passenger loading data (at least 10 observations were required). It is recommended that, with the experience gained by all involved with this first-time effort, additional training be held in the future, and in locations that are easy to travel to for the participants in north, central, and south Florida (e.g., Tallahassee, Tampa, Ft. Lauderdale). Also, the Agency Reporting Guide should be evaluated and updated to include new or modified procedures and a clarification of other issues.

14 - Strong opinions were voiced by participants regarding the fact that no additional funds were provided to the agencies to conduct this required evaluation, particularly by transit systems that had to shoulder a larger portion of the work involved in completing this effort. Perhaps a few of the MPOs, realizing there were no new funds for this project, expected the transit systems to take on more of the tasks, resulting in the notion of "passing the buck," as expressed by one participant. Costs to conduct this evaluation ranged from "negligible" to $50,000, with most about $4,000 to $5,000. Clearly, in some areas, a much higher level of resources was expended on the TCQS evaluation than should have been necessary. With proper advance planning and by taking advantage of less expensive local labor options if needed (e.g., temporary workers, local college/university students, senior citizen groups, volunteer organizations, etc.), costs should be kept at a minimum. If an area wishes to contract out to conduct the evaluation, it is expected that there will continue to be plenty of advance notice of future evaluations so that the contractual process can be completed. It is anticipated that future efforts will cost less as participants prepare earlier and become more familiar with the process, thus needing to spend fewer hours directly on TCQS tasks.

15 - The Florida MPO TCQS Evaluation was intended to be an annual effort. However, it has already been determined that field data collection will not be required for the next evaluation. The reselection of activity centers on a regular basis is a useful exercise. However, given the fact that the local travel demand models are not updated annually and the fact that transit services do not typically experience significant changes from year to year, the benefits (i.e., useful information) from annual evaluations may not outweigh

Therefore, it is recommended that the TCQS Evaluation be conducted in full as part of the Long Range Transportation Plan update process. Reselecting activity centers and examining transit system performance in serving those centers biennially will capture major changes in travel patterns in the area, and the transit system's response to those changes, even if the local travel demand model is not updated. Once a schedule for the evaluations is determined, each agency expected to participate should plan appropriately for the proper collection and reporting of the TCQS measures. To the extent possible, the TCQS effort should be coordinated with other data collection efforts, such as the Transit Development Plan (TDP) or other in-house programs.

The items and recommendations presented in this report are intended to provide FDOT with an overall assessment of Florida transit performance in terms of the six TCQS measures included in the TCQSM. More importantly, this report evaluates the process of the first-year statewide implementation of these measures and summarizes the experiences of those involved. The objective of this report is to provide FDOT and other interested parties with guidance on refining the process in order to extract meaningful, useful, and valid results in the future, with minimum effort. In general, further research is needed regarding the selection of O-D pairs, data collection for passenger loading and reliability, and the thresholds for the TCQS measures, particularly reliability and overall on-time performance. With better and more consistent results, evaluating statewide transit service on an "A" through "F" scale, similar to roadway LOS, can move the state closer to the ultimate goal of increasing investment in public transit services and can serve as a model for other states with the same objectives.

