Transit service performance analysis and monitoring process : final report


Material Information

University of South Florida. Center for Urban Transportation Research
Metro-Dade Transit Agency. Service Planning and Schedules Division
Place of Publication: Tampa, Fla.
Publisher: Center for Urban Transportation Research (CUTR)
Publication Date: August 1995

Subjects / Keywords:
Bus lines--Florida--Miami--Evaluation ( lcsh )
Bus lines--Florida--Miami--Ridership ( lcsh )
Local transit--Florida--Miami--Evaluation ( lcsh )

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
33213404 ( OCLC )
C01-00264 ( USFLDC DOI )
c1.264 ( USFLDC Handle )


TRANSIT SERVICE PERFORMANCE ANALYSIS AND MONITORING PROCESS

FINAL REPORT

Prepared for:
Metro-Dade Transit Agency
Service Planning and Schedules Division

Prepared by:
Center for Urban Transportation Research
University of South Florida

August 1995




TRANSIT SERVICE PERFORMANCE ANALYSIS AND MONITORING PROCESS
FINAL REPORT

TABLE OF CONTENTS

SECTION                                                          PAGE
LIST OF FIGURES ................................................... v
LIST OF TABLES ................................................... vi
FOREWORD ........................................................ vii
CHAPTER ONE: DATA COLLECTION, ANALYSIS, AND USE ................... 1
    Introduction .................................................. 1
    Current Practices ............................................. 2
    Data Collection ............................................... 2
        Selecting Routes .......................................... 4
        Data Collection Personnel ................................. 5
        Size of the Data Collection Staff ......................... 6
        Data Collection Techniques ................................ 6
        Data Collection Forms ..................................... 6
        Automated Data Collection ................................. 8
        Full Survey Versus a Sample of Trips ..................... 12
        Quality Control .......................................... 12
        Farebox Reports .......................................... 12
    Data Analysis ................................................ 12
        Processing of Traffic Checker Data Sheets ................ 13
        Summary of Ridership and Running Time Data ............... 15
        Data Analysis Staff ...................................... 15
        Users of Ridership and Running Time Information .......... 15
        Time Frame for Analysis of Data .......................... 18
        Data Storage ............................................. 18
    Data Usage ................................................... 18
        Data Users ............................................... 19
        Information Use .......................................... 19
        Data and Service Decisions ............................... 19
    The Service Planning Process ................................. 20
    Service Guidelines ........................................... 24
    Reports ...................................................... 24
    Summary ...................................................... 25


CHAPTER TWO: SURVEY OF OPERATIONS PLANNING DEPARTMENTS
AT OTHER TRANSIT AGENCIES ........................................ 27
    Introduction ................................................. 27
    Survey Results ............................................... 30
    Data Collection .............................................. 30
        In-House Versus Private Firm ............................. 30
        Staff Size and Composition ............................... 32
        Data Collection Techniques ............................... 35
        Technology ............................................... 36
    Data Analysis ................................................ 42
        Data Input and Storage ................................... 42
        Analytical Staff ......................................... 43
        Data Summaries ........................................... 44
        Types of Analysis ........................................ 44
        New Technologies ......................................... 45
    Data Use ..................................................... 46
        Data Users ............................................... 46
        Data Uses ................................................ 47
        Dissemination ............................................ 49
    Time Frame and Bottlenecks in Route Analysis ................. 50
    Summary ...................................................... 51
CHAPTER THREE: FINDINGS AND RECOMMENDATIONS ...................... 53
    Introduction ................................................. 53
    Findings and Recommendations ................................. 53
        Data Collection Personnel ................................ 53
        Data Collection/Analysis Technology ...................... 54
        Cooperation with Other Divisions ......................... 55
        Report Production ........................................ 56
    Summary ...................................................... 56
ATTACHMENT A: OPERATIONS PLANNING QUESTIONS FOR SELECTED
TRANSIT AGENCIES ................................................. 59
ATTACHMENT B: INDIVIDUALS AND AGENCIES INTERVIEWED ............... 65


LIST OF FIGURES

FIGURE                                                           PAGE
Figure 1  Organizational Chart, Service Planning and Monitoring ... 3
Figure 2  Metrobus Ride Check Form ................................ 7
Figure 3  Metrobus Point Check Form ............................... 9
Figure 4  Metrorail Ride Check Form .............................. 10
Figure 5  Metrorail Point Check Form ............................. 11
Figure 6  Route 11, Running Time Summary ......................... 16
Figure 7  Route 42, Average Weekday Passenger Loads and Load
          Factors by Time Period ................................. 17


LIST OF TABLES

TABLE                                                            PAGE
Table 1   Data Collection Procedures ............................. 31
Table 2   Data Collection Staff Size ............................. 32
Table 3   Number of Peak Vehicles Per Traffic Checker ............ 33
Table 4   Agency Preference for Point Checks or Ride Checks ...... 35
Table 5   Data Collection Technology ............................. 37
Table 6   Use of Automated Passenger Counters (APCs) ............. 38
Table 7   Use of Automatic Vehicle Location (AVL) Systems ........ 39
Table 8   Data Input Procedures .................................. 43
Table 9   Service Guidelines ..................................... 48
Table 10  Route Report Production and Dissemination .............. 49
Table 11  Commonly Cited Bottlenecks ............................. 50


FOREWORD

The Center for Urban Transportation Research (CUTR) has been retained by the Metro-Dade Transit Agency (MDTA) to study its existing service performance analysis and monitoring process, specifically the collection and usage of route-level ridership data in MDTA's Service Planning and Scheduling Division (SPSD), part of the Customer Services Department. The purposes of this study are to identify the processes currently used to collect, analyze, and use ridership and schedule-related data at MDTA; to survey operations planning departments at other major transit agencies regarding the "state of the art" for these processes; and to recommend improvements in all phases of performance analysis and monitoring. The primary focus of the study is the Metrobus system, since a large portion of the data collection and analysis activities is related to the bus routes, but Metrorail and Metromover also receive attention. Recommendations are constrained by the recognition that funding is not readily available within MDTA for staff additions or program expansion. This report is one of two separately bound documents prepared as part of this effort. In addition to this report, a supplementary appendix was also prepared. CUTR would like to thank MDTA and each of the surveyed transit systems for their cooperation and assistance in the successful completion of this study.

Center for Urban Transportation Research
College of Engineering
University of South Florida
4202 E. Fowler Ave., ENB 118
Tampa, FL 33620-5350
(813) 974-3120, fax (813) 974-5168

Gary L. Brosch, Director
Project Director: Daniel K. Doyle
Project Staff: Victoria A. Perk




CHAPTER ONE: Data Collection, Analysis, and Use

INTRODUCTION

The Center for Urban Transportation Research (CUTR) has been retained by the Metro-Dade Transit Agency (MDTA) to study its existing service performance analysis and monitoring process. These activities are generally defined as part of the operations planning process, which includes service planning and scheduling. At MDTA, these functions are carried out by the Service Planning and Scheduling Division (SPSD), part of the Customer Services Directorate. A major goal of operations planning is to match service to demand. The operations planning process entails gathering and analyzing route-level data and using the results to make appropriate changes to transit routes and schedules. Because of its data collection efforts, the operations planning department is usually the principal source of information regarding ridership and on-time performance within a transit agency. Consequently, the operations planning department tends to be very involved in responding to customer complaints with regard to transit service. Given its functions as a repository of ridership and schedule data for the transit system and as an agent of change (be it routine schedule adjustments or major service changes), SPSD might be expected to play a major role in a time of funding and budget crises. In order to do so effectively, SPSD must ensure that it is operating at peak efficiency and that its knowledge of the system and ability to implement changes are recognized throughout the agency. The purposes of this study are to identify the processes currently used to collect, analyze, and use ridership and schedule-related data at MDTA; to survey operations planning departments at other major transit agencies regarding the "state of the art" for these processes; and to recommend improvements in all phases of performance analysis and monitoring.
The primary focus of the study is the Metrobus system, since a large portion of the data collection and analysis activities is related to the bus routes, but Metrorail and Metromover will also receive attention. Recommendations will focus on strategies for achieving greater cost efficiency in SPSD's various activities. Benefits of the study are expected to include more cost-efficient data collection procedures, effective analytical techniques, and greater dissemination of SPSD's findings. These benefits will


enhance the ability of SPSD to analyze system performance in a timely manner. SPSD will play a significant role in major decisions regarding MDTA's response to budget cuts. These decisions are expected to be difficult and to involve unwanted service reductions. There is an opportunity for SPSD to demonstrate its ability to assess proposed routing and scheduling changes in an objective fashion and to further the development of route-specific data on a more routine basis. Figure 1 shows an organizational chart for SPSD. SPSD currently has 29 employees (counting actual employees, not budgeted positions), who plan and schedule service for 73 bus routes, Metrorail (a single heavy-rail line), Metromover (a people-mover technology in downtown Miami), and the Specialized Transportation Services (STS, paratransit service primarily for the transportation-disadvantaged). There are three line-ups per year, at which time changes are made to schedules and operators select their assignments. As shown in Figure 1, SPSD contains the Vehicle and Crew Scheduling Unit, the Transit Mobility Planning Unit, and the Transit Planning and Monitoring Unit. The latter unit is the focus of this study, since it is responsible for fixed-route data collection, analysis, and reporting. There are four sub-units (Service Planning, Market Research, Data Analysis, and Monitoring) within the Transit Planning and Monitoring Unit. MDTA has a bus fleet of 612, and 501 buses are in service in the peak period. Metrobus operates from three garages. MDTA contracts out a few bus routes to private operators; these account for an additional 22 buses in service during the peak period. This chapter reviews and documents current procedures within SPSD. Schedules and techniques for data collection, reports produced as a result of the process, personnel needs, and the length of time elapsed between data collection and use of analyzed information are included in this review.
Separate sections address data collection, analysis, and usage, but the need to integrate these functions is a constant overall theme.

CURRENT PRACTICES

Data Collection

The service planning process carried out by most transit agencies typically begins with the gathering of data on transit ridership and schedule adherence. Route-level boarding and vehicle load data are collected and summarized by direction and time of day. Running times for a route


are noted by route segment. Ridership and running time data are used in adjusting route schedules to ensure that service levels are appropriate to meet the demand and that the operator has sufficient time in the schedule to complete each trip in a timely fashion.

[Figure 1: MDTA Service Planning and Monitoring organizational chart, showing the Manager of Service Planning and Monitoring reporting to the Chief of Service Planning and Scheduling, with sub-units for Service Planning, Market Research, Data Analysis, and Monitoring staffed by transit planners, a transit market analyst, and field technicians (13 employees in total).]
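The segment-level running time comparison described above can be sketched in a few lines of code. This is an illustrative example only, not MDTA's actual procedure; the stop names, scheduled allowances, and observed times below are invented.

```python
def segment_running_times(timepoints):
    """timepoints: list of (stop_name, scheduled_min, observed_min) tuples,
    each giving cumulative minutes from the start of the trip.
    Returns per-segment (label, scheduled, observed, difference) rows."""
    report = []
    for (a, sched_a, obs_a), (b, sched_b, obs_b) in zip(timepoints, timepoints[1:]):
        sched = sched_b - sched_a          # scheduled allowance for this segment
        obs = obs_b - obs_a                # observed running time for this segment
        report.append((f"{a} -> {b}", sched, obs, obs - sched))
    return report

# Invented sample trip with three segments between four time points.
trip = [("Terminal", 0, 0), ("Main/1st", 10, 13), ("Main/9th", 22, 27), ("Mall", 35, 38)]
for seg, sched, obs, diff in segment_running_times(trip):
    print(f"{seg}: scheduled {sched} min, observed {obs} min, diff {diff:+d}")
```

A segment whose observed time consistently exceeds its allowance is a candidate for added running time; a consistently negative difference suggests the schedule is padded.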


Ridership counts are critical to service planning because they represent the observed level of demand for transit service. Overall ridership figures are available through turnstile counts on Metrorail and Metromover and electronic fareboxes on buses. However, disaggregate numbers by time of day, direction, or trip are not always available from these sources. Disaggregate counts are important for adjusting service levels to meet ridership demand as it varies throughout the day. Running time is monitored to check the accuracy of the existing schedule. A route's running time is measured from one end of the route to the other, but intermediate time points are also used to make sure that each route segment has sufficient time allotted to it. Too little running time in a schedule results in late buses and a stressful operating environment, while too much running time increases costs unnecessarily and can make printed schedules unreliable. On most routes, running time varies with levels of congestion throughout the day. CUTR identified major issues to be addressed in each subject area at the outset of the study. The issues are described in this chapter in terms of current conditions at MDTA, and also shape the questions asked of other transit properties to determine the "state of the art" in service planning. In the area of data collection, the following questions are discussed:

How are routes selected for analysis? Is there an established schedule?
Who does data collection?
What is the size of the data collection staff?
How are the data collected?
What forms are used to record data?
Is the data collection process automated?
When a given route is analyzed, are all trips or a sample of trips surveyed?
What methods are used for quality control?
Are farebox reports useful?

Selecting Routes

MDTA gives top priority to the Section 15 counts required by the Federal Transit Administration (FTA).
Following established FTA procedures, MDTA selects a sample of trips at the beginning of each line-up and schedules the data collection by month. SPSD is responsible for this activity,


and attempts to dovetail required Section 15 work with its ongoing data collection. Thus, when a Section 15 trip is scheduled, SPSD assigns the entire run, not just the required trip, to an individual. Depending on other needs, SPSD often tries to schedule a data collection effort in conjunction with the Section 15 work for a given route if it has not been surveyed recently. SPSD tries to schedule each bus route for major data collection and analysis once every two years to ensure that current data are available for the entire MDTA system. In practice, SPSD has experienced difficulty in keeping to this plan. Given limited resources, SPSD must adjust its calendar to address organizational priorities. There are frequently requests to examine a specific route, possibly as the result of complaints by passengers or operators or due to concerns expressed to MDTA by local officials. Also, routes that have recently undergone or are being considered for service changes receive greater attention. Thus, the two-year cycle for performance analysis is not always met, especially for routes that perform acceptably and are not part of any service reorganization. Data collection schedules include on-board assignments, assignments at specific locations (which may involve more than one route), and Section 15 counts. Schedules routinely call for early morning and late night shifts, and cover weekends as well as weekdays. SPSD makes every effort not to mix early and late assignments in one week in any employee's schedule.

Data Collection Personnel

The people who go out into the field to collect ridership and running time data are generally referred to as "traffic checkers" in the transit industry. The formal position title in Dade County is Transit Field Technician 1 (TFT1). There is one Transit Field Technician 2, who is the "lead worker," and who schedules the work, supports the supervision of the traffic checkers in the field, and fills in as needed.
The traffic checkers are full-time MDTA employees, and are unionized through the American Federation of State, County, and Municipal Employees. Traffic checkers work 40 hours per week over 5 consecutive days. Days off are assigned and are rotated throughout the year. Their salary is in the range of 9 to 10 dollars per hour, plus county benefits. Occasionally, SPSD will hire a temporary worker to assist in data collection. Traffic checkers may also perform other assigned duties, such as handing out passenger information and assisting in the distribution of market research surveys.


One focus in the survey of current practices at transit properties across the nation is the employment status of traffic checkers. Other agencies have experimented with part-time personnel, with temporary workers, and with contracting out the entire traffic checking operation to a private firm. SPSD has always used full-time personnel. At one time, operators on light duty were utilized as traffic checkers with some success, but this has not been done in recent years.

Size of the Data Collection Staff

SPSD has six traffic checkers (TFT1), plus one lead worker (TFT2). This is a relatively small staff, and affects the division's ability to conduct full checks of busy routes with many runs.

Data Collection Techniques

SPSD staff estimates that approximately 80 percent of all traffic checking assignments are ride checks, in which a traffic checker records information while on board a transit vehicle. The remaining 20 percent of assignments are point checks, in which a traffic checker is stationed at a particular location and records information (described more fully below) for all routes or selected routes passing by that location. The assigned location is usually a time point on a route or group of routes. Traffic checkers report in to SPSD at the beginning and end of each working day to pick up their assignments and to deliver the completed work. This arrangement gives SPSD an additional face-to-face opportunity for supervision. SPSD supervisory staff also visit the traffic checkers in the field to monitor their activities.

Data Collection Forms

Figure 2 is an example of a Metrobus ride check form. The traffic checkers receive route-specific forms (one form per one-way trip), with major stops pre-listed along with the route number and direction. Starred stops represent time points on the route's schedule. The traffic checker fills in standard information at the top of the form (day, date, name, bus number, run,
time, weather, and farebox data). The remainder of the form is used to record times and passenger activity during the trip. Actual arrival and departure times at listed stops are recorded,


[Figure 2: Metrobus Ride Check Form — a completed sample for Route 1 southbound, recording scheduled and actual arrival and departure times and passenger on/off counts at pre-listed stops.]


along with the number of arriving, alighting, boarding, and leaving passengers. The traffic checkers also keep track of boardings and alightings at stops in between the listed stops. Figure 3 is an example of a Metrobus point check form. Standard information at the top of this form includes route, day, date, name, time, weather, direction, and location. The traffic checker records the bus number and route of each bus observed at the location, along with arrive and leave times and the number of passengers arriving, getting off, getting on, and leaving. Figures 4 and 5 are examples of Metrorail ride check and point check forms. The traffic checker circles the car number that represents his or her location on the train. Checkers are assigned one or two cars, depending on the size of the train and the time of day. Metromover forms are similar to Metrorail's. Checkers are encouraged and even expected to write comments on the forms, as can be seen in Figure 4. The extent and depth of their written comments are considered in their annual evaluation.

Automated Data Collection

The data collection procedure at MDTA is entirely manual at this point. In 1990, SPSD had requested approval to purchase hand-held units for the traffic checkers. Procurement was delayed to the point that the units were outdated by the time they arrived. There were additional problems with programming and using the units, and SPSD soon returned to manual data collection. The new automatic vehicle location (AVL) system scheduled for installation in late 1995 will gather information on running time and on-time performance. The AVL system should result in improved service by allowing MDTA greater control over what occurs in the field. There is space on the buses and within MDTA's communication system for automated passenger counters (APCs), but a shortage of capital funds makes it unlikely that these will be purchased in the near future.


[Figure 3: Metrobus Point Check Form — a completed sample recording, for each bus observed at the assigned location, the bus number, route, arrive and leave times, and the number of passengers arriving, getting off, getting on, and leaving.]


[Figure 4: Metrorail Ride Check Form — a completed southbound sample listing stations with arrival and departure times and passenger counts, including the checker's handwritten comments.]


[Figure 5: Metrorail Point Check Form — a completed sample recording train numbers, arrive and leave times, number of cars, and the number of passengers leaving on each train.]


Full Survey Versus a Sample of Trips

Ideally, a complete route check covers all runs on the route. As noted earlier, however, the small size of the traffic checking staff limits the ability of SPSD to conduct a full count of busy routes. The full check is sometimes done over a period of days. On busy routes, SPSD attempts to staff as many trips as possible during the peak periods and a sample of runs at other times of day. This provides reasonably accurate information on load factors and running time within a given time frame.

Quality Control

On ride checks, the traffic checker takes farebox readings at the beginning and end of each trip to provide a revenue comparison to the ridership count. The practice of starting and ending the checkers' work day in the office, where work is picked up and dropped off, also provides some measure of quality control. All completed ridership sheets and computer files containing the raw data are reviewed for inconsistencies by at least one person. On occasion, traffic checkers are visited in the field to ensure that they are performing their work accurately.

Farebox Reports

SPSD does not rely solely on farebox reports for service planning and scheduling purposes. Aggregate revenue reports produced from farebox data provide useful information on total boardings by route. However, operators do not always press the correct keys at the end of each trip, thus hindering the use of farebox data at the trip level. In addition, boarding counts from the farebox do not necessarily yield accurate information on peak loads, one of the types of information needed to plan and schedule routes effectively.

Data Analysis

While data collection is a vital first step in service planning, the conversion of the raw data into useful information to guide planners and schedule makers is the heart of the operations planning process.
The data analysis process summarizes ridership and running time data by location and time period to yield information on passenger loads and schedule adherence. SPSD then analyzes this information to decide whether service or schedule changes are warranted.
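The core passenger-load calculation behind such a summary can be sketched from a single ride check: a running total of boardings minus alightings gives the load leaving each stop, and the maximum of that profile is the trip's peak load. This is a minimal illustration, not SPSD's actual program; the stop names and counts are invented.

```python
def load_profile(stops):
    """stops: list of (stop_name, ons, offs) from one ride-check trip.
    Returns (profile, peak_stop, peak_load), where profile[i] is the
    passenger load on board when leaving stop i."""
    load, profile = 0, []
    for name, ons, offs in stops:
        load += ons - offs             # running total of boardings minus alightings
        profile.append((name, load))
    peak_stop, peak_load = max(profile, key=lambda p: p[1])
    return profile, peak_stop, peak_load

# Invented sample trip: everyone has alighted by the last stop.
trip = [("A", 12, 0), ("B", 8, 3), ("C", 5, 10), ("D", 0, 12)]
profile, peak_stop, peak_load = load_profile(trip)
# Peak load occurs leaving stop B: 12 + 8 - 3 = 17 passengers
```

Repeating this over all checked trips and aggregating loads at the peak load point by time period yields the kind of loads-by-time-period summary shown in Figure 7.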


In general, making the data analysis process as routine as possible helps in several areas. A routine analytical procedure can increase productivity by allowing SPSD to evaluate a greater number of routes within a given time period. Once established, a routine process also minimizes the time it takes to analyze a specific route and thus improves the timeliness of the data. As noted below, SPSD is in the final stages of developing and refining a computerized program to assist in and standardize formal data analysis. This computerized procedure is supplemented by a more informal analysis of traffic checkers' data by SPSD personnel in response to special requests or complaints. Issues of special interest in the area of data analysis include the following:

How are the traffic checker data sheets processed?
How are ridership and running time data summarized?
Who is responsible for data analysis?
Who ultimately uses this information?
What is the time frame for analysis?
Are data stored for future reference?

Processing of Traffic Checker Data Sheets

All traffic checker work is returned to SPSD on the same day, where it is logged in and reviewed. Depending on the assignment, data for the route is either analyzed manually or entered into the computer. Manual analysis of ridership and running time data was standard at SPSD until recently and is still typically done for smaller counts undertaken in response to a specific complaint or request. The data sheets are summarized as needed to respond to the complaint or request. If the results indicate a problem with overcrowding or running time, a recommendation is made for corrective action. The originator of the request or complaint is notified regarding the findings and any recommended action. SPSD is transitioning to computerized analysis for full route checks. In this case, the data are input to the computer and the Data Analysis sub-unit runs the program to analyze the route. To


summarize route-level ridership and running time information and produce reports on route performance, collected data is entered from forms filled in by hand into a program known as SAS (Statistical Analysis Software). SAS was originally recommended by MDTA's Management Information Services Division to program the hand-held data collection units. SPSD currently uses version II of SAS operated in a Windows environment on a personal computer. SAS is also available on the MDTA mainframe computer network, but the network is operating well over capacity and consequently is slow at times of peak usage. SAS is not known to be particularly user-friendly. Therefore, methods that quicken the process of getting data from the field into the program (which ultimately generates results and summaries) are desirable. Currently, a total of seven programs combine to produce the running time report summaries. To make data entry faster and easier, work is continuing on the creation of a template which lists all the variables from the ride check forms. Those unfamiliar with the SAS program can enter data for the major stops and time points on each trip. This is important, because only one SPSD staff person knows the SAS programming language. All the information is automatically entered into a SAS data set without the use of additional programming commands. The person entering data needs only to enter the information from the top of the ride check form and the numerical data, and to change the text for the title headings in the reports. The program can be run with a click of a mouse button. Commands for report analysis and formatting are already written, and execute when the program is run. To devise a truly standardized data entry template, language already within the program separates the data from whole routes and groups them into hours. The categories of hours will be the same no matter which route is being analyzed.
If no data exist within a certain category, the category is simply ignored. This process is not complete, however. One purpose of this study is to explore other methods and techniques for improving the efficiency with which ride check reports are produced.
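The grouping step described above — fixed hour categories, with empty categories simply absent from the output — can be illustrated outside SAS. The following is a minimal Python sketch, not SPSD's actual program; the record field names are hypothetical:

```python
from collections import defaultdict

def summarize_by_hour(trips):
    """Group ride-check records into fixed hourly categories and average
    the running time within each. Hours with no data never appear,
    mirroring the template's "ignore empty categories" behavior."""
    bins = defaultdict(list)
    for trip in trips:
        hour = trip["depart_minutes"] // 60   # e.g. 7 for a 7:15 departure
        bins[hour].append(trip["running_time"])
    return {h: sum(v) / len(v) for h, v in sorted(bins.items())}

checks = [
    {"depart_minutes": 7 * 60 + 15, "running_time": 42},
    {"depart_minutes": 7 * 60 + 45, "running_time": 46},
    {"depart_minutes": 9 * 60 + 5,  "running_time": 38},
]
print(summarize_by_hour(checks))  # {7: 44.0, 9: 38.0}
```

Because the categories are defined once, the same summary logic applies to any route, which is the point of a standardized template.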


Summary of Ridership and Running Time Data
Ridership and running time data are analyzed by direction for each route. Ridership is usually measured in terms of passenger loads at a certain point on the route within a given time period. Different time periods are used to summarize data, depending on the purpose of the analysis. A recent Route 11 study averaged running time by hour in the morning and by three-hour time periods in the afternoon (see Figure 6). A 1993 evaluation of Route 42 used a.m., midday, and p.m. time periods in its published report (see Figure 7), while older studies have mixed time periods of two and three hours. Generally, finer time breakdowns are used in peak periods, and off-peak time periods are longer. For analyses of Metrorail ridership, half-hour time periods are used in the peak. SPSD has attempted to standardize to 30-minute time periods in the peak hours and 60-minute periods in the off-peak. Data are not always summarized in a formal report, especially when responding to complaints or small-scale requests. At times, only a portion of the data is summarized to ascertain whether there is actually a problem. Some traffic checker sheets are reviewed and sent directly to the files as raw data.

Data Analysis Staff
Various SPSD personnel are responsible for data analysis. Informal analyses might be carried out by a transit planner or by the unit Manager. As noted earlier, SPSD is shifting toward a more formal data analysis process using a computer program written in SAS, a statistical software package. The Data Analysis unit is responsible for the more formal analyses, generally including full route studies.

Users of Ridership and Running Time Information
Currently, SPSD is virtually the only part of MDTA that uses information on route ridership and running time.
The most common use is to re-write schedules for individual routes. The information also forms the basis for decisions regarding the extension, restructuring, truncation, or discontinuation of a specific route. Data usage is discussed more fully in the next section, but it is interesting to note that recent changes in Metrorail headways were made without the active participation of SPSD.


Figure 6
ROUTE 11 RUNNING TIME SUMMARY: FIU TO CBD VIA FLAGLER (Eastbound)
[Table of actual (ACT) and scheduled (SCH) running times between time points, by time period through 19:00; schedule times shown in bold in the original.]


Figure 7
ROUTE 42 AVERAGE WEEKDAY PASSENGER LOADS AND LOAD FACTORS BY TIME PERIOD
(Passengers departing from time points)
[Table of average loads and load factors for AM, midday, and PM periods at southbound and northbound time points between Fisherman/Opa-Locka and Dinner Key.]


Time Frame for Analysis of Data
SPSD staff expressed some frustration at the length of time it takes to collect and analyze data. This is due in part to the size of the staff. Manual summaries of full ride checks are very labor intensive. The computerized analyses face a major bottleneck in the time it takes to input the data from the traffic checker sheets to the SAS program. A temporary employee is usually hired to perform data input, and the process can take three weeks or more for a busy route. Data entry work must also be checked for quality and accuracy. The end result is a long time frame from the initiation of data collection to the distribution of a final report. The process of transferring data collected in the field to an electronic format is an area of special interest to SPSD that will be explored in the survey of other transit properties.

Data Storage
SPSD keeps raw data and manual summaries in its files by route for up to three years. Computerized data are stored on floppy disks to maximize the availability of memory on the hard drive. Farebox data are usually used for trend analysis.

Data Usage
Once ridership and running time data are collected and analyzed, the final part of the operations planning process is to use the information to make appropriate changes to the transit system. Objective information on ridership patterns and schedule adherence is important to ensure the efficient operation of the transit system. The implementation of route and schedule adjustments is the means by which the goal of greater efficiency can be reached. This section of the chapter describes how and by whom the information is used within SPSD and more broadly within MDTA. The service change process is also detailed. Key issues include the following seven questions. Who uses the data? How is the information used? What decisions rely on the data? How are service changes made?


Are service guidelines important? What reports are produced? Is there a standard format? Who receives the reports?

Data Users
SPSD is the primary "client" for its own data. Schedulers and service planners use route-level data to change routes and schedules. Summarized, and in some cases raw, data appear in route and running time reports.

Information Use
Summarized ridership and running time data are used to adjust schedules and to make route and service changes. As examples, the Route 11 running time summary shown in Figure 6 indicated the need to add time to the route during most of the day, so the Route 11 schedule was changed for the next line-up. The performance evaluation of Route 42, based on data shown in Figure 7, contained recommendations to extend the route to serve a Metrorail/Tri-Rail station, to combine it with another route to eliminate duplicative service, to reduce weekday peak headways, and to adjust running time. Data are also used to assess whether complaints are valid and, if so, to develop solutions to particular service or schedule problems. As noted earlier, data collection work is often scheduled in response to complaints, but recent route data from the files can sometimes be used to respond to complaints and requests. SPSD's role as repository of route-level ridership data is important, because data are frequently used for purposes other than those for which they were originally collected.

Data and Service Decisions
Traffic checker data are routinely used for schedule changes, especially to adjust running times on specific routes. Ridership data are also used to make schedule and service changes, but the data gathered by the traffic checkers constitute only one of the various inputs into service-related decisions. Running time changes tend to be open-and-shut technical decisions, while service changes are more political in nature and are not driven solely by ridership data.


The Service Planning Process
Transit service planning is a continuing and iterative process which dictates the balancing of passenger needs against operating and budgetary constraints. The objective of service planning is to meet the mobility needs of transit users in the most effective and efficient manner. The service planning process is required to respond to changes in transit rider demand and travel behavior, seasonal ridership fluctuations, changes in operating conditions, new development, requests from passengers as well as the community as a whole, and, necessarily, funding limitations. Possible service changes can originate from several sources, such as community meetings, customer suggestions and complaints, passenger surveys, MDTA staff meetings (e.g., operators, supervisors, and information clerks), and the five-year Transit Development Program (TDP). Types of potential service changes may include the following: new routes; increased frequency of service on a particular route; increased span of service; existing route realignment or extension; schedule changes; discontinued routes or route segments with very low ridership; and service adjustments on existing routes to reduce duplication and increase productivity and efficiency. The service planning process at MDTA begins prior to the receipt of budgetary guidelines and ends with the implementation of plans approved by the Board of County Commissioners. It should be noted that minor changes do not have to go before the Board. In the best case, the process's nine steps (discussed in detail in the following paragraphs) are taken sequentially. In actuality, however, the realities of budget deadlines, provisions of collective bargaining agreements, and operational necessities can often impede the sequential nature of the process. Every recommended service change is carefully scrutinized based on the following impacts:


consistency with long-range plans such as the TDP as well as other corridor studies; estimated changes in travel time, number of transfers, and other service issues; route duplication or the availability of alternative services for discontinued routes or route segments; estimated number of passengers affected or additional ridership generated by the proposed change; operating costs or savings; existing as well as estimated future travel patterns; accessibility issues; and transit system connectivity. Service change proposals must be approved by management based on existing policies and budgetary issues. Proposals approved by management are then presented at public meetings held throughout Dade County. As appropriate, the proposals may be amended as a result of any comments received. Final recommendations for changes are submitted to the Transportation Committee of the Metropolitan Planning Organization (MPO) and the Board of County Commissioners for acceptance. The nine different steps of the service planning process are each discussed briefly below.

Service Monitoring. The transit system is continuously monitored to assess whether existing service should be adjusted to more accurately reflect the current operating conditions. The monitoring program utilizes several methods of collecting data (which have been previously discussed in detail): transit field passenger and time checks; electronic fareboxes; Metrorail and Metromover turnstile counts; passenger surveys; rider suggestions and complaints; and observations by operators, supervisors, and the public (through public meetings and local public officials).


Receipt of Budgetary Guidelines. For the Fall line-up (which normally occurs in November), the fiscal year budgets for service to be operated are determined by the County Commission during budget hearings held in September. However, as early as Spring, there are good indications of the quantity of service required for the upcoming fiscal year. It is not unusual for two or more sets of service plans to be developed to match different possible budget scenarios. The Spring and Summer (usually April and July, respectively) line-ups afford good opportunities for any modifications to ensure compliance with budget targets.

Service Evaluation. For transit service evaluation, service planning guidelines provide standardized measures. Concerning service reduction, some of the more fundamental and commonly used guidelines are listed below. Routes which are below half the system average number of passengers per hour are targeted for service reductions to improve their performance. Routes with a net cost per passenger trip 2.7 times greater than the system average are placed under consideration for service efficiencies to improve their economic performance. Trips with fewer than 10 passengers are subject to elimination or service frequency reduction if headways are less than those dictated by policy. Routes or segments which are duplicated are candidates for service reduction or elimination as long as enough seats are available to meet passenger loading guidelines. Also, any route which does not satisfy minimum service planning guidelines is further analyzed to determine the transit-dependency of those served by the route. Since major factors in decisions concerning poorly performing bus routes are the availability of alternative service and the impact on ridership, modifications can often be made to more satisfactory routes to allow for the cutback of unproductive routes and still maintain service.
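The first two reduction guidelines above amount to threshold tests against system averages. The following is a minimal Python sketch of that screening, with the route figures and field names invented purely for illustration (the 0.5 and 2.7 factors come from the guidelines described in the text):

```python
def flag_for_reduction(route, sys_pax_per_hour, sys_cost_per_trip):
    """Screen one route against the two system-average guidelines.

    route: dict with hypothetical keys 'pax_per_hour' and 'net_cost_per_trip'.
    Returns a list of reasons the route warrants closer review (may be empty).
    """
    flags = []
    if route["pax_per_hour"] < 0.5 * sys_pax_per_hour:
        flags.append("below half the system average passengers per hour")
    if route["net_cost_per_trip"] > 2.7 * sys_cost_per_trip:
        flags.append("net cost per trip more than 2.7x the system average")
    return flags

# Invented figures: 14 pax/hour vs. a 32 pax/hour system average, and
# $3.90 net cost per trip vs. a $1.25 system average -- both flags trip.
route = {"pax_per_hour": 14.0, "net_cost_per_trip": 3.90}
print(flag_for_reduction(route, sys_pax_per_hour=32.0, sys_cost_per_trip=1.25))
```

As the text notes, a flagged route is not automatically cut; it is analyzed further for transit-dependency and the availability of alternative service.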
With regard to service expansion, one method of evaluation is to analyze load factors. The maximum loading guidelines reflect maximum load factors for different transit services over different time periods. Bus routes with peak-period loads exceeding 130 percent of capacity (i.e., with 30 percent of riders forced to stand at the peak load point) are candidates for increased service, according to service guidelines.
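The 130-percent guideline is a simple ratio test of peak load against seated capacity. A minimal Python sketch follows; the 43-seat bus and the ridership figures are invented for illustration:

```python
def load_factor(peak_load, seats):
    """Load factor at the peak load point: passengers per seat."""
    return peak_load / seats

def candidate_for_more_service(peak_load, seats, threshold=1.30):
    # Loads above 130% of seated capacity mean roughly 30% of riders stand.
    return load_factor(peak_load, seats) > threshold

# 58 riders on a 43-seat bus: load factor ~1.35, above the 1.30 guideline.
print(candidate_for_more_service(58, 43))   # True
# 50 riders: load factor ~1.16, within the guideline.
print(candidate_for_more_service(50, 43))   # False
```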


Evaluation guidelines for new service are compatible with existing service evaluation with regard to service span, headways, and route design, and also provide recommendations for service into a new area. New service guidelines take into account population and employment densities, new development, system continuity, and projected ridership.

Development of Preliminary Proposals. Utilizing information from the evaluation procedure, service planning staff then develops possible changes in service, including improvements as well as reductions. Service improvements, for example, may entail additional run time to ensure on-time performance or the extension of a route into a new area. Current and future travel patterns, accessibility, and transit system connectivity are all considered in the formation of the service changes. The proposed changes are reviewed by Bus Operations staff, the Public Services Division, and the Transport Workers Union (TWU). Then, a recommended series of service changes is developed to satisfy budget guidelines available at the time.

MDTA Management Approval. The proposals are submitted up the management organizational chart to gain approval on the basis of existing policies and budgetary constraints.

Public Meetings. Service change proposals approved by management are presented at the biannual sets of public meetings. These meetings are held throughout Dade County during January for the Spring line-up and in July for the Fall line-up (the Summer line-up usually does not contain service changes). Approximately 15 community meetings take place in city halls, libraries, and other meeting halls. Staff from the Public Services and Service Planning Divisions attend all meetings to obtain input from the public concerning the proposed changes and any other issues involving MDTA services.

Final Recommendations.
Based on comments from the public meetings as well as from any other means by which the public communicates to MDTA (such as by telephone or letter), proposals are amended, as appropriate, to develop the final recommendations. These final recommendations are presented to the MPO's Transportation Committee and the Board of County Commissioners.


Presentation to the Transportation Committee of the MPO and the County Commission. The final recommendations are presented to these groups for their acceptance. Any changes necessary are made. This step completes the planning process. Implementation of the plan, including scheduling, development of public notices, and installation of required signage and passenger facilities, then begins.

Scheduling and Implementation. Scheduling of the service changes is accomplished as efficiently as possible while remaining within TWU contract requirements. Significant lead time is required, however, for the implementation of service and schedule change proposals. During this period, a public information campaign that includes maps, timetables, and rider notices is prepared for MDTA customers.

Service Guidelines
SPSD developed service planning guidelines for use in short-range and long-range transit planning in July 1988. These guidelines, mentioned in the discussion on the service planning process, have served as general rules of thumb to determine when service changes might be appropriate. The Metro-Dade Board of County Commissioners has never formally adopted the service guidelines, so their use has been more informal in nature. Among the areas addressed by the guidelines are service span, headways, night and weekend service, bus route design, and criteria for new service. According to SPSD staff, the guidelines do enter into service decisions but are not necessarily the guiding force behind all decisions. SPSD tends to use the guidelines as one tool in judging the need for a service change. For example, the Route 42 service evaluation mentioned earlier uses guideline minimum values for net cost per boarding and boardings per revenue hour to assess Route 42 service levels. However, none of the proposed changes to Route 42 cited service guidelines as the reason for the recommendation.
Reports
Copies of the Route 11 Running Time and the Route 42 Service Evaluation reports are included in an Appendix, which is bound separately. SPSD has adopted a standard format for presenting route-level information. The Route 11 report is an example of a report generated from the SAS


data analysis process. These reports are distributed to a reasonable number of people and departments within MDTA, but the list might be expanded. It does not appear, for example, that the Director of Operations is included on the distribution list. SPSD does not produce as many reports as it would like. The informal nature of much of the analysis is one contributing factor, along with time and staff constraints and division priorities. Yet this informal approach hurts SPSD at the agency level, because other departments are not aware of all of the activities that take place within the division. Management Information Services' monthly reports on ridership and revenue by mode and route receive much greater attention. By contrast, SPSD has produced only one or two route reports per year in recent years. The end result is that SPSD's function of matching service to demand, while being carried out, is not widely understood or appreciated within MDTA. The types and number of reports produced by other operations planning departments around the United States and Canada were a major focus of an external survey. Chapter Two, which describes the results of that survey, includes examples of route-level scheduling and planning reports from elsewhere. These might serve as a guide for SPSD's attempts to improve its own reporting capability.

SUMMARY
This preliminary assessment of data collection, analysis, and usage in the Service Planning and Scheduling Division suggests that most components of the process are in place. Major issues include the best way to collect data, given limited personnel and financial resources; the role of automation within SPSD; and the need to strengthen report production in order to share the route-level information more fully throughout the agency. The integrated nature of SPSD's functions means that a single bottleneck can affect the entire division's productivity.
Potential or actual bottlenecks exist in the size of the traffic checking staff, the laborious data input process, and the report-generating capabilities. These need to be addressed concurrently in order to streamline the operations planning process. Only then might it be possible to meet the goal of reviewing ridership and schedule adherence for each route on a two-year cycle.


Technological improvements will clearly affect SPSD. The automatic vehicle location (AVL) system scheduled for installation in the near future will automate the process of monitoring route running times and schedule adherence. This may allow a redeployment of staff effort to analyze ridership data. The next task in this study called for a survey of operations planning departments at other major transit agencies in the United States and Canada to determine the current "state-of-the-art" with regard to data collection and analysis, performance monitoring, and report production. Survey results guided the final recommendations of this study by demonstrating best practices in the operations planning field.


CHAPTER TWO: Survey of Operations Planning Departments at Other Transit Agencies

INTRODUCTION
The purpose of this chapter is to examine how issues related to data collection, analysis, and usage are addressed in operations planning departments of other transit agencies in North America. The scope for this study notes that the composition of the data collection workforce and alternate approaches and techniques used in collecting and analyzing ridership data would receive particular attention. CUTR, in conjunction with Service Planning and Scheduling (SPSD) personnel at MDTA, developed a series of questions concerning operations planning processes and practices. The survey gives particular attention to issues highlighted in the review of current procedures within SPSD, documented in Chapter One. Questions included in the survey address the following topics: staff for data collection (including in-house versus contracted); data collection techniques; schedules for analysis of routes; data input; data analysis techniques; staff for data analysis; data storage; uses of route-level data, especially regarding service or schedule changes; role of emerging technologies; and bottlenecks in the overall process. The primary focus of this study is the monitoring of bus operations nationwide, since a large portion of the data collection and analysis activities is related to the bus routes. The survey of other operations planning departments asked about differences by mode, especially with regard to data collection, because data collection techniques for bus and rail can differ.


Attachment A contains a copy of the survey. Telephone interviews were conducted with representatives of 20 transit systems in the United States and Canada. Systems contacted as part of a previous CUTR study for MDTA on bus operator productivity and efficiency formed the basis of the peer group. These agencies were augmented with other systems identified as having active operations planning departments. The person contacted at each system was generally the director or a senior manager in planning, scheduling, or operations planning, depending on the transit agency's structure. In several cases, the CUTR interviewer was transferred to a person directly in charge of data collection and analysis. Attachment B includes a list of individuals interviewed at all of the transit agencies. CUTR conducted the interviews in late June and early July of 1995. During this time frame, the Transit Cooperative Research Program (TCRP) released a synthesis report, titled Bus Route Evaluation Standards: A Synthesis of Transit Practice. This report is a comprehensive review of service standards and guidelines in use at transit systems nationwide. Some questions concerning data collection techniques were included in the TCRP study. Also, MTA-New York City Transit shared the results of a brief survey it had conducted of large transit agencies in the United States. The New York survey focused on the size of the data collection staff as well as data collection technologies. On the few questions that overlapped those on the other two surveys, CUTR checked its own survey responses against the other surveys to ensure consistency. Other than these few questions, this survey's purpose was sufficiently different to produce new results directly useful to MDTA and of interest to the national transit community. One interesting finding to emerge from the interviews was the effect of technology on the entire process of data collection, analysis, and usage.
Transit systems that had automated passenger counters (APCs), hand-held data collection units, or an automatic vehicle location (AVL) system up and running report markedly different procedures from those still relying on manual data collection and input. Technological advances generally result in a more frequent and regular schedule for data collection. On the negative side, switching from manual to automated processes can involve an extended period of ensuring that the processes work as intended, and in some cases can increase dependence on the agency's computer or management information services (MIS) personnel, who may have different priorities from those of operations planning.


In the coming years, the monitoring and analysis of transit operations is likely to undergo extensive changes as new technologies make it possible to track route-level ridership and running time more closely. Some interviewees raised the prospect of a "Sorcerer's Apprentice" situation of data overflow without sufficient resources for analysis and interpretation. It is clear that new ways of analyzing more detailed route-level data will be developed. Hopefully, these new techniques will produce more useful information for senior agency management and governing boards to consider in making decisions to improve the efficiency and effectiveness of the transit network. Following this introductory section, the report presents survey results in the areas of data collection, data analysis, use of the resulting information, and the general process of monitoring route-level performance. Responses are summarized across transit systems. Specific procedures or individual observations of particular interest are included as appropriate. Issues of particular concern to MDTA include the composition and size of the data collection force, the best way to collect data, and techniques and formats for disseminating route-level information. Agencies are referred to in the text by the city or state they serve, to avoid confusion among the various names. The following 20 agencies were included in the survey.
Metropolitan Atlanta Rapid Transit Authority
Maryland Mass Transit Administration (Baltimore)
Massachusetts Bay Transportation Authority (Boston)
Chicago Transit Authority
Greater Cleveland Regional Transit Authority
Dallas Area Rapid Transit Authority (DART)
Regional Transit District (Denver)
Metropolitan Transportation Authority (Houston)
Milwaukee County Transit System
Metropolitan Commission Transit Operations (Minneapolis)
New Jersey Transit Corporation
MTA-New York City Transit
AC Transit (Oakland)
Port Authority of Allegheny County (Pittsburgh)
Tri-County Metropolitan Transit District of Oregon (Portland)
Bi-State Development Agency (St. Louis)
Metropolitan Transit Development Board (San Diego)


King County Metro (Seattle)
Toronto Transit Commission
Washington Metropolitan Area Transit Authority

SURVEY RESULTS

Data Collection
The majority of questions on the survey addressed issues surrounding data collection. The survey focused on the types of data routinely analyzed by operations planning departments, including boardings, alightings, passenger loads, and times. Within the broad category of data collection, specific information was sought concerning the size and composition of the staff, techniques for gathering data, frequency of data collection for each route, and emerging technologies in use or under consideration. It should be noted that on-board origin/destination surveys or surveys regarding fare payment methods or transfer activity were not intended to be part of this analysis, although some agencies volunteered information on these types of data collection activities.

In-House Versus Private Firm
SPSD was very interested to learn how data collection was handled at other transit agencies. One of the guiding concerns leading to this study was whether there is a better way of gathering route-level information for planning and scheduling use. The survey asked who collected the data, how many people were involved, whether the data collection staff was full-time or part-time, where the data collection function was located within the agency, if temporary employees were used, if there were any unusual union-related work rules, and how data collection personnel were supervised. While terminology differs across agencies, employees involved in data collection are most frequently referred to as "traffic checkers" or simply "checkers." Sixteen of the 20 systems surveyed use in-house personnel to conduct data collection, as shown in Table 1. Two systems (Dallas and Portland) contract out data collection to a private firm, one (San Diego) has its Metropolitan Planning Organization (MPO) collect data, and one (St. Louis) relies solely on APCs.
Portland and Seattle also use APCs; Portland's private firm is responsible for supplementary checks. The most common reasons for keeping this function in-house were to keep high quality standards, maintain accountability, and avoid union problems. Some


agencies questioned the potential extent of cost savings resulting from using private firms. Cleveland received a cost proposal from a local university that exceeded current spending. Boston uses light-duty operators who would be paid anyway. The experiences of agencies using private firms are generally positive. Portland relies primarily on APCs, but supplements these results with spot checks done through a local market research firm using part-time college students. SANDAG, the MPO in the San Diego urbanized area, uses temporary employees to collect data for all transit routes. Houston supplements its in-house personnel with temporary employees on larger routes, but the quality of the work done by the temporary workers is below that of the permanent staff. Three other agencies with in-house staff indicate that they use private firms for data collection on major studies or surveys.

Table 1
Data Collection Procedures

Data Collection Method    Number of Systems
In-House                  16
Private                   2
Rely on APC               1
MPO (Private)             1

Dallas is the best example of an agency that has completely privatized its data collection. DART contracts with a local engineering company that is also a DBE (Disadvantaged Business Enterprise) firm, and pays a fixed monthly overhead plus an hourly rate. The agency defines the routes to be checked and the type of check, and the private firm then schedules its employees, conducts the checks, and returns the data via hand-held units connected to agency computers. The genesis of this privatization can be traced to the realization in 1991 that the agency was spending a lot of money on data collection without obtaining all the data required to plan and schedule service.
Estimated savings from privatization would be in the range of 50 to 60 thousand dollars per year, but in practice Dallas chooses to spend the same amount of money as it previously did and collect more data. A unique factor in Dallas is that DART is not unionized, and thus did not have to address any union concerns in privatizing data collection.


Seven agencies report that they have never considered contracting, for the reasons mentioned earlier (union issues, questionable savings, quality and accountability). Four agencies have considered contracting in the past, while four are actively considering it now. Budget cutbacks are cited as a main reason to consider the idea of privatizing data collection. Some agencies are prohibited by contract from performing any function in a given area for one year following layoffs in that area. St. Louis let its traffic checkers go in a budget cutback, and then did not collect data until it purchased APCs. Overall, the traffic checking function remains in-house at most agencies. The likelihood of continued budget reductions will encourage more systems to consider contracting data collection out to a private firm.

Staff Size and Composition

As shown in Table 2, data collection staff size ranges from 5 to well over 100 among the systems surveyed. Of the 16 systems that have an in-house staff, 9 report a staff of between 5 and 10 traffic checkers (in terms of full-time equivalents). Four agencies have between 11 and 20 checkers, 2 between 21 and 50, and New York has 422 part-time checkers. The size of private-sector checker forces is generally comparable. Houston reports an in-house staff of 6, but can bring in up to 50 temporary employees for a big route. Dallas' private contractor generally has about 12 checkers in the field, but must be able to put out 25 to 30 at one time, according to the terms of the contract. Some agencies augment their regular data collection staffs with light-duty operators, but the availability of light-duty operators varies. With five traffic checkers, MDTA is at the bottom of the range.

Table 2
Data Collection Staff Size

In-House Staff Size (FTE)   Number of Systems
5-10                        9
11-20                       4
21-50                       2
Over 50                     1
No In-House Staff           4


Another way to measure the relative size of the data collection staff is to use the ratio of peak vehicles to traffic checkers, thus controlling for system size. The 1993 Section 15 published reports are the source for the number of peak vehicles required for each system. Of the 15 systems with in-house staffs in the United States, this ratio ranges from 30 for New York to 200 for Seattle. Table 3 presents a breakdown of peak vehicles per traffic checker. Several of the systems on the high end of this scale rely on APCs or electronic registering fareboxes for their data and use traffic checkers only for special counts. MDTA's average is 117 peak vehicles per traffic checker, placing it in the middle of the transit agencies.

Table 3
Number of Peak Vehicles per Traffic Checker

Peak Vehicles per Checker   Number of Systems
1-50                        2
51-75                       5
76-100                      1
101-150                     1
151-200                     6
n/a                         5

Nearly all systems with an in-house staff locate the traffic checker function within the Operations Planning or Planning department. Five systems specify the Schedules section (within Operations Planning), two systems have a Traffic section, and two house their checkers within the Service Planning section. Houston's traffic checkers are part of the Market/Service Research department, while the MPO in San Diego is responsible for data collection. For most systems, one supervisor is sufficient; only six systems report more than one supervisor. Field visits by supervisors are the primary means for ensuring that checkers are collecting data accurately. Some agencies compare counts on the bus with counts at specific locations along a given route, and also use farebox data as a check on ridership counts. More than one system volunteered that the dedication of the supervisors makes a big difference in checker reliability and data quality. Several agencies indicate that they cannot provide as much supervision as they would


like. However, Seattle reports that they do not need a supervisor for their checkers: the checkers are used to supplement the APC counts. Most agencies employ their traffic checkers in full-time positions. New York and Baltimore are the only systems with all part-time checkers, although New Jersey has a mix of full-time and part-time checkers. Fifteen of the 20 agencies report never using temporary employees from an outside agency. The four agencies that have used temporary workers report varying experiences. Systems that employ temporary workers on a regular basis have built a relationship with the employment agency and have established a cadre of reliable workers. Other agencies cite problems with reliability and behavior of temporary workers. The general consensus is that the quality of work by temporary employees is less than the quality by regular traffic checkers, partly because temporary workers do not know the system well enough. In-house traffic checkers are unionized at all agencies except New Jersey and Toronto. Unions are viewed as reasonable with regard to the traffic checking function, with no onerous work rules cited and no real complaints other than an occasional grievance. One of the most interesting provisions is in Milwaukee, where traffic checkers are granted eight hours of paid leave for "foul weather days," to use in inclement weather at any time during the year. Seattle has a "snow assignment" that reassigns checkers to help out at the control center. Traffic checkers also perform other duties in the field at many agencies. These can include fare payment checks, on-board surveys (beyond routine data collection activities), park-and-ride lot counts, transfer surveys, and distribution of fliers. Seven transit systems use one or more checkers for data input and other office duties on at least an occasional basis. Toronto considers checkers for full-time office vacancies; most of its processing staff are former checkers.
In New Jersey, eight current members of the scheduling staff began their careers as traffic checkers; others throughout the agency started as checkers as well. The traffic checking assignment is considered an entry-level position in the agency. Additionally, nearly all positions (including traffic checkers) within New Jersey's system require a college degree. In most other systems, a traffic checking position is more or less a dead end and does not usually lead to another more responsible job.


Data Collection Techniques

Traffic checkers perform two basic types of tasks: ride checks and point checks. The information collected on both types of checks is essentially identical: passenger loads; boardings; alightings; and time leaving selected stops. A ride check is when a checker rides on the transit vehicle and records the information for each stop. A point check is when a checker is stationed at a particular location and records the information for each transit vehicle passing by that location. Ride checks are sometimes referred to as run checks, riding checks, or riding counts, while point checks are also known as stand checks or spot checks. The decision to concentrate on one type of check appears to be dependent on the philosophy of the department managers and the history of a given agency, although changes in leadership have resulted in changes in emphasis. Ride checks provide a more complete picture of ridership activity all along a route, while point checks can provide nearly as much information (especially at or near the peak load points on a route) with fewer checkers, especially on busy routes. Some agencies argue that point checks actually give a broader perspective on the route in question. There is a complete lack of consensus on this issue. As seen in Table 4, point checks predominate at six agencies, seven do more ride checks, five report an even split between the two types of checks, and two (Portland and St. Louis) rely primarily or exclusively on APCs. Cleveland uses point checks to flag potential problems, and then conducts ride checks to find the specific cause of the problem. Many agencies focus their point checks at peak load points only, while others do point checks at all time points along a route. Of the six agencies reporting a difference in emphasis by mode, five conduct more point checks on the rail system than on the buses.

Table 4
Agency Preference for Point Checks or Ride Checks

Predominant Type of Check   Number of Systems
Ride Check                  7
Point Check                 6
Even Split                  5
APC                         2
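The arithmetic connecting these quantities is simple: on a ride check, the load leaving each stop equals the load arriving plus boardings minus alightings, so a full load profile can be reconstructed from boardings and alightings alone. A minimal sketch; the route, stop names, and counts below are invented for illustration:

```python
# Reconstruct a ride-check load profile from boardings and alightings.
# Stop names and counts are invented for illustration.

def load_profile(stops):
    """stops: list of (stop_name, boardings, alightings) in route order.
    Returns a list of (stop_name, load_leaving_stop)."""
    load = 0
    profile = []
    for name, ons, offs in stops:
        load += ons - offs
        profile.append((name, load))
    return profile

ride_check = [("Terminal", 12, 0), ("Main & 1st", 8, 3),
              ("Main & 5th", 5, 10), ("Depot", 0, 12)]

profile = load_profile(ride_check)                 # loads: 12, 17, 12, 0
peak_stop, peak_load = max(profile, key=lambda p: p[1])
```

A useful sanity check on a complete ride check is that the load leaving the final stop is zero; a nonzero residual usually indicates a miscount.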


When using ride checks to analyze a specific route, nearly all agencies do counts on all runs for that route, even if it takes several days. In practice, most accept a sample that may be missing a few trips (due to interlining, breakdowns, or checker absence). Late or owl runs are not always considered crucial, because headways at those times tend to be policy-driven. Obviously, if a ride check is being done in response to a specific problem, all runs are not necessarily surveyed. In general, however, a 100 percent sample of runs is the goal for a full route analysis. It is also standard to provide checkers with a list of all stops on a given route, although two systems require counts only at time points along the route. Counts related to the Section 15 program constitute a significant portion of MDTA's data collection efforts. Cleveland is the only transit agency at which Section 15 counts make up 20 percent of data collection, because its rapid transit sample is non-revenue-based. A typical range for Section 15 counts is between five and 15 percent of total effort. At many agencies, Section 15 counts are separate and apart from the traffic checking function.

Technology

As mentioned in the introduction, the use of "new" technologies such as automated passenger counters (APCs), automatic vehicle location systems (AVLs), and hand-held data collection units changes the discussion of how operations planning departments conduct data collection and analysis. These technologies also raise new issues for transit agencies to address. A major benefit of hand-held units is their capability to be connected directly to personal computers (PCs) for data transfer, thus eliminating the need to input data manually. Of the 20 systems surveyed, 3 (Dallas, San Diego, and Oakland) use hand-held equipment exclusively, while 4 others use hand-held units for some checks and manual forms (Chicago, Cleveland, and New Jersey) or APCs (Seattle) for others (see Table 5).
Eleven agencies surveyed continue to rely exclusively on manual forms for recording ride check and point check data. The remaining two systems (Portland and St. Louis) use APCs.


Table 5
Data Collection Technology

Primary Recording Method    Number of Systems
Hand-Held Units             3
Manual Forms                11
Combination                 4
APC                         2

The hand-held units appear to require a period of adjustment and debugging before they can be used to their full potential. Once the problems are worked out, reaction is generally positive, but close coordination with the vendor and the agency's computer personnel is required. In MDTA's case, hand-held units were purchased but are not being used due to programming problems. Software is most frequently developed by the vendor or by an outside consultant, sometimes with in-house assistance and modifications. Five of the 11 agencies using manual forms are in various stages of acquiring hand-held units. Among other transit systems, Baltimore and Houston are planning to invest in APCs tied to an AVL system and electronic fareboxes, and thus are not interested in hand-held units. Portland, Seattle, and St. Louis are using APCs, while Chicago and Minneapolis have pilot programs still in the testing phase. As noted above, Seattle also uses hand-held units. Eight other transit agencies are in various stages of considering APCs, as shown in Table 6. By providing automatic counts of passenger boardings and alightings and registering actual time at schedule time points, APCs can vastly increase the amount of specific data available to service planners and schedulers regarding route performance. Seattle and Portland have the most extensive APC programs, with 25 and 15 percent, respectively, of the fleet equipped with APCs. The other systems have APCs on less than 10 percent of their fleets. The APC buses are rotated throughout the fleet on a predetermined basis.


Table 6
Use of Automated Passenger Counters (APCs)

Status                      Number of Systems
In Use/Pilot Program        5
Under Consideration         9
No Plans                    6

Systems with APCs are generally happy with the results, although there have been a few problems along the way with system hardware, location mismatches, route detours, lack of immediate availability of the data (most systems batch APC counts at the end of a line-up), and a need for additional units and transmitters. Minneapolis reports very accurate and reliable APC data that meet demanding standards. The lack of signposts or other location references at Portland has resulted in up to 50 percent of APC data being invalidated, but the agency still gets more valid data than before APCs. Agencies considering APCs are attracted by their potential, but some are also concerned by the additional coordination needed to ensure that they work properly. Concerns of this type are paramount to the seven transit systems that are not considering APCs at this time. Three cite perceptions of unreliability, preferring to wait until a tested APC system is available. Toronto conducted a pilot APC program with mixed results; the software worked well, but there were hardware and equipment problems. Cost is also a major issue. Some agencies are focused on establishing an AVL system first, before evaluating APCs. One interesting question raised with regard to APCs is what to do with all the data that can be collected. Are there sufficient staff resources or, more critically, staff time to digest and analyze the data? Will the vastly increased data available improve decision-making? At least one planning director expressed the opinion that too much data often lead to less well-defined remedies for problems and deferrals of decisions. Seattle, Toronto, and New Jersey have AVL systems in operation, while Denver, Dallas, and Milwaukee are in various stages of debugging and implementing AVL.
Portland, Baltimore, and Atlanta should be operational in the near future, and Minneapolis has a demonstration project on


one corridor. Seven other systems are considering AVL, while three agencies indicate no real plans at this time. Satellite or global positioning systems (GPS) appear to be the preferred technology for AVL. New Jersey currently has a signpost system in Essex County, but is looking into a satellite system. Table 7 summarizes the use of AVL systems among agencies surveyed.

Table 7
Use of Automatic Vehicle Location (AVL) Systems

Status                      Number of Systems
In Use/Being Implemented    6
To Be Implemented/Demo      4
Under Consideration         7
No Plans                    3

AVL systems are generally seen to have the biggest impact in the operations area, where dispatchers can make needed service adjustments in real time from behind a console. For operations planning departments, the most important aspect of AVL is its ability to collect running time data for on-time performance analyses and schedule adjustments. Some planning departments have been involved only tangentially in AVL decisions within their agencies and are worried that their needs have not been adequately addressed. Agencies not considering AVL at this time mention the cost and the unproven aspects of a new technology. Pittsburgh is designing specifications for APCs that will be compatible with a future AVL system; its APC system will be GPS-based and will function as an AVL without the real-time aspect. There are also concerns over the reliability of AVL data; Toronto reports that its time point data's accuracy is plus or minus two minutes. Most operations planning departments do not use electronic farebox data extensively. Chicago is the major exception to this observation, since farebox data constitute the basis of its service monitoring process.
Chicago has trained its operators to punch the appropriate buttons at the end of each trip, and has also researched the undercount phenomenon common with farebox data and developed factors to adjust ridership appropriately. The farebox data are not sufficiently detailed for scheduling purposes; however, Chicago's scheduling department relies almost exclusively on point checks at maximum load points.
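The kind of factoring described here can be approximated by deriving a per-route ratio from paired observations (a traffic-checker count alongside the farebox count for the same trips) and applying it to future farebox totals. This is a hedged sketch under that assumption; the simple ratio below is an illustration, not Chicago's actual method, and all counts are invented:

```python
# Adjust farebox ridership for undercount with a route-level factor.
# The paired counts are invented; the ratio method is illustrative only.

def undercount_factor(paired_obs):
    """paired_obs: list of (checker_count, farebox_count) for the same trips.
    Returns the ratio of observed riders to fares recorded."""
    checked = sum(c for c, _ in paired_obs)
    recorded = sum(f for _, f in paired_obs)
    return checked / recorded

def adjusted_ridership(farebox_total, factor):
    """Scale a raw farebox total up by the undercount factor."""
    return round(farebox_total * factor)

pairs = [(120, 100), (90, 80), (150, 120)]   # checker vs. farebox, same trips
factor = undercount_factor(pairs)            # 360 / 300 = 1.2
print(adjusted_ridership(1000, factor))      # prints 1200
```

In practice a factor of this kind would be recalibrated periodically, since operator button-pushing behavior drifts over time.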


Three agencies (Milwaukee, Toronto, and Seattle, which only recently acquired electronic registering fareboxes) do not use electronic farebox data at all. In Milwaukee, the operators' union protested, and management decided that operators would not have to record boardings. The finance department is the primary user of electronic farebox data at most agencies. Operations planning departments tend to use the data to track overall trends and as a check on their own counts. The major complaint is that trip-level data are not reliable, because operators do not punch the correct buttons at the correct times. Cleveland reports that the highest ridership from farebox data always seems to be on Route 0. A few agencies use computer software to adjust operators' mistakes based on recent trip averages. Other uses of electronic farebox data include counting special boardings (wheelchairs, bicycles), ridership at special events, boardings by the elderly on a route serving a senior citizen center, and trip-by-trip express bus ridership. One system looks at past years' data to decide the appropriate schedule to operate on a holiday that falls on a Saturday. Many agencies use farebox data to calculate route productivity measures. In general, however, operations planning departments do not routinely use electronic farebox data because of trip-level reliability issues. The effects of new data collection technologies on transit agencies can be seen most clearly in the data collection schedules. A typical goal for transit systems is to do a full route analysis once every two years for each route. Some systems aim for annual route analyses, while others set once every three years as a goal. Systems with APCs collect data for each run between one and five times during every line-up.
APCs result in the routine availability of current data and permit a closer tracking of route-level trends. Systems with hand-held data collection units also tend to do more frequent counts than those collecting data manually, possibly because of the greater capacity for data input. Most systems schedule data collection for weekday service only, and do counts on weekends on an exception basis. Only four systems indicated a schedule for weekend counts, although others are considering establishment of a weekend schedule, possibly at a reduced frequency compared to weekdays. Five systems do not have a formal schedule for route-level data collection. By a two-to-one margin, systems indicate that they are not able to meet their data collection schedules for every route. The reasons most often cited by the 10 agencies that cannot meet their


schedules are special requests, one-time events, and a shortage of personnel. Examples of one-time events include the opening of a new light rail line in Denver, where a local political controversy led to extensive counts of light rail ridership and a diversion of personnel from bus counts, and implementation of new crosstown routes in Boston. Other systems indicate that there are always special requests or "brush fires" that disrupt the planned schedule of data collection. Routes that are postponed tend to be those that are performing reasonably well, are not part of any planned service changes, or are not attracting any attention. While data collection schedule adherence is still desirable, most operations planning directors indicate that the counts that need to be done are being conducted. Thus there is no undue concern if a few routes slip behind schedule. APCs and AVL systems are changing standard practices at operations planning departments. These technological advances enable more frequent and comprehensive data collection regarding passenger loads and running times. Current data are much more readily available, and this availability may reduce the need to divert personnel to meet special requests. APCs and AVL are expected to reduce the need for traffic checkers, according to most (but not all) agencies. Traffic checkers are likely to be used less for routine data collection and more for immediate, specialized tasks. Operations planning directors have some trepidation about the need to deal more closely with MIS or computer services departments as a result of automated data collection techniques. Denver's Service Planning and Scheduling department was not heavily involved in the implementation of the AVL system. The department would like the opportunity to define the screen for its interface with the system.
As noted in a subsequent section, the relationship between operations planning and MIS departments at many agencies is poor, and the problems this creates could be worsened by an increased reliance on technology. The "Sorcerer's Apprentice" scenario of overflowing data threatening to wash away understaffed departments is a serious concern. Even at agencies with manual data collection techniques, there is a sense that more data are available than can be analyzed and used. Agencies adopting new data collection technologies need to define clearly how their potential benefits will be achieved in practice.


Data Analysis

This section addresses the overall procedures for analyzing passenger load and running time data, and includes data input and storage, staff devoted to analysis, data summary, types of analysis, and new technologies. The discussion of APCs and AVL systems in the previous section highlights the importance of analyzing and evaluating the raw data collected. The resulting information can then be used to make decisions regarding specific routes.

Data Input and Storage

The first step after collecting data is to transfer it from the collection forms or device to a computer. Nine systems input data manually with in-house staff, as shown in Table 8. Seven systems with hand-held units transfer the data directly, either in the office by modem, or in the field using a laptop computer. At the same time, the hand-held units can receive new assignments for the next day or week. Four systems have the APC units or farebox probed at the garage. New York contracts out data input. Pittsburgh forwards the data to the person or department that requested it, and does not necessarily input data into a computer database or spreadsheet. Chicago also indicated that its point checks are analyzed manually, not via computer. Toronto uses a scanner as an input device. The total number of systems in Table 8 exceeds 20 because Chicago, New Jersey, and Seattle use more than one method of data input. Personal computers have made inroads in transit agencies, but many systems continue to make use of mainframes or minicomputers in addition to PCs. Spreadsheets, database programs, scheduling packages, and specialized software are all used. The amount of editing required after data input varies from agency to agency. Direct data entry from hand-held units or APCs creates the need for quite a bit of editing in the beginning as part of the debugging process.


Table 8
Data Input Procedures

Input Method                   Number of Systems
Manual In-House                9
Direct from Hand-Held Units    7
APC/Farebox Probed             4
No Input                       2
Private Firm                   1
Scanner                        1

Toronto keeps careful track of its scanning process. The accuracy of the scanned data is 95 percent, and 13 percent of the staff person's time is spent on scanning-related activities. Toronto estimates the cost of data collection to be one-half cent per paying passenger. Data are stored in hard copy and computer formats. Printed reports or key printouts are kept for one to five years, and are then either discarded or stored off-site. Toronto makes microfiche copies of all data, and reports that it is a useful storage medium in addition to mainframe and PC disk-based storage. As transit systems migrate to personal computers, disk-based storage is increasingly popular. Mainframe-based data are often archived on tape, and access to historical information is cumbersome. One agency described its archive procedures as a punch card system without the punch cards. Only four agencies (Boston, Seattle, Atlanta, and Dallas) indicate that they can automatically combine data for several routes in a corridor.

Analytical Staff

Nine agencies assign staff exclusively or primarily to data analysis. Size of the analysis staff ranges from 1 person to about 20 people in New York's System Data and Research unit. Eleven agencies treat data analysis as part of the duties of schedule makers and service planners, and do not assign staff specifically to this function. The introduction of APCs or AVL systems often requires a staff person dedicated to data analysis, at least in the early stages.


Data Summaries

Seattle's answer to the question of how data are summarized could be repeated by most transit agencies: "Every way we can." All systems analyze passenger loads and running times by route, and most use time also. Eleven of the 20 agencies summarize passenger load by time period (morning peak, base, and afternoon peak are typical periods). Six systems look at 15- or 30-minute periods in the peak periods and 30- to 60-minute periods at other times. One system relies on hourly summaries. After time, peak load point is the most common means to summarize data. Fourteen systems indicate routine use of peak load point data, and several point out that there may be more than one peak load point on a given route. Other common means of summarizing data include by trip (12 systems), by stop (11 systems, often used to respond to shelter requests), and by route segment (9 systems).

Types of Analysis

Most agencies examine passenger loads at the peak load points on a route, since this is a primary measure of demand. Only one agency does not routinely perform running time analyses or check schedule adherence. Another system analyzes running times on an exception basis only. Analyses are also conducted by route segment, stop, time period, and trip. Computerized analysis is standard at most agencies. There appears to be an even split between personal computers and mainframe/minicomputer platforms. Eight systems report using spreadsheets, while four rely on database programs and four use specialized software. Two agencies not currently using database packages mentioned the possibility of migrating to a database in light of increased data availability and the flexibility offered by relational databases.
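The time-period and peak-load-point summaries described above amount to simple grouping and maximum operations over individual load observations. A minimal sketch; the period boundaries, stop names, and counts are all invented for illustration:

```python
from collections import defaultdict

# Summarize passenger-load observations by time period and find the peak
# load point. Period definitions and observations are invented examples.

PERIODS = [("AM Peak", 6, 9), ("Base", 9, 15), ("PM Peak", 15, 18)]

def period_of(hour):
    """Map an hour of day to a named service period."""
    for name, start, end in PERIODS:
        if start <= hour < end:
            return name
    return "Other"

def summarize(observations):
    """observations: list of (hour, stop, load).
    Returns (max load by period, max load by stop)."""
    by_period = defaultdict(int)
    by_stop = defaultdict(int)
    for hour, stop, load in observations:
        name = period_of(hour)
        by_period[name] = max(by_period[name], load)
        by_stop[stop] = max(by_stop[stop], load)
    return dict(by_period), dict(by_stop)

obs = [(7, "Main & 1st", 45), (8, "Main & 5th", 52),
       (12, "Main & 1st", 20), (17, "Main & 5th", 48)]
periods, stops = summarize(obs)
peak_point = max(stops, key=stops.get)   # the route's peak load point
```

The same grouping pattern extends to the other summary dimensions the survey mentions (by trip, by route segment) by changing the grouping key.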
PC-based analysis using readily available software packages is an attractive option to operations planning departments because it is a relatively self-contained activity. The use of mainframes, minicomputers, customized software packages, or new technologies requires cooperation with the MIS or computer services department. Four survey respondents indicate that the lack of responsiveness or low priority given to their needs by computer personnel within the agency is a serious problem. Operations planning departments with a computer expert included in the staff


have fewer computer-related problems and appear better able to introduce new technologies and procedures with minimal disruption. Portland cites the fact that programmers are part of the scheduling department instead of MIS as a major factor that makes the APC process work well. Location of the programmers at Portland can be traced to the early days of the scheduling program RUCUS, when the agency defined the function of supporting RUCUS as part of scheduling. Washington also stresses the importance of in-house data processing capabilities. Another operations planning department recently recruited a computer expert to avoid being "left in the deep blue sea" by computer services.

New Technologies

Geographic information system (GIS) capability is the most exciting new development in the area of data analysis. Several agencies are actively considering or are already using GIS as an analytical tool. The process of integrating GIS with bus stop or scheduling databases to ensure a common system is underway in Washington and Seattle, while San Diego's MPO is putting the finishing touches on the transit network on its GIS system. The development of GIS capabilities is not usually driven by the transit agency; the MPO or a regional planning agency often takes the lead. Thus, transit agencies with strong MPOs are more likely to be involved with GIS. Seattle includes a water quality division along with a transit division, and the potential use of GIS on a county or regional level was a more important factor in the agency's decision to purchase a GIS system than its transit applications.
Dallas provides an example of a transit agency with a strong MPO that has been a leader in GIS applications. Dallas is integrating its ridership database with a GIS bus stop list into a single unified list, and has just begun to tap the potential uses of GIS. Census journey-to-work data by traffic analysis zone, year 2010 ridership projections, census employment data, and annually updated population data can be analyzed through GIS. Not all agencies have had positive experiences with GIS. St. Louis reports internal problems with the intricacies of a Unix-based ArcInfo system, as well as concerns regarding the ability to support GIS with in-house resources. However, St. Louis has digitized its route structure and is moving toward its goal of integrating its APC data with GIS.


Data Use

Once passenger load and running time data have been collected and analyzed, the final step in the operations planning process is to use this information to make decisions on the appropriate level of service for a given route. This section focuses on who uses the information, how decisions are made, and how route-level information is disseminated within and outside the agency. Interactions within operations planning (primarily between service planners and schedule makers) and between operations planning and other departments are briefly discussed. The role of service guidelines or evaluation standards in guiding decisions is examined. This section also looks at the types of reports produced by operations planning departments and the extent to which these reports are circulated.

Data Users

Schedule makers and service planners are the primary users of the route-level information developed through the operations planning process. Since the data collection is tailored to the types of information needed by scheduling and service planning, this is not surprising. In most agencies, other departments request the data on at least an occasional basis. Bus operations, finance, marketing/market research, and strategic planning departments are most frequently mentioned by survey respondents. It is clear, however, that operations planning departments rely on and make the most use of passenger load and running time data. In thinking about future technological applications, MDTA service planners considered the feasibility of a direct link between the output of the running time analysis and the scheduling package. Theoretically, a link of this nature would improve the department's efficiency by eliminating the need to input running time changes into the scheduling package manually. Nearly all systems surveyed report that there is not an automated link and that running time data are submitted manually to the schedule makers.
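As an illustration of what such a link would carry, a running time analysis can reduce observed trip times to a single suggested scheduled value; a high percentile is sometimes preferred over the average so that most trips fit within the schedule. This is a generic sketch of that reduction, not any surveyed agency's method, and the 85th-percentile choice and observations are assumptions for illustration:

```python
import math

# Reduce observed running times (minutes) to one suggested scheduled value
# using the nearest-rank percentile. The percentile choice is illustrative.

def suggested_running_time(observed_minutes, pct=0.85):
    """Return the pct-percentile of observed running times (nearest rank)."""
    ordered = sorted(observed_minutes)
    k = max(0, math.ceil(pct * len(ordered)) - 1)
    return ordered[k]

obs = [31, 29, 33, 35, 30, 32, 34, 38, 33, 31]   # invented ride-check times
print(suggested_running_time(obs))                # prints 35
```

As the survey notes, a figure like this would still be only an input to the schedule maker, who layers on knowledge of the route, the operators, and recovery-time needs.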
Even agencies with the capability to make this connection have opted not to do so. One system with off-peak timed transfers downtown suggests that those meets would require manual override. Another agency comments that its schedule makers do not base schedules on average running times, but use their knowledge of conditions on the routes and driving styles of the operators to produce an optimal schedule for a given route with adequate recovery time to allow for the route's bad days. Other reasons, such as difficulty in interfacing with the scheduling database, are more prosaic. Next-generation scheduling packages and GIS applications could result in direct links between analytical output and scheduling input, but at present these are relatively rare.

Other departments within the transit agency are not the only sources of interest in data and reports produced by operations planning. Public agencies and private businesses can and do make requests for specific data. Toronto receives 350 requests annually for various ridership information, a significant proportion of which is from municipalities for stop-level information to be used in determining shelter locations. Outside businesses are interested in rail station usage, and Toronto charges them for providing this information.

Data Uses

The major purpose of data collection and analysis is to monitor route-level performance. Implicit in this purpose is the need to make changes on routes that do not perform well. As noted in the introductory section, one of the primary functions of an operations planning department is to make appropriate changes to routes and schedules. Most operations planning personnel would characterize route-level analysis as half art and half science.

A first step in using information on a route is simply to review the data. The schedule maker or service planner looks for patterns in the data that reveal problems in the route's performance. Some problems may be fairly obvious, such as a wide variation between scheduled and actual running time throughout the day or heavy passenger loads at the maximum load point during peak hours, while others may be more subtle and require knowledge of the route. In times of severe budget constraints, one agency indicates that the review of route-level data focuses on a search for clues as to how to save money.

The science half of the equation is supported by service guidelines or route-level evaluation standards. Formal guidelines and standards have become increasingly common within the transit industry over the past 10 years.
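The mechanical portion of this review step can be sketched in a few lines of code. The fragment below is an illustrative sketch only, not the actual procedure of MDTA or any surveyed agency; the field names, the seated-capacity figure, and the 15 percent running time tolerance are all assumptions made for the example.

```python
# Illustrative sketch only -- not any agency's actual procedure. The field
# names, the seated-capacity figure, and the 15 percent running time
# tolerance are invented for this example.

SEATED_CAPACITY = 43        # assumed seats on a standard bus
RUN_TIME_TOLERANCE = 0.15   # flag trips more than 15% off schedule

def review_trip(trip):
    """Return a list of problems a planner might want to examine."""
    problems = []
    # Wide variation between scheduled and actual running time
    deviation = (trip["actual_min"] - trip["scheduled_min"]) / trip["scheduled_min"]
    if abs(deviation) > RUN_TIME_TOLERANCE:
        problems.append("running time off schedule by %+.0f%%" % (100 * deviation))
    # Heavy passenger load at the maximum load point, derived from
    # ride-check boardings and alightings by stop
    load = 0
    max_load = 0
    for ons, offs in trip["stops"]:
        load += ons - offs
        max_load = max(max_load, load)
    if max_load > SEATED_CAPACITY:
        problems.append("standees at maximum load point (%d riders)" % max_load)
    return problems

# A hypothetical peak-hour trip: 45 minutes scheduled, 56 minutes observed.
trip = {"scheduled_min": 45, "actual_min": 56,
        "stops": [(12, 0), (20, 3), (22, 2), (2, 25), (0, 26)]}
for problem in review_trip(trip):
    print(problem)
```

The subtler problems the text mentions, of course, remain a matter of professional judgment; a screen like this only points the planner toward trips worth a closer look.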
At the outset of this project, CUTR planned to ask systems to provide a copy of their guidelines, but a TCRP synthesis project specifically addressing guidelines and standards was already underway and has since been published.* The report categorizes route evaluation standards into five broad groupings: route design; schedule design; economics and productivity; service delivery monitoring; and passenger comfort and safety. Given the extent of analysis done in the TCRP report, this study scaled back its examination of standards and guidelines, and did not ask for samples. Agencies were asked if they had guidelines, if the guidelines had been formally adopted or approved by the Board of Directors, and if they were applied in the decision-making process.

* Howard P. Benn, "Bus Route Evaluation Standards: A Synthesis of Transit Practice." TCRP Synthesis 10. Washington, D.C.: National Academy Press, 1995.

Seventeen of the 20 agencies surveyed have service guidelines, as shown in Table 9; the other agencies are in the process of developing standards. The guidelines have been formally adopted by the Board at 14 agencies. The Board of Directors at three systems (Boston, Washington, and Houston) did not formally adopt the guidelines, although in Boston the Advisory Board to the Board of Directors did approve the guidelines. All three of these systems use the guidelines informally.

Table 9
Service Guidelines

Status                         Number of Systems
Developed                      17
Formally Adopted by Board      14
Routinely Applied              15

The application of guidelines to service and schedule decisions is a sensitive topic at some agencies. All systems recognize that there are gray areas where professional judgment is more reliable than strict adherence to guidelines, and understand that there are certain cases in which the Board may not faithfully follow what it has adopted. Two systems report that their Boards routinely violate or ignore adopted guidelines. Several Boards tend to focus on one or two specific standards, such as passengers per vehicle hour, passenger loads at the peak load point, or farebox recovery ratio, as the most important performance measures. Some agencies rely on route rankings, as opposed to specific standards, in making service and schedule decisions.
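As a toy illustration of how adopted standards can be applied mechanically before professional judgment takes over, the fragment below screens a route summary against two of the measures named above. The threshold values and route figures are invented for the example and are not drawn from any agency's adopted guidelines.

```python
# Invented thresholds for illustration only -- real guidelines vary by
# agency and by route type.
GUIDELINES = {
    "passengers per vehicle hour": lambda r: r["passengers"] / r["vehicle_hours"] >= 25.0,
    "farebox recovery ratio":      lambda r: r["fare_revenue"] / r["operating_cost"] >= 0.20,
}

def screen_route(route):
    """Return the names of the standards a route fails to meet."""
    return [name for name, meets in GUIDELINES.items() if not meets(route)]

# A hypothetical weekday route summary.
route = {"passengers": 1800, "vehicle_hours": 90,      # 20 passengers/veh. hour
         "fare_revenue": 950, "operating_cost": 6200}  # about 15% recovery
print(screen_route(route))
```

A route failing a screen like this would then go on a list for closer review, or into a route ranking, rather than being changed automatically.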
Apart from using data to make service changes in accordance with route evaluation standards, agencies also rely on certain types of data for other decisions. Bus stop activity is often used to determine the need for a bus shelter. Agencies also base decisions to assign articulated buses on passenger load data.

Dissemination

Nearly all operations planning departments must produce a report for the Board in the case of major route or schedule changes. One agency reports that its Board is not so interested in route-level details and does not require a lot of justification for route changes. Beyond the Board requirement for major changes, 16 agencies routinely produce reports that focus on or include route analysis. As shown in Table 10, six agencies produce route profiles after a route has been analyzed and distribute these to other sections of the agency. Houston and Toronto have begun to distribute their reports on-line through the computer network; others rely on the old-fashioned hard copy circulation. Six systems produce route-level reports, some in computerized form, but do not distribute them beyond service planning and scheduling. Four other agencies include route analyses in quarterly or annual reports. Service planning is more likely than scheduling to produce reports. Among the four agencies not routinely producing route-level reports, two cite lack of Board interest (either in the bus system or overall), and two others are planning route profiles.

Table 10
Route Report Production and Dissemination

Route Reports                  Number of Systems
Distributed within Agency      6
Limited Distribution           6
In Quarterly/Annual Report     4
No Route Reports               4

Aside from disseminating information, route reports serve to remind other departments within the transit agency that operations planning has the most current detailed data on system usage and performance. Washington points out that the distribution of data and reports ensures that the agency is consistent with regard to public comments on ridership trends. Recognizing the benefits of data dissemination, several operations planning departments are investigating new techniques or formats to share their data. Cleveland intends to prepare and distribute route profiles. Portland is planning line-by-line reports on individual routes, including schedules, service plans, bus stops, shelters, demographics within the route's service area, and ridership. Boston will introduce a report card with on-time and passenger load data by time period. Washington is moving toward providing route-level data on diskettes for other departments to customize and analyze as they see fit.

Time Frame and Bottlenecks in Route Analysis

Agencies have some difficulty in estimating the average amount of time involved in collecting, inputting, and analyzing data and producing a report or recommendation for a single route. In part, this difficulty stems from the perception that there is no "average" route. Estimates range from one week to one year. Most agencies fall within a range of three weeks to three months, depending on the complexity of the route and the extent of the analysis. This was a difficult question to analyze, because systems differ in the level of detail involved in a route analysis. Table 11 shows that the most common sources of delay are limited staff resources and data input. Other bottlenecks cited by more than one agency include delays in processing automated data, difficulties in completing counts, other priorities, data analysis and evaluation, public information requirements, and the Board of Directors. Some systems make major changes at only one time during the year; while they can collect and analyze data in a reasonable time frame, implementation is not immediate.

Table 11
Commonly Cited Bottlenecks

Source of Delay                Number of Systems
Limited Staff Resources        5
Data Input                     5
Processing Automated Data      4
Completing Assignments         3
Analysis/Evaluation            2
Other Priorities               2
Public Info Requirements       2
Board of Directors             2


SUMMARY

Most bottlenecks can be traced back to limits on staff size. Given current trends in the industry, these limits are not likely to be eased in the near future. The challenge for operations planning departments is to make the most of scarce resources by "working smarter." Departments that have been successful in routinizing as many tasks as possible generally fare better than those that have not been able to standardize their procedures. As noted in the discussion of APCs and AVL systems, technological advances in one area are not sufficient to overcome the problems of limited resources; the bottleneck is shifted, for example, from data entry to data analysis. Advanced analytical procedures and standardized reporting formats are needed in addition to technological advances. Over a five-year period, these types of changes can result in a marked increase in efficiency, but debugging problems and the learning curve involved with any new technology or procedure will cause short-term problems. As transit agencies now on the cutting edge continue to refine their procedures and improve the reliability of unproven techniques, the lessons learned in the process can be used profitably by the entire industry.




CHAPTER THREE: Findings and Recommendations

INTRODUCTION

The first chapter of this report reviewed existing data collection and analysis procedures in MDTA's Service Planning and Scheduling Division, which is responsible for route-level performance analysis and monitoring. The review focused on data collection, data input, and report production. The second chapter presented the results of telephone interviews with operations planning personnel at 20 major transit agencies in the United States and Canada concerning data collection, analysis, and use. Processes and technologies in use at other agencies have provided insight into possible solutions to continuing problems facing MDTA. Based upon CUTR's analysis of current procedures and the findings of the telephone survey, this final chapter presents conclusions and recommendations for refining and streamlining the performance analysis and monitoring process at Metro-Dade Transit Agency.

FINDINGS AND RECOMMENDATIONS

CUTR's conclusions and recommendations are summarized in the paragraphs below. These fall into four major areas: data collection personnel; data collection technology; cooperation with other divisions; and report production.

Data Collection Personnel

There is no overriding reason to change personnel organization for data collection, based on the results of the survey of other operations planning departments. In-house data collection is still the overwhelming preference among the 20 agencies surveyed. Most have full-time data collection staffs and do not use temporary employees as a rule (SPSD sometimes brings in potential full-time workers as temporary employees on a trial basis). The experience at other agencies is that in-house employees produce work of higher quality, have a more complete knowledge of the transit system, and keep data collection directly under agency control.
Four agencies indicate that use of private firms to collect data is under consideration as a direct result of projected budget shortfalls, but none of the four has taken significant actions in this direction. MDTA's current organization is in line with industry standards, but it is possible that the transit industry will see changes come about in the next year or two. MDTA should stay with full-time traffic checkers for now, but should monitor national trends and reconsider the issue at this time next year.

The optimal size of the traffic checking unit is not easy to determine. MDTA's current active staff of five traffic checkers matches the smallest staff among the agencies surveyed that do data collection in-house, while its ratio of peak vehicles to traffic checkers is in the middle of transit agencies. MDTA has experienced difficulty in carrying out scheduled counts, particularly given the need to divert resources to special requests. To function more efficiently and improve its ability to complete route checks in a timely fashion, MDTA should add one or two traffic checkers to its staff. This may be a difficult recommendation to carry out at this time, when MDTA is facing severe budget constraints and staff cuts are a realistic possibility. The use of hand-held data collection units, discussed in the next section, is a higher-priority alternative than the need for two additional traffic checkers. Regardless of the technology used, the data collection program cannot function effectively with fewer than five traffic checkers.

Data Collection/Analysis Technology

SPSD should enlist help, either from inside the agency or from external sources, to get the hand-held units working properly. SPSD owns 10 Telxon PTC 610 units. These units were programmed to conduct ride checks, but Management Information Services (MIS) was not able to devote the time and staff resources required to ensure that the units worked correctly. Other operations planning departments included in the survey successfully use Telxon units. Options include contracting with the vendor to reprogram the units, requesting MIS assistance, or upgrading the units to a more current model.
SPSD has been developing computer expertise in software packages such as SAS, a database program that provides strong analytical capabilities. SPSD should continue to develop its in-house computer expertise. However, the hand-held units and the new scheduling package expected to be installed in the near future both require more extensive computer support and knowledge than is available within SPSD. MIS should provide support in the form of a programmer or systems analyst dedicated at least half-time to SPSD. This person can assist in the debugging of the hand-held units, work with SPSD to install the new Trapeze scheduling software, and customize and update data entry and scheduling procedures. Ideally, the person would be located within SPSD to gain a full understanding of the purpose and uses of the computer technology. This person could also assist SPSD in carefully considering what add-on modules to the Trapeze program would be most useful and productive. Other operations planning departments with successful monitoring programs emphasize the importance of readily available computer assistance. SPSD and MIS discussed a similar arrangement in the past, but nothing came of the effort.

Assuming that the hand-held units are brought on-line, their ability to plug into a computer and transfer data electronically will solve data entry problems. SPSD will no longer need to have temporary employees enter manually-collected data. The increased data flow will require additional checking and editing. After the hand-held units are operational, SPSD should assign an employee to be responsible for data management and analysis. This person should be knowledgeable in computer technology and be familiar with the software used in the division (primarily SAS at present). This position's functions would include editing data from the hand-held units, using SAS to analyze the route-level data, monitoring the database, and providing general computer support. Depending on the ease of transition to the new scheduling package and the reliability and usefulness of the hand-held units, this position could eventually replace the half-time MIS computer expert.

Cooperation with Other Divisions

As technological advances continue to be implemented within MDTA, it is important that the interests and needs of all divisions are understood.
At least one other transit agency is concerned that placing the development of new technologies under the control of a single division prevents other divisions that may want or need to use the output of the technology from participating in potentially critical decisions. Within MDTA, the Transit Operating System (TOS) is one example of a computerized program that has not fully met the needs of either division (bus operations or maintenance) that it was designed to help. CUTR recommends that as new technologies come on line, divisions within MDTA work together to address all concerns and meet all needs. A formal committee is a possibility, but a more informal working group made up of representatives involved with the technology or its output in each affected division might be more productive and useful.


More immediately, a new automatic vehicle location (AVL) system is currently undergoing testing at MDTA. SPSD should be involved in the AVL implementation to ensure that it can have ready access to the data and output that it needs.

Report Production

SPSD prepares reports summarizing the results of route-level studies in terms of passenger loads and running time. These reports also contain recommendations for route improvements. With staff and time constraints, these reports are not produced as frequently as SPSD would like. SPSD should develop a streamlined route report structure, eliminating extraneous information and presenting a succinct summary of route findings and recommendations. The Portland and Seattle examples in the Appendix (bound separately) show effective route-level reports and summaries of proposed changes. The report preparation process should be routinized, allowing SPSD to produce a report for every route it analyzes. These reports should be disseminated to senior managers throughout the agency. A streamlined structure should make the reports more readable. Background information and more detailed data will continue to be shared primarily among SPSD staff.

In the long run, SPSD should work toward the development of an annual report, perhaps along the lines of a report card, summarizing the most recent information on all routes in the system. The New York example in the separately-bound Appendix presents a great deal of route-level information in a concise format. The combination of brief reports on individual routes and an annual summary will ensure that current information is readily available as input to the decision-making process.

SUMMARY

Service Planning and Scheduling Division's current practices in data collection, analysis, and route performance monitoring are reasonably typical of those in the transit industry.
MDTA is not on the cutting edge of new technological developments such as automated passenger counters, but it is far from alone in waiting for the start-up problems inevitably associated with new technologies to be solved. As the new technologies are assimilated into the transit industry, the challenge will be for the analytical capabilities to keep pace with available data. SPSD should pursue solutions to allow the use of hand-held data collection units, and should be closely involved in the implementation of the new automatic vehicle location system. SPSD should also strengthen its analytical and report-generating abilities to ensure that data are readily available to decision-makers throughout MDTA. Finally, SPSD should continue to monitor trends at other operations planning departments to keep up-to-date on changes in the transit industry.








1. Do you contract out data collection to a private firm or do it in-house?
If contract out: Name/type of firm (university, private, temp agency)? In-house prior to contract? Savings compared to in-house? Quality of work? How much does the private firm do? Where in the process does work go out, and where does it come back (e.g., do they input also)?
If in-house: Ever consider contracting? Reason why you didn't?

2. How large is your data collection staff? Where is the data collection unit located within the agency?
If > 10 checkers: How many supervisors?

3. Are your traffic checkers full-time or part-time employees? Do you use temporary workers?
If temp: % of staff? Why temps? Problems with temps?

4. Are the traffic checkers unionized? Are there any unusual work rules? Are their hours of work restricted (e.g., overtime if outside of "normal" day)?

5. Do you rely primarily on point checks or ride checks? Approximate percentages of each? Different by mode (bus/rail)?

6. What data do your checkers collect? Offs/ons? Passenger loads? Times? Different by mode? Other regular duties (data input)?

7. How often is each route scheduled for data collection (full route analysis)? Are weekday and weekend counts done on this schedule? Are you able to adhere to this schedule?


8. Do you sample all runs on a route or a sample of runs? If a sample, how do you select the sample?

9. Do traffic checkers enter data on ride check/point check forms, or do they use hand-held units?
If hand-held: What type of equipment do you use? How well do they work? Any problems? How much training is required? Who developed software for units (vendor/in-house/consultant)?

10. Are you using or considering automated passenger counters?

11. On ride checks, do you provide checkers with a list of all stops, selected stops, or time points only on a given route?

12. How do you monitor checkers to determine if data are being collected accurately?

13. How are Section 15 counts related to your overall data collection efforts? What percentage of counts is Section 15 related?

14. Do you make use of electronic farebox data? If so, how?

15. How do you transfer data from checker forms to be analyzed? Is the process computerized or manual?
If computerized: Scanner used? Direct from hand-held? What type of computer equipment? Does the process require much editing after data input?

16. How are data summarized? (by hour or time period, by trip, geographically by stop, route segment, time point, route, route type)

17. Who analyzes the data? How many staff people are assigned to data analysis?


18. What types of analysis are performed using the data? (Ridership/run time, schedule adherence, peak load) Is the analysis computerized or manual?
If computerized: What software do you use (specialized program, purchased, modified spreadsheet, modified database)? Developed in-house? Is there an in-house programming guru? Do you make use of GIS?

19. Who uses this information? How do you use this information to make schedule or routing changes?

20. How is run time data submitted to Schedules? Directly into scheduling program? Report that Scheduling inputs?

21. Do you have an AVL system? How effective is it? Does it take care of all run time analyses? Has it changed the duties of the checkers, or the types of data they collect? Has it affected the number of checkers employed?

22. Does your agency have service guidelines? Have they been formally adopted/approved by the Board? Are these guidelines applied in the decision-making process?

23. How are data stored for future reference? Computerized/manual file?
If computerized: Can it be easily accessed, especially if you need a detailed piece of info (e.g., specific trip, load at a certain point)? Can you combine data for several routes (e.g., in a corridor)? What type of data are stored? In what form?

24. Do you routinely produce route reports that disseminate findings? Is a standard format used? Are these distributed throughout the agency?

25. On average, how long does it take to collect, transfer, and analyze data and produce a report for a single route? Where are the bottlenecks in the process?

26. May we get copies of any planning and scheduling reports, data collection forms, procedures, service guidelines?








AC Transit
Bi-State Development Agency
Chicago Transit Authority
Dallas Area Rapid Transit Authority
Greater Cleveland Regional Transit Authority
King County Metro (Seattle)
Maryland Mass Transit Administration
Massachusetts Bay Transportation Authority
Metropolitan Atlanta Rapid Transit Authority
Metropolitan Transit Development Board (San Diego)

Don Larson, Traffic & Scheduling Manager
Mark Huffer, Director, Service Planning & Schedules
Mary Kay Christopher, General Manager, Market Research
Gary Hufstedler, Senior Manager, Service Planning
Michael York, Director of Operations Planning
Tom Kryza, Schedule Analyst
Julie Corman, Supervisor of Schedules
Larry Dougherty
Maureen Trainor, Manager, Service Planning
John McMath, Chief Schedulemaker
Wilford Beal, Planning Chief
Paulette Duve, Manager, Service Planning Section

SANDAG (San Diego Association of Governments): Jeff Martin, Senior Transit Planner
Metropolitan Transportation Authority (Houston): Mark Douglas, Acting Assistant General Manager, Ridership Development
Milwaukee County Transit System: Thomas Labs, Director, Scheduling & Planning
Metropolitan Council Transit Operations (Minneapolis): Scott Peterson; Dennis Tellofsbol
New Jersey Transit Corporation: Kevin Landerkin, Scheduling Manager
New York City Transit Authority: Sharif Abdus-Salaam, Sr. Dir., Traffic Checking Operations
Port Authority of Allegheny County (Pittsburgh): Pete Donner


Regional Transit District (Denver)
Tri-County Metropolitan Transit District of Oregon
Toronto Transit Commission
Washington Metropolitan Area Transit Authority

Supervisor of Service Analysis
Bob Rynerson
Asst. Mgr., Service Planning & Scheds.
Richard L. Gerhart
Director, Op Planning & Scheduling
Peter Janas
Supervisor, Data Collection & Analysis
Millard L. (Butch) Seay
Director, Office of Planning


CUTR
Center for Urban Transportation Research
University of South Florida
College of Engineering, ENB 118
Tampa, Florida 33620-5350
(813) 974-3120
(813) 974-5168

