
Executive Summary

While the Applicant to Matriculant (ATM) report initially provided users with counts at the distinct milestones in the pipeline, from application starts to commits, it has evolved into a much more detailed view. The ATM is now poised to be a central resource for understanding the application pipeline, from the RFIs at the top of the funnel, to the final matriculation of accepted applicants, and everything in between.

In order to ensure that the future development of the ATM aligns with both the overall goals of the school, and those of the individual programs and administrative units, a research project has been started to collect information and ideas from across the organization. This document covers the findings, along with development opportunities that can help improve how information is presented, and guide data-driven decisions whenever possible.

Research Focus

Funnel Progression Rates

  • Submit Rate, RFI App Start Rate
  • Admit Rate, Reject Rate, Commit Rate
  • Fraud Rate, Domestic Rate

Matriculant Yield Rates

  • Admit Yield (Traditional Yield)
  • Total Apps Yield, Submit Yield
  • Commit/Uncommit Yield (Domestic/International)

Potential Next Research Areas

  • Abandoned Applications
  • Waived Deposits/Application Fees
  • International/Domestic Targets

Overview

The purpose of this document is to capture thoughts and notes about how the application pipeline data, primarily surfaced in the Applicant to Matriculant (ATM) report, can be used to inform next steps for programs to meet targets. In addition, the information gathered through this process will be used to inform future development cycles for the ATM.

Historical Rates

The following rates, calculated from historical performance, have been investigated as potentially useful indicators.

Submit Rate (NEW)

The percentage of the total started applications that were submitted.

Observations

  • For most MS programs, the submit rate has been declining over the past four years.
  • There is no discernible pattern in how submit rate impacts the other indicators.
    • For example, terms with a higher submit rate can end up with a lower admit rate than terms with lower submit rates.
    • We also checked whether removing fraud cases made the submit rate any more consistent or useful; it did not change the numbers dramatically.
  • Submit rates are noticeably higher across the board for international applicants.
  • Programs with high fraud rates have an inflated submit rate. Removing fraudulent applications from the pool drops the submit rate considerably.
  • Students that designate themselves as part-time in their application are far less likely to submit their application.
    • While the volume of part-time applications is much smaller, the difference in the submit rate is significant.
    • However, despite being less likely to submit their applications, part-time applicants are far more likely to stay in the funnel and matriculate once they submit.

Opportunities

  • Projections/Actions Adjusted for Student Status
    • Applying a different submit rate based on the student status may give a better indication of the pool of potential submissions in the funnel.
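
A minimal sketch of what a status-adjusted submission projection could look like. The status labels, rates, and counts below are illustrative placeholders, not values from the ATM:

```python
# Hypothetical historical submit rates by student status (illustrative only).
HISTORICAL_SUBMIT_RATE = {"full_time": 0.62, "part_time": 0.41}

def projected_submissions(open_starts_by_status):
    """Estimate remaining submissions by applying a status-specific historical
    submit rate to the started-but-not-yet-submitted applications."""
    return sum(
        count * HISTORICAL_SUBMIT_RATE.get(status, 0.0)
        for status, count in open_starts_by_status.items()
    )

# Example: 120 full-time and 30 part-time starts still open in the funnel.
print(round(projected_submissions({"full_time": 120, "part_time": 30}), 1))  # 86.7
```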

RFI App Start Rate (NEW)

The percentage of the total RFIs that resulted in a started application.

Observations

  • The current data only supports consistent RFI counting from 2020 forward.
    • While there is some consistency, this is only enough data to make broad generalizations about how RFI rates will impact overall performance.
    • There is a clear difference in the rates between domestic and international applicants, but with only 2020 and 2021 (and a portion of 2022), and the known impact of COVID on the 2020 numbers for international applicants, it is too early to look for patterns across this population.
    • We should revisit this when we have complete data from 2022.
  • The number of fraudulent applications that started as an RFI is small.
    • This means that we don’t have an RFI-to-fraud problem, which allows us to rely more on the accuracy of the app start rates from RFIs.
  • The rates for the two years are in the ~20-30% range, and hold fairly steady within the programs.

Opportunities

  • RFI Rate View
    • Add an indicator to the Actions page of the report to show how the program is tracking against historic RFIs, and whether it is within the expected app start rate.
    • Show the program how many app starts have come from RFIs, and give them an indication of how generating more RFIs will impact the overall applicant pool.
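
As a rough illustration of the proposed indicator, the sketch below checks a program's current RFI-to-app-start conversion against the ~20-30% historical band noted above; the default thresholds and the example counts are assumptions, not report logic:

```python
def rfi_app_start_status(rfis, app_starts_from_rfis, low=0.20, high=0.30):
    """Flag whether the current RFI -> app start conversion falls inside the
    historical band observed for the program (band defaults are illustrative)."""
    if rfis == 0:
        return "no RFI volume yet"
    rate = app_starts_from_rfis / rfis
    if rate < low:
        return f"below expected range ({rate:.0%})"
    if rate > high:
        return f"above expected range ({rate:.0%})"
    return f"within expected range ({rate:.0%})"

print(rfi_app_start_status(rfis=500, app_starts_from_rfis=110))  # within expected range (22%)
```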

Admit Rate (Selectivity)

The percentage of applications that were submitted that ended up being admitted to the program.

Observations

  • Admit rates are noticeably higher for domestic applicants across the board, but particularly in the programs that have high concentrations of international matriculants.

Commit Rate

The percentage of applications that were admitted that ended up committing to the program.

Observations

  • For all years except 2020, the commit rate is regularly within 5-10% of other years, sometimes much closer, and tends toward the 50% mark in the aggregate.
    • In 2020, the commit rate is still consistent across the programs, but is closer to 30-35%, particularly for the largest programs.
  • Executive programs have a consistently high commit rate (80%+).
    • This is a likely indicator of how serious applicants are about executive offerings.

Opportunities

  • Executive Programs
    • Packaging offerings in a condensed “executive” format could be a way to attract a more serious audience to flagging programs, particularly in the non-degree segment, where changes can be implemented more rapidly and with fewer hurdles.
    • While considerably more complicated to execute, offering more master’s degrees in an executive format could be an opportunity to grow this clearly motivated audience.

Reject Rate (NEW)

The percentage of the total submitted applications that were rejected.

Fraud Rate (NEW)

This is a subset of the overall rejection rate, with a focus on the submitted applications that were rejected due to fraud.

Observations

  • The reject for fraud decision code became widely used in Spring 2020, so the best numbers are from that point forward.
  • The fraud rate is considerably higher in the Fall term for impacted programs (roughly double that of other terms).
  • ERM, ACTU, and APAN appear to be most impacted with 2021 fraud rates of over 40%, though there are many programs with rates greater than 20%.

Opportunities

  • Projections/Actions Adjusted for Fraud
    • Because applications are rejected as soon as fraud is detected, the ATM can calculate the current fraud rate for any given program.
      • With this information, the Actions panel could let the viewer know at what rate they are tracking, and alter the suggested actions based on the number of fraudulent applications that history suggests they can expect to receive.
    • In addition, the calculated projection could factor in an expected fraud rate to change the predicted landing point for matriculants.
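
A minimal sketch of how an expected fraud rate might be folded into the projection; the rates and pipeline figures are placeholders rather than ATM values:

```python
def fraud_adjusted_projection(submitted_apps, expected_fraud_rate, submit_yield):
    """Discount the submitted pool by the fraud rate expected from history,
    then apply a historical submit-to-matriculant yield."""
    legitimate_apps = submitted_apps * (1 - expected_fraud_rate)
    return legitimate_apps * submit_yield

# Example: 400 submits, 25% expected fraud, 30% historical submit yield.
print(fraud_adjusted_projection(400, 0.25, 0.30))  # ~90 projected matriculants
```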

Domestic Rate (NEW)

The percentage of total applicants that are domestic.

Opportunities

  • This rate will let the projections be split according to each program’s unique mix of domestic and international applicants, allowing for more granular predictions.
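
For illustration, a split projection could be as simple as the sketch below, where the domestic rate and the per-population yields are hypothetical inputs:

```python
def split_projection(total_apps, domestic_rate, domestic_yield, international_yield):
    """Project matriculants separately for the domestic and international
    populations, then combine the two (all inputs are illustrative)."""
    domestic_apps = total_apps * domestic_rate
    international_apps = total_apps - domestic_apps
    return domestic_apps * domestic_yield + international_apps * international_yield

print(split_projection(total_apps=1000, domestic_rate=0.35,
                       domestic_yield=0.22, international_yield=0.12))  # ~155 projected matriculants
```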

Total Apps Yield (NEW)

The percentage of total started applications that resulted in new matriculants.

Observations

  • There is a clear pattern in the aggregate across the programs, despite fluctuations of 5-10% YOY, which indicates that a 3-5 year average would probably give us a good benchmark for what to expect in any given year.
  • This indicator is also very different depending on the applicant’s region and would benefit from a split across domestic and international populations.
  • Part-time applicants tend to convert at a noticeably higher rate than full-time applicants, though with a smaller number of applications. Splitting them out should get us closer to a more accurate prediction.

Opportunities

  • Total Application “Target”
    • Given the somewhat consistent yield for total apps, a 3-year average yield rate applied to the target enrollment goal could be a good actionable indicator to include on the report.
      • This would be more accurate if the application pool was sliced by region and student status, and the target set for each audience segment.
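
The proposed target could be derived roughly as in the sketch below; the enrollment goal and yield history are placeholder values, and in practice the pool would first be segmented by region and student status as noted above:

```python
def total_app_target(enrollment_goal, recent_total_app_yields):
    """Back into the number of started applications needed to reach the
    enrollment goal, using an average of recent total-app yields."""
    avg_yield = sum(recent_total_app_yields) / len(recent_total_app_yields)
    return enrollment_goal / avg_yield

# Example: a goal of 90 matriculants and three years of total-app yield history.
print(round(total_app_target(90, [0.14, 0.17, 0.16])))  # ~574 app starts needed
```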

Submit Yield (NEW)

The percentage of total submitted applications that resulted in new matriculants.

Observations

  • The submit yield rate fluctuates greatly from year to year, which makes this a difficult indicator to use as a measure toward targets. The pattern tends to follow the admit rate (selectivity), which is adjusted to move closer to targets when the applicant pool is smaller than expected.
  • A 3-5 year average could give programs a sense of how many applications will need to be submitted if the pipeline is short; however, this indicator could work against programs that see plenty of applications in the pipeline based on historical yield.
    • Submit yield is not consistent enough to rely on as an indicator of final performance, and should only be used to flag a potential problem in the pipeline, rather than as a harbinger of success.

Opportunities

  • Submitted Applications Gauge
    • Showing an indicator of the number of applications that are necessary to achieve the target enrollment would be a helpful way to highlight if the pipeline is short.
    • It is also important to make it crystal clear that a full gauge does not mean the goal will be met, just that there are enough applications if history holds true.
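
A gauge of this kind could be as simple as the sketch below, which reports progress toward the number of submissions implied by the target (the yield and target values are illustrative). As noted, a full gauge only means the pipeline is large enough if history holds:

```python
def submission_gauge(current_submits, enrollment_target, avg_submit_yield):
    """Return how full the 'submissions needed' gauge is, capped at 100%.
    avg_submit_yield would be a multi-year historical average (illustrative)."""
    needed = enrollment_target / avg_submit_yield
    return min(current_submits / needed, 1.0)

print(f"{submission_gauge(current_submits=260, enrollment_target=90, avg_submit_yield=0.25):.0%}")  # 72%
```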

Admit Yield (Traditional Yield)

The percentage of admitted applicants that became new matriculants.

Observations

  • Everything that applies to the submitted apps yield holds true for admit yield, though the numbers are higher and more stable for admits than for submits.
  • The relative stability of the admit yield across the programs makes this a good candidate for tracking how likely the target will be met late in the admissions process.
    • Note that the international population is much more volatile, and historic indicators of admit yield will not be as effective for predicting behavior.
    • The most recent year in history is likely the best indicator of the current term expected admit yield for international applicants, while a 3-5 year aggregate is likely best for domestic.

Opportunities

  • Admit Yield International/Domestic Split “Target”
    • Separating the international and domestic populations for this particular indicator would be very helpful when gauging the health of the funnel.
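
One way to sketch the split tracking, following the observation that domestic admits are best compared against a multi-year average while international admits track better against the most recent year; the baselines, tolerance, and current rates below are hypothetical:

```python
def admit_yield_check(current_yield, baseline_yield, tolerance=0.03):
    """Compare the current admit yield to its population-specific baseline,
    within a simple tolerance band (all figures are illustrative)."""
    if current_yield < baseline_yield - tolerance:
        return "tracking below baseline"
    if current_yield > baseline_yield + tolerance:
        return "tracking above baseline"
    return "on pace"

print("Domestic:", admit_yield_check(current_yield=0.46, baseline_yield=0.50))       # tracking below baseline
print("International:", admit_yield_check(current_yield=0.31, baseline_yield=0.29))  # on pace
```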

Commit Yield Rate - Domestic/International (NEW)

The percentage of committed applicants that became new matriculants.

Observations

  • Because the yield rates can vary dramatically depending on domestic or international status, this rate indicator is further segmented based on those two populations.

Uncommitted Yield Rate - Domestic/International (NEW)

The percentage of admitted applicants that did not respond with either a commit or decline but still became new matriculants.

Observations

  • Because the yield rates can vary dramatically depending on domestic or international status, this rate indicator is further segmented based on those two populations.

Other Indicators to Investigate

  • Abandoned Applications
    • Submitted applications that are never completed.
    • Abandon Rate: Percentage of all apps that are never completed.
    • How does this track over time? Any interesting patterns?
  • Waived Deposits
    • How does waiving the deposit impact the matriculation rates?
    • Do we see higher or lower matriculation rates for applicants that are not required to pay a deposit before attending?
  • Waived Application Fees
    • How does waiving the application fee affect starts and submits?
    • Is there a correlation to fraud cases when there is no actual fee associated with submission?
    • Do we see higher or lower matriculation rates, and is it possible to factor in fee waivers when analyzing the strength of the pipeline within the projection formula?
  • Decline Rate
    • Determine if there is any pattern to the rate of declines across programs, and if so, include it in the projection calculations.
  • Student Status
    • As noted, the part-time designation can be an indicator of a lower interest level from the applicant. Viewing key indicators through the student status lens could yield interesting insights.
    • One notable exception to the part-time designation appears to be within online-specific programs. Taking a closer look at how indicators, including student status, shift based on the modality of the program is another untapped area for analysis.
  • Deferral Matriculation
    • It is apparent that most of the applicants that deferred over the last two years are not behaving like historic deferrals.
    • A deeper analysis of the deferrals from 2019 to date, showing how many have become matriculants, would be useful on two fronts:
      • Understanding how many potential deferrals are still out there, and the likelihood of closing the deal with them.
      • Using the deferral matriculation rates to adjust the numbers for the years that were impacted by the pandemic. This could give us a better sense of whether behavior has truly changed, or if the behavior was the same, though postponed.
  • International/Domestic Targets
    • While our enrollment projection model does not break down by region, we could use history to determine how much of the target should come from domestic vs. international applicants.
      • Exposing this in the ATM would make it clearer whether the admission numbers are or are not coming from the populations expected by the program.
  • RFI Matriculant Yield
    • This is a complicated change to the data model for the current research, but it would be interesting to know how many RFIs result in a matriculant, and to see if that is consistent over time.
      • This would allow us to measure the effectiveness of the RFI approach, and perhaps to see whether there are cases where the RFI is more effective than others, so the information being shared can be altered to improve the yield rate.
        • Watching this rate as the RFI strategy changes could show if the new approach is working.

Other Thoughts

  • New Prediction Basis Scenarios
    • Some programs were dramatically impacted by the pandemic, while others remained stable.
      • This was most evident in those programs that are heavily dependent on international matriculants.
    • It would appear that there are three potential models for prediction that could be applied to the admissions funnel (a selection sketch follows at the end of this section).
      • Traditional: Previous year indicators.
        • How is the funnel tracking against PY metrics?
        • What are the expected final numbers based on the current pipeline when compared to the PY?
        • This could also be used to track against any given year, instead of the PY.
      • Pre-COVID: Use only 2019 calendar year indicators.
        • How is the funnel tracking against Spring-Fall 2019 metrics?
        • What are the expected final numbers based on the current pipeline when compared to 2019?
      • New Normal: 2021 or later.
        • How is the funnel tracking against Fall 2021+ metrics?
        • What are the expected final numbers based on the current pipeline when compared to Fall 2021 (or later)?
        • Looking more closely at Fall 2021 could be an opportunity to better understand the long-term impact, if any, on programs based on how the world has changed over the past two years.
          • Are the rates and yields returning to history, or moving in different directions?
          • Which programs have remained stable? Which have seen unexpected growth, and which are losing ground?
  • Is there a correlation between RFIs and Started apps?
    • Keeping in mind that correlation is not causation, a quick review shows a lot of similarity between these two indicators; there could be something there.
      • Similarity alone is not enough, even if it plays out. Look at how RFIs are related to outcomes, particularly matriculation.
  • Now that there is a more robust dataset for events, it would be interesting to create a blended view of the admissions funnel data and the recruitment data.
    • With the full funnel plotted out across the timeline, how do events impact those numbers on or near the delivery date?
  • Just curious: How much money do we make from fees for the fraud cases?
    • Is there any link to waived fees, or are we actually getting a decent sum by allowing these cases to continue instead of trying to crack down?
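
A minimal sketch of how the three prediction bases described under “New Prediction Basis Scenarios” might be selected. The term labels and the basis-to-terms mapping are assumptions about how the comparison baseline could be organized, not the ATM’s current model:

```python
# Hypothetical mapping from prediction basis to the historical terms used as
# the comparison baseline (term labels are illustrative).
PREDICTION_BASES = {
    "traditional": ["Spring 2022", "Fall 2022"],              # previous-year terms
    "pre_covid": ["Spring 2019", "Fall 2019"],                # 2019 calendar year only
    "new_normal": ["Fall 2021", "Spring 2022", "Fall 2022"],  # Fall 2021 and later
}

def baseline_terms(basis):
    """Return the historical terms a current funnel should be compared against."""
    return PREDICTION_BASES[basis]

print(baseline_terms("pre_covid"))  # ['Spring 2019', 'Fall 2019']
```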