
The challenge for many districts isn’t tutoring itself. It’s how tutoring is placed within the MTSS framework. The tutoring is happening, but it isn’t tethered to anything: wrong students, inconsistent dosage, no data loop back to the classroom. The result looks like intervention support but doesn’t function like it.

Used correctly, tutoring is one of the most scalable and evidence-supported tools you have for Tier 2 and Tier 3 delivery. This guide walks through how to structure it: which tier gets which model, what dosage looks like, how data integration actually works, and how to build the teacher-tutor handoff so sessions reinforce instruction instead of running parallel to it.

Matching the Model to the Tier

The tier assignment should drive the session structure, not the other way around.

Tier 2 targets students with identified skill gaps who need more than core instruction but haven’t yet crossed the intensive intervention threshold. Small group tutoring (three to five students) fits this tier well. Students share a skill gap cluster, sessions stay focused, and group dynamics support engagement without diluting rigor. Benchmark data showing students performing roughly between the 25th and 40th percentile often signals the need for Tier 2 support. These students need more repetitions and more feedback loops, not a completely different instructional approach.

Tier 3 is for students with persistent, significant gaps who haven’t responded adequately to Tier 2 supports. One-on-one or very small group (two students maximum) tutoring fits here. The diagnostic profile is more complex, skill gaps tend to be multi-layered, and instruction needs to adapt session to session. Students showing growth well below benchmark despite prior Tier 2 support, or students with IEP-aligned intervention goals, are candidates for Tier 3 placement.

Tier placement should be driven by student data, not by scheduling convenience.
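As a concrete illustration, the data-driven placement logic above can be expressed as a simple rule. This is a sketch only: the function name is hypothetical, and the percentile cutoffs are the rough signals described above, not a substitute for your district's own placement criteria.

```python
from typing import Optional

def suggest_tier(percentile: float, responded_to_tier2: Optional[bool] = None) -> str:
    """Illustrative placement rule: benchmark data drives the suggestion.

    percentile: national percentile from the district's benchmark screener.
    responded_to_tier2: None if the student has not yet received Tier 2 support.
    """
    if percentile > 40:
        return "Tier 1 (core instruction)"
    if 25 <= percentile <= 40:
        # Roughly the 25th-40th percentile band often signals Tier 2 need.
        return "Tier 2 (small group, 3-5 students)"
    # Below the 25th percentile: Tier 3 requires documented non-response
    # to Tier 2, not initial severity alone.
    if responded_to_tier2 is False:
        return "Tier 3 (1:1 or pairs)"
    return "Tier 2 first, with close progress monitoring"
```

The key design point is the last branch: a low score alone routes to Tier 2 with monitoring, because Tier 3 placement reflects documented non-response, not severity by itself.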

Dosage: What Actually Moves the Needle

High-impact tutoring research consistently shows stronger outcomes when tutoring occurs three or more times per week in sessions of 30-45 minutes. Below that frequency, tutoring tends to function more like enrichment than targeted intervention.

That distinction has direct implications for how you build your master schedule. Tier 2 students need three consistent weekly sessions that land in the same intervention block, not whenever a student gets pulled or a period opens up. Tier 3 students may need that same frequency with longer sessions or additional touchpoints, depending on their profile.

The goal is fidelity, not flexibility. Dosage drift is one of the most common reasons tutoring programs fall short of expected outcomes. Sessions that start late some weeks but not others, tutors swapped mid-semester, students who miss without makeup time: each of these erodes results faster than most districts expect. Structural consistency matters as much as instructional quality.
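If your tutoring provider or SIS can export session logs, dosage drift is easy to audit weekly. The sketch below assumes a minimal log format (week label plus session minutes) that is illustrative, not any particular vendor's export; the thresholds are the three-sessions-per-week, 30-45 minute figures cited above.

```python
from collections import defaultdict

def weekly_dosage_flags(sessions, min_per_week=3, min_minutes=30, max_minutes=45):
    """Flag weeks that drift below the high-impact dosage threshold.

    sessions: list of (iso_week, minutes) tuples, e.g. ("2025-W03", 35).
    Returns {week: [issue, ...]} for weeks that fall short; empty dict if none.
    """
    by_week = defaultdict(list)
    for week, minutes in sessions:
        by_week[week].append(minutes)
    flags = {}
    for week, durations in by_week.items():
        issues = []
        if len(durations) < min_per_week:
            issues.append(f"only {len(durations)} session(s)")
        off_length = [m for m in durations if not (min_minutes <= m <= max_minutes)]
        if off_length:
            issues.append(f"{len(off_length)} session(s) outside {min_minutes}-{max_minutes} min")
        if issues:
            flags[week] = issues
    return flags
```

A week with three in-range sessions produces no flags; a week with two sessions, or sessions cut short, shows up immediately, which is exactly the drift that erodes outcomes.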

Data Integration Without Another Silo

The most common operational failure in school-based tutoring programs is the data silo problem. Weekly session summaries exist somewhere, benchmark data exists somewhere else, and classroom teachers never see either. The intervention loop never closes.

The fix requires intentional design from the start, but it’s not complicated. Session reports need to map directly to the metrics your benchmark system already tracks. If you’re using Renaissance Star, your tutoring program’s progress monitoring should reference the same domains: reading comprehension, phonics, math operations. Not parallel categories invented by the tutoring program. When a tutor submits a weekly summary, the skills referenced should be legible to any teacher who reads the student’s intervention file.
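One lightweight way to enforce that alignment is a shared vocabulary: a fixed mapping from session skill tags to the benchmark domains teachers already see. The domain names below mirror the categories mentioned above; the skill tags and function name are illustrative, not part of any benchmark system.

```python
# Shared vocabulary: session skill tags -> existing benchmark domains.
# Tags are illustrative; domains match what the benchmark system already tracks.
SKILL_TO_DOMAIN = {
    "decoding_cvc_words": "phonics",
    "main_idea_and_details": "reading comprehension",
    "multi_digit_multiplication": "math operations",
}

def validate_session_summary(skill_tags):
    """Reject tags that don't map to a domain a classroom teacher will recognize."""
    unknown = [t for t in skill_tags if t not in SKILL_TO_DOMAIN]
    if unknown:
        raise ValueError(f"Unmapped skill tags (add to shared vocabulary): {unknown}")
    return sorted({SKILL_TO_DOMAIN[t] for t in skill_tags})
```

Rejecting unmapped tags at submission time is the point: it prevents the tutoring program from inventing parallel categories that never line up with the intervention file.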

This isn’t about generating more data. It’s about generating data in the format your existing systems can absorb. Independent ESSA Level 2 research shows measurable gains on Renaissance Star Reading (33% more growth, Grades 1-3) and Renaissance Star Math (44% more growth, Grades 2-6). That alignment to the same benchmark domains keeps tutoring progress visible inside your existing reporting structure, rather than sitting in a separate system nobody opens.

The Teacher-Tutor Handoff

The handoff between classroom teacher and tutor is where most programs lose coherence. The tutor works on a skill the teacher hasn’t introduced yet. Or revisits a concept the class moved past. Or reinforces a strategy that conflicts with the classroom approach to the same standard. None of this is malicious. It’s a coordination failure.

The solution is a structured alignment protocol at the start of the program and a lightweight check-in cadence from there. Before a tutor begins working with a student, they need three things: the current pacing guide position, the priority standards for the next four to six weeks, and any student-specific flags from the classroom teacher (accommodation requirements, engagement patterns, strategies already tried). That’s a 15-minute handoff, not a formal meeting.

A brief weekly touchpoint keeps the alignment intact from there. It doesn’t need to be synchronous. A shared notes document or a session summary routed to the classroom teacher produces the same result. Tutoring should function as a reinforcement system for core instruction, not a separate curriculum running alongside it.

Referral Decision Framework

Before a student is placed in Tier 2 or Tier 3 tutoring, your referral process should answer three questions:

  1. What does the benchmark data show? Tier 2 referrals should be data-initiated, not teacher-nominated alone.
  2. Has the student received consistent Tier 1 core instruction? Tutoring doesn’t compensate for core instructional gaps at the classroom level.
  3. What is the specific skill target? Vague referrals (“needs help in math”) produce unfocused sessions. The referral should name the standard cluster or skill domain.

For Tier 3, add a fourth question: what Tier 2 supports has this student already received, and what did the data show? Tier 3 placement should reflect documented non-response to prior supports, not initial severity alone.

With those four elements in place, placement is defensible, instruction is focused, and progress monitoring has a clear baseline to measure against.
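For teams that track referrals in a spreadsheet or SIS export, the four questions above can be encoded as a completeness check before placement. The field names here are a hypothetical schema, not a real system's; the logic is simply the checklist above.

```python
def referral_is_complete(referral: dict):
    """Check a referral record against the four questions.

    Hypothetical schema keys: 'benchmark_percentile', 'tier1_consistent',
    'skill_target', 'requested_tier', and (for Tier 3) 'tier2_history'.
    Returns (is_complete, list_of_missing_elements).
    """
    missing = []
    if referral.get("benchmark_percentile") is None:
        missing.append("benchmark data")
    if not referral.get("tier1_consistent"):
        missing.append("confirmation of consistent Tier 1 core instruction")
    if not referral.get("skill_target"):
        missing.append("specific skill target (standard cluster or domain)")
    if referral.get("requested_tier") == 3 and not referral.get("tier2_history"):
        missing.append("documented Tier 2 supports and response data")
    return (len(missing) == 0, missing)
```

A referral that fails the check goes back to the referring teacher with the missing elements named, which keeps vague "needs help in math" referrals out of the intervention queue.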

If you want to see how this maps to your current MTSS framework, we’re glad to walk through it with your team. Request a sample MTSS alignment framework or schedule time to review how the model fits your existing intervention structure.