Ali Gueye Neveu · Case Study · 2026

Scaling
Product Ops
at Preply.

Supporting a scaling Product organisation presented two core challenges: adapting the product planning cycle and levelling up AI adoption across the org.

Starting Point

Where Preply stands today.

Understanding the current operating model is the foundation for any meaningful change. Here is the context that shapes both answers.

Current Planning Rhythm

1. Preply Strategy — annual
2. CPO Guidance — quarterly
3. Squad Roadmaps — OKRs + features
4. Alignment Day — full-day share-out
5. Execution — 12 weeks
💡

Key Assumptions

I assume the 12-month squad growth (28 → ~40) is confirmed and funded, and that headcount will not exceed forecasts. I also assume the CPO and Tribe Leads are open to process iteration, and that there is no existing program management function — meaning this is a greenfield opportunity. For the AI question, I assume PMs have access to standard AI tools (ChatGPT, Gemini, Copilot) but that no structured adoption programme exists yet.

Question 1 · ~20 minutes

Scaling the Product
Planning Cycle.

From 28 to ~40 squads in 12 months. The current planning rhythm was designed for a smaller organisation. Here is how to evolve it — without breaking what works.

The scaling challenge

Preply's product management planning cycle

Proposed new rhythm

1. CPO Guidance — quarterly kick-off
2. Tribe Planning Sessions — NEW, 2h per tribe
3. Squad Roadmaps + OKRs — with quality gate
4. Dependency Register — NEW, async logging
5. Tribe Showcases — NEW, 90 min per tribe
6. Cross-Tribe Summit — NEW, 3h focused
7. Execution — 12 weeks
8. Mid-Quarter Health Check — NEW, async at week 6

Risk Assessment

Assessing potential gaps in the existing model

Whilst right for the company at this stage, planning should evolve to meet the needs of a larger team. Let's assess the current planning cycle and flag opportunities for improvement.

1

Preply Strategic Planning Cycle

High / P0

Leadership review cadences

A wider scope and more features to ship increase the potential for friction, delays and roadmap conflicts. The current cadence might not suit agile, dynamic roadmaps, where regular leadership input will be critical.

2

Product & Tech Planning Cycle

High / P0

Alignment Day bottleneck

A single full-day session for 40 squads creates a coordination ceiling. With 12 more squads, the current format becomes logistically unmanageable — dependency identification requires more structured pre-work. And as the final roadmap share-out, Alignment Day comes far too late in the cycle to change course.

High / P0

CPO guidance as single point of failure

One quarterly memo from the CPO cannot adequately address the nuanced needs of 4 tribes with distinct missions. Tribe-level interpretation gaps will widen as the organisation grows.

Medium / P1

Team level OKRs and laddering up

Two issues: (1) Squad OKRs should ladder up into a company- or org-wide set of OKRs; without that link, it is much harder to ensure that what teams goal on matters and fits the wider strategy. (2) Feature-based OKRs ('launch X feature') are a proxy for impact, not impact itself. At 40 squads, the proportion of low-quality OKRs will compound, making prioritisation harder.

Medium / P1

Dependency management at scale

Cross-tribe dependencies currently surface only at Alignment Day — too late to resolve without disrupting roadmaps. 40 squads across 4 tribes will generate significantly more cross-cutting work.

Low / P2

No mid-quarter health check

The current rhythm has no structured mechanism to surface blockers, re-prioritise, or course-correct between quarterly planning and Alignment Day. This creates a 12-week blind spot.

⚡

In Summary

The current planning process would struggle to scale in two key areas: staying aligned with strategic direction, and coordinating and prioritising efforts across a growing org.

Proposed Changes

Five specific interventions.

Each proposal targets a specific failure mode. Together they create a planning rhythm that scales to 40 squads and beyond.

01

Introduce Tribe-Level Planning Layers

Add a Tribe Planning session between CPO guidance and squad-level planning. Each Tribe Lead facilitates a 2-hour working session with their PMs to interpret CPO guidance, identify tribe-level priorities, and surface cross-squad dependencies before squads begin building roadmaps.

Impact: Reduces CPO bottleneck, improves guidance quality at squad level · Effort: Low
02

Split Alignment Day into Tribe Showcases + Cross-Tribe Summit

Replace the single Alignment Day with two events: (1) Tribe Showcases — 90-minute sessions per tribe for internal alignment and dependency mapping; (2) a 3-hour Cross-Tribe Summit for the top 10–15 cross-cutting dependencies and strategic bets. This scales to 40 squads without losing depth.

Impact: Eliminates the coordination ceiling, improves dependency resolution · Effort: Medium
03

Mandate Outcome-Based OKRs with a Quality Gate

Introduce a simple OKR quality rubric: every OKR must link to a measurable business outcome (not just a feature launch) and pass a 'so what?' test. Tribe Leads review OKRs before they are finalised. Feature-based OKRs are permitted only with an explicit rationale.

Impact: Improves planning quality for PMs and leadership visibility · Effort: Low
04

Add a Mid-Quarter Health Check

Introduce a lightweight 30-minute async update at week 6 of each quarter: each squad posts a 3-line status (on track / at risk / blocked) in a shared doc. Tribe Leads triage blockers. CPO reviews a one-page summary. No meetings unless escalation is needed.

Impact: Reduces end-of-quarter surprises, enables faster course correction · Effort: Very Low
05

Create a Dependency Register

A lightweight shared artefact (Notion or Jira) where squads log cross-squad dependencies during planning. Tribe Leads own resolution. Reviewed at Tribe Showcases and Cross-Tribe Summit. Eliminates the 'discovered too late' problem.

Impact: Reduces delivery risk from unresolved cross-tribe dependencies · Effort: Low
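The register needs only a handful of fields to be useful, whether it lives in Notion, Jira, or a spreadsheet. A minimal sketch of the data model and the summit triage it enables — the squad names, tribes, and field choices below are invented for illustration, not Preply's actual structure:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    OPEN = "open"
    AT_RISK = "at_risk"
    RESOLVED = "resolved"

@dataclass
class Dependency:
    requesting_squad: str
    providing_squad: str
    description: str
    needed_by_week: int  # week of the 12-week quarter
    status: Status = Status.OPEN

def is_cross_tribe(dep: Dependency, tribe_of: dict) -> bool:
    """A dependency is cross-tribe when the two squads sit in different tribes."""
    return tribe_of[dep.requesting_squad] != tribe_of[dep.providing_squad]

# Illustrative register entries (hypothetical squads and tribes)
tribe_of = {"search": "Discovery", "tutors": "Discovery", "payments": "Monetisation"}
register = [
    Dependency("search", "tutors", "Shared ranking service", needed_by_week=4),
    Dependency("search", "payments", "Checkout event schema", needed_by_week=6),
]

# Cross-tribe items feed the Cross-Tribe Summit agenda, most urgent first
summit_agenda = sorted(
    (d for d in register if is_cross_tribe(d, tribe_of)),
    key=lambda d: d.needed_by_week,
)
```

The point of the sketch is that a cross-tribe filter plus a "needed by" date is enough to auto-generate the summit agenda, so nothing is "discovered too late".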
Stakeholder Engagement

Who needs to be in the room.

Stakeholder · How I engage them · Phase
CPO · Sponsor and decision-maker. Co-design the new rhythm, approve changes, model the behaviour. · Design & Launch
Tribe Leads (×4) · Core co-designers. Run Tribe Planning sessions. Own dependency resolution. Must feel ownership, not compliance. · Design & Ongoing
PMs (×28→40) · Primary users of the new process. Involve 2–3 senior PMs as design partners. Run pilots with one tribe first. · Pilot & Rollout
Engineering Leadership · Align on technical requirements, tech stack dependencies and cross-squad commitments. Include in Alignment Day redesign. · Design & Launch
Preply Leadership (CEO, COO) · Inform on changes. Show how the new rhythm improves visibility and reduces strategic risk. · Launch
Expected Improvements

Better planning for everyone.

Product Managers

Today

Vague CPO guidance → interpret alone → OKR quality varies

With new rhythm

Tribe-level context → clear priorities → higher OKR quality → less rework

Product Leadership

Today

Limited visibility between quarters → surprises at Alignment Day

With new rhythm

Mid-quarter health check → dependency register → proactive risk management

Preply Leadership

Today

Quarterly planning feels disconnected from annual strategy

With new rhythm

Tribe Planning sessions create explicit links between annual goals and quarterly OKRs

Implementation

How I would roll this out.

Change management is as important as the design. I would pilot before scaling, involve the people most affected, and make the new process feel like an upgrade โ€” not an imposition.

Weeks 1โ€“2

Design Sprint

Workshop with CPO + Tribe Leads to co-design the new rhythm. Document the new process in a single-page playbook.

Weeks 3โ€“6

Pilot with One Tribe

Run the new process with the Growth tribe for Q3 planning. Gather feedback from PMs and EMs. Iterate on the playbook.

Weeks 7โ€“8

Retrospective & Refinement

Structured retro with pilot tribe. Identify what worked, what didn't. Finalise the playbook for full rollout.

Weeks 9โ€“12

Full Rollout

Roll out to all 4 tribes for Q4 planning. CPO communicates the change at an all-hands. Tribe Leads facilitate their first sessions.

Quarter 2+

Embed & Iterate

Quarterly retrospective on the planning process itself. Adjust cadence and artefacts as the org continues to scale.

Change Management Principles

I would frame the new rhythm as "less ceremony, more clarity" โ€” not more process. The pilot tribe becomes internal advocates. I would create a simple one-page playbook that any PM can reference. Tribe Leads get a facilitation guide. The CPO publicly endorses the change at the first Alignment Day using the new format.

Measuring Success

How we know it's working.

Planning Quality
OKR quality score
>80% outcome-based OKRs within 2 quarters
Dependency Mgmt
Dependencies identified pre-Alignment Day
>70% of cross-tribe deps logged before summit
PM Experience
PM satisfaction with planning process
NPS >30 within 2 quarters of rollout
Leadership Visibility
Surprises at Alignment Day
Reduce unresolved dependencies by 50%
Efficiency
Time in planning ceremonies
No increase despite 40% more squads
Question 2 · ~20 minutes

Levelling Up
AI Adoption.

From AI champions to AI-fluent teams. A practical plan to close the adoption gap, embed AI into PM workflows, and build a culture of continuous learning.

Building AI-native Product teams
Objective

Why this matters.

The objective is to move from a bimodal distribution — a few AI champions and many non-users — to a baseline of AI fluency across all 28 (soon 40) PMs. This is not about replacing PM judgment; it is about freeing PMs from low-value tasks so they can spend more time on the work only they can do: customer understanding, strategic thinking, and cross-functional alignment.

At Preply specifically, AI fluency in the PM team directly supports the company's own AI-first product strategy. PMs who understand AI tools are better positioned to build AI-powered features and to evaluate AI vendor claims critically.

Benefits

  • ✓ 2–4 hours/week saved per PM on routine writing and research tasks
  • ✓ Faster discovery cycles — AI-assisted synthesis reduces time from interview to insight
  • ✓ Higher OKR quality — AI as a structured thinking partner for drafting and critique
  • ✓ Competitive advantage — AI-fluent PMs build better AI products
  • ✓ Talent retention — PMs want to work with modern tools

Risks (headline)

  • ⚠ AI outputs replacing PM thinking rather than augmenting it
  • ⚠ Hallucinations eroding trust if not managed carefully
  • ⚠ Tool fragmentation without a clear standard
The Plan

Five steps to AI fluency.

A structured, time-bound programme that starts with diagnosis, builds skills through practice, embeds AI into daily workflows, and sustains the change as AI continues to evolve.

Diagnose
Month 1โ€“2

Baseline & Champions

Run a 10-minute survey to map current AI usage across all 28 PMs. Identify 3–4 'AI champions' who are already getting value. Conduct 1:1s with low adopters to understand barriers (time, trust, skill, tool access). Publish the baseline — transparency creates accountability.

Build Skills
Month 2โ€“3

Structured Learning Sprints

Run 4 fortnightly 90-minute 'AI Labs' — hands-on sessions where PMs practise AI on real work (not toy examples). Topics: prompt engineering for PM tasks, AI-assisted discovery, AI for writing, AI for data. Champions co-facilitate. Sessions are recorded for async access.

Embed
Month 3โ€“4

Embed in the Workflow

Introduce 'AI-assisted' as a standard step in key PM workflows: PRD writing, user research synthesis, OKR drafting. Create a shared prompt library in Notion — PMs contribute their best prompts. Add a 5-minute 'AI win of the week' slot to the existing PM all-hands.

Measure
Month 4โ€“6

Measure & Iterate

Re-run the baseline survey. Track time saved per PM per week (self-reported). Identify the top 3 use cases delivering the most value. Double down on those. Retire sessions that aren't landing. Publish a quarterly 'AI at Preply PM' report to leadership.

Sustain
Ongoing

Stay Current

Appoint a rotating 'AI Scout' role (1 PM per quarter) responsible for tracking new AI tools and reporting back. Monthly 15-minute 'AI news' slot in PM all-hands. Quarterly review of the prompt library and workflow integrations. The programme must adapt as AI evolves.

Where AI Helps Most

Generative AI for PMs: the highest-value use cases.

🔍

Discovery & Research

  • → Synthesise user interview transcripts into themes
  • → Generate competitive landscape summaries from public sources
  • → Draft user personas from research data
⚖️

Prioritisation & Strategy

  • → Structure RICE/ICE scoring frameworks and run sensitivity analysis
  • → Generate 'steel man' arguments for competing roadmap options
  • → Summarise stakeholder feedback into decision memos
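AI-assisted prioritisation stays honest when the arithmetic behind it is explicit and checkable. A minimal sketch of RICE scoring with a crude sensitivity check; the feature names and scores are invented for illustration, not Preply data:

```python
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE score = (reach x impact x confidence) / effort."""
    return reach * impact * confidence / effort

# Hypothetical roadmap options: reach (users/quarter), impact (0.25-3 scale),
# confidence (0-1), effort (person-months)
options = {
    "tutor_matching_v2": rice(8000, 2.0, 0.8, 4),
    "group_classes":     rice(3000, 3.0, 0.5, 6),
    "profile_revamp":    rice(12000, 0.5, 0.9, 2),
}
ranked = sorted(options, key=options.get, reverse=True)

# Sensitivity check: does the ranking flip if confidence in the
# top option was overestimated by half?
shaken = dict(options, tutor_matching_v2=rice(8000, 2.0, 0.4, 4))
reranked = sorted(shaken, key=shaken.get, reverse=True)
```

When the ranking flips under a plausible confidence change, that is the signal to invest in discovery before committing the roadmap, rather than to trust the point estimate.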
โœ๏ธ

Writing & Communication

  • โ†’Draft PRDs, one-pagers, and briefing documents
  • โ†’Generate meeting summaries and action items
  • โ†’Create OKR drafts from strategy documents
📈

Data & Metrics

  • → Interpret A/B test results and surface anomalies
  • → Generate SQL queries for self-serve analytics
  • → Create data storytelling narratives from dashboards
Measuring Success

Short and long-term signals.

Short-term (0–6 months)

AI tool adoption rate
>80% of PMs using AI weekly within 6 months
AI Lab attendance
>85% attendance across 4 sessions
Prompt library contributions
>50 prompts contributed in first quarter
PM satisfaction with AI tools (survey)
NPS >20 within 6 months

Long-term (6–12 months)

Time saved on routine tasks
>2 hours/week per PM by month 12
OKR quality score
Measurable improvement linked to AI-assisted drafting
Discovery cycle time
Reduction in time from research to insight
PM confidence in AI (survey)
>70% reporting 'confident' or 'very confident'
Risk Assessment

Where this could go wrong.

Honest risk assessment is a sign of strategic maturity. Here are the most likely failure modes and how I would mitigate them.

High

AI outputs replace thinking, not augment it

Mitigation: Frame AI as a 'first draft' tool, not a decision-maker. Require PMs to critique and edit AI outputs. Include this in the AI Lab curriculum.

Medium

Hallucinations erode trust

Mitigation: Teach PMs to verify AI outputs against primary sources. Create a 'trust but verify' checklist for common use cases.

Medium

Tool fragmentation (everyone uses different tools)

Mitigation: Agree on 2โ€“3 approved tools as the standard. Allow experimentation but centralise the prompt library to create shared value.

Medium

Champions burn out from carrying the programme

Mitigation: Rotate the AI Scout role. Recognise champions publicly. Ensure the programme is self-sustaining, not dependent on individuals.

Low

Low adopters feel judged or left behind

Mitigation: Frame adoption as a skill, not a personality trait. Pair low adopters with champions. Make sessions safe to fail in.

In Summary

Practice makes possible.

Both answers share a common thread: sustainable change comes from co-design, piloting, and embedding — not mandating. Here is the one-line version of each.

Question 1

Planning at Scale

Introduce tribe-level planning layers, split Alignment Day into focused sessions, mandate outcome-based OKRs, add a mid-quarter health check, and create a dependency register. Pilot with one tribe, then roll out. Measure OKR quality, dependency resolution, and PM satisfaction.

Question 2

AI Adoption

Baseline current usage, run hands-on AI Labs with champions as co-facilitators, embed AI into key PM workflows, build a shared prompt library, and sustain through a rotating AI Scout role. Measure adoption rate, time saved, and PM confidence — short and long term.

Ready to go deeper on any section.

I have made reasonable assumptions throughout โ€” happy to discuss any of them, explore alternative approaches, or dive into specific implementation details during Q&A.

Ali Gueye Neveu · Sr. Product Strategy & Ops Manager

Case Study · 2026 · Confidential
Case Study ยท 2026 ยท Confidential