Supporting a scaling Product organisation presented two core challenges: adapting the product planning cycle and levelling up AI adoption across the org.
Understanding the current operating model is the foundation for any meaningful change. Here is the context that shapes both answers.
I assume the 12-month squad growth (28 → ~40) is confirmed and funded, and that headcount will not grow beyond forecast. I also assume the CPO and Tribe Leads are open to process iteration, and that there is no existing program management function, meaning this is a greenfield opportunity. For the AI question, I assume PMs have access to standard AI tools (ChatGPT, Gemini, Copilot) but that no structured adoption programme exists yet.
From 28 to ~40 squads in 12 months. The current planning rhythm was designed for a smaller organisation. Here is how to evolve it without breaking what works.
The scaling challenge

Proposed new rhythm
Whilst right for the company at this stage, the planning process should evolve to meet the needs of a larger team. Let's assess the current planning cycle and flag opportunities for improvement.
A wider scope and more features to ship increase the potential for friction, delays, and roadmap conflicts. The current cadence might not be best suited to agile, dynamic roadmaps, where regular leadership input will be critical.
A single full-day session for 40 squads creates a coordination ceiling. With 12 more squads, the current format becomes logistically unmanageable: dependency identification requires more structured pre-work. And using Alignment Day as the final roadmap share-out comes far too late in the cycle.
One quarterly memo from the CPO cannot adequately address the nuanced needs of 4 tribes with distinct missions. Tribe-level interpretation gaps will widen as the organisation grows.
Two issues: (1) Squad OKRs should all feed into a wider company- or org-level set of OKRs; without that link, it becomes much harder to ensure that everything being goaled matters and fits the wider strategy. (2) Feature-based OKRs ('launch X feature') are a proxy for impact, not impact itself. At 40 squads, the proportion of low-quality OKRs will compound, making prioritisation harder.
Cross-tribe dependencies currently surface only at Alignment Day, too late to resolve without disrupting roadmaps. 40 squads across 4 tribes will generate significantly more cross-cutting work.
The current rhythm has no structured mechanism to surface blockers, re-prioritise, or course-correct between quarterly planning and Alignment Day. This creates a 12-week blind spot.
In Summary
The current planning process would struggle to scale in two key areas: staying aligned with strategic direction, and coordinating and prioritising efforts across a growing org.
Each proposal targets a specific failure mode. Together they create a planning rhythm that scales to 40 squads and beyond.
Add a Tribe Planning session between CPO guidance and squad-level planning. Each Tribe Lead facilitates a 2-hour working session with their PMs to interpret CPO guidance, identify tribe-level priorities, and surface cross-squad dependencies before squads begin building roadmaps.
Replace the single Alignment Day with two events: (1) Tribe Showcases: 90-minute sessions per tribe for internal alignment and dependency mapping; (2) a 3-hour Cross-Tribe Summit for the top 10–15 cross-cutting dependencies and strategic bets. This scales to 40 squads without losing depth.
Introduce a simple OKR quality rubric: every OKR must link to a measurable business outcome (not just a feature launch) and pass a 'so what?' test. Tribe Leads review OKRs before they are finalised. Feature-based OKRs are permitted only with an explicit rationale.
Introduce a lightweight 30-minute async update at week 6 of each quarter: each squad posts a 3-line status (on track / at risk / blocked) in a shared doc. Tribe Leads triage blockers. CPO reviews a one-page summary. No meetings unless escalation is needed.
A lightweight shared artefact (Notion or Jira) where squads log cross-squad dependencies during planning. Tribe Leads own resolution. Reviewed at Tribe Showcases and Cross-Tribe Summit. Eliminates the 'discovered too late' problem.
| Stakeholder | How I engage them | Phase |
|---|---|---|
| CPO | Sponsor and decision-maker. Co-design the new rhythm, approve changes, model the behaviour. | Design & Launch |
| Tribe Leads (×4) | Core co-designers. Run Tribe Planning sessions. Own dependency resolution. Must feel ownership, not compliance. | Design & Ongoing |
| PMs (×28–40) | Primary users of the new process. Involve 2–3 senior PMs as design partners. Run pilots with one tribe first. | Pilot & Rollout |
| Engineering Leadership | Align on technical requirements, tech stack dependencies and cross-squad commitments. Include in Alignment Day redesign. | Design & Launch |
| Preply Leadership (CEO, COO) | Inform on changes. Show how the new rhythm improves visibility and reduces strategic risk. | Launch |
| Before | After |
|---|---|
| Vague CPO guidance → interpret alone → OKR quality varies | Tribe-level context → clear priorities → higher OKR quality → less rework |
| Limited visibility between quarters → surprises at Alignment Day | Mid-quarter health check → dependency register → proactive risk management |
| Quarterly planning feels disconnected from annual strategy | Tribe Planning sessions create explicit links between annual goals and quarterly OKRs |
Change management is as important as the design. I would pilot before scaling, involve the people most affected, and make the new process feel like an upgrade, not an imposition.
Workshop with CPO + Tribe Leads to co-design the new rhythm. Document the new process in a single-page playbook.
Run the new process with the Growth tribe for Q3 planning. Gather feedback from PMs and EMs. Iterate on the playbook.
Structured retro with pilot tribe. Identify what worked, what didn't. Finalise the playbook for full rollout.
Roll out to all 4 tribes for Q4 planning. CPO communicates the change at an all-hands. Tribe Leads facilitate their first sessions.
Quarterly retrospective on the planning process itself. Adjust cadence and artefacts as the org continues to scale.
I would frame the new rhythm as "less ceremony, more clarity", not more process. The pilot tribe becomes internal advocates. I would create a simple one-page playbook that any PM can reference. Tribe Leads get a facilitation guide. The CPO publicly endorses the change at the first Alignment Day using the new format.
From AI champions to AI-fluent teams. A practical plan to close the adoption gap, embed AI into PM workflows, and build a culture of continuous learning.

The objective is to move from a bimodal distribution (a few AI champions and many non-users) to a baseline of AI fluency across all 28 (soon 40) PMs. This is not about replacing PM judgment; it is about freeing PMs from low-value tasks so they can spend more time on the work only they can do: customer understanding, strategic thinking, and cross-functional alignment.
At Preply specifically, AI fluency in the PM team directly supports the company's own AI-first product strategy. PMs who understand AI tools are better positioned to build AI-powered features and to evaluate AI vendor claims critically.
A structured, time-bound programme that starts with diagnosis, builds skills through practice, embeds AI into daily workflows, and sustains the change as AI continues to evolve.
Run a 10-minute survey to map current AI usage across all 28 PMs. Identify 3–4 'AI champions' who are already getting value. Conduct 1:1s with low adopters to understand barriers (time, trust, skill, tool access). Publish the baseline: transparency creates accountability.
Run 4 fortnightly 90-minute 'AI Labs': hands-on sessions where PMs practice AI on real work (not toy examples). Topics: prompt engineering for PM tasks, AI-assisted discovery, AI for writing, AI for data. Champions co-facilitate. Sessions are recorded for async access.
Introduce 'AI-assisted' as a standard step in key PM workflows: PRD writing, user research synthesis, OKR drafting. Create a shared prompt library in Notion; PMs contribute their best prompts. Add a 5-minute 'AI win of the week' slot to the existing PM all-hands.
Re-run the baseline survey. Track time saved per PM per week (self-reported). Identify the top 3 use cases delivering the most value. Double down on those. Retire sessions that aren't landing. Publish a quarterly 'AI at Preply PM' report to leadership.
Appoint a rotating 'AI Scout' role (1 PM per quarter) responsible for tracking new AI tools and reporting back. Monthly 15-minute 'AI news' slot in PM all-hands. Quarterly review of the prompt library and workflow integrations. The programme must adapt as AI evolves.
Honest risk assessment is a sign of strategic maturity. Here are the most likely failure modes and how I would mitigate them.
Mitigation: Frame AI as a 'first draft' tool, not a decision-maker. Require PMs to critique and edit AI outputs. Include this in the AI Lab curriculum.
Mitigation: Teach PMs to verify AI outputs against primary sources. Create a 'trust but verify' checklist for common use cases.
Mitigation: Agree on 2–3 approved tools as the standard. Allow experimentation but centralise the prompt library to create shared value.
Mitigation: Rotate the AI Scout role. Recognise champions publicly. Ensure the programme is self-sustaining, not dependent on individuals.
Mitigation: Frame adoption as a skill, not a personality trait. Pair low adopters with champions. Make sessions safe to fail in.
Both answers share a common thread: sustainable change comes from co-design, piloting, and embedding, not mandating. Here is the one-line version of each.
Introduce tribe-level planning layers, split Alignment Day into focused sessions, mandate outcome-based OKRs, add a mid-quarter health check, and create a dependency register. Pilot with one tribe, then roll out. Measure OKR quality, dependency resolution, and PM satisfaction.
Baseline current usage, run hands-on AI Labs with champions as co-facilitators, embed AI into key PM workflows, build a shared prompt library, and sustain through a rotating AI Scout role. Measure adoption rate, time saved, and PM confidence, both short and long term.
I have made reasonable assumptions throughout โ happy to discuss any of them, explore alternative approaches, or dive into specific implementation details during Q&A.
Ali Gueye Neveu · Sr. Product Strategy & Ops Manager
Case Study · 2026 · Confidential