The problem
The CEO of a 5,000-person company asks one question every Monday morning: "How are my people doing?"
Most of the time, the honest answer is "we don't know yet." The annual engagement survey was three months ago. The results came back as a 90-page PDF that nobody read past page 12. Three months from now, the next survey will go out and nobody will remember the action items from the last one.
That's the state of employee experience at most companies today. And that's with the big incumbent tools — Glint, Culture Amp, Qualtrics, Peakon. The tools are decent at running the survey. They're terrible at turning the survey into something a CEO, an HR business partner, and a line manager can all act on today.
The platform's founders had a vision for something better:
- Run any kind of survey — annual, pulse, onboarding, exit, lifecycle — from one platform
- Show the answers in real time — the moment responses come in, the dashboards update
- Slice the data any way someone needs — by department, location, tenure, manager, demographic, custom segment, or any combination
- Find the why, not just the what — surface which specific things are driving engagement up or down for each segment
- Map the organization as a network — show who actually works with whom, who the real influencers are, which communities form across the org chart
- Read the comments without reading the comments — sentiment analysis, themes, summaries
- Answer questions in plain English — let any HR business partner ask the platform "which segments have low engagement and what's driving it?" and get a real answer
- Turn insights into action — built-in action plans tied to specific items, with manager adoption tracking
- Run the whole thing as a multi-tenant platform — onboard new client companies in minutes, give the platform team a single command center to manage them all
- Pass the kind of security audit a US enterprise demands — full penetration test, role-based permissions at every level, audit trails for every action
And one more constraint that made the project actually difficult: the launch client was a US enterprise with thousands of employees, and they needed the platform live in two months.
The approach
A project of this scope can fail in two opposite ways. You can build everything halfway and ship something nobody trusts. Or you can perfect each module before moving on and never reach a launch date.
We chose a third path: build the spine first, ship it, then layer the smart features on top.
The spine looked like this:
- Multi-tenant from day one. Every decision in the architecture had to assume multiple customer companies would live on the same platform. We never built a "single-tenant version" we'd later have to retrofit.
- Workforce data is the foundation, not the survey. Most survey tools start with the survey. We started with the workforce — get the employee data in cleanly first (with proper validation, history tracking, and demographics), and the survey results become slicing-and-dicing exercises on top of solid data.
- Permissions are a first-class feature, not an afterthought. A US enterprise will not adopt a platform where every user can see everything. We built role-based access at four levels — pages, surveys, workforce attributes, and results — from the very first commit.
- The dashboards have to be the main product. Not the survey-builder. Not the export. The dashboards. Anything that doesn't help a person see and understand the data faster gets cut.
- Then layer on the differentiators — the network analysis, the driver analysis, the AI assistant, the auto-generated reports — once the spine was rock solid.
We sequenced the work so that week 1 of the engagement, the platform could already accept a workforce upload and run a basic survey. Then every subsequent week added a layer: dashboards, segments, analytics, network analysis, action plans, AI assistant, integrations, security audit.
The other key decision: we treated the platform owner and the launch customer (the first tenant) as two different stakeholders with two different needs. The launch customer needed the actual product. The platform owner needed the operational tooling to run the platform — a way to onboard new clients, switch between them for support, impersonate users to debug issues. We built both in parallel because we knew that if the platform owner couldn't run the platform smoothly, the second client would never happen.
What we built
A real survey engine
- Build any survey from a wizard with a library of predefined question templates, OR write custom questions from scratch
- Mix question types: rating scales, multiple choice, free-text comments, "connection" questions for network analysis, "heartbeat" up/down votes
- Group questions into themes (Manager Trust, Career Growth, Wellbeing, etc.)
- Translate every question into multiple languages — employees pick their language at the top of the page
- Schedule the survey with a start and end date
- Configure who's invited, when reminders go out, how many reminder waves
- Track who's responded and who hasn't, in real time
Lifecycle survey automation
This was a major piece of work. The platform automatically sends:
- Onboarding surveys to new hires X days after their start date
- Exit surveys to leaving employees X days before their termination date
- Reminder waves to anyone who hasn't responded after Y days
- All of it driven off the workforce data — when HR uploads new hires or termination dates, the platform takes care of everything else
- Each automated email is queued, scheduled in the recipient's time zone, and sent at the configured send time
- A daily background job checks for new triggers, and a more frequent job sends the queued emails
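The daily trigger check can be sketched roughly like this. This is a minimal illustration, not the platform's actual code; the delay constants and field names are assumptions standing in for the configurable X/Y values mentioned above:

```python
from datetime import date, timedelta

# Hypothetical trigger rules: onboarding survey X days after start,
# exit survey X days before termination (both configurable in the real platform).
ONBOARDING_DELAY_DAYS = 7
EXIT_LEAD_DAYS = 14

def due_lifecycle_sends(workforce, today):
    """Return (employee_id, survey_kind) pairs whose send date is today."""
    due = []
    for emp in workforce:
        start = emp.get("start_date")
        if start and start + timedelta(days=ONBOARDING_DELAY_DAYS) == today:
            due.append((emp["id"], "onboarding"))
        term = emp.get("termination_date")
        if term and term - timedelta(days=EXIT_LEAD_DAYS) == today:
            due.append((emp["id"], "exit"))
    return due

# The daily job would call this and enqueue one invitation per pair;
# the more frequent job drains the queue at the recipient's send time.
workforce = [
    {"id": "e1", "start_date": date(2025, 1, 1)},
    {"id": "e2", "termination_date": date(2025, 1, 22)},
]
print(due_lifecycle_sends(workforce, date(2025, 1, 8)))
```

Because the job is driven purely off the workforce data, uploading a new hire or a termination date is all HR ever has to do.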
The dashboards — where the actual product lives
The platform has six main views, each with its own deep set of charts and tables:
- Home — the executive summary. Engagement score, retention risk in dollars, attrition risk count, top 3 strengths, top 3 opportunities, alerts grid, action plan adoption.
- Participation — who responded vs who didn't, broken down by segment. Pace and reach over time. Representation analysis (are the respondents representative of the workforce?). Team-level participation (which managers got their team to respond?).
- Questions — every survey item ranked highest to lowest, with favorability bars. Comments view with sentiment analysis, themes, and AI summaries. Comment moderation tools. Multiple choice question results.
- Analytics — driver analysis (which items most strongly drive engagement and retention). Heatmaps (every item × every segment in one colored grid). Heartbeat pulse view. Team-level analytics.
- Influencers — the network analysis module. Top influencers, communities (clusters of people who actually work together), leaders, the full network graph rendered as an interactive force-directed layout.
- Action — action plans by manager, by segment, by status. Manager adoption rate. Resources library.
Every single one of these views responds to global filters — pick a segment once and the entire dashboard updates.
The slicing engine
This is what separates a "scoring tool" from an "insight platform." Every chart, every score, every comment, every report can be filtered by:
- Department, business unit, division, location
- Tenure group, age group, gender, ethnicity, education, performance level, job level
- Custom groupings (up to 10 fields the customer defines themselves)
- Any specific manager and their direct reports — or any manager's full org tree
- Combinations of the above ("Phoenix office, 1–3 year tenure, under Manager X")
When a filter changes, every number on the screen recalculates from the underlying data — it's not pre-computed slices.
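The core of that recalculation is simple to state, even if making it fast at scale is not. A minimal sketch, with illustrative field names (the real schema is not shown in this write-up): filters are AND-ed attribute predicates, and item averages are recomputed over only the matching respondents.

```python
def slice_scores(responses, filters):
    """Recompute per-item averages over respondents matching every filter.

    `responses`: list of dicts with segment attributes plus item scores.
    `filters`: attribute -> required value, combined with AND semantics.
    """
    matching = [
        r for r in responses
        if all(r.get(attr) == value for attr, value in filters.items())
    ]
    totals, counts = {}, {}
    for r in matching:
        for item, score in r["scores"].items():
            totals[item] = totals.get(item, 0) + score
            counts[item] = counts.get(item, 0) + 1
    return {item: totals[item] / counts[item] for item in totals}

responses = [
    {"location": "Phoenix", "tenure": "1-3y", "scores": {"trust": 4, "growth": 2}},
    {"location": "Phoenix", "tenure": "4-6y", "scores": {"trust": 5, "growth": 5}},
    {"location": "Austin",  "tenure": "1-3y", "scores": {"trust": 3, "growth": 3}},
]
print(slice_scores(responses, {"location": "Phoenix", "tenure": "1-3y"}))
```

The hard part, covered below under "Hard part #1", is keeping this responsive when "responses" means hundreds of items across 11,000 employees.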
Driver analysis — the why
Most survey tools tell you what the scores are. Driver analysis tells you which scores will move the needle if you fix them. The platform uses statistical modeling on the survey data to figure out which items have the biggest impact on the outcomes you actually care about — engagement, retention, performance — and ranks them by impact level. Then it lets you slice that analysis by segment, so you can see what drives engagement specifically for the under-1-year tenure group (which is often very different from the company average).
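The platform's exact statistical model isn't detailed here, but the standard family of techniques is correlation or regression of each item against the outcome. A simplified stand-in using Pearson correlation:

```python
def rank_drivers(item_scores, outcome):
    """Rank survey items by |correlation| with an outcome (e.g. engagement).

    A simplified stand-in for driver analysis; real implementations often
    use regression to control for overlap between items.
    """
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy) if vx and vy else 0.0

    return sorted(
        ((item, pearson(xs, outcome)) for item, xs in item_scores.items()),
        key=lambda pair: abs(pair[1]),
        reverse=True,
    )

# Toy data: career growth tracks engagement closely; snack scores don't.
item_scores = {
    "career_growth": [1, 2, 3, 4, 5],
    "free_snacks":   [3, 3, 3, 4, 3],
}
engagement = [2, 3, 3, 4, 5]
print(rank_drivers(item_scores, engagement))
```

Running the same ranking per segment is what yields the "what drives engagement for under-1-year tenure" view described above.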
Organizational network analysis
This is the differentiator most competitors don't have. The survey asks employees who they actually work with. The platform builds a network from the answers, then runs analysis on it:
- Influencers — people with the most connections, ranked by influence score
- Communities — clusters of people who form natural working groups (often crossing department lines)
- Leaders — managers whose teams mention them positively
- The network graph itself — rendered as an interactive force-directed layout where you can hover any employee and see their connections light up
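At its simplest, the influencer ranking is a centrality computation on the "who do you work with" graph. A degree-centrality sketch (the platform's real influence score may well be weighted or eigenvector-based; this only illustrates the shape of the analysis):

```python
from collections import defaultdict

def top_influencers(connections, k=3):
    """Rank employees by connection count in the 'works with' network.

    Degree centrality is the simplest possible influence score; it's a
    stand-in here, not the platform's actual formula.
    """
    degree = defaultdict(int)
    for a, b in connections:  # undirected "works with" edges
        degree[a] += 1
        degree[b] += 1
    return sorted(degree, key=lambda who: degree[who], reverse=True)[:k]

# Edges come from the survey's "connection" question.
edges = [("ana", "ben"), ("ana", "cy"), ("ana", "dee"), ("ben", "cy")]
print(top_influencers(edges, k=2))
```

Community detection and the force-directed graph build on the same edge list; the survey answers are the only input.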
A CHRO who's never seen this kind of view before tends to react the same way: "So that's how my company actually works."
Comments analysis
Free-text comments are the most valuable part of any survey and the hardest to use. The platform handles them at scale:
- Every comment is auto-tagged with sentiment (favorable / neutral / unfavorable)
- An AI clusters comments into themes — "career growth," "compensation," "manager support," "workload" — automatically pulled from the data, not pre-defined
- Each theme has its own sentiment breakdown and a written AI summary
- Word clouds visualize what employees are talking about most
- Moderation tools let admins hide policy-violating comments
- Hashed IDs preserve confidentiality while still allowing meaningful analysis
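The hashed-ID idea is worth a concrete illustration: a keyed hash lets the platform group comments by (anonymous) author without ever storing who wrote them. A minimal sketch, with key management omitted and the key name purely hypothetical:

```python
import hmac, hashlib

def pseudonymize(employee_id, secret):
    """Replace a raw employee ID with a keyed hash token.

    Same employee -> same token (so themes and sentiment can be analyzed
    per author), but the token can't be reversed without the secret.
    """
    return hmac.new(secret, employee_id.encode(), hashlib.sha256).hexdigest()[:16]

secret = b"per-tenant-secret"  # hypothetical per-tenant key
a = pseudonymize("emp-1042", secret)
b = pseudonymize("emp-1042", secret)
c = pseudonymize("emp-7781", secret)
print(a == b, a == c)
```

A keyed hash (rather than a plain SHA-256 of the ID) matters because employee IDs are guessable; without the secret, an attacker could hash every known ID and reverse the mapping.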
The AI assistant
A chat assistant that lives inside the platform and answers natural-language questions about the data. We built it with 17 different "tools" it can pull from — each tool answers a specific kind of question (participation stats, top strengths, alerts by segment, ratings, comments, influencers, heatmaps, drivers, etc.). The assistant decides which tool to use based on what the user asks.
It also has a built-in knowledge base of 23 articles covering methodology and concepts ("What's the difference between confidentiality and anonymity?" "How is retention risk calculated?" "What's a driver analysis?"). When a user asks a methodology question, the assistant searches the knowledge base and answers from there.
The assistant generates inline charts based on the actual data (not made-up visuals), suggests follow-up questions, persists conversations so users can pick up where they left off, and is fully isolated per customer company.
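The tool-selection pattern can be sketched in a few lines. In the real platform an LLM presumably picks the tool; the keyword router below is only a stand-in to show the structure, and the two tools shown are hypothetical simplifications of the 17:

```python
# Each "tool" answers one kind of question about one tenant's data.
def participation_tool(ctx):
    return f"participation stats for {ctx['segment']}"

def strengths_tool(ctx):
    return f"top strengths for {ctx['segment']}"

TOOLS = {
    "participation": participation_tool,
    "strengths": strengths_tool,
}

def route(question, segment="company"):
    """Pick a tool by keyword (a stand-in for LLM tool selection);
    anything unmatched falls through to the knowledge-base search."""
    ctx = {"segment": segment}
    for name, tool in TOOLS.items():
        if name in question.lower():
            return tool(ctx)
    return "fall back to the knowledge base"

print(route("What were our top strengths?", segment="Sales"))
```

The per-tenant isolation mentioned above means the `segment` context here would always be scoped to the asking user's company and permissions.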
Action plans — closing the loop
Insights without action are entertainment. The platform turns every survey item into something managers can actually act on:
- Managers create action plans tied to specific survey items
- Plans have status, owners, deadlines, notes
- The platform tracks manager adoption rate — what percent of managers in the company have actually created plans
- Reports show plans by segment, by manager, by status
- A resources library gives managers ready-made playbooks to use
Multi-channel distribution
- Email invitations and reminders via a queue system that handles thousands of recipients across time zones
- Slack integration — push survey invites and reminders into Slack DMs
- Microsoft Teams integration — same, for Teams customers
- All three channels share the same queue system, so a survey rolling out across email + Slack + Teams is one operation
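"One queue, three channels" is the key design point, and it reduces to a queue of channel-tagged messages with per-channel senders. A toy sketch (the senders here just record deliveries; real ones would call the email, Slack, and Teams APIs):

```python
from collections import deque

sent = []  # stand-in for actual delivery
def send_email(msg): sent.append(("email", msg["to"]))
def send_slack(msg): sent.append(("slack", msg["to"]))
def send_teams(msg): sent.append(("teams", msg["to"]))

SENDERS = {"email": send_email, "slack": send_slack, "teams": send_teams}
queue = deque()

def enqueue_invite(recipient, channels):
    """One survey rollout = one enqueue call, however many channels."""
    for ch in channels:
        queue.append({"channel": ch, "to": recipient})

def drain():
    while queue:
        msg = queue.popleft()
        SENDERS[msg["channel"]](msg)

enqueue_invite("ana@example.com", ["email", "slack"])
drain()
print(sent)
```

Because every channel flows through the same queue, scheduling, retries, and time-zone handling are written once rather than three times.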
Workforce data uploads (the hidden hard part)
The launch client uploaded 11,000+ workers with full historical snapshots. We built:
- An upload schema with 7 required and 23 optional columns, covering everything from basic identity to demographics to compensation to org dimensions to custom groupings
- Strict validation: date consistency across every row, duplicate detection, manager-ID matching against the same file, and all-identical-value detection (a column where every row has the same value is almost always a mistake)
- Excel and CSV both supported, files up to 50MB / 500K rows
- A second upload type for outcomes data (retention status, performance scores, anything you want to correlate against)
- A third upload type for benchmark data so customers can compare their scores to industry/country/size benchmarks
- A fourth upload type for comparison data between custom-defined groups
- Historical snapshot tracking — the same employee uploaded across multiple snapshots becomes a timeline you can query
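Three of those validations are easy to show concretely. A minimal sketch over dict rows, with illustrative column names (the real schema has 30 columns):

```python
def validate_workforce(rows):
    """Check duplicate IDs, manager IDs that match no row in the same
    file, and all-identical-value columns. Illustrative subset only."""
    errors = []
    ids = [r["employee_id"] for r in rows]
    if len(ids) != len(set(ids)):
        errors.append("duplicate employee_id")
    known = set(ids)
    for r in rows:
        mgr = r.get("manager_id")
        if mgr and mgr not in known:
            errors.append(f"unknown manager_id {mgr}")
    if len(rows) > 1:
        for col in rows[0]:
            if len({r.get(col) for r in rows}) == 1:
                errors.append(f"column '{col}' has the same value in every row")
    return errors

rows = [
    {"employee_id": "e1", "manager_id": None, "dept": "Sales"},
    {"employee_id": "e2", "manager_id": "e1", "dept": "Sales"},
    {"employee_id": "e3", "manager_id": "e9", "dept": "Sales"},
]
print(validate_workforce(rows))
```

Rejecting the file with a precise error list, rather than silently ingesting bad rows, is what keeps every downstream dashboard trustworthy.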
Permissions, audit, and impersonation
This is what makes the platform actually deployable in a US enterprise:
- Page-level permissions — a role can access "Home" but not "Analytics"
- Survey-level permissions — a role can see the 2025 engagement survey but not the 2024 one (or "all surveys" or "no surveys")
- Attribute-level permissions — a role can see data sliced by department but not by ethnicity
- Result-level permissions — a role can see results down to a manager team but not below 5 respondents (anonymity threshold)
- Audit log — every important action (user created, role changed, survey launched, data uploaded, comment moderated) is logged with who, when, before/after values, IP, and request details
- Impersonation — admins can impersonate any user to debug issues, with sessions that auto-expire after 1 hour and full audit trails so it's never abused
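The four levels compose into a single gate on every results read. A sketch of the layering, with a made-up role shape (the real model is role-based, as described below under "Hard part #2"):

```python
ANONYMITY_THRESHOLD = 5  # no results shown below 5 respondents

def can_view_results(role, page, survey_id, segment_size):
    """Layered checks: page access, then survey access, then the
    result-level anonymity floor. Role shape is illustrative only."""
    if page not in role["pages"]:
        return False
    if role["surveys"] != "all" and survey_id not in role["surveys"]:
        return False
    return segment_size >= ANONYMITY_THRESHOLD

hrbp = {"pages": {"home", "questions"}, "surveys": {"eng-2025"}}
print(can_view_results(hrbp, "questions", "eng-2025", segment_size=12))  # allowed
print(can_view_results(hrbp, "questions", "eng-2025", segment_size=3))   # too small
print(can_view_results(hrbp, "analytics", "eng-2025", segment_size=12))  # no page
```

The attribute level (which slicing dimensions a role may use) would be one more check of the same shape, applied to the filter set rather than the page.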
The platform super admin layer
On top of the per-customer permissions, the platform team itself has a platform layer that lets them:
- See all customer companies in one place
- Onboard a new customer in minutes (creates the workspace, sets up roles, provisions the platform team into it as account admins)
- Click "Access Client" to switch into any customer's environment for support
- Use URL-based context switching — go to /<customer-slug> and you're seeing that customer's data; different slug, different tenant
- Impersonate any user inside any customer for support
- Run the entire SaaS centrally with proper separation between platform and tenant
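The slug-to-tenant resolution at the heart of that context switching is small but load-bearing. A sketch with a hypothetical in-memory registry (the real platform would look slugs up in the tenant database and layer the super-admin check on top):

```python
TENANTS = {"acme": {"id": 1}, "globex": {"id": 2}}  # hypothetical registry

def resolve_tenant(path):
    """Treat the first URL path segment as the customer slug.

    Unknown slugs fail loudly rather than falling back to any tenant —
    one of the edge cases that must never be guessed at.
    """
    slug = path.strip("/").split("/", 1)[0]
    tenant = TENANTS.get(slug)
    if tenant is None:
        raise LookupError(f"unknown customer slug: {slug!r}")
    return tenant

print(resolve_tenant("/acme/dashboard/home")["id"])
```

Every request handler downstream receives the resolved tenant, so no query can ever run without a tenant scope attached.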
Auto-generated PowerPoint reports
Pick a survey, pick a segment, click generate. The platform produces a complete branded PowerPoint deck with all the key insights, charts, and recommendations. Used by HR teams to present results to the CEO, the board, department heads. No more building decks by hand the night before a board meeting.
Background jobs and data infrastructure
- A scheduler running every 5 minutes processes the email queue, queues new invitations as surveys go live, queues reminders as deadlines approach
- A daily job at 6 AM UTC handles lifecycle survey triggers (new hires, upcoming terminations) and creates new invitations as needed
- Background jobs sync workforce data from the upload pipeline into the analytics warehouse
- Background jobs sync survey responses into the analytics layer for fast slicing
- All of it logged, all of it monitored, all of it recoverable if something fails
Security
- WorkOS-powered authentication with enterprise SSO support
- Role-based access at four levels (pages, surveys, attributes, results)
- Hashed employee IDs for confidentiality
- Soft-delete throughout (nothing is actually erased)
- Rate limiting on every API endpoint
- Suspicious activity tracking
- A formal Vulnerability Assessment and Penetration Test (VAPT) — the platform was tested by an external security firm and the report was reviewed and addressed before launch
The hard part — and how we solved it
There were three "hard parts" on this project, and they all happened in parallel.
Hard part #1 — Building a slicing engine that doesn't slow down
A scoring tool is easy. You compute averages and show them. A real slicing engine — where any chart, any score, any comment can be filtered by any combination of segments and stay responsive — is one of the hardest things to build in a SaaS product. With 11,000 employees and hundreds of survey items, "filter the dashboard by Phoenix office × 1–3 year tenure × under Manager X" can mean recalculating thousands of numbers.
We solved this by separating the operational data from the analytics data from day one. The operational data — workforce records, survey responses, action plans — lives in one database optimized for writes and updates. The analytics data — the slicable, filterable, "every chart on every dashboard" data — lives in a separate analytics warehouse optimized for reading huge amounts of data fast. Background jobs continuously sync the operational data into the analytics warehouse so the dashboards always have fresh data without slowing down the operational side.
The result: a CHRO can change the global filter and watch six dashboards repaint in under a second, even with 11,000+ employees and a year of survey history loaded.
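The write-side/read-side split described above can be sketched in miniature: a sync job moves unsynced operational rows into a denormalized, aggregation-friendly structure, so dashboard reads never touch the write-side store. Purely illustrative, with toy in-memory "databases":

```python
# Write-optimized side: one row per survey response event.
operational = [
    {"id": "r1", "employee": "e1", "item": "trust", "score": 4, "synced": False},
    {"id": "r2", "employee": "e2", "item": "trust", "score": 5, "synced": False},
]
# Read-optimized side: scores grouped per item, ready to aggregate.
analytics = {}

def sync():
    """Background sync: copy unsynced rows into the analytics store.
    Returns how many rows moved (zero on a quiet run)."""
    moved = 0
    for row in operational:
        if not row["synced"]:
            analytics.setdefault(row["item"], []).append(row["score"])
            row["synced"] = True
            moved += 1
    return moved

print(sync(), analytics)
```

The trade-off is a short sync lag in exchange for dashboards that only ever read data laid out for aggregation, which is what makes the sub-second filter repaint possible.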
Hard part #2 — Permissions that go four levels deep without becoming a maze
Most enterprise SaaS gets permissions wrong in one of two ways. Either it has too few (everyone can see everything, which kills the deal) or too many (the customer can't figure out how to set it up, which also kills the deal).
We built a four-level permissions model — pages, surveys, workforce attributes, results — but layered it on top of a role-based system so the customer admins never have to think about individual permissions. They create roles ("Department Head", "Survey Owner", "Regional Manager"), they configure what each role can see once, then they assign users to roles. New user joins? Pick their role. Done.
On top of that, the platform team itself has a "super admin" layer that bypasses the customer's permissions entirely — so they can support any customer without becoming entangled in that customer's permission setup. This is the kind of thing that sounds easy but is full of edge cases (what if a super admin tries to access a customer that doesn't exist? what if a customer's URL changes? what if an impersonation session expires mid-action?). We thought about every one of those edge cases up front and the platform handles them gracefully.
Hard part #3 — Shipping an enterprise-grade SaaS in 2 months
This is where most projects of this scope fail. The temptation is to build everything to 70% and ship something soft. We built it the other way: picked the spine, built that to 100%, shipped it, then added layers.
In practice this meant:
- Week 1–2: Multi-tenant foundation, workforce uploads, survey creation, basic dashboards
- Week 3–4: Full slicing engine, segments, comparisons, comments
- Week 5–6: Driver analysis, heatmaps, network analysis, influencers
- Week 7: Action plans, AI assistant, auto-generated reports
- Week 8: Slack/Teams integrations, email queue, lifecycle automation, security audit, launch
Every week shipped something the customer could actually use. Nothing waited for "the big launch." By the time week 8 came, the customer had already been using the platform for six weeks and was asking for refinements, not bug reports.
The other thing that made this work: we sequenced the customer's expectations alongside the build. The launch customer knew exactly what would be live in week 2, week 4, week 6. They were never surprised, they were never blocked, and they had time to upload their workforce data, train their HR team, and prepare their first survey before the platform was fully complete.
The outcome
- Live in production at the launch customer. A real US enterprise with 11,000+ employees running their People Insights program on the platform.
- Delivered in 2 months from project start to launch.
- Passed an external Vulnerability Assessment and Penetration Test (VAPT) — the security audit a US enterprise actually requires before adopting a vendor.
- Multi-tenant from day one — the platform team can onboard new customers in minutes and run the platform centrally.
- Six full dashboard modules — Home, Participation, Questions, Analytics, Influencers, Action — each with deep functionality.
- Sub-second slicing — global filter changes recompute the entire dashboard in real time, even at 11,000+ employee scale.
- Four-level permissions — pages, surveys, attributes, results — implemented through a clean role-based system that customers can manage themselves.
- Real organizational network analysis — most competitors in the space don't have this. The platform does.
- Built-in AI assistant with 17 data tools and a 23-article methodology knowledge base — most competitors don't have this either.
- Lifecycle survey automation — onboarding and exit surveys trigger automatically off workforce data with no manual intervention.
- Auto-generated PowerPoint reports — HR teams stop building decks by hand.
- Multi-channel distribution — email, Slack, Microsoft Teams.
- Full audit trail and impersonation — built for the kind of compliance-heavy environment a US enterprise lives in.
The big-picture outcome: a brand new HR Tech product that competes credibly with Glint, Culture Amp, Qualtrics, and Peakon — built and launched in 2 months for a real US enterprise customer.