Support Model Design
Pre-work for Session 4 | Prepared: 2026-02-17
Summary (3 sentences for Danny)
We are three people building a product that promises 24/7 AI support to tenants — we cannot and should not promise the same to our own customers. The recommended model is business-hours support (Mon—Fri 9am—6pm) with automated alerting only for genuine platform outages outside those hours, scaling from email-and-chat self-service at Starter tier through priority response for Professional to a named contact for Enterprise. The single most impactful investment is a self-service help centre and in-app guidance that deflects 60—80% of support volume before it reaches a human — because every ticket one of us answers is time not spent building or selling.
Support Philosophy
What kind of support company are we?
We are an automation-first, self-service-first company. The entire product thesis is that AI handles repetitive tenant communication so landlords do not need to — we should apply the same philosophy to our own support. Humans handle the complex and the relational. Automation handles the repetitive and the informational.
Tone
- Direct and competent. No corporate fluff. Landlords are busy people running businesses — they want answers, not apologies.
- British English, professional but not formal. First names, plain language, no jargon.
- Honest about limitations. “We’re a small team and here’s what that means for you” is better than over-promising and under-delivering.
What we promise
- We will respond within the stated timeframe, every time. If we say 4 business hours, we mean it.
- We will not hide behind a ticketing system. You will reach a real person who understands the product.
- We will tell you when we do not know the answer and give you a realistic timeline for when we will.
- We will never make you repeat yourself. Full context travels with the conversation.
What we explicitly do NOT promise
- 24/7 human support. We are three people. We will burn out and the product will suffer.
- Phone support at launch. The overhead is too high for a team this size.
- Instant responses outside business hours for non-critical issues.
Support Tiers by Plan
| | Starter | Professional | Enterprise |
|---|---|---|---|
| Channels | Email, help centre, in-app chat (bot-first) | Email, in-app chat (human-routed), help centre | Email, in-app chat (priority queue), scheduled video calls |
| First Response Time | Within 1 business day | Within 4 business hours | Within 2 business hours |
| Resolution Target | 3 business days | 1 business day | Same business day (P1/P2) |
| Hours | Mon—Fri 9am—6pm GMT | Mon—Fri 9am—6pm GMT | Mon—Fri 9am—6pm GMT + emergency line for P1 |
| Dedicated Contact | No (pool) | No (pool, but priority queue) | Yes (named contact, likely Bilal or Deen) |
| Onboarding | Self-service + help centre | Guided 30-min video call | Dedicated onboarding (up to 3 sessions) |
| Quarterly Review | No | No | Yes (30-min check-in) |
| Status Page | Yes | Yes | Yes + proactive notifications |
Why these numbers are realistic:
- At fewer than 50 customers, Bilal and Deen can comfortably handle support alongside development if the self-service layer deflects routine questions.
- Danny acts as first-response filter for non-technical queries (billing, onboarding, account questions), routing technical issues to Bilal or Deen.
- The 4-business-hour SLA for Professional means we check and respond to the queue three times per working day — achievable without interrupting deep work.
- Enterprise “within 2 hours” is only viable with a tiny number of Enterprise accounts (target: 1—5 at launch). Each Enterprise account justifies the attention because of revenue.
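The business-hours arithmetic above can be made concrete. Below is a minimal sketch of how a first-response deadline would be computed against a Mon–Fri 9am–6pm window; the function names are illustrative and timezone handling is deliberately omitted:

```python
from datetime import datetime, timedelta

# Business hours: Mon-Fri, 9:00-18:00 (GMT assumed; timezone handling omitted).
OPEN_HOUR, CLOSE_HOUR = 9, 18

def next_open(t: datetime) -> datetime:
    """Roll t forward to the next moment inside business hours."""
    while True:
        if t.weekday() >= 5:  # Sat/Sun -> following Monday, 9am
            t = (t + timedelta(days=7 - t.weekday())).replace(
                hour=OPEN_HOUR, minute=0, second=0, microsecond=0)
        elif t.hour < OPEN_HOUR:
            t = t.replace(hour=OPEN_HOUR, minute=0, second=0, microsecond=0)
        elif t.hour >= CLOSE_HOUR:
            t = (t + timedelta(days=1)).replace(
                hour=OPEN_HOUR, minute=0, second=0, microsecond=0)
        else:
            return t

def response_deadline(received: datetime, sla_hours: float) -> datetime:
    """First-response deadline, counting only business hours."""
    t = next_open(received)
    remaining = timedelta(hours=sla_hours)
    while remaining > timedelta(0):
        close = t.replace(hour=CLOSE_HOUR, minute=0, second=0, microsecond=0)
        available = close - t
        if available >= remaining:
            return t + remaining
        remaining -= available
        t = next_open(close)
    return t
```

For example, a Professional ticket arriving Friday at 5pm with a 4-business-hour SLA rolls over the weekend and is due Monday at noon, which is why "within 4 business hours" is compatible with checking the queue only three times a day.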
Support Channels
| Channel | Pros | Cons | Recommended? | Tool |
|---|---|---|---|---|
| Email | Async, auditable, no real-time pressure, customers expect it | Slow for urgent issues, easy to lose threads | Yes — primary channel from day one | Crisp or Plain (shared inbox) |
| In-app chat | Low friction, contextual (we know who they are and what page they are on), feels modern | Creates expectation of instant response, requires someone watching the queue | Yes — with bot/AI first responder, human escalation during business hours | Crisp widget or custom (using our own chat infra) |
| WhatsApp | Familiar to UK landlords, mirrors our tenant-facing product (“eat your own dogfood”) | Blurs personal/professional boundaries, hard to manage at scale, GDPR implications | Not at launch. Revisit at 100+ customers. Interesting for “meta” reasons (our product uses WhatsApp, so should our support?) | — |
| Phone | High-touch, great for complex issues and sales-to-support handoff | Enormous time sink, impossible to scale with 3 people, no async benefit | No at launch. Enterprise only via scheduled calls. | — |
| Video call | High-touch for onboarding and complex troubleshooting, builds relationship | Time-intensive, needs scheduling | Yes — for onboarding (Professional/Enterprise) and escalated issues. Scheduled only, never ad-hoc. | Google Meet or Zoom |
| Help centre / docs | Scales infinitely, available 24/7, reduces ticket volume dramatically | Requires upfront investment to build, must be maintained | Yes — highest priority investment. This is the support channel that works while we sleep. | Docs site (could be a Sanity-powered section on ehq.tech, or a simple tool like GitBook/Mintlify) |
| Community forum | Peer-to-peer support, reduces load on team, builds engagement | Needs critical mass to be useful, moderation overhead | No at launch. Premature before 200+ customers. | — |
Recommended channel stack at launch
- Help centre (self-service, 24/7)
- In-app chat (bot-first, human during business hours)
- Email (shared inbox, async)
- Scheduled video calls (onboarding and escalation only)
That is four channels, which is already a lot for three people. Resist the temptation to add more.
SLA Framework
| Priority | Definition | Example | First Response Target | Resolution Target |
|---|---|---|---|---|
| P1 — Critical | Platform is down or core functionality is broken for multiple customers. Tenants cannot report issues. Data loss risk. | Complete outage, database corruption, AI engine down, auth broken | 1 hour (all tiers) | 4 hours |
| P2 — High | Core feature is broken for a single customer, or significant degradation for multiple customers. Workaround may exist. | WhatsApp channel not receiving messages for one org, dashboard not loading, vendor links broken | 4 business hours (Pro/Enterprise), 1 business day (Starter) | 1 business day |
| P3 — Medium | Non-critical feature is broken or behaving unexpectedly. Workaround exists. | Report export failing, compliance alert not showing, document upload timing out | 1 business day (all tiers) | 3 business days |
| P4 — Low | Cosmetic issue, feature request, general question, “how do I” query | UI alignment bug, request for CSV column, “can I change the logo?” | 2 business days (all tiers) | Best effort / backlog |
SLA commitments by tier
| | Starter | Professional | Enterprise |
|---|---|---|---|
| P1 | 1 hour / 4 hours | 1 hour / 4 hours | 1 hour / 4 hours |
| P2 | 1 business day / 3 days | 4 business hours / 1 day | 2 business hours / same day |
| P3 | 1 business day / 3 days | 1 business day / 2 days | 4 business hours / 1 day |
| P4 | 2 business days / best effort | 1 business day / best effort | 1 business day / 3 days |
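The two tables above collapse into a single lookup, which is also how the support tooling would enforce them. A minimal sketch follows; the encoding of one business day as 9 business hours (9am–6pm) and the names (`SLA`, `first_response_target`) are assumptions for illustration:

```python
# One business day = 9 business hours (Mon-Fri, 9am-6pm GMT) -- an assumption
# for this sketch. P1 targets are clock hours; None means best effort.
BD = 9

# (tier, priority) -> (first response target, resolution target), in hours.
# Values transcribed from the SLA commitments table above.
SLA = {
    ("starter",      "P1"): (1,      4),
    ("professional", "P1"): (1,      4),
    ("enterprise",   "P1"): (1,      4),
    ("starter",      "P2"): (BD,     3 * BD),
    ("professional", "P2"): (4,      BD),
    ("enterprise",   "P2"): (2,      BD),      # "same business day"
    ("starter",      "P3"): (BD,     3 * BD),
    ("professional", "P3"): (BD,     2 * BD),
    ("enterprise",   "P3"): (4,      BD),
    ("starter",      "P4"): (2 * BD, None),
    ("professional", "P4"): (BD,     None),
    ("enterprise",   "P4"): (BD,     3 * BD),
}

def first_response_target(tier: str, priority: str) -> float:
    """Business hours allowed before first response for a plan/priority pair."""
    return SLA[(tier.lower(), priority.upper())][0]
```

Keeping the matrix this small is deliberate: every extra row is a commitment someone has to meet.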
Critical rule: P1 is the only priority that operates outside business hours
A genuine P1 (platform down) triggers automated alerting to the on-call person regardless of time. Everything else waits until 9am. This is the only way a 3-person team survives.
Escalation Path
Level 0: Self-Service (Help Centre + AI Chat Bot)
|
| Customer cannot find answer or issue is account-specific
v
Level 1: Danny (First Human Contact)
- Handles: billing, onboarding, account setup, feature questions, non-technical issues
- Resolves: ~40% of human-routed tickets
- Tools: Shared inbox, help centre links, canned responses
- Escalates to L2 if: technical issue, bug report, system behaviour question
|
v
Level 2: Bilal or Deen (Technical Support)
- Handles: bug reports, system behaviour issues, data questions, integration problems
- Resolves: ~55% of remaining tickets
- Tools: Supabase dashboard, application logs, Sentry, direct DB access (read-only)
- Escalates to L3 if: requires code change, infrastructure intervention, or security issue
|
v
Level 3: Bilal + Deen (Engineering Response)
- Handles: code fixes, infrastructure issues, security incidents, data recovery
- This is not “support”: it is engineering work triggered by a support issue
- Tracked as a bug/incident, not a ticket
When does it wake someone up?
Only P1 incidents — and only if automated monitoring confirms the issue first.
The alert chain:
- UptimeRobot detects the site is down or health check fails
- Automated alert fires to PagerDuty/Opsgenie (or even a simple webhook to a phone notification)
- On-call person is alerted
- If no acknowledgement within 15 minutes, the second person is alerted
Everything else waits for business hours. A customer emailing at 11pm about a broken report will get a response by 10am the next morning. That is acceptable.
On-Call Model
This is the hardest section. Three people is below the minimum recommended team size for sustainable on-call (industry guidance says four engineers minimum). We have to be creative.
| Model | How It Works | Pros | Cons | Recommended? |
|---|---|---|---|---|
| A. No on-call (business hours only) | All support is 9am—6pm Mon—Fri. P1 incidents detected by automated monitoring send push notifications but no formal on-call obligation. Whoever sees it first responds. | No burnout risk, sustainable indefinitely, honest about our capacity | If the platform goes down at 2am on Saturday, no one may notice for hours. Tenants (our customers’ customers) are affected. | Recommended for launch (0—50 customers) |
| B. Weekly rotation (Bilal/Deen) | One person is “on-call” each week. Responsible for acknowledging P1 alerts only (not general support). Danny never on-call for technical issues. | Clear ownership, predictable schedule, each person is on-call every other week | Every other week is constrained. With two people, you are on-call 50% of the time. Industry says this is unsustainable long-term. | Recommended at 50—200 customers |
| C. Danny as gatekeeper + engineering escalation | Danny monitors the support inbox/chat during business hours. Bilal and Deen only get pulled in for technical issues. After hours, automated monitoring only. | Protects engineering time, Danny is the natural first contact for customers anyway, reduces context-switching for Bilal/Deen | Danny needs enough technical knowledge to triage (training investment). Danny becomes a single point of failure for support routing. | Recommended as a complement to A or B |
| D. Outsourced after-hours L1 | Use a service like JustAfterMidnight or a virtual answering service for out-of-hours P1 triage. They follow a runbook: check status page, try basic restart, escalate to Bilal/Deen only if confirmed P1. | True 24/7 coverage without team burnout. Professional. | Cost (GBP 300—800/month for basic after-hours coverage). Requires well-documented runbooks. Third party accessing your infrastructure. | Consider at 100+ customers or when Enterprise accounts demand it |
| E. “Follow the sun” (future) | Hire support in a different timezone. Not viable now. | Full coverage without night shifts | Requires hiring. We are 3 people. | No. Not until we hire. |
Recommended approach: Model A + C now, transition to B + C at 50 customers
Phase 1 (0—50 customers):
- Danny handles all first-contact support during business hours (Model C)
- Bilal and Deen are escalation-only during business hours
- After hours: automated monitoring (UptimeRobot) sends push notifications to a shared channel. No formal on-call. Whoever sees a P1 alert responds voluntarily
- Set customer expectations clearly: “Business hours support, Mon—Fri 9am—6pm GMT. Platform monitored 24/7 with automated alerts.”
Phase 2 (50—200 customers):
- Introduce formal weekly rotation between Bilal and Deen for P1 after-hours only (Model B)
- Danny continues as L1 during business hours
- Compensate on-call weeks: the on-call person gets a lighter sprint commitment that week
- Maximum 2 after-hours pages per month threshold — if exceeded, the system reliability needs fixing, not the on-call schedule
Phase 3 (200+ customers or first Enterprise with SLA):
- Evaluate outsourced after-hours L1 (Model D) or hire a dedicated support person
- This is the point where support becomes a full-time role, not a side duty
On-call rules (non-negotiable)
- Only P1 pages after hours. Everything else waits. No exceptions.
- If you are woken up, you get time off the next day. Not optional — mandatory recovery.
- On-call person does not do deep feature work. Their sprint capacity is reduced by 30% during on-call weeks.
- Review every after-hours page monthly. If more than 2 per month are genuine, fix the reliability problem. If they are false alarms, fix the alerting.
- Annual review of on-call sustainability. If either Bilal or Deen reports burnout, we immediately move to Phase 3 regardless of customer count.
Support Tooling
Keep it minimal. Every tool is a thing to maintain, configure, and pay for.
| Need | Recommended Tool | Monthly Cost (est.) | Why This One |
|---|---|---|---|
| Shared inbox + in-app chat | Crisp (Essentials plan) | ~GBP 80/mo (EUR 95) | 10 seats included (future-proof), AI chatbot built in, knowledge base included, WhatsApp integration available later, significantly cheaper than Intercom. Free plan available to start with (2 seats). |
| Help centre / docs | Mintlify or GitBook (free tier) | GBP 0—30/mo | Clean, developer-friendly, version-controlled. Alternatively, build on ehq.tech with Sanity-powered content. |
| Uptime monitoring | UptimeRobot (free tier) | GBP 0 | 50 monitors free, 5-minute checks, email/Slack/webhook alerts. Upgrade to paid (GBP 6/mo) for 1-minute checks. |
| Error tracking | Sentry (free tier) | GBP 0 | 5K errors/month free. Essential for knowing about issues before customers report them. |
| Alerting / on-call | Opsgenie (free tier, 5 users) or PagerDuty Starter | GBP 0—8/mo | Free tier covers our needs. Escalation policies, schedules, phone/push alerts. Only needed from Phase 2. |
| Canned responses / templates | Built into Crisp | GBP 0 | Saved replies for common questions. Reduces response time by 50%+. |
| Internal knowledge base | This Obsidian vault + runbooks | GBP 0 | We already have it. Add a “Support Runbooks” section. |
Total support tooling cost at launch: GBP 0—80/month
Start with free tiers. Upgrade when the free tier limits bite.
What we do NOT need yet
- Zendesk, Freshdesk, or any enterprise help desk — massive overkill for 3 people
- Intercom — excellent product but GBP 60+/seat/month, AI resolution charges add up fast
- A CRM for support — Crisp or Plain handles this at our scale
- Custom-built support tooling — build product, not internal tools
Self-Service First
This is the most important section. Every ticket deflected is 15—30 minutes of engineering time saved. At 3 people, self-service is not a nice-to-have — it is a survival strategy.
| Deflection Method | Effort to Build | Expected Impact | Priority |
|---|---|---|---|
| Help centre with searchable articles | Medium (2—3 days to write initial 20 articles) | High — deflects 40—60% of “how do I” questions | P1 — build before first paying customer |
| In-app tooltips and onboarding flow | Medium (1—2 days, using a tool like Tooltip or built into the dashboard) | High — reduces onboarding support load by 50%+ | P1 |
| AI chatbot on help centre | Low (configure Crisp’s built-in AI bot with help centre content) | Medium — handles simple Q&A, routes complex issues to humans | P2 — set up in first month |
| Status page | Low (use UptimeRobot’s free status page or Instatus) | Medium — eliminates “is it down?” tickets entirely | P1 — trivial to set up |
| Video walkthroughs | Medium (record 5—10 Loom videos of key workflows) | Medium — visual learners prefer video, reduces “how do I” by 20% | P2 |
| Changelog / release notes | Low (add a page to the help centre or use a tool like Featurebase) | Low-Medium — reduces “when will X be available?” questions | P3 |
| FAQ in dashboard sidebar | Low (static content) | Medium — catches common questions at the point of confusion | P2 |
| Automated onboarding emails | Low (set up 5-email drip sequence in SendGrid) | Medium — proactively answers questions before they are asked | P2 |
| CSV import templates + validation | Low (provide downloadable templates with clear headers) | High — CSV import is a known support-heavy area | P1 |
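For the CSV row above, the validation half of the deflection can be a few lines of header checking that rejects a bad file before it generates a ticket. The column names below are hypothetical; the real template would define them:

```python
import csv
import io

# Hypothetical required columns for a property/tenant import template.
REQUIRED_COLUMNS = {"property_address", "postcode", "tenant_name", "tenant_email"}

def validate_csv_headers(file_text: str) -> list[str]:
    """Return a list of human-readable problems; an empty list means OK."""
    reader = csv.reader(io.StringIO(file_text))
    try:
        headers = {h.strip().lower() for h in next(reader)}
    except StopIteration:
        return ["The file is empty."]
    missing = REQUIRED_COLUMNS - headers
    return [f"Missing required column: {c}" for c in sorted(missing)]
```

Surfacing these messages in the import UI, next to a downloadable template, is what turns a known support-heavy area into a self-service one.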
The 20 articles to write before launch
- Getting started: adding your first property
- Getting started: adding tenants
- Getting started: how the AI talks to your tenants
- How tenant identity verification works
- Understanding issue categories and urgency levels
- How to assign a vendor to an issue
- How vendor secure links work
- Uploading compliance documents
- Understanding compliance alerts and expiry tracking
- How the AI uses your property documents (RAG)
- Dashboard overview: what every number means
- Managing your organisation settings
- How WhatsApp integration works
- How voice AI works (when available)
- Understanding your conversation history
- Data security and GDPR: what we store and why
- Billing and plan management (when billing is live)
- CSV import: how to format your data
- Troubleshooting: tenant says they messaged but no issue appeared
- Troubleshooting: vendor did not receive their secure link
Metrics to Track
| Metric | Target | Why It Matters |
|---|---|---|
| First response time | < 4 business hours (Professional), < 1 business day (Starter) | The metric customers feel most. Meeting SLA consistently builds trust. |
| Resolution time | < 1 business day (P2), < 3 business days (P3) | Measures whether we actually fix problems, not just acknowledge them. |
| Ticket volume per customer per month | < 1.0 | If customers are raising more than 1 ticket/month on average, our product or docs have a problem. |
| Self-service deflection rate | > 60% | Percentage of support interactions resolved without a human. The higher this is, the more sustainable our model. |
| Time to resolution by priority | Track weekly | Spot trends — if P2 resolution is creeping up, we are either under-resourced or the product has systemic issues. |
| Support satisfaction (CSAT) | > 4.0 / 5.0 | Simple post-resolution survey. “How was your support experience?” |
| Tickets per engineer per day | < 3 | If Bilal or Deen are handling more than 3 tickets/day, they are not building product. Alarm bell. |
| After-hours P1 incidents per month | < 2 | Measures system reliability. If we are getting woken up more than twice a month, fix the platform, not the rota. |
| Help centre article views | Track monthly | Shows whether self-service is being used. Low views = discoverability problem or content gap. |
Cost of Support
| | At 10 Customers | At 50 Customers | At 200 Customers |
|---|---|---|---|
| Estimated tickets/month | 10—20 | 40—80 | 150—300 |
| Hours/month (team) | 5—10 hrs | 15—30 hrs | 50—100 hrs |
| Who handles it | Danny (L1) + Bilal/Deen (L2, occasional) | Danny (L1 full-time element) + Bilal/Deen (L2, ~5 hrs/week each) | Dedicated support hire needed |
| Tooling cost | GBP 0—80/mo | GBP 80—150/mo | GBP 150—300/mo |
| Total cost (labour + tools) | Absorbed into existing roles | ~GBP 500—800/mo in opportunity cost (engineering time diverted) | ~GBP 2,500—3,500/mo (dedicated hire + tools) |
| Revenue at this scale | ~GBP 3,000—5,000/mo | ~GBP 15,000—30,000/mo | ~GBP 60,000—120,000/mo |
| Support cost as % of revenue | 2—3% (absorbed) | 3—5% | 3—5% |
When do we need to hire a dedicated support person?
Trigger point: when any of the following is true:
- Bilal or Deen are spending more than 8 hours/week on support. That is a full day of engineering lost every week. At 3 people, that is devastating.
- We exceed 100 customers. At this point, ticket volume likely exceeds what Danny can handle as L1 alongside sales/BD work.
- We sign our first Enterprise customer with an SLA. Enterprise support expectations require dedicated attention.
Estimated timing: Somewhere between 50 and 150 customers, depending on self-service deflection effectiveness. If deflection is strong (>70%), we can push to 150. If it is weak (<50%), we need to hire at 50.
The hire profile: A technical support specialist who can handle L1 and L2 independently — not just a ticket router but someone who understands the product deeply enough to resolve most issues without escalating to engineering. Ideally someone who can also write help centre content. Salary range: GBP 28,000—38,000 in the UK.
The Irony Section
We sell 24/7 AI support for tenants — can we use the same approach for our own customer support?
Short answer: yes, and we absolutely should.
The opportunity:
Our product literally has an AI conversation engine with RAG (retrieval-augmented generation) that can answer questions from documents. We could point that same engine at our own help centre content and offer an AI-powered support chatbot to our landlord customers that:
- Answers “how do I” questions from our help centre articles
- Checks the status page and reports whether there is a known issue
- Collects structured information (org name, what they were trying to do, what happened) before routing to a human
- Is available 24/7 — not to fix problems, but to acknowledge them and set expectations (“A team member will respond by 10am GMT”)
- Speaks in the same direct, competent tone we want our brand to convey
Why this is strategically brilliant:
- Dogfooding. We are literally using our own product to support our own customers. This proves the product works, surfaces bugs in our own AI pipeline, and gives us stories to tell in sales conversations.
- Deflection at zero marginal cost. The AI handles the first 60% of queries. The human handles the remaining 40% with full context from the AI conversation.
- 24/7 presence without 24/7 humans. The AI chatbot is always available. It cannot fix bugs, but it can answer questions, log issues, and set expectations. This makes “business hours support” feel much less limited.
- Meta-narrative for marketing. “Our support runs on the same AI engine we build for your tenants.” That is a compelling trust signal.
What it takes to build:
- We already have the conversation engine, the RAG pipeline, and the chat UI.
- We need a separate knowledge base for Envo product docs (distinct from property documents).
- Estimated effort: 2—3 days to configure, assuming the Document AI pipeline is live.
- Cost: minimal (one more “organisation” in our own system; conversations cost ~GBP 0.12 each).
Recommendation: Build this as soon as the Document AI pipeline (E-005) is production-ready. It is low effort, high impact, and a genuinely great story.
Open Questions for Session
- Do we commit to business-hours-only support at launch? The recommendation is yes, but Danny may have heard different expectations from prospects. What are landlords used to from Fixflo, Arthur, etc.?
- Should Enterprise get a contractual SLA, or just a stated commitment? Contractual SLAs create legal obligations and require credits/penalties for breach. A “service level target” is softer. At our size, contractual SLAs are risky. But Enterprise customers may expect them.
- Is Danny willing and able to be the L1 support filter? This is a significant ask — it means Danny is doing sales AND front-line support. Is that realistic, or does it compromise the sales motion?
- What do our competitors offer for support? Arthur has UK-based phone and email support. Fixflo offers tiered support. If landlords switching from Arthur expect phone support and we do not offer it, is that a deal-breaker?
- Should we charge for onboarding? The recommendation is free onboarding for Professional/Enterprise (included in the price) because it reduces future support load. But some SaaS companies charge GBP 200—500 for onboarding sessions. Danny’s view?
- When do we invest in the self-service help centre? It needs to exist before the first paying customer. Who writes the articles? The engineering team has the deepest product knowledge but also the least available time. Can Danny write the non-technical ones?
- The dogfooding question: do we prioritise building AI-powered support for ourselves? It is a 2—3 day investment once the Document AI pipeline is live. It would be a powerful demo and marketing tool. But is it a distraction from shipping customer-facing features?
- BYOAK support complexity: ADR-014 notes that BYOAK increases support complexity (debugging customers’ own API keys). Do we need a policy like “BYOAK customers handle their own provider issues, we only support the Envo platform”? This needs to be explicit in the Enterprise agreement.
- Support for tenants vs. support for landlords: our product supports tenants (the landlord’s customers). But who do tenants contact if the AI gives a wrong answer or the chatbot is down? The landlord? Us? We need a clear boundary here — the recommendation is that we only support landlords, never tenants directly.
- What is our incident communication strategy? When the platform goes down, how do we communicate? A status page is a given. But do we email all affected customers? Post on social media? Have a template ready? This needs a documented runbook before the first outage.
Research Sources
This analysis drew on the following external references:
- Help Scout — SaaS Customer Support Guide 2026
- Zendesk — SaaS Customer Support 2026
- Rootly — On-Call Schedule Design and Burnout Risk
- Datadog — How We Structure On-Call Rotations
- Atlassian — Manager’s Guide to Improving On-Call
- Emailmeter — Industry Standard SLA Response Times
- Freshworks — SLA Response Time Guide
- Plain — AI-Powered Support for B2B Teams
- Crisp Pricing 2026
- Plain — Intercom Alternatives for B2B SaaS 2026
- JustAfterMidnight — Preventing On-Call Burnout
- CIO — 5 Steps to Avoid Burning Out On-Call Staff
- Landlord Studio — Best Property Management Software UK
Prepared for Session 4: Post-Sale Ops. This is a recommendation, not a commitment. The session should validate these assumptions against Danny’s market knowledge and prospect feedback.