Independent Technical Platform Review

1. General Information 

Title of tender: Independent Technical Platform Review – Health Platform Architecture, Data Flows, and AI Integration
Company name: Scita Health ApS
Address: Absalonsgade 3, 4tv
CVR no. 46117131
Date: 10 March 2026
Deadline for tender: 31 March 2026 
Contact info: Lars Persson, CTO – Lars@scita.health

2. Presentation of Company
Scita Health ApS is a Danish healthtech startup developing an AI-powered health companion and Clinical Decision Support System (CDSS) for perimenopausal women, combining clinical intelligence with behavioural science for personalised non-communicable disease (NCD) prevention. 

The platform is built on an EU-compliant stack (Xano backend, Next.js frontend, n8n automation, WhatsApp via Twilio, Claude API for personalised communication) with a deterministic safety architecture that separates clinical decision-making from AI communication layers. 

We have completed proof-of-concept testing, qualitative research with approximately 200 women, and have a 55-person waitlist. We are launching a 50-woman pilot in Q2 2026 and need the platform reviewed before real users depend on it. 

3. What We Have Built
Below is a description of our platform, its components, and how they connect. We provide this so bidders can understand the system and propose a meaningful review approach. We are deliberately not prescribing what the reviewer should focus on – we want the bidder’s proposal to reflect their own technical judgment about where the risks are. 

3.1 Stack Overview
The platform is built on a managed infrastructure stack with a deterministic safety architecture that separates clinical decision-making from AI communication layers. 

Xano is our backend. It provides a PostgreSQL database, a REST API layer, and server-side logic through "function stacks" (visual logic chains). All clinical scoring is deterministic and runs in Xano function stacks, not in the AI model. Xano also hosts our vector store (pgvector) for the RAG system. We have approximately 35 database tables covering user profiles, clinical assessments, biomarkers, symptoms, risk scores, reference data, supplement mappings, and behavioural profiles. 

n8n is our workflow automation layer. It handles message routing (WhatsApp incoming/outgoing), orchestrates the onboarding flow, runs scheduled jobs (daily check-ins, periodic reviews), and, critically, executes our safety rules. Red flag detection (medical emergency keywords, crisis language, oncology alerts) runs as deterministic keyword matching in n8n before any message reaches the AI model. n8n is the glue between all other components. 
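
To illustrate the kind of deterministic check described above, here is a minimal sketch of red-flag keyword matching as it might run in an n8n Function node. The keyword list and normalisation steps are invented for illustration and are not our production rules:

```javascript
// Minimal sketch of deterministic red-flag detection before any message
// reaches the AI model. Keywords and normalisation are illustrative only.

const RED_FLAG_KEYWORDS = ["chest pain", "suicide", "overdose"];

// Normalise to reduce trivial bypasses (case, fullwidth characters,
// diacritics, repeated whitespace).
function normalise(text) {
  return text
    .normalize("NFKD")                // fold compatibility look-alikes where possible
    .replace(/[\u0300-\u036f]/g, "")  // strip combining diacritics
    .toLowerCase()
    .replace(/\s+/g, " ")
    .trim();
}

function detectRedFlags(message) {
  const clean = normalise(message);
  return RED_FLAG_KEYWORDS.filter((kw) => clean.includes(kw));
}
```

A reviewer would probe exactly this layer for bypasses (message splitting, homoglyphs, misspellings) that simple substring matching cannot catch.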

Next.js on Vercel is our frontend. It handles onboarding (multi-frame assessment), the user dashboard, health reports (Insights Page), and the provider-facing clinical summary. It communicates with Xano via REST APIs. 

WhatsApp via Twilio is the primary user interface for ongoing interactions. Daily check-ins, on-demand health questions, coaching, and proactive outreach all happen here. Messages flow through n8n for routing and safety checks before reaching Claude. 

Anthropic Claude API generates personalised natural language responses. It receives structured context (user profile, scores, journey, behavioural communication profile, relevant evidence) assembled by n8n from Xano, and produces responses matched to the user's communication style. Claude is configured with zero data retention. It does not make clinical decisions; it communicates decisions already made by deterministic logic. 

The RAG system provides evidence-based health guidance. PubMed abstracts (systematic reviews, meta-analyses, RCTs focused on women 45-65) are ingested, embedded using OpenAI embeddings, and stored in Xano's pgvector. When a user asks a health question, the system retrieves relevant evidence, scores it using a GRADE-aligned framework, and includes it in Claude's prompt context with PubMed citation IDs. User personal data and PubMed evidence are stored in separate tables with no cross-contamination by design. 
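
As an illustration of the retrieval-and-ranking step, the sketch below combines cosine similarity with a GRADE-aligned evidence weight. The weights, study-design labels, and data shapes are simplified placeholders; production retrieval runs inside Xano's pgvector, not in application code:

```javascript
// Illustrative evidence ranking: similarity between the question embedding
// and stored abstract embeddings, weighted by evidence level. All weights
// and field names are assumptions for illustration.

function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hypothetical GRADE-aligned weights by study design.
const EVIDENCE_WEIGHT = { "systematic-review": 1.0, "rct": 0.8, "observational": 0.5 };

function rankEvidence(queryEmbedding, docs, topK = 3) {
  return docs
    .map((d) => ({
      pmid: d.pmid,
      score: cosine(queryEmbedding, d.embedding) * (EVIDENCE_WEIGHT[d.design] ?? 0.3),
    }))
    .sort((x, y) => y.score - x.score)
    .slice(0, topK);
}
```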

The system is developed by the founding team using AI-assisted engineering tools (e.g., Anthropic Claude Code) within a controlled software development lifecycle, where all outputs are verified and validated against defined software requirements. 

3.2 Key Data Flows
To give bidders a sense of the system’s complexity, here are the main data flows: 

Onboarding: User completes a multi-frame assessment in Next.js. Responses are sent to Xano via API. Xano function stacks process the data: scoring across five clinical domains, journey assignment, risk bucket classification, modifier application, gut health flag evaluation, and content block determination. All computed results are stored. n8n triggers a WhatsApp welcome message via Twilio. 

Daily WhatsApp interaction: Incoming message arrives via Twilio webhook to n8n. n8n runs deterministic red flag keyword matching. If safe, n8n assembles user context from Xano (profile, scores, journey, recent interactions, behavioural profile) and sends it with the message to Claude API. Claude’s response returns through n8n to Twilio to WhatsApp. Conversation data is logged. 
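
Each hop in this chain is a potential silent-failure point. A defensive wrapper for one hop might look like the sketch below; the timeout values, retry count, and fallback shape are assumptions for illustration, not our current configuration:

```javascript
// Sketch of a defensive wrapper for one hop in the n8n → Xano → Claude →
// Twilio chain: time out, retry once, and surface failure explicitly
// instead of silently dropping the message. Values are illustrative.

async function withTimeout(promise, timeoutMs) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error("timeout")), timeoutMs);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    clearTimeout(timer); // avoid a stray rejection after success
  }
}

async function callWithRetry(fn, { timeoutMs = 5000, retries = 1 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await withTimeout(fn(), timeoutMs);
    } catch (err) {
      if (attempt >= retries) {
        // Last attempt failed: return an explicit error result so the
        // failure is logged and the user is never left without a response.
        return { ok: false, error: String(err) };
      }
    }
  }
}
```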

Evidence-backed health question: User asks a health question via WhatsApp. After safety check, n8n triggers the RAG pipeline: the question is embedded, pgvector similarity search retrieves relevant PubMed evidence from Xano, evidence is scored and ranked, and the top results are included in Claude’s prompt alongside user context. Claude generates a response with PubMed citation references. 
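
One behaviour worth verifying in this flow is the empty-retrieval case: what the prompt contains when no evidence clears the relevance threshold. A guard might look like the following sketch; the threshold and instruction wording are hypothetical:

```javascript
// Sketch of an empty-retrieval guard for the RAG step: if no evidence is
// sufficiently relevant, the prompt explicitly says so rather than letting
// the model improvise citations. Threshold and wording are assumptions.

const MIN_RELEVANCE = 0.75; // hypothetical similarity cut-off

function buildPromptContext(rankedEvidence) {
  const usable = rankedEvidence.filter((e) => e.score >= MIN_RELEVANCE);
  if (usable.length === 0) {
    return {
      hasEvidence: false,
      instruction:
        "No sufficiently relevant evidence was retrieved. Tell the user you cannot cite evidence for this question.",
    };
  }
  return {
    hasEvidence: true,
    citations: usable.map((e) => e.pmid),
  };
}
```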

Clinical scoring: All risk scoring uses deterministic function stacks in Xano. Inputs are user-reported symptoms, medical history, biomarkers (manually entered), medications, and genetic information. Outputs are Health Strength Scores (0–100) for five clinical domains, plus flags from a hidden meta-domain (cancer signals, specialist referral triggers). No AI is involved in scoring. 
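
To illustrate what "deterministic function stack" means in practice, a domain score might be computed along these lines. The inputs, weights, and modifier are invented for illustration and are not our clinical rules:

```javascript
// Illustrative sketch of a deterministic domain score (0–100): weighted,
// clamped arithmetic with no model involvement. Same inputs always give
// the same output, so results are reproducible and auditable.

function clamp(x, lo, hi) {
  return Math.min(hi, Math.max(lo, x));
}

function domainScore({ symptomBurden, biomarkerPenalty, historyModifier }) {
  const raw = 100 - 2 * symptomBurden - biomarkerPenalty + historyModifier;
  return clamp(Math.round(raw), 0, 100);
}
```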

3.3 What We Are Concerned About
We share these not to prescribe the review scope, but to be transparent about the uncertainties that motivated this task: 

• We built this fast and learned as we went. We don’t know what we don’t know.

• n8n is the single orchestration layer for safety-critical functions. If it fails, we need to understand what happens to the user. 

• The RAG system works in testing, but we have not stress-tested retrieval quality or verified citation integrity at scale. 

• Xano is a capable platform, but it is uncommon. We have limited ability to get external peer review on our Xano implementation. 

• Error handling across the n8n → Xano → Claude → Twilio chain is something we have built incrementally. We suspect there are silent failure modes we have not identified. 

• Data deletion across the full chain (Xano, n8n logs, Twilio message history) has not been tested end-to-end. We need confidence that we can fulfil GDPR erasure requests completely. 

• We will soon have 50 real users whose health data and trust we are responsible for. We want to be confident the platform is solid before that happens. 

4. Description of the Task
We are asking the bidder to conduct an independent, hands-on technical review of the platform described above. We will provide full access to the Xano workspace, n8n instance, Vercel deployment, and Twilio configuration. 

Rather than prescribing the review scope, we ask bidders to propose their own review plan. 

Based on the platform description in section 3, we want the bidder’s proposal to include: 

1. What they would focus on and why – given the stack, the data flows, and the context (health platform, real users imminent, non-technical founders), what do they see as the highest-risk areas? We want to understand the bidder’s technical reasoning, not just a list of services. 

2. How they would conduct the review – what does their process look like? How would they familiarise themselves with the platform? What would they trace, test, or probe, and in what order? How do they distinguish between surface-level observations and deep findings? 

3. What questions they already have – based on the platform description alone, a strong reviewer will already have technical questions or hypotheses about potential weaknesses. We invite bidders to include these in their proposal. This helps us assess depth of understanding before the engagement begins. 

4. What they would deliver – what does the output look like? How would findings be structured and prioritised? 

5. Estimated time allocation – how many hours for onboarding/familiarisation, hands-on review, and reporting? 

4.1 What We Value in a Proposal

We will evaluate proposals primarily on the quality of technical reasoning demonstrated, not on length or polish. Specifically: 

• Specificity over generality: a proposal that says “I would trace the n8n red flag workflow to verify that keyword matching cannot be bypassed by Unicode substitution or message splitting” tells us more than one that says “I would review the safety system.” 

• Honest questions over confident claims: a proposal that asks “how does the system handle a Xano API timeout during the scoring function stack – does n8n retry, fail silently, or alert?” tells us the bidder is thinking critically about our architecture. 

• A proposal that asks “what happens when the RAG system returns zero relevant results for a health question – does Claude generate an answer from its training data, or does the system explicitly handle the empty-retrieval case?” tells us the bidder understands the risk boundary between retrieved evidence and model hallucination. 

• Relevant experience over credentials: we care about what the bidder has actually built, reviewed, or broken. Specific examples of past technical reviews, architecture decisions in health/fintech, or debugging of distributed systems are more valuable than certifications. 

• Understanding of our context: we are a small team, pre-revenue, with real users coming soon. Recommendations need to be proportionate – not enterprise-grade solutions to startup-stage problems. 

5. Task Objectives and Success Criteria 

The objective is to identify what is solid and what is fragile in our platform before real users depend on it, and to give our team the knowledge to fix what needs fixing. 

Success criteria: 

● The reviewer has worked inside our live platform (not reviewed documentation about it). Findings reference specific components: table names, workflow IDs, function stack names, API endpoints. 

● The reviewer has traced at least one complete end-to-end flow (e.g., incoming WhatsApp message through safety check, context assembly, Claude generation, and delivery) through live system components, not just inspected components in isolation. 

● Findings are specific and actionable. Our team can create concrete development tasks from them within one working day. 

● Each finding is categorised by severity and timing: fix before pilot, fix before scaling, or improve when possible. 

● The reviewer has identified at least some issues we were not already aware of. If the report contains only observations we could have made ourselves, it has not added sufficient value. 

● Our team's understanding of our own platform's strengths and weaknesses has materially improved as a result of the engagement. 

6. Budget and Specification of an Offer 

This engagement is funded through the Beyond Beta accelerator programme. We invite bidders to propose a scope and price that reflects the complexity of the task. Please provide a transparent breakdown so we can assess the correlation between price and quality. We expect a written offer to include: 

• Date of submission.

• Brief presentation of the bidder, including CVR number, contact details, and relevant experience. We are particularly interested in concrete examples of past technical reviews, platform architecture work, or systems debugging in health, fintech, or other data-sensitive domains. 

• The bidder's proposed review plan as described in section 4 (focus areas, methodology, initial questions, deliverable format, time allocation). 

• Any prior experience with Xano, n8n, or similar managed backend/workflow platforms is relevant but not required. We are more interested in the ability to assess unfamiliar systems than in prior platform-specific knowledge. 

• Specification of the price, including hourly rate and total estimate. 

• Proposed timeframe and end date. 

• Any conditions for the offer. 

7. Background for the Tender 

Beyond Beta is subject to a number of requirements for good, healthy financial management, including documentation that the agreed price for external purchases is an expression of the market price. This tender is part of these requirements. 

We emphasise that the bidder must only make an offer on the requested task. Services of an executing or implementing nature cannot be approved. The winning bid is chosen based on an assessment of the best correlation between price and quality. 

For clarity: we are seeking an independent technical assessment that transfers knowledge to our team. We are not seeking a consultant to fix issues, rebuild components, or take over development. The deliverables are findings, assessments, and prioritised recommendations that our team can act on.

Tender

Tender no.
002207
Budget ex. VAT
43.000,00
Offer deadline
31-03-2026 09.42

Advertiser

Danish Life Science Cluster
Lersø Parkallé 101
2100 København Ø
 
31778078
info@danishlifesciencecluster.dk

Contact person

Julie Justi Andreasen

Project Manager
Danish Life Science Cluster

26207656