Our Connected Care System

We are building a connected care system for longitudinal, session-based measurement, personalized between-session support, clinician workflow tools, and research-ready infrastructure.

Feelpath is designed to get more value out of existing care by making sessions easier to learn from and easier to carry forward.

A note from the founder

[Photo: Feelpath founder standing in a calm, plant-filled workspace]

For four years, I've been facilitating peer-based mental health support groups, and I've seen firsthand how emotional clarity and healthy emotional expression have changed my life and the lives of my group members.

I grew up without much emotional language in my home, and building up my awareness and vocabulary was a confusing journey. I've felt firsthand the difference this work can make, in my own life and among my peers: more engagement, more agency, more encouragement week to week, and more ability to see that our growth and effort have an impact.

I've also talked to hundreds of therapists who have validated the importance of this work, and the benefit that more visibility could bring to their practice. Now I want to take on the difficult work of validation and translation, to bring these tools to more therapy practices.

Nick Venturino, Founder

The gap we are starting from

Important clinical information often emerges in care before it becomes durable, measurable, or easy to build on. That creates a gap not only for continuity in treatment, but also for personalization and for learning from care over time.

The result is a fragmented learning loop and fragmented care. The same system is being asked to support continuity in care, track meaningful progress, personalize what happens next, reduce clinician burden, and produce evidence about what works in the real world. Yet those responsibilities rarely live in one connected loop; they are split across different tools, workflows, and moments in care. What is missing is infrastructure that can turn everyday care into something more measurable, more actionable, and more learnable over time.

What this system is, and what needs validation

Feelpath is built around three connected layers, each with its own open question:

  • Transcript-derived measurement: whether conversation-based markers can capture patterns that are interpretable, clinically meaningful, and worth validating against stronger reference points.
  • Client-facing adjunctive tools: whether tools like wheels, annotation, and session-based review actually help clients reflect, label emotions more clearly, and carry the work forward between sessions.
  • Therapist workflow: whether these tools can fit into ordinary practice in a way that supports follow-up, recall, and continuity without creating too much friction.

That is why the next phase is not simply to prove the product. The task now is more specific: to study whether the measurement layer is valid, whether the client tools are helpful, and whether the whole system fits real care well enough to deserve broader use. That is the transition Feelpath is trying to make now, from a working integrated system to a validated approach.

Why this matters for research and care

We see Feelpath's scientific opportunity in testing whether routine therapy conversations can become a practical measurement and personalization layer for mental health care. If session-grounded signals can be made reliable and workable in everyday practice, they could help clinicians tailor follow-up more effectively and help the field learn more clearly what works for whom, in which contexts, and why.

The general research questions we hope to answer:

  1. Measurement

    Can routine therapy language become a better measurement layer than self-report alone?

  2. Personalization

    Can session-grounded supports make care more tailored between sessions?

  3. Implementation / routine care

    Can this actually work in routine care without adding burden?

  4. Learning system

    Can this help mental health care learn faster what works for whom, in what context, and why?

If we can answer these questions well, the payoff is a more practice-ready measurement layer, better tailored follow-up in real care, and stronger real-world evidence for mental health research.

What we’ve already done

Here is an overview of our progression timeline. It shows what is already in place, what has already been used, and where the work goes next.

Done

Developed the core platform

We built the core platform and put HIPAA compliance, privacy, security, transcript capture, consent, redaction, safety features, video UI, the telehealth room, announcements, and login in place.

Done

Launched client + therapist tools

Designed and launched tools for review, reflection, and follow-through.

Done

Used across 300+ peer group sessions

Parts of the system have already been exercised in real peer-group conversations.

We are here

Early study

We are now running the early learning study to see how the product fits real care, how people use it, and what seems helpful in practice.

Next

Validation

Next is a more formal validation phase: testing the digital psychometric, the client tools, and how these fit into therapist workflow.

Next

Readiness

Then comes broader readiness across more partners and practices, with the training, support, and study setup needed for wider adoption.

We have already built a real product system around therapy sessions: session and transcript infrastructure, transcript-derived analysis, client-facing emotional learning tools, therapist-facing review tools, and the first layer of research work around them. What exists today is not just session capture, but a set of connected tools for seeing more from the session, carrying the work forward between sessions, and making that work more visible to both clinicians and clients.

Parts of this system have also already been used across more than 300 peer group sessions, giving us an early setting to see how these tools support reflection, expression, and follow-through in real conversations over time. On the client side, that already includes tools like emotion wheels, emotion annotation, and session-based review.

On the therapist side, it already includes emotional review, emotion annotation, emotion analytics, and other session insight views. The next phase is to study this well, support real use in practice, and move from a working integrated system to a validated approach.

What needs to happen next

The immediate next step is not a large formal validation trial. It is a first feasibility study in routine care. We need enough real sessions on the platform to learn whether the tools are usable, helpful, and workable for clients and therapists in practice.

That is the main bottleneck right now. We already have substantial interest, but recruitment, participant compensation, design support, and research support are still practical constraints. Without enough supported real-world use, the learning loop slows down before we can gather the feedback and data needed to improve the system. Concretely, the next phase is to:

  • run real therapy sessions on the platform with compensated participants
  • work with therapist design partners in routine care
  • collect therapist and client feedback on usefulness and workflow fit
  • gather usage and engagement data from real sessions and between-session use
  • track early pre/post signals where appropriate
  • use those learnings to refine the product and prepare later ALI and ELI studies

Our Study Plans

Our aim is to start with practical questions about usefulness, workflow, and trust, and to learn from real session data responsibly.

Feelpath Funding

We are seeking staged support across Early Proof, Validation, and Readiness to move from real-world feasibility to rigorous validation and broader implementation.

Early proof ($300k-$600k)
Run the early learning study in real care and learn whether the system is usable, helpful, and worth deeper study.

Validation ($1.0M-$2.0M)
Turn early proof into a more formal validation phase for the transcript-derived measures, client tools, and therapist workflow.

Readiness ($3.0M-$7.0M)
Move from early evidence to readiness across more partners and practices, with the support and study setup needed for wider use.

Funding paths in detail

Early proof ($300k-$600k)

The goal of this stage is to run the early learning study in real care and learn whether the system is usable, helpful, and worth deeper study. The work includes:

  • recruit and support a small number of therapist partners
  • run the early learning study in routine care
  • conduct therapist and client interviews about workflow, trust, and usefulness
  • collect workflow, usability, engagement, and feedback data
  • track early pre/post signals in emotion labeling and alexithymia-related change
  • document what seems helpful, for whom, and under what conditions
  • complete the product work needed specifically to deliver the study and capture the right data

The spending at this stage will go towards:

  • therapist partner onboarding and support
  • participant recruitment, screening, and compensation
  • therapist and client interview time
  • IRB / ethics review
  • study coordination and scheduling
  • product work needed to run the study smoothly
  • methods support for early analysis

Validation ($1.0M-$2.0M)

Turn early proof into a more formal validation phase for the transcript-derived measures, client tools, and therapist workflow.

  • prepare and launch a more formal validation study
  • validate the transcript-derived measures against stronger reference points
  • run human coding and review of transcript excerpts
  • validate the client-facing tools and whether they support emotional learning
  • validate therapist workflow fit more rigorously
  • improve study methods and analysis

The spending at this stage will go towards:

  • therapist onboarding and site support for the validation study
  • participant recruitment, compensation, and follow-up
  • human coding and review of excerpts
  • protocol development, statistics, and analysis
  • product and data-capture work needed for the validation study

Readiness ($3.0M-$7.0M)

Move from early evidence to readiness across more partners and practices, with the support and study setup needed for wider use.

  • support more partner practices and study sites
  • build training, onboarding, and support for wider use
  • learn what implementation requires in real practice
  • complete broader validation work across more partners
  • prepare for larger studies and broader rollout into practice

The spending at this stage will go towards:

  • partner onboarding and implementation support across more practices
  • training materials and technical support
  • implementation interviews and workflow review
  • study operations across more sites
  • product work needed for broader use
  • analysis, reporting, and rollout materials

If this work resonates

We'd be glad to talk. We are looking for support that helps us run the right studies, validate the platform carefully, and build from what we learn.

We would especially welcome conversation around research funding, philanthropic support, pilot partnerships, clinical research collaborators, and introductions to program officers, funders, or academic partners.