
  • Hidden Risks in Biopharma: Why Market Access and HTA Preparedness Matter


    In the high-stakes world of biotechnology, success often seems to hinge on a single, defining goal: securing regulatory approval. For many biopharma leaders, finally receiving that green light from agencies such as the U.S. Food and Drug Administration (FDA) or the European Medicines Agency (EMA) is a cause for celebration—and rightfully so. It represents years of research, millions (sometimes billions) of dollars invested, and significant risk. Yet, if there’s one thing that recent high-profile drug launches have taught us, it’s that a therapy’s journey to market is far from over at the approval stage. Without robust Market Access and Health Technology Assessment (HTA) preparedness, even ground-breaking treatments can fail to reach the patients who need them most.

    In this article, we’ll delve into why ignoring Market Access and HTA readiness is a major risk, the pitfalls of assuming regulatory approval equals commercial success, and how real-world examples underline the importance of early planning. Understanding these dynamics helps biopharma companies chart a more sustainable path to ensuring patients have timely access to life-saving therapies.


    1. The All-Too-Common Myth: “Approval Equals Success”

    One of the most pervasive misconceptions in biopharma is the belief that “once a therapy has market authorization, it’s smooth sailing.” The reality? Approval is only the first of several hurdles.

    From the Laboratory to Patients

    When a biotech company invests heavily in clinical development—selecting the right endpoints, assembling a robust study design, and navigating regulatory checkpoints—it’s easy to see why so much emphasis is placed on obtaining that FDA or EMA approval. After all, achieving this milestone validates a therapy’s safety and efficacy, which is an enormous accomplishment. However, the practical aspect of ensuring patients can actually receive the newly approved therapy involves a parallel and equally critical process: reimbursement negotiations with payers.

    Why Reimbursement Matters

    In many healthcare systems worldwide—especially those with single-payer or heavily regulated insurance markets—no reimbursement means no meaningful patient access. After regulatory approval, payers (insurance companies, government bodies, or national health systems) typically conduct their own evaluations to determine if a therapy is cost-effective. This process, often guided by HTA agencies, includes a thorough review of clinical trial data, cost-benefit analysis, and real-world evidence if available. If the therapy doesn’t meet the required thresholds, coverage is restricted or denied entirely. For biotech innovators, that can translate into a diminished or completely eroded commercial opportunity, regardless of how scientifically groundbreaking the product might be.
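    To make the cost-effectiveness hurdle concrete, here is a minimal sketch of the incremental cost-effectiveness ratio (ICER) calculation that many HTA bodies apply; all figures, including the $50,000-per-QALY threshold, are invented for illustration and not drawn from any actual assessment:

    ```python
    # Minimal sketch of the incremental cost-effectiveness ratio (ICER)
    # calculation used in HTA appraisals. All figures are hypothetical.

    def icer(cost_new, cost_std, qaly_new, qaly_std):
        """Incremental cost per quality-adjusted life year (QALY) gained."""
        return (cost_new - cost_std) / (qaly_new - qaly_std)

    # Invented numbers: new therapy costs $120,000 and yields 6.0 QALYs
    # per patient; standard of care costs $40,000 and yields 4.0 QALYs.
    ratio = icer(120_000, 40_000, 6.0, 4.0)

    # Payers compare the ICER to a willingness-to-pay threshold
    # (the $50,000/QALY figure below is purely illustrative).
    WTP_THRESHOLD = 50_000
    print(f"ICER: ${ratio:,.0f} per QALY gained; "
          f"below threshold: {ratio <= WTP_THRESHOLD}")
    ```

    A therapy whose ICER lands above the payer’s threshold typically faces restricted coverage or pressure for price concessions, which is exactly the dynamic behind the launch examples discussed in this article.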


    2. Real-World Examples: Costly Lessons in Ignoring Market Access

    A few high-profile drug launches illustrate how even blockbuster therapies can face significant hurdles if Market Access considerations aren’t addressed early and thoroughly.

    Aduhelm (Biogen)

    When the FDA approved Biogen’s Aduhelm (aducanumab) for Alzheimer’s disease, many heralded it as a breakthrough in a field with few therapeutic options. Yet, the Centers for Medicare & Medicaid Services (CMS) adopted a restrictive coverage policy due to questions surrounding Aduhelm’s clinical effectiveness and overall value for money. According to CMS’s official announcement, the therapy could only be covered in the context of clinical trials, severely limiting broader patient access. This decision dramatically impacted Aduhelm’s revenue potential and underscored a vital lesson: securing FDA approval alone is no guarantee of commercial success if payers aren’t convinced of the product’s real-world benefits and cost-effectiveness.

    Zolgensma (Novartis)

    Novartis’ Zolgensma is a gene therapy for spinal muscular atrophy that made headlines for its high cost of roughly $2.1 million for a single infusion. While it was considered revolutionary, negotiations for reimbursement in various European countries ran into significant delays. As reported by pharmaphorum, different healthcare systems questioned the long-term data and the sustainability of such an expensive therapy. Novartis faced hurdles in achieving swift reimbursement approvals, highlighting the need for solid cost-effectiveness evidence to convince payers that the therapy is worth the investment.

    Exondys 51 (Sarepta Therapeutics)

    Sarepta Therapeutics’ Exondys 51 (eteplirsen), a treatment for Duchenne muscular dystrophy, was approved by the FDA through an accelerated pathway amid controversy surrounding its efficacy data. Yet, as Fierce Pharma reported, payer coverage was far from guaranteed. Limited efficacy data and the therapy’s high price point led many payers to impose strict coverage criteria. Despite having regulatory approval, Exondys 51 did not see the widespread uptake Sarepta had hoped for, illustrating the importance of robust clinical and economic evidence to support coverage decisions.


    3. The High Price of Overlooking HTA and Early Economic Evidence

    Shaping Clinical Development with Market Access in Mind

    A critical part of successful Market Access is weaving payer perspectives into a product’s clinical development strategy from the outset. Designing trials that capture data relevant to payers—such as comparative effectiveness, patient-reported outcomes, or health economic measures—can go a long way toward smoothing the path to reimbursement. When companies wait until after Phase III trials are complete to think about HTA requirements, they often find they haven’t collected the right types of data to convince payers. This oversight can lead to expensive follow-up studies or delayed product launches.

    The Role of Health Technology Assessment (HTA)

    HTA agencies, such as the UK’s National Institute for Health and Care Excellence (NICE) or the Institute for Quality and Efficiency in Health Care (IQWiG) in Germany, employ systematic methods to evaluate a therapy’s clinical effectiveness and cost-effectiveness relative to existing treatments. Their decisions frequently inform national or regional coverage and pricing. For biotech innovators, early engagement with these agencies—either directly or through advisory bodies—can provide valuable insights into the evidence thresholds and data endpoints that will be scrutinized most closely.

    Balancing Clinical and Economic Evidence

    While it’s natural to emphasize clinical trial results showing safety and efficacy, payers seek evidence of real-world value. Economic models examining cost savings over time (such as reductions in hospitalizations or improved patient quality of life) are essential to building a compelling argument for coverage. Failing to present this data can result in significant pushback or delayed decisions, which can cost companies both revenue and reputational goodwill.


    4. How to Integrate Market Access Strategies Early

    Start with a Forward-Thinking Mindset

    Market Access and HTA preparedness shouldn’t be treated as an afterthought. Instead, they should be embedded in the earliest phases of drug development. This shift in mindset can reduce costly reruns of clinical studies and establish a clear roadmap for demonstrating cost-effectiveness.

    Collaborate Across Functions

    Biotech companies should promote collaboration between clinical, regulatory, health economics, and commercial teams. Having these stakeholders at the table together ensures that trial designs incorporate payer-relevant endpoints and that the marketing strategy is informed by current reimbursement landscapes.

    Engage with Stakeholders and Adapt

    Regulatory agencies, payers, and HTA bodies each have unique perspectives on what evidence matters most. Ongoing dialogue with these stakeholders can provide clarity on the types of data needed. Being agile in designing and adjusting clinical programs can pay dividends down the road, streamlining the coverage decision and accelerating patient access.


    Don’t Let the “First Step” Be Your Only Step

    While achieving FDA or EMA approval is undoubtedly a monumental milestone, it’s only the beginning of a therapy’s journey. The hidden risk in biotech today is the assumption that approval alone will ensure broad and sustained commercial success. As seen with Aduhelm, Zolgensma, and Exondys 51, even cutting-edge treatments can face tough battles with payers and HTA bodies if their economic and real-world value isn’t clearly substantiated.

    For biotech companies aiming to ensure their innovations actually reach the patients who need them, a proactive approach to Market Access and HTA preparedness is non-negotiable. This means building payer perspectives into clinical trial designs, investing in robust economic models, and engaging stakeholders early. By heeding these principles, companies can avoid costly post-approval surprises and better fulfill their core mission: improving patients’ lives.


    Ready for Real-World Success? Elevate Your Market Access Strategy.

    If you want to dive deeper into how to integrate Market Access strategies into your clinical development roadmap, subscribe to our blog or reach out to discuss your specific challenges. In upcoming posts, we’ll explore how to effectively balance clinical and economic evidence to meet both regulatory and payer expectations—and ultimately drive patient access. Don’t miss out on actionable insights that can make the difference between a successful commercial launch and a missed opportunity.

    The Looney Tools

    Navigating the complexities of Market Access and HTA doesn’t have to be a daunting, years-long process. At Loon, we combine innovation with precision to transform how biopharma approaches evidence synthesis and market access forecasting. Our suite of tools—Loon Lens™, Loon Hatch™, and Loon Waters™—is designed to empower your team with faster, smarter, and scientifically validated solutions to elevate your Access Strategy.

    • Loon Lens™ accelerates literature screening with AI precision, cutting timelines dramatically while ensuring no critical studies are overlooked.
    • Loon Hatch™ revolutionizes systematic reviews, reducing 2,500 person-hours of effort to just 85, delivering HTA-ready evidence in days, not years.
    • Loon Waters™ optimizes market access forecasting, enabling you to predict and enhance reimbursement outcomes with unparalleled accuracy.

    Don’t let delays or inefficiencies hold back your innovation. Subscribe to our blog or reach out today to see how our tools can transform your commercialization pathway and ensure your therapies reach patients faster. Together, let’s make timely, life-saving access a reality.

  • Loon Lens™: Autonomous AI Agents for Literature Screening in Systematic Reviews


    We are pleased to share the results of our recent validation study, “Loon Lens 1.0 Validation: Agentic AI for Title and Abstract Screening in Systematic Literature Reviews,” now available on medRxiv. This study evaluates the effectiveness of Loon Lens™, our autonomous AI literature screener designed to automate the Title and Abstract (TiAb) screening process in systematic literature reviews (SLRs).

    Photo of Loon Lens 1.0 Scientific Validation Paper: Agentic AI for Title and Abstract Screening in Systematic Literature Reviews

    Addressing the Challenges of Systematic Reviews

    Systematic literature reviews are fundamental to evidence-based research across various disciplines, including healthcare, social sciences, and technology. They provide comprehensive analyses that inform clinical guidelines, policy-making, and future research directions. However, the process of conducting SLRs is often resource-intensive and time-consuming. Studies estimate that an average SLR takes over a year to complete and costs more than US$140,000, with TiAb screening among the most laborious stages.

    The Burden of Title and Abstract Screening

    TiAb screening involves manually reviewing thousands of citations to identify studies relevant to a specific research question. This step is crucial but can be a bottleneck due to the sheer volume of literature and the need for meticulous attention to inclusion and exclusion criteria.


    Introducing Loon Lens

    Loon Lens™ is an autonomous AI literature screener that alleviates the burden of TiAb screening by leveraging large language models (LLMs). Unlike traditional methods that require initial manual screening of hundreds of studies or semi-automated approaches dependent on pre-labelled data, Loon Lens™ autonomously screens citations based solely on user-defined inclusion and exclusion criteria.


    Key Features

    Fully Autonomous Screening: No need for pre-labeled training data or initial researcher screening of hundreds of studies.
    No-Code, Simple Interface: Loon Lens™ is designed for researchers to use out of the box; no expertise in AI or machine learning is required.
    Scalable Solution: Loon Lens™ handles large volumes of citations efficiently, returning results in just a few hours and cutting weeks of work off your plate.

    The Validation Study: Assessing Performance and Reliability

    To evaluate Loon Lens’s effectiveness, we conducted a validation study comparing its performance against human reviewers in TiAb screening across eight systematic literature reviews.

    Study Design Overview

    Data Source: We replicated eight SLRs conducted by Canada’s Drug Agency (CDA), covering various drugs and medical conditions.

    Citations Reviewed: A total of 3,796 citations were retrieved using OpenAlex, an open-source scholarly database.

    Human Review: Dual independent reviewers screened citations, identifying 287 studies (7.6%) for inclusion.

    Loon Lens Screening: Loon Lens™ used the same citations and eligibility criteria to perform autonomous TiAb screening.


    Understanding the Metrics

    To provide a comprehensive assessment, we calculated several performance metrics:

    Accuracy: The proportion of correct predictions (both inclusions and exclusions).
    Recall (Sensitivity): The ability to identify all relevant studies.
    Precision (Positive Predictive Value): The proportion of correctly identified relevant studies among those flagged for inclusion.
    F1 Score: The harmonic mean of precision and recall, balancing both metrics.
    Specificity: The ability to correctly exclude irrelevant studies.
    Negative Predictive Value (NPV): The proportion of correctly identified irrelevant studies among those excluded.

    Bootstrapping was applied to compute 95% confidence intervals, providing robustness to our estimates.
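    As a rough illustration of the idea (not the study’s actual code), a nonparametric bootstrap resamples the per-citation screening decisions with replacement and reads the confidence interval off the percentiles of the resampled metric; the toy dataset below is invented:

    ```python
    import random

    # Sketch of a nonparametric bootstrap 95% CI for recall, resampling
    # per-citation (true_label, predicted_label) pairs with replacement.
    # The data below are placeholders, not the study's actual records.

    def recall(pairs):
        tp = sum(1 for t, p in pairs if t and p)
        fn = sum(1 for t, p in pairs if t and not p)
        return tp / (tp + fn) if (tp + fn) else 0.0

    def bootstrap_ci(pairs, metric, n_boot=2000, alpha=0.05, seed=42):
        rng = random.Random(seed)
        stats = sorted(
            metric([rng.choice(pairs) for _ in range(len(pairs))])
            for _ in range(n_boot)
        )
        lo = stats[int((alpha / 2) * n_boot)]
        hi = stats[int((1 - alpha / 2) * n_boot) - 1]
        return lo, hi

    # Toy dataset: 40 relevant citations (38 caught, 2 missed)
    # plus 160 irrelevant citations (all correctly excluded).
    pairs = [(True, True)] * 38 + [(True, False)] * 2 + [(False, False)] * 160
    lo, hi = bootstrap_ci(pairs, recall)
    print(f"recall = {recall(pairs):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
    ```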

    Results: High Recall and Accuracy

    The validation study yielded encouraging results:

    Accuracy: 95.5% (95% CI: 94.8%–96.1%)
    Recall (Sensitivity): 98.95% (95% CI: 97.57%–100%)
    Specificity: 95.24% (95% CI: 94.54%–95.89%)
    F1 Score: 0.770 (95% CI: 0.734–0.802)
    Precision (Positive Predictive Value): 62.97% (95% CI: 58.39%–67.27%)

    Interpreting the Results

    High Recall: Loon Lens successfully identified nearly all relevant studies, which is crucial in SLRs to ensure comprehensive evidence synthesis.

    Good Specificity: The platform effectively excluded irrelevant studies, minimizing the burden of unnecessary full-text reviews.

    Precision: Erring on the side of caution: While a precision of 62.97% indicates that some irrelevant studies were flagged for inclusion, in SLR contexts missing a relevant study is a greater concern than reviewing additional ones, so this trade-off deliberately leans toward caution.

    Confusion Matrix

    Studies              Predicted Included   Predicted Excluded
    Actually Included    284                  3
    Actually Excluded    167                  3,342

    True Positives: 284 studies correctly identified as relevant.

    False Positives: 167 studies were incorrectly identified as relevant.

    False Negatives: 3 studies were missed.

    True Negatives: 3,342 studies correctly identified as irrelevant.
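    As a sanity check, the headline metrics reported above follow directly from these four counts; the short sketch below recomputes them in plain Python:

    ```python
    # Recompute the reported metrics from the confusion-matrix counts above.
    tp, fn = 284, 3      # relevant studies: caught vs. missed
    fp, tn = 167, 3342   # irrelevant studies: wrongly flagged vs. correctly excluded

    total = tp + fn + fp + tn                           # 3,796 citations
    accuracy = (tp + tn) / total                        # ~95.5%
    recall = tp / (tp + fn)                             # ~98.95% (sensitivity)
    precision = tp / (tp + fp)                          # ~62.97% (PPV)
    specificity = tn / (tn + fp)                        # ~95.24%
    npv = tn / (tn + fn)                                # negative predictive value
    f1 = 2 * precision * recall / (precision + recall)  # ~0.770

    print(f"accuracy={accuracy:.3f} recall={recall:.4f} "
          f"precision={precision:.4f} specificity={specificity:.4f} f1={f1:.3f}")
    ```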

    Discussion: Implications, Limitations, and Future Directions

    The validation of Loon Lens represents a significant advancement in the application of AI to systematic literature reviews. By achieving high recall and specificity, Loon Lens demonstrates its potential to substantially reduce the time and effort required for TiAb screening. However, it’s essential to critically examine these results to understand their implications fully.

    Implications for the Research Community

    Efficiency Gains: The high accuracy and recall suggest that Loon Lens can reliably identify relevant studies, allowing researchers to allocate their time more effectively. This efficiency is particularly beneficial for large-scale reviews where the volume of citations can be overwhelming.

    Resource Allocation: With reduced time spent on initial screening, resources can be redirected towards more in-depth analysis, quality assessment, and synthesis of findings.

    Accessibility: By lowering the barriers to conducting systematic reviews, Loon Lens may enable smaller research teams or those with limited funding to undertake comprehensive reviews.

    Balancing Recall and Precision

    While Loon Lens excels in recall, ensuring that almost all relevant studies are identified, the moderate precision indicates a higher rate of false positives compared to human reviewers. This trade-off is important to consider:

    Acceptable Trade-off: In systematic reviews, missing a relevant study (false negative) can have more significant consequences than including an irrelevant one (false positive). Therefore, a higher recall is often prioritized over precision.

    Impact on Workload: The increase in false positives means that researchers may need to screen more studies at the full-text level. However, this additional effort is generally less burdensome than the initial TiAb screening and is a reasonable compromise to ensure comprehensiveness.

    Limitations of the Study

    Scope of Validation: The study focused on eight SLRs in the healthcare domain, specifically related to drug evaluations. While these reviews covered a range of topics and eligibility criteria, the results may not be fully generalizable to other fields or types of studies, such as qualitative research or reviews in social sciences.

    Data Source: The use of OpenAlex as the sole bibliographic database may have influenced the pool of citations. Differences in indexing between databases like PubMed, Scopus, or Web of Science could affect the generalizability of the findings.

    Language and Cultural Bias: LLMs can sometimes exhibit biases based on the language and cultural contexts present in their training data. This could potentially impact the screening of studies from diverse geographical regions or non-English publications.

    Addressing Ethical and Practical Considerations

    AI Transparency: Understanding how Loon Lens makes decisions is crucial for trust and acceptance. While LLMs can be seen as “black boxes,” our efforts to provide explanations for inclusion or exclusion decisions can enhance transparency.

    Data Privacy: Ensuring that uploaded citations and any associated data are handled securely is essential. Adhering to data protection regulations and best practices is a priority.

    User Control: Providing users with options to adjust the sensitivity of the screening process or to review borderline cases can empower researchers and tailor the tool to specific needs.

    Future Directions and Enhancements

    Algorithm Refinement: Ongoing development aims to refine the algorithms to reduce false positives without compromising recall. This may involve incorporating additional contextual understanding or domain-specific knowledge.

    Full-Text Screening Capability: Extending Loon Lens to assist with or autonomously perform full-text screening could further streamline the systematic review process.

    Cross-Disciplinary Validation: Conducting validation studies in other fields, such as psychology, education, or environmental science, will help assess the tool’s adaptability and effectiveness across disciplines.

    Integration with Existing Workflows: Developing integrations with popular reference management software and systematic review tools can enhance usability and encourage adoption.

    User Feedback Mechanisms: Incorporating feedback loops where users can provide input on screening decisions can help improve the model over time and increase accuracy.

    Collaboration and Community Engagement

    Open Dialogue: We encourage discussions within the research community about the role of AI in systematic reviews. Sharing experiences, challenges, and solutions will benefit all stakeholders.

    Ethical AI Practices: Collaborating with ethicists and AI experts to address concerns about biases, fairness, and accountability is important for responsible deployment.

    Training and Support: Providing resources, tutorials, and support to help users make the most of Loon Lens will facilitate smoother transitions to incorporating AI into research workflows.

    Comparison with Existing Solutions

    Unlike semi-automated tools that require human input for labelling or training, Loon Lens™ operates fully autonomously. This sets it apart by offering:

    No Need for Pre-Labeled Data: Reduces setup time and allows immediate use.

    No Need for Pre-Screening: Researchers don’t need to screen hundreds of studies, unlike current literature screeners.

    User-Friendly Experience: Simplifies the screening process without technical complexities.

    How to Get Started with Loon Lens

    We invite researchers and institutions to try Loon Lens™:

    1. Request Access: Visit https://loonlens.com/ and submit a request.

    2. Prepare Your Data: Export your citations in RIS format from your reference management software.

    3. Define Criteria: Clearly outline your inclusion and exclusion criteria.

    4. Initiate Screening: Upload your data and criteria, and let Loon Lens handle the screening.
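    For step 2, most reference managers (e.g., EndNote or Zotero) can export RIS directly; a minimal RIS record looks roughly like this (the citation itself is invented):

    ```text
    TY  - JOUR
    AU  - Doe, Jane
    TI  - A hypothetical randomized trial of drug X in condition Y
    JO  - Journal of Placeholder Medicine
    PY  - 2023
    AB  - Background: ... Methods: ... Results: ...
    DO  - 10.0000/placeholder.doi
    ER  - 
    ```

    Each record opens with a `TY` (type) tag and closes with `ER`; the title and abstract fields are what the TiAb screening step actually evaluates against your criteria.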

    Conclusion: Advancing Literature Screening for Systematic Reviews with Loon AI

    Loon Lens™ represents a significant step forward in leveraging AI to support researchers in conducting literature screening for systematic literature reviews more efficiently. While the tool demonstrates high recall and accuracy in identifying relevant studies, we acknowledge that it is not without limitations. The precision levels indicate room for improvement, particularly in reducing false positives to minimize unnecessary workload during full-text screening.

    Our commitment is to continue refining Loon Lens™ based on user feedback and ongoing research. By addressing the limitations and expanding its capabilities, we aim to make Loon Lens™ an indispensable tool across various research domains.

    We believe that while AI cannot replace the nuanced judgment of experienced researchers, it can serve as a powerful assistant. By automating the most time-consuming aspects of systematic reviews, Loon Lens™ allows researchers to focus on critical analysis, interpretation, and the generation of new insights that advance their fields.

    For more information or to request access, please visit https://loonlens.com/ or contact us at contact@loonbio.com.

    Collaborate with Us

    We are keen to collaborate with the research community to further enhance Loon Lens:

    Feedback: Share your experiences to help us improve.

    Partnerships: Academic and industry partnerships are welcome for joint projects and studies.

    Beta Testing: Participate in testing new features and provide valuable insights.

    Ready to Transform Your Evidence Synthesis Process?

    Contact us today for a demo, or visit loonbio.com to learn more about how we’re revolutionizing market access and clinical research with AI-driven solutions that reduce research timelines from years to days.


    About Loon

    Loon Inc. is at the forefront of AI-driven market access and clinical research. We help biopharma companies navigate the complexities of market access with confidence, providing innovative solutions that dramatically reduce research timelines while maintaining the highest standards of quality and compliance.

  • Ensuring Loon’s Compliance with NICE Guidelines on AI Use in Evidence Synthesis


    In this article, we navigate NICE’s position on the use of AI in evidence generation for Health Technology Assessment (HTA) and explain how Loon Hatch™ (our end-to-end, fully automated, expert-validated evidence synthesis solution) and Loon Lens™ (our scientifically validated, autonomous literature screener) align with the agency’s guidelines on the use of AI in Health Economics and Outcomes Research (HEOR).


    Revolutionizing Evidence Synthesis with AI: Loon’s Compliance with NICE Guidelines

    The National Institute for Health and Care Excellence (NICE) has recently released guidelines on the responsible use of AI in evidence synthesis for HTA. At Loon, we’re delighted to demonstrate how our AI-powered solutions, Loon Hatch™ and Loon Lens™, align with these guidelines. In fact, we’re not just meeting them; we’re exceeding them, setting new standards in speed, accuracy, and compliance for Market Access, HTA, and HEOR workflows.

    Loon’s AI Solutions: Exceeding NICE Standards

    Our end-to-end AI-powered solutions for evidence synthesis are designed to redefine evidence synthesis while adhering to NICE’s stringent guidelines:

    How Loon’s approach maps to each NICE guideline:

    • Human Oversight: Loon Hatch™ AI outputs are always assessed and validated by human experts, ensuring efficiency and accuracy.
    • Validation Audit Trace: We show when and why an expert overrode an AI recommendation, so that all validation decisions are transparent and traceable, enhancing accountability and trust in the AI system.
    • Scientific Methodology: Loon offers full disclosure of the scientific methodologies used in our AI systems, including validation data.
    • Transparency and Justification: Loon provides clear explanations of AI’s role and outcomes through comprehensive documentation, allowing users to track and verify AI decisions alongside expert assessments.
    • Ethical and Legal Compliance: Loon ensures strict adherence to legal frameworks and ethical guidelines, including GDPR, for data protection and fairness.
    • Security and Risk Mitigation: Robust cybersecurity measures and risk-management strategies, such as air-gapping, are in place to protect AI systems and prevent cyber incidents.
    • Detailed Reporting: Loon maintains thorough documentation of AI operations, ensuring transparency and continuous improvement.
    • Early Engagement with NICE: Loon will initiate proactive dialogue with NICE to align AI methods with their frameworks from the start.



    Scientific Validation: Loon Lens™ Literature Screener

    Loon Lens™, our fully automated literature screener, has undergone rigorous scientific validation to ensure its accuracy and reliability in identifying relevant studies for systematic reviews. Recently, Loon published a validation paper detailing the performance of Loon Lens™ on medRxiv, which demonstrates an accuracy of 95.5% (95% CI: 94.8–96.1), with sensitivity (recall) at 98.95% (95% CI: 97.57–100%) and specificity at 95.24% (95% CI: 94.54–95.89%). These results set a new standard for AI-assisted literature screening. This paper offers full transparency on the methodologies used, model performance, and validation processes, fostering trust and credibility in AI-driven research.

    For a more detailed view of the paper, please refer to the full text and article metrics on medRxiv.

    Transforming Evidence Synthesis with Loon Hatch™

    Loon Hatch™ leverages our patent-pending Cognitive Ensemble AI Systems™ to revolutionize the evidence synthesis process:

    1. Unparalleled Efficiency: Reduces systematic literature review timelines from 2,500 hours to just 85, accelerating patient access to therapies and ensuring that biopharmaceutical innovators maximize the reimbursement potential of their therapies and eliminate market access delays.
    2. Complete Transparency:
      • Data Audit Trace: Ensures research integrity through full disclosure of data sources.
      • Scientific Validation: Loon offers papers with full disclosure of the scientific methodologies used in our AI systems, including validation data. This allows users to fully understand how our AI models operate, their performance metrics, and any limitations, fostering trust and enabling informed decision-making.
      • AI Decision Transparency: Allows users to track AI decisions alongside expert assessments.
    3. Full Automation and Expert Validation: While our AI fully automates labour-intensive processes, human experts make final decisions, ensuring the highest quality outcomes. This oversight ensures that AI acts as a tool to augment human expertise, not replace it.

    Aligning with NICE’s Vision for AI in HTA

    NICE emphasizes AI as a tool to enhance, not replace, human involvement in evidence synthesis. This aligns perfectly with Loon’s approach. For instance, Loon Hatch™ rapidly processes vast amounts of literature, but human experts make the final inclusion decisions.

    Our solutions comply with NICE’s recommendations on machine learning (ML) and large language models (LLMs) in evidence synthesis:

    • Supporting evidence identification
    • Automating study classification
    • Streamlining screenings

    All of these processes are conducted with rigorous expert oversight, ensuring accuracy and reliability.

    Loon’s Commitment to Responsible AI Use

    As we continue to innovate, we remain deeply committed to adhering to industry standards and guidelines, ensuring that our AI solutions automate processes and enhance efficiency while also meeting the highest standards of transparency and ethical use. Our collaboration with regulatory bodies and profound understanding of clinical research challenges position us as leaders in the future of evidence synthesis.

    By choosing Loon Hatch™, you are accelerating your evidence synthesis process and ensuring full compliance with the latest industry guidelines, making your HTA submissions more robust and reliable.
