Language models can produce text that is grammatically correct but factually incorrect. In this way, they differ from cognitive models, which are closer to our own problem-solving abilities. The challenge of interpreting new concepts is an important consideration for AI. It reflects a language model's inability to reason the way a human does: to make deductions from premises and to process insights, rather than to make probabilistic inferences from word frequencies. This is why drawing new inferences from data can be challenging for these models, and it is exactly the kind of challenge we face in interpreting statistical data from new drugs. The margins for error in this context are significantly smaller, so we cannot rely on language models alone.

Like any technology, ChatGPT is just a tool, and a tool is only as good as the person using it. ChatGPT is incredibly powerful, but to build products around it, its underlying models, nuances, and other details need to be understood.
How could AI help medical writers?

Many generic language models can create authentic-sounding content, but they do not always perform well when the content is novel or its frame of reference is new. This is a consequence of the training data: language models can only produce content related to the data they have been trained on. A well-documented downside of generic language models is 'hallucination', where a model makes up information or invents references when it has none. This is obviously a major concern for scientific writing. To address it, some niche tools have been trained specifically to produce content relating to scientific information. Our own tool, TriloDocs, combines a sector-specific language model with a core of expert rules that act as a set of 'guiderails', so that it only interprets relevant information from clinical trial data against specific best-practice criteria.

It seems that the future of AI in medical writing may lie not in stand-alone tools but in platforms that use AI within the context of wider rules and other elements. Treating AI tools in the medical writing space as more of a 'walled garden' makes sense, given the reluctance to upload intellectual property, personal data, or other sensitive information to open platforms, where data ownership and data protection are still being debated. Regulatory authorities need to be confident in the accountability and traceability of the raw data and documents supporting any claims. GDPR (the General Data Protection Regulation), the protection of commercially sensitive information, and 'AI hallucinations', not to mention the specific context of medical writing, remain major concerns.

Nonetheless, language models are undoubtedly powerful tools for creating authentic-looking texts from prompts, rewriting texts for different audiences (e.g., in other languages), and producing simplified summaries.
Most medical writers would be delighted to hand routine, mundane, and repetitive tasks to a computer that can do them more efficiently, accurately, and quickly. This could free writers to concentrate on the skilled work of contextualising and interpreting clinical data, and allow them to have meaningful data discussions with clinical teams earlier than is currently possible.
What are the risks of AI?

We have already touched on some of the key risks involved in using AI. Data privacy is often the first that springs to mind, but it is an inherent risk of any technology, not one specific to AI. Some AI platforms carry risk simply because they are internet-based, and 'open' systems present risks even in a non-AI context.

Then there is the risk of errors. In our experience with TriloDocs, the risk of human error has been significantly reduced, if not eliminated: the tool identifies important data that humans may miss, and we have not yet found an issue raised during quality assurance that was not already identified by the technology. The problem of AI hallucination is a cause for real concern, because there is no room for false data, inferences, or references when dealing with clinical and scientific data. From a medical writing perspective, a conservative approach is always best. Our experience is that it is better for a tool to highlight where something is missing or an interpretation cannot be made, flagging data points for the medical writer to investigate, than to produce a 'complete' but misleading draft. Other considerations include the ethical debate about AI, which is far beyond the scope of this article.
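To make the 'guiderails' and conservative-flagging ideas above concrete, here is a minimal, hypothetical Python sketch of a rule layer that checks a model's draft against the raw trial data and flags anything missing or mismatched for the writer, rather than letting the model fill the gap. All names and data structures here are illustrative assumptions, not TriloDocs' actual implementation.

```python
# Hypothetical sketch of a "guiderails" pattern: an expert-rule layer that
# validates language-model output against source data and flags gaps
# instead of letting the model invent values. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class DraftSection:
    text: str
    cited_values: dict            # values the model reports, keyed by name
    flags: list = field(default_factory=list)

def apply_guiderails(draft: DraftSection, source_data: dict) -> DraftSection:
    """Check every value in the draft against the raw trial data."""
    for name, value in draft.cited_values.items():
        if name not in source_data:
            # Conservative behaviour: never invent, always flag.
            draft.flags.append(f"'{name}' not found in source data; writer must verify")
        elif source_data[name] != value:
            draft.flags.append(
                f"'{name}' mismatch: draft says {value}, source says {source_data[name]}"
            )
    for name in source_data:
        if name not in draft.cited_values:
            draft.flags.append(f"'{name}' present in source but missing from draft")
    return draft

# Example: the rule layer catches a hallucinated endpoint and a missing one.
source = {"primary_endpoint_p": 0.03, "dropout_rate": 0.12}
draft = DraftSection(
    text="The primary endpoint was met (p=0.03); response rate was 45%.",
    cited_values={"primary_endpoint_p": 0.03, "response_rate": 0.45},
)
for flag in apply_guiderails(draft, source).flags:
    print("FLAG:", flag)
```

The point of such a design lies in what it refuses to do: where the rules cannot verify a value, the pipeline emits a flag for a human to investigate rather than a plausible-looking number.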
What's the result for medical writers?

One thing we always stress when talking about our own platform, TriloDocs, is that it does not replace the medical writer. It simply accelerates and enhances the writer's ability to have meaningful data discussions with the clinical team and speeds the crafting of the report. Highly skilled medical writers bring value as critical thinkers as they create study reports and related documentation. We are still some way off the ultimate goal of artificial general intelligence, which would move AI into the realm of human-like thought. Until then, critical thinking can only be done by humans.

In the short time that tools like ChatGPT have captured our imagination, an adage has already emerged to describe where things could be going in the short term: AI might not take your job, but someone who uses AI will. If we view AI as a tool that can supplement our work, make us more efficient and accurate, and relieve us of some of the heavy lifting, it can become a powerful resource, freeing us to focus on the more valuable work of critical thinking and crafting a strong narrative in our highly complex and vital field.

References available upon request.