study design should focus on answering these. Many studies collect far too much extraneous data, which means that the investigators and sites are busy doing things that won’t actually contribute to the task at hand.5,6


The practicality of running the study and the clarity with which it is communicated are critical to its success. When writing a study protocol, teams need to consider how feasible it will be to do all the things planned. The logistics of the study design need to be practical in the real world. If they are not, many protocol amendments will be needed because, once the study is up and running, it becomes clear that certain activities just can’t be done the way they were planned, or site staff misunderstand what they are meant to do. Protocols must present the ideas and activities in a clear and consistent way to make sure that everyone involved truly understands the intention of the study, what is meant to happen, and when. The user-friendliness of the protocol plays a huge role in ensuring that the same activities happen in the same way across all sites, and it will ultimately be reflected in the consistency of the data collected across sites.


Appropriately accounting for all these factors contributes to a study protocol that is more likely to avoid misunderstanding by investigators (thereby avoiding protocol violations and difficulties in subject enrolment, for instance), is less expensive and more practical to run, and thus overall is more effective.1,3,5


Common problems that reduce protocol effectiveness


The most common problems with study protocols are as follows:

• too many objectives (a lack of precision)

• too many inclusion and exclusion criteria and too many secondary variables (a lack of simplicity)

• too many activities planned (a lack of practicality)

• inconsistency and poor communication of the intentions (a lack of clarity)

A 2012 study by the Tufts Center for the Study of Drug Development in Boston, Massachusetts, found that a typical protocol has an average of seven objectives and 13 endpoints.7 If a study is trying to answer seven different key questions, that is probably five too many to truly answer any properly!

“It has been estimated that the cost of activities included in studies not considered essential to the objectives and endpoints is between US$4bn and US$6bn each year. These are substantial costs that could be avoided.”


Consider this: how many secondary variables does it take to be sure you will be able to answer a question posed by a key objective (e.g., does this dose of the drug work)? Teams need to think about how the data collected in the study will be used and reported later in submission documents to tell the ‘big picture’ story. Often, less is more, and having too much tangential information can simply cloud the picture and distract from what could otherwise be a crisp, clean message. Teams also need to keep in mind that drug development and the task of getting a drug approved is a different exercise from a scientific exploration of the many facets of a drug. They need to ask themselves what data are critical to demonstrate that the drug works and is safe in the particular indication being sought. If different endpoints simply give the same answer in another way, the most relevant tests should be selected and the others removed. Let’s face it, showing that a drug works (or doesn’t work) in 10 different ways is probably not needed; five good, solid ways are certainly sufficient.

Even if the study design is simple and practical, inconsistent and confusing information will not aid the usability of the protocol. The different sections of a protocol should fulfil their intended purpose with minimal repetition between sections. For example, the introduction should make clear why the study is needed; the objectives section should clearly state the (limited) aims of the study; the assessments section should identify what is to be measured to answer the stated objectives; and the statistics section should make clear how the data collected from the assessments will be analysed.

