Women’s health
Advancing point of care testing in women’s health
Martha Mackenzie looks at the history of point of care testing in women’s health and how diagnostics continues to advance and evolve to meet the needs of the female population.
Since time immemorial, women’s health has sought point-of-care testing (POCT) solutions to provide insight and support in the diagnosis of a plethora of diseases and conditions. Throughout medical history, there is well-documented evidence of an age-old quest for diagnostics that could be performed in a timely way, convenient to both patient and clinician. While the reliability of some of the approaches can be questioned, the notion that bodily fluids held the key to diagnosis has proven over time to have a sound medical basis. As long as 4,500 years ago, depictions on ancient Egyptian papyrus illustrate urine testing as a means of confirming pregnancy. The papyri depict women urinating on wheat and barley seeds over several days. If the barley sprouted first, the woman was pregnant with a boy; if the wheat grew first, she was pregnant with a girl. If neither seed sprouted, she was not pregnant.1
During the classical period, over two thousand years ago, physicians diagnosed urinary tract infections by pouring urine on the ground: if insects were attracted to it, the test was considered positive for infection.2 Prophets during the Middle Ages claimed to be able to predict pregnancy with a variety of bizarre urine tests. It was believed that a pregnant woman’s urine would rust a nail, change the colour of a leaf, or be home to tiny, living creatures. Based on what we know today, it is unlikely that any of these tests could correctly detect pregnancy; how the nails, leaves and insects were calibrated remains elusive, as does any evidence of internal and external quality control! Medieval physicians looked to their patients’ liquid excretions for immediate insight into sexual history, systemic disease, and impending death. By the thirteenth century, uroscopy, the examination of urine for the purposes of diagnosis and prognosis, was becoming the centrepiece of the practice of medicine. St Cosmas, the patron saint of medicine, is rarely depicted without a sample of urine.
There are multiple medical texts with illustrated diagrams, suggesting that uroscopy was a widely used diagnostic tool at the time and that weight was given to its clinical significance. Indeed, Hippocrates, universally recognised as the father of modern medicine, believed that the diagnosis and treatment of disease should be based on the observation and consideration of the four liquids, namely blood, phlegm, bile and urine.3

Fast forward to 1674, when Thomas Willis, an Oxford University physician, recorded the first documented instance of point-of-care diagnosis using the sense of taste. Noting his observations in Pharmaceutice Rationalis, he remarked: “but why that it [urine] is wonderfully sweet like sugar or honey? This difficulty is worthy of explanation.” Willis never established why his specimen was sweet, but his observations helped future researchers isolate the cause of diabetes.4 So, while the tasting of urine might be considered suffering for one’s art, it underlines the medical and scientific community’s enduring commitment to developing diagnostic tests and methods for use at the point of care, even when working with imperfect medical data.

It is therefore unsurprising that contemporary medicine and medical science continue to push the boundaries of new diagnostic techniques. During the early 19th century, in the absence of central laboratories, physicians routinely analysed the physical properties of blood and urine for diagnostic purposes; tools for diagnosis had to be small enough to take along on house calls. By this time, chemical analysis was rapidly becoming more sophisticated. However, many physicians lacked confidence in their scientific ability to perform such analysis, and adoption lagged far behind invention. There was also a growing realisation that, for tests repeated on a large scale, a degree of centralisation might offer economies of scale, particularly as protocols became more scientifically complex. For a medical test to be useful it needed to be actionable, and the information provided had to be delivered in an acceptable timeframe. The mid-19th century saw growth in the UK hospital network which, combined with advances in chemistry and cell staining, brought new chemical, physical, and microscopic tests.