hull length, waterline length, max beam, draft and displacement, and gets its idea of hull shape from qualifications like 'fair' or 'full length hard chine on waterline'. Neither IRC nor ORC measures sail or rig shapes anywhere near as accurately as ORC measures hull shapes, yet sail and rig designers spend their lives optimising these. Surely they feel second-rate to yacht designers? And owners spend a fortune each year on new sails, maybe even a new rig. They must know something the rulemakers do not?

We can look at this two ways. We could say it is a positive that sails and rigs are not judged by their true shape and so can be built to the optimum shape without worrying about the rating effect. Of course measuring sails the way we do already leads to typeforming of sorts, such as forcing a Code 0 headsail into a 'spinnaker' by respecting the latter's girth limits. One thing is for sure: establishing the performance-rating relation from the true shape of sails would lead to different choices. And cause a lot of stress! Imagine designing a sail to a less than optimum 3D shape because it rates better… Grrrr. And yet we freely accept that same risk for our hulls…

A system like IRC, on the other hand, mainly relies on designers getting it right; there is hardly any 'reward' or 'escape' in IRC for not producing the fastest hull shape within the few measured parameters. You can safely assume there are such 'go slow/rate better' trade-offs in ORC, where the focus is on the most accurate VPP for each hull. The consequence is that there will be less of a 'soft landing' in IRC for plain getting it wrong.

It is confusing, however, why what is marketed by ORC as the way to go for hulls and appendages would not be the way to go for sails and rigs. For a Dutchman at least, that could be my handicap! I know air is about 800 times less dense than water, but performance is not only related to drag! And all the while ORC is working with ratings to four decimal places(!!)… claiming or alluding to an accuracy that might be considerably more than a little optimistic. IRC works with three digits behind the dot, probably more realistic.

Of course today a tired or underperforming sail can easily be replaced with a brand new one without affecting the rating, which is not so easy for an underperforming hull. Replacing a rig is a major job for seagoing yachts and in most cases more complicated and more expensive than replacing a keel or rudder, but like sails it can be done without the change itself being rated… so also without losing age allowance, which is not the case for hulls.

Consequently, at the top end we have boats with not just inshore and offshore keels, rudders, rigs and sails, but also throwing water ballast and crew weight into the mix. Some then refine crew and total sailing weight further, introducing stored power, balancing rating credits and penalties as well as other advantages and disadvantages from trading away crew for better technology.

The finer details of ORC measurement really only kick in when using ORC's more complicated scoring options rather than a single number. Complexity, in measuring, rating or scoring, might facilitate better accuracy but it surely does not guarantee it! And we know it is more prone to inconsistency and mistakes where humans are involved, as in the case of measuring and race management. Also, mistakes are typically harder to track down in complex areas like hull files; it takes a blatant mistake to demand a repeat hull scan. But in simple areas mistakes often have larger consequences. Get inputs like bulb weight or boat weight wrong in IRC and you soon have a boat that is difficult to beat… or is labelled a dog.

I am mainly trying to get my mind around what kind of rated sailing would attract the most participants… surely an important topic? And approaching this from the two most extreme options: one, a rule in which all equipment is measured in great detail, pursuing ever closer results for a wide variation of craft based on ever 'better' science; and two, a rule that lets design and engineering flow freely within a few key measurements and their relationships, leading to ever closer results as both the rated and the unrated elements progress towards their optimum.

But when relying on progressing towards the optimum, so typeforming, we rely strongly on those in charge having a clear sense of where they want to go. Nothing is worse than goalposts being moved without clear reason or with an arbitrary change of direction. A sense of direction is best gathered through direct observation and client contact. What do sailors want? Which boats would they prefer to own and race? How? What? Where? When?

When relying on VPP and scoring perfection, in theory there is no optimum or typeforming, and equipment optimisation will focus mainly on unrated elements, like a clean hull or new sails. In practice, and we see this all around us, the scoring soon gets so complicated that events require expert support. In itself not so bad, as then client contact and observation are covered as well, but not that easy to deliver. It takes a special breed not to mind travelling that much, while having the technical as well as the social skills to deal with not just those organising the racing and scoring but also the competitors.

Taken in their purest form neither approach will be the solution. Yacht and equipment design for rated racing will always take advantage of the weak points of the rules it is designing to. And too much typeforming, all being forced by the rules towards similar boats and equipment, will not work either. The perfect balance between the two will then be an ever-moving target, with success measured by the number of certificates issued over a year; not a very accurate tool, as for most the quality of the races or events is the main decider.

Personally, I feel that high-end international yacht racing is best served by letting design, engineering and equipment optimisation flow more freely than for recreational yacht racing, which is best served by cost control, rule stability and equipment reliability, rather than replacing equipment that is functioning perfectly well for reasons of optimisation. Strangely, almost the diametrical opposite of what most would intuitively imagine, ie a complex system for top-end racing and a simpler solution at the grass-roots level, it seems better to apply 'simple' rating and scoring to the high end and a 'complex' rating and scoring system to recreational competition.

If this can be accepted then two types of yacht racing, which are hard if not impossible to merge, will both benefit from accepting a twin-track philosophy.

To be clear, I do not say that IRC is perfect for high-end international yacht racing, nor that ORC is perfect for recreational racing. And I certainly do not say that either is useless in the other arena. But by first accepting that high-end and recreational racing have quite different goals and needs, with each able to benefit from its own dedicated rule management, the result could be better racing and more stable growth for both.

I still like to think that one day in the distant future both constituencies could yet be served for their rating and scoring from under one roof, based upon identical science? But not on identical rules or measurement… that is a step too far. Let's not waste time dreaming of the impossible.
Rob Weiland, TP52 class manager

[Photo caption] When the great Californian designer Doug Peterson rocked up, aged 28, at the 1973 One Ton Cup in Sardinia he brought his first design, Ganbare; Peterson had failed to find any clients so built Ganbare himself with a family loan. After an 11-week build Ganbare walked the light-air North Americans, but not before some rating trouble when it came to the IOR inclining (above). Before Sardinia Peterson had just enough time to get the lead off the foredeck and remeasure before shipping out. In Italy his very first design would dominate the world's most competitive big-boat regatta, only missing out on a maiden One Ton Cup victory following a very 2nd-grade navigation error in the short offshore.

42 SEAHORSE