algorithms to largely dispel the need for qualitative and philosophically regulated modes of research and analysis. However, this rides roughshod over the clear need to understand how we engage with such an unprecedented plethora of data, how and why we categorize it and, at a more fundamental level, how it will impact the way we learn, and ultimately how this metamorphoses the multitude of human reaction functions. As Foucault observed: “People know what they do; frequently they know why they do what they do; but what they don’t know is what what they do does.” (Madness and Civilization: A History of Insanity in the Age of Reason, 1961). In perhaps simpler terms, the point is this: from a theory of research perspective, how do we ‘know’ what questions we wish to ask of all this data, and the validity of which hypotheses or phenomena do we wish to test? This applies as much to psycho-social behavioural research as it does to the life and physical sciences.

GIGO (“Garbage in, garbage out”) was an acronym from the early age of computational science and communications technology, though the more recent variation, “Garbage in, gospel out”, is perhaps the better one, above all in the context of the questions above. The latter form highlights a type of progression: the ability to digitally record data mitigates, to a certain extent, some of the risks of faulty, imperfect or imprecise inputs, but it certainly does not preclude the risk of a faulty processing algorithm. That risk becomes far more acute with regards to ‘Big Data / AI’, in so far as the only way to generate the data and analyse it is via a computer, per se excluding many from delving into the mechanics of the output from such analyses, even if their individual and collective judgement arouses suspicions about the veracity and validity of the output.
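To make the distinction between faulty inputs and faulty processing concrete, consider a minimal sketch (the two desks and all of the figures are invented for illustration, not drawn from any real dataset): every input is recorded perfectly, yet a carelessly chosen aggregation step still produces a headline number that is confidently wrong.

```python
# Illustrative sketch only: the two desks and their figures are invented.
# The inputs are recorded perfectly; the error lives in the processing step.

# (desk, number_of_trades, number_of_profitable_trades)
records = [
    ("desk_a", 10_000, 5_200),  # 52% hit rate, large sample
    ("desk_b", 100, 90),        # 90% hit rate, tiny sample
]

# Faulty processing: an unweighted average of per-desk hit rates.
naive = sum(wins / trades for _, trades, wins in records) / len(records)

# Sounder processing: weight each rate by the number of trades behind it.
weighted = sum(wins for _, _, wins in records) / sum(trades for _, trades, _ in records)

print(f"naive 'gospel' hit rate: {naive:.1%}")     # 71.0%
print(f"trade-weighted hit rate: {weighted:.1%}")  # 52.4%
```

The inputs were never the problem; the processing step quietly answered a different question from the one being asked, and the result was nonetheless reported as gospel.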
If we accept that a key tenet of empiricism is that “knowledge is based on experience” and that “knowledge is tentative and probabilistic, subject to continued revision and falsification” (Shelley, M. (2006). Empiricism, in F. English (Ed.), Encyclopedia of Educational Leadership and Administration), then this highlights a possible shift in the paradigm of our investigations, one that is potentially, though not necessarily, rather more insidious: namely that the ostensible risk to credibility lies less, as it has previously, in faulty data, and rather more in the scope for faulty processing. It must be stressed that this is not about depicting ‘Big Data / AI’ in a negative light, but rather about underlining the need for heightened awareness of the processes of analysing and interpreting the output from ‘Big Data / AI’, and for challenging and questioning their underlying assumptions, both because this is uncharted terrain and out of a more fundamental need for intellectual rigour.

Let us be clear: it is not the models that are problematic, but rather understanding the metaphors, perspectives and paradigms which underpin them, in so far as they are often critical to organizing our thinking, investigations and explorations of a given subject, and they will frequently dictate the parameters by which we set and guide our progress towards a goal. A model only becomes problematic when it is allowed to supersede our rational function and becomes a static article of faith or a rigid procedure, thus evolving into what William Pfaff has called ‘dead stars’. He defines these as “ideas that people want or need to be true”, and as “theoretical formulations that are generally conceded to be false but have become conventional, and for which no replacement is evident, continue to be employed by people who certainly know better… There is a real dissociation of perception from analysis and decision.” (Barbarian Sentiments: How the American Century Ends, 1989).

Per se, the imperatives with regards to ‘Big Data / AI’ relate to being conscious of the complex methodological processes that underlie the analysis of the data. As CG Jung observed: “Consciousness is a precondition of being”. Any theory based on experience is necessarily statistical; that is to say, it formulates an ideal average which abolishes all exceptions at either end of the scale and replaces them by an abstract mean. This mean is quite valid, though it need not necessarily occur in reality. Despite this, it figures in the theory as an unassailable fundamental fact. ... If, for instance, I determine the weight of each stone in a bed of pebbles and get an average weight of 145 grams, this tells me very little about the real nature of the pebbles. Anyone who thought, on the basis of these findings, that he could pick up a pebble of 145 grams at the first try would be in for a serious disappointment.
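Jung’s pebble example is easy to reproduce. In the minimal sketch below the weights are invented, chosen only so that they average exactly 145 grams; the point is that the mean can be a perfectly valid summary of the set while describing no individual member of it.

```python
# A minimal sketch of Jung's pebble example; the weights are invented,
# chosen only so that the average comes out at exactly 145 grams.
pebbles = [100, 110, 120, 170, 180, 190]  # weights in grams

mean = sum(pebbles) / len(pebbles)
near_mean = sum(abs(p - mean) <= 10 for p in pebbles)

print(f"average weight: {mean:.0f} g")                      # 145 g
print(f"pebbles within 10 g of the average: {near_mean}")   # 0
```

The abstract mean is unassailable as arithmetic yet useless as a description of any pebble one might actually pick up, which is precisely the caution urged here when reading statistically generated output from ‘Big Data / AI’.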