ADVERTISEMENT

AI: Bogeyman, Hero, or Sidekick?

Written by Lam Nguyen-Bull

There is a lot of heightened anxiety about the growing prevalence of Artificial Intelligence (AI) in the tools we use every day and its apparent growing ability to
mimic the human voice – "will AI replace me in my job?" "How do I know if I'm interacting with a real person?" There is also competitive pressure on Edulog to use AI in our software, or at least to claim that we do. Yet both the anxiety and the pressure are premature, and will remain so for the foreseeable future. That said, we have had to grapple with the evolving role of AI in our world, present and future.

First, let's agree that the term "AI" as used by most non-computer scientists today refers to a particular kind of AI – the kind built on neural networks, or large language models. This is the AI most of us have been exposed to: funny pictures created by DALL-E, or sometimes chillingly realistic text exchanges with ChatGPT. But "artificial intelligence" has existed for as long as computing machines have, and until the more recent development of neural-network AI, computing was about crunching numbers and applying mathematical rules. Edulog's own routing software is built on a pure mathematical model – a complex one, but a mathematical one. The solutions it delivers are true solutions, not just collections of information that look like they might be solutions.

The New York Times recently explained that while today's AI (the kind built on neural networks) "can write poetry, summarize books and answer questions, often with human-level fluency," it often cannot be relied on to solve math problems. The article quoted Kristian Hammond, a professor at Northwestern University, who commented that AI "[has] difficulty with math because it was never designed to do it." Today's AI tools are not designed to handle mathematical problems because they are based on language models, not logic models. Language models are good at processing natural language – words, sentences, and paragraphs – but poor at processing mathematical language – symbols, equations, and proofs. They are also prone to errors such as ambiguity, inconsistency, and incompleteness, which can lead to incorrect or misleading results. This is why, for a period, Google's AI-assisted search results were providing answers that were grammatically correct but wrong in their substantive content: the language model shapes the structure of the information but can miss on the actual information provided.

For this reason, you should be skeptical of any routing or transportation software that claims to use AI to provide solutions. Ask questions. Many questions.

It is entirely possible that it will not take long for combination solutions to emerge which take advantage of the communication skills of AI and the computational skills of
more traditional computing software. Indeed, the same New York Times article reported that the popular tutoring website Khan Academy has begun to offer just such a combination solution in its AI chatbot: numerical problems are now sent to a computational program instead of being handed to the AI to solve. However, even these combination solutions have a key defect – AI lacks intuition, and the ability to replicate it. Today's AI can generate lots of simulations of solutions to math problems (such as how to route a population of students efficiently and safely). Tomorrow's combination solution might harness a real mathematical model, such as the one in Edulog's proprietary software, to generate true solutions and use AI to frame and present them. But AI cannot yet exercise judgment or intuition. It is not good at finding the optimal solution among many possible alternatives, the best compromise among conflicting objectives, or the most robust solution under uncertain conditions. Finally, mistakes are inevitable for any technology, but they
are especially dangerous for AI, because AI is not accountable, transparent, or explainable. It is not accountable because it is not responsible for its actions and cannot be punished or corrected. It is not transparent because it is not clear how it works, and it cannot be audited or verified. It is not explainable because it cannot justify its decisions, and it cannot be questioned or challenged. All of this makes the use of AI for complex tasks involving the safety of real people a question with serious public policy implications that deserve real consideration. What recourse is available if AI places a bus stop across a busy street? How do we know it won't happen again? Are the answers we currently have to these questions satisfactory if there is a catastrophic outcome?

This doesn't mean that there isn't room for AI or AI-assisted tools in the work that we do. But its inability to deliver true solutions and its lack of intuition and judgment mean that AI has a long way to go before it can replace real people using Edulog software.
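For readers curious what a "combination solution" looks like under the hood, the routing idea – send math to a deterministic calculator, send everything else to the language model – can be sketched in a few lines of Python. This is a purely illustrative toy, not Khan Academy's or any vendor's actual implementation; the function names and the language-model stub are assumptions for the sake of the sketch.

```python
import re

# Pattern for a pure arithmetic expression: digits, basic operators,
# parentheses, and whitespace only. Anything else is treated as prose.
ARITHMETIC = re.compile(r"[\d\s+\-*/().]+")

def solve_arithmetic(expression: str) -> str:
    """Evaluate a simple arithmetic expression deterministically."""
    if not ARITHMETIC.fullmatch(expression):
        raise ValueError("not a pure arithmetic expression")
    # The input is restricted to arithmetic characters above, so eval()
    # here acts as a rule-based calculator, not a general interpreter.
    return str(eval(expression))

def answer(query: str) -> str:
    """Route math to the computational engine; everything else would
    go to a language model (stubbed out in this sketch)."""
    query = query.strip()
    if ARITHMETIC.fullmatch(query):
        return solve_arithmetic(query)
    return "[language-model response would go here]"

print(answer("12 * (3 + 4)"))  # computed exactly, not guessed: 84
```

The point of the dispatcher is that the numerical answer comes from a rule-based computation that is right by construction, while the language model is reserved for the part it is actually good at: framing and explaining the result in natural language.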
If you’d like to learn more about how Edulog’s software continues to empower school districts across North America to safely and efficiently route their students with the most robust technology available today, visit our website at edulog.com/stn.
Lam Nguyen-Bull is the Chief Experience Officer at Education Logistics. She has worked as a business strategy consultant to mobile communications companies in the US & Europe and served as in-house legal counsel for transportation and logistics companies.