Before you succumb to the temptation of intense number-crunching, understand the underlying real-world analog environment in which your model and its assumptions actually live.
I recently had a chat with a newly graduated EE at a family event and was somewhat surprised by what he casually said (although maybe I shouldn’t have been). He claimed that with so much computing “horsepower” now available for so little, almost any technical-analysis problem could be solved by going “right away” to number crunching.
That seemed somewhat naïve and simplistic to me, but maybe he had a point. He even followed with a non-electronic example to show me that his “expertise” went beyond electronics and that today’s graduates are really multidisciplinary. He pointed out that for analysis of mechanical shapes and their strengths, you can simply use a finite element analysis (FEA) package which divides the structure into thousands or even millions of tiny cells. Then it does innumerable calculations on the resultant mesh to determine the stress and associated strain on each cell, given the material’s properties.
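To make the FEA idea concrete, here is a minimal one-dimensional sketch of exactly what that paragraph describes: a bar is divided into small elements, a stiffness matrix is assembled over the mesh, and stress and strain are recovered per element from the material's properties. The bar, its dimensions, and the load are my own illustrative assumptions, not anything from the column; a real FEA package does this in 3-D over millions of cells.

```python
import numpy as np

# Illustrative (assumed) problem: a steel bar fixed at one end, pulled at the other
E = 200e9      # Young's modulus, Pa (steel)
A = 1e-4       # cross-sectional area, m^2
L = 1.0        # bar length, m
n_elem = 100   # mesh density: number of tiny elements
F = 1000.0     # axial load at the free end, N

le = L / n_elem
# Stiffness matrix of a single 2-node axial element
k = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Assemble the global stiffness matrix over the whole mesh
K = np.zeros((n_elem + 1, n_elem + 1))
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += k

f = np.zeros(n_elem + 1)
f[-1] = F                      # load applied at the last node

# Fix node 0 (boundary condition), then solve K u = f for displacements
u = np.zeros(n_elem + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

strain = np.diff(u) / le       # strain in each element
stress = E * strain            # stress in each element, Pa
print(f"tip displacement: {u[-1]:.3e} m, stress: {stress[0]:.3e} Pa")
```

For this trivial uniform bar the answer matches the hand calculation (stress = F/A everywhere), which is the point: the mesh and the matrix machinery are only as good as the model fed into them.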
I didn’t feel like getting into a spirited debate right there, but I have been a regular reader of Tony Abbey’s monthly tutorial column “Abbey’s Analysis” in Digital Engineering (hey, it looks as if I am multidisciplinary, too). Abbey often shows where the standard FEA default models for certain joints, connections, and other structures have serious weaknesses (figuratively and literally), so in many cases more-accurate models need to be devised and configured by the user. If you don’t do this, you can get detailed results with reassuringly high precision, but with significant errors.
Coincidentally, I have been reading a recently published book, “Eight Years to the Moon: The History of the Apollo Missions” by Nancy Atkinson, Figure 1. The book not only highlights individuals and personalities, as it should, but also discusses some of the many analytical unknowns that had to be figured out for the first time. For example, the mechanics of orbital paths, rendezvous with another spacecraft in orbit, and navigation and guidance between Earth and Moon—all of which we now take as routine—were not well understood.
Smart people, many of them working at Draper Laboratories under the direction of Dr. Charles Stark Draper and Richard H. Battin (his book is the classic introductory text, **Ref. 1**), looked at the complex equations of orbits and perturbations, especially those for getting two spacecraft to meet. Even though they had access to the best computer of the time (an IBM 7094 mainframe, **Ref. 2**), the issue that consumed most of their time early in the project was figuring out which equations were the right ones under the various orbital and link-up trajectories and scenarios.
They had to grapple with critical corrections to account for data, tracking, and other errors, all while balancing fuel, time, and uncertainty; for many reasons, you often don’t get a second chance. It was all so new that Apollo 11 astronaut Edwin “Buzz” Aldrin did his MIT doctoral thesis, “Line-of-Sight Guidance Techniques for Manned Orbital Rendezvous,” on this subject in 1963—several years before any rendezvous had been attempted—and was one of the few who had studied the subject in detail, Figure 2 and **Ref. 3**.
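As a tiny taste of what “figuring out which equations were the right ones” means, here is a back-of-envelope sketch of one classic rendezvous relationship: the phasing condition for a coplanar, Hohmann-style transfer between two circular orbits. The orbits, the simplified two-body model, and the numbers are my own textbook assumptions for illustration; they are not from Battin’s text or the Apollo work itself, which had to handle far messier realities.

```python
import math

MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
r1 = 6_678_000.0         # chaser's circular orbit radius (~300 km altitude), m
r2 = 6_778_000.0         # target's circular orbit radius (~400 km altitude), m

# Transfer ellipse: semi-major axis and half-period (time of flight)
a_t = (r1 + r2) / 2.0
t_transfer = math.pi * math.sqrt(a_t**3 / MU)

# During the transfer, the target sweeps through omega2 * t_transfer radians.
omega2 = math.sqrt(MU / r2**3)

# For both craft to arrive at the same point, the target must lead the
# chaser by this phase angle at the moment of the burn:
phase = math.pi - omega2 * t_transfer    # radians

print(f"transfer time: {t_transfer / 60:.1f} min, "
      f"required lead angle: {math.degrees(phase):.2f} deg")
```

Even this idealized version shows the character of the problem: get the geometry and timing relationship wrong, and no amount of arithmetic precision will put the two spacecraft in the same place at the same time.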
Despite our powerful processors, there’s still plenty of analog analysis being done before any rush to calculation and number crunching. I just came across a paper addressing an interesting and important problem: how does a landing craft determine true North on Mars? It turns out that this is not easy. The paper, “Determining True North on Mars by Using a Sundial on InSight” (InSight is a Mars lander mission), looked at using that ancient navigation technique; it involves much more than just measuring a shadow angle at a known time and looking it up in a pre-calculated solar table.
First, the described algorithm worked through a detailed analysis based on geometry and trigonometry, identifying where simplifications were possible—all very analog stuff. Only once this analysis was done was computing “power” used to calculate the details and produce an answer. Throwing countless CPU cycles at the problem before that analysis would have been premature, as it was first necessary to fully understand the multiple relationships among all the elements of the puzzle.
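A much-simplified, Earth-style sketch shows the flavor of that geometric analysis: given latitude, solar declination, and local hour angle, spherical trigonometry yields the sun’s true azimuth; the shadow points the opposite way, so a shadow bearing measured in the lander’s arbitrary frame reveals where true North lies. To be clear, these are standard textbook formulas with assumed numbers, not the InSight paper’s algorithm, which must also handle Mars’ orbit, its time systems, and many error sources.

```python
import math

def sun_azimuth(lat_deg, decl_deg, hour_angle_deg):
    """True azimuth of the sun, degrees east of north (spherical-trig textbook form)."""
    lat = math.radians(lat_deg)
    dec = math.radians(decl_deg)
    ha = math.radians(hour_angle_deg)
    # Solar elevation; a shadow only exists if the sun is above the horizon
    sin_el = (math.sin(lat) * math.sin(dec)
              + math.cos(lat) * math.cos(dec) * math.cos(ha))
    if sin_el <= 0:
        raise ValueError("sun below horizon; no shadow to measure")
    # Azimuth from the same spherical triangle (atan2 avoids quadrant errors)
    az = math.atan2(-math.cos(dec) * math.sin(ha),
                    math.sin(dec) * math.cos(lat)
                    - math.cos(dec) * math.sin(lat) * math.cos(ha))
    return math.degrees(az) % 360.0

# Assumed scenario: mid-morning at 40 deg N latitude, declination +10 deg.
az = sun_azimuth(lat_deg=40.0, decl_deg=10.0, hour_angle_deg=-30.0)

# If the shadow is measured at 37 deg in the lander's arbitrary frame, the
# frame's zero axis points (az + 180 - 37) deg east of true north.
north_offset = (az + 180.0 - 37.0) % 360.0
print(f"sun azimuth ~ {az:.1f} deg; frame offset from true north ~ {north_offset:.1f} deg")
```

Note that all the real work is in deriving and validating those angular relationships; the CPU’s contribution is a handful of trig calls at the very end.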
Have you ever been told that solving a problem is just a matter of more processing power? Have you ever “rushed to compute” (not the same as estimating) before doing proper diligence and analysis of the problem?