What if the world had no nuclear bombs? It’s a fanciful dream, and one that will likely never come true now that the technology is so widespread and so integral to many nations’ defense strategies – but at one point in time, it was a possibility. There was one algorithm, one method of decoding a signal, that almost prevented the entire nuclear arms race.
Stopping nuclear tests
The US had just dropped two atomic bombs on Japan, ending World War II and causing astonishing destruction that is still noticeable generations on. The world had its eyes opened to the power of these bombs, orders of magnitude more powerful than any explosion ever seen before, and the US had demonstrated that such devices could be dropped anywhere, at any time. However, the US understood that while it had the immediate advantage, it would not be long before rival nations developed their own, setting the stage for a standoff that could eradicate humanity.
As a result, the US held talks with the Soviets and other nuclear-capable nations with the ultimate goal of halting the development of nuclear arms, but (predictably) the nations were unable to trust each other enough to accept an agreement. The US continued to test nuclear bombs, other nations built their own, and an arms race began that produced ever bigger and more dangerous devices.
One US test famously went awry at Bikini Atoll, raining radioactive fallout on nearby islands and fishing vessels and causing acute radiation poisoning, forcing the nuclear nations back to the negotiating table; this time, the aim was to agree never to test nuclear arms again. For such an agreement to hold, each nation needed the technology to verify that a test had taken place – hydrophones could detect detonations under the sea, and residual radioactive particles in the atmosphere gave away above-ground tests. But underground tests? These posed the greatest challenge.
The Discrete Fourier Transform
Enter one of the most important algorithms in the history of technology. Be it a phone signal, Wi-Fi – or, as it so happens, the seismological readings produced by an underground nuclear test – a complex signal is made up of many sine waves of different frequencies that add together to form the final result, and it can be decoded if each frequency can be isolated. Think of it like a song: the drums, guitar, and vocals all combine to create it, but within that song are individual instruments and notes that can be picked out.
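To make the idea concrete, here is a minimal sketch of a composite signal built from a few pure tones. The sample rate, frequencies, and amplitudes are arbitrary illustrative choices, not values from the article:

```python
import numpy as np

# Purely illustrative: a "signal" built from three pure tones, loosely
# analogous to the drums/guitar/vocals in the song example above.
# The sample rate, frequencies, and amplitudes are arbitrary choices.
sample_rate = 800                       # samples per second
t = np.arange(0, 1, 1 / sample_rate)    # one second of time stamps

signal = (1.0 * np.sin(2 * np.pi * 50 * t)     # 50 Hz "drum"
          + 0.5 * np.sin(2 * np.pi * 120 * t)  # 120 Hz "guitar"
          + 0.2 * np.sin(2 * np.pi * 300 * t)) # 300 Hz "vocal"

# Sample by sample, the sum just looks like a messy wiggle, yet a
# Fourier transform can pull the three frequencies back out of it.
```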
The Discrete Fourier Transform (DFT) was the first method able to do this to a complex signal: it takes the N samples of a signal and multiplies them by sine and cosine waves at each of the frequencies that fit within that signal. It is a bit more complicated than that, and Veritasium has a great explainer on exactly how it works, but essentially, the more samples in a signal, the better the frequency resolution of the result, and also the more calculations that have to be made.
For example, if a signal has 8 samples, then each sample needs to be multiplied against 8 frequency components, resulting in 64 calculations. This is called an O(N²) algorithm (N² because N numbers need to be multiplied N times). O(N²) algorithms are not very efficient: when you scale a signal up to thousands or millions of samples, the number of calculations becomes overwhelming, and even today's computers can begin to struggle, let alone the computers of the mid-1900s.
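A naive implementation makes that scaling obvious. The sketch below is illustrative only; it follows the textbook definition of the DFT, with the sine and cosine terms packed into one complex exponential, and the nested loops show where the N × N multiplications come from:

```python
import numpy as np

def naive_dft(x):
    """Textbook O(N^2) DFT sketch: each of the N output frequency bins
    is a sum over all N input samples, with each sample multiplied by a
    cosine and sine (packed here into one complex exponential)."""
    N = len(x)
    X = np.zeros(N, dtype=complex)
    for k in range(N):           # N frequency bins...
        for n in range(N):       # ...each touching all N samples
            X[k] += x[n] * np.exp(-2j * np.pi * k * n / N)
    return X                     # N * N multiplications in total

# 8 samples -> 8 x 8 = 64 complex multiplications, matching the count above.
samples = np.sin(2 * np.pi * np.arange(8) / 8)
print(np.round(naive_dft(samples), 3))
```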
How does this relate to our nuclear tests? Well, underground nuclear tests can be detected with seismometers, but they must somehow be distinguished from the background noise of small earthquakes, of which there are many each day. The Discrete Fourier Transform can do this for us, but the computers of the time would have taken years to decode each signal, which doesn’t really work well for accountability.
The Fast Fourier Transform – an algorithm for the history books
John Tukey, a mathematician who served on the US President’s Science Advisory Committee, and James Cooley, a researcher at IBM, developed a much faster way of computing the DFT, which became known as the Fast Fourier Transform (FFT).
As the name suggests, the FFT is significantly faster than the DFT: it exploits the fact that the sine and cosine waves overlap at specific points across the samples, so redundant calculations can be eliminated. Instead of O(N²), the FFT needs only about N log₂(N) operations, making it dramatically faster. Where the DFT needed 64 calculations for 8 samples, the FFT needs just 24, and the advantage grows as the signal gets bigger: with thousands or millions of samples, the FFT needs orders of magnitude fewer calculations than the DFT.
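The difference is easy to see with rough operation counts. The figures below simply echo the scaling described above (N² versus N log₂ N); real implementations differ in constant factors, but the trend is what matters:

```python
import math

# Rough operation counts only, echoing the scaling in the text;
# real implementations differ in constant factors.
for n in (8, 1_024, 1_000_000):
    dft_ops = n * n               # O(N^2)
    fft_ops = n * math.log2(n)    # O(N log2 N)
    print(f"N = {n:>9,}:  DFT ~ {dft_ops:>16,.0f}   FFT ~ {fft_ops:>13,.0f}")
```

For a million samples, that works out to roughly a trillion operations for the DFT versus about twenty million for the FFT.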
Tragically, though, Cooley and Tukey did not publish the FFT until 1965, by which point other major nations had joined the US and the Soviet Union in becoming nuclear powers. It was now too late to sign a comprehensive test ban; instead, nuclear tests were forced underground, where testing ramped up to a remarkable rate of around once per week.
The FFT has since found uses in almost every communications and data-signalling application humans have created, making it one of the most important algorithms ever designed. Yet it had the potential to be greater still: had it come just a few years earlier, it might have been the algorithm that stopped the nuclear arms race.