Over two thousand years ago, humans invented natural numbers to make it easier to "count sheep." The concept of "one sheep" was easy to grasp, followed by "two sheep" and "three sheep," and so on. People soon realized that counting this way was an immense task, so at a certain point, they stopped counting and replaced it with "many sheep." Different civilizations stopped counting at different points. As civilization developed, we gradually became able to support specialized people to think about "numbers"—individuals who didn't have to worry about their livelihood. These wise people established the concept of "zero" (no sheep). They weren't prepared to name all natural numbers; instead, they developed various counting systems and named natural numbers as needed, using digits like "1", "2", etc. (Romans used symbols like "I", "V", "X"). Thus, mathematics was born.
People were accustomed to lining up sheep in a row to count them, which led to the concept of the number line: numbers marked at equal intervals along a straight line. In theory the number line extends without end, but for convenience we only mark it up to some value and draw an arrow to show that it continues. Thinkers throughout history realized that the line could represent infinitely many numbers, but a sheep trader might not have grasped this; it was beyond their imagination.
If you were persuasive enough, you could convince someone to buy a sheep you didn't actually have, giving rise to the concepts of debt and negative numbers: by selling this imaginary sheep, you effectively had "negative one" sheep. This situation led to the creation of the integers, composed of the natural numbers and their opposites (the negative numbers).
Poverty obviously appeared before debt. Poverty meant some people could only afford half a sheep, or even a quarter. Thus, fractions were born—formed by dividing one integer by another, such as 2/3 or 111/27. Mathematicians call these numbers rational numbers, and they fill the gaps between integers on the number line. For convenience, people invented decimal point notation, using "3.1415" to replace the lengthy 31415/10000.
After some time, people discovered that some numbers used in daily life could not be expressed as rational numbers. The most famous example is the ratio of a circle's circumference to its diameter, denoted PI. This led to the so-called real numbers, which include the rational numbers along with irrational numbers like PI, whose decimal expansion requires infinitely many digits after the decimal point. Real-number mathematics is considered by many to be one of the most important fields in math because it is the foundation of engineering; humans used real numbers to create modern civilization. The coolest part is that the rational numbers are countable, while the real numbers are uncountable. The field studying natural numbers and integers is called discrete mathematics, while the field studying real numbers is called continuous mathematics.
In fact, real numbers are merely a convention of our civilization. Some well-known physicists have argued that real numbers are an illusion, because the universe is discrete and finite. If the physical world is composed of a finite number of discrete things, we can only ever count up to some fixed value, because at that point we would have finished counting everything in the world (not just sheep, but toasters, repairmen, and so on). If so, discrete mathematics alone can describe the entire universe, requiring only a finite subset of the natural numbers (large, but finite). Perhaps somewhere in the universe there is a civilization far beyond our technology that has never heard of continuous mathematics, basic calculus, or even the concept of infinity. They never use PI; instead, they use 3.14159 (or, more precisely, 3.1415926535897932384626433832795) to build a perfect world.
(The example in The Three-Body Problem I, in which aliens mark a single point on a spaceship to carry an entire encyclopedia, is exactly this idea.)
So why use continuous mathematics at all? Because it is extremely useful in engineering. It is worth noting, however, that the "real numbers" we actually compute with are discrete approximations. What should designers of 3D virtual worlds keep in mind? Just like in the real world, you will be dealing with a finite collection of discrete objects, and you can describe a 3D virtual world with the data types C++ provides, such as short, int, float, and double. A short is (typically) a 16-bit integer that can represent 65,536 distinct values; that may sound like a lot, but it is nowhere near enough for measuring the real world. An int is (typically) a 32-bit integer that can represent about 4.2 billion distinct values. A float is a 32-bit floating-point number: it also has about 4.2 billion bit patterns, but they are spread non-uniformly, dense near zero and sparse far away from it. A double is like a float but 64 bits wide, trading memory for precision.
The key to choosing units of measurement for a virtual world is selecting the discrete precision. A common misconception is that short and int are discrete while float and double are continuous; in practice, all of these data types are discrete. Older computer graphics textbooks usually recommended integers because the hardware of the day handled floating-point arithmetic much more slowly than integer arithmetic; on modern hardware this advice is outdated. So how should precision be chosen? By the First Rule of Computer Graphics:
The First Rule of Computer Graphics: The Principle of Approximation—If it looks right, it is right.
(PI in a computer is not a real number; we simply treat the commonly accepted decimal approximation as if it were.)
Afterword
1D math involves one number line (this interpretation assumes numbers are continuous), 2D math involves two number lines (2D Cartesian coordinate system), 3D math involves three number lines (3D Cartesian coordinate system)...
References
- 3D Math Primer for Graphics and Game Development