The Trigamon Project

Article 5: An Alien Mathematical Framework



           

Introduction

THIS ARTICLE IS IN PROGRESS AND SUBJECT TO CHANGE

On Earth, math is fairly universal, which makes it hard to imagine any other type of mathematical framework. However, to extraterrestrials, our system of mathematics may be completely nonsensical. For instance, they may be completely dumbfounded that a civilization as advanced as ours still has not figured out how to divide by zero. (Spoiler: you can.)

I've tried to write this article several times, under several titles including "Everything you know about set theory is wrong," and each time it strikes me how, in trying to make things simple, trained mathematicians (we're talking professors, experts, and the people who told you that you were wrong) get things entirely wrong. Sometimes it's because they're using bad definitions (as in the case of the size of infinite sets) and sometimes it's because they're relying on faulty assumptions (such as the law of the excluded middle).

Personal anecdotes aside, in this article, instead of standing on the shoulders of giants, we look at the ground under their feet and find that it's not as solid as you've been told to believe.


Logic

Proof by Observation

On Earth, all of mathematics is based on logical axioms, which are, by default, assumed to be true. Thus, if you doubt one of these axioms, all of Earth mathematics falls apart. But basing mathematics on assumptions is sub-optimal, so let's base it on something else. Or rather, nothing else.

Basing mathematics on nothing leaves us with a form of circular reasoning, which is famous as a logical fallacy: things are true because I say they are, and I say things are true because they are. While this argument obviously doesn't hold water on its own, we can actually start here. To put a circular-reasoning argument into logical notation:

A → B
B → A

The two statements above don't prove that A or B is true. But they do prove something else extremely valuable. From the previous statements, we can derive:

A = B

This derivation goes both ways. And it means that if we can prove A, we have proved B and vice versa. Any kind of proof will work.
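The two implications can be checked mechanically. A minimal Python sketch (the `implies` helper is just material implication, introduced here for illustration; it is not notation from the article):

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# Brute-force check over every Boolean assignment: A -> B and B -> A
# together hold exactly when A and B have the same truth value.
for a, b in product([True, False], repeat=2):
    both_directions = implies(a, b) and implies(b, a)
    assert both_directions == (a == b)

print("A -> B and B -> A together hold exactly when A = B")
```

So proving either side of the circle settles the other, which is all the derivation above claims.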

Proof by observation is perhaps the simplest kind of proof. It is usually used to show that something is possible, since only one example needs to be found. But to prove that something is always true, a proof by observation of a single example is insufficient. We have to add a second step.

Proof by observation is something rarely used on Earth, because it takes very careful definition to utilize, but it's ultimately how Fenyrans proved that addition works (and thus, that all of arithmetic works). This was originally proved with apes and stones: count one pile of stones, count a second pile, push the piles together, and count the combined pile to confirm it matches the sum.

Note that this works because counting is already defined in the circular definition that is being verified. So in effect, we are validating a scientific model, albeit an extremely simplistic one.

Note that the key in this proof is that you don't modify the stones; otherwise, one can become two can become three as you break apart your clods of dirt. This proof of mathematics was so successful that it won its discoverer the first ever prize in logic. (This prize had no name, as names for prizes hadn't yet been invented.)
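The stone-counting procedure can be sketched in a few lines of Python. This is a loose illustration under the article's own assumption that counting is the only operation already defined; the names are made up for the sketch:

```python
def count(pile):
    # Counting is the one operation the proof assumes is already defined.
    total = 0
    for _stone in pile:
        total += 1
    return total

pile_a = ["stone"] * 3
pile_b = ["stone"] * 4

# Combine the piles without modifying any stone, then count again.
combined = pile_a + pile_b

# Observation: the count of the combined pile matches the claimed sum.
assert count(combined) == count(pile_a) + count(pile_b)
print(count(pile_a), "+", count(pile_b), "=", count(combined))
```

As the article notes, this validates the circular definition the way a scientific model is validated: the observation agrees with the prediction, provided no stones are broken along the way.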

The Law of the Included Middle

In boolean logic, a statement can only be true or false. While this restriction is what makes boolean logic work, it's ultimately limiting. And then there are statements like the following:

"This statement is false."

When written in logical notation, the prior statement is as follows (note that we use !A to represent NOT(A); NOT can also be written with a '~'):

A → !A

This should make the statement false. However, if the statement is false, then it is true, and vice versa. Some people have attempted to remedy this by placing recursive statements outside the realm of boolean logic, but this is just a proverbial band-aid on top of a bigger problem.

As a result, Fenyrans were initially forced to abandon boolean logic, or at least the premise that proving something true proves that it is not false. In practice, however, this was not as big a problem as one would think: usually, if you can prove that something is true, you can rearrange the proof to show that the opposite is false. Alternatively, if you can prove that something cannot be both true and false (and must be one or the other), then you can apply boolean logic as on Earth.

And while this seems difficult, in practice such proofs are usually trivial, often aided by definitions. Ultimately, though, it does make a slight difference when dealing with a few theoretical problems.

Arithmetic

Dividing by Zero

Dividing by zero is probably the most famous math problem on Earth with no solution. Technically, some say it's undefined, but that's just because we haven't defined it yet. There's no reason we can't define it just as we define division by any other finite number. While this may seem intuitive for (at least some of) us Earthlings, it could easily confuse aliens whose societies developed math differently.

The ability to divide by zero doesn't come directly from the different logical framework of Fenyran mathematics. Just because something isn't defined doesn't mean we can't define it. And in this case, we can. To define division by zero, we are brought back to the reason division exists in the first place: the practical application. To teach division by zero, one might see the following word problem in a Fenyran textbook:

Problem: If you have a bucket of 5 (five) cookies, and you are tasked with giving zero of them to as many people as you can, how many people can you give cookies to? (assume the cookies are indestructible). How many cookies do you have left over when you are finished?

This is equivalent to the equation x = 5 ÷ 0. In the problem, x is the number of people you can give cookies to. "How many cookies do you have left over?" is the remainder. To generalize this, the next questions would be: If you are distributing zero cookies, does the number of people you can give cookies to depend on the number of cookies you have? Does it change the number of cookies that you have left over?

The answers are no and yes.

From this we can conclude (albeit not as a formal proof):

K ÷ 0 = ∞, Remainder K

If you thought the above was fairly obvious, congratulations; you aren't alone. This is a massive cosmic embarrassment for our species, both in the Trigamon Universe, and probably also in the real one.
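The cookie-distribution rule can be sketched directly in Python, using `math.inf` to stand in for the article's ∞. The function name is made up for this sketch; it just packages "how many recipients" and "what's left over" as a pair:

```python
import math

def fenyran_divide(k, d):
    # Division as repeated giving: how many people can each receive
    # d cookies from a bucket of k, and how many cookies remain?
    if d == 0:
        # Giving zero cookies never empties the bucket: infinitely many
        # recipients, with all k cookies left over.
        return math.inf, k
    return k // d, k % d

print(fenyran_divide(5, 0))  # (inf, 5)
print(fenyran_divide(5, 2))  # (2, 1)
```

Note that for nonzero divisors the function reduces to ordinary quotient-and-remainder division, so the zero case is an extension rather than a replacement.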

What about negative infinity?

Some people like to be smart and ask, "Why go all the way to infinity?" The answer is hidden in the original question: we are specifically looking for a maximum. Another frequently asked question is: why positive infinity? Why not negative infinity? This is a bit trickier to answer, but it is also hidden in the original question. We're specifically asking for the maximum number of people you can give cookies to. To get negative infinity, you would have to ask for the maximum number of people you could take cookies from.

At first glance, this seems like it should give you infinity also, since there's no difference between giving and taking zero. However, the difference isn't in the actions not being taken, but in how one is thinking about the problem. Taking is negative giving, so whatever answer you get is going to be the result of dividing by −0, and thus the result will be negative. The sign of the remainder will not change, however, since you will still have what you started with.

K ÷ −0 = −∞, Remainder K

Ultimately, it's a bit of a mind bender. To put the distinction more simply: if K ÷ 0 equaled −∞, giving would be taking.
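Conveniently, IEEE 754 floats already distinguish 0.0 from −0.0, so the signed-zero case can be sketched too. As before, the function name is invented for the sketch; `math.copysign` is used only to read off the sign of the zero:

```python
import math

def fenyran_divide_signed(k, d):
    # Extended sketch: a divisor of -0.0 models "taking" rather than
    # "giving" zero cookies, flipping the sign of the infinite quotient.
    if d == 0:
        sign = math.copysign(1.0, d)  # +1.0 for 0.0, -1.0 for -0.0
        return sign * math.inf, k     # the remainder keeps its sign
    return k // d, k % d

print(fenyran_divide_signed(5, -0.0))  # (-inf, 5)
print(fenyran_divide_signed(5, 0.0))   # (inf, 5)
```

The remainder stays K in both cases, matching the observation that you still have what you started with.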

Congratulations! You can now divide by zero!

Definitions

Infinite Sets are not like Finite ones!

Most mathematicians will tell you that there are the same number of even integers as integers. They're wrong, but they think that way because they use a nonsensical definition for the size of infinity.

Before we can come up with a better definition, we need to understand what's wrong with the standard one. For a set to be infinite, its size has to be infinity. And while this might seem obvious, it presents a problem for mathematicians. In the standard school of thought, infinity is not a number. By definition alone, this would mean the size of the set is not a number, so you have to use another technique for comparing sizes. And mathematicians typically go with the mapping method.

The mapping method says that if every number in one set can be paired with a number in the other set, then the sets are equal in size. This is obvious for finite sets, but it doesn't work when the sets are infinite, because then you get the bizarre conclusion I mentioned earlier. With infinite sets, you can also map one number in one set to multiple numbers in the other set, which you cannot do with finite sets. This really should have been a red flag.
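The red flag can be made concrete. The standard argument pairs each integer n with the even integer 2n; but over a sample of integers you can just as easily send each integer to two distinct evens (n ↦ 4n and n ↦ 4n + 2) and still land on every even number exactly once, something impossible between finite sets of equal size:

```python
def to_even(n):
    # The standard one-to-one pairing: n maps to 2n.
    return 2 * n

def to_two_evens(n):
    # A two-to-one-set pairing: each integer claims TWO distinct evens,
    # yet together the pairs still cover every even number exactly once.
    return (4 * n, 4 * n + 2)

sample = range(-5, 6)
assert all(to_even(n) % 2 == 0 for n in sample)

covered = sorted(m for n in sample for m in to_two_evens(n))
print(covered)  # every even from -20 to 22, each appearing exactly once
```

Under the mapping definition, both pairings are equally legitimate, which is the kind of flexibility the article is objecting to.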

There is also another red flag: with the definition most widely in use, it's impossible to prove that there exists a set of numbers with a size between the integers and the total set of numbers, and it's also impossible to prove that one doesn't exist.

This is Fixable

The fix to this problem requires a new definition and involves making infinity into a number. This means we have to give infinity a value; fortunately, there is one that makes a bit more sense than all the others: the number of positive integers. Defining infinity in this way gives us a lot of power. Now there are 2∞+1 integers and ∞+1 even integers (zero is even). It's now obvious which one is greater, and things finally make sense.

We can apply this definition to any set, such as the set of primes. To figure out the size of the set of primes, we need to figure out what fraction of integers are prime.
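The article leaves that fraction unstated, but it can be probed empirically. A quick count with simple trial division shows the fraction of primes shrinking as the range grows (this is an Earth-side illustration, not part of the Fenyran framework):

```python
def is_prime(n):
    # Trial division up to the square root -- slow but transparent.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

for limit in (100, 10_000):
    primes = sum(1 for k in range(2, limit + 1) if is_prime(k))
    print(limit, primes, primes / limit)  # 25/100, then 1229/10000
```

Whatever size this definition ultimately assigns to the primes, the shrinking fraction suggests it would come out smaller than ∞, the size of the positive integers.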

We can also apply this to sets of numbers larger than the integers, such as the rational numbers, but it gets a little tricky. A rational number is really two numbers: a numerator and a denominator. For every integer that can be a numerator, there are 2∞+1 integers that can be the denominator. We can treat ∞ as a variable x and then solve (2∞+1) × (2∞+1) as the polynomial (2x+1) × (2x+1). The result is that there are 4∞² + 4∞ + 1 rational numbers.
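Treating ∞ as a formal variable makes these sizes ordinary polynomials, so the multiplication can be checked with a few lines of coefficient arithmetic (coefficients listed lowest degree first, so 2x + 1 is `[1, 2]`):

```python
def poly_mul(p, q):
    # Multiply two polynomials given as coefficient lists,
    # lowest degree first.
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

integers = [1, 2]  # 2x + 1, i.e. 2∞ + 1 integers
rationals = poly_mul(integers, integers)
print(rationals)  # [1, 4, 4], i.e. 4∞² + 4∞ + 1
```

The product comes out as 4∞² + 4∞ + 1, matching the count of rational numbers given above.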

This makes a lot more sense than the previous method, and it's all doable by defining infinity as a number.

Finally, this brings us to uncountably infinite sets, such as the set of the irrational numbers. And while this method doesn't assign a size to that set, there's no reason why we have to know the size of everything.