title | filename | chapternum |
---|---|---|
Computation and Representation | lec_02_representation | 2 |
- Distinguish between specification and implementation, or equivalently between mathematical functions and algorithms/programs.
- Representing an object as a string (often of zeroes and ones).
- Examples of representations for common objects such as numbers, vectors, lists, and graphs.
- Prefix-free representations.
- Cantor's Theorem: The real numbers cannot be represented exactly as finite strings.
"The alphabet (sic) was a great invention, which enabled men (sic) to store and to learn with little effort what others had learned the hard way -- that is, to learn from books rather than from direct, possibly painful, contact with the real world.", B.F. Skinner
"The name of the song is called `HADDOCK'S EYES.'" [said the Knight]
"Oh, that's the name of the song, is it?" Alice said, trying to feel interested.
"No, you don't understand," the Knight said, looking a little vexed. "That's what the name is CALLED. The name really is `THE AGED AGED MAN.' "
"Then I ought to have said `That's what the SONG is called'?" Alice corrected herself.
"No, you oughtn't: that's quite another thing! The SONG is called `WAYS AND MEANS': but that's only what it's CALLED, you know!"
"Well, what IS the song, then?" said Alice, who was by this time completely bewildered.
"I was coming to that," the Knight said. "The song really IS `A-SITTING ON A GATE': and the tune's my own invention."
Lewis Carroll, Through the Looking-Glass
To a first approximation, computation is a process that maps an input to an output.
{#computationinputtooutputfig .margin }
When discussing computation, it is essential to separate the question of what is the task we need to perform (i.e., the specification) from the question of how we achieve this task (i.e., the implementation). For example, as we've seen, there is more than one way to achieve the computational task of computing the product of two integers.
In this chapter we focus on the what part, namely defining computational tasks.
For starters, we need to define the inputs and outputs.
Capturing all the potential inputs and outputs that we might ever want to compute seems challenging, since computation today is applied to a wide variety of objects.
We do not compute merely on numbers, but also on texts, images, videos, connection graphs of social networks, MRI scans, gene data, and even other programs.
We will represent all these objects as strings of zeroes and ones, that is, objects such as
{#zerosandonesgreenfig .margin }
Today, we are so used to the notion of digital representation that we are not surprised by the existence of such an encoding. But it is actually a deep insight with significant implications. Many animals can convey a particular fear or desire, but what is unique about humans is language: we use a finite collection of basic symbols to describe a potentially unlimited range of experiences. Language allows transmission of information over both time and space and enables societies that span a great many people and accumulate a body of shared knowledge over time.
Over the last several decades, we have seen a revolution in what we can represent and convey in digital form.
We can capture experiences with almost perfect fidelity, and disseminate them essentially instantaneously to an unlimited audience.
Moreover, once information is in digital form, we can compute over it, and gain insights from data that were not accessible in prior times.
At the heart of this revolution is the simple but profound observation that we can represent an unbounded variety of objects using a finite set of symbols (and in fact using only the two symbols 0 and 1).
In later chapters, we will typically take such representations for granted, and hence use expressions such as "program $P$ takes $x$ as input" even when $x$ is not a string (with the understanding that $P$ takes as input the representation of $x$ as a string).
::: {.nonmath}
The main takeaways from this chapter are:
- We can represent all kinds of objects we want to use as inputs and outputs using binary strings. For example, we can use the binary basis to represent integers and rational numbers as binary strings (see naturalnumsec{.ref} and morerepressec{.ref}).

- We can compose the representations of simple objects to represent more complex objects. In this way, we can represent lists of integers or rational numbers, and use that to represent objects such as matrices, images, and graphs. Prefix-free encoding is one way to achieve such a composition (see prefixfreesec{.ref}).

- A computational task specifies a map from an input to an output: a function. It is crucially important to distinguish between the "what" and the "how", or the specification and implementation (see secimplvsspec{.ref}). A function simply defines which output corresponds to which input. It does not specify how to compute the output from the input, and as we've seen in the context of multiplication, there can be more than one way to compute the same function.

- While the set of all possible binary strings is infinite, it still cannot represent everything. In particular, there is no representation of the real numbers (with absolute accuracy) as binary strings. This result is also known as "Cantor's Theorem" (see cantorsec{.ref}) and is typically phrased as the statement that the "reals are uncountable." It also implies that there are different levels of infinity, though we will not get into this topic in this book (see generalizepowerset{.ref}).
The two "big ideas" we discuss are representtuplesidea{.ref} - we can compose representations for simple objects to represent more complex objects and functionprogramidea{.ref} - it is crucial to distinguish between functions' ("what") and programs' ("how"). The latter will be a theme we will come back to time and again in this book. :::
Every time we store numbers, images, sounds, databases, or other objects on a computer, what we actually store in the computer's memory is the representation of these objects. Moreover, the idea of representation is not restricted to digital computers. When we write down text or make a drawing we are representing ideas or experiences as sequences of symbols (which might as well be strings of zeroes and ones). Even our brain does not store the actual sensory inputs we experience, but rather only a representation of them.
To use objects such as numbers, images, graphs, or others as inputs for computation, we need to define precisely how to represent these objects as binary strings.
A representation scheme is a way to map an object $x$ to a binary string $E(x) \in \{0,1\}^*$.
We now show how we can represent natural numbers as binary strings.
Over the years people have represented numbers in a variety of ways, including Roman numerals, tally marks, our own Hindu-Arabic decimal system, and many others.
We can use any one of those as well as many others to represent a number as a string (see bitmapdigitsfig{.ref}).
However, for the sake of concreteness, we use the binary basis as our default representation of natural numbers as strings.
For example, we represent the number six as the string $110$.
Number (decimal representation) | Number (binary representation) |
---|---|
0 | 0 |
1 | 1 |
2 | 10 |
5 | 101 |
16 | 10000 |
40 | 101000 |
53 | 110101 |
389 | 110000101 |
3750 | 111010100110 |
Table: Representing numbers in the binary basis. The left-hand column contains representations of natural numbers in the decimal basis, while the right-hand column contains representations of the same numbers in the binary basis.
If $n$ is a natural number, we denote its binary representation by $NtS(n)$ (for "natural numbers to strings"), which can be defined recursively as follows:

$$NtS(n) = \begin{cases}
0 & n=0 \\
1 & n=1 \\
NtS(\floor{n/2})\; parity(n) & n>1
\end{cases} \label{ntseq}$$

where $parity(n)$ equals $0$ if $n$ is even and $1$ if $n$ is odd, and juxtaposition denotes concatenation of strings.
Throughout most of this book, the particular choices of representation of numbers as binary strings would not matter much: we just need to know that such a representation exists.
In fact, for many of our purposes we can even use the simpler representation of mapping a natural number $n$ to the all-ones string of length $n$ (i.e., the unary representation).
::: {.remark title="Binary representation in python (optional)" #pythonbinary} We can implement the binary representation in Python as follows:
def NtS(n):  # natural numbers to strings
    if n > 1:
        return NtS(n // 2) + str(n % 2)
    else:
        return str(n % 2)
print(NtS(236))
# 11101100
print(NtS(19))
# 10011
We can also use Python to implement the inverse transformation, mapping a string back to the natural number it represents.
def StN(x):  # string to number
    k = len(x)-1
    return sum(int(x[i])*(2**(k-i)) for i in range(k+1))
print(StN(NtS(236)))
# 236
:::
::: {.remark title="Programming examples" #programmingrem} In this book, we sometimes use code examples as in pythonbinary{.ref}. The point is always to emphasize that certain computations can be achieved concretely, rather than illustrating the features of Python or any other programming language. Indeed, one of the messages of this book is that all programming languages are in a certain precise sense equivalent to one another, and hence we could have just as well used JavaScript, C, COBOL, Visual Basic or even BrainF*ck. This book is not about programming, and it is absolutely OK if you are not familiar with Python or do not follow code examples such as those in pythonbinary{.ref}. :::
It is natural for us to think of $236$ as the "actual" number and of CCXXXVI as merely its representation; to someone raised on Roman numerals, however, CCXXXVI would be the "actual" number and $236$ merely a representation of it.
So what is the "actual" number? This is a question that philosophers of mathematics have pondered throughout history.
Plato argued that mathematical objects exist in some ideal sphere of existence (that to a certain extent is more "real" than the world we perceive via our senses, as this latter world is merely the shadow of this ideal sphere).
In Plato's vision, the symbols $236$, CCXXXVI, and $11101100$ are all merely shadows of the "actual" number, which exists in the ideal sphere.
The Austrian philosopher Ludwig Wittgenstein, on the other hand, argued that mathematical objects do not exist at all, and the only things that exist are the actual marks on paper that make up CCXXXVI
.
In Wittgenstein's view, mathematics is merely about formal manipulation of symbols that do not have any inherent meaning.
You can think of the "actual" number as (somewhat recursively) "that thing which is common to CCXXXVI
and all other past and future representations that are meant to capture the same object".
While reading this book, you are free to choose your own philosophy of mathematics, as long as you maintain the distinction between the mathematical objects themselves and the various particular choices of representing them, whether as splotches of ink, pixels on a screen, zeroes and ones, or any other form.
We have seen that natural numbers can be represented as binary strings. We now show that the same is true for other types of objects, including (potentially negative) integers, rational numbers, vectors, lists, graphs and many others. In many instances, choosing the "right" string representation for a piece of data is highly non-trivial, and finding the "best" one (e.g., most compact, best fidelity, most efficiently manipulable, robust to errors, most informative features, etc.) is the object of intense research. But for now, we focus on presenting some simple representations for various objects that we would like to use as inputs and outputs for computation.
Since we can represent natural numbers as strings, we can represent the full set of integers (i.e., members of the set $\Z = \{\ldots,-3,-2,-1,0,+1,+2,+3,\ldots\}$) by adding one more bit that encodes the sign: represent $m \in \Z$ by the string $\sigma\, NtS(|m|)$ where the sign bit $\sigma$ equals $0$ if $m \geq 0$ and $1$ otherwise.
While the encoding function of a representation needs to be one to one, it does not have to be onto. For example, in the representation above there is no number that is represented by the empty string, but it is still a fine representation, since every integer is represented uniquely by some string.
Given a string
repnegativeintegerssec{.ref}'s approach of representing an integer using a specific "sign bit" is known as the Signed Magnitude Representation and was used in some early computers.
However, the two's complement representation is much more common in practice.
The two's complement representation of an integer $k$ in the range $\{-2^n,\ldots,2^n-1\}$ is the $(n+1)$-bit binary representation of the non-negative number $k \bmod 2^{n+1}$; that is, we represent $k$ itself when $k \geq 0$, and $k + 2^{n+1}$ when $k < 0$.
Another way to say this is that we represent a potentially negative number $k$ by the non-negative number that has the same remainder modulo $2^{n+1}$.
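As a small illustrative sketch (not the book's code; the names `ZtS` and `StZ` are hypothetical), the functions below implement the two's complement scheme discussed above, encoding an integer $k$ in the range $\{-2^n,\ldots,2^n-1\}$ as the $(n+1)$-bit binary representation of $k \bmod 2^{n+1}$:

```python
def ZtS(k, n):
    """Two's complement: encode integer k in {-2^n, ..., 2^n - 1}
    as the (n+1)-bit binary string of k mod 2^(n+1)."""
    assert -(2**n) <= k <= 2**n - 1
    m = k % (2**(n+1))  # equals k if k >= 0, and k + 2^(n+1) otherwise
    return format(m, "0" + str(n+1) + "b")

def StZ(x):
    """Decode an (n+1)-bit two's complement string back to an integer."""
    n = len(x) - 1
    m = int(x, 2)
    return m if m < 2**n else m - 2**(n+1)

# For example, with n = 3 (i.e., 4-bit strings):
print(ZtS(5, 3))    # 0101
print(ZtS(-5, 3))   # 1011
print(StZ("1011"))  # -5
```

Note that decoding only needs arithmetic modulo a power of two, which is one reason this representation is convenient for hardware.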
We can represent a rational number of the form $a/b$ (where $a$ and $b$ are integers and $b \neq 0$) by representing the two numbers $a$ and $b$; the challenge is then to combine the two representations into a single string.
We tackle this by giving a general representation for pairs of strings.
If we were using a pen and paper, we would just use a separator symbol such as $\|$ between the two numbers; since our strings must be over the alphabet $\{0,1\}$, we will need to encode this separator as well.
Our final representation for rational numbers is obtained by composing the following steps:
- Representing a (potentially negative) rational number $r$ as a pair of integers $a,b$ such that $r=a/b$.

- Representing an integer by a string via the binary representation.

- Combining 1 and 2 to obtain a representation of a rational number as a pair of strings.

- Representing a pair of strings over $\{0,1\}$ as a single string over $\Sigma = \{0,1,\|\}$.

- Representing a string over $\Sigma$ as a longer string over $\{0,1\}$.
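To make the composition concrete, here is a small illustrative sketch of these steps in Python. The helper `itos`, the table `SYM`, and the particular two-bit code chosen for each symbol of $\Sigma$ are hypothetical choices for illustration, not fixed by the text:

```python
def itos(a):
    """Integer to binary string with a leading sign bit (0 = nonnegative)."""
    return ("0" if a >= 0 else "1") + bin(abs(a))[2:]

# A (hypothetical) two-bit code for each symbol of the alphabet {0, 1, |}:
SYM = {"0": "00", "1": "11", "|": "01"}

def rational_to_string(a, b):
    """Encode the rational a/b as a binary string, following the steps above."""
    pair = itos(a) + "|" + itos(b)        # a string over the alphabet {0, 1, |}
    return "".join(SYM[c] for c in pair)  # a string over {0, 1}

print(rational_to_string(3, 4))  # 0011110100110000
```

Decoding reverses the steps: split the bits into pairs, map each pair back to a symbol of $\Sigma$, split at the separator, and decode the two integers.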
::: {.example title="Representing a rational number as a string" #represnumberbypairs}
Consider the rational number $r=-5/8$. We represent $-5$ as the string $1101$ (a sign bit of $1$ followed by $NtS(5)=101$) and $8$ as $01000$, and so we can represent $r$ as the pair of strings $(1101,01000)$, or equivalently as the string $1101\|01000$ over the alphabet $\{0,1,\|\}$. Encoding each of these three symbols as a pair of bits then yields a representation of $r$ as a binary string.
:::
The same idea can be used to represent triples of strings, quadruples, and so on as a string. Indeed, this is one instance of a very general principle that we use time and again in both the theory and practice of computer science (for example, in Object Oriented programming):
::: { .bigidea #representtuplesidea }
If we can represent objects of type $T$, then we can use such representations to also represent tuples and lists of objects of type $T$.
:::
Repeating the same idea, once we can represent objects of type $T$, we can also represent lists of lists of objects of type $T$, lists of lists of lists, and so on.
The set of real numbers $\R$ contains numbers, such as $\pi$ and $\sqrt{2}$, that cannot be written as a ratio of two integers. Nevertheless, we can approximate every real number to any desired accuracy by a rational number, and hence represent (an approximation of) a real number by the representation of a nearby rational number.
The above representation of real numbers via rational numbers that approximate them is a fine choice for a representation scheme.
However, typically in computing applications, it is more common to use the floating-point representation scheme (see floatingpointfig{.ref}) to represent real numbers.
In the floating-point representation scheme we represent a real number $x$ by a pair of integers $(b,e)$ such that $x$ is approximated by $b \cdot 2^{e}$. Since only finitely many numbers can be represented exactly this way, floating-point arithmetic incurs rounding errors. This is why in many programming languages evaluating 0.1 + 0.2 will result in 0.30000000000000004 and not 0.3, see here, here and here for more.
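The rounding behavior mentioned above is easy to observe directly; a minimal sketch in Python, using the standard library's `fractions` module for an exact comparison:

```python
from fractions import Fraction

# 0.1, 0.2 and 0.3 have no exact binary floating-point representation,
# so the computed sum differs from 0.3 by a tiny rounding error:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Exact rational arithmetic avoids the issue entirely:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```

The same effect appears in essentially every language that uses IEEE-754 double-precision numbers; it is a property of the representation, not of Python.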
The reader might be (rightly) worried about the fact that the floating-point representation (or the rational number one) can only approximately represent real numbers. In many (though not all) computational applications, one can make the accuracy tight enough so that this does not affect the final result, though sometimes we do need to be careful. Indeed, floating-point bugs can sometimes be no joking matter. For example, floating-point rounding errors have been implicated in the failure of a U.S. Patriot missile to intercept an Iraqi Scud missile, costing 28 lives, as well as a 100 million pound error in computing payouts to British pensioners.
::: {.quote} "For any collection of fruits, we can make more fruit salads than there are fruits. If not, we could label each salad with a different fruit, and consider the salad of all fruits not in their salad. The label of this salad is in it if and only if it is not.", Martha Storey. :::
Given the issues with floating-point approximations for real numbers, a natural question is whether it is possible to represent real numbers exactly as strings. Unfortunately, the following theorem shows that this cannot be done:
There does not exist a one-to-one function $RtS:\R \rightarrow \{0,1\}^*$.
Countable sets. We say that a set $S$ is countable if there exists an onto map $C:\N \rightarrow S$; in other words, $S$ is countable if we can enumerate (possibly with repetitions) all of its elements as $C(0), C(1), C(2), \ldots$.
The reals are uncountable. That is, there does not exist an onto function from $\N$ to $\R$.
cantorthmtwo{.ref} was proven by Georg Cantor in 1874.
This result (and the theory around it) was quite shocking to mathematicians at the time.
By showing that there is no one-to-one map from $\R$ to the set of finite strings (or, equivalently, to $\N$), Cantor demonstrated that infinite sets can come in different sizes: the infinite set $\R$ is, in a precise sense, "bigger" than the infinite set $\N$.
Now that we have discussed cantorthm{.ref}'s importance, let us see the proof. It is achieved in two steps:
- Define some infinite set $\mathcal{X}$ for which it is easier for us to prove that $\mathcal{X}$ is not countable (namely, it's easier for us to prove that there is no one-to-one function from $\mathcal{X}$ to $\{0,1\}^*$).

- Prove that there is a one-to-one function $G$ mapping $\mathcal{X}$ to $\mathbb{R}$.
We can use a proof by contradiction to show that these two facts together imply cantorthm{.ref}.
Specifically, if we assume (towards the sake of contradiction) that there exists some one-to-one function $RtS$ mapping $\R$ to $\{0,1\}^*$, then composing it with the one-to-one function $G:\mathcal{X} \rightarrow \R$ would yield a one-to-one function from $\mathcal{X}$ to $\{0,1\}^*$, contradicting the first fact.
To turn this idea into a full proof of cantorthm{.ref} we need to:
- Define the set $\mathcal{X}$.

- Prove that there is no one-to-one function from $\mathcal{X}$ to $\{0,1\}^*$.

- Prove that there is a one-to-one function from $\mathcal{X}$ to $\R$.
We now proceed to do precisely that.
That is, we will define $\mathcal{X}$ to be the set $\{0,1\}^\infty$ of infinite binary sequences, and carry out the two steps above for this set.
::: {.definition #bitsinfdef}
We denote by $\{0,1\}^\infty$ the set of all infinite sequences of zeroes and ones.
That is, $\{0,1\}^\infty$ is the set of all functions from $\N$ to $\{0,1\}$.
:::
There does not exist a one-to-one map $FtS:\{0,1\}^\infty \rightarrow \{0,1\}^*$.
There does exist a one-to-one map $FtR:\{0,1\}^\infty \rightarrow \R$.
As we've seen above, sequencestostrings{.ref} and sequencestoreals{.ref} together imply cantorthm{.ref}.
To repeat the argument more formally, suppose, for the sake of contradiction, that there did exist a one-to-one function $RtS:\R \rightarrow \{0,1\}^*$.
By sequencestoreals{.ref}, there exists a one-to-one function $FtR:\{0,1\}^\infty \rightarrow \R$.
Thus, under this assumption, since the composition of two one-to-one functions is one-to-one (see onetoonecompex{.ref}), the function $FtS:\{0,1\}^\infty \rightarrow \{0,1\}^*$ defined as $FtS(f) = RtS(FtR(f))$ would be one-to-one, contradicting sequencestostrings{.ref}.
Now all that is left is to prove these two lemmas. We start by proving sequencestostrings{.ref} which is really the heart of cantorthm{.ref}.
Warm-up: "Baby Cantor". The proof of sequencestostrings{.ref} is rather subtle. One way to get intuition for it is to consider the following finite statement: "there is no onto function from $\{0,\ldots,n-1\}$ to $\{0,1\}^n$". The proof is the diagonal argument in miniature: given any such function $f$, the string whose $i$-th bit disagrees with the $i$-th bit of $f(i)$ cannot be in the image of $f$.
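To experiment with this finite version, here is a small illustrative sketch (the function name `diagonal` and the sample function `f` are hypothetical): given any function from $\{0,\ldots,n-1\}$ to $n$-bit strings, flipping the $i$-th bit of the $i$-th string yields a string outside the function's image.

```python
def diagonal(f, n):
    """Given f mapping {0,...,n-1} to n-bit strings, return the 'diagonal'
    string that differs from f(i) in position i for every i."""
    return "".join("1" if f(i)[i] == "0" else "0" for i in range(n))

n = 4
f = lambda i: format(3 * i, "04b")  # an arbitrary example function
d = diagonal(f, n)

# d differs from f(i) in the i-th bit, so f cannot be onto {0,1}^n:
assert all(f(i) != d for i in range(n))
print(d)  # 1100
```

The same one-line construction, applied to an enumeration of all finite strings, is exactly what the proof below does for $\{0,1\}^\infty$.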
::: {.proof data-ref="sequencestostrings"}
We will prove that there does not exist an onto function $StF:\{0,1\}^* \rightarrow \{0,1\}^\infty$. This implies the lemma, since the existence of a one-to-one map from $\{0,1\}^\infty$ to $\{0,1\}^*$ would imply the existence of an onto map in the other direction.
The technique of this proof is known as the "diagonal argument" and is illustrated in diagrealsfig{.ref}.
We assume, towards a contradiction, that there exists an onto function $StF:\{0,1\}^* \rightarrow \{0,1\}^\infty$, and let $x_0,x_1,x_2,\ldots$ be an enumeration of all the strings in $\{0,1\}^*$ (say, in order of increasing length).
We define the sequence $\overline{d} \in \{0,1\}^\infty$ by $\overline{d}_n = 1 - StF(x_n)_n$ for every $n\in\N$; that is, $\overline{d}$ disagrees with the sequence $StF(x_n)$ in the $n$-th coordinate,
which corresponds to the elements on the "diagonal" in diagrealsfig{.ref}.
To complete the proof that $StF$ is not onto, note that for every $n$, $\overline{d} \neq StF(x_n)$ since the two sequences differ in the $n$-th coordinate. Hence $\overline{d}$ is not in the image of $StF$, contradicting the assumption that $StF$ is onto.
:::
::: {.remark title="Generalizing beyond strings and reals" #generalizepowerset}
sequencestostrings{.ref} doesn't really have much to do with the natural numbers or the strings.
An examination of the proof shows that it really shows that for every set $S$, there is no onto map from $S$ to the set of all functions from $S$ to $\{0,1\}$.
Indeed, the proof of sequencestostrings{.ref} can be generalized to show that there is no onto map from a set to its power set (equivalently, no one-to-one map from the power set of a set to the set itself).
In particular, it means that the set of all subsets of $\N$ (that is, the power set of $\N$) is uncountable.
:::
To complete the proof of cantorthm{.ref}, we need to show sequencestoreals{.ref}. This requires some calculus background but is otherwise straightforward. If you have not had much experience with limits of a real series before, then the formal proof below might be a little hard to follow. This part is not the core of Cantor's argument, nor are such limits important to the remainder of this book, so you can feel free to take sequencestoreals{.ref} on faith and skip the proof.
::: {.proofidea data-ref="sequencestoreals"}
We define $FtR(f)$ to be the number $\sum_{i=0}^{\infty} f(i) \cdot 10^{-i}$; that is, the number whose decimal expansion is $f(0).f(1)f(2)f(3)\ldots$. Since two distinct sequences must differ in some digit, and (unlike in the binary case) distinct decimal expansions using only the digits $0$ and $1$ yield distinct numbers, this map is one-to-one.
:::
::: {.proof data-ref="sequencestoreals"}
For every $f \in \{0,1\}^\infty$, define $FtR(f) = \sum_{i=0}^{\infty} f(i) \cdot 10^{-i}$. The series converges since it is dominated by the geometric series $\sum_{i=0}^{\infty} 10^{-i}$.
We now prove that $FtR$ is one-to-one. Let $f \neq g$ be two distinct sequences, and let $k$ be the smallest number on which they disagree; assume without loss of generality that $f(k)=1$ and $g(k)=0$. Then
$$FtR(f) - FtR(g) \geq 10^{-k} - \sum_{i=k+1}^{\infty} 10^{-i} \;.$$
Since the infinite series $\sum_{i=k+1}^{\infty} 10^{-i}$ equals $\tfrac{10^{-k}}{9}$, which is strictly smaller than $10^{-k}$, we conclude that $FtR(f) > FtR(g)$ and in particular $FtR(f) \neq FtR(g)$.
:::
::: {.remark title="Using decimal expansion (optional)" #decimal}
In the proof above we used the decimal expansion rather than the binary one, because the analogous binary map $f \mapsto \sum_{i=0}^{\infty} f(i) \cdot 2^{-i}$ is not one-to-one: a sequence ending in all ones and the corresponding sequence ending in all zeroes map to the same number, since $\sum_{i=k+1}^{\infty} 2^{-i} = 2^{-k}$.
:::
Cantor's Theorem yields the following corollary that we will use several times in this book: the set of all Boolean functions (mapping $\{0,1\}^*$ to $\{0,1\}$) is uncountable.
Let $ALL$ denote the set of all functions from $\{0,1\}^*$ to $\{0,1\}$. Then $ALL$ is uncountable.

This is a direct consequence of sequencestostrings{.ref}, since we can use the binary representation to show a one-to-one map from $\{0,1\}^\infty$ to $ALL$.
::: {.proof data-ref="uncountalbefuncthm"}
Since
We now show this one-to-one map. We simply map a function
This map is one-to-one since if
The results above establish many equivalent ways to phrase the fact that a set is countable. Specifically, the following statements are all equivalent:
- The set $S$ is countable.

- There exists an onto map from $\N$ to $S$.

- There exists an onto map from $\{0,1\}^*$ to $S$.

- There exists a one-to-one map from $S$ to $\N$.

- There exists a one-to-one map from $S$ to $\{0,1\}^*$.

- There exists an onto map from some countable set $T$ to $S$.

- There exists a one-to-one map from $S$ to some countable set $T$.
::: { .pause } Make sure you know how to prove the equivalence of all the results above. :::
Numbers are of course by no means the only objects that we can represent as binary strings.
A representation scheme for representing objects from some set $\mathcal{O}$ consists of an encoding function that maps an object in $\mathcal{O}$ to a string, and a decoding function that recovers the object from the string.
Let $\mathcal{O}$ be any non-empty set. A representation scheme for $\mathcal{O}$ is a pair of functions $E,D$, where $E:\mathcal{O} \rightarrow \{0,1\}^*$ is the encoding function, $D:\{0,1\}^* \rightarrow_p \mathcal{O}$ is the (possibly partial) decoding function, and $D(E(o)) = o$ for every $o \in \mathcal{O}$.

Note that the condition $D(E(o))=o$ for every $o \in \mathcal{O}$ implies that $E$ is one-to-one (can you see why?).
Suppose that $E: \mathcal{O} \rightarrow \{0,1\}^*$ is one-to-one. Then there exists a function $D:\{0,1\}^* \rightarrow \mathcal{O}$ such that $D(E(o))=o$ for every $o \in \mathcal{O}$.
Let $o_0$ be some arbitrary object in $\mathcal{O}$ (such an object exists since $\mathcal{O}$ is non-empty). Define $D(x)$ to equal the (necessarily unique, since $E$ is one-to-one) object $o$ such that $E(o)=x$ if such an $o$ exists, and to equal $o_0$ otherwise. By construction, $D(E(o))=o$ for every $o \in \mathcal{O}$.
::: {.remark title="Total decoding functions" #totaldecoding} While the decoding function of a representation scheme can in general be a partial function, the proof of decodelem{.ref} implies that every representation scheme has a total decoding function. This observation can sometimes be useful. :::
If
To obtain a representation of objects in
For every two non-empty finite sets
Let
When showing a representation scheme for rational numbers, we used the "hack" of encoding the three-symbol alphabet $\{0,1,\|\}$ as binary strings in order to represent a pair of strings as a single string.
It turns out that we can transform every representation to a prefix-free form.
This justifies representtuplesidea{.ref}, and allows us to transform a representation scheme for objects of a type $T$ into a representation scheme for lists, and even nested lists, of objects of type $T$.
::: {.definition title="Prefix free encoding" #prefixfreedef}
For two strings $y,y'$, we say that $y$ is a prefix of $y'$ if $|y| \leq |y'|$ and $y'_i = y_i$ for every $i < |y|$.

Let $\mathcal{O}$ be a non-empty set and $E:\mathcal{O} \rightarrow \{0,1\}^*$ be a function. We say that $E$ is prefix-free if $E(o)$ is non-empty for every $o \in \mathcal{O}$ and there does not exist a distinct pair of objects $o,o' \in \mathcal{O}$ such that $E(o)$ is a prefix of $E(o')$.
:::
Recall that for every set $\mathcal{O}$, the set $\mathcal{O}^*$ consists of all finite-length tuples (i.e., lists) of elements of $\mathcal{O}$.
::: {.theorem title="Prefix-free implies tuple encoding" #prefixfreethm}
Suppose that $E:\mathcal{O} \rightarrow \{0,1\}^*$ is prefix-free.
Then the following map $\overline{E}:\mathcal{O}^* \rightarrow \{0,1\}^*$ is one to one: for every $(o_0,\ldots,o_{k-1}) \in \mathcal{O}^*$,
$$\overline{E}(o_0,\ldots,o_{k-1}) = E(o_0)E(o_1)\cdots E(o_{k-1}) \;.$$
:::
prefixfreethm{.ref} is an example of a theorem that is a little hard to parse, but in fact is fairly straightforward to prove once you understand what it means. Therefore, I highly recommend that you pause here to make sure you understand the statement of this theorem. You should also try to prove it on your own before proceeding further.
{#prefixfreerepconcat .margin }
The idea behind the proof is simple.
Suppose that for example we want to decode a triple $(o_0,o_1,o_2)$ from the string $\overline{x} = E(o_0)E(o_1)E(o_2)$. We can read $\overline{x}$ bit by bit from the beginning; the moment the bits read so far form a valid encoding of some object, this object must be $o_0$, since by prefix-freeness no encoding of one object can be a prefix of the encoding of another. We can then remove these bits from $\overline{x}$ and repeat the process to recover $o_1$ and $o_2$.
::: {.proof data-ref="prefixfreethm"}
We now show the formal proof.
Suppose, towards the sake of contradiction, that there exist two distinct tuples $(o_0,\ldots,o_{k-1})$ and $(o'_0,\ldots,o'_{k'-1})$ such that

$$
\overline{E}(o_0,\ldots,o_{k-1})= \overline{E}(o'_0,\ldots,o'_{k'-1}) \;. \label{prefixfreeassump}
$$

We will denote the string $\overline{E}(o_0,\ldots,o_{k-1})$ by $\overline{x}$.

Consider first the case that there is some index $i < \min(k,k')$ with $o_i \neq o'_i$, and let $i$ be the smallest such index. Then

$$ \overline{x} = \overline{E}(o_0,\ldots,o_{k-1}) = x_0\cdots x_{i-1} E(o_i) E(o_{i+1}) \cdots E(o_{k-1}) $$

and

$$ \overline{x} = \overline{E}(o'_0,\ldots,o'_{k'-1}) = x_0\cdots x_{i-1} E(o'_i) E(o'_{i+1}) \cdots E(o'_{k'-1}) $$

where $x_j = E(o_j) = E(o'_j)$ for all $j<i$.
Let $\overline{y}$ be the string obtained after removing the prefix $x_0 \cdots x_{i-1}$ from $\overline{x}$. Then both $E(o_i)$ and $E(o'_i)$ are prefixes of $\overline{y}$, and hence the shorter of the two is a prefix of the longer, contradicting the assumption that $E$ is prefix-free.

In the case that no such index exists, the tuples must have different lengths; assume without loss of generality that $k < k'$ and $o_j = o'_j$ for all $j<k$. Then

$$\overline{x} = E(o_0)\cdots E(o_{k-1}) = E(o_0) \cdots E(o_{k-1}) E(o'_k) \cdots E(o'_{k'-1})$$

which means that $E(o'_k) \cdots E(o'_{k'-1})$ must be the empty string, contradicting the assumption that $E(o)$ is non-empty for every object $o$.
:::
::: {.remark title="Prefix freeness of list representation" #prefixfreelistsrem}
Even if the representation $E$ of single objects is prefix-free, the corresponding representation $\overline{E}$ of lists is one-to-one but not itself prefix-free: for example, $\overline{E}(o_0)$ is always a prefix of $\overline{E}(o_0,o_1)$.
:::
Some natural representations are prefix-free.
For example, every fixed output length representation (i.e., a one-to-one function $E:\mathcal{O} \rightarrow \{0,1\}^n$ for some fixed $n$) is automatically prefix-free, since a string $x$ can only be a prefix of a distinct string $x'$ if $x$ is strictly shorter than $x'$.
Let $E:\mathcal{O} \rightarrow \{0,1\}^*$ be a one-to-one function. Then there exists a prefix-free encoding $\overline{E}$ of $\mathcal{O}$ such that $|\overline{E}(o)| \leq 2|E(o)| + 2$ for every $o \in \mathcal{O}$.
For the sake of completeness, we will include the proof below, but it is a good idea for you to pause here and try to prove it on your own, using the same technique we used for representing rational numbers.
::: {.proof data-ref="prefixfreetransformationlem"}
The idea behind the proof is to use the map $0 \mapsto 00$, $1 \mapsto 11$ to double every bit of $E(o)$, and then append the pair $01$ at the end. That is, $\overline{E}(o)$ is obtained by repeating every bit of $E(o)$ twice and appending $01$.
To prove the lemma we need to show that (1) $\overline{E}$ is one-to-one, and (2) $\overline{E}$ is prefix-free; that is, for every distinct $o,o' \in \mathcal{O}$, $\overline{E}(o)$ is not a prefix of $\overline{E}(o')$.
Let $o,o' \in \mathcal{O}$ be distinct. The map $\overline{E}$ is one-to-one since we can recover $E(o)$ from $\overline{E}(o)$ by dropping the final $01$ and taking every other bit, and $E$ itself is one-to-one. To see that $\overline{E}$ is prefix-free, suppose towards a contradiction that $\overline{E}(o)$ is a prefix of $\overline{E}(o')$. Since $|\overline{E}(o)|$ is even, the final $01$ of $\overline{E}(o)$ occupies positions $2i$ and $2i+1$ of $\overline{E}(o')$ for some $i$. But every such aligned pair of $\overline{E}(o')$ is either a doubled bit ($00$ or $11$) or the final $01$; hence this pair must be the final $01$ of $\overline{E}(o')$, which implies $\overline{E}(o) = \overline{E}(o')$ and therefore $E(o)=E(o')$, contradicting the fact that $E$ is one-to-one.
:::
The proof of prefixfreetransformationlem{.ref} is not the only or even the best way to transform an arbitrary representation into prefix-free form.
prefix-free-ex{.ref} asks you to construct a more efficient prefix-free transformation satisfying $|\overline{E}(o)| \leq |E(o)| + O(\log |E(o)|)$.
The proofs of prefixfreethm{.ref} and prefixfreetransformationlem{.ref} are constructive in the sense that they give us:
- A way to transform the encoding and decoding functions of any representation of an object $O$ to encoding and decoding functions that are prefix-free, and

- A way to extend prefix-free encoding and decoding of single objects to encoding and decoding of lists of objects by concatenation.
Specifically, we could transform any pair of Python functions `encode` and `decode` to functions `pfencode` and `pfdecode` that correspond to a prefix-free encoding and decoding.
Similarly, given `pfencode` and `pfdecode` for single objects, we can extend them to encoding of lists.
Let us show how this works for the case of the `NtS` and `StN` functions we defined above.
We start with the "Python proof" of prefixfreetransformationlem{.ref}: a way to transform an arbitrary representation into one that is prefix free.
The function `prefixfree` below takes as input a pair of encoding and decoding functions, and returns a triple of functions containing prefix-free encoding and decoding functions, as well as a function that checks whether a string is a valid encoding of an object.
# takes functions encode and decode mapping
# objects to binary strings and vice versa,
# and returns functions pfencode and pfdecode that
# map objects to binary strings and vice versa
# in a prefix-free way.
# Also returns a function pfvalid that says
# whether a string is a valid encoding
def prefixfree(encode, decode):
    def pfencode(o):
        L = encode(o)
        # double every bit, then append "01" as an end marker
        return "".join([L[i//2] for i in range(2*len(L))]) + "01"
    def pfdecode(L):
        # take every other bit, dropping the final "01"
        return decode("".join([L[j] for j in range(0,len(L)-2,2)]))
    def pfvalid(L):
        return (len(L) % 2 == 0) and all(L[2*i]==L[2*i+1] for i in range((len(L)-2)//2)) and L[-2:]=="01"
    return pfencode, pfdecode, pfvalid
pfNtS, pfStN , pfvalidN = prefixfree(NtS,StN)
NtS(234)
# 11101010
pfNtS(234)
# 111111001100110001
pfStN(pfNtS(234))
# 234
pfvalidN(pfNtS(234))
# True
Note that the Python function prefixfree
above takes two Python functions as input and outputs three Python functions as output. (When it's not too awkward, we use the term "Python function" or "subroutine" to distinguish between such snippets of Python programs and mathematical functions.)
You don't have to know Python in this course, but you do need to get comfortable with the idea of functions as mathematical objects in their own right, that can be used as inputs and outputs of other functions.
We now show a "Python proof" of prefixfreethm{.ref}. Namely, we show a function represlists
that takes as input a prefix-free representation scheme (implemented via encoding, decoding, and validity testing functions) and outputs a representation scheme for lists of such objects. If we want to make this representation prefix-free then we could fit it into the function prefixfree
above.
def represlists(pfencode,pfdecode,pfvalid):
    """
    Takes functions pfencode, pfdecode and pfvalid,
    and returns functions encodelist, decodelist
    that can encode and decode lists of the objects
    respectively.
    """

    def encodelist(L):
        """Gets list of objects, encodes it as string of bits"""
        return "".join([pfencode(obj) for obj in L])

    def decodelist(S):
        """Gets string of bits, returns list of objects"""
        i=0; j=1 ; res = []
        while j<=len(S):
            if pfvalid(S[i:j]):
                res += [pfdecode(S[i:j])]
                i=j
            j += 1
        return res

    return encodelist,decodelist
LtS , StL = represlists(pfNtS,pfStN,pfvalidN)
LtS([234,12,5])
# 111111001100110001111100000111001101
StL(LtS([234,12,5]))
# [234, 12, 5]
We can represent a letter or symbol by a string, and then if this representation is prefix-free, we can represent a sequence of symbols by merely concatenating the representation of each symbol.
One such representation is the ASCII encoding, which represents each of 128 letters and symbols as a string of 7 bits; being fixed-length, it is automatically prefix-free.
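Since a fixed-length code is prefix-free, decoding ASCII text amounts to splitting the bits into consecutive 7-bit blocks; a minimal sketch (the helper names are illustrative):

```python
def ascii_encode(s):
    """Encode a text as the concatenation of 7-bit ASCII codes."""
    return "".join(format(ord(c), "07b") for c in s)

def ascii_decode(bits):
    """Decode by splitting into consecutive 7-bit blocks."""
    return "".join(chr(int(bits[i:i+7], 2)) for i in range(0, len(bits), 7))

print(ascii_encode("A"))                     # 1000001
print(ascii_decode(ascii_encode("Hello")))   # Hello
```

No separator symbol is needed between characters, precisely because every code word has the same length.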
::: {.example title="The Braille representation" #braille}
The Braille system is another way to encode letters and other symbols as binary strings. Specifically, in Braille, every letter is encoded as a string in $\{0,1\}^6$, corresponding to the pattern of raised dots in a cell of six positions.
The Braille system was invented in 1821 by Louis Braille when he was just 12 years old (though he continued working on it and improving it throughout his life). Braille was a French boy who lost his eyesight at the age of 5 as the result of an accident. :::
::: {.example title="Representing objects in C (optional)" #Crepresentation}
We can use programming languages to probe how our computing environment represents various values.
This is easiest to do in "unsafe" programming languages such as C that allow direct access to the memory.

Using a simple C program we have produced the following representations of various values.
One can see that for integers, multiplying by 2 corresponds to a "left shift" inside each byte.
In contrast, for floating-point numbers, multiplying by two corresponds to adding one to the exponent part of the representation.
In the architecture we used, a negative number is represented using the two's complement approach.
C represents strings in a prefix-free form by ensuring that a zero byte is at their end.
int 2 : 00000010 00000000 00000000 00000000
int 4 : 00000100 00000000 00000000 00000000
int 513 : 00000001 00000010 00000000 00000000
long 513 : 00000001 00000010 00000000 00000000 00000000 00000000 00000000 00000000
int -1 : 11111111 11111111 11111111 11111111
int -2 : 11111110 11111111 11111111 11111111
string Hello: 01001000 01100101 01101100 01101100 01101111 00000000
string abcd : 01100001 01100010 01100011 01100100 00000000
float 33.0 : 00000000 00000000 00000100 01000010
float 66.0 : 00000000 00000000 10000100 01000010
float 132.0: 00000000 00000000 00000100 01000011
double 132.0: 00000000 00000000 00000000 00000000 00000000 10000000 01100000 01000000
:::
Once we can represent numbers and lists of numbers, then we can also represent vectors (which are just lists of numbers).
Similarly, we can represent lists of lists, and thus, in particular, can represent matrices.
To represent an image, we can represent the color at each pixel by a list of three numbers corresponding to the intensity of Red, Green and Blue.
(We can restrict to three primary colors since most humans only have three types of cones in their retinas; we would have needed 16 primary colors to represent colors visible to the Mantis Shrimp.)
Thus an image of $n$ pixels can be represented as a list of $n$ such triples.
A graph on $n$ vertices can be represented as an $n\times n$ adjacency matrix whose $(i,j)^{th}$ entry equals $1$ if the edge $(i,j)$ is present in the graph and equals $0$ otherwise.
Another representation for graphs is the adjacency list representation. That is, we identify the vertex set of the graph with the set $\{0,\ldots,n-1\}$, and represent the graph as a list of $n$ lists, where the $i$-th list contains the neighbors of vertex $i$.
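The two graph representations can be sketched as follows (a minimal illustration; the function names are hypothetical):

```python
def adjacency_matrix(n, edges):
    """Represent a directed graph on vertices {0,...,n-1} as an n x n 0/1 matrix."""
    A = [[0] * n for _ in range(n)]
    for (i, j) in edges:
        A[i][j] = 1
    return A

def adjacency_list(n, edges):
    """Represent the same graph as a list of n lists of neighbors."""
    L = [[] for _ in range(n)]
    for (i, j) in edges:
        L[i].append(j)
    return L

edges = [(0, 1), (1, 2), (2, 0)]
print(adjacency_matrix(3, edges))  # [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
print(adjacency_list(3, edges))    # [[1], [2], [0]]
```

The matrix always takes $n^2$ bits, while the list representation's size scales with the number of edges, which is one reason to prefer it for sparse graphs.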
{#representinggraphsfig .margin }
If we have a way of representing objects from a set $\mathcal{O}$ as binary strings, then we can represent lists of these objects using the five symbols 0, 1, [, ], and , (comma). For example, if three objects are represented by the strings 0011, 10011, and 00111, then we can represent the nested list "[0011,[10011,00111]]" over this five-symbol alphabet $\Sigma$. Since each symbol of $\Sigma$ can itself be encoded using three bits, this yields a representation of nested lists as binary strings.
We will typically identify an object with its representation as a string.
For example, if $F$ is a function that takes a string as input, and $x$ is an object with representation $E(x)$, we might write $F(x)$ as shorthand for $F(E(x))$.
This convention of identifying an object with its representation as a string is one that we humans follow all the time.
For example, when people say a statement such as "$17$ is a prime number", what they really mean is that the integer whose decimal representation is the string "17" is prime.
::: {.quote} When we say
$A$ is an algorithm that computes the multiplication function on natural numbers.
what we really mean is that
$A$ is an algorithm that computes the function $F:\{0,1\}^* \rightarrow \{0,1\}^*$ such that for every pair $a,b \in \N$, if $x\in \{0,1\}^*$ is a string representing the pair $(a,b)$ then $F(x)$ will be a string representing their product $a\cdot b$. :::
Abstractly, a computational process is some process that takes an input which is a string of bits and produces an output which is a string of bits. This transformation of input to output can be done using a modern computer, a person following instructions, the evolution of some natural system, or any other means.
In future chapters, we will turn to mathematically defining computational processes, but, as we discussed above, at the moment we focus on computational tasks. That is, we focus on the specification and not the implementation. Again, at an abstract level, a computational task can specify any relation that the output needs to have with the input. However, for most of this book, we will focus on the simplest and most common task of computing a function. Here are some examples:
- Given (a representation of) two integers $x,y$, compute the product $x\times y$. Using our representation above, this corresponds to computing a function from $\{0,1\}^*$ to $\{0,1\}^*$. We have seen that there is more than one way to solve this computational task, and in fact, we still do not know the best algorithm for this problem.

- Given (a representation of) an integer $z>1$, compute its factorization; i.e., the list of primes $p_1 \leq \cdots \leq p_k$ such that $z = p_1\cdots p_k$. This again corresponds to computing a function from $\{0,1\}^*$ to $\{0,1\}^*$. The gaps in our knowledge of the complexity of this problem are even larger.

- Given (a representation of) a graph $G$ and two vertices $s$ and $t$, compute the length of the shortest path in $G$ between $s$ and $t$, or do the same for the longest path (with no repeated vertices) between $s$ and $t$. Both these tasks correspond to computing a function from $\{0,1\}^*$ to $\{0,1\}^*$, though it turns out that there is a vast difference in their computational difficulty.

- Given the code of a Python program, determine whether there is an input that would force it into an infinite loop. This task corresponds to computing a partial function from $\{0,1\}^*$ to $\{0,1\}$ since not every string corresponds to a syntactically valid Python program. We will see that we do understand the computational status of this problem, but the answer is quite surprising.

- Given (a representation of) an image $I$, decide if $I$ is a photo of a cat or a dog. This corresponds to computing some (partial) function from $\{0,1\}^*$ to $\{0,1\}$.
An important special case of computational tasks corresponds to computing Boolean functions, whose output is a single bit ${0,1}$. Computing such a function corresponds to answering a YES/NO question.

For every particular function $F$, there can be several possible algorithms that compute it. We will be interested in questions such as:
- For a given function $F$, can it be the case that there is no algorithm to compute $F$?

- If there is an algorithm, what is the best one? Could it be that $F$ is "effectively uncomputable" in the sense that every algorithm for computing $F$ requires a prohibitively large amount of resources?

- If we cannot answer this question, can we show equivalence between different functions $F$ and $F'$ in the sense that either they are both easy (i.e., have fast algorithms) or they are both hard?

- Can a function being hard to compute ever be a good thing? Can we use it for applications in areas such as cryptography?
In order to do that, we will need to mathematically define the notion of an algorithm, which is what we will do in compchap{.ref}.
You should always watch out for potential confusions between specifications and implementations or equivalently between mathematical functions and algorithms/programs. It does not help that programming languages (Python included) use the term "functions" to denote (parts of) programs. This confusion also stems from thousands of years of mathematical history, where people typically defined functions by means of a way to compute them.
For example, consider the multiplication function on natural numbers.
This is the function $MULT:\N \times \N \rightarrow \N$ that maps a pair $(x,y)$ of natural numbers to their product $x \cdot y$. The following two Python programs both compute this function:
```python
def mult1(x,y):
    res = 0
    while y>0:
        res += x   # add x to the result, y times in total
        y -= 1
    return res
```
```python
def mult2(x,y):
    a = str(x) # represent x as string in decimal notation
    b = str(y) # represent y as string in decimal notation
    res = 0
    for i in range(len(a)):
        for j in range(len(b)):
            res += int(a[len(a)-i-1])*int(b[len(b)-j-1])*(10**(i+j))
    return res
```
```python
print(mult1(12,7))
# 84
print(mult2(12,7))
# 84
```
Both `mult1` and `mult2` produce the same output given the same pair of natural number inputs. (Though `mult1` will take far longer to do so when the numbers become large.) Hence, even though these are two different programs, they compute the same mathematical function.
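Since a function is an infinite object, no finite amount of testing can prove that two programs compute the same function, but comparing them on random inputs gives quick evidence of agreement. A minimal self-contained sketch (the helper `agree_on_samples` is illustrative, not part of the text):

```python
import random

def agree_on_samples(f, g, trials=200, bound=300):
    """Heuristically check whether two programs compute the same
    function on pairs of natural numbers by comparing their outputs
    on randomly chosen inputs. Agreement on samples is evidence of
    equality as functions, not a proof."""
    for _ in range(trials):
        x, y = random.randint(0, bound), random.randint(0, bound)
        if f(x, y) != g(x, y):
            return False
    return True

# Two syntactically different programs for the same multiplication function:
assert agree_on_samples(lambda x, y: x * y,
                        lambda x, y: sum([x] * y))
```

A failed comparison pinpoints a concrete input on which the two programs differ, which is often the fastest way to see that they do *not* compute the same function.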
This distinction between a program or algorithm on the one hand, and the function that it computes on the other hand, is so important that we highlight it as a big idea:
::: { .bigidea #functionprogramidea } A function is not the same as a program. A program computes a function. :::
Distinguishing functions from programs (or other ways for computing, including circuits and machines) is a crucial theme for this course. For this reason, this is often a running theme in questions that I (and many other instructors) assign in homework and exams (hint, hint).
::: {.remark title="Computation beyond functions (advanced, optional)" #beyonfdunc}
Functions capture quite a lot of computational tasks, but one can consider more general settings as well.
For starters, we can and will talk about partial functions, which are not defined on all inputs.
When computing a partial function, we only need to worry about the inputs on which the function is defined.
Another way to say it is that we can design an algorithm for a partial function $F$ under the assumption that someone "promised" us that all inputs $x$ will be such that $F(x)$ is defined. For this reason, such tasks are also known as promise problems.
Another generalization is to consider relations that may have more than one possible admissible output.
For example, consider the task of finding any solution for a given set of equations.
A relation associates with each input $x$ a set of admissible outputs, and an algorithm computing the relation is only required to produce one of them.
Later in this book, we will consider even more general tasks, including interactive tasks, such as finding a good strategy in a game, tasks defined using probabilistic notions, and others. However, for much of this book, we will focus on the task of computing a function, and often even a Boolean function, that has only a single bit of output. It turns out that a great deal of the theory of computation can be studied in the context of this task, and the insights learned are applicable in the more general settings. :::
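As a concrete illustration of computing a partial function, consider the exact integer square root, which is defined only on perfect squares. The sketch below is a hypothetical example (not from the text); it computes the function on its domain and simply rejects inputs that violate the "promise":

```python
import math

def exact_sqrt(n: int) -> int:
    """Compute the partial function n -> sqrt(n), defined only when
    n is a perfect square. On inputs outside the domain the behavior
    is up to the algorithm designer; here we raise an error."""
    r = math.isqrt(n)  # floor of the square root
    if r * r != n:
        raise ValueError("input is not a perfect square")
    return r

print(exact_sqrt(144))  # 12
```

Any behavior on non-squares (raising, returning a default, looping) would be an equally valid way to "compute" this partial function, since correctness is only required on the domain.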
- We can represent objects we want to compute on using binary strings.

- A representation scheme for a set of objects $\mathcal{O}$ is a one-to-one map from $\mathcal{O}$ to ${0,1}^*$.

- We can use prefix-free encoding to "boost" a representation for a set $\mathcal{O}$ into representations of lists of elements in $\mathcal{O}$.

- A basic computational task is the task of computing a function $F:{0,1}^* \rightarrow {0,1}^*$. This task encompasses not just arithmetical computations such as multiplication, factoring, etc. but a great many other tasks arising in areas as diverse as scientific computing, artificial intelligence, image processing, data mining, and many more.

- We will study the question of finding (or at least giving bounds on) the best algorithm for computing $F$ for various interesting functions $F$.
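The "boosting" of a representation into a representation of lists can be sketched concretely. Below, a hypothetical prefix-free code for natural numbers doubles every bit of the binary representation and appends `01` as an end marker; since no codeword is a prefix of another, concatenating codewords yields a one-to-one (and decodable) representation of lists (all helper names are illustrative):

```python
def pf_encode(n: int) -> str:
    """A prefix-free binary code for a natural number: double each bit
    of the binary representation and terminate with the pair '01'."""
    return "".join(b + b for b in bin(n)[2:]) + "01"

def encode_list(lst) -> str:
    # concatenating prefix-free codewords keeps the map one-to-one
    return "".join(pf_encode(n) for n in lst)

def decode_list(s: str):
    # read two characters at a time: '00'/'11' carry a bit, '01' ends a number
    out, bits = [], ""
    for i in range(0, len(s), 2):
        pair = s[i:i+2]
        if pair == "01":
            out.append(int(bits, 2))
            bits = ""
        else:
            bits += pair[0]
    return out

print(decode_list(encode_list([5, 0, 12])))  # [5, 0, 12]
```

The doubling scheme is wasteful (it roughly doubles the length) but makes the prefix-free property easy to verify; the exercises below explore far more efficient prefix-free transformations.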
::: {.exercise} Which one of these objects can be represented by a binary string?
a. An integer
b. An undirected graph
c. A directed graph
d. All of the above. :::
::: {.exercise title="Binary representation" #binaryrepex}
a. Prove that the function
b. Prove that
::: {.exercise title="More compact than ASCII representation" #compactrepletters}
The ASCII encoding can be used to encode a string of $n$ English letters as a $7n$ bit binary string. In this exercise, we ask about finding a more compact representation for strings of lowercase English letters.

1. Prove that there exists a representation scheme $(E,D)$ for strings over the 26-letter alphabet ${ a, b, c, \ldots, z }$ as binary strings such that for every $n>0$ and length-$n$ string $x \in { a,b,\ldots,z }^n$, the representation $E(x)$ is a binary string of length at most $4.8n+1000$. In other words, prove that for every $n$, there exists a one-to-one function $E:{a,b,\ldots,z}^n \rightarrow {0,1}^{\lfloor 4.8n+1000 \rfloor}$.

2. Prove that there exists no representation scheme for strings over the alphabet ${ a, b,\ldots,z }$ as binary strings such that for every length-$n$ string $x \in { a,b,\ldots,z }^n$, the representation $E(x)$ is a binary string of length $\lfloor 4.6n+1000 \rfloor$. In other words, prove that there exists some $n>0$ such that there is no one-to-one function $E:{a,b,\ldots,z}^n \rightarrow {0,1}^{\lfloor 4.6n+1000 \rfloor}$.

3. Python's `bz2.compress` function is a mapping from strings to strings, which uses the lossless (and hence one-to-one) bzip2 algorithm for compression. After converting to lowercase and truncating spaces and numbers, the text of Tolstoy's "War and Peace" contains $n=2,517,262$ letters. Yet, if we run `bz2.compress` on the string of the text of "War and Peace" we get a string of length $k=6,274,768$ bits, which is only $2.49n$ (and in particular much smaller than $4.6n$). Explain why this does not contradict your answer to the previous question.

4. Interestingly, if we try to apply `bz2.compress` on a random string, we get much worse performance. In my experiments, I got a ratio of about $4.78$ between the number of bits in the output and the number of characters in the input. However, one could imagine that one could do better and that there exists a company called "Pied Piper" with an algorithm that can losslessly compress a string of $n$ random lowercase letters to fewer than $4.6n$ bits.^[Actually that particular fictional company uses a metric that focuses more on compression speed than ratio, see here and here.] Show that this is not the case by proving that for every $n>100$ and one-to-one function $Encode:{a,\ldots,z}^{n} \rightarrow {0,1}^*$, if we let $Z$ be the random variable $|Encode(x)|$ (i.e., the length of $Encode(x)$) for $x$ chosen uniformly at random from the set ${a,\ldots,z}^n$, then the expected value of $Z$ is at least $4.6n$.
:::
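The empirical claim in the last item is easy to reproduce with a short experiment (the value of `n` below is an arbitrary choice): compressing uniformly random lowercase letters with `bz2.compress` yields close to, but not meaningfully below, $\log_2 26 \approx 4.70$ bits per letter.

```python
import bz2
import random
import string

n = 100_000
# a string of n uniformly random lowercase letters
s = "".join(random.choice(string.ascii_lowercase) for _ in range(n))
compressed = bz2.compress(s.encode("ascii"))
ratio = 8 * len(compressed) / n  # output bits per input letter
print(f"{ratio:.2f} bits per letter")
```

The exact ratio varies slightly from run to run, but it stays above the information-theoretic lower bound, in line with the counting argument the exercise asks for.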
::: {.exercise title="Representing graphs: upper bound" #representinggraphsex}
Show that there is a string representation of directed graphs with vertex set $[n]$ and degree at most $10$ that uses at most $1000 n \log n$ bits. That is, letting $G_n$ denote the set of all such graphs, show that for every sufficiently large $n$, there is a one-to-one function $E:G_n \rightarrow {0,1}^{\lfloor 1000 n \log n \rfloor}$. :::
::: {.exercise title="Representing graphs: lower bound" #represgraphlbex}
1. Define $S_n$ to be the set of one-to-one and onto functions mapping $[n]$ to $[n]$. Prove that there is a one-to-one mapping from $S_n$ to $G_{2n}$, where $G_{2n}$ is the set defined in representinggraphsex{.ref} above.

2. In this question you will show that one cannot improve the representation of representinggraphsex{.ref} to length $o(n \log n)$. Specifically, prove that for every sufficiently large $n\in \mathbb{N}$ there is no one-to-one function $E:G_n \rightarrow {0,1}^{\lfloor 0.001 n \log n \rfloor + 1000}$.
:::
::: {.exercise title="Multiplying in different representation" #multrepres }
Recall that the grade-school algorithm for multiplying two numbers requires $O(n^2)$ operations, where $n$ is the number of digits. Suppose that instead of the decimal representation, we use one of the following representations to encode the inputs. With which of these representations can the numbers still be multiplied in $O(n^2)$ operations?
a. The standard binary representation:
b. The reverse binary representation:
c. Binary coded decimal representation:
d. All of the above. :::
::: {.exercise }
Suppose that
::: {.exercise }
Recall that if $x \in \N$, then $B(x)$ denotes the representation of $x$ as a binary string in ${0,1}^*$.
1. Prove that $x < 2^k$ if and only if $|B(x)| \leq k$.

2. Use 1. to compute the size of the set ${ y \in {0,1}^* : |y| \leq k }$ where $|y|$ denotes the length of the string $y$.

3. Use 1. and 2. to prove that $2^k - 1 = 1 + 2 + 4 + \cdots + 2^{k-1}$.
:::
::: {.exercise title="Prefix-free encoding of tuples" #prefix-free-tuples-ex}
Suppose that $F:\N \rightarrow {0,1}^*$ is a one-to-one function that is prefix-free, in the sense that there do not exist $a \neq b$ such that $F(a)$ is a prefix of $F(b)$.
a. Prove that $F_2:\N\times\N \rightarrow {0,1}^*$, defined as $F_2(a,b) = F(a)F(b)$, is a one-to-one function.
b. Prove that $F_*:\N^* \rightarrow {0,1}^*$ defined as $F_*(a_1,\ldots,a_k) = F(a_1)\cdots F(a_k)$ is a one-to-one function, where $\N^*$ denotes the set of all finite-length lists of natural numbers. :::
::: {.exercise title="More efficient prefix-free transformation" #prefix-free-ex}
Suppose that $F:O\rightarrow{0,1}^*$ is some (not necessarily prefix-free) representation of the objects in the set $O$, and $G:\N\rightarrow{0,1}^*$ is a prefix-free representation of the natural numbers. Define $F'(o) = G(|F(o)|)F(o)$ (that is, $F'(o)$ is the concatenation of the prefix-free encoding of the length of $F(o)$, followed by $F(o)$ itself).
a. Prove that $F'$ is a prefix-free representation of $O$.
b. Show that we can transform any representation to a prefix-free one by a modification that takes a $k$ bit string into a string of length at most $k+O(\log k)$.
c. Show that we can transform any representation to a prefix-free one by a modification that takes a $k$ bit string into a string of length at most $k + \log k + O(\log\log k)$. :::
::: {.exercise title="Kraft's Inequality" #prefix-free-lb}
Suppose that $S \subseteq {0,1}^*$ is a finite prefix-free set of strings, each of length at most $n$.
a. For every $k \leq n$ and length-$k$ string $x\in S$, let $L(x) \subseteq {0,1}^n$ denote the set of all length-$n$ strings whose first $k$ bits are $x_0,\ldots,x_{k-1}$. Prove that (1) $|L(x)|=2^{n-k}$ and (2) if $x \neq x'$ then $L(x)$ is disjoint from $L(x')$.
b. Prove that $\sum_{x\in S} 2^{-|x|} \leq 1$.
c. Prove that there is no prefix-free encoding of strings with less than logarithmic overhead. That is, prove that there is no function $PF:{0,1}^* \rightarrow {0,1}^*$ s.t. $|PF(x)| \leq |x| + 0.9\log |x|$ for every sufficiently large $x\in {0,1}^*$ and such that the set ${ PF(x) : x\in {0,1}^* }$ is prefix-free. :::
Prove that for every two one-to-one functions
::: {.exercise title="Natural numbers and strings" #naturalsstringsmapex}
1. We have shown that the natural numbers can be represented as strings. Prove that the other direction holds as well: there is a one-to-one map $StN:{0,1}^* \rightarrow \N$. ($StN$ stands for "strings to numbers.")

2. Recall that Cantor proved that there is no one-to-one map $RtN:\R \rightarrow \N$. Show that Cantor's result implies cantorthm{.ref}.
:::
::: {.exercise title="Map lists of integers to a number" #listsinttonumex}
Recall that for every set $S$, the set $S^*$ is defined as the set of all finite sequences of members of $S$. Prove that there is a one-to-one map from $\Z^*$ to $\N$, where $\Z$ is the set of (positive, negative, and zero) integers. :::
The study of representing data as strings, including issues such as compression and error correction, falls under the purview of information theory, as covered in the classic textbook of Cover and Thomas [@CoverThomas06]. Representations are also studied in the field of data structures design, as covered in texts such as [@CLRS].
The question of whether to represent integers with the most significant digit first or last is known as Big Endian vs. Little Endian representation. This terminology comes from Cohen's [@cohen1981holy] entertaining and informative paper about the conflict between adherents of both schools which he compared to the warring tribes in Jonathan Swift's "Gulliver's Travels". The two's complement representation of signed integers was suggested in von Neumann's classic report [@vonNeumann45] that detailed the design approaches for a stored-program computer, though similar representations have been used even earlier in abacus and other mechanical computation devices.
The idea that we should separate the definition or specification of a function from its implementation or computation might seem "obvious," but it took quite a lot of time for mathematicians to arrive at this viewpoint.
Historically, a function was identified with the rule or formula for computing it; the modern notion of a function as an arbitrary mapping, divorced from any particular means of computing it, emerged only in the 19th century.
We have mentioned that all representations of the real numbers are inherently approximate. Thus an important endeavor is to understand what guarantees we can offer on the approximation quality of the output of an algorithm, as a function of the approximation quality of the inputs. This question is known as the question of determining the numerical stability of given equations. The Floating-Point Guide website contains an extensive description of the floating-point representation, as well as the many ways in which it could subtly fail; see also the website 0.30000000000000004.com.
Dauben [@Dauben90cantor] gives a biography of Cantor with emphasis on the development of his mathematical ideas. [@halmos1960naive] is a classic textbook on set theory, also including Cantor's theorem. Cantor's Theorem is also covered in many texts on discrete mathematics, including [@LehmanLeightonMeyer, @LewisZax19].
The adjacency matrix representation of graphs is not merely a convenient way to map a graph into a binary string, but it turns out that many natural notions and operations on matrices are useful for graphs as well. (For example, Google's PageRank algorithm relies on this viewpoint.) The notes of Spielman's course are an excellent source for this area, known as spectral graph theory. We will return to this view much later in this book when we talk about random walks.
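As a small illustration of this viewpoint, one can map a directed graph on vertex set $[n]$ to a binary string of length $n^2$ by flattening its adjacency matrix row by row (the helper name below is hypothetical):

```python
def graph_to_string(n, edges):
    """Represent a directed graph on vertices 0..n-1 as the n*n bit
    string obtained by reading its adjacency matrix row by row."""
    adj = [["0"] * n for _ in range(n)]
    for (u, v) in edges:
        adj[u][v] = "1"  # bit (u*n + v) records the edge u -> v
    return "".join("".join(row) for row in adj)

# the directed triangle 0 -> 1 -> 2 -> 0
print(graph_to_string(3, [(0, 1), (1, 2), (2, 0)]))  # 010001100
```

Note that this map is one-to-one for a fixed $n$, which is exactly what a representation scheme requires; recovering the graph from the string just reverses the row-by-row reading.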