Tuesday, June 28, 2005

Update and Apology

So I've been vacationing a bit and am thus way way behind on math posts (two, I think, is the number of my behindness), and I've actually really wanted to say something about Grokster, but haven't been able to formalize my thoughts into something resembling a post. Regardless, there will likely be at least one math post here by tomorrow; falling behinder, however, is likely in the near future (after which I'll try to catch up lots).

Tuesday, June 21, 2005

Math Post the Seventh: Kernels and Quotients

Last time I was talking about homomorphisms, the "nice maps" in the land of groups, and I gave some of the usual map definitions (injective, surjective, and so on). One of the things we (well, mathematicians -- I can't really answer for the rest of you) always want to know about maps is what the stuff over a particular point looks like; if you had a map to the integers, what stuff goes to three? Since this concept is so common, the set of stuff going to a particular point has a name: the fiber over that point. In the case of homomorphisms, the answer to this question has a simplifying wrinkle: the answer is the same for all points. To symbolify things a bit, let f:G->H be a homomorphism of groups and let h be in H. Now, say f(g)=f(g')=h, so both g and g' are in the fiber over h. Since f is a homomorphism, f(g^(-1)g')=f(g)^(-1)f(g')=h^(-1)h=e, so the "difference" g^(-1)g' between any two elements of the fiber over h is in the fiber over the identity; thus if you pick a particular element g in the fiber over h, and we call the fiber over the identity K (for reasons soon to become clear), the fiber over h is exactly gK. So all the information about the fibers is contained in the set K (called the kernel of the map f).
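To see the fiber picture concretely, here's a quick Python sketch (my own illustration, not from the post) using the homomorphism f(x) = x mod 3 from Z/6 to Z/3, written additively:

```python
# The homomorphism f: Z/6 -> Z/3 given by f(x) = x mod 3.
G = range(6)          # elements of Z/6
f = lambda x: x % 3

# The kernel K is the fiber over the identity (0 in Z/3).
K = [g for g in G if f(g) == 0]   # [0, 3]

# Every fiber is a shifted copy gK of the kernel; additively, g0 + K.
for h in range(3):
    fiber = {g for g in G if f(g) == h}
    g0 = min(fiber)                           # pick any one element of the fiber
    assert fiber == {(g0 + k) % 6 for k in K}
```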

Now, we can say some quick things about K. First, K is a subgroup of G: if two elements g and g' in K both map to the identity under f, their product maps to the product of the identity with itself, which is still the identity, and the inverse of an element of K maps to the inverse of the identity, which is the identity again. However, K has one more special property: the left and right cosets of K are the same, that is, for any g, gK=Kg (since both of these are precisely the fiber over the point f(g) in H). This implies the interesting property that gKg^(-1)=K, since this is the fiber over the identity again; the operation sending an element x to gxg^(-1) is called "conjugation by g". Now, if your group isn't abelian, conjugation may not fix all the elements of the subgroup K, but since K is a kernel, conjugation can only move an element of K to another element of K. Subgroups with the property of being fixed under conjugation are called "normal," in one of math's most badly overloaded terms.

Normal subgroups have one extra bonus fun property: their cosets form a group. Let N be a normal subgroup of G, and consider the cosets gN and g'N; if we try to multiply them in the straightforward way, we get gNg'N=g(g'N)N=gg'NN=gg'N (using Ng'=g'N, and since N is closed under multiplication, NN=N). If N weren't normal, the left and right cosets wouldn't coincide, but since it is they do, and the cosets inherit all the group properties from the same properties of G. This new group is written G/N, and is called the quotient group of G by N. The easiest examples of this can be realized by the clock arithmetic groups Z/n (and now you understand the notation; because the integers are so familiar we omit the extra Z on the end) for various integers n; remember the cosets of the subgroup H=nZ are H, 1+H, 2+H, and so on up to (n-1)+H, and adding cosets is just adding the numbers out front and subtracting off n's if the number gets bigger than n. Of course, since Z is abelian, all subgroups are normal (the conjugating element just commutes through), so you need to be more careful in general.
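Here's a small sketch (Python, my own example) of the cosets of N = {0, 3} inside Z/6 forming a three-element quotient group, including a check that adding cosets by adding representatives doesn't depend on which representatives you pick:

```python
# Cosets of the subgroup N = {0, 3} in Z/6, with coset addition.
G = list(range(6))
N = frozenset({0, 3})

coset = lambda g: frozenset((g + n) % 6 for n in N)
cosets = {coset(g) for g in G}
assert len(cosets) == 3                     # |G/N| = |G| / |N| = 6 / 2

# Well-definedness: any representatives a, b give the same sum coset.
for g in G:
    for g2 in G:
        target = coset(g + g2)
        for a in coset(g):
            for b in coset(g2):
                assert coset(a + b) == target
```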

G/N admits a nice map from G (an element g goes to the coset gN), nice in the way called "canonical," meaning that no choice is involved in determining the map -- we don't have to pick anything to make it up, it just comes to us. If you made a version of this map up, yours would be the same as mine. There are lots of non-canonical maps, and we'll meet plenty next time, but it's important to know about the differences in the kinds of nice maps.*

Next time: isomorphism theorems and factoring.

*And it's important to be specific about words; the inventors of a branch of mathematics called category theory, which is in some sense a language in which all other mathematics should be done, gave a definition for a kind of niceness they called "naturality." Fortunately for them (and us), the natural transformations of objects called functors are important and cool; unfortunately for us as students of math, people are sadly nonspecific with the word natural, rarely specifying when they mean the technical sense. One of my more trying math experiences was realizing after finishing a problem set (from the canonical algebraic geometry textbook) that when the word natural appeared in every problem it didn't just mean pleasant or straightforward but that there was suddenly much more work to do.

Friday, June 17, 2005

Math Post the Sixth: Isomorphism

This is going to verge on the philosophical, since I'll have to justify a "why" inside, but I'll kick off with the answer to the question from last time: describe all the groups of prime order. Start with the hint; pick an element g in a group G of prime order p, and say that g is not the identity. Now, g generates a subgroup, say H, and it has some nonidentity element in it, since g is such an element. Now Lagrange's theorem tells us that the order of H divides the order of G, but the order of G is prime, so H must have order either 1 or p. Since H has at least two elements (the identity and g), H has order p, so H=G. This argument shows that any group of prime order is generated by any element not the identity, has no nontrivial subgroups, and is "structurally the same" as the group where we add hours on a p-hour clock. Now the only issue is what I mean by "structurally the same."
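For a concrete check of the generation step, here's a brute-force sketch (my example) in the clock group Z/7, written additively: repeatedly adding any nonidentity element sweeps out the whole group.

```python
# In Z/7, every nonzero element generates the whole group under addition.
p = 7
for g in range(1, p):
    generated = {(g * k) % p for k in range(p)}   # 0, g, g+g, g+g+g, ...
    assert generated == set(range(p))             # the subgroup is all of Z/7
```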

Consider two groups: one is the group S_3 (underscore is ASCII for subscript) of permutations of three elements (i.e. all the moves the three-card-monte-guy can do) and the other is D_3, the rigid symmetries of an equilateral triangle. Both of these groups have six elements, and they have much in common in terms of structure, since they both can achieve any configuration of three things being moved around (you can reflect the triangle on an axis as well as rotate it, and think of it as acting on the corners). I want to say that they're the same, but I can't quite; after all, they're given in different ways, written with different letters, and so on. On the other hand, the situation is the same as if I took a clock and rubbed off all the numbers, replacing them with fruits -- I might now be saying that pear plus strawberry equals pineapple, but that doesn't stop it from being a clock. So the question is how to capture this notion of "identical except in name."

The first part of the answer starts with the notion of a homomorphism. A homomorphism is a "nice"* function between two groups, and the buzzword is that it "preserves the group structure." Homomorphisms are traditionally written with Greek letters which I'm not sure how to render here, so until I find a better way I'm going to do it with LaTeX (math typesetting) notation, which should be pretty straightforward: a small phi will be written \phi, a capital one \Phi, and so on. The symbolic way to write a map \phi from a set G to a set H is like this: \phi : G->H, where the arrow-looking thing is in fact an arrow. If G and H are groups, the map \phi is said to be a homomorphism if for any g and g' in G, \phi(gg')=\phi(g)\phi(g'). The trick to this simpleminded equation is that the multiplication is carried out in G on the left and H on the right. Now, say we have two homomorphisms which are inverses, say \phi:G->H and \psi:H->G such that doing \psi first then \phi gets you back to where you started in H and doing \phi first then \psi gets you back where you started in G (\phi(\psi(h))=h and \psi(\phi(g))=g for all g in G and h in H); then we can multiply in G by going over to H, multiplying there, and coming back and vice versa. This means that we can think of either G or H as just alternate names for the other group, which tells you that they're really the same.
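As a toy example (mine, not from the post), the map \phi(x) = 5x on Z/6 is a homomorphism that happens to be its own inverse, since 5*5 = 25 = 1 mod 6. A Python sketch, with the group written additively:

```python
n = 6
phi = lambda x: (5 * x) % n   # phi: Z/6 -> Z/6, multiplication by 5
psi = lambda x: (5 * x) % n   # 5*5 = 25 = 1 mod 6, so phi is its own inverse

# Homomorphism: phi(g + g') = phi(g) + phi(g'), both sides mod 6.
for g in range(n):
    for g2 in range(n):
        assert phi((g + g2) % n) == (phi(g) + phi(g2)) % n

# Inverse pair: psi(phi(g)) = g and phi(psi(g)) = g for all g.
for g in range(n):
    assert psi(phi(g)) == g and phi(psi(g)) == g
```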

Before concluding I'll define quickly some relevant terms. A map is said to be one-to-one or injective if no element gets hit twice; symbolically, if f(x)=f(y) implies x=y. For people who remember precalculus, this is the "horizontal line test" for a graph. A map is onto or surjective if everything gets hit. A map which is both injective and surjective is bijective. Bijective maps are invertible**, and bijective homomorphisms have inverses which are again homomorphisms; such maps are called isomorphisms.
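These definitions turn into brute-force checks on finite sets; a sketch (the function names and examples are mine):

```python
def is_injective(f, domain):
    images = [f(x) for x in domain]
    return len(set(images)) == len(images)    # no element hit twice

def is_surjective(f, domain, codomain):
    return {f(x) for x in domain} == set(codomain)   # everything hit

f = lambda x: (2 * x) % 6   # doubling on Z/6: neither injective nor surjective
g = lambda x: (x + 1) % 6   # shifting by one: a bijection (not a homomorphism)

assert not is_injective(f, range(6))
assert not is_surjective(f, range(6), range(6))
assert is_injective(g, range(6)) and is_surjective(g, range(6), range(6))
```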

To restate the conclusion of the first paragraph in nicer language, we classified all groups of prime order up to isomorphism: a group of order p is isomorphic to a clock arithmetic (or cyclic) group of order p, via the isomorphism defined by taking any nonidentity element to the generator 1.

Next time: what groups are homomorphic images of other groups.

*Practically every definition in math is a definition of "nice," though the connotation that goes with it can be anything from "super duper extra bonus spiffy" to "not something you made up just to annoy me." Most branches of math have a fundamental definition of "nice function" that follows shortly after the definition of the objects under discussion; in most algebraic disciplines what this definition should include is obvious to anybody who's been doing algebra for a while. Geometry and topology have it slightly harder.

**If you need to convince yourself of this, draw two collections of dots and some arrows from one side to the other to make a function (each dot on one side gets exactly one outgoing arrow). Also, there is no short phrase for "one-to-one and onto," however little sense that may seem to make.

Wednesday, June 15, 2005

Math Post the Fifth: Cosets, Lagrange's Theorem, Generation

I'm sorry for the lateness of this; I've had a busy weekend/early week.

I asked two questions last time, and while I was planning to answer the second one first, I realized there was a reason I asked them in that order. Silly me. The question was about the cosets of the subgroup H=5Z in the group G=Z (Z is the group of integers under addition). To answer it, let's start by doing out a few simple examples. For these, I'm going to break down and write the group operation with a + rather than simply omitting it, since if I leave it out it'll look like integer multiplication, which is not what I mean. The easiest coset to consider is 0+H; this is pretty obviously H back again. Slightly (but only slightly) less obviously, there is the coset 5+H, which is also H back again, since the element 5 is in the subgroup H. In fact, any element of H will simply give back H again as the corresponding coset, because H is closed under the group operation and inversion.

What about elements not in H? Well, 1+H is a coset -- it contains -4, 1, 6, 11, and so on in both directions. By the same principle that any element of H gives back H, any other element of 1+H gives back 1+H: 6+H=1+(5+H)=1+H. A little more thought shows that there are exactly five cosets of H, one for each of the numbers 0-4. These are the possible remainders upon dividing by five, which I'll touch on again later, but for now just think of them as elements.
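The cosets of H = 5Z can be inspected through a finite window of integers; a Python sketch (the window and names are my choice):

```python
# Cosets of H = 5Z in Z, viewed through the finite window -20..20.
window = range(-20, 21)
coset = lambda a: frozenset(n for n in window if (n - a) % 5 == 0)

assert coset(1) >= {-4, 1, 6, 11}             # 1+H contains these, as in the text
assert coset(6) == coset(1)                   # 6+H = 1+H
assert len({coset(a) for a in window}) == 5   # exactly five distinct cosets
```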

As to the second question, the first part is obvious: of course cosets of H cover G, since for any element g in G the coset gH contains g (the identity is in H). Whether they overlap is harder, and will be my first major invocation of equations on this string of posts. Say that two cosets, gH and g'H, for elements g and g' in G, overlap; this means there are elements h and h' in H such that gh=g'h', so g'=ghh'^(-1). So g and g' differ on the right by elements of H, meaning that they give the same coset by the reasoning in the first paragraph.

This non-overlapping of cosets yields an interesting result. Say the group G is finite (has only finitely many elements, like permutations of a set, symmetries of a regular polygon, or is a 'clock arithmetic' type group). Then if H is a subgroup of G, the cosets of H cover G without overlapping and are all the same size (they all have the same number of elements as H). This means that the number of elements in H divides the number of elements of G, so, for instance, a group with twelve elements can only have subgroups of size 1, 2, 3, 4, 6, and 12. This fact is called Lagrange's Theorem, and has a fairly giant list of obvious corollaries of decreasing size (the professor who taught me this quipped, "we'll eventually get to the empty corollary."), which eventually end in most of the facts underlying public-key cryptography.
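Lagrange's Theorem can be verified by brute force for a small group; this sketch (mine) enumerates every subgroup of Z/12 and collects the sizes that occur:

```python
from itertools import combinations

n = 12
elements = range(n)

def is_subgroup(S):
    """Contains the identity and is closed under addition mod 12 and negation."""
    return 0 in S and all((a + b) % n in S and (-a) % n in S
                          for a in S for b in S)

subgroup_sizes = {len(S)
                  for r in range(1, n + 1)
                  for S in combinations(elements, r)
                  if is_subgroup(set(S))}
assert subgroup_sizes == {1, 2, 3, 4, 6, 12}   # every subgroup size divides 12
```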

For next time, I'll leave you with one definition and one question which should now be soluble. If S is a set of elements in a group G, the subgroup generated by S, written with angle brackets usually but because I can't remember the way to get escape characters in HTML written here as [S], is the subgroup you get by starting with S and adding all the elements you can make by multiplying and inverting until you can't get any more. As an example, the subgroup H from the example way back at the beginning is generated by the number 5 (or equally by -5), so I might also write it as [5]. With this definition, the following question should be easy: describe all groups of prime order (that is, with a prime number of elements). It sounds big, but in spite of this it's easy enough I can bear to give only one hint: pick an element g in G.

Next time: what that act of "describing" really was...

Thursday, June 09, 2005

Math Post the Fourth: Abelianness, Subgroups, Closure

When last we left our heroes, they were groups. They were a giant menagerie of strange examples, but they were groups. A new reader named Vito commented that I should talk about a thing called closure, so I will (disclaimer: I was going to anyway).

But before I get to the new stuff, something I forgot to mention last time. Some groups have an operation that, in addition to being associative, is also commutative. That is to say they satisfy the relation ab=ba for every a and b. Many familiar examples have this property, like the integers, like the integers on a clock (aka modulo some number n), or even like the real numbers (not including zero) under multiplication. Other examples don't: for instance, if we think about the permutations of a set of three elements, switching one and two first and then switching two and three gives a different result from if we do the switching in the opposite order. Groups that satisfy this property (like the integers) are called abelian, after a mathematician named Abel who was one of the fathers of group theory. This gives rise to one of the traditional math jokes: What's purple and commutes? An abelian grape. The more you know...
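The three-element example can be checked directly; in this sketch (my notation) permutations are dictionaries and composition applies the right-hand map first:

```python
# swap12 switches 1 and 2; swap23 switches 2 and 3.
swap12 = {1: 2, 2: 1, 3: 3}
swap23 = {1: 1, 2: 3, 3: 2}

compose = lambda f, g: {x: f[g[x]] for x in g}   # do g first, then f

# Switching 1,2 first then 2,3 differs from the opposite order.
assert compose(swap23, swap12) != compose(swap12, swap23)
```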

Sometimes, groups have other groups living inside them. For instance, the integers under addition are a group; the integers divisible by three, also under addition, are also a group. Permutations of five elements are a group; permutations of the first three of those five are too. One group sitting inside another group is called a "subgroup," just like a subset in quasi-technical parlance (ask me about the prefix quasi- in math sometime). This is where we care in an official sense about the property of closure: a subset of an ambient structure with an operation(s) on it is said to be closed under that(/those) operations if, when you do those operations to all the elements of the subset in all the possible ways, you don't get any new stuff. In our case, that is to say that if we multiply two elements together or invert a single element, we stay in the set; such sets are subgroups.
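A closure check is easy to write down; here's a sketch (my example) for subsets of Z/10, written additively:

```python
# Closure test for subsets of Z/10 under addition mod 10 and negation.
n = 10
def is_closed(S):
    sums_ok = all((a + b) % n in S for a in S for b in S)
    inverses_ok = all((-a) % n in S for a in S)
    return sums_ok and inverses_ok

assert is_closed({0, 2, 4, 6, 8})   # the evens form a subgroup of Z/10
assert not is_closed({1, 2})        # 1 + 2 = 3 escapes the set
```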

In order to set up for next time, let me start with this new definition: given a subset S of a group G and an element g of G, the set gS is the set of elements in S multiplied on the left by g, and similarly for Sg; in the same way, if T and S are subsets of G, the set TS is the set of elements of G of the form ts, where t is in T and s in S. If H is a subgroup of G, we call a set of the form gH a left coset of H in G (there's actually no universal standard on this; a set gH is called a left coset roughly as often as it's called a right coset. I'm sticking with where the element is). The questions for next time are these:

1) What are the cosets of the subgroup 5Z in Z, where Z is the integers?

2) Given any group G and any subgroup H, do the left cosets of H cover G? Do they overlap?

Until next time.

Sunday, June 05, 2005

Math Post the Third: Groups and Axioms

At last, I'm able to tackle my actual topic of interest: group theory. Groups are one of the simplest (not easiest) algebraic structures around, but in spite of the ease of giving an axiomatic definition, I'm going to start with examples before the definition; it'll make things much easier.

The first example of practically everything in algebra is the same: the integers, those guys I've been talking about for a week, positive and negative whole numbers. I'll ignore multiplication for a moment and focus on the fact that you can add two integers together. And that, my friends, is algebra.

The canonical second example of a group is still just adding numbers, but adding them on a clock face. This is called "integers modulo twelve," or mod twelve for short, and the idea is just that twelve is the same as zero and otherwise we add normally (so 3+11=2*, and so on). Obviously twelve can be replaced by whatever number I want here; the concept will remain the same.

Less additively, imagine drawing a square on a table and putting down a square piece of paper with labelled corners (the same front and back) that fits directly on top of it. If you wanted to, you could pick up the paper and put it down with the corners in a different position. In fact, there's something like a "multiplication" that you can do on these motions: make a new motion by taking two old motions, doing one, then doing the other. This just physically realizes the symmetries of the square (rotate, reflect, and so on). In fact, symmetries of things (broadly construed) provide a wonderful source of groups (experts may here recall the old saw about the categorical definition of group**, which I'll probably explain at some point). Another way to see this is to think about a Rubik's cube -- beyond just being a cube and having those symmetries, there are others you can get from the operation of twisting a face, since twisting a face keeps the cube a cube and just moves the colors around.

More abstruse examples abound. For anyone who recalls matrices, the set of n by n matrices with nonzero determinant is a group with matrix multiplication; so is the set of matrices with determinant equal to one (both of these are actually symmetry groups in disguise). The permutations of a set also constitute a group, called of course a permutation group. If you write down words composed of the letters a, b, a^-1, and b^-1, stipulating that a letter and its inverse cancel if they appear next to each other, you get a group by concatenating (and then reducing) words; just write one, then the other. And this is only the beginning; stranger examples will appear soon enough.
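The word-concatenation example can be made concrete; in this sketch (my notation) capital letters stand in for inverses, so A = a^-1 and B = b^-1:

```python
# Reduce a word in a, b, A (= a^-1), B (= b^-1) by cancelling adjacent inverses.
inverse = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def reduce_word(word):
    out = []
    for letter in word:
        if out and out[-1] == inverse[letter]:
            out.pop()             # letter cancels with the previous one
        else:
            out.append(letter)
    return ''.join(out)

concat = lambda u, v: reduce_word(u + v)   # the group operation

assert reduce_word('abBA') == ''           # everything cancels: the identity
assert concat('ab', 'Ba') == 'aa'          # 'abBa' reduces to 'aa'
```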

As you might expect, a definition that actually captures all of these examples is likely to be a little out-there. It'll involve many symbols when I give it, but first I'd like to say a word or two about axioms. Obviously there is a lot of theory you can give about all of these examples individually, but they do all involve some common features, and capturing them in and then working with an axiomatic definition permits you to apply old theory in new situations (for the computer scientists in the audience, think of a well-documented code library). While it'll be a while before examples of this come up (though they will, fear not), it'll be handy to start collecting axiomatic structures now.

So, a group is a set, say G, with a multiplication, written here by juxtaposition of elements, which has:

  • Associativity: given any three elements g,h, and k in G, g(hk)=(gh)k.
  • A unit element: there is an element e in G such that for any g in G, eg=ge=g.
  • Inverses: for any g in G, there is an element g^-1 in G such that g(g^-1)=(g^-1)g=e.

Associativity just means that you don't have to worry about where you put the parentheses in a long list of elements being multiplied together; this is usually either pretty hard or pretty easy to prove (there are two or three examples above which require individual lengthy proofs of this; the others can be handled easily together if you're slightly clever). The other two are easier to understand; in fact, I'm going to leave understanding what they mean in all the above examples as an exercise. You can answer in comments for the harder ones (though please don't post a first response if you already know this stuff) the following question: for each example of a group listed above, what is the unit element, what is an inverse element, and how would you get one? It shouldn't take more than a few minutes of thought, but it's a good place to start getting a feel for how to think about this stuff.
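The three axioms can be verified by brute force for a small example; a sketch (my choice of Z/12 under addition):

```python
# Brute-force check of the three group axioms for Z/12 under addition.
n = 12
G = range(n)
op = lambda a, b: (a + b) % n

# Associativity: g(hk) = (gh)k for all triples.
assert all(op(g, op(h, k)) == op(op(g, h), k) for g in G for h in G for k in G)

# Unit element: e = 0 satisfies eg = ge = g.
assert all(op(0, g) == g and op(g, 0) == g for g in G)

# Inverses: for each g, (-g) mod 12 does the job.
assert all(op(g, (-g) % n) == 0 and op((-g) % n, g) == 0 for g in G)
```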

Next time: subgroups, cosets, quotients, and one of my favorite cute little theorems ever.

*Technically, that equals sign should have three bars in it, signifying that it means equality modulo something, not normal integer equality. It's not strictly required, since the addition is really taking place within the mod twelve group structure anyhow, but it's sometimes nice to have.

**"A group is a category with only one object in which every morphism is an isomorphism."

Friday, June 03, 2005

Math Post the Second: On Factoring and Fermat

Prime factorization of integers, as promised.

This is a strongly intuitive thing. Prime numbers don't have any divisors besides one and themselves; any number not a prime has several prime divisors, and since when you divide a number by something it gets smaller, if you just keep dividing out primes you'll eventually just get a big list of primes with no more composite part left. That part isn't actually so strange or difficult to understand; the other part is very obvious and yet extremely subtle. It's this: that the factorization is unique, or at least the list of factors is unique up to reordering (six is two times three as much as it is three times two, but the way this is non-unique is pretty dumb, so we don't count it). Part of the subtlety is that the way to see the uniqueness isn't something I can explain as glibly as I can the existence; I hope another part will be illustrated by the following story.

Some people in the world have still heard of Fermat's Last Theorem; it got lots of play ten years ago, when it was just proven after a 350-year wait. The statement is simple enough: for any integers x, y, z, and n, none zero and n bigger than two, it is not the case that x^n+y^n=z^n. What most moderately informed non-mathematicians know is that the statement appeared in the margin of a book owned by Fermat accompanied by a notation to the effect that he knew a beautiful proof, but that the margin was too small to contain it. What moderately informed non-mathematicians generally do not know is that Fermat, like Newton, was quite the ass. Fermat was a lawyer and did math in his spare time, which is well and good until it turns out that the primary way he did math was by sending other mathematicians letters that amounted to schoolyard teasing: "I can prove this and you-oo ca-aan't!" It wasn't yet the tradition of his day to actually tell people how you proved the things you proved (you wouldn't believe how long the solutions to the cubic and quartic equations went unpublished; solving cubics was a courtly version of a party trick for some of these guys), but he did occasionally do it. Thing is, knowing his methods, some very smart guys in the 1800s came up with what we still think was the proof he had.

It doesn't work. And the reason it doesn't is that prime factorizations stop being unique when you start looking at things only slightly more complicated than integers, but it's desperately easy to assume that uniqueness still holds, particularly if you live hundreds of years ago and don't have access to modern algebra. In fact, this failure was only discovered because of this proof, and the methods for getting around it (and what grew out of them) currently go by the name of algebraic number theory. So uniqueness of factorization is a bit subtle. I'll assume the fact until I've built up the other machinery needed to explain it, but just understand that it's subtle.

Next time: introduction to Group theory, my actual honest-to-goodness field.

(For the record, the example of non-unique factorization appears if you start with the integers, but just for kicks throw in a square root of negative five and all the sums and products you can make with integers and this new number. 'Prime' still makes sense, as does 'factorization,' as does 'unique.' The trouble is that the numbers 2, 3, 1 + sqrt(-5), and 1 - sqrt(-5) are all prime, and 6=2*3=(1 + sqrt(-5))(1 - sqrt(-5)), so factorization is still good but uniqueness is out the window)
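For the curious, the two factorizations of six can be checked by representing a + b*sqrt(-5) as the pair (a, b); the norm a^2 + 5b^2 is multiplicative, and since no element has norm 2 or 3, none of the four factors can split further except trivially (this representation and the helper names are my own):

```python
# Numbers a + b*sqrt(-5) as pairs (a, b), with the norm N = a^2 + 5b^2.
norm = lambda a, b: a * a + 5 * b * b
mul = lambda x, y: (x[0] * y[0] - 5 * x[1] * y[1], x[0] * y[1] + x[1] * y[0])

# 6 factors two genuinely different ways:
assert mul((2, 0), (3, 0)) == (6, 0)        # 6 = 2 * 3
assert mul((1, 1), (1, -1)) == (6, 0)       # 6 = (1 + sqrt(-5))(1 - sqrt(-5))

# No element has norm 2 or 3 (larger |a| or |b| only makes the norm bigger),
# so 2 (norm 4), 3 (norm 9), and 1 +/- sqrt(-5) (norm 6) can't factor further.
achievable = {norm(a, b) for a in range(-3, 4) for b in range(-2, 3)}
assert 2 not in achievable and 3 not in achievable
```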