Friday, December 26, 2008

Linear Algebra


What do you say about Linear Algebra? You learn Gaussian elimination and vector spaces and you're done -- unless the textbook your teacher chooses insists that you approach LA from a geometric perspective. Then you must interpret systems of linear equations as hyperplanes and such craziness.

To summarize, there are two ways of representing a subspace V ⊆ R^n:
1. V = Span(v1, …, vk) for appropriate vectors v1, …, vk. This is the generalization of our parametric representation of lines and planes in R^3.

2. V = N(A) for an appropriate m x n matrix A. This is the representation by a homogeneous system of linear equations, in which we interpret V as the intersection of the m hyperplanes defined by Ai · x = 0, where Ai is the i-th row of A.
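The two representations are easy to play with numerically. Here is a minimal NumPy sketch (my own illustration, not from any course): a basis for N(A) can be read off the SVD, and every basis vector then satisfies the hyperplane equations.

```python
# Sketch: computing a spanning set for the null space N(A) via the SVD.
import numpy as np

def null_space_basis(A, tol=1e-12):
    """Return a matrix whose columns span N(A)."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:].T  # rows of V^T beyond the rank span the null space

# V = N(A) for the single hyperplane x + y + z = 0 in R^3
A = np.array([[1.0, 1.0, 1.0]])
B = null_space_basis(A)       # two columns: V is a plane (Span form)
print(B.shape)                # (3, 2)
print(np.allclose(A @ B, 0))  # each basis vector satisfies Ai . x = 0
```

So the same V shows up once as Span(columns of B) and once as N(A).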

That's it, enough said.

Wednesday, December 24, 2008

Advanced Calculus = Real Analysis




Newton and Leibniz are both credited with inventing calculus around the turn of the 18th century. A lay person might think that these geniuses both wrote a 300-page math textbook on the subject to gain fame -- wrong! They had the key ideas regarding rates of change (differentiation) and its opposite (integration). The ideas were so powerful that physicists, astronomers, and engineers started to use the new math to solve the pressing problems of the day. Everyone was happy except the mathematicians who came later and introduced functions and series that the undeveloped calculus could not address.


The Advanced Calculus course I just completed used the book "A Radical Approach to Real Analysis" to teach the foundations of calculus from an historical perspective. The book cites the work of Fourier, Cauchy, Abel, and Dirichlet (there were certainly others) as the 19th century mathematicians who laid the rigorous foundation for calculus so that strange new functions like the one below could be analyzed.



Fourier Series


I perceived two central tools used in the area of mathematics called real analysis (sometimes called the "epsilon/delta game" or "epsilonics") -- the Mean Value Theorem and the Uniform Convergence theorems. Every student of calculus should remember the Mean Value Theorem:



Let y = f(x) be a function with the following two properties: f(x) is continuous on the closed interval [a,b], and f(x) is differentiable on the open interval (a,b). Then there exists at least one point c in the open interval (a,b) such that:

f'(c) = (f(b) - f(a)) / (b - a)

This is an incredibly powerful theorem that can be used to prove numerous results. Here's something I had to prove as a homework assignment:
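The exercise itself appeared as an image that hasn't survived, so here is a representative result of the kind the MVT proves in two lines (not necessarily the original assignment): sine can't change faster than its argument.

```latex
\text{Apply the MVT to } f(x) = \sin x \text{ on } [a,b]:
\quad \exists\, c \in (a,b) \text{ with } \cos c = \frac{\sin b - \sin a}{b - a}.
\text{Since } |\cos c| \le 1 \text{, it follows that }
|\sin b - \sin a| \le |b - a| \text{ for all real } a, b.
```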


The notion of Uniform Convergence and the theorems related to it really gave mathematicians a handle on functions defined as the sum of infinite series of functions (e.g., Fourier series). This topic is a little too complex to discuss here except to summarize that a uniformly convergent series of functions can be integrated term by term and potentially differentiated term by term (for the latter, the series of derivatives must also be uniformly convergent).


Generally, if a sequence of functions is not uniformly convergent, it fails at some troublesome point. Consider the function:
on the interval [0,1]. If you take enough terms (increase the value of N), this function behaves like the zero function, f(x) = 0, at every point of [0,1] except near 0, where the function "blows up" as shown in this plot:





Simply put, the convergence fails to be uniform near x = 0 (see the big hump just to the left of .2).
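The formula was lost with the image, but a standard textbook example with exactly this behavior is f_N(x) = N x e^(-N x^2). A quick numerical sketch (my own, under that assumption) shows the pointwise limit is zero while the humps keep growing, so term-by-term integration gives the wrong answer:

```python
# A standard example of pointwise-but-not-uniform convergence on [0, 1]
# (an assumption -- the original post's formula was in a lost image):
#     f_N(x) = N * x * exp(-N * x**2)
import math

def f(N, x):
    return N * x * math.exp(-N * x * x)

xs = [i / 1000 for i in range(1001)]   # grid on [0, 1]

# Pointwise: at any fixed x, f_N(x) -> 0 as N grows.
print(f(1000, 0.5))                    # essentially 0

# Not uniform: the hump near x = 1/sqrt(2N) grows like sqrt(N/2e).
for N in (10, 100, 1000):
    print(N, max(f(N, x) for x in xs))

# Term-by-term integration fails: the integral of f_N over [0, 1]
# is (1 - exp(-N))/2 -> 1/2, but the integral of the limit is 0.
print(0.5 * (1 - math.exp(-1000)))     # ~0.5, not 0
```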



All of this analysis helped put calculus on a rigorous footing going into the 20th century and enabled the production of the calculus tomes (mine is almost 900 pages!) from which many of us studied. So even though calc students disdain epsilon/delta arguments and the like, I salute the pioneers of calculus. Hail to Fourier, Cauchy, Abel, and Dirichlet!


Sunday, July 27, 2008

Grad Math - Those Super Scandinavians

Tore Engset

This summer's (2008) graduate course was Mathematical Modeling. There was something suspicious about the enrollment -- the course was filled to capacity with 25 students. Reason? The teacher was considered an easy grader.

The course required a presentation on a math modeling topic of one's own choosing. At the very start of the course I chose the "floating license problem" which has to do with determining how many floating (concurrent) licenses to purchase for a community of users. The floating license model is one of many ways that commercial software is licensed. The idea is that you don't have to buy every user a license. You buy a few with the knowledge that at peak usage some users may have to wait to obtain a license to use the software.


Agner Erlang
Now this may sound like a modern problem because this form of licensing is at most 40 years old. However the problem is a classic one from queuing theory and was solved when the first telephone systems were being built in the early twentieth century. The research was published by Agner Erlang (1917) and Tore Engset (1918). Erlang was Danish; Engset was Norwegian.


I picked this topic for two reasons: to get credit at school and to get credit at work. At work my department manages licenses for software development tools and some of the licenses are of the floating variety. Since I had no idea where to start I did a literature search and came up with a fantastic hit in IEEE Transactions on Software Engineering -- "Determining the Proper Number and Price of Software Licenses" -- written by five Finns at the University of Turku in Finland. The theory and applications were all there in the paper.

Now my fellow students were content to regurgitate the papers on topics that they found interesting. I decided to go a mile farther and reproduce the results of the authors. In the paper they presented analytic results for different combinations of users and licenses. They also described a software simulation they performed to reinforce the analytic results.

So there you have it. I decided to implement a simulation using Mathematica. Mathematica is a symbolic math package that is partly responsible for bringing me back to higher mathematics. Here's what I had to go by (the "requirements" if you will) from the Finnish paper:

The generation of the requests is done separately for each user by reserving the given number of nonoverlapped intervals from the set of working hours. This is done by choosing the starting times randomly and then by checking whether the service intervals obtained in this way overlap. If there are overlapping intervals, the simulation generates a new set of intervals...

Here is my schedule-generating function written in Mathematica:


GenerateSchedule[p_, l_, u_] := Module[{duplicates, randTimes, schedule, j},
  (* keep drawing until the randomly chosen request times are all distinct *)
  duplicates = True;
  While[duplicates == True,
    randTimes = RandomChoice[Range[1, p], RandomChoice[Range[l, u]]];
    duplicates = (Length[Union[randTimes]] < Length[randTimes]);
  ];
  (* mark the chosen time units with 1 in an otherwise all-zero schedule *)
  schedule = Table[0, {p}];
  Do[schedule[[j]] = 1, {j, randTimes}];
  schedule
]

The meaning of the parameters:
p - number of time units for the schedule
l - lower limit for the # requests during the time period
u - upper limit for the # requests during the time period
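For readers without Mathematica, the same retry-until-distinct logic can be sketched in Python (my own port, not the code used for the project):

```python
# Python sketch of the schedule generator: pick a random number of request
# times in [l, u], drawn with replacement from 1..p, and retry the whole
# draw whenever any two requests land on the same time unit.
import random

def generate_schedule(p, l, u):
    while True:
        k = random.randint(l, u)                     # number of requests
        rand_times = [random.randint(1, p) for _ in range(k)]
        if len(set(rand_times)) == len(rand_times):  # no overlaps: done
            break
    schedule = [0] * p
    for t in rand_times:
        schedule[t - 1] = 1                          # 1-based time units
    return schedule

sched = generate_schedule(10, 4, 4)   # 10 minutes, exactly 4 requests
print(sched, sum(sched))
```

(`random.sample` would avoid the retry loop entirely, but the retry mirrors the procedure the paper describes.)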



Here is a sample schedule for 10 minutes with 4 license requests:


minute:  1  2  3  4  5  6  7  8  9  10
use:     0  1  1  0  0  1  0  0  1   0


Back to class. On the last session of class everyone did their presentations. Most of the presentations did not involve the use of any math tools. A couple did use Microsoft Excel and one used Minitab. So I guess I won with Mathematica.

Thursday, April 17, 2008

Graduate Seminar Topic - Gödel's Theorems



I took this course because it was required and because I didn't like any of the other courses offered. The easiest way to describe the course is that it's an independent study but with a group of students. Everyone picks their own topic and is required to give four half-hour presentations and write four 10-page papers. The topic I picked was Gödel's incompleteness theorems, a topic from what some people call logic and others call metamathematics. It's a great subject that few math majors even know about. I knew about it from the classic book "Gödel, Escher, Bach: an Eternal Golden Braid" by Douglas Hofstadter, which was given to me as a Christmas present when it was first published in 1979. Of course I didn't read it then. Heck, I was single, and who in their right mind reads a 700+ page technical book?

Now I must pick the main thrust of the rest of this posting. I could: 1) describe my presentations and try to teach you Gödel's results, 2) give short reviews of the books that I read for the class, or 3) describe my experience of the class. The last choice might be amusing since the other students' topics ranged far and wide. Some topics were poorly chosen and equally poorly presented. A couple of the topics were very amusing, like the construction of magic squares or the history of computational devices, including finger counting to 100,000. I don't think that I can really explain the theorems before you get bored, so I'll review my sources for the presentations and the papers.

"Gödel, Escher, Bach" or GEB is certainly a classic. Only a portion is about Gödel's theorems, of course. In general it's hard to say what this book is about because it's about so many different things. Many Escher drawings are reproduced and the text explains their significance in context of recursion, self-reference, and self-replication. Bach's fugues are analyzed in a similar way. And the book amusingly explains basic logic and Gödel's theorems. The author develops some fantastic analogies to give you different ways to understand what's going on. He compares the Gödel sentence to a vinyl record that's specifically designed to break the record player that will play it (nowadays, think of an mp3 file that's designed to break the player that will play it). And he invents amusing, yet deeply philosophical, dialogs between Achilles, the Tortoise, and the Crab. And let's not overlook the theme of Zen Buddhism that runs through the book.

My favorite takeaway from this book is a diagram that Hofstadter devised that explains Gödel's theorems visually. Here it is:



Gödel basically proved that a formal system designed to encompass arithmetic cannot prove all arithmetic truths. There are unreachable truths that are beyond the formal system. That's the left side of the picture. Since the book covers Eastern thought, or at least Zen thinking, he included the opposite of formal arithmetic on the right. In that system you start with negative axioms and prove the negation of theorems. But that side too is incomplete since it has unreachable falsehoods.

So now you do know Gödel's results. I bit the bullet and read the entire book (742 pages) and recommend it to anyone (who really, really likes math, logic, programming, or artificial intelligence).

My primary source was "A Profile of Mathematical Logic" by Howard DeLong. This book is a readable history of logic from the ancient Greeks up through the middle of the 20th century. DeLong gives a good treatment of Gödel's theorems, adhering very closely to Gödel's research paper that was published in 1931. I "borrowed" the whole progression for my presentations/papers from this book. The scholarship behind this book is superb, as the bibliography is 24 pages long. Highly recommended for the philosophy or math lover.

In 1958 a small paperback entitled "Gödel's Proof" was written by Nagel and Newman. This book is highly readable. If you had to read a book on Gödel this should be the one.

Next is a collection of scholarly papers on metamathematics titled "From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931." This collection includes two of Gödel's most famous papers translated into English. One proves that first-order predicate calculus is complete; the other, that formal arithmetic is incomplete.

My last source was a brand new book "An Introduction to Gödel's Theorems." This is a serious textbook designed for a one or two semester course leading up to the theorems. It is as much fun to read as a standard logic textbook. In fact I found a number of other books like this that were written for a course taught by the book's author.

About Kurt Gödel

To say that Gödel was a genius is an understatement. The originality of his first incompleteness proof is breathtaking. He has been called the greatest logician since Aristotle. Unfortunately he suffered from paranoia and hypochondria most of his adult life. In his 72nd year he literally starved himself to death fearing that his food was poisoned.

Thursday, January 17, 2008

My Mathematical Biography

I had to write this for a course I'm taking so I thought that I might as well share it with the world.

Currently I’m a first-year graduate student in Villanova’s master’s degree mathematics program. A lot of my friends have asked me, “Are you crazy? Why are you doing this?” To them I give a short reply, “I like math.” Besides liking math, I have to catch up to the rest of my family. My two brothers have PhD degrees (not in math). My wife has two master’s degrees (certainly not in math). I feel like the dummy in the family holding just an MS degree in computer science (from Villanova, of course).

My interest in math was piqued by raising my two sons. One is now a sophomore at Tufts University. He never needed any help with his homework and went through his school district’s “gifted program.” As a gifted student in mathematics he was compelled to participate in the MathCounts competition during middle school. I went to the first competition and saw the team do poorly due to lack of preparation. So I called the teacher in charge the next year and asked if I could assist coaching the team. He said “sure,” and by eighth grade the team placed 5th in Chester County, and one member of the team finished 6th individually.

I learned that having one very bright son does not guarantee two bright sons. My other son is not academically oriented. I tutored him a lot in math and had to review his textbooks closely in order to explain the concepts the way that I thought they were presented in class. I was amazed at how much new material is taught in geometry compared to what I learned many, many years ago. The same goes for Algebra 1 and 2, although the difference was not so dramatic in these subjects.

So I re-entered the field of mathematics without a definite goal in mind, knowing that I received a random mathematics education as an undergraduate. I say random because I don’t have a good foundation in calculus (it’s a long story) and the only undergraduate courses that I really enjoyed were number theory and abstract algebra. I majored in math as an undergraduate because math was one subject where there was absolutely no BS (I don’t mean Bachelor of Science here). I’m hoping to squeeze by on the required analysis courses, having strengthened my calculus by taking the MIT OpenCourseWare course in single variable calculus (using a great textbook by George Simmons).

I guess I may teach a few math courses since an early retirement looks feasible for me. I did try teaching computer science in the early eighties but I didn’t like it at the time. I do know that I enjoy being a student again.

Perhaps the real reason that I’m studying math again is to impress a college friend who is a bona fide mathematician. His name is John P. D’Angelo and he’s a tenured professor at the University of Illinois at Urbana-Champaign. John is a legend in my mind, and I hope to see him again some day. Believe it or not, as an undergraduate his teachers would have him read their research papers to look for errors prior to submitting them for publication. He earned his PhD from Princeton University during the period when there was a phantom in Fine Hall. The phantom was named John Forbes Nash, and John spoke with him on many occasions.

So if you know of any good employment opportunities for a “seasoned” worker with two master’s degrees, please let me know.