Thursday, October 17, 2013

Announcing Testosterone Driven Development (TDD)



It seems like every few months someone announces a new development methodology that will result in drastic improvements to software production and is the "one true" way to develop software. Perhaps creating your own development methodology is a rite of passage that all mindful developers must pass through. You may be reading this and thinking "Oh great, here we go again. Some nutjob is going to announce his new development methodology." Well, I'm sorry to disappoint but I am not going to announce a new creation today. The development methodology that I'm introducing today is not one that I created but one that exists naturally in the universe. I am not its creator. I am only its discoverer and spokesperson. This development methodology exists as a fundamental building block, like atoms, quarks, energy and bacon. Today I'm proud to announce a development methodology I lovingly call "Testosterone Driven Development".

I would love to tell you about how Testosterone Driven Development will change the world, but that's unnecessary. It already has. I'd love to tell you about how it will become so popular it will sweep the globe, but it already has. It's only a matter of time before you encounter this methodology, so you better start studying now, or you risk being left behind. There's a chance some members of your team are using it already, but perhaps, like the spinning of your CPU's fan, its presence is so constant and unchanging that it's become invisible to you, only noticeable when you focus your caffeine-sharpened senses on it.

As your humble guide, I will share with you the gospel of Testosterone Driven Development (hereafter called TDD, because no one has ever used that acronym before).

1. TDD'ers never profile before optimizing

Because you have intimate working knowledge of your program, and all of the libraries it uses, and the libraries they use, and the kernel your program is running on, and the machine code instructions that will be generated by the compilers and the number of cycles each machine code instruction will take, and the scheduling of threads, the lock contention, the interaction of the garbage collector, and its I/O latency. Since you have all of that in your head, you know exactly where the bottleneck is and can optimize it by injecting hand written assembly into your code. Anyone that doesn't hold all of that state in their brain is a wuss and should not be allowed on your team.

2. TDD'ers never compile before pushing

Your fingers are so full of testosterone that they simply can't type code that doesn't compile. In fact your code is so manly it will even compile in the presence of syntax errors. Does anyone really think the 98 pound weakling of a compiler will even be able to emit a warning with your code? NO!

3. TDD'ers never write unit tests

Unit tests are for developers that write bugs. TDD'ers don't write bugs. If you think your code needs unit tests, you should put your keyboard back in your purse and go home.

4. TDD'ers never write comments

Comments only encourage the weak to modify your code. You don't want them leaking any of their estrogen on your manly code, so don't encourage them by adding comments.

5. TDD'ers always use global variables

Your testosteronian influence is global. Why shouldn't your variables be too? Yes there are some little pre-pubescent boys that can't hold global state and every execution path that could mutate the global state in their tiny immature minds while simultaneously hammering out studly man code. But do you really want those little boys hanging around? Of course not! Their screams might interrupt your flow and their tears might fall into your keyboard. Or worse, your Red Bull.

6. TDD'ers never use third party libraries

You wouldn't use a junkie's syringe now would you? You don't want their diseased guts in your pristine god-like body. You know third party libraries are nothing more than binary encoded diseased guts. They were written by people who are dumber and weaker than you. You know that third party libraries only work for other developers that don't write manly software.

7. TDD'ers never consult with the team before making big changes

Real men don't ask permission to go to the bathroom. Why would you ask permission to make sweeping architectural changes to your team's code base? You know they aren't smart enough to understand your plan anyway, just make the change and let them bask in your glory when you're done.

8. TDD'ers never use revision control

Sure if you start at a company and they keep their code in git, mercurial or some other garbage system, you'll clone the repo and get to work. But you certainly aren't going to merge or check in small changes on a daily or weekly basis, you'll wait until it’s all done. Delivering source code to the team is like giving your mom a puppy for Christmas. Sure you could give her the puppy in bits and pieces and then on the last day give her a needle and thread for her to assemble the fur and guts. But you know your mother will want the puppy delivered in one piece. Likewise your team won't be able to comprehend the brilliance of your code unless you deliver it in its final state.

9. TDD'ers only use revision control to revert other people's changes

OK so there is one and only one reason for a man to use revision control and that is it makes it really easy to throw away others' crap commits. Let's go back to the puppy analogy. So here you are boxing up a puppy to mail to your mother and one of your "teammates" says it would be a smart idea to shove his pet cockroach right in the puppy's eye socket. Do you try and "merge" the roach into the puppy's eye? Do you try and make the puppy's eye work around the roach? No! You’d never allow someone to do that to the puppy and so you should never allow anyone to do that to your code. If someone commits some code that conflicts with your masterpiece, revert it immediately and then get back to the job of being a real man.

10. TDD'ers don't need bug trackers

Bug trackers track bugs. You don't write bugs, you write beautifully crafted perfection. Setting up software to track the zero bugs you are going to write is a waste of time. Even if you did write a bug just for fun, there are no bug trackers which are manly enough to be able to track the hypothetical bug that you would write. You would crash the bug tracker, and reporting that bug back to the sissy that wrote the bug tracker would crash it too.

11. TDD'ers modify production code

Sure you could first modify the code on your machine and then push the code to a staging area for some idiot to "test" and then push your code to production. You could also not date that hot girl in your class but first date her grandmother, and then her mother and then finally date her. But your testosterone doesn’t play stupid games. It’s laser focused on getting hot chicks and writing sick code. You focus your testosterone fueled laser beams on hand optimizing the code for the production server environment. You make your change to production then go home early to take that hot girl to the monster truck rally.

My hairy man friend: keep your eyes open for examples of TDD in the wild, your testosterone focused on radiating chiseled bits of code, your Red Bull close, and don't you ever let sissies tell you how to do your job.

Special thanks to Dave Smith for his encouragement, ideas, and understanding of the English language.

Wednesday, June 13, 2012

Unit testing isn't enough. You need static typing too.

When I was working on my research for my Master's degree I promised myself that I would publish my paper online under a free license as soon as I had graduated. Unfortunately there seems to be an unwritten rule of graduate school research: you spend so much time focusing on a single topic of study that by the time you graduate you are sick of it. So more than a year later I'm finally putting my paper online. For those that don't want to read the full paper (it's not terribly long for a research paper at 60 pages, but it's no tweet either) I'll include a shorter summary below. The summary will omit some important information, so if you would like to provide constructive or destructive feedback I ask that the feedback be directed towards the full paper and not the quick summary.

For my research I wanted to test the frequently cited claim by proponents of dynamically typed programming languages that static typing is not needed for detecting bugs in programs. The core of this claim is as follows:
  1. Static typing is insufficient for detecting bugs, and so unit testing is required.
  2. Once you have unit testing, static type checking is redundant.
  3. Because static typing rejects some valid programs, static typing is harmful.

Despite having heard and read this claim many times, I couldn't find any research to back it up. So I decided to conduct an experiment to see if in practice unit tests really did obviate static typing for error detection. I also wanted to see if developers frequently use dynamic constructs that can't be expressed in a statically typed programming language.

My experiment would consist of finding examples of open source, unit tested programs written in a dynamically typed programming language and manually translating them into a statically typed programming language. I would then quantify how many (if any) defects were detected by the type checker, and how many dynamic constructs couldn't be directly expressed due to being rejected by the static type checker. I should emphasize that for this experiment I would *not* be simply rewriting the program, but doing a direct line by line translation from one programming language to another. I would not count defects that were not detected by the type checker, nor any defects that could not be reproduced in the original program.

Before starting the experiment I needed to choose a dynamically typed programming language that I would translate programs from. I also needed to choose a statically typed programming language that I would translate those programs to. The criteria for the dynamically typed programming language were as follows:
  • The language should be dynamically typed
  • The language should have support for and a culture of unit testing
  • The language should have a large corpus of open source software for studying
  • The language should be well known and considered a good language among dynamic typing proponents
With these criteria in mind I selected Python. The next step was to choose the statically typed programming language. For this selection I used the following criteria:
  • The language should be statically typed
  • The language should execute on the same platform as Python
  • The language should be strongly typed
  • The language should be considered a good language among static typing proponents
I selected Haskell for the statically typed programming language.

The next step was to choose some unit tested programs to translate from Python into Haskell. I randomly picked four projects, the Python NMEA Toolkit, MIDIUtil, GrapeFruit, and PyFontInfo, from the https://code.google.com/ and https://bitbucket.org source code hosting sites.

The Python NMEA Toolkit

The translation of the Python NMEA Toolkit from Python to Haskell led to the discovery of nine type errors. Three of them could be triggered by malformed input and the other six by incorrect usage of the API. Only one of the type errors would have been guaranteed to be discovered had full unit test coverage been employed. Additionally there was one run time error that could be eliminated once static typing was applied. Two unit tests could have been eliminated as their only function was to perform type checking. No dynamic constructs were used that could not be directly translated into Haskell.

MIDIUtil

The translation of MIDIUtil led to the discovery of two type errors. Only one of them would certainly have been caught had full unit test coverage been employed. An additional run time error could also be eliminated by static typing. None of the unit tests only tested for type safety, so none of them could be eliminated. The MIDIUtil code did use struct.pack and struct.unpack, which could not be directly translated as they both rely on format strings that determine the types of their arguments and return values. However, in all cases the format strings were hard-coded, so the Haskell version could use hard-coded functions in place of the hard-coded format strings with no loss in expressiveness. Had the MIDIUtil code stored these format strings in external configuration files, the program would likely have required a redesign to express it in a statically typed language.
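To make that workaround concrete, here is the shape of the idea, sketched in Go rather than Haskell (the study used Haskell; the function and field names here are hypothetical): a hard-coded format string like ">HB" becomes hard-coded, typed functions, so the compiler checks the argument types instead of a format string doing so at run time.

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
    )

    // In Python this would be: data = struct.pack(">HB", code, flag)
    // where the format string ">HB" decides the argument types at run time.
    // In a statically typed translation the types are fixed in the function
    // signature, and the compiler checks them.
    func packCodeAndFlag(code uint16, flag uint8) []byte {
        buf := new(bytes.Buffer)
        binary.Write(buf, binary.BigEndian, code) // big-endian uint16 ("H")
        binary.Write(buf, binary.BigEndian, flag) // uint8 ("B")
        return buf.Bytes()
    }

    func main() {
        fmt.Printf("% x\n", packCodeAndFlag(0x0102, 0x03)) // prints: 01 02 03
    }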

GrapeFruit

The translation of GrapeFruit to Haskell did not result in the discovery of any type errors. A single run time error could be eliminated by static typing. Additionally a single unit test could have been eliminated that only tested for type safety. No dynamic constructs were used that could not be directly translated into Haskell.

PyFontInfo

The translation of PyFontInfo resulted in the discovery of six type errors. Two run time errors could be eliminated by static typing. A single unit test could have been eliminated. The PyFontInfo code also used struct.pack and struct.unpack, which cannot be directly translated, but a simple workaround exists.

Results

The translation of these projects revealed that all of these projects could have been written in a statically typed programming language with only minor code changes. Furthermore, unit testing did not seem to be an adequate replacement for static type checking. A total of seventeen type errors were discovered. All of the type errors that were discovered were the result of bugs in the original Python code that were not discovered by the unit tests. Many of the bugs existed in code that did have unit test coverage.

Conclusion

The results of this experiment indicate that unit testing is not an adequate replacement for static typing for defect detection. While unit testing does catch many errors, it is difficult to construct unit tests that will detect the kinds of defects that would be programmatically detected by static typing. The application of static type checking to many programs written in dynamically typed programming languages would catch many defects that were not detected with unit testing, and would not require significant redesign of the programs.

Future Work

The translation of these four projects does provide an interesting data point on the effectiveness of unit testing for defect detection. I hope that others will try to conduct similar experiments on more samples of dynamically typed programs.


The full length paper is located here.
The original Python code and the Haskell translation are here.

Thursday, May 24, 2012

The Go Programming Language Has Exceptions

It seems like every other post I read about Go states that the Go programming language doesn't have exceptions. This is followed by either an excited explanation of why exceptions are bad and why you don't want them anyway, or an explanation of why Go is deficient because it doesn't have exceptions. It's time to set the record straight. Go has exceptions. Now I really want you to believe it. So close your eyes and repeat the following ten times: "Go has exceptions". It's OK, I'll wait for you to finish. Are you done? Do you believe it? OK, now that you know in your heart of hearts that Go has exceptions, I should probably point out that Go really doesn't have exceptions. If you look at the language spec you'll see that there is no section on exceptions. There are no "try", "catch", or "throw" keywords, and nothing on "finally". Even though Go doesn't have exceptions, it does have constructs which exhibit exception-like semantics that can be used when you want exceptions for error handling. They're just not called exceptions.

In the Go language spec there is a section on handling panics. If you read it carefully you'll notice that the description of "panic" sounds a lot like the description of "throw". You may also notice that "recover" sounds a lot like "catch", and you can use "defer" in place of "finally". There is no replacement for the "try" keyword, as Go's "panic", "recover", and "defer" work on function boundaries, not on "try/catch" scopes. So there you have it. Even though Go doesn't have exceptions, it does have constructs that can be used to handle exceptional error conditions and that closely resemble exceptions.
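Here's a minimal sketch of the correspondence (the function names are mine, not the spec's):

    package main

    import "fmt"

    // doWork "throws" by panicking.
    func doWork() {
        panic("something went wrong") // plays the role of "throw"
    }

    func run() {
        // A deferred call plays the role of "finally": it runs whether
        // or not doWork panics.
        defer fmt.Println("cleanup always runs")

        // recover inside a deferred function plays the role of "catch".
        defer func() {
            if r := recover(); r != nil {
                fmt.Println("recovered from:", r)
            }
        }()

        doWork()
        fmt.Println("never reached: doWork panicked")
    }

    func main() {
        run()
        fmt.Println("the program keeps going after the recovery")
    }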

This raises a question: if Go has exception-like constructs, why do people frequently claim that it doesn't have exceptions and that you only handle errors by returning an "error" value? I think there are two reasons for this. First, in the early days of Go there was no "recover" keyword; any call to panic would terminate the program, so people got used to the idea that Go didn't have an exception mechanism. Second, the convention is for packages to handle any uses of "panic" internally and then to return error values to their callers. This convention means that you never have to worry about recovering from someone else's panic when calling into a Go package. You can, however, use exception handling internally in your package or in your application, but you don't have to. So it's possible to write Go code and never encounter exceptions. Ensuring that a library's panics won't leak into your code solves many of the problems with exceptions.
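In code, that convention looks something like this (the package and function names here are hypothetical):

    package parser

    import "fmt"

    // mustParse is an internal helper that panics on bad input.
    func mustParse(input string) string {
        if input == "" {
            panic("empty input")
        }
        return "parsed: " + input
    }

    // Parse follows the convention: any internal panic is recovered and
    // handed to callers as an ordinary error value.
    func Parse(input string) (result string, err error) {
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("parse failed: %v", r)
            }
        }()
        return mustParse(input), nil
    }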

So can we please stop saying that Go doesn't have exceptions? While it may be technically accurate, it's misleading. Instead, why don't we just say that Go has two complementary error handling mechanisms: one returns error values (much nicer than C's), and the other is exception-like (but solves many of the issues with exceptions).

Tuesday, May 1, 2012

Making Elastic Applications More Friendly in Go

Many networked applications are elastic. They'll use as much or as little bandwidth as is available. For example, web browsing, email, and ssh sessions will continue to function properly over lower bandwidth connections. However, many applications, such as audio or video streaming, are inelastic and require a minimum bandwidth in order to work properly. If you download email over a dial-up connection it may take a long time but it will still work. If you stream video over a dial-up connection it will likely not play correctly. Unfortunately many elastic applications are written to use as much bandwidth as possible, so they may interfere with inelastic applications. For example, if you run iTunes while streaming video in a browser, iTunes will use as much bandwidth as it can, even if it ends up interrupting the streaming video. A simple solution to this problem is to add controls to elastic applications to limit how much bandwidth they'll use. Both the curl and wget command line utilities provide such controls, but applications with such controls seem to be the exception rather than the rule.

A few weeks ago I started rewriting an application in Go for managing podcasts. As I frequently get my internet access over a 3G connection, I wanted to make sure that this application didn't monopolize my network connection. I needed a way to limit the amount of bandwidth that my podcast application would use. As I started working on a solution I recognized that it might be beneficial to others. A bit of work and a lot of fun later, iothrottler was born.

Using iothrottler is quite easy. You create an 'IOThrottlerPool' with the maximum amount of bandwidth that should be used by clients of the pool, and then add clients by calling the 'AddReader', 'AddWriter', 'AddReadWriter', or 'AddConn' methods. These methods return types whose bandwidth is limited by the pool. Of course the pool's bandwidth limitation can be dynamically adjusted by calling the 'SetBandwidth' method. More detailed documentation and examples are provided by the package.
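Here's a sketch of what that looks like in practice (the NewIOThrottlerPool constructor, the Kbps constant, and the ReleasePool call are my reading of the package's documentation, so check them against the current package):

    package main

    import (
        "fmt"
        "net"

        "github.com/efarrer/iothrottler"
    )

    func main() {
        // Create a pool that limits its clients to roughly 100 kilobits per
        // second. (NewIOThrottlerPool and Kbps are assumptions based on the
        // package documentation.)
        pool := iothrottler.NewIOThrottlerPool(iothrottler.Kbps * 100)
        defer pool.ReleasePool()

        conn, err := net.Dial("tcp", "example.com:80")
        if err != nil {
            fmt.Println(err)
            return
        }

        // The returned connection shares the pool's bandwidth allotment.
        throttledConn, err := pool.AddConn(conn)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer throttledConn.Close()

        // The pool's limit can be adjusted at any time.
        pool.SetBandwidth(iothrottler.Kbps * 50)
    }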

Implementing this package in Go was really fun. Go's interfaces make it easy to add network bandwidth control to both new and existing applications. Go's interfaces also mean that you can use iothrottler to limit other types of IO such as file IO.

If you're writing an elastic application in Go, please consider adding bandwidth throttling to your application. Bandwidth throttling makes your users' lives much better.

Bug reports, pull requests, feature requests and questions about iothrottler are always welcome.

Happy Hacking!

Tuesday, April 3, 2012

Object Oriented Go

Because Go doesn't have classes, some have concluded that it's not an object oriented language. This is simply not true. I thought I'd share a simple problem that I needed to solve in Go in an object oriented way. Note that this won't demonstrate all of Go's object oriented capabilities, but it shows how one can create simple objects with methods.

When writing code I've frequently used a technique for poor man's profiling where you start a timer, perform a set of operations, and then see how much time has elapsed. This technique is not a substitute for real profiling tools like valgrind, but it is sometimes handy and sufficient. In C++ or Java I'd create a Timer class that gets the current time when it's constructed, and then add an elapsed() method which returns how many nanoseconds/milliseconds have passed since the object was constructed. I needed this for a Go project I'm working on and this is how I did it:

First of all I define a new type.
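
    package main

    import (
        "fmt"
        "time"
    )

    type Timer time.Time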

This code defines a new type Timer that will have the same representation in memory as the time.Time structure, but it's treated as a new type by the type checker. This means that if you try to call a time.Time method on a Timer object you'll get a compiler error. You can, however, convert a time.Time to a Timer and vice versa by type casting.

Now that we have a new type let's create a function for creating a new Timer. In Java we'd probably call this method timerFactory, but we'll just call it StartTimer().
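
    func StartTimer() Timer {
        return Timer(time.Now())
    }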

The time.Now() call returns a new time.Time object representing the current date and time, which we then cast to a Timer and return.

Now we want to add a method to our Timer object that will return the number of nanoseconds that have elapsed since the Timer was created.
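
    func (t Timer) ElapsedNanoseconds() int64 {
        return time.Now().Sub(time.Time(t)).Nanoseconds()
    }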

The ElapsedNanoseconds() method gets the current date and time with time.Now() and subtracts the original time. We must cast the Timer to time.Time because it's a different type, and then we return the number of nanoseconds that have elapsed.

We can now use our new Timer type as follows:
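
    func main() {
        timer := StartTimer()
        // ... perform the operations being timed ...
        fmt.Printf("That took %d nanoseconds\n", timer.ElapsedNanoseconds())
    }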

That's it! This works great as is, but sometimes you don't need nanosecond resolution; millisecond resolution might be just fine. So let's add a new method to our Timer type to get the elapsed time in milliseconds instead.
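
    func (t Timer) ElapsedMilliseconds() int64 {
        return time.Now().Sub(time.Time(t)).Nanoseconds() / 1000000
    }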


While this code works perfectly fine, it makes it easy for us to write a bug. Consider the following:
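
    shortTimer := StartTimer()
    // ... do a quick operation ...
    shortDelay := shortTimer.ElapsedNanoseconds()

    longTimer := StartTimer()
    // ... do a slow operation ...
    longDelay := longTimer.ElapsedMilliseconds()

    // Bug: shortDelay is in nanoseconds but longDelay is in milliseconds
    if shortDelay > longDelay {
        fmt.Println("the \"short\" operation was actually slower")
    }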

This code creates two Timer objects and attempts to compare the elapsed time between them. Unfortunately the units on the shortDelay are nanoseconds and the units on the longDelay are milliseconds. Comparing these values this way would make our High School Algebra teachers sad. Luckily we can fix this by defining new types for the nanosecond and millisecond delays.
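
    type Nanosecond int64
    type Millisecond int64

    // The elapsed methods now return the new unit types instead of int64
    func (t Timer) ElapsedNanoseconds() Nanosecond {
        return Nanosecond(time.Now().Sub(time.Time(t)).Nanoseconds())
    }

    func (t Timer) ElapsedMilliseconds() Millisecond {
        return Millisecond(time.Now().Sub(time.Time(t)).Nanoseconds() / 1000000)
    }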
We defined two new types, Nanosecond and Millisecond. Each is based on int64 so they'll have the same memory layout as an int64, but they're treated as different types by Go's type system. Now if we try to write code to compare these values we get an error message like "invalid operation: shortDelay > longDelay (mismatched types Nanosecond and Millisecond)". We're using the type system to enforce units on time! Now wouldn't it be nice to be able to convert from a Nanosecond to a Millisecond? We can do that by adding conversion methods to the Nanosecond and Millisecond types.
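
    // Conversion methods between the two unit types (the names here are
    // illustrative; any clear names will do)
    func (n Nanosecond) ToMillisecond() Millisecond {
        return Millisecond(n / 1000000)
    }

    func (m Millisecond) ToNanosecond() Nanosecond {
        return Nanosecond(m * 1000000)
    }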

We can now fix our buggy use of Timer like so.
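
    shortDelay := shortTimer.ElapsedNanoseconds()
    longDelay := longTimer.ElapsedMilliseconds()

    // Convert to a common unit before comparing
    if shortDelay > longDelay.ToNanosecond() {
        fmt.Println("the \"short\" operation was actually slower")
    }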



That's it! We created three new types, Timer, Millisecond, and Nanosecond (they would be classes in C++/Java), along with a factory function and a handful of new methods. The full code with the example usage can be downloaded and executed by typing "go run main.go".

Tuesday, December 20, 2011

Android development on Ubuntu 11.10 Oneiric

I decided to play around with Android development but I had a hard time finding good instructions on how to set up the Android development environment on Ubuntu 11.10. With a little trial and error I've figured it out. I'm posting my notes in hopes that they will be useful to me in the future and possibly to others also. These aren't a complete step-by-step guide and they assume you're comfortable with the Linux command line, but hopefully they'll be useful.

  1. Download and untar/gzip the Android SDK
  2. Install the JRE and JDK
    1. sudo apt-get install icedtea6-plugin openjdk-6-jre openjdk-6-jdk ia32-libs
  3. Install ant
    1. sudo apt-get install ant
  4. Set your JAVA_HOME and CLASSPATH
    1. export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
    2. export CLASSPATH=/usr/lib/jvm/java-6-openjdk/lib
  5. Update the build tools
    1. cd android*; ./tools/android update sdk --no-ui
  6. Download eclipse (Helios release) (Note the one in the Ubuntu repositories doesn't seem to work)
    1. google-chrome www.eclipse.org/downloads/
  7. Untar eclipse
    1. tar -xvf ./eclipse*.tar.gz
  8. Install the eclipse Android plugin
    1. eclipse : Help -> Install New Software ...
    2. enter: "Android Plugin"
    3. enter: "https://dl-ssl.google.com/android/eclipse/"
    4. Select 'Developer Tools'
    5. Click "Next"
  9. Restart eclipse
  10. Select the Android SDK
At this point you should have the Android SDK and eclipse installed and configured for Android development.

Here are some notes on doing Android development if you prefer to live on the command line instead of living in eclipse.

  1. Create an Android Emulator
    1. cd android*; ./tools/android -> Tools -> Manage AVDs
  2. Start the Android Emulator
    1. cd android*; ./tools/emulator -avd <avd_name>
  3. Create a new Android project
    1. cd android*; ./tools/android create project --target android-14 --name HelloAndroid --path ../HelloAndroid.git --activity HelloAndroidActivity --package com.mydomain.helloandroid
  4. Compile your project
    1. cd ../HelloAndroid.git; ant debug
  5. Push your project onto your running Android emulator
    1. ant debug install

Tuesday, August 23, 2011

Why you should get a college degree

Introduction
I've been thinking for many years about why it's important to get a college degree and have finally decided to write down why I think it's important. I've heard many arguments over the years both for and against college. I'll try to address those arguments here. I should point out that my arguments are aimed at those going into a technical field (Computer Science, Electrical Engineering, Mechanical Engineering), though many of them may apply to non-technical fields as well. Since I don't fully understand those fields, nor the college experience for non-technical degrees, I am likely not qualified to discuss their related benefits or costs.

Background
Since many of my arguments will be based on empirical evidence, I believe my personal background is relevant. My career goal has been to be a Software Engineer. Computer programming has been a longtime hobby and passion of mine. I started working as a computer programmer before I started college. Since I was working in my desired field, I didn't think that college would be that important. I did, however, decide to get my Bachelor's degree as career insurance in case I later needed or wanted to change careers. At the time, I didn't think there would be much educational benefit (I felt I already had many of the skills and knowledge that I would need for my career), but it seemed like a good idea. Many of the advanced CS courses I took showed me that I did have a lot to learn from college, and I continue to use the knowledge I gained from my undergraduate studies at work.

A few years after getting my Bachelor's degree, I decided to go back to get my Master's degree. Graduate school was more of a hobby than a career advancement strategy. I enjoyed reading papers and learning new things, and I wanted to get experience doing academic research and writing academic papers on my research. Just like with my Bachelor's degree, I've been pleasantly surprised at how much I learned while getting my Master's degree. I've now graduated, and since my school work is done, I have had a lot of time to review my schooling and to think about the costs and benefits of my undergraduate and graduate work.

Addressing the Myths
Over the years I have repeatedly heard arguments against college which I simply don't believe are valid. Here are the common myths that I've heard, along with my counter arguments.

    1. A degree is just a piece of paper
    This is simply not true. A degree is a certification from an educational institution that you have successfully completed an academic program. A degree is intangible; a diploma is tangible and is typically a piece of high quality paper.

    2. A diploma is just a piece of paper
    This is true, but the diploma is not really the goal of going to college. The diploma is a token that represents something of greater value. The diploma is nothing more than a certificate to show you have a degree. The concept of a token that represents something of value is not limited to college diplomas. Cash, paychecks, and car and house titles are all just pieces of paper, but their value is much greater than the cost of the paper that they're printed on. There is also a greater probability that you will be able to obtain those later pieces of paper (cash, titles, paychecks) if you have first obtained the first piece of paper: a diploma.

    3. Just because you have a degree doesn't mean you're smart.
    This is certainly true. I have met and interviewed many degreed Software Engineers who managed to make it through college without retaining much of what they studied. Many of these unqualified Software Engineers were even able to pass their classes with good grades. So, I completely agree that the process of getting a degree will not guarantee competence. I have also met and interviewed non-degreed Software Engineers who were also incompetent and unqualified. So if a degree does not guarantee competence, then why bother getting one? The answer is that a degree can help you become more competent, and therefore a degreed individual is much more likely to be competent. My experience talking with and interviewing Software Engineers leads me to believe that a higher percentage of degreed Software Engineers are competent than non-degreed ones. Just like safe driving habits won't guarantee you won't die in an auto accident, a degree won't guarantee you'll be competent, but both will increase the likelihood of safety and competency respectively.

    4. I don't have a degree, and I'm smarter than those that do.
    This is entirely possible. As I mentioned before, I've met non-degreed Software Engineers who surpassed the average degreed counterpart in ability and competency. I do, however, think that this scenario is rare and unlikely. Of the non-degreed Software Engineers that I've known and interviewed, many more had an exaggerated perception of their abilities than had abilities above those of the average degreed Software Engineer. The idea that there are a lot of non-degreed Software Engineers who erroneously think they are extra competent should not come as a surprise to those familiar with illusory superiority, where people tend to overestimate their abilities, or the Dunning-Kruger effect, where the least competent tend to overestimate their abilities the most. If college degrees do in fact increase competency, then it would be expected that non-degreed individuals would not only be less competent, but would also not have the knowledge required to grasp the depth of their incompetence.

    5. College is a waste of money. They don't teach anything that you can't learn on your own.
    While I agree that college doesn't teach you anything that you can't learn on your own, I do not think that it is a waste of money. I believe that college provides several benefits over self learning. Namely:
        A. Access to experts. Many of my classes were taught by professors who were considered experts in their fields. They kept up to date on the latest research and technology and shared that with their students. I remember discussing with a professor at the University of Utah an idea that I had for a research project for my Master's Thesis. Not only was he able to quickly point out that my research topic had already been fully explored about 10 years earlier (a fact that I was not able to discover on my own despite several internet searches), he was also able to point me to that research. If I didn't have access to this professor, I would have spent a lot of time gaining knowledge that could have been gained in much less time by reading a handful of academic papers. This professor helped me to focus my research on areas that had not already been extensively explored.
        B. Immediate feedback. More than once I've done an assignment, taken a test, or participated in a class lecture thinking that I fully understood a subject, only to find out the next week when the graded work was returned that I didn't. If you study on your own you may likewise misunderstand a topic, but you may not get the feedback that you need.
        C. A well rounded curriculum. Both my Master's and Bachelor's degrees surprised me with not only the depth of learning that I received but also the breadth. I think it would be possible (although much more difficult) for me to learn those topics on my own, but I'm less confident I would have known that some of those topics even existed or that they were important. I didn't know what I didn't know. It is impossible to study a topic that you don't know exists. A good example of this is big-O notation. Many un-degreed Computer Scientists have never heard of big-O notation or have a shallow or incorrect understanding of what it is. In my job we use it all the time; if you don't know what it is you'll be left behind in the conversation, or you'll require us to stop work and teach you about the topic (at work we pay you to be effective; we don't want to pay you to get an education). It's not a terribly difficult topic, but you won't learn it if you don't know it exists, and it's easy to get confused about what it really means. A good degree in Computer Science will guarantee exposure to the topic and a good professor will give ample feedback to ensure a proper understanding.
        D. Access to equipment and technologies. Some things you just can't learn (adequately) from books. You need to get your hands dirty and work with them. Much of this equipment and technology is out of the price range of many people. Colleges can provide access to these technologies at a cheaper cost than if you purchased them all on your own.


    6. But if you did learn it all on your own and you are competent, why don't employers just interview you so you can prove that you're qualified? They could, but interviewing is expensive. I believe the right way to interview for a technical position is to have your most qualified employees ask candidates technical questions. This process is likely to result in hiring a qualified candidate, but it is costly. First of all, your most qualified employees are also likely your highest paid employees, so you're paying them a lot of money to interview. They are also likely to be overworked, as they can perform the most tasks (hence the most qualified), and if they weren't overworked you wouldn't be hiring. If you interview every candidate you will spend a lot of money paying your existing employees to interview, and it is likely that your employees will get tired of interviewing and may leave for another job where they can do real work instead of performing job interviews. Because interviewing is expensive you must be selective with who you interview, so it makes sense to only interview those candidates that are most likely to be qualified. In my experience degreed individuals are most likely to be qualified, so it may make good business sense to toss the resumes of the candidates that don't have degrees. An obvious exception to this rule is if you simply do not have enough candidates, in which case you may end up interviewing all candidates, but you should start with those who are most likely to be qualified first.
   
    7. What about professors who live in their ivory towers and know nothing about the real world? This is a common criticism of professors, and one that I've probably uttered a time or two. Such professors do exist, but I've found that often what they teach isn't applicable to today's real world problems yet becomes applicable in the future. I remember learning about closures while working on my Bachelor's degree. At the time I didn't think I'd ever use them, as C/C++ and Java ruled the industry. Since then Perl, Python, Ruby, JavaScript, and Google Go have all become more predominant, and all support closures. Even C++ recently added support for lambdas, which in C++ are close to full-grown closures. Had I ignored this topic, I would have had a more difficult time learning these languages and taking advantage of the power of closures. Additionally, many of the students who complain about ivory tower professors don't have any real experience in the "real world" either. They are college students who haven't spent a lot of time working in the "real world," and are, therefore, likely unqualified to determine what knowledge will have real world significance.



Conclusion


While I don't think college is perfect, I do believe the vast majority of people entering technical fields would greatly benefit from obtaining a college degree. I've talked to many who don't have degrees who feel that they aren't useful, but very few who have earned a technical degree question the benefit. If there are additional arguments against earning a college degree in a technical field that I haven't addressed, I would love to hear them.