Can You Learn (But Not Master) Any Programming Language in 1 Hour?
Tim Ferriss has a fascinating post today about how he deconstructs any language to determine whether it's feasible to reach fluency in that language within 3 months. Being the technology geek that I am, I wondered whether the same principles could be applied to programming languages.
The foundation of Tim's process is that you already know at least one language, and what you're really trying to discover is how different the language you're investigating is from the languages you already know. If you are able to read Tim's post, it is safe to say that you already know at least one language (English). The same cannot be assumed of this post's readers and programming languages: going from zero languages to one is always the most difficult step. For this post, let's assume that you already know at least one programming language and you're interested in picking up a new one.
Who’s in your family?
Is your target language on the same major branch of the computer-language family tree as a language that you already know? Learning Lisp coming from BASIC is going to be much more difficult than going from C++ to Java.
Looking at the evolution of a language is much more useful for programming languages than for spoken languages, since the former typically evolve in a very logical way. After all, programming languages have to be understood by computers.
Functional or Procedural?
Most programming languages are procedural, i.e. code is executed roughly one line at a time, in an order that resembles the order of the lines in the code. Some languages, e.g. ML, instead evaluate mathematical functions. At my university we had to learn ML as our first programming language. The theory was that going from a functional language to a "regular" programming language would be easy. I can't vouch for that theory, but I know that the opposite was true: all of us in the class who already knew how to program were really struggling with ML.
Maybe the notation and language of higher mathematics compared to English is an equivalent analogy in the non-programming world.
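Since ML may be unfamiliar, here is a rough sketch of the two styles side by side, written in Python rather than ML; the sum-of-a-list task is just a stand-in example:

```python
from functools import reduce

numbers = [1, 2, 3, 4, 5]

# Procedural style: statements run in order, mutating an accumulator.
total = 0
for n in numbers:
    total += n

# Functional style: the result is described as the evaluation of a
# function over the list, with no mutable state.
total_functional = reduce(lambda acc, n: acc + n, numbers, 0)

print(total, total_functional)  # both are 15
```

The procedural version tells the computer *how* to step through the data; the functional version states *what* the result is, which is exactly the mental shift that trips up programmers moving in either direction.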
Symbols and Tokens
Most programming languages use the Western character set to write the tokens (reserved words) that make up the language. A notable exception is APL, which, with its special symbol set, is the programming world's equivalent of Chinese.
In a programming language you also need to pay special attention to the use of symbols. In many cases a semicolon is required to terminate a statement, or brackets are used to surround blocks of code. But some programming languages rely heavily on special symbols: the parentheses in Lisp come to mind.
In a spoken language, punctuation and diacritical marks do not carry nearly the same weight as special symbols do in programming.
Side Effects
Unintended side effects are the hidden land mines of programming languages. For example, changing the value of a global variable in one place could have an unintended effect on another piece of code. As programming languages have evolved, the goal of each new language has often been to limit side effects and thereby minimize the ways you can shoot yourself in the foot.
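A minimal Python sketch of the kind of global-variable side effect described above (the names here are hypothetical, chosen purely for illustration):

```python
# A piece of global state that two unrelated functions share.
discount_rate = 0.10

def apply_holiday_sale():
    global discount_rate
    discount_rate = 0.50  # side effect: silently changes behavior elsewhere

def price_after_discount(price):
    # This function's answer depends on whoever touched the global last.
    return price * (1 - discount_rate)

print(price_after_discount(100))  # 90.0
apply_holiday_sale()
print(price_after_discount(100))  # 50.0 -- same call, different answer
```

Nothing about the second call to `price_after_discount` hints that its result changed, which is precisely what makes this class of bug so hard to spot from reading code alone.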
One of the few ways to really learn the side effects of a programming language is through painful hands-on experience.
Vocabulary
A larger vocabulary comes with practice, both in spoken and programming languages. The size of the dictionary or the available libraries of a language is not a major factor in the initial learning of the language: unknown words or functions are easy to look up.
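As a small illustration of how cheap the lookup is, Python (like most modern languages) lets you browse a library's "vocabulary" interactively:

```python
import math

# dir() lists the names a module exports -- the module's vocabulary.
print("sqrt" in dir(math))  # True

# help() / __doc__ explains what an unfamiliar name does.
print(math.sqrt.__doc__)

# Once found, using the new "word" is immediate.
print(math.sqrt(16))  # 4.0
```

Compare that with a spoken language, where you still had to haul out a paper dictionary when Tim wrote his post; either way, vocabulary is the easiest part to acquire on demand.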
Finally, there is one more crucial difference between learning a spoken language and a programming language: people you interact with will often go out of their way to understand what you're saying once they realize that you're trying really hard to learn their language. Computers are not so forgiving. Even the smallest error, for example a missing semicolon, will cause the computer to reject your entire program, even when it's obvious to the computer where that semicolon should be placed.
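That strictness is easy to demonstrate. A runnable sketch in Python (which demands a colon where C-family languages demand a semicolon):

```python
# One missing character is enough for the whole program to be rejected.
source = "if True\n    print('hello')"  # missing the colon after `if True`

try:
    compile(source, "<example>", "exec")
    print("accepted")
except SyntaxError as err:
    # Python can report exactly where the problem is,
    # but it refuses to guess the fix on your behalf.
    print("rejected:", err.msg)
```

A human listener would shrug off an omission this small; the compiler reports it precisely and then refuses to proceed.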