Starting to think like a Computer Scientist

I am on day 2 of programming school and am working my way through "How to Think Like a Computer Scientist" (Python edition) by Downey, Elkner and Meyers. This book’s available for *free*, as in free beer. It’s well written and fun to read.

Learning to think like a computer scientist is practical. In the authors’ words: "The single most important skill for a computer scientist is problem solving. Problem solving means the ability to formulate problems, think creatively about solutions, and express a solution clearly and accurately."

Like any new language, there are many new words to learn, but today I learned an important distinction in the way that computer scientists think about languages. I am writing this post in English–a "natural" language that we use for speaking and writing. Python, by contrast, is a "formal" language designed by people to express computations, and that means it has really strict rules about syntax. In formal programming languages, there are no metaphors, no redundancy, no contextual clues. Statements must conform to the language’s rules or the interpreter–the program that reads and executes your code–can’t act on them. Fortunately, the computer is very good at saying "I don’t understand."

One of the things I already like about programming is that you’re able to tinker and get immediate feedback. This is obvious to anyone who programs, but neat for someone who’s relatively new at it. I wonder if the reason so many programmers are self-taught is that formal languages lend themselves to self-teaching: the interpreter provides the feedback. For example:

The first time I asked the Python interpreter to multiply 2 * apple, it said:

NameError: name 'apple' is not defined

Which meant that I hadn’t defined "apple."  So I went back and said:

apple = 6

and then the interpreter was able to do the math.
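Put together, the whole exchange at the interactive prompt looks something like this (the exact traceback wording varies a little by Python version):

>>> 2 * apple
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'apple' is not defined
>>> apple = 6
>>> 2 * apple
12

These are just single lines, though–a program is more than that.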

Programs are collections of statements that specify how to perform a computation–whether that’s doing math or putting text together. According to the authors, programs can be broken down into these basic steps:

  • Input – get the data from somewhere (keyboard, file)
  • Output – display the data (on the screen, or send it to a device)
  • Math – perform some calculation, like addition or its tricky pal, subtraction
  • Conditional execution – if the criminal is guilty, then send him to jail
  • Repetition – do it again, and again, so that I don’t have to, perhaps with some variation
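
Here’s a toy sketch of my own–not from the book–that touches all five steps, written in the Python 2 the book uses:

# input: get data from the keyboard
minutes = int(raw_input("How many minutes have passed? "))

# output: display data on the screen
print "Checking", minutes, "minutes..."

# repetition: do the same work for several values
for m in [15, 30, minutes]:
    # math: perform a calculation
    percentage = (m * 100) / 60
    # conditional execution: branch on a test
    if percentage > 100:
        print m, "minutes is more than a full hour"
    else:
        print m, "minutes is", percentage, "percent of an hour"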

Seems easy enough–things get complicated when you try to break real-world problems down into steps small enough for your code to understand and deal with–but that’s what makes it interesting.

I made one other note to myself as I was working through the first couple of chapters: write for the machine, but simultaneously, write for the human. Tell the computer what you want it to do in its language, but leave a clue for yourself and others who may come later about what you’re trying to accomplish with a statement. This is one of those things that’s really smart to do as you go, because it’s probably unbearable to come back later and try to remember "now, what was I doing here?"

So a well-commented statement looks something like this:

# compute the percentage of the hour that has elapsed
percentage = (minute * 100) / 60
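
For instance, if minute is 45:

>>> minute = 45
>>> (minute * 100) / 60
75

(One thing to watch: in the Python 2 the book uses, dividing two integers with / drops the remainder, so minute = 20 gives 33 rather than 33.33; Python 3 returns a float instead.)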

5 thoughts on “Starting to think like a Computer Scientist”

  1. Eric

    Glad you’re getting into it! One more item that should probably be on your list of basic steps is “definitions”. One of the most powerful aspects of a program is being able to define complex functions or classes of objects and then using them to achieve the goals of the program. By defining functions and objects, you are abstracting the program into a more high-level purposeful language which should be clearer to understand (for the human) and easier to work with and update.

  2. Ted Bongiovanni

    Thanks Eric! What would an example of a definition be–would it be something like: Workout: a session of vigorous physical exercise or training. Similarly, what would an example of a function and object be?

  3. Eric

    A definition is what Python was complaining about when you tried to use the identifier “apple” without defining it. There are two major types of definition that come to mind for me.
    One is a “definition” in a simple algebraic sense: you say “let x = 5”. When you define variables, that’s what you’re doing. You are creating a named identifier that will represent a particular value. This value can change (that’s why it’s called a variable), but you will be able to easily refer to it by the name you gave it.
    The other kind of definition is more abstract. It’s “definition” in a linguistic/functional sense. You can define a function which will perform some operation on its input and may yield an output. For example, “f(x) = x + 4” is a mathematical function that has “x” for an input, and it will yield x+4 as its output. The same kind of thing (but more complex) works in programming:
    def plusfour(x):
        return x+4
    That’s a Python function that does the same thing. Here’s the equivalent function in Java:
    int plusfour(int x) {
        return x+4;
    }
    Note that these functions have a name “plusfour”. This allows you to call that function easily. To make use of this, you first define the function in the beginning of your program, then afterwards if you’d like to use that function, you can call it:
    print plusfour(5)
    That will print “9”. Obviously it’s foolish to define such a simple function, so you’d be more likely to define a function that does something more useful, like converting Celsius to Fahrenheit:
    def c2f(c_temp):
        return 9.0/5.0*c_temp+32
    That way, whenever you want to convert, you can say:
    c2f(c_temp)
    instead of:
    9.0/5.0*c_temp+32
    It’s also useful to create functions to isolate frequently-used duplicate code. It’s like mass-production: If your machine can correctly assemble one gizmo, the other 1000 will also be correctly assembled (in a perfect world).
    Class/object definition is a bit more complicated, so I’ll just briefly describe it. First you define a class of objects which you will use to represent some concept in your program. For example, a Temperature class. When you define the class, you describe all the generic attributes of that class. Temperature only has one attribute: the temperature! However, you can also define functions for the class such as “getFahrenheit” and “getCelsius” and “getKelvin” which will convert this particular temperature to the scale of your choice. After you’ve defined your class, you need to create an object of that class. In the example case, I’ll say we need to define our temperature first in degrees Fahrenheit:
    tempToday = Temperature(55)
    Then if I want to use today’s temperature on various scales, I can say:
    tempToday.getCelsius()
    or
    tempToday.getKelvin()
    Note that tempToday is a variable of type “Temperature”. So we’ve defined a class, and then made a variable which stores a particular object of that class. The temperature example doesn’t quite demonstrate the true usefulness of classes and objects, but imagine the class for “annotation” in the Image Annotation Tool. It holds the text, the x coordinate, the y coordinate, and a hyperlink. All of that information is organized into one object which can be referred to with a single name. Again, it’s like building a machine for mass production, except this allows each item on the assembly line to be configured slightly differently (e.g. different colors, custom monogramming).
    Also, aside from defining functions (also known as methods) for your class, you can define functions elsewhere which are designed to have objects of your class as inputs. For example, a “saveAnnotation” function that will save an annotation to the database.
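    To make that concrete, here’s a rough sketch of how such a Temperature class might look in Python (storing the value in Fahrenheit internally, and these particular method bodies, are just one way to do it):
    class Temperature:
        # stores a temperature internally in degrees Fahrenheit
        def __init__(self, fahrenheit):
            self.fahrenheit = fahrenheit
        def getFahrenheit(self):
            return self.fahrenheit
        def getCelsius(self):
            return (self.fahrenheit - 32) * 5.0 / 9.0
        def getKelvin(self):
            return self.getCelsius() + 273.15
    tempToday = Temperature(55)
    print tempToday.getCelsius()
    That prints 12.7777777778–today’s 55 degrees Fahrenheit expressed in Celsius.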
    Hopefully that gives you an idea.

