CS62 - Spring 2021 - Class 6

Example code in this lecture

   AsymptoticsExamples

Lecture notes

  • admin
       - Darwin part 1 (World and Species)
          - Due Tuesday

       - Where we are
          - covered most of the "big" new things in Java
          - We'll continue to introduce a few random Java things each class, but we will now transition to the data structures side of things

  • this(...)
       - We'll often have multiple constructors in a class. Why?
          - Allow for different ways of creating the object
          - Often, allow for having versions that specify more details

       - It can be convenient in these cases to call one of the constructors from another constructor
          - there's a special syntax to do this because we can't just say new ..., since that would create a whole new instance of the class

       - Look at the Matrix class from Darwin
          - We have two constructors
          - The first one simply calls the second one with the default parameters

       - the "this(...)" call (like a super(...) call) must be the first line in the constructor
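The Matrix class itself isn't reproduced in these notes, but constructor chaining with this(...) looks roughly like the following sketch (the Point class, its fields, and its getters are illustrative, not part of Darwin):

```java
// Illustrative sketch of constructor chaining: the no-argument
// constructor delegates to the more detailed two-argument one.
public class Point {
    private final int x;
    private final int y;

    // Default constructor: delegates using this(...).
    public Point() {
        this(0, 0); // must be the first statement in the constructor
    }

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }

    public int getY() { return y; }
}
```

Any shared setup lives in one place (the two-argument constructor), so the defaults can't drift out of sync with the detailed version.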

  • char
       - one of the other built-in types; represents a single character
       - to create them use single quotes
          char c = 'a';

       - returned by some of the String methods, e.g.
          String s = "...";
          char first = s.charAt(0);
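Putting charAt and char comparison together, a short illustrative example (the CharCount class and count method are made up for this sketch):

```java
// Illustrative: count how many times a character occurs in a String
// by walking the String one char at a time with charAt.
public class CharCount {
    public static int count(String s, char target) {
        int total = 0;
        for (int i = 0; i < s.length(); i++) {
            if (s.charAt(i) == target) { // chars compare with ==
                total++;
            }
        }
        return total;
    }
}
```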

  • type casting
       - If we have a class B extends A, is the following legal?
          A varA = new B(...);

          - yes, we can always assign a subclass to a variable of the type of a parent class

       - Could we then do the following?
          B varB = varA;

          - No! Even though in this case we know that there is a B in varA, the Java compiler cannot be sure in all cases

       - However, sometimes we (as the programmer) know the contents. We can tell Java that we know the type and to cast (interpret) the value as that type.
       - The way to do that is with parentheses and the type:

          B varB = (B)varA;
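A minimal sketch of the two assignments above, assuming hypothetical classes A and B where B extends A (the hello method is made up for illustration):

```java
// Illustrative sketch of upcasting and downcasting.
class A { }

class B extends A {
    public String hello() { return "hi from B"; }
}

public class CastExample {
    public static String demo() {
        A varA = new B();   // legal: a subclass fits in a parent-type variable
        // B bad = varA;    // compile error: Java can't be sure varA holds a B
        B varB = (B) varA;  // cast: we promise the value really is a B
        return varB.hello();
    }
}
```

If the promise is wrong (varA doesn't actually hold a B), the cast fails at runtime with a ClassCastException, which is why a cast is sometimes guarded with an instanceof check first.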

  • look at the sum method in AsymptoticsExamples code
       - How long will it take if we pass it an array with 1000 numbers in it? 10,000 numbers?
          - We don't know!
          - We could time it and find out
       - Even if you time it, can you say conclusively how long it will take?
          - No!
          - Variation from run to run
          - Depends on the computer
          - etc.
       - If I tell you the time it took on 10,000 numbers was t, could you tell me approximately how long it would take on 20,000 numbers?
          - would take about twice as long, i.e., 2t
          - why?
             - does a linear pass through the data
             - doubling the size of the data, means about twice as much work
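The sum method from AsymptoticsExamples isn't reproduced in these notes, but a single linear pass presumably looks something like this sketch:

```java
// Illustrative sketch of a linear-pass sum: the loop body runs once
// per element, so the work grows linearly with data.length.
public class SumSketch {
    public static int sum(int[] data) {
        int total = 0;
        for (int i = 0; i < data.length; i++) {
            total += data[i];
        }
        return total;
    }
}
```

Doubling data.length doubles the number of loop iterations, which is why the run-time roughly doubles.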

  • Asymptotics
       - Precisely calculating the actual cost of a method is tedious and not generally useful
       - Different operations take different amounts of time, and even from run to run, factors such as caching complicate the measurement
       - Want to identify categories of algorithmic runtimes
       - What we really want to do is compare different algorithms
          - Want to know which algorithms will be much worse (or much better) than others

  • Big O
       - We write that an algorithm's run-time is O(g(n)) if the run-time can be bounded as n gets larger by some constant times g(n)
       - For example, sum is O(n), i.e. linear

  • look at the lastElement and sumProduct methods in AsymptoticsExamples code
       - What do they do?
       - What are their Big O running times?
          - Put another way, how does their run-time grow as you increase the size of the input?
       - lastElement
          - O(1), aka constant
          - no matter how large the array is (assuming .length is a constant time operation) it always does the same amount of work
       - sumProduct
          - O(n^2), aka quadratic
          - for each element, it must do a linear amount of work
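The two methods aren't reproduced in these notes, but sketches matching the running times above might look like this (names mirror the lecture; the bodies are plausible guesses, not the actual AsymptoticsExamples code):

```java
public class AsymptoticsSketch {
    // O(1), constant: the same single array access no matter how
    // large the array is.
    public static int lastElement(int[] data) {
        return data[data.length - 1];
    }

    // O(n^2), quadratic: for each element (outer loop), a full
    // linear pass over the array (inner loop).
    public static int sumProduct(int[] data) {
        int total = 0;
        for (int i = 0; i < data.length; i++) {
            for (int j = 0; j < data.length; j++) {
                total += data[i] * data[j];
            }
        }
        return total;
    }
}
```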

  • If we know that sumProduct on an array of size 10,000 takes time t, could you tell me approximately how long it would take on 20,000 numbers?
       - about 4t
          - We can figure this out by plugging the sizes into the O equation:
             - with g(n) = n^2, the run-time on an array of size 10,000 is proportional to
                g(10000) = 10000^2, which corresponds to t
             - doubling the size means:
                g(20000) = 20000^2 = (2 * 10000)^2 = 4 * 10000^2, i.e., 4 times g(10000), so about 4t
                
                - the time will roughly quadruple if we double the size
                
  • How does Big-O notation allow us to ignore irrelevant details?
       - look at the doubleSum method in AsymptoticsExamples code
          - What is the Big O runtime of this method?
             - O(n)
          - How does its runtime compare to that of sum?
             - twice as long (calls sum twice)
       - Even though doubleSum is twice as slow as sum, they're still in the same category since they will roughly grow at the same rate
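The doubleSum method isn't reproduced in these notes, but a sketch matching the description (it calls sum twice) might look like this:

```java
public class DoubleSumSketch {
    // O(n): one linear pass over the array.
    public static int sum(int[] data) {
        int total = 0;
        for (int value : data) {
            total += value;
        }
        return total;
    }

    // Two linear passes: roughly 2n operations, but still O(n),
    // because Big O drops constant factors like the 2.
    public static int doubleSum(int[] data) {
        return sum(data) + sum(data);
    }
}
```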

  • Show running time table https://cs.pomona.edu/classes/cs62/lectures/big_O.jpg