Here’s the solution:
# Exercise 2
# Load Data
filename = "Data/PendulumData.txt"
[length, time] = loadtxt(filename, delimiter=',', unpack=True)
So what’s going on?
We’ve started playing with two fundamental concepts: functions and variables. Python is (usually) smart enough that, contrary to many other languages, we don’t have to declare our variables; instead, all variables are treated as objects. So when we write filename =, we’re creating a new object called “filename” and we’re about to give it a value (whatever the nature of that value actually is). We set the value to "Data/PendulumData.txt", i.e. a string (of text) containing the address of our data file. filename is now an object of type string.
Next, we use a function inherited from the numpy library we loaded in exercise 1. This is the whole point of libraries: we don’t have to define our own functions for every single operation we want to do. On the other hand, we have to respect the way these functions are written, and in particular what they take as input and return as output. This is usually well documented, e.g. here:
numpy.loadtxt(fname, dtype=<type 'float'>, comments='#', delimiter=None, converters=None, skiprows=0, usecols=None, unpack=False, ndmin=0)
We see that the function loadtxt takes a number of arguments, separated by commas. Some of them have a default value (indicated with an =), so we can skip them when calling the function (as long as the default behaviour is what we want, of course):
- fname: the name of the file to read, expressed as a string (exactly what we’ve prepared with our variable filename).
- dtype: the type of objects we’re reading in the file. By default, it is set to float (decimal numbers, basically), which is what we want; other possibilities include, e.g., int or str.
- comments: how to recognize the elements we want to ignore when reading our data file, like column titles. In this particular instance, they are indeed prefaced with a #, so we can resort to the default behaviour.
- delimiter: how the values are separated. The default None means any whitespace; anything else must be input as a string, here ','.
- converters: used if we want to convert the format of a column (such as a timestamp or a binary number) to a float.
- skiprows: skips the specified number of rows before starting to read the file, e.g. to remove a header or target only a specific part of the data. In our case, we could have set it to 1 to skip the row containing the titles of the columns, but this is superfluous since they are already caught by the default comments behaviour.
- usecols: similar behaviour to skiprows, but for columns.
- unpack: if False (the default), the function returns a single array mirroring the layout of the file; if True, it instead returns one array for each column, which is the behaviour we want.
- ndmin: minimum number of dimensions of the returned array.
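To see these options in action, here is a small self-contained sketch; the file demo.txt and its values are made up for illustration and are not part of the exercise data:

```python
import numpy as np

# Write a tiny data file in the same spirit as PendulumData.txt
# (hypothetical filename and values, just for this demo).
with open("demo.txt", "w") as f:
    f.write("# length, time\n")   # caught by the default comments='#'
    f.write("0.10, 0.63\n")
    f.write("0.20, 0.90\n")
    f.write("0.40, 1.27\n")

# Without unpack: a single array mirroring the file (3 rows, 2 columns)
data = np.loadtxt("demo.txt", delimiter=',')
print(data.shape)   # (3, 2)

# With unpack=True: one array per column
length, time = np.loadtxt("demo.txt", delimiter=',', unpack=True)
print(length)       # [0.1 0.2 0.4]
```

Note that float conversion happily ignores the space after each comma, so delimiter=',' is enough here.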
In our example, we have float-type data arranged in columns and separated by commas, so loadtxt(filename, delimiter=',', unpack=True) is indeed the way to go. This will output two arrays corresponding to the two columns in our data file, and we can collect them by creating two variables, length and time; they will automatically take the type “array”.
Note: the square brackets around length, time are redundant. In fact, what loadtxt outputs is an array of arrays, but Python is smart enough to understand that we mean “assign the first array to length and the second to time”.
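This unpacking behaviour can be checked without the data file; the array below is a hypothetical stand-in for what loadtxt returns with unpack=True:

```python
import numpy as np

# Stand-in for loadtxt's output with unpack=True:
# one row per column of the original file.
cols = np.array([[0.10, 0.20, 0.40],
                 [0.63, 0.90, 1.27]])

# All three spellings are equivalent ways to split it:
[length, time] = cols               # with the redundant brackets
length2, time2 = cols               # plain sequence unpacking
length3, time3 = cols[0], cols[1]   # explicit indexing

print(np.array_equal(length, length2))  # True
```

Iterating over a 2-D array yields its rows, which is why assigning it to two names splits it into the two sub-arrays.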