Project #2 Description --- Milestone on the term project path.
Due Date: Sunday, April 1, 2012 (no fooling!)
This project can be viewed as a milestone along the path
to your term project, so that you will be "encouraged" to:
- discover early if there are substantial problems with your original
conceptual proposal, and so have time to change things;
- get an early start on programming your model, so that you
get more of the programming done before the term is over and
avoid a frantic, desperate rush of programming and
running experiments at the end;
- get experience developing programs/models stepwise,
including developing measures you need and
doing verification all through the development process; and
- get an early start on understanding how your model behaves,
so you can have more confidence in the results you get later.
For project #2 you should turn in a brief "memo" describing your progress
on your term project model and the program implementing that model.
The memo should be 4-8 pages of text plus any additional supplemental
materials, e.g., screen-shots, graphs of results, example runs
that the reader (e.g., rlr!) can try out, etc.
For project #2 you should have:
- A running program that includes some (or all) of the components
(agents, environment) of your conceptual model, and those components should
be doing at least some (but probably not all) of the behaviors
they will do as part of the final model.
- The program should include some or all of the parameters your model requires.
Some of these should be related to the components/mechanisms you have
implemented at this point in the development of your program.
- The program should include some of the measures you
need for carrying out your computational experiments, written
to report files and, if you wish, displayed on the screen
if the program is run in GUI-mode. It is also a good idea
to include measures that are not the focus of your study but
that give you evidence that your model is behaving "reasonably."
For example, if you have birth/death in your model, it's a good idea
to track the distribution of agent ages, to be sure you are not
getting some weird distribution that would invalidate your conclusions
(see the first sketch after this list).
- You should carry out and report on some trial runs you
have done to verify that the components you have implemented
are behaving as desired. For instance, you could set parameters
to selected values, including extreme values, and confirm that
the agents and the model as a whole behave as expected. Or you
could include debugging messages and examine step-by-step output
for just a few agents under conditions designed to test their
responses, to be sure they respond as expected (see the
verification sketch after this list).
You may also want to implement various GUI-based displays to help
you track what your model is doing as it runs under various conditions.
- You should also report on what you have done to verify that the measures
you have included are implemented correctly, e.g., show that the measure is what
you expect for selected states of the model, or show how the measure changes
over the course of runs and verify it "makes sense" given what you can observe
via a GUI display of the agents in their environment.
- As part of the verification, you should carry out and report on some
computational experiment(s) in which you "sweep" some parameter(s) over some
range of values, and then report on how some measure varies as that parameter
is varied. For this you should do multiple runs with different RNG seeds
for each parameter value "case" and then analyze the output of those runs
as appropriate for your model, e.g., report on average values at
"equilibrium" (e.g, over the last 10% of runs).
For instance, if you have a parameter that controls how often agents make
some random movement or other action, you could sweep that parameter and
measure something that reflects the agents' behaviors to see how it varies
with randomness. Or perhaps you could sweep a parameter that controls
the distribution of some attribute across agents, or one that controls
some environmental feature that affects agent or environment dynamics.
(A minimal sweep sketch, for a hypothetical parameter, appears after
this list.)
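As a concrete illustration of the measures/report-file item above, here
is a minimal sketch in Python (the language, the toy agent with "age"
and "energy" attributes, and all names are assumptions for illustration,
not a required toolkit or design) that writes a focal measure plus
auxiliary "reasonableness" measures to a tab-separated report file each
step:

import random
import statistics

class Agent:
    def __init__(self):
        self.age = 0                              # for the sanity-check measure
        self.energy = random.uniform(0.0, 1.0)    # hypothetical focal attribute

    def step(self):
        self.age += 1
        self.energy += random.uniform(-0.1, 0.1)  # placeholder behavior

def run(n_agents=100, n_steps=200, report_path="report.tsv", seed=1):
    random.seed(seed)
    agents = [Agent() for _ in range(n_agents)]
    with open(report_path, "w") as report:
        # header: focal measure plus auxiliary measures kept for verification
        report.write("step\tmean_energy\tmean_age\tmax_age\n")
        for step in range(n_steps):
            for agent in agents:
                agent.step()
            mean_energy = statistics.mean(a.energy for a in agents)
            mean_age = statistics.mean(a.age for a in agents)
            max_age = max(a.age for a in agents)
            report.write(f"{step}\t{mean_energy:.4f}\t{mean_age:.2f}\t{max_age}\n")

if __name__ == "__main__":
    run()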
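In the same spirit, here is a minimal verification sketch (again
Python, with a hypothetical move_prob parameter) showing the two kinds
of checks described in the verification items above: an extreme-value
check (with move_prob = 0 no agent should ever move) and step-by-step
debug output for just a few agents:

import random

class Agent:
    def __init__(self, move_prob):
        self.x = 0
        self.move_prob = move_prob              # hypothetical parameter under test

    def step(self):
        if random.random() < self.move_prob:
            self.x += random.choice([-1, 1])

def check_extreme_value():
    # With move_prob = 0.0, no agent should ever move.
    random.seed(42)
    agents = [Agent(move_prob=0.0) for _ in range(50)]
    for _ in range(100):
        for a in agents:
            a.step()
    assert all(a.x == 0 for a in agents), "agents moved even though move_prob = 0"

def trace_a_few_agents():
    # Step-by-step debug output for a few agents under a selected condition.
    random.seed(42)
    agents = [Agent(move_prob=0.5) for _ in range(3)]
    for step in range(5):
        for i, a in enumerate(agents):
            a.step()
            print(f"step {step}  agent {i}  x = {a.x}")

if __name__ == "__main__":
    check_extreme_value()
    trace_a_few_agents()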
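Finally, here is a minimal sketch of the kind of sweep described in
the last item above: for each value of a hypothetical move_prob
parameter it does several runs with different RNG seeds, averages a
measure over the last 10% of each run, and reports the mean and
standard deviation across seeds. The parameter, the measure (mean
distance from the origin), and the run lengths are placeholders for
whatever your own model uses:

import random
import statistics

def one_run(move_prob, n_agents=100, n_steps=500, seed=0):
    # Return the per-step measure series (mean |x|, a stand-in for dispersion).
    random.seed(seed)
    xs = [0] * n_agents
    series = []
    for _ in range(n_steps):
        for i in range(n_agents):
            if random.random() < move_prob:
                xs[i] += random.choice([-1, 1])
        series.append(statistics.mean(abs(x) for x in xs))
    return series

def sweep(param_values=(0.0, 0.25, 0.5, 0.75, 1.0), seeds=range(10)):
    for p in param_values:
        equilibrium_means = []
        for seed in seeds:
            series = one_run(p, seed=seed)
            tail = series[-len(series) // 10:]   # last 10% of the run
            equilibrium_means.append(statistics.mean(tail))
        print(f"move_prob = {p:.2f}  "
              f"mean = {statistics.mean(equilibrium_means):.3f}  "
              f"sd = {statistics.stdev(equilibrium_means):.3f}  "
              f"(n = {len(equilibrium_means)} seeds)")

if __name__ == "__main__":
    sweep()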
Please be creative in what trial runs and experiments you do,
as well as in what you measure and display.
The point of all the tests, trial runs, and experiments
you do is to help you understand what your model does under
various conditions and, most importantly, why it does what it does.
If you don't understand why it does what it does, and don't have
solid evidence for your understanding, it could be that the
behavior is a result of some unknown factors or, worse, the
result of undiscovered biases introduced by arbitrary
implementation choices or by programming errors ("bugs").
Remember, as in all research projects, you want to
think of the various potential problems and objections that others
are likely to raise, so that you can carry out additional experiments
and gather other support for your answers to those questions/objections.