Computer Science and Biology 361
The brain is so complex and interesting. I like how Professor Blank defined an emergent system as being below the level of meaning and a rational system as being at or above the level of meaning; this cleared up much of the confusion I have had about the different systems. The difference between an emergent system and intelligent design is finally clear to me: an emergent system does not have an outcome in mind. I agree that some emergent systems might not have an outcome in mind, but the real question is: an outcome in whose mind? I agree with Laura when she said that an emergent system might not have an outcome in mind itself, but it might still produce a certain outcome for some other system. Something interesting does not always have to be a solution to something, though it can be; perhaps emergent systems are emergent only until they provide a solution for something. It is also possible that anything interesting could always be a solution to something, and we just do not yet know what that something is. I also feel that there can be systems that are both emergent and rational, and as of now that is how I view the brain: it has components of both. I agree that the brain is like a "super computer" made up of many neurons (small computers). But the brain is not entirely like an emergent system, because it has some goals, such as forming humans who are physically the same and yet in some sense different, and this is where I think the emergent part comes in.
A friend of mine sent me an interesting link to an article
about a new company's mission to enter the market of 'biotech' pets. GeneDupe consists of a team of biologists and computer scientists who have created a virtual cell that represents the real thing right down to the mitochondria, Golgi bodies, etc. They then 'load' the genome of a particular species, and the virtual cell develops like a fertilized egg, ultimately growing into a virtual adult.
To make matters more interesting, it seems as though GeneDupe also employs image recognition/processing. The software can take a picture of a mythical creature (e.g., a centaur, dragon, or griffin), find the genomes of similar animals, splice the genes together, allow mutation, and then let evolution take control.
OK, so my title was a bit provocative, but here's what I'm going to do for my project (and hopefully it will work). (When we went around on Wednesday and explained our projects, I said basically this, but now I'd like to elaborate on it.)
Training neural networks to do ANDs and ORs is all fine and good, but I feel that it misses the point, at least in terms of emergence. Neural networks show great potential for solving computing and AI problems, but I'd like to go somewhere different. I want to write a simulated world where the creatures are run by neural networks. The inputs to the networks will be the creature's senses: vision, for example, could be represented by two parameters, one for the distance of the nearest object in the line of sight and another for its "color," where food would have one color, other creatures another, and obstacles a third. (The distance and color would have to be normalized to numbers between zero and one, of course.) The outputs, then, could be actions: one output could be whether to move forward, another whether to turn left, right, or not at all, and maybe another could change the creature's own color.
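Here's a rough sketch of what one creature's brain might look like, with the two sense inputs and three action outputs described above. (The class name, layer sizes, thresholds, and random initial weights are all placeholders of my own; in the real project, evolution or training would set the weights.)

```python
import math
import random

random.seed(0)

def sigmoid(x):
    """Squash any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

class CreatureBrain:
    """A tiny feedforward net: 2 sense inputs -> hidden layer -> 3 action outputs.

    Inputs (each normalized to [0, 1]):
      distance -- distance to the nearest object in the line of sight
      color    -- 0.0 = food, 0.5 = another creature, 1.0 = obstacle
    Outputs (each in (0, 1)):
      move      -- > 0.5 could mean "move forward"
      turn      -- < 0.33 left, > 0.66 right, otherwise straight
      own_color -- the creature's own new color
    """

    def __init__(self, n_hidden=4):
        # Random initial weights; these are what evolution would adjust.
        self.w_in = [[random.uniform(-1, 1) for _ in range(2)]
                     for _ in range(n_hidden)]
        self.w_out = [[random.uniform(-1, 1) for _ in range(n_hidden)]
                      for _ in range(3)]

    def act(self, distance, color):
        hidden = [sigmoid(w[0] * distance + w[1] * color) for w in self.w_in]
        return [sigmoid(sum(w[j] * hidden[j] for j in range(len(hidden))))
                for w in self.w_out]

brain = CreatureBrain()
move, turn, own_color = brain.act(distance=0.2, color=0.0)  # food nearby
print(move, turn, own_color)
```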
I read this article
for another class and thought it was relevant to our discussions of connectionism and neural networks. It's a recent fMRI study that looked at memory reinstatement. The researchers used pattern-association algorithms to look at distributed representations in the brain during cueing and recall. Their results support a more connectionist model of memory encoding, finding, for example, that the fusiform face area does contribute to 'face memory,' but that if such maxima of activation are removed from the representation, other areas are just as indicative of the (face) stimulus.
In my final project, I want to take several cryptosystems/encryption algorithms and see if it's possible for a neural network to recognize the pattern. If all goes to plan, my dataset will consist of ciphertexts and their plaintext equivalents; this is what I'd like to train my network with. I'll test simple substitution/transposition ciphers and maybe even ones that rely on the difficulty of factoring large numbers (e.g., RSA); however, I highly doubt the network will be able to 'crack' the latter. If time permits, I may also explore the idea of training two networks against one another in an attempt to create an encryption/decryption system. Anyway, I'm running into a problem when I try to set the inputs and outputs for the neural network.
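One way I might set the inputs and outputs is to map each letter to a number between zero and one, so a ciphertext/plaintext pair becomes two numeric vectors. A quick sketch, using a Caesar cipher just to generate training pairs (this per-letter normalization is only one possible encoding, not a settled choice):

```python
import string

ALPHABET = string.ascii_lowercase  # 26 letters

def encode(text):
    """Map each letter to a number in [0, 1] so it can feed a neural network."""
    return [ALPHABET.index(c) / 25.0 for c in text]

def decode(values):
    """Round each value back to the nearest letter."""
    return "".join(ALPHABET[round(v * 25)] for v in values)

def caesar(text, shift=3):
    """A simple substitution cipher, used here just to make training pairs."""
    return "".join(ALPHABET[(ALPHABET.index(c) + shift) % 26] for c in text)

plaintext = "emergence"
ciphertext = caesar(plaintext)
# Training pair: network input = encoded ciphertext, target = encoded plaintext.
x, y = encode(ciphertext), encode(plaintext)
print(ciphertext, decode(x), decode(y))
```

One drawback of this ordinal encoding is that it imposes an ordering on letters ('b' is "between" 'a' and 'c'), which may make learning harder than a one-hot encoding with 26 inputs per character.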
'The line between living organisms and machines has just become a whole lot blurrier. European researchers have developed "neuro-chips" in which living brain cells and silicon circuits are coupled together.'
I heard this headline on the way in to school yesterday morning and dug up this link
Sounds pretty exciting.. but I don't know enough about neurons to imagine how they would interact with the electronics. If anyone has an idea of how this might work, could you fill me in?
Also, I thought one of the pictures was interesting.. have a look at the snail neurons.. is that why NetLogo calls them turtles?
I saw this article
on Slashdot today touting the "first digital simulation of an entire life form," and couldn't help but think of Karl Sims' "evolved virtual creatures"
that we saw a few weeks ago during our discussion of evolutionary algorithms. Though the mechanisms and scale of simulation are wholly different (Sims, as I understand it, worked with locomotion mechanisms, while these researchers worked at the cellular level), the resulting "life forms," which exist only in virtual space, are quite similar in concept. The premise of both experiments is also remarkably similar: both groups believed that they could effectively simulate organic life forms using computer algorithms.
I'd like to do a model in NetLogo for the final project: a simulation of ant colonies based on Deborah Gordon's work, which I read about in Johnson's Emergence and which was also part of the talk at the Swarm Exhibit. I want to try to model how colony behavior changes as the colony gets older and larger, as well as task allocation based on encounters with other ants/agents and possibly interactions with other colonies. I'm working on getting some of Gordon's papers now, so I'm just starting to play around with ideas for how to implement it in NetLogo.
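Just to get a feel for the encounter-based idea before building it in NetLogo, here's a minimal Python caricature of task switching driven by local meetings: an ant that bumps into a forager sometimes becomes a forager itself, with no central control. (The meeting rate, switch probability, and task names are my own placeholders, not Gordon's actual parameters.)

```python
import random

random.seed(42)

TASKS = ["forage", "patrol", "midden"]

def step(ants, n_meetings=200, switch_prob=0.1):
    """One tick: ants meet pairwise at random; meeting a forager can
    recruit you into foraging. A loose caricature of the idea that
    task allocation emerges from local encounter rates."""
    for _ in range(n_meetings):
        a, b = random.sample(range(len(ants)), 2)
        if ants[b] == "forage" and random.random() < switch_prob:
            ants[a] = "forage"

ants = [random.choice(TASKS) for _ in range(100)]
initial = ants.count("forage")
for _ in range(20):
    step(ants)
final = ants.count("forage")
print(initial, final)
```

Even this crude version shows a positive feedback loop: the more foragers there are, the more often ants meet one, so foraging spreads. A real model would need a balancing force (e.g., foragers switching back when food is scarce), which is part of what makes Gordon's colonies interesting.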
I found a neat article
on an unlikely connection between physics and math: a similarity between the energy levels of the nuclei of heavy elements and the zeros of Riemann's zeta function. I don't know the first thing about the math or physics involved, but maybe someone with more background can say something interesting about this.
Just posting to say that I have no clue what to do a project about. (Neither does jferraio, she says, though I have great faith that she'll come up with something awesome.) I hope I'm not alone. Coding, fine, I can do that. But I just don't know where to start. Anyone have any castoff ideas they're willing to hand down? Jumping-off points? When I asked Doug about this on Saturday, he asked me what I thought was most interesting in the course so far. I still can't decide. Help?