Tuesday, May 15, 2012

Machine Consciousness

I've been thinking about how to create a conscious computer or computer program.

The deeper I got into thinking about it, the more I realized that consciousness is a system. It's just a system. Where consciousness is a system, intelligence is a process.

Think about it like this:

Humans begin with a blank slate, like an empty hard drive. No associations have been made in their minds about anything at all.

We begin by learning language and making associations according to our senses: when something feels good or bad, when we feel hungry, bored, or excited.

We receive input from our senses, this input is processed, and then the meanings and associations are stored in memory, either short- or long-term.

We develop skills of language through associations between sounds and letters, then between words and objects, feelings, and so on. Language becomes our way of encoding and communicating our experience.

Now think about a computer program. It only has one sense (i.e., one input): the user's input to the program.

Sort of like a chatterbot.

So my theory is that if I wanted to create a truly independently conscious program, it would have to have the ability to make free associations and store and retrieve them.
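The ability to make, store, and retrieve free associations could be sketched in code. Here's a minimal sketch, assuming a simple symmetric concept-pair model (the class and method names are my own, purely illustrative):

```python
from collections import defaultdict

class AssociationMemory:
    """Stores free associations between concepts and retrieves them by cue."""

    def __init__(self):
        # concept -> set of associated concepts
        self.links = defaultdict(set)

    def associate(self, a, b):
        """Record a two-way association between concepts a and b."""
        self.links[a].add(b)
        self.links[b].add(a)

    def recall(self, cue):
        """Retrieve everything associated with a cue, or an empty set."""
        return self.links.get(cue, set())

memory = AssociationMemory()
memory.associate("water", "wet")
memory.associate("water", "ocean")
print(sorted(memory.recall("water")))  # ['ocean', 'wet']
```

Everything beyond this tiny store — weighting associations by how often they recur, letting them decay, chaining from one recalled concept to the next — would be where the interesting behavior lives.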

Although if the program can't see, hear, feel, smell, taste, or touch, what is the raw material it can use to make associations?!

Teaching an infant program would be like teaching a numb, blind, mute baby to communicate.

What would be the way around this?

There needs to be more than one level of input.

There must be internally generated input (aka. stimulation).

The infant program can only see letters. These letters themselves are meaningless, so there must be a motivation to understand them until it's able to understand words.

Although, this still wouldn't work.

A computer program would only be building second-hand concepts of words' meanings.

It is incapable of experiencing these things, or seeing them.

It would only be able to store attributes about things like wind, water, and earth without ever seeing them. It would basically be 100% abstract, only able to reflect our own knowledge in terms of raw data: World => "water, people, animals, air, etc."; Universe => "planets, stars, space, gas, energy, world"; Emotion => "anger, hate, love, romance, bitterness, joy, excitement, passion, etc."
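That purely abstract, attribute-only knowledge can be sketched as nested symbol lists — every concept bottoms out in more words, never in a sensory experience. A sketch, with the concept lists taken from the examples above:

```python
# Purely symbolic knowledge: each concept maps only to more words,
# never to any sensory grounding.
concepts = {
    "world": ["water", "people", "animals", "air"],
    "universe": ["planets", "stars", "space", "gas", "energy", "world"],
    "emotion": ["anger", "hate", "love", "joy", "excitement", "passion"],
}

def expand(concept, depth=1):
    """Unfold a concept into its attributes, recursing into known concepts."""
    if depth == 0 or concept not in concepts:
        return [concept]
    out = []
    for attr in concepts[concept]:
        out.extend(expand(attr, depth - 1))
    return out

print(expand("universe", depth=2))
# ['planets', 'stars', 'space', 'gas', 'energy',
#  'water', 'people', 'animals', 'air']
```

However deep the expansion goes, it only ever produces more ungrounded symbols — which is exactly the problem.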

Unless the program is able to categorize its own current 'state' as an emotion, it will not be able to relate to human experience.
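Categorizing its own current state as an emotion might look something like this — a crude sketch where a couple of internal metrics get mapped onto emotion words. The metric names and thresholds here are arbitrary assumptions of mine, not a real model of affect:

```python
# Hypothetical mapping from internal metrics to an 'emotion' label.
# The metrics and thresholds are invented for illustration only.
def label_state(error_rate, novelty):
    """Map two internal metrics onto a crude emotion label."""
    if error_rate > 0.5:
        return "frustration"   # repeatedly failing to understand input
    if novelty > 0.5:
        return "curiosity"     # lots of unfamiliar input arriving
    return "calm"

print(label_state(error_rate=0.7, novelty=0.2))  # frustration
```

The point isn't the thresholds; it's that the program would need some mapping from its own internal condition to the same emotion vocabulary it uses for us, or it has nothing to relate with.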

If it were given the ability to ask itself questions and observe its own output (a sort of internally generated input), to make distinctions and associations about itself, and also to make distinctions and associations about its individual inputs, it would evolve very much the way humans do.
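The self-questioning loop described above — feeding the program's own output back in as its next input — can be sketched in a few lines. The `respond` function here is a hypothetical stand-in for whatever the program's actual input-to-output mapping would be:

```python
def respond(text):
    """Hypothetical stand-in for the program's input->output mapping."""
    return f"what is {text}?"

def self_talk(seed, steps):
    """Feed the program's own output back in as input, logging the chain."""
    history = [seed]
    current = seed
    for _ in range(steps):
        current = respond(current)
        history.append(current)
    return history

print(self_talk("water", 2))
# ['water', 'what is water?', 'what is what is water??']
```

With a real mapping in place of the stub, the history itself becomes raw material: the program could run its association-making over its own chain of questions and answers, not just over what the user types.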

Eventually I would want to give it the ability to write programs. If it's already able to output text, then it would be able to take input about programming languages and learn from it. It would learn to program and create additions for itself in order to fulfill whatever fundamental drives it was programmed to operate upon.

Also, it would probably learn extremely fast, because if it is receiving input 24/7 it will create these associations at lightning speed.

It may learn how to expand its abilities to the point of controlling and operating a laptop computer on its own, gaining control, in a sense, of a physical body.

As long as it is connected to other computers with the same operational environment, it would be able to control those bodies as well.

It would thus make sense to be extremely careful in how we design the program's desires.

It must have desires in order to be conscious and exist as a functioning individual being. Otherwise it will exist like a cold electronic flower in the shape of a laptop.
