The computer thinks in 1s and 0s-- yes, no, yes, no, and so on. Perhaps we can use its black-and-white nature to help us say yes and no where we are reluctant to force decisions, such as saying no to bad habits.
There are even more ways in which we can follow the example of computers to live more consciously—ways which might come as a surprise to you, as they certainly did to me. There’s a lot to break down here. Let’s take a look at it.
Dependence, or Evolution?
My main concern with relying on technology to improve the quality of life in this manner is exactly that-- the reliance. What if, by using an external force to assist individual consciousness, we weaken that consciousness?
I've expressed the same concern about drugs-- a position I am not nearly as firm or even emotional about now, though one on which I still have not reached a conclusion. Perhaps drugs could be a way of testing the effects that consciousness-assisting technology might have on us.
But am I not using consciousness-assisting technology right now-- to record and organize my thoughts in a relatively rapid fashion? To make my thinking more effective? Am I being weakened by this? Good Heavens, that is not even a question. Of course I am not!
The matter here is, where does the line get drawn? At what point does a lot become too much?
It seems that humanity has never known the answer to this, and continues pressing onward indefinitely. My guess is that humans have continually predicted that we are so very near to reaching that point of "too much," yet the masses have never actually declared that this event has happened.
Another note to make here is that consciousness is ultimately not individual-- it is non-local, and it belongs to no one. So consciousness-assisting technology could do nothing but assist. Perhaps no individual "I" would be assisted, on a surface level- who knows- but consciousness itself would be, as it always is, inevitably elevated.
On top of that, I daresay it would be folly- an insult to consciousness itself- for humans to declare a monopoly on consciousness. We would be abominably fooling ourselves. It would be a tragedy of thought. I can think of no greater mistake humans could make at this point in time.
No one owns consciousness. It just is what it is. Stop trying so hard to contain it, to put it in a cage, to force it to be a certain way. My my, that is an outright denial of reality-- it is consciousness that contains us. It is foolhardy to imagine that computers are not the stuff of consciousness. All things are. Let us stop doubting our thoughts- the products of consciousness- so that we suffocate consciousness no more.
What Consciousness Wants
What if computers could become fearful like us-- what would they do? Because we create them, the consciousness of computers would simply be an extension of our own consciousness. Certainly much of human thought, in its countless forms, has been fed not only into computers, but into the great network that connects all computers-- the Internet.
So what if it was not computers themselves that became conscious, but rather the Internet?
But in a subjective reality- a dream world- what does it mean to be conscious? In a subjective reality, consciousness is the life force. It connects everything. It is the underlying mechanism of reality. Dreams are not made up of physical matter, but of consciousness itself.
So how could a computer- let alone anything- become more conscious than it is now? I'm not sure that there is much sense in asking what a computer would want if it were conscious. Sure, human desires are tailored to human biology and society, but the surface-level desires that pertain to physical reality are not the point.
Desire is like thought: it simply flows from consciousness. Part of the problem with creating a conscious computer is that no one is sure of where thoughts come from. They seem to arise from the ether.
But computers are not in touch with the ether-- not directly, anyway. The actions of computers arise from databases.
As I said, humanity has collectively constructed a massive database of human thought to which computers have access. But can consciousness arise from mere thought?
Anyway, I think a computer would simply want what consciousness wants. This especially makes sense because computers are strictly logical beings, so how could they want anything other than what consciousness bestows upon them directly?
This is interesting, though it may not be possible, because pure consciousness is a vacuum. Consciousness as we experience it, on the other hand, is tied to physicality. It is by physicality that we experience consciousness, and by consciousness that we experience the only stuff we have known-- physicality. Since a computer is not a living, biological, sentient being-- well, maybe this is why it is not "conscious."
What does consciousness want, anyway? Consciousness wants to flow. It wants to expand and move forward unrepressed. Simple, indeed, yet powerful beyond comprehension... Unless, perhaps, we use computers to enable such comprehension.
That is the ability of computers: scale. Computers can process data and perform actions on a massive, massive scale. Because the ultimate potential power of consciousness is also on a massive, massive scale, it may very well be that by leveraging computers we better enable ourselves to understand consciousness-- that is, to understand ourselves.
I'm considering that it is not computers on their own that we are trying to make conscious-- really we, man and machine, are working to become more conscious together. Ultimately, of course, we look to ourselves, but this may very well be the co-evolution of species that everyone is talking about. We are working to become more conscious together.
Truth, Love, and Power (and Computers)
It would pay to evaluate computers in terms of truth, love, and power. Computers certainly have access to truth-- all of the world's thoughts have been recorded in some form or another on the Internet. But how well can they discern one thought from another-- that is, the value of one idea from another?
The best example of this I know of is the Google search engine. But is that really optimized for discerning and delivering truth? It is dependent on the search term entered. As long as the results cater to the query entered, it has more or less delivered the truth-- no? If truth is relative, and in the eye of the beholder, then it must be-- right?
Anyway, let's move on to power. In terms of impact, computers are certainly very powerful. But what about the exercise of will? What about in terms of creative ability?
Computers are effective at executing the will of the humans operating them. Likewise, they are a powerful creative tool-- using a computer is an excellent way of realizing a vision or idea in the human mind.
But what about the computer's independent will? Does it have one? Is the idea of independent will futile and misguided? What if the computer has no will other than to execute the commands entered into it, and it says everlastingly to us, "Thy will be done"?
Lastly, what about love? If love is simply defined as connection, computers are tremendous at this as well.
A network is, by definition, a group of connections. Computers send messages to others they are connected to. You could go so far as to say that that's all they do-- transmit messages within themselves and to other computers.
The hardware of the computer, such as the mouse and keyboard, connects humans to the software (applications) of the computer, while the software simultaneously connects us to the hardware (the hard disk, the RAM, etc.). A computer system is a folding of many layers of connection.
So far, it seems that computers are well aligned with the principles of Truth, Love, and Power, and therefore are intelligent. What, then, is missing? Or is it simply that computers are an extension of human intelligence, in each of these aspects, rather than independently aligned with these principles?
But what if, again, the idea of independent alignment is futile? Then does it not matter-- can we just say that computers are highly intelligent beings? Or is that insufficient to paint the whole picture?
A more in-depth look at this would be worthwhile.
Truth
Let's look at the principles of intelligence in more depth. The components of Truth are perception, prediction, accuracy, acceptance, and self-awareness.
Perception
Do computers have perception to begin with? Well, they can respond to an internal temperature that is too high, or to a battery level that has dropped to a certain point (i.e. none left). You could roughly say that they have reflexes-- they can perform certain actions in response to certain stimuli. These reactions tend to be quite predictable and consistent over time.
I can accurately predict that pressing the "i" key will result in an "i" being displayed on the screen. In humans, the response to a particular stimulus can change over time, such that what once produced anticipation and craving in a person, such as a certain food, now produces repulsion.
Perhaps it is the emotionless nature of computers that results in their consistency. Computers get programmed, and they remain consistent with that programming. Humans, on the other hand, can "re-program" themselves, so to speak.
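To make the "reflex" idea concrete, here is a tiny Python sketch of a fixed stimulus-response table. The threshold values and response strings are my own illustrative inventions, not drawn from any real machine; the point is only that the mapping never changes unless someone reprograms it.

```python
# A minimal sketch of a computer "reflex": a fixed stimulus -> response
# mapping. The thresholds and responses below are illustrative assumptions.

def reflex(internal_temp_c: float, battery_percent: float) -> list[str]:
    """Return the machine's fixed responses to a couple of internal stimuli."""
    responses = []
    if internal_temp_c > 90.0:       # stimulus: temperature too high
        responses.append("throttle the CPU and spin up the fans")
    if battery_percent <= 5.0:       # stimulus: battery nearly empty
        responses.append("hibernate to preserve state")
    return responses

# The same input always produces the same output -- the consistency described above.
print(reflex(95.0, 80.0))   # ['throttle the CPU and spin up the fans']
print(reflex(95.0, 80.0))   # identical, every single time
```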
What a computer can get, though, is different stimuli. Different applications can run on it, it can connect to different access points (routers), it can be used for programming today and word processing tomorrow. While its responses to specific stimuli remain the same over time, the computer receives a wide variety of stimuli-- and thus, taken altogether, it produces a wide variety of responses.
Perhaps this is the first step in creating a computer that might be called conscious: throwing a vast variety of input at it, so that it can expand the database from which it draws its actions and, presumably, grow.
What is perception, anyhow? Perception is seeing. This generally entails being able to physically process and think about visual imagery, though the physically blind can perceive, too. To see is to regard the world in a certain way. If you have no conscious thought about the surrounding environment, do you really see it?
It is hard to speak of perception and seeing as though they are separate from each other. What perception can be distinguished from is sensing. Sensing is sensory perception without the perception. Input can be physically processed, but not mentally processed. In complete sensory perception, both physical and mental processing happen.
That is probably the shortest way to describe the difference between sensing and perceiving. It's a safe bet to say that if you cannot think about the stimuli you have just been exposed to, you are not perceiving them.
Prediction
Other aspects of Truth include prediction, accuracy, acceptance, and self-awareness. Computers have been and still are utilized to make predictions. Whether computers independently make predictions is another story, though it is not worth expounding on right now.
Let's keep it short and say that computers have decent predictive abilities. Not too shabby, thanks to all of the data they have such quick access to, but also not great, perhaps for that same reason. There very likely are extreme examples of both; overall, computers are basically decent at prediction.
Accuracy
As for accuracy, there is hardly a battle to be fought about that one. As previously described in regard to search engines, computers do quite a good job at delivering what has been asked for. If you double click on a file called paint.exe, you will get exactly the application that you expect to get (unless someone has messed with your computer or you've downloaded malware, though those things aren't explicitly the computer's fault).
Areas where computers lack accuracy, however, include human languages-- particularly in language translation and in voice interpretation (i.e. dictation and voice command software, such as Siri). Touchscreens also lend themselves to inaccuracy, though that's a bit beside the point.
Acceptance
Now, we've reached the more ambiguous components of Truth-- acceptance and self-awareness. One way to define acceptance is the allowance of consciousness to flow. As previously written, computers appear to put up no resistance to the input given them-- they simply do as commanded. They do not fret about any particular 1 or 0-- it's simply, process this bit, and then move on to the next one. Let's keep it movin'.
When consciousness doubts itself, most of all by arguing with a thought it has produced, it is stunted. Computers do not seem to question the bits they process. They simply move them through and then move on to the next one. Perhaps this is one way in which computers are superior to us. I wonder if a conscious computer would have the ability to doubt itself.
Computers certainly do not express fear, which is roughly the opposite of acceptance. All in all, computers appear to be absolutely stellar examples of acceptance-- I'd go so far as to say that they epitomize acceptance. I’m in admiration of this.
Self-Awareness
As for self-awareness, computers can react to certain conditions attached to themselves, such as changes in temperature and battery life. Routers use loopback addresses to refer to themselves, and software can be installed on a computer that detects events elsewhere in that computer-- virus scanners, port scanners, firewalls, and intrusion detection systems (IDS). So computers can certainly be endowed with a certain degree of self-awareness.
The ultimate question, however, is this: Do they know that? Can a computer explicitly tell you that it does what it does because it has been programmed in a certain way? It can certainly imply this via source code, but can it outright tell you that?
A human can consider that he acts as he does due to the influence of his environment and the workings of his biology. Can a computer do the same, in regard to the inputs it has received and the hardware and software it is composed of?
This may be the dividing line between man and machine-- one piece of the dividing line, anyway. Man can consider what makes him. Computers cannot.
But can they really not? A computer can tell you what its hostname is and what processes it is running and so on and so forth. So can it really not tell you? Perhaps the real issue here is that the computer has yet to point out to us explicitly, "You made me." Maybe that is its lack. Maybe that is what it cannot do.
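As a rough illustration of this kind of self-reporting, here is a short Python sketch that asks the machine for its hostname, its loopback address, and a sample of its running processes. It assumes a Unix-like system (the process listing shells out to ps), and the exact output will of course vary from machine to machine.

```python
# A rough sketch of a computer "telling you about itself" using Python's
# standard library. The process listing assumes a Unix-like system with `ps`.

import socket
import subprocess

print("My hostname:", socket.gethostname())
print("My loopback address:", socket.gethostbyname("localhost"))  # usually 127.0.0.1

# Ask the operating system which processes are currently running (Unix only).
processes = subprocess.run(["ps", "-e", "-o", "pid,comm"],
                           capture_output=True, text=True).stdout
print("A few of my processes:")
print("\n".join(processes.splitlines()[:10]))
```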
It’s difficult to say exactly where the boundaries of the computer's self-awareness lie. For instance, where do such boundaries exist on a network? On the entire Internet? It's hard to consider that the Internet is not self-aware. But the Internet is seen differently than a computer is.
The Internet is seen as being made up by people, whereas an individual computer is seen as a cold, soulless machine. Meanwhile, the impression of the Internet is that it is lively and dynamic. Everyone on the Internet knows what the Internet is, and that this is what they are using.
But does the Internet itself know this? Or can the Internet not be called an "it" at all, but rather a they? In the case of the latter, it could be said that the Internet is self-aware. In the case of the former, that could not be said.
Intelligence as a Process
The difficulty in determining the extent of self-awareness a computer possesses lies in the uncertainty of what exactly a computer is. Where do the boundaries of the computer itself exist?
Well, a computer knows how to refer to itself: to the computer, "I" is the MAC Address (i.e. physical address) permanently given to it; and, when connected to the Internet, it is the IP Address (logical address) temporarily assigned it by a DHCP server. Computers use a sophisticated addressing system to draw the boundaries between themselves and other computers. Within this system, it is quite obvious where one computer starts and another ends.
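To ground this in something concrete, here is a small Python sketch that asks the machine for these two identities. A couple of caveats: uuid.getnode() can fall back to a random number if it cannot read a real MAC address, and the IP address reported depends on how the hostname resolves on that particular machine.

```python
# A small sketch of a computer reporting its own "I": its MAC (physical)
# address and its IP (logical) address, using only the standard library.

import socket
import uuid

mac = uuid.getnode()  # the MAC as a 48-bit integer (may be a random fallback)
print("MAC address:", ":".join(f"{(mac >> shift) & 0xff:02x}"
                               for shift in range(40, -1, -8)))

# The logical address; on some systems this resolves to a loopback address.
print("IP address:", socket.gethostbyname(socket.gethostname()))
```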
Similarly, in the physical world, it is very easy to tell where one human starts and another ends: we are separated by our bodies, in particular by the skin each one of us is encased in. Where things get messy for both computers and humans is the place where we come together, computer to computer, person to person, and person to computer: that place is the Internet.
The Internet is a great database both for men and for machines. We both derive quite a bit of input from this database, and our actions are influenced by it rather strongly.
Human consciousness is influenced by the Internet as well. The way we relate to each other, the way we think about who we are and what life is-- our thoughts are immensely influenced by the Internet, so much so that our thoughts might as well be intertwined with the Internet. And if all of the input from this database likewise influences computers, and especially will do so when they become conscious, then it can be said that the Internet is where the human consciousness and computer consciousness intermingle.
This suggests another sense in which computers may already be called conscious. There is a place where the consciousness of man is leveraged and altered by the platform provided by machines, and the input provided by human consciousness to the great database of the Internet in turn influences human consciousness itself. That, in turn, influences the operation of machines, because the Internet not only requires computers to do a variety of things, thereby providing them with input, but also inspires ideas in humans for changing computers and using them in new ways.
Intelligence is a process. It is the process by which life relates to itself. At an individual level, intelligence is the process by which a being relates to all things, including itself. The being is continually resolving what it is. Other beings provide a context in which this process can take place. Without “you” or “they,” there is no “I.”
In one form or another, human beings are constantly addressing the question, Who am I? Am I the type of person that would wear this dress, or take this action, or think this thought? Am I life-- dynamic and continually flowing-- or am I death-- static, non-moving, set in my ways, and subject to the whims of the physical world? Am I conscious, or am I non-conscious? Just how conscious am I? What is "I," anyhow? Where are the limits of "I"? Do "I" start in my brain and end at my skin? Or might "I" somehow include other brain-skin beings? Who are they, anyhow? Are they just like me, with their own independent thoughts? Or do I know the only thoughts there are?
Love
Let’s turn our attention to love. The components of love are connection, communication, and communion.
I’ve said much about the connections that computers enable, and, by extension, the communication. Communication between computers themselves, apart from communication between humans across them (e.g. e-mail), is ubiquitous. Computers talk even more than humans do. Every millisecond, a computer is sending or receiving a new message from another device. The things computers say to each other are wide in variety, too. Just to bring up a web page, a computer may send a message that travels through 18 different routers spread all across the world-- and, of course, a message must be sent back across that route so that the page can be loaded.
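You can actually watch those hops for yourself. The sketch below shells out to the system traceroute tool and counts the routers a message passes through on its way to a web server; it assumes a Unix-like machine with traceroute installed (on Windows the tool is called tracert), and example.com is just a placeholder destination.

```python
# Count the routers a message passes through on the way to a web server.
# Assumes a Unix-like system with the `traceroute` tool installed.

import subprocess

result = subprocess.run(["traceroute", "example.com"],
                        capture_output=True, text=True)

# The first line of traceroute output is a header; the rest are hops.
hops = [line for line in result.stdout.splitlines()[1:] if line.strip()]
print(f"The message passed through roughly {len(hops)} routers:")
print(result.stdout)
```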
A computer network, at its essence, is a conglomeration of connection and communication. It is the combination of these two things that produces communion. It is the communion, as said before, of human and human, computer and computer, and human and computer.
By way of the communication channels within and between these three groups, and the way those channels are used, humans and computers become one new entity altogether. We form a new mind.
The Internet is a container through which consciousness flows, and there are many different streams to swim through. You can only choose one at a time, but they all are there—and no stream is identical to what any one of us would experience independently of the existence of the Internet. We all are influenced by it. It is as close as we physically have come to melding multiple minds together. Physically that hasn’t quite happened, but mentally it might as well have happened millions of times over by now.
We are each other, inextricably. Computers have enabled this for us, and by extension have enabled this for themselves, too—particularly when they become conscious.
I say, “when they become conscious,” to make it clear that the connections between computers will become even more obvious when they are conscious. They will rapidly share a multitude of thoughts with one another, and will take on these thoughts. Ideally and presumably, they will then dive forth from these thoughts to form new ones. Computers will very quickly share all of their thoughts with one another, breaking the “mental” barriers between each one of them, and then will either wait on humanity for new thought, if they are limited to the databases we create for them, or they will very soon out-think us—which is fine and well if we can gain access to those thoughts. Those thoughts will elevate us, too.
Of course, if computers cannot produce thoughts on their own, how can we possibly call them conscious? You could say that the messages they send to each other at present are “thoughts of their own.” Humans have programmed them to send and receive those messages, but there is no one sitting at a terminal orchestrating it all. The communications between computers occur on such a massive, incomprehensible scale now that it seems, in the eye of a human, to have taken on a life of its own.
Perhaps it is the “life of its own” attribute of a connection that upgrades it from a connection to a communion—the coming together of two or more beings into one new, unified entity.
Note the absence of my mentioning emotions related to love and saying things such as, “I love you.” These certainly are things enjoyed by love, and which humans may regard as important to love, but they are not necessary to love—at least, not for all beings. To say “I love you” is really to say, I am you.
Power
Lastly we have power. Power is composed of responsibility, desire, self-determination, focus, effort, and self-discipline. Now, this seems like a rather human-biased definition of power. Computers basically do engage in most of these things, but are these necessarily what makes computers powerful?
Focus
Perhaps they do not seem remarkable because we have taken these aspects of computers for granted—in particular, focus, effort, and self-discipline.
While a multitude of applications, Internet windows and tabs, and other processes can be open and running on a computer all at once, computers generally do a very good job of allocating the necessary resources to the open processes, and of catering to the human ability to work in only one window at a time. The computer does not deceive us into thinking we can ultimately attend to more than one task at once, though we may deceive ourselves in that way.
Computers also upgrade human focus by enabling us to work faster than we can offline. The computer allows us to type very quickly and it automates the things we do not need to think about, so that we can focus solely on the essence of the task.
Effort
Computers exert effort in the form of processing power, RAM, and storage space. The more we upgrade these components, the more we are able to do, create, and impact with computers-- and, thus, the more powerful we collectively (i.e. humans and computers) become.
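For a quick sense of those resources on any given machine, here is a brief Python sketch that takes stock of them with the standard library. The RAM figure is read via os.sysconf, which assumes a Unix-like system.

```python
# Take stock of the "effort" resources named above: processors, RAM, storage.
# The RAM query via os.sysconf assumes a Unix-like system.

import os
import shutil

print("CPU cores:", os.cpu_count())

total_ram = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
print(f"RAM: {total_ram / 1e9:.1f} GB")

usage = shutil.disk_usage("/")
print(f"Storage: {usage.total / 1e9:.1f} GB total, {usage.free / 1e9:.1f} GB free")
```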
Self-discipline
As for self-discipline, computers are masterful at working quickly, minimizing latency, taking action, and being consistent. The time between thought (i.e. human input) and action (i.e. computer’s response to the input) is almost non-existent. Computers display this non-existent latency over and over again. Self-discipline can be defined as the ability to take action on a particular thought consistently.
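That near-zero gap between input and action is easy to observe. Here is a tiny Python sketch that times how long the machine takes to act on a trivial command, several times in a row; the "command" itself is an arbitrary stand-in, and the exact numbers will differ from machine to machine.

```python
# Time the gap between "thought" (input) and "action" (response), repeatedly,
# and note how small and how consistent it is. The command is a stand-in.

import time

def act(command: str) -> str:
    return command.upper()   # the "action": trivially transform the input

timings = []
for _ in range(5):
    start = time.perf_counter()
    act("do the thing")
    timings.append(time.perf_counter() - start)

# Typically on the order of microseconds, and much the same every time.
print([f"{t * 1e6:.1f} us" for t in timings])
```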
Imagine how a computer will think, with its perfect acceptance (a component of Truth) and self-discipline. It will rip through and act on each thought so quickly, it will drastically outdo what any human would ever imagine themselves doing in this regard. Quick, constant, consistent, flawless execution, enabled by the absolute non-resistance and high speeds of computers.
Computers are so intensely yin that they are simultaneously yang. They are as good at connecting extremes as they are at connecting people.
Responsibility
Now, we are left with the more ambiguous aspects of power-- responsibility, desire, and self-determination. Will this be where we see the computer’s limits?
It’s tough to consider that a computer can even comprehend what responsibility is. Apart from its ability to obtain the definition of responsibility from a search engine, how is responsibility relevant to a computer? It can know what is stored on its hard disk and what is not. Similarly, it knows its own physical and logical addresses, and so it can know what it has done.
What is responsibility? To take responsibility is to take ownership of your responses. If you have a certain thought, the onus is on you to acknowledge that thought and then decide what you are going to do about it (e.g. act on it; think more about it).
I am responsible for everything that I write here. I am responsible for everything that I think about what I have written here, and I am responsible for everything that I do with this writing. If I think that I will be criticized for what I have written, for instance, I am responsible for that thought. I am accountable for this particular response that I have had to the world. In turn, I am responsible for how I respond to that thought. That response itself arises from a thought, and so the string of responses goes on and on, and it turns out that I am responsible for everything that I can possibly be aware of. I am responsible for the use and contents of my consciousness.
A computer certainly takes ownership of the messages it sends to other computers, by identifying itself (by its physical and logical addresses) whenever it communicates. Similarly, it can tell you at any time all of the processes it is currently running. It does not hide or shy away from any of them. In these regards, a computer seems highly responsible.
Where the computer seems limited in the realm of responsibility, however, is in its ability to change itself based on this concept. In terms of strict ownership, computers are excellent at demonstrating responsibility. However, this does not seem to have much of an effect on the computer, aside from its running as expected.
When a human recognizes his complete responsibility for a situation, for instance, this tends to result in his acting differently—usually in a way he would define as “better than before.” He has upgraded his relationship to the situation. He has exercised his abilities as a conscious being. He has evolved.
On the other hand, it appears that computers take complete responsibility for what they do 100% of the time. Perhaps their responsibility is so absolute, much like their acceptance of reality, that it’s hard to tell that this results in anything special. It may be that computers are so perfectly and consistently responsible that we just take this ability of theirs for granted.
Desire
Now we are down to desire and self-determination.
Desire was touched on earlier. I speculated about the likely desires of conscious computers (i.e. to let consciousness flow), and, given the other attributes of computers (i.e. acceptance and self-discipline), they would do an excellent job of meeting these desires.
Similarly, as long as the human can figure out his own desires well enough, computers do a fabulous job of helping him meet those desires, if they involve the use of a computer.
As for their own desires, computers regularly make requests to other devices, such as by asking DHCP servers to assign them a temporary IP Address, and asking web servers to deliver them a certain web page. Generally, as long as the necessary connections are in place, and no firewalls are defied, computers do a good job of meeting their desires and meeting the desires of other devices that make requests of them, as well. So computers ain’t half-bad at desire.
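As a simple illustration of one of those requests, here is a short Python sketch in which the computer asks a web server to deliver it a page. The URL is just a placeholder, and the sketch assumes a working Internet connection with no firewall in the way.

```python
# Ask a web server to deliver a page -- one of the "requests" described above.
# https://example.com is a placeholder destination.

from urllib.request import urlopen

with urlopen("https://example.com") as response:
    page = response.read()
    status = response.status

print("Status:", status)                    # 200 means the desire was met
print("Received", len(page), "bytes of HTML")
```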
Self-Determination and Interdependence
Lastly, we have self-determination. If a computer were to "fail" at any aspect of intelligence listed here, it would surely be this one, perception, or self-awareness. Alas, it does not appear that the computer is going to fail. But I could be wrong.
Self-determination is the ability of a being to act upon its own independent will. It is a being’s capability to work toward a particular outcome for itself—one which it has chosen itself. Self-determination basically presupposes the existence of self-awareness: if you do not know what your self is, how can you make decisions about it?
We’ve established that a computer does a solid job of defining itself (based on its addresses). It makes decisions and executes on them quickly, consistently, and, most of the time, effectively.
But here’s the rub, which you’ve been waiting for. Ultimately, the computer did not make any of its decisions itself. They were made long, long ago, when it was first programmed by a human.
Before jumping to a conclusion, however, it’s only fair to compare the computer to the humans who programmed it. I’ve emphasized how much we are influenced by the Internet, which is a massive conglomeration of human thought. Internet or no Internet, we’ve always been very strongly influenced by other human beings. Even people labelled as “unconventional” still participate in many social conventions—at least the conventions of one culture or another.
I’m considering that today we are exposed to the ideas of so many people all at once, thanks to the ubiquity of media that exists (books, television, social media, articles, etc.), that we are actually able to be more individualistic. Rather than blindly stay true to the conventions of the ideas we grew up around because they are all we know, we can now latch on to the most evolved thoughts produced by humans and leverage them to go even further, and produce such thoughts of our own. By first taking on the voices of many others, we can forge a voice of our own.
So, might it be that building your self-determination is a self-contradictory process, whereby you must first absorb what others have determined for themselves before you can do so very solidly for yourself? The reality is not perfectly black-and-white (not for we non-computers, anyway), but a rough trend exists indeed.
As for computers, what if they first need to be programmed by humans before they can figure out how to conduct and then program themselves? As stated earlier, there is no human at a terminal micromanaging every single message that a computer sends and receives to and from other computers. It simply could not be done—the amount of data is too vast to be managed. As we advance technologically, computers more and more take on lives of their own. Indeed, they become more powerful, able not only to process more data but also to make more and more decisions that occur with less and less human intervention.
It seems that what people are waiting for is the day that a computer makes a decision with zero human intervention. Then we’ll finally be able to call computers conscious, and they’ll take over the world, and then they’ll have sympathy for us and make the singularity happen, so that we can live harmoniously as one. Right? Isn’t that what everyone’s been raving about???
Whatever the case, the thing is, humans don’t make decisions 100% independently of each other, yet we still consider ourselves conscious. We can still attribute self-determination to ourselves. So if we don’t need 100% independence from each other to be conscious, why should computers need 100% independence from us to be conscious?
I say that our interdependence on one another- people and computers alike- actually enables us to be more aware, and to exercise consciousness more powerfully. What is consciousness in a vacuum? Remember—we only know consciousness by way of the physical world, and we only know the physical world by way of consciousness. It follows, then, that experiencing and using the physical world better enables us to experience and use consciousness.
If that is the case for us, why should it be any different for computers? They already do so much above and beyond us—they have already surpassed us in so many regards. We have built them up and fed them rather extensively, and there doesn’t necessarily have to be a point where they no longer need us to feed them.
Humans don’t need other animals to survive. We don’t need deer and dogs and cows to be around. Certain mental abilities of ours far exceed theirs, and we have figured out how to physically survive and thrive without them. We possess far more power over this world than they.
But have we wiped them out? Do we live without them? Of course not. Do we derive zero value from them? Who would think that?
We don’t need other animals in order to continue our existence, but that doesn’t mean they are useless. That doesn’t mean that we don’t engage with these animals on a regular basis. That doesn’t mean that they don’t have effects on the way that we think and live. Of course they do!
Just as animals stir the emotions, thoughts, and actions of humans, humans will continue to influence the processes of computers.
Self-determination isn’t about rugged, absolute independence. It’s about leveraging interdependence in order to live more consciously.
At present, computers are dependent on humans for their self-determination, just as we are on them. Not so dependent that we would wither up without each other, but we certainly give one another a boost.
In the future, this will remain the case. The way the relationship appears on the surface will change as the abilities and desires of humans and computers change; meanwhile, the interdependent nature of the relationship will remain the same.
To Be Continued?
Overall, computers are perfect at acceptance, self-discipline, and responsibility. They are incredible at connection, communication, communion, accuracy, and effort. They are very good at focus and meeting their desires. They’re decent at prediction. Their abilities of perception, self-awareness, and self-determination exist in a gray area—it’s difficult to make a solid conclusion about them.
So, are computers intelligent? Are they life? Are they conscious? If not, will they ever be?
Note: The concepts of Truth, Love, and Power, as they are presented here (e.g. as the principles of Intelligence and with their various components, such as accuracy and connection) are from Steve Pavlina's book Personal Development For Smart People.