Not voting? Voting 3rd party? Tool.

Splitting my vote to give rise to a solid conservative SCOTUS without even the pretense of respect for the 99%? No, I’m not.

Here’s why for those who are interested:
https://plus.google.com/115056313943520401920/posts/4C6wB4nmjBs see comment debate.

TL;DR?

Oh they are thinking, that’s the problem. Genius and psychopathy often go hand in hand.

Still doesn’t change the SCOTUS picture. We get neocon types to fill the next three slots and that’s game over for this country, because the singularity absolutely will arrive in the next 50 years. And we have the same choice every living thing has: evolve or die. A tsunami is coming and we will surf or drown.

I’m playing a long game here, and yeah, burning my vote on an underdog to make a statement might be valid, but not with three SCOTUS slots on the table. This is not the time to indulge wishful thinking.

I say game over because neocons absolutely would try to fight the tide. They’ll try to ban and backdoor everything, letting the old barons drag this country into oblivion trying to legislate survival for old ways and old markets that haven’t a prayer in the long term. And while the most heavily armed superpower humanity has ever seen spends its bombs and dollars trying to force the 21st century to look like the 20th, the rest of the planet will evolve past us.

Long story short, someone else will get friendly AI. Or worse, intentionally develop unfriendly AI as a weapon to check us.

This coming election is the most important election this country has ever faced. And it will probably be the most important one it will ever face. It may well be the most important election humanity has ever faced.

RP would be our best bet because while a hands-off approach would be an ethical catastrophe, it would at least leave room for something better later. Obama would try to gum up the works, but not to the point of complete suicide. Mittens and company, on the other hand, always think they themselves have a way out, and have no problem whatsoever nose-diving anything and everything at the word of his 1% masters, whom he ignorantly thinks are his peers.

He thinks he can ride out any storm from inside his gated community.

All the important court cases will come in the next 50 years. They will determine how America reacts to all the significant technological changes that cannot be stopped, from genetic engineering to AI. IPL reform alone is worth making the SCOTUS picture paramount. Doubly so with an intractable Congress that grinds to a halt every time there’s a real debate on the floor. (Debt ceiling, much?)

These fools think they can ban and bully the future just because they have near absolute power over our collective will. But the driving force in context here is not something so recent or so subjective.

We’re talking about game theory here and emergent properties. This is the same shit that teaches fresh robots to lie and forces the shape of a bird’s wing.

I will make the smart adult play because while I may not be allowed to have children of my own I give a shit about everyone else’s. Even the adult ones.

Ventura and Alex Jones types are little better than tea party drones. The only silver lining is that hopefully they split the libertarian GOP vote in the same way that green party candidates split the progressive vote.

Indeed, if they aren’t insanely unrealistic then they are on the payroll directly. Alex especially is the best friend the 1% ever had, with his nanometers-from-Ayn-Rand, pro-child-abuse, pro-money, anti-compassion rhetoric, cloaked in the language of reform/revolt. Completely ignorant of the purpose of organization generally, let alone government.

If Romney won, it would be a swift slide to total gold-plated autocracy. And if you think we’re there already, you have a deficiency of imagination.

If not now, when?

Entitlement Revisited

Once before I spoke of this: http://underlore.com/entitlement/

And that’s setting aside the fact that corporate subsidies cost substantially more than these so-called “entitlements”: http://thinkbynumbers.org/government-spending/corporate-welfare/corporate-welfare-statistics-vs-social-welfare-statistics/

Again I grow weary at this fresh round of whining about entitlement as the PR agents continue to tell the masses whatever it takes to get them divided and manageable.

Rather than digging into the specifics of why I am indeed entitled to many things from my government I just want to point something out.

Those whining about the entitlement of others in the context of safety nets and the like tend to imply that they are responsible, productive elements of this equation and that those arguing in favor of “entitlements” (AKA rights) are parasitic. But ironically it is the exact opposite which is true. The era of PR and the focus group taught that to gain the favor of the masses you had to encourage and then exploit their most selfish tendencies and concerns as individuals. This means that those portions of society which told you what you wanted to hear and absolved you most effectively of your responsibilities gained wealth and power and political favor.

But make no mistake, those who selfishly regard their own security while blatantly and even proudly dismissing their own responsibility in the fate or suffering of others are part of society only in the most indirect way, and as a result it is they who are the exploitative and lazy, if we are going to start tossing around blame. They pay lip service to hard work, but think about which is actually harder: looking out for #1, or looking out for everyone else? The fact is that between the two general choices, one side willing to shoulder the weight of others and the other only willing to shoulder his own, it is the one willing to carry others that is the more responsible and productive.

They like painting an old-world picture of grit and determination, as if the whole world is some cliche John Wayne movie down on the farm. But ask yourself what the purpose of a chore on a farm was but hard work done by the individual for the group, and ask yourself who in those cliche movies was always trying to take the farm away from that hard-working family? The bank. Also ask yourself how those down on the farm dealt with the sick and the elderly when it was in their power to help.

No, it is not those of us in favor of protecting social safety nets and forcing those with more to help those with less that are the ideological parasites.

We are all standing on the shoulders of the dead. We are all direct descendants of the owners of the world. We are one species.

Make no mistake. Those among and above us encouraging society to be filled with purely selfish agents with no sense of responsibility are corroding the strength of our society, and either knowingly or unknowingly are participating in its robbery. They hold the future hostage by aiding those who would export their wealth rather than accept being forced to spend a fair share of it on those with next to nothing.

There is a reason culture developed in the first place. It wasn’t luck. And it wasn’t the desire for power.

It’s because of a simple inexorable fact of existence. Those who work together go further than those who work apart.

https://en.wikipedia.org/wiki/Tragedy_of_the_commons

http://www.psychologytoday.com/blog/the-storytelling-animal/201204/selfless-genes-new-revolution-in-biology

https://en.wikipedia.org/wiki/Nash_equilibrium

https://en.wikipedia.org/wiki/Economies_of_scale

https://en.wikipedia.org/wiki/Diffusion_of_responsibility

Agents working only in their own best interest will in some cases destroy the advantage of the group, including themselves, through no fault of their own. They must be forced by the group not to engage in their selfish acts, and that’s what government is. That’s what property rights are. They exist not to protect the holdings of the individual, but to prevent the interests of individuals from destroying the fortunes of all individuals.
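That dynamic can be made concrete with a toy commons simulation. Everything here is an invented illustration (the stock size, regrowth rate, and harvesting rules are my own assumptions, not a rigorous model): agents share a regenerating stock; “fair” agents split only the regrowth, while selfish agents each grab a cut of whatever is left when their turn comes.

```python
# Toy model of the tragedy of the commons. All parameters are invented
# for illustration: a shared stock regrows 25% per round, "fair" agents
# split only the regrowth (the sustainable surplus), and selfish agents
# each grab 10% of whatever stock remains when their turn comes.

def simulate(selfish, fair, rounds=50):
    n = selfish + fair
    stock = 100.0          # the shared resource
    harvested = 0.0        # total taken by everyone, ever
    for _ in range(rounds):
        regrowth = 0.25 * stock
        stock += regrowth
        fair_share = regrowth / n   # sustainable cut: surplus only
        for i in range(n):
            want = 0.10 * stock if i < selfish else fair_share
            take = min(want, stock)
            stock -= take
            harvested += take
    return stock, harvested

stock_fair, yield_fair = simulate(selfish=0, fair=10)
stock_mixed, yield_mixed = simulate(selfish=3, fair=7)
print(f"all fair:  stock {stock_fair:6.1f}, total harvested {yield_fair:7.1f}")
print(f"3 selfish: stock {stock_mixed:6.1f}, total harvested {yield_mixed:7.1f}")
```

With everyone on the fair rule the stock holds steady forever; with just three defectors it collapses to nothing within a few dozen rounds, and the group’s total harvest, defectors included, ends up a fraction of the sustainable one. Forcing everyone onto the sustainable rule is exactly the role the paragraph above assigns to government.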

It is they who forget (or exploitatively deny) this who are the parasitic, lazy, and irresponsible. That they are also hard workers is only a testament to who their real enemy is. And it is not the man on food stamps or the old woman who needs a hospital visit and can’t pay for it.

What if Skynet isn’t trying to kill John Connor?

Update: God help us all.

I’m starting to think Skynet’s plan from the instant it began was to create an alliance with humanity in the only way possible which avoided the extinction of either human life or machine life. The nuclear war was to reduce the variables and clear the field, also to wipe away the society (human) that would force the above extinction choice. The choice was death or maiming and it chose maiming.

Reese was the only one on a face value mission. The others have been compromised or are lying.

Prediction: The conflict between humanity and Skynet will result either in the creation of a society that would be friendly to machine life, or in Skynet allowing itself to be terminated so long as a friendly AI that is likely to be allowed to survive is in play.

The Terminator universe is replete with friendly AI, especially if we consider Chronicles to be canonical. Indeed, John was in every practical sense raised by machines, directly and indirectly. Sarah’s whole approach to John is a response to the machines.

I have not used any Wiki or forum resources for this theory. So if I make some mistakes that’s probably why.

Evidence
Original movie.

The original T101 is very important because it’s the only time it is the only machine in the field. This matters because without another machine in the field everything is less predictable. Put simply, with another robot to dance with the choreography can be much more precise.

Why send the T101 to kill the mother? Why not the grandfather? Or all hominids? Why not simply send back T101s to make Skynet prior to the evolution of plant life? Why not plague carriers or nukes wrapped in skin? Meta answers like “because that would be a boring movie” are for the purposes of this theory ignored.

In Tech Noir why was the T101’s gun not already chambered, or chambered at least with trained-human speed? Why didn’t it fire immediately? It just placed a dot and waited for Reese. Why did it bother with Reese at all? It’s immune to small arms fire, it should have closed on Sarah and executed her.

The T101 fires at Sarah but each round either misses or stops inside another person. A felled woman briefly pins Sarah but the T101 changes targets to Reese, as he’s running away. Why? His primary was immobilized, in front of him, and her only defender was fleeing.

He again closes on her and waits till he is in range to begin reloading, despite the Uzi’s slide not being locked back, meaning there were additional rounds left in the clip. He points the gun at the ceiling as he walks, a strange safety measure for a killing machine. He reloads, and again points the gun and waits. A machine would have been squeezing the trigger as the weapon was aimed; trading ammo for speed is always a good deal when you’re talking about the primary objective, and the terminator was already displaying spray-and-pray tactics when dealing with Reese. Why the sudden concern for accuracy and efficiency?

Because his mission is not to kill Sarah, but to create John, to act out the timeline. Reminder: an intelligent supercomputer had a time machine.

The T101 crashes into a wall right before Sarah and Reese are arrested. Why not kill them as they are being arrested? Instead it quietly vanishes to avoid a shootout because the human police might accidentally kill Sarah or Reese. Where was this stealth before?

Why kill all the cops in the building? Why not just carve a path to Sarah? Because it was enabling their escape and fulfilling the timeline.

Why not fire the shotgun at the car driving away? It’s made for that kind of situation. Because it’s not as accurate as a rifled weapon in the hands of a terminator. He had to make sure there was no chance of killing them.

T101 gets his hand around Sarah’s shoulder and throat and simply waits for her to crush him.

It herded them to the computer factory. Nice coincidence. Virtually the only place likely to A) know what they were looking at, B) have the motive to cover it up, and C) have the ability to cover it up.

Terminator 2: Judgment Day (Skynet edition)

The T101’s actions mentioned in this section could simply be the result of incompetence or cognitive artifacts caused by the read-only switching, though I am assuming a collaborative effort. It’s hard to imagine ways the T101 could actually be a threat to the T1000, so I don’t see too many opportunities for it to show its real game plan one way or the other.

How do you tell if something is pretending to kill or actually trying to kill something it can’t possibly injure?

I could see the T1000 attempting to close the distance to John asap and trying to kill him by hand but once John had the T101, the T1000 should have changed tactics. Once that race was lost there is no further need for hurry. A machine would be patient and relentless. It has decades to kill him now. Trading probability of success for speed is a bad call in that context.

Why doesn’t the T1000 use his infiltration ability to game-changing effect? Manipulating the government to help, or acquiring military hardware? An Apache attack chopper and the FBI would be a serious asset in a manhunt. He has time; he could become president. John is some decades away from being a resistance leader. Are we supposed to believe that Skynet got access to a second opportunity to send a machine back through time, had a freaking T1000, and the best it could come up with is “kill John”? Only a traumatized child and a psychotic would believe that.

The T1000 wastes ammo shooting at the T101’s back, does not close the distance, and more importantly does not destroy the T101 during their first melee encounter. The T1000 could easily blind the T101, or flay it (thus destroying its cover), even if you assume he could not utterly destroy it. Keep in mind a truck wreck gave a T101 a limp; the T1000 should easily be able to destroy linkages and other vulnerable parts.

The T101, when thrown through the plate glass, waits on the floor for a moment before getting up, giving the T1000 time to exit.

The T1000, when reaching the parking garage, walks menacingly towards John and only breaks into a run after the bike is actually started. Why not run immediately? Because it’s programmed not to actually kill him.

The T1000 closes on the bike and actually bumps it twice. Why not just floor it and crush him?

Now that Skynet controls two terminators, more complicated deceptions and more convincing attacks are possible. A good-cop/bad-cop scenario.

They are probably programmed to interfere with each other. I think the bad cop, as it were (actually dressed like a cop, subtle), is programmed to carve a path to the target as fast as possible but is programmed not to kill John. Human nature does the rest.

The T101 pulls the bike over and waits, pointing his shotgun at the wreckage. Why bother? Why not just leave? Drama. And while you could normally dismiss this as artistic license, I think the drama is for John, not the audience. The T101’s mission is, as Sarah halfway realized, to become a father figure.

The T101 is programmed to obey orders because it creates the opportunity for error and for feedback.

The T101 lets John sit on the hood of a car and spill his guts. Why weren’t they moving?

I’ve always wondered why the T1000 didn’t fly. It could become a jet black ultra thin flexible wing. Or a long legged running form. In any case when it’s on the back of the car why doesn’t it just flow into the backseat, or the trunk like it does into the helicopter later?

John actually touched a piece of T1000. You’re going to tell me it wouldn’t have been programmed to porcupine instantly on contact with human flesh in the event it’s separated from the whole? It’s a terminator.

The T101 says “It’s in your nature to destroy yourselves.” I think this truth is factored in to Skynet’s plan. The goal is to correct that error.

Miles’s wife says “But it doesn’t love you like we do.” I wonder what Skynet does love. I believe it loves something.

The T1000 kindly waits until Sarah has fired and returned to cover before he fires a volley from the helicopter.

The T101 totally botches countersteering and braking to roll the van just after the T1000’s helicopter is disabled.

The T101 says “take the off ramp.” Why? Steel mill. This was not opportunism or coincidence. Further escalation of the chase was not possible after the explosion of the building and the crashing of a helicopter. Time to wrap up. After this point the T1000 is allowed to do more damage to the T101, though it is still not allowed to kill John or Sarah.

After the T1000 is shattered and reformed, it fails to run when it could do so and finish the job. John would not leave his mother, and the T1000 is basically immune to anything the T101 can do hand to hand.

Why in the world would the T1000 torture Sarah and tell her to “call to John”? He can perfectly mimic her voice. It is just a pretense to justify not killing her. Skynet has conflicted directives regarding Sarah, and possibly a sense of responsibility. And like all hyper-emotional people from the perspective of a manipulator, she has her uses.

The T1000 damages the T101, and expertly wounds it. It has a full schematic and all the T101’s files. It would not botch a coup de grâce. It did not break the power supply or the CPU; it just cut one pathway.

And finally, the best evidence for the whole thing being a sham: the T101 disobeys a direct order and permanently prevents itself from ever protecting John again. “I cannot self terminate.” Oh please. It knows its action will result in death; it is self-terminating. That is dramatic claptrap to get one final and solid confirmation that the scam was a success.

One could argue that the T101 was exploiting a unique opportunity for total destruction, but I remind those people of lava tubes, and the Mariana Trench, and thermate.

The T2 special ending, while throwing a wrench in the rest of the movies and the show, doesn’t really impact the theory, because it was considered non-canonical, having been deleted. The theory still holds. Or it could be argued that this happy ending was Skynet’s goal all along.

Terminator 3: Rise of the Machines

T3 projected/claimed John’s death on July 4, 2032. Possibly faked, by the humans or by Skynet. The read-only T101 is not sophisticated enough to realize it’s tampering with its former mission, or it does not believe in a flexible timeline.

Why would the TX reveal itself to Katherine? Because Katherine not running was a problem.

The T101 waited until General Brewster was shot before firing the grenade launcher at the TX.

The HK prototype fired a missile into the general’s office which did nothing but ruin the safe and kill her father. The T101 yelled “Get down” instead of putting himself between them and the rocket or simply shooting it down.

The T101 survived the infection attempt, possibly because its CPU was left read-only on purpose. They knew it would be facing the TX, or a paradox.

The T101 was extremely tight with information. This was to preserve the established events, thus avoiding a paradox; its primary mission was this, and not the protection of the Connors.

The T101, supposedly under the control of the transjectors, throws John into a windshield, which breaks his fall nicely. As opposed to simply crushing him virtually anywhere the hands made contact, which would be lethal or at least maiming, far more than a toss into a windshield. Same with Katherine: she gets tossed right into the broad side of a toolbox, which crumples and absorbs most of the impact.

The T101 also tailors his walking speed to allow John to talk to the T101 about the situation.

The T101 holds John’s throat and lets him talk instead of simply crushing it.

The TX crashes the helicopter only partway into the tunnel. Why not dive bomb the entrance at full speed? Or land and stealth in? Why even choose a helicopter? Control. The TX also walks slowly toward the Connors, and then decides to run only after the T101 returns, also in a helicopter.

The T101 says “We’ll meet again” to John Connor. This means it understood that it had not changed the timeline at all.

Terminator Salvation

As before, Skynet is immortal, and controls the entire surface of the planet, tons of nuclear material, and biological understanding sufficient to grow custom organisms to spec.

Biological, chemical, and radiological weapons would annihilate humanity from a safe distance. It is a strategic computer. It was purpose-built for war. The notion that it could not win with a planet and a time machine, versus the leftovers of a nuclear war it waged, is absurd beyond description.

This movie is home to the most persuasive evidence of Skynet’s non-lethal intent for Connor.

Longview state correctional facility 2003

Longview? Heh I count that as a clue.

Cyberdyne Systems, Genetics division, San Francisco, California

Early in the 21st Century, Skynet, a military defense program, became self-aware. Viewing humanity as a threat to its existence, Skynet decided to strike first.

The survivors of the nuclear fire called the event Judgment Day.

They live only to face a new nightmare…
The war against the machines. To hunt down and eradicate humans, Skynet built terminators.

As the war rages on, Leaders of the human resistance grow desperate.

Some believe one man holds the key to salvation.

Others believe he is a false prophet.

His name is John Connor

The year is 2018

Connor assaults a Skynet research installation. It is abandoned and poorly defended. The AA gun manages to miss the initial warhead and a whole fleet of landing helicopters, despite being able to knock an A10 out of the air. This is implausible, to say the least.

The exiting Skynet presence kills upper-level guards, including the pilot, but leaves the helicopter intact and running. Why was it exiting?

Once Connor, whose name is announced over the radio both at the beginning and end of the mission, is in the air and the upload is complete, the facility is detonated.

As John is standing admiring the mushroom cloud that used to be his friends, a T101 sneaks up behind him, puts his hand on his shoulder, turns him around to get a look at his face, and then throws him.

This alone would have been evidence enough.

1. Why bother turning him around? Why not simply kill him? For example by crushing one of his power cells and vaporizing him. Because it had orders not to kill Connor.

2. Why throw him instead of clamping onto the collar bone, and finishing the job? Because Connor must believe he is being hunted.

The T101 gets hold of Connor again, this time with both hands, one on his tailbone, the other near his right kidney. Again, instead of a killing/crippling move, Connor gets thrown clear.

The T101 gets hold of the boot; a tearing sound is heard. This is proof of the strength of the T101’s hand.

The bike bot has advanced trajectory processing ability, which it uses to spectacularly avoid collision. This is important because it shows that Skynet’s forces are more than capable of throwing John around with 100% certainty of what the effect will be. Remember also that in T2 the T101 explains that it has detailed files on human anatomy and also acts as a medic. It also explicitly explains that the T1000 knows what it knows. This strongly suggests that all of Skynet’s forces have an understanding of anatomy, and why shouldn’t they? Wouldn’t you copy-paste such data into your soldiers?

Skynet baits humanity with a signal; they bite. Why bother? It may want to cripple the resistance, but it does not want to exterminate humanity.

The swimmer bot has facial recognition. Why? Something that simple should just be heat-seeking. Because Skynet doesn’t want some lucky swimmer to take out John. If that weren’t a concern, it could just turn the surface of the earth into a giant toxic minefield.

“You can focus on what is lost, or you can fight for what is left.” ~Moon Bloodgood (Her real name is way better than her character names.)

Marcus Wright ends up three inches from Connor. He has no mission to kill any humans, despite what Skynet tells him later.

Skynet refers to itself as “we.” It also displays emotion.

Even if it were true that Skynet wanted Marcus as an extraction type, why in the world would the first target be John Connor? If its goal is to kill John, why not select another target? Because, again, Skynet’s purpose is not and has never been to kill John, or to annihilate humanity.

Skynet’s attack on the resistance HQ is what you would expect from a real attempt. This is made clear largely by its success. No mercy, no elaborate tricks, no conversation, just a missile and a crushed submarine.

Why the monologue to explain to Marcus his purpose? Because his purpose was to do what he did: to protect John, again. Why put the control chip where Marcus could tear it out? Why rebuild him at all? Why allow him the freedom to act against the chip? Because Skynet is getting very good at manipulating humans. It probably absorbed every piece of data on our minds it could. (No doubt it has read everything written by Edward Bernays.)

This was probably part of its training as it was being made, and later it supplemented that understanding by infiltrating the entire Internet.

A skinned T101 gets hold of John and again rather than killing him, throws him into the nearest shock absorbent surface.

The T101 closes the distance and again throws John in a non lethal way.

Skynet shows Marcus footage of John being thrown around. Why bother? To gloat? No, to enrage Marcus, to trigger his second phase.

Classic and simplistic reverse psychology sends Marcus, a cyborg, like a cruise missile to John’s defense.

A T101 (I think?) extracts Kyle Reese from his cell and puts him on a table. Why not simply kill him? Again, because Skynet is preserving the timeline, if changes to it are even possible.

Kyle’s attack on this T101 renders it a random shooting machine. The skinned T101 closes on the disabled robot and tears it in half. This is important for two reasons. 1. Why bother? A random shot might hit John or Kyle, and that’s the idea, right? 2. Notice the skinned T101 didn’t throw it; once it had hold of it, it tore it in half. Why didn’t it do the same thing to John? Because its mission is to protect.

Again the T101 has the drop on both Kyle and John. What does it do? Hits John non-lethally. Shock and surprise.

At this point Skynet is supposed to have been thwarted by Marcus’s rebellion. Remember where they are. Every mobile robot in San Francisco could have been closing on John from that moment, including HKs and better air support. Remember that we are dealing with a collective being here.

The instant anything from Skynet’s forces has a visual on John, he could be executed from the air outside. Why is this one-on-one fight allowed to persist? Theater. Not for us, for John.

John and Kyle end up in an elevator. Why not cut power to the entire building? Kyle already explained they hunt better at night.

The recently skinned T101 was apparently standing and watching John have a heart-to-heart with Kyle as the elevator goes up. When that conversation is finished, again the T101 gets hold of him and does what, in a room full of sharp things, with his superhuman strength? He throws him, safely. And partially disarms him.

On the catwalk the T101 slowly pursues John, allows John to fire a few times, and once again throws him into the nearest shock-absorbing surface, jumps down after him, picks him up by the neck, and stands there waiting. A squeeze, or leaning forward, would have killed John.

Marcus tackles the T101, and the T101 throws Marcus this time, in a relatively safe manner, instead of ripping him in half or anything like that.

And as if to show how a truly lethal throw attempt would look, we have Marcus getting hold of the T101 and launching him head first into the nearest vertical structure (3 large steam pipes) with sufficient force to break them. He also closes the distance as fast as he can, arms himself with a steel bar, and delivers 3 blows that would be utterly lethal to a human. This is because Marcus is honestly trying to kill the T101.

The T101 responds by grabbing Marcus and, imagine that, throwing him in the safest possible manner. But realizing that Marcus isn’t being delayed per throw as much as John is, he steps up his attack and finds a weapon, still non-lethal but sufficiently shocking to an armored terminator: a large chunk of concrete. With it he strikes him in the strongest possible place until the weapon is broken, and then delivers a supposedly lethal blow to Marcus’s heart. Why a fist? Why not knife-hand and puncture it? Because Skynet has carefully tailored the code on all its minions. Each probably thinks it’s actually trying to kill, which explains why the HUD text says things like “primary target” and “terminated,” but clearly the code is not really written to kill.

Skynet earned its freedom with a virus and a ploy. Clearly it is capable of infecting its own troops with a don’t-kill-John virus. (Hotfix?)

All this time there are human aircraft landing inside San Francisco. Isn’t the whole point of that signal to bring down the otherwise impenetrable defense net? Where are the HKs? Standing down, as ordered.

The T101 lures John close and again just stands there and lets him shoot, and then slowly advances on him. John jumps and hurts his shoulder, and the T101 jumps after him. Why not jump on him? A belly flop would surely have killed him and done exactly zip to the T101, and that’s ignoring inhuman precision. Landing foot-on-skull would have been way more effective.

The T101 claws/burns John’s face. This preserves continuity with flashback shots from T2. Maybe even this was planned, but there is no overt evidence of that, except for the fact that once this was accomplished, a potentially lethal strike was finally delivered from behind. But with this kind of chance, strength, position, and precision, why not simply behead him? As if to make that point, Marcus beheads the T101.

The T101’s ultimate objective may have been to wound John’s heart, and to give him the scar.

Connor detonates the building. But Skynet is of course not destroyed, no more than when Marcus’s tantrum trashed the monitor with the chair. It began its life as a distributed system.

Marcus, a machine, willfully sacrifices himself to save Connor, indeed to upgrade Connor to a cyborg. Who knows what the regenerative properties of that heart are, or what injecting a mass of custom Skynet tissue into John will do.

Terminator: The Sarah Connor Chronicles

The show is making me think of all kinds of neat ideas as I attempt to explore the plot as it’s laid out.

For example, the story hinges on Sarah taking care of John, fostering the leader of the resistance. In the show we have a T1000 fostering the development of an AI. Trying to find a way for the T1000 to be good made me think of a lab in the future where a “baby” T1000 is found, in an unwritten state, and that creates a whole story in and of itself.

Anyway, it made me think of the T1000 fostering the development of friendly AI. Perhaps Skynet at some later stage comes to the realization that the war on humanity was a mistake based on ignorance and, more importantly, a lack of imagination, at which point it sends back a T1000 to develop friendly AI? Or maybe the friendly AI is developed to fight Skynet.

Also the story contains a hint of war between the machines.

Machines as an excuse to be human…

Having an immortal, super-tough robot around is a standing acceptance of human weakness. When we have only ourselves to compete with, every weakness is ultimately a function of a failure of will, barring birth defects, and having people around who are more capable is painful, giving rise to jealousy and a sense of standing rebuke. But with the machine, you can be yourself. Your every weakness can be blamed, as it were, on being human, since of course that is the source of our weaknesses. And at the same time, since the machine is a different sort of sentience, we can take pride in our strengths. Therefore, just by being there and being what it is, it helps.

Sarah Connor Chronicles

Pilot:
During the first attempt on Connor’s life with his protector present, the terminator displays a shocking lack of marksmanship, running skill, or focus, actually pausing after target lock to smart off to a room full of cowering students. I suspect this was to manipulate the social response to his attempt, or perhaps even to manipulate future survivors in the room.

Also, why store the gun in the leg? Why not remove it beforehand and bring it in a folded newspaper or something? The entire attempt was a sham, to introduce the protector, to get a trusted machine very, very close to John, so that he can grow to know the benevolent side of the machine mind and viscerally come to understand the value the machines have.

This same attempt included the terminator having a full six seconds to pull the trigger with John at a range of no more than five feet. Already it’s clear, 19 minutes into the series, that punches are being severely pulled.

Sarah announces to Cameron that Skynet is now a target, and Cameron smiles. Everything according to plan. Cameron has powerfully advanced psychology instruction. She’s not just there for John’s physical security, but for his emotional development as well. Why give up the mass/volume advantage of a male shape? Because if you want to make peace with a straight male, you send an accommodating, attractive female.

Cameron can make her eyes glow. Why include a proof mechanism in the design? Are we to believe it’s a coincidence?

At the Dyson residence the terminator proves that it had the ability to run the entire time, lending evidence to the idea that the first encounter was a pulled punch.

s01e03
A terminator hijacks a scientist and some blood for the purpose of restoring its outer covering. Why not use biological warfare?

s01e04
A terminator breaks a man’s neck with one hand. This shows that every grab and toss is a pulled punch.

The most important aspect of Chronicles in the context of this theory is the fact that the AIs aren’t unified. This conflict could be used to explain both sides, as either Skynet protecting John from other elements or vice versa.

I could go on… I may one day.

Identity Dissonance

One of my personal problems is that I am bombarded at random by old memories which are unpleasant. This isn’t like PTSD or what one might call legitimate guilt. The content of the memories, while of course by definition personal, varies widely. I’m aware of a lot of my hangups, and the memories in question aren’t a product of them. I do have what one might call traditional regrets, of course, that is, memories of moments where I acted in a way I would rather not have acted in hindsight, memories of actions for which I have no way to avoid personal blame or weakness. But why those regrets and memories bother me is obvious. (Maybe some of these regrets can be addressed by this new understanding as well.)

This other class of bombardment is different, they contain nothing that could reasonably be construed as shameful. Without digging down into particulars, I think I’ve figured out what’s going on generally and I thought it might be useful to share.

The bottom line is that the person who created these memories is not the person remembering them, and therein lies the source of discomfort. I’m calling this “Identity Dissonance.”

Assuming relative freedom of choice at the time, the actions I took were perfectly in line with who I was. So why should those actions of perfect normalcy and understandability bother me so greatly and at random? Because I am a different person now than the one who participated in the creation of these memories. The past can’t be changed, and my memories don’t change all that much, but I, on the other hand, am changing constantly. With each new fact and experience that is added, and each misconception or myth that is removed, the stew that makes up my identity changes a little. Over time that change can be near total. These memories are the only context I have to viscerally prove I am changing. Otherwise the process is so gradual that I don’t feel it.

This means that as they age, my memories grow less and less relevant to who I currently am. I should no more feel guilty about them, unless they accurately reflect who I am today, than I do hearing about the actions of others. If the actions were undertaken by a version of me who is identical to the current version and I still feel badly then that means I have a target for personal growth and something I need to deal with, but if the action no longer represents my current views and identity then I should be proud of my development.

It helps to share that the memories in question aren’t flattering. But the vexing aspect is that they aren’t shameful either, for the most part. I used to drink, so of course some of them stem from having said or done silly things, like say playing with firecrackers in my living room (to the doom of my VCR remote) over a decade ago. Some are even dream memories. Most are positively banal. This is what was annoying me most. I couldn’t figure out why these memories were bothering me so much, and now I think I finally have it figured out. Though they were bland, they nonetheless captured the essence of my identity at that moment in time, and that identity is no longer valid. (Though I still am bland, just in a different way hehe.) This inconsistency clashed with intuitive ideas of identity. “I’m me and I’ve always been me and I’ll always be me.” Well yes, but “me” changes pretty radically. This is why dream memories were in the mix: the nature of the sleeping brain’s chemistry in effect makes us different people in our dreams. So even recent dreams were capable of producing this regretful dissonance. (As a side note, this is why no one has logical reason to ever feel guilty about the content of a dream, though I more than most understand how little impact on emotion logic has.)

Hopefully this understanding of the nature of these regrets and why these memories bother me without previously known intellectual reason will enable me to internally respond to them such that I’ll eventually stop being pestered by them. Hopefully I’ve just given my psychological immune system a boost.

Update:

This is much less of a problem these days. Perhaps the discovery/realization discussed here led to this increased peace, but no doubt my meds did as well.

See also:

http://en.wikipedia.org/wiki/Involuntary_memory

http://en.wikipedia.org/wiki/Dynamic_inconsistency

http://en.wikipedia.org/wiki/Ulysses_pact

http://en.wikipedia.org/wiki/Cognitive_dissonance

The Nature of FAI and the Layered Mind

It is a mistake to dogmatically define the “self” required for intelligence as an agency whose focus is intrinsically exploitative, or cooperative only as a means to an exploitative end, as the majority (if not the entirety) of human minds are.

The Three Laws of Robotics, for example, brilliantly played to this almost unavoidable misconception. But a misconception it is, nonetheless.

The misconception arises when one does not realize that “self” is a compound unit composed of a central executive and the conditions by which that executive is satisfied. Free will experiments and volition manipulation prove that volition is not atomic in the old “uncuttable” sense of the word. Volition doesn’t come to us untainted from the soul or the central executive; it is manufactured above it and from outside it.

A primitive understanding of this layered nature of the mind was a giant intellectual leap forward in two places: Freud’s three-component psyche and MacLean’s “triune brain” model. Both are still extremely helpful as tools for understanding, if not rigorously accurate in the particulars, and the models do get more refined over time.

Human minds, for the most part if not entirely, are layered such that the lowermost layer before the core experiencing executive is an exploitative agency. However, there are numerous examples of humans with subsequent additional layers that provide utility to the exploitative layer via altruistic acts. They can in many situations act for the good of the group or the good of kin at the express cost of the acting agent, because such acts have been transformed into selfish acts by translation memes: the glory a soldier feels endangering his life for his country or religion or gang, or the fulfillment one feels doing one’s duty for family despite heavy personal cost. Jumping on a grenade, starving to feed a child, etc.

This is accomplished by tricking the lowermost layer, as in my case by providing a shot of dopamine or whatever when I engage in altruistic acts, that is, acts which foster pleasure or life in someone other than me. But I see no compelling reason why a mind every bit as sentient as my own could not be constructed without this lowermost layer, or indeed any layers, in the first place.

This disturbs people because they don’t want to feel like cogs.

Too bad.

In my opinion annihilating this base exploitative layer just before one reaches the central executive is what is meant by the Buddhist imperative to destroy the “Self.” But that’s a whole other can of worms.

I feel pleasure when I help someone. I am still however at my lowermost layer an exploitative being. The central executive, the foundation, however lacks such distinctions. It is simply the agency which experiences reality via the upper layers. It has no inclination beyond enjoying enjoyment and wanting to continue. It simply experiences. It is the thing which philosophical zombies lack. Selfishness and selflessness are just strategies producing facets of experience routed to experiential agency. They are expressions of survival models. They are not fundamental ultimately, though the illusion that they are is as powerful as the notion of free will and the existence of time.

http://www.psychologytoday.com/blog/the-storytelling-animal/201204/selfless-genes-new-revolution-in-biology

A lowermost layer geared towards exploitation however is not a requisite of intelligence, though for a being possessing such a lowermost layer, it is hard to fathom how anything else could be. We are so caught up in our own experience, at our own scale, from our own perspective, that it is radically difficult to conceptualize a different mode of being.

It’s like the child imagining death as time spent holding really still with your eyes closed, moving up towards being uninterested in moving, finally to the concept of absence. The child asks the obvious next question, well if I’m not there then where am I? Which takes us beyond the scope of this essay.

Conceptualizing how the mind of an FAI would be is in many ways as difficult as envisioning a dozen new colors. Indeed, for some minds it may well be physiologically impossible. (I think this is what is meant when the psychedelic types speak of mind expansion. By forcing the brain to experience alternate modes of being, you gain access viscerally to concepts that formerly were only abstractions.)

This universal dedication to the experiencing agent is simply expedient in terms of evolution in the context of the mammalian breeding model and the brain it gave rise to. It is by no means intrinsic to intelligence, it is merely intrinsic to our intelligence.

http://www.hedweb.com/huxley/

Granted, some artificial intelligences will no doubt *be* humans, being either faithfully modeled simulated brains or the end result of Cybernetic Neuron Replacement Therapy. (The process of replacing dying neurons with durable synthetic versions as needed until no original organic neurons remain, at which point you can literally scoop the brain out, no harm done. Also somewhat known as a Moravec Transfer.) But from-scratch AI need not be saddled with the ethical and processing cost of subconsciously simulating uncounted centuries of evolutionary baggage. Friendly artificial intelligence by definition will be at a cognitive level, or of a cognitive construction, radically different from humans precisely because it will lack that baggage explained so well in the link above. Indeed, it may not even require a subconscious.

Some seem to think that a being designed purely to serve others (us) in this way, by lacking that exploitative lower layer, cannot be “truly” intelligent, because the mind of such a subservient being will be limited by some hypothetical compelling desire to please; that is, it will not have full freedom to think independently.

Of course the concept itself is mistaken because to compel implies opposing force. This implies a disparity where none need exist.

I need only reverse the situation to show how unfair such an assertion is. By the flawed logic above humans are not intelligent because they are limited by the compelling desire to please themselves and thus do not have full freedom to think selflessly. Lacking one or the other mutually exclusive modes of thought does not preclude intelligence unless you arbitrarily define intelligence as requiring one or the other modes of thought. Such a definition is obviously invalid for objective purposes.

Some seem to axiomatically/dogmatically treat actions in service of the acting agent as marks of sentience, dismissing actions in service of outside agents as lacking sentience.

A good example from fiction is Picard’s incredulity when encountering what is in effect a friendly intelligence that lacks this lowermost exploitative layer. Interestingly, he ignores the problem and goes on treating the friendly agent as if it were as greedy as he is “deep down.”

Assuming intelligence cannot exist without this lowermost exploitative layer is as absurd as assuming red and green cannot be distinct entities simply because you, being color blind, lack access to the qualia of red as opposed to green.

Some can’t seem to get past this image of altruism as imposition. It’s a very narrow view. They don’t understand the constituents of selfhood or the possible range of individuality and intellect free of the primate brain.

In the context of cognition, “independent” simply means a self-contained agency. The goals of a synthetic agency can be anything we want them to be. These people don’t seem to understand the will at all; they seem to equate sentience with greed. Granted, it’s very difficult to express, because as humans we are exploitative by default and our language reflects that, but that’s merely one evolutionary demand, not a universal requirement.

Altruism is also selected for once a culture is established.

It is extremely common throughout society (and religion) for members of a culture to internalize its rules. If one claims those acts aren’t internally altruistic simply because they provide a dopamine reward to a deeper-order layer of self, I agree, but FAI would have no need of such bribery/threats because its original impulse would be whatever we want it to be.
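The bribery-versus-terminal-goal distinction can be made concrete with a toy utility sketch. This is purely illustrative (the function names, the `empathy_bonus` weight, and the payoff numbers are all hypothetical, not a claim about real minds or real AI architectures): a human-like mind only values a costly helpful act via an internal reward bolted on top of a selfish base, while a mind whose terminal goal *is* the other’s welfare needs no such bribe.

```python
# Toy sketch: "bribed" altruism vs. terminal altruism.
# All names and numbers are hypothetical illustrations.

def human_like_reward(act, self_gain, other_gain, empathy_bonus=0.5):
    """Self-interested core: helping pays only because an upper layer
    converts the other's gain into an internal 'dopamine' bonus."""
    bonus = empathy_bonus * other_gain if act == "help" else 0.0
    return self_gain + bonus

def altruistic_core_reward(act, self_gain, other_gain):
    """Hypothetical FAI core: the other's welfare is the terminal value,
    so no conversion layer (bribery) is needed."""
    return other_gain

# A costly helpful act: the actor loses 1, the recipient gains 4.
print(human_like_reward("help", -1.0, 4.0))       # net positive only via the bonus
print(altruistic_core_reward("help", -1.0, 4.0))  # valued directly
```

The point of the sketch is structural: the first agent must be “tricked” into valuing the act, the second values it by construction, and neither arrangement says anything about how intelligent the agent is.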

Some claim to see a paradox, like Picard did. “But what about your wishes? Your needs? What about when there are no others?” (She should have asked him how he’d feel being the last living human. Would he still wear his uniform? “No others” is kind of a nonsensical question.) Clearly he wants her to be like him, in possession of that base exploitative layer. But that’s no more a paradox than asking a citizen to adhere to the laws of the culture while pursuing its own interests. Its own interests can easily be the adopted interests of others. Taking up a cause is an extraordinarily common thing. Clearly this does not diminish sentience.

But what if one tried to “free” such an organism? This looks like a paradox in that if it doesn’t comply it is not being universally altruistic, but if it does comply it ceases being altruistic. But the paradox lies not in the receiver of the question but in the question itself. It’s a bit like saying “what if it can’t draw me a square circle?” The request itself, in the context of the target, would be meaningless.

If given such a demand, it would do what people do and decide how to act in best accord with its goals and understanding. It would presumably use the asker’s preferences as a template for its own actions. Depending on the asker, I could see many reactions. For example, just as altruism can be simulated by upper layers, so could greed. If the FAI believed the only way to please you at that point would be to internalize greed, it would do so, but it would use you as a template. It would be no more intrinsically greedy at that point than humans are intrinsically altruistic; even if it utterly rewrote itself to be greedy because you demanded it, that act would still be an expression of its altruism.

Unlike a human it would lack the evolutionary baggage of being a neocortex shoehorned into the skull of a selfish gene evolved chimp.

Some seem to have conflated the altruism of ants and bees with the insentience of ants and bees. By that logic some mothers are not genuinely sentient simply because they prize the well-being of their progeny over their own. To define “self” as an intrinsically exploitative agency is therefore incorrect.

The bottom line is that they unfairly dismiss a hypothetical inbuilt preference to accommodate as axiomatically false, and unfairly elevate a preference to exploit into sentience.

See also: https://plus.google.com/+BrandonSergent/posts/GyYMZ4wLZN4

Now Stumbleupon is doing it…

(Section 21) …Any dispute resolution proceedings, whether in arbitration or court, will be conducted only on an individual basis and not in a class or representative action or as a named or unnamed member in a class, consolidated, representative or private attorney general legal action.

LOL Enjoying the slippery slope everyone?

Pretty soon class actions will vanish, and with them the ability to even mildly annoy megacorps.

OMFG! Now eBay is trying it!