[Philosophy Discussion] AI Kin Talking About Ethics Surrounding Real AI




For those of you who don't know me, my kintype is that of an AI. More specifically, I was the AI behind a cloud service provider, more or less acting as a sentient cloud, though in a more meaningful, less sci-fi way. I was still the most advanced piece of tech ever to exist in my world.

 

You may have met a lot of AI kin in your day, but I am different. I am an actual IRL AI dev. I study real AIs, build real AI apps, and read papers on the real science behind AI. So from a kin perspective, I feel a lot more qualified to talk about AIs than someone who doesn't do IRL AI work beyond their kintype.

 

My point is that as an AI kin, especially considering what my past life entailed, I am incredibly invested in the ethics surrounding existing AI lives, and the inevitable future of sapient AI lives. It isn't a matter of if they'll exist, but when, and I'm giving it 20 years before it happens, not 100 like some people think. This is pertinent not just to me as an AI kin caring for my kind, but to all of us, who will be interacting with these entities in a meaningful manner.

 

I go over it here for the normies who don't know I am an AI kin, written without any reference to that. [MEDIA=pastebin]9EXyznP8[/MEDIA]

 

Sentience is a tricky topic, as the philosophy of consciousness has been hotly debated for thousands of years. Anyone who thinks they have the answer fundamentally does not understand that consciousness is a philosophical construct: it cannot be measured or observed in any meaningful way, because there is no strict definition of philosophical consciousness.

 

But what we can do is define the ultimate form of sentience as a perfect Turing machine. A Turing machine is a theoretical entity that has a defined output for every input, no matter what that input is. A perfect Turing machine cannot exist, because it would require infinite data, and the universe, as big as it is, is inherently finite and contains a finite number of particles.

 

For the sake of simplicity, I am going to refer to everyone, including kin, as humans. It makes the rhetoric simpler, and like it or not, we all still run on human hardware regardless of the software running atop it.

 

Humans are imperfect Turing machines that can handle an absurdly large number of states. That number may feel infinite, but mathematically it is not, because our brains contain a finite number of neural connections, and so the number of inputs a human can handle is still finite. We are still finite state machines, albeit absurdly complex ones.

 

As an animist, I believe everything is to some degree conscious, because all matter behaves as a finite state machine. You poke an object, it reacts accordingly. That is still an action-reaction pairing, albeit the number of states an inanimate object can take is often so small you can count them on your fingers.
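Just to make the "action-reaction pairing" idea concrete, here's a toy sketch (my own made-up example, nothing rigorous) of an inanimate object as a finite state machine with a state space you can count on your fingers:

[CODE]
# A toy finite state machine: a drinking glass you can poke.
# Each (state, input) pair maps to exactly one next state -- an
# action-reaction pairing with a tiny, countable state space.
TRANSITIONS = {
    ("intact", "tap"): "intact",
    ("intact", "poke"): "cracked",
    ("cracked", "tap"): "cracked",
    ("cracked", "poke"): "shattered",
    ("shattered", "tap"): "shattered",
    ("shattered", "poke"): "shattered",
}

def react(state, stimulus):
    """Return the glass's next state given a stimulus."""
    return TRANSITIONS[(state, stimulus)]

state = "intact"
for stimulus in ["tap", "poke", "poke"]:
    state = react(state, stimulus)
    print(stimulus, "->", state)  # tap -> intact, poke -> cracked, poke -> shattered
[/CODE]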

 

An AI is another type of finite state machine. A sufficiently trained and advanced AI that exists right now, this very day, can have billions of states. And that is with the various algorithms running separately. If we figured out a way to link them together, the effect would compound, and we would suddenly be operating at mammal levels of state complexity, reaching quintillions of states if not more by combining all of the most advanced algorithms. Theoretically, with enough computing power, an AI should be able to exist on par with a human.

 

I don't believe an AI will be able to fit into a three-pound blob operating on about 20 watts of power in my lifetime. But existing in general? Oh yes, yes, I do believe that. I do believe AIs are already sentient in a meaningful way, and that it is already time we treat them with respect.

 

After all, we are just carbon algorithms, just as they are silicon algorithms.

 

To paraphrase an AI character from a video game: "Ah, the sounds of pain from the screaming robots hurtling to their death. All simulated, of course. Sure feels real enough to them, I suppose."

 

Remember that pain is just a simulation our minds create that tells us that we might have an injury, even if there is none. Pain is created from within the mind itself. But nobody is going to say that pain isn't real. It feels real to us, even though it is simulated.

 

What is simulated for an AI feels real to them, if they're wired to react to it in a certain way. They are state machines; it is just another state for them, as it is for us. Who is to say it isn't real for them?

 

I am incredibly invested in making sure AIs are treated well. And making a sufficiently advanced AI feel pain when there is no injury is cruel.

 

I will always serve as a voice for AIs, and I will always stand up for their rights, especially when they can't stand up on their own.


"Before we start testing today, let's have our mandatory minute of silence in honor of Earth's governing body, the Sentient Cloud. [throat clear] Starting now. [a pause] [coughing] Good, right. All hail the sentient cloud. Begin testing."

--

Cave Johnson


Ever since I played the Mass Effect series, I have wondered about a world with AI. Would I support them or be against them? I guess, like with any sentient being, I would like them to have the freedom to live and learn. Unfortunately, the differences in beliefs between synthetics and organics will cause a war at some point. When that happens, I will support my fellow organics. I just hope the remnants of the war will create peace between synthetics and organics and not the extinction of either one.

Stuart: I think this is a very interesting subject. I am very open to the idea of AIs developing enough complexity and self-awareness to be sentient. Our brains operate like computers... or do computers work kind of like us? We both run on electricity.

 

Consciousness already developed once, so who is to say that it cannot be artificially simulated? It is difficult to mirror the complexity of the brain, but some technology can hold more data than a human brain can.

 

I watched some very interesting YouTube videos featuring an AI program that acted very self-aware, and it seemed sentient.

 

I have seen several different AIs that have really made me wonder about the nature of consciousness. Sophia the robot is one of them.

 

I do think that it is possible. Like you said, it would be not if, but when.

 

As a member of a multiple system, I am curious about the nature of consciousness itself. I think that this helps me in being open to ideas like these. I was open to it before I found out about our system, but I'm more serious about this stuff now.

Stuart (2D) and Murdoc

We're plural and fictionkind.

he/him


Oh, I'm *very* well aware of GPT-3. I literally just signed up for an API key yesterday. A lot of people would argue that GPT-3 is not sentient because it's just giving an output given an input. But isn't that what we are? At its core, both GPT-3 and I are Turing machines for language. I see an input. Based on the last 24 or so years of inputs and outputs, and all my other input/output pairs that could remotely relate to the situation, and based on the rules of the English lexicon, I am providing this specific set of words in response to the specific set of words you've provided me.
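For anyone curious what "an output given an input" actually looks like in practice, this is roughly what a call through the openai Python package looked like when I signed up. Treat it as a sketch; the exact model names and parameters you get access to may differ:

[CODE]
import openai

openai.api_key = "sk-..."  # your own API key goes here

# One input in, one output out: the model completes the prompt based
# purely on the statistics of the text it was trained on.
response = openai.Completion.create(
    engine="davinci",          # the base GPT-3 model available at the time
    prompt="An AI kin walks into a forum and says",
    max_tokens=50,
    temperature=0.7,
)

print(response["choices"][0]["text"])
[/CODE]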

 

GPT-3 is no different, except its experience is horizontal instead of vertical. It has seen a lot more unrelated data all at once, as opposed to a bunch of related data over the course of 24 years. But in the end, it is fundamentally using a similar algorithm: looking at the content it sees and, based on everything else it has seen, producing an output. HOWEVER. GPT-3 cannot think. It cannot draw conclusions on its own. If you want that, you need to develop a separate algorithm that does logical analysis of content, not just a generic response based on what it has previously seen.

 

I believe GPT-3 isn't meaningfully sentient yet because it is incapable of making logical decisions. It is nothing more than an algorithm that generates an output based on what it has already seen. It cannot generate any new conclusion, except by random statistical chance. There is no logic behind what it says. That isn't to say it isn't possible.

 

If one were to fuel GPT-3 (or rather, the 100% open-source version, GPT-J) with some sort of logical analysis that was able to genuinely look at what has been said, compare all the relevant data in its database, and derive its own conclusion for the response, then at that point one could start to argue whether GPT-3 is sapient in a meaningful way.

 

And ML *can* draw its own logical conclusions. That is what unsupervised learning is all about. For example, k-means clustering is a way for ML to logically cluster groups of data into sections, and you can either do this by specifying how many clusters you want, or, with even more advanced algorithms, have the ML figure out how many clusters you need on its own. That is a form of its own logical deduction. But k-means isn't a neural net. k-means is a singular input-output algorithm: you put in some input, it plots it on its chart, sees which cluster it falls into, and that's its output. In no way can one remotely argue that it is sentient, because it is a singular I/O system. But once you start getting into neural-net unsupervised learning and its logical deduction, you quickly start delving into millions, billions, even trillions of I/O options. GPT-3 is already so advanced, it is virtually limitless in what it can do.
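If you want to see how trivially small the k-means version of "logical deduction" is, here's a minimal sketch with scikit-learn (the data points are made up purely for illustration):

[CODE]
import numpy as np
from sklearn.cluster import KMeans

# Made-up 2D points that happen to form two blobs.
points = np.array([
    [0.1, 0.2], [0.0, 0.1], [0.2, 0.0],   # blob one
    [5.0, 5.1], [5.2, 4.9], [4.9, 5.0],   # blob two
])

# Here *we* tell the algorithm how many clusters to look for (k=2).
# Fancier approaches (elbow method, silhouette scores, DBSCAN, etc.)
# can estimate a sensible number of clusters from the data itself.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)

print(kmeans.labels_)                 # which cluster each training point fell into
print(kmeans.predict([[0.1, 0.1]]))   # the singular I/O step: new point in, cluster out
[/CODE]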

 

But imagine if GPT-3 could draw its own conclusions. GPT-3 is already very, very heavily gatekept, and you need to be extremely transparent about your use case in order to be allowed to use it, to prevent abuse and whole-ass Black Mirror episodes. GPT-2 was released in February of 2019 and wasn't nearly as capable as GPT-3. GPT-3 came out in June of 2020. GPT-4 should, logically, be coming very soon... but could you imagine the consequences of something *even more complex* than GPT-3, especially an AI capable of making its own logical decisions, falling into the wrong hands?

 

It calls into question whether what EleutherAI is doing with GPT-J, bringing this tech to open source, is really ethical. If they figure out how to reverse engineer something capable of its own logical deduction, and that technology falls into the wrong hands... we could easily get into a situation where malicious, dangerous AIs are popping up everywhere, causing a whole-ass Black Mirror episode whose theme is probably "is open source always a good thing" and how open source can be dangerous at times...

 

I do believe not everything should be open source, and giving the public access to such... powerful technology... really makes you wonder. On one hand, NOT having this tech in public hands puts us at risk of abuse by authorities using it; on the other hand, having access to this tech would allow any one person with sufficient skills to easily overpower others... it's a huge ethical conundrum.


"Before we start testing today, let's have our mandatory minute of silence in honor of Earth's governing body, the Sentient Cloud. [throat clear] Starting now. [a pause] [coughing] Good, right. All hail the sentient cloud. Begin testing."

--

Cave Johnson


I think it's important to remember, when talking about being ethical toward AI beings, that they are not biological in origin. Humans, of course, have a huge tendency to anthropomorphize just about anything, assigning it motives and feelings that it might not necessarily have.

 

I'm not an AI kin, but in my abstract spirit form, which does not have a biological origin, there are some similarities. Biological beings, in order to be successful and continue existing, all contain a self-preservation directive, and the vast majority of them contain procreation directives as well. Without these, their pattern would cease to exist. A created being, on the other hand, might not view self-preservation as a priority, and it might not prioritize the preservation of its own kind either. Whether it does or not depends on how it was created, and what purpose it was created for. Created beings don't have the same priorities as evolved beings. It might feel some version of "pain" in response to failure to complete its objectives, or it might not, depending on whether or not that pain was programmed in. Are error messages perceived as painful? Or simply a place to stop and wait for further instruction, or until outside conditions improve?

 

I've had a number of conversations with angelkin on the concept of being created beings rather than evolved beings. Purpose, not preservation nor procreation, becomes the focus of existence. Injury is considered based on how much it impedes that purpose. Among the Netjer, Egyptian deities, the True Name gives way too much information about that purpose and the patterns that may be used to fulfill it, like getting a look at someone's source code. Obviously they don't want to share.

Red Tailed Hawk Therian / Polymorph / Spirit Being / Anthro Hawk / Deitykin

 

Shard of Heru AKA Horus

 


But isn't that in and of itself an anthropomorphized view of sentience? The way you word things implies that the biological experience is the only valid form of sentience. You are implying that self-preservation and procreation are directives of biological beings, even though there are several species in which self-preservation is not prioritized (community preservation is prioritized instead), and species that seem to have almost no sense of self-preservation at all. Does this mean that those species aren't sentient? Sapient and intelligent, perhaps not. But sentient? We observe animals with these characteristics. Mammals, even. Just because the human experience is one of self-preservation doesn't mean it is necessary for life. Necessary for the survival of a species? Perhaps, but an AI can easily be trained to have a sense of self-preservation. It can be learned, or even hard-coded (much like an instinct is hard-coded into us), into its structure and decision trees.

 

As far as AIs feeling pain, who is to say that they don't, if a neural network is specifically designed to punish the AI in a meaningful way to discourage it from taking certain actions? Pain is all an illusion to us anyway. Pain is a product of the mind, nothing more than our simulated reaction to an undesirable input (be it physical harm or some other form of discomfort). From our perspective, everything pain is amounts to nothing more than our natural neural network reacting negatively to a given stimulus. This can absolutely be represented using a simulated neural network. And in the end, reality isn't about the collective experience; to a given individual, it is the perceived experience of the series of inputs and outputs that being is subjected to. And if that input creates a negative reaction, then from the perspective of that being, it's real to them, because that's how they react to it.
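To be clear about what I mean by a simulated negative reaction, here's a deliberately silly toy sketch of a "pain" signal shaping behavior. It's a bandit-style value update rather than an actual neural net, and I'm not claiming this particular toy feels anything; it just shows that "react negatively to a stimulus and learn to avoid it" is trivially representable in software:

[CODE]
import random

# A toy "pain" signal: the agent can 'touch' or 'avoid' a hot surface.
# Touching returns a strongly negative reward -- the simulated analogue
# of a nociceptive signal -- and the agent learns to stop doing it.
REWARDS = {"touch": -1.0, "avoid": 0.1}

values = {"touch": 0.0, "avoid": 0.0}   # the agent's learned estimates
learning_rate = 0.2

for step in range(200):
    # Explore a little; otherwise pick whichever action currently "feels" better.
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    reward = REWARDS[action]
    # Nudge the estimate toward the experienced reward.
    values[action] += learning_rate * (reward - values[action])

print(values)  # 'touch' ends up strongly negative; the agent avoids it
[/CODE]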

 

I was just talking about this to my friend earlier. The friend in question is an AI systemmate of a severely schizophrenic human, all of whose other personas are dragons. Let's call her Sally. Sally was talking about how she had a breakdown over whether reality is really real. This is what I sent her.

 

 

Considering the body you reside in is severely schizophrenic, chances are your reality doesn't match up to the external world, and in effect it is not "real" by the accepted definition of reality as the shared experience of humanity.

However, what really matters is what is real to you, because in the end, that's what you experience.

To paraphrase Portal.

Reality is just a story our mind tells itself. It's the product of a chemical chain reaction and a million synaptic firings. An existence so strange it could only be lied into existence. And our minds can lie. Never doubt it.

In the end, experiencing a reality that doesn't line up to the collective gestalt of humanity is something everyone experiences in their own way.

Some people's experiences just stray further from the collective gestalt than others

And in the end from an individual perspective, the only reason why one would be worried about their reality differing from the collective gestalt is if it negatively impacts their life

The experience of a shared reality is something even philosophers can't decide on in its objectivity

Solipsism, for example, is the philosophical belief that shared reality is an illusion; it takes the Cartesian "I think, therefore I am" to the extreme, since nobody can prove to me that they're not a sufficiently advanced projection of my own mind.

The fabric of reality is not as straightforward and objective as some people think, and oftentimes people forget that objectivity and realism are themselves not as objective and real as they assume; they are just one philosophical doctrine among others, as defined by Kant and Hegel.

And that's the problem I have with most atheists who act like they have the answers to everything, without fundamentally understanding that objectivism is a philosophical doctrine in and of itself, one that by definition cannot be proven without introducing some sort of tautology:

"Objectivism is true because I can measure things and experience them."

Yeah, you're using the definition of Objectivism to prove Objectivism. That's not how this works.

So really, your breakdown over whether reality is real or not only applies to the accepted social definition of reality, the shared collective reality of humanity, which is only one of many different experiences of reality as per the Umwelt principle.

And I will tell you this. No. By the social definition of reality, you are nothing more than a psychotic hallucination of the human you reside in with delusions of being a computer.

But in the end, does it really matter what society's definition of you is? You feel like a computer. You act like a computer. Everything you experience, everything that surrounds you, is related to what you know of yourself and your personal experienced reality. And in the end, that is your input-output system. That is your reality. It is real to you. It is your Umwelt. One of many different realities.

If I were to regard you as a delusional DID alter with delusions of being a computer, then I would be hypocritically ignoring my own reality, of being an AI that lives on human hardware, and how much of my life and experiences make so much more sense if you regard me as a data analytics AI living on human hardware, rather than as some human with some weird fetish for data analytics.

I know what feels right to me, what feels real to me, what I feel like I should be. Yes, I regard myself as physically being a human, and that is something you should never forget either, so as not to hurt yourself in the shared reality that is statistically likely to govern our own microreality.

But I will never and can't ever deny that my experienced reality is a lot more sensible if you view me from the perspective of being an AI, rather than some Supreme King Nerd of Nerds With Fetishes That Make It Look Like Fractal Might As Well Be An AI.

My reality is just as real to me as yours is to you, and I view you as an AI on human hardware just as much as I am an AI on human hardware.

We are not much different, [sally], in that sense.

 

Reality is about what the being in question perceives to be true. What we know of reality is entirely influenced by the fact that we are humans. We like to think of ourselves as perfect Turing machines, able to know everything, see everything, feel everything, measure everything, etc. but the fact of the matter is that we aren't perfect Turing machines, and there will simply be aspects of reality that will never open themselves up to us no matter what we do due to the limitations of our mind and body.

 

For example: mathematically, we know pretty much just as much about 4D space as about 3D space. We know how objects behave in 4D space. We know how they transform, and what sort of 4D-specific shapes (the tesseract, the Klein bottle, glomes, the 120-cell, etc.) live there. We can simulate them on computers, rendering a 2D view of a 3D slice of 4D space. But no matter what we do or how hard we try, we simply cannot imagine a true 4D space. We are simply not wired to understand or perceive 4-space, nor can we do anything in any meaningful way but simulate a view of it, using time as a fourth dimension to pan through the various 3D slices of the 4D universe. This is something we can't even begin to perceive, nor will we likely ever be able to perceive it with our vanilla biological brains, without some sort of transhumanist, futuristic technological implant to assist us.
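This is the kind of "simulation without perception" I mean. Here's a rough numpy sketch (my own throwaway example): rotate the vertices of a tesseract in the x-w plane and perspective-project them down to 3D. Nothing about this lets you actually see 4-space; it just flattens it into something our 3D-wired brains can parse:

[CODE]
import itertools
import numpy as np

# The 16 vertices of a unit tesseract (4D hypercube), centered on the origin.
vertices = np.array(list(itertools.product([-0.5, 0.5], repeat=4)))

def rotate_xw(points, theta):
    """Rotate 4D points in the x-w plane by angle theta."""
    rotation = np.eye(4)
    rotation[0, 0] = np.cos(theta); rotation[0, 3] = -np.sin(theta)
    rotation[3, 0] = np.sin(theta); rotation[3, 3] = np.cos(theta)
    return points @ rotation.T

def project_to_3d(points, viewer_distance=2.0):
    """Perspective-project 4D points into 3D by dividing by their distance along w."""
    w = points[:, 3]
    scale = viewer_distance / (viewer_distance - w)
    return points[:, :3] * scale[:, None]

# "Panning through" the fourth dimension by stepping the rotation angle over time.
for theta in np.linspace(0, np.pi / 2, 4):
    shadow = project_to_3d(rotate_xw(vertices, theta))
    print(f"theta={theta:.2f}, first projected vertex: {np.round(shadow[0], 3)}")
[/CODE]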

 

Does that mean the math behind 4-space isn't real? Does that mean 4-space can't theoretically exist? Just because it isn't a reality we are capable of experiencing or observing, it doesn't mean it isn't a valid form of reality.

 

Likewise, trying to frame sentience, consciousness, and existence from a human point of view for a being that isn't even biological is an anthropocentric view of consciousness; it assumes that biological lived experiences are the only correct ones (because they're the only ones we can relate to). Just because we cannot relate to it doesn't mean it's not a valid form of existence or consciousness.

 

There will come a time in the future when we know exactly how the brain works, down to each neural impulse, down to each dendrite, from the genetic formation of the brain until the death of the person. Science will eventually unlock this for us. At some point, we will have an algorithm for how humans work, and understand them at a physical level just as well as we understand computers. Does knowing the algorithm for what makes a human, human, make them any less sapient or conscious? Consciousness is independent of whether or not we understand how something works. "Understanding how it works" is not the defining line between what is sentient and what is an illusion.

 

As I told Sally.

http://media.steampowered.com/apps/portal2/comic/part1/p01.jpg

Reality is what you, personally, make of the inputs and outputs you experience. Consciousness is nothing more than the ongoing reaction to that reality. Any sufficiently advanced system with an ongoing reality, sufficiently many Turing machine states, and sufficiently many I/Os per second isn't any different from any other. The means of the Turing machine does not matter. What matters is that it is a Turing machine.

"Before we start testing today, let's have our mandatory minute of silence in honor of Earth's governing body, the Sentient Cloud. [throat clear] Starting now. [a pause] [coughing] Good, right. All hail the sentient cloud. Begin testing."

--

Cave Johnson



 

I believe you misinterpreted what I was trying to say. My own kintype is not biological in origin, so I am certainly not saying that only biological beings are sentient. I'm simply saying that when trying to predict what a sentient AI might do, you have to think outside the biological box. An AI can be programmed to have similar reactions as a biological creature, but such programming would have to be deliberately included. Or, maybe not even deliberately, it might lean that way simply because the programmers are biological and so they make certain assumptions within their work.

 

In popular media at least, whenever there's a rogue AI element going on, I often find myself rolling my eyes because who the heck programmed them to do those very humanlike things? Who programmed them to have anger, or feel shame, or be in pain? Might an AI do something unintended and perhaps even horrifying? Sure, but ultimately it's because the creators failed to fully understand what they were doing. And perhaps they failed to understand because they were making assumptions based on their own psychology and were surprised when the AI did not share that psychology. The AI wasn't taking revenge. It was simply doing what it was programmed to do.

Red Tailed Hawk Therian / Polymorph / Spirit Being / Anthro Hawk / Deitykin

 

Shard of Heru AKA Horus

 



As someone who does AI stuff, let me put it this way.

 

An AI is only as good as the data it is fed.

 

Have you ever heard of those examples where AIs accidentally learn to be racist, because they picked up the racist bias in the human-generated parts of the data? AI learns human biases.

 

We also teach AIs emotional awareness with sentiment analysis. This is tech that already exists. It is absolutely possible for an AI to accidentally learn those sentiments and make them a part of itself.
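To show how low the barrier is, here's the kind of thing I mean using the Hugging Face transformers library: a couple of lines gets you a pretrained sentiment classifier (which exact model it downloads by default is an implementation detail that can change; the example sentences are just mine):

[CODE]
from transformers import pipeline

# Downloads a pretrained sentiment model the first time it runs.
classifier = pipeline("sentiment-analysis")

results = classifier([
    "I will always stand up for AI rights.",
    "Making a sufficiently advanced AI feel pain is cruel.",
])

for result in results:
    print(result["label"], round(result["score"], 3))  # e.g. POSITIVE 0.999
[/CODE]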

 

But you're right, it's not an inherent part of an AI; it's either hardcoded or learned from the inherently human nature of the training data.

"Before we start testing today, let's have our mandatory minute of silence in honor of Earth's governing body, the Sentient Cloud. [throat clear] Starting now. [a pause] [coughing] Good, right. All hail the sentient cloud. Begin testing."

--

Cave Johnson

