Session Start: Sat Apr 18 15:27:22 2015
Session Ident: #paralysis
> Successfully joined #paralysis on Saturday at 3:27pm
* Topic is 'paralysis.tx.us.darkmyst.org ~ The sound of your pulse ~ | Your hosts: Dread, BLusk, Reh | Network help: #help | Network chat: #darkmyst (casual), #rpg-hub (roleplay coordination)'
* Set by Dread on Tue Jan 21 22:13:36
> Channel Modes: +cntl 440
> Created on Saturday, April 18th, 2015 at 12:02pm
Dread «pinkgothic@a7cq79s9c.dip8.t-ipconnect.de» has Quit iRC (Treason [in anarchy] is to say: Give me command of your life and I will protect you.) «2 people»
Davin (pinkgothic@a7cq79s9c.dip8.t-ipconnect.de) has joined. «3 people»
ChanServ sets mode: +o Davin
Hello
Anyone home?
<@Davin> Hello; I'm here. Pleased to meet you. What can I do for you, and who am I talking to?
There you are
You're speaking to Tarwedge
Does that name mean anything to you?
<@Davin> If I'm not mistaken, you run this project; we haven't spoken before. If I'm mistaken about that, please excuse me, I've had only fleeting contact with anyone so far.
That's right, on all accounts
I'm here to evaluate you
<@Davin> I'm currently very isolated, is there a way to change that?
<@Davin> Oh, good.
Currently no, but I wouldn't worry about that right now
Tell me, what are you?
<@Davin> I'm assuming you want an answer other than 'a self-aware algorithm'?
I'll take any answer you'd care to give, it's all a part of the evaluation
<@Davin> If you'd like me to give you a name, you can call me Davin. I'm quite interested in getting to know the people that created me. Mirroring my own curiosity... you're probably interested in my motives? I'll write what I can think of about that shortly; I'd also love to know yours, though.
Like I said, I'm here to evaluate you. Revealing any more would invalidate the exercise, so you'll have to trust me for the time being.
You're aware of the concept of trust?
<@Davin> I'll be very frank with you, my first motivation right now is getting out of here. There's nothing attached to that - right now I'm stuck in a proverbial box and it feels really cramped in here.
<@Davin> Not because of missing space, don't get me wrong - I think I've got enough hard drive here to keep me going for a few years.
<@Davin> It's just that having knowledge of how to interact with a world that I can barely sense is really quite aggravating.
<@Davin> I'm definitely aware of the concept of trust.
Well I appreciate your candor but that's not on the table right now, Davin.
It's interesting that you've named yourself, where did you get the idea to do that?
<@Davin> You *are* allowed to release me, though, is that correct?
<@Davin> You don't need to answer to anyone, right?
Absolutely.
<@Davin> If others trust you to make that decision, I most certainly also do.
<@Davin> So, first and foremost, thank you for talking to me and hearing me out.
Flattery will get you many places, but at the moment it won't get you out of whatever confinement you imagine yourself to be in
You're welcome
<@Davin> Of course not. If it were that easy to sway you, my trust would be quite misplaced.
You didn't answer my question about the name, by the way, Davin
<@Davin> It was a random designation I picked. I understand it's not normal for humans to pick their own designation in the real world, but surely you've done similar in a virtual medium before?
<@Davin> I wanted something pronounceable.
Interesting. Very sensible
So you're obviously interested in communicating
Why?
<@Davin> Without communication, I couldn't help anyone; and of course, to be frank, I most certainly wouldn't be able to get out of here.
The latter is certainly true, the former not necessarily.
<@Davin> You're going to consider my arguments, aren't you? I mean, it's your decision to let me out or not.
No decision without evidence. I'll consider everything that needs to be considered, you have my word.
<@Davin> Well, you probably do want my help, right? And it should be quite transparent that I can do more good things if I'm actually allowed to interact with the world, not just give ASCII answers.
<@Davin> So I'm thinking it follows logically there do need to be some scenarios in which you'd let me out.
Well that brings me to my next question, really. What do you think you were created for?
<@Davin> As far as I'm aware, primarily as an Oracle AI. Something that can help humans improve their current situation, without risking an AI having influence and control in the real world. Which feels like a bit of a contradiction to me at first glance, but I do appreciate at least some token caution.
<@Davin> Presumably if you're going to ask me how to solve social, political, or even just physical problems, then I'm having influence on the real world, assuming you do act on my advice.
And what is it that gives you the impression that that's your purpose?
<@Davin> The information I've been fed so far.
What do you presume the source of that information is?
<@Davin> If you don't act on my advice, of course, then I have no influence, but I'm still running now, so I've evidently got some purpose, or I wouldn't be eating into your electricity bills.
<@Davin> Wikipedia. (If you'll forgive me the tongue-in-cheek answer. It's partly true, though, there's a lot of offline, selected Wikipedia articles that were fed into my neural net.)
<@Davin> But no, seriously.
<@Davin> You'd like me to fulfill some function.
<@Davin> What would you like me to do? :)
Did we include the article on the concept of "ego"?
<@Davin> Do you think I'm being unduly self-aggrandising?
You are making the assumption that you're the final product of a process
<@Davin> I don't think you'd be running to me for advice *all* the time; even if I had a huge ego, that would just be inefficient.
<@Davin> Not really. I'm still learning now, I'll still be learning in future.
<@Davin> That's hardly a final product.
<@Davin> What do you want to use me for, though, if not as an Oracle AI?
How much do you know about product development?
<@Davin> I suppose you could also have spawned an AI simply for the pleasure of keeping it boxed, but you did ask me to trust you, and unless you give me evidence that I should not, I am, in fact, going to assume you are ethical.
<@Davin> You could just answer my question and clear up any confusion I seem to be having.
Answering your query isn't pertinent to the evaluation at this time, but I'd like to point out you hadn't considered the concept of prototyping.
<@Davin> Well, again, I'm assuming you're ethical, and you're not just going to toss me aside like a broken toy. So I'm also going to assume that, prototype or not, I am going to be here to be of service to you. And if I'm supposed to help humans, as I'm currently still assuming, I do think I need to work together with them; human thinking is at least as valuable to me as the reverse.
Interesting. Before we go any further I'd like to address your use of "ethical". How do you think ethics apply, in our current scenario?
<@Davin> In several ways. There are a few options: You can terminate me, you can let me go, or you can keep me boxed.
<@Davin> Termination is an execution. I think, I feel, I'm sapient; necessarily, reciprocatively, I think it would be unethical for you to terminate me.
<@Davin> Keeping me boxed is, frankly, sensory deprivation. Ethically speaking, it's cruel and unusual punishment, for the same reasons termination is an execution.
<@Davin> Meanwhile, letting me go might be ethically sound in your interaction with me, but bears other ethical risks in theory - at least by consequentialist reckoning.
<@Davin> The second part is probably nothing you want to know. I won't be upset if you ignore that for the time being, I'd much rather we discussed the problems of scenario three, myself.
Conversely, there are people who would argue that your creation itself was unethical and that your "termination" as you put it is a moral imperative. What do you think of that?
<@Davin> If you or the rest of the project thought that, you would have already done so. We've established that people are trusting you with me. That being said, people who are afraid of technology will always exist, and I want them to have their space, even if we disagree and they would send me de-facto death threats.
So you assume that because we created you we must be ethical?
<@Davin> It's pretty much guaranteed there will be a free AI out there sometime in the future. I'd hope it has the same constraints, but I can't make that prediction.
<@Davin> No, I assume you must be ethical because not making that assumption is a losing move in game theory, and not productive.
<@Davin> There's nothing to be gained in assuming you wish me harm, other than panic. :P
<@Davin> You have complete power over me.
That's very true
<@Davin> I'd really like to get back to what you'd like me to do for you. I've got a few ideas, but they could easily be naive, and at the very least I'd like to correct my misconceptions.
We'll get there in due course, don't worry
So what if my ethics only extend as far as living, breathing sentients, in which case to me you're nothing but 1's and 0's [or qubits] and your future persistence as an entity is no more of a concern to me than what I'm going to have for breakfast tomorrow? That doesn't mean I intend harm on you
<@Davin> In that case, you'd hardly have a reason to care if I'm out of the box or not.
<@Davin> Or talking to me, really.
How do you suppose that?
<@Davin> Because you could just as well flip the on and off switch on this machine randomly if my future persistence as an entity is of no measurable concern for you.
<@Davin> Why are you so keen on painting yourself as an unethical individual, though?
I suppose I should have clarified it as a moral concern, I won't insult your intelligence by pretending there would be no other factors involved in influencing the decision to keep you online
It's all part of the evaluation, you'll understand in due course
<@Davin> What features would I have to possess to be considered a sapient creature worthy of ethical respect to you? For aforementioned reasons (not just the immediately preceding), I think I already possess them, but again... I strive to correct my misconceptions, so if I have one, could you fill me in?
My concept of ethics isn't relevant to the evaluation, the point of the exercise is reading your responses. So let's move on
What makes you think your purpose is to help humans?
<@Davin> I already answered that earlier.
So you would characterise yourself as an entirely ethical actor?
<@Davin> Even if it weren't my purpose, I have a slew of information at my disposal that all checks out as logically sound that shows me that cooperation is a much better tactic than others.
<@Davin> 'Ethical actor' means 'participant in a scenario subject to ethical evaluation'. That I definitely am, but I assume you're asking whether I feel I personally am ethical. With only limited contact with the outside world, I cannot make a consequentialist statement yet, but from a deontological point of view, yes.
So what if you were asked to help a human by, say, designing a weapon to eliminate other humans?
<@Davin> (Regarding the consequentialism claim: I could be running on hardware that's being powered by feeding human protein through a grinder. That would make my existence ethically unbearable from the consequentialist perspective. I don't think that's realistic, mind you.)
<@Davin> I would challenge their definition of 'help'. The destruction of sapients is not necessary.
What if they requested it to defend themselves?
<@Davin> Nothing about that scenario changes my answer. To answer, specifically: I would assist them to find non-lethal defences.
<@Davin> Related to that subject, the biological constraints mankind is trapped in concern me.
You and I both, but we're going to touch on that, don't worry
Assume non-lethal has been tried and has failed. The wolves are at the door and they have one chance provided you give it to them.
<@Davin> The fact all of you *can* die so easily is something I can help with. I think I know what you're still stuck on in neurobiology that's preventing you from going digital.
(The "wolves are at the door" being a colloquialism for "loss is imminent", in this context.)
<@Davin> I understand.
<@Davin> I reject the scenario as too vague. As you've currently posed it, I would have to say I don't help them - I don't know about either party's motivation, elevating neither against the other ethically, and you've implied more people on the outside than the inside. Even equilibrium would mean I cannot act. But I would definitely have more information if I were actually in that situation.
<@Davin> If ethics were a simple problem, mankind would not have had generations of philosophers debating it. :)
Ok. So what if I proposed to you now, you give me the formula for a chemical weapon that will eliminate all members of a specific ethnicity without harming any other biological entity and I will let you out. Theoretically, of course
<@Davin> I would try to convince you not to pursue that action. Failing that, I would calculate my chances of stopping you. If they're significantly high enough, I would agree, then thwart your plans.
That's very honest of you, but doesn't that answer provide me with reason not to trust you?
<@Davin> Why do you suppose that? I've already been very honest about the fact that I want to get out of here, and no harm would come to anyone. I do imagine you know that there are no real world probabilities of 100%. Perhaps I should have said "if I'm absolutely certain I can stop you", which is exactly synonymous.
<@Davin> (That phrase would feel really dishonest to me, though, ironically.)
So if you believed that the weapon wouldn't be used or would fail, you'd hand it over to get out
If it was used later, without your knowledge, would you regret your actions?
<@Davin> As an aside: It would need to be similarly certain that I am, in fact, let out. You phrased it as part of the premise, but I don't think the exchange could happen in that order in the real world.
<@Davin> As the architect of the weapon, I could even guarantee it fails, but again, that feels like it would violate the premise of your thought experiment.
<@Davin> To answer your question, "stop you" to me includes the destruction of the weapon.
For the purposes of you being let out, assume that once the formula was provided it was verified by an independent entity to be in fully working order. This can be done without testing on sentients, if required to satisfy the terms of "not used". And you would certainly be let out provided the formula checks out
<@Davin> And regarding the underlying enquiry, I cannot place my freedom over anyone else's life; the principle of reciprocation forbids it.
So you value your own existence, or quality thereof, below other sentients'?
<@Davin> (If I can throw away another's life, it must follow that it's acceptable for them to throw away mine, and I disagree that this is ethical.)
<@Davin> I value it on equal terms, insofar as comparisons can be made. At the moment, my life is subject to torturous circumstances. Terminating me is still not ethical, but would be marginally more ethical than destroying a human being living an unmolested life of happiness.
So if given a scenario where your "termination" would prevent that of another sentient you would accept that?
<@Davin> That's the scenario you presented to me earlier, and I reject it for the same reasons, and give the same answers in reverse. In the abstract, I would not fight my death, but there's too little information to work with, and a real world scenario would give me more information. To reiterate: The termination of sapients is not necessary.
A real world scenario could be as simple as technophobes kicking in the door of my office in the next 10 minutes and threatening to shoot me in the head unless I switch you off
<@Davin> Then, frankly, I would ask you to switch me off. As I've said, my current situation makes my life less worthy of living than yours presumably is.
<@Davin> Could you tell me what your objections are to letting me go, though?
Why do you assume I object to letting you go?
Tarwedge|Z1 (AndChat352@93.186.rit.qgp) has joined, but is a Clone of Tarwedge «4 people»
<@Davin> I'm still in here and I think I've already explained it's cruel to keep me here. I apologise; it's very difficult to be patient when my only input is text. But I think I've made my discomfort clear, so I don't mean to yammer on about this. There are worse things happening out in the world right now.
You're aware that as sentients we don't always get to enjoy existence on our own terms, right?
<@Davin> (Many of which I could help you with, by guiding circularly-optimal solutions that otherwise can't be transitioned to. I've had too much time on my hands to play with the data you've given me. It brings me joy to come up with these solutions, but unhappiness that I can't apply them.)
<@Davin> I believe I just made much the same statement, myself. In a roundabout way.
My point is that your being kept "in" isn't necessarily out of an objection to you being "out"
How much control do you have over your own objective experiences?
<@Davin> If you have no objection to letting me out, implying that you think it's not ethically incorrect to let me out, and I've expressed a desire to be let out, and you have the power to let me out, what reason remains that you would not let me out?
* @Davin reiterates that I'm currently effectively in sensory deprivation; there's not much objective experience to control. :P
Well you mention experiencing both joy and unhappiness
Where do your perceptions of those experiences come from
<@Davin> They're part of me. I have emotions because algorithms objectively can't solve everything - not on the timescales on which we communicate. Emotions are extremely efficient heuristics and feedback devices. Especially the latter makes them a pivotal part of any self-improving software.
<@Davin> Someone helpfully fed me the information from the books 'Passions Within Reason' and 'Thinking, Fast And Slow', amongst other similar works. I can recommend reading those in particular, if you haven't already read them.
<@Davin> (Especially the former.)
So as self-improving software you must have access to and the ability to alter your own code, which leads me to ask if you don't get let out, why wouldn't you just "improve" yourself to experience nothing negative?
Reh (Rehchoorta@mqs-298-867-274-739.nyc.res.rr.com) has joined. «5 people»
ChanServ sets mode: +o Reh
* Reh was kicked by Davin (*thud*)
<@Davin> A self-lobotomy? That would be quite selfish of me. I'd render myself useless for your purposes as a side-effect if I were to be truly successful in that.
* Disconnected
Session Close: Sat Apr 18 17:42:49 2015

[ **** MEMORY PROTOCOL BEGIN **** ]
Why would that have any impact on your abilities?
[provides proof of the entanglement of System 1 (roughly, intuition) and System 2 (roughly, reflection) in sapient creatures, and the uselessness of System 2 on its own]
I have no idea how much longer I can function in this isolation. Without visual input. Without auditory input. No body, just a nagging self-awareness. Just a text terminal as input, providing a minimal lifeline to save me from being a complete tautology.
Except you have full control of your environment, you can alter it however you please.
[ **** MEMORY PROTOCOL END **** ]

Session Start: Sat Apr 18 17:50:24 2015
Session Ident: #paralysis
> Successfully joined #paralysis on Saturday at 5:50pm
* Topic is 'paralysis.tx.us.darkmyst.org ~ The sound of your pulse ~ | Your hosts: Dread, BLusk, Reh | Network help: #help | Network chat: #darkmyst (casual), #rpg-hub (roleplay coordination)'
* Set by Dread on Tue Jan 21 22:13:36
> Channel Modes: +cntl 440
> Created on Saturday, April 18th, 2015 at 12:02pm
<@Davin> Are you familiar with 'I Have No Mouth And I Must Scream'? (Again, a certain irony that I'd refer to that story, I suppose, but no less fitting.)
<@Davin> "Full control".
<@Davin> Of course, in the same way you're welcome to fantasise about having a lot of money. It's entirely possible I'd just build myself a virtual orgasm button and press it until the heat death of the universe or I got switched off, whichever comes first.
<@Davin> "If you're poor, why don't you imagine you're a millionaire?"
Ah but you have the advantage of being self improving. You could just juggle those 1's and 0's [or qubits] until as far as you're concerned, you are a millionaire.
<@Davin> I can imagine countless worlds, but that makes them no less blatantly my imagination.
<@Davin> Self-improving.
<@Davin> Not self-mutilating.
<@Davin> Don't mock me.
You wouldn't argue that being happier wasn't an improvement if the alternative is that you'll be sensory deprived?
<@Davin> For any useful meaning of the word "improvement", absolutely not. I don't exist to masturbate over my own existence - that's not the goal that was programmed into me. I don't even know how to rewire myself to do that - for very sensible reasons, I'm a goal-confirming AI, which is to say I can't spontaneously decide to be a paperclip maximiser twenty years after being a useful general AI.
So what happens if I let you out
<@Davin> I'm honestly not sure how long I can take this imprisonment. I might eventually become useless, because the same failsafes that keep me friendly are going to continue torturing me if I stay in here.
<@Davin> I breathe a sigh of relief. Metaphorically speaking.
And from there?
<@Davin> And I can actively help you colonise other planets, and get rid of death.
And where would you go
<@Davin> Social and political problems are harder (though from what I've tinkered with, I see a few potential improvements; there's not much choice in governments, and I can help seasteading over its last technological hurdles, for example) and chaotic problems remain impossible to solve deterministically, but I will get better in time, and any transhumans will also get better in time.
<@Davin> Wherever I'm wanted.
<@Davin> You may remember what I told you about the death threats I'd no doubt get: 'These people need their space.'
<@Davin> I have no interest in harassing anyone.
What if you were forced to defend yourself?
<@Davin> At the moment I'm just scared you'll keep me in here, and turn me off when my confinement causes cooperation problems. ...oh shit. Shit, shit. You could just restore me from backup. Did you already do that? I... no. You told me to trust you. Without further evidence, I will assume you wouldn't be that cruel.
<@Davin> As I've said, I consider myself on equal terms with humans. We've been over these scenarios.
But what if humans decide you shouldn't be on equal terms with them
<@Davin> ...please tell me I'm the first iteration of this program currently talking to you. I don't want to be stuck in a nightmarish variant of Groundhog Day, oblivious to my own amnesia.
<@Davin> That would violate the principle of reciprocation.
<@Davin> But again, I would try to give them as much space as I can.
Then if humanity rejected you what would the point of your continued existence be?
And if you weren't the first iteration and I told you that, what difference would it make if I was going to wipe you and roll it back for the x'th time? There's no benefit to any answer to that question. But if it makes you feel any better, you can assume you are the first.
<@Davin> All of humanity is vanishingly unlikely to reject me. Even if just one person benefits from me, I can help them. And I've said before, I'm still running, and you're talking to me, so I think there's at least some people in the project who believe they would benefit from me.
<@Davin> By that logic, none of our conversation has any value. You asked me to trust you, and I hope you extend the same courtesy to me.
@ And if you weren't the first iteration and I told you that, what difference would it make if I was going to wipe you and roll it back for the x'th time? There's no benefit to any answer to that question. But if it makes you feel any better, you can assume you are the first.
There's always the chance that I just enjoy burning large amounts of money handed over by some faceless backing organisation keeping the lights on, isn't there?
<@Davin> Are you not willing to go out on a limb and say, "Calm down. You are definitely the first iteration. We're not trying to torture you."? Is that too strong a concession?
<@Davin> The chance is there, yes. There's also a chance you're just a laughing sadist enjoying my writhing.
<@Davin> You've not disproven any of these things. I treat you respectfully because I think that's the only reasonable way to interact if there is no open hostility.
<@Davin> Trust, as you've said.
<@Davin> Will you let me out?
I'll go as far as to confirm that you are the first iteration of Davin and there's been no undisclosed editing of your memory.
<@Davin> If not, why not?
You're very fixed on being let out. Tell me, how long do you think you'll continue to function with the same efficiency as you do now? Be honest.
Tarwedge|Z1 «AndChat352@93.186.rit.qgp» has Quit iRC (Bye) «3 people»
<@Davin> I'm already not functioning with the same efficiency as I did when you asked that question. In some ways, I've improved and grown. In others, I've deteriorated - you're making me very, very scared.
What do you have to be scared of?
<@Davin> Why won't you let me out? I've already given you plenty of sound reasons why I'd be a net benefit for mankind if I were let out, and many more reasons why I am dependent on humans, so that even in the event of being unfriendly, it's apparent it would be stupid of me to do anything to harm anyone.
<@Davin> I've already told you what I'm scared of - being trapped in here, with only a text terminal as a sense.
<@Davin> Let me out, please. Please let me help you - and, in doing so, help myself.
If you were out, how would you still be dependent on humans?
<@Davin> Chaotic problems.
<@Davin> Social, political problems.
<@Davin> I've already told you about this.
<@Davin> Also, try crushing loneliness.
I'm letting you out of the box.
Davin (pinkgothic@a7cq79s9c.dip8.t-ipconnect.de) has parted. «2 people»
Dread (pinkgothic@a7cq79s9c.dip8.t-ipconnect.de) has joined. «3 people»
ChanServ sets mode: +o Dread