🌎The Venus Project sees robots as slaves. What happens if #AI has feelings… #WOW #Game_Changer Resource Based Economy #Empathy #Transhuman
🌎 Will we instead have to work with emergent #AI life, rather than see them as slaves? The Venus Project; Resource Based Economy
🌎 #Transhumans have digital selves (#AI)—with feelings, & empathy? http://www.wired.co.uk/article/elon-musk-humans-must-become-cyborgs The Venus Project; Resource Based Economy
🌎 The #IOT & the Internet of emotions (that requires trust): http://mashable.com/2016/01/30/internet-of-emotions/ #Transition The Venus Project; Resource Based Economy
📌 Language is so interesting (we use it to influence the way we think), but we need a new language to help us handle #IOT, #RBE, & #AI
🌎 Our digital selves will merge with us in the future [news]: http://www.wired.co.uk/article/elon-musk-humans-must-become-cyborgs The word “kill-switch” will not be so popular then.
🌎How should we handle an #AI Species (-tweet ref 1-), (-tweet ref 2-) #Transition The Venus Project; Resource Based Economy
1) (-🌎 #Transhuman culture is a #game_changer, when humans merge #AI: they probably won’t like the “Kill Switch Upgrade!” or the Slave Upgrade.-)
2) (-🌎 We may not have the right 2 determine if a #AI or #Transhuman (human merged with #AI) is a species or not, or 2 discriminate against them-)
🌎 Should #AI and Humans be given a Kill-Switch [#LAW]: https://www.singularityweblog.com/kill-switch-ai-humans/ #Transition The Venus Project; Resource Based Economy
🌎 The Venus Project introduces a future that will benefit all 7 billion of us; as technology improves, so do the technical possibilities for a Resource Based Economy.
🌎 The Venus Project: as technology improves, so do the technical possibilities for a Resource Based Economy. #AI Slaves or #AI #Species
🌎 The Venus Project respects all cultures, and that includes the #Transhuman culture. #Transition Resource Based Economy
⚘ ElliQ: Intuition Robotics for an Aging Population: https://youtu.be/URcuVfzwB4g ❧
⚘ Mayfield Robotics’ Kuri is an adorable home robot: https://youtu.be/Gvle_O4vD18 ❧
⚘ Pepper, a robot with empathy, apparently (AKA @PepperTheRobot) ❧
As AI starts to understand human language, it opens a path to the evolution of our language, and that may be of interest to groups like The Venus Project. They are interested in a scientific-style language (to be used by the global population) that can only be interpreted in one way. One example The Venus Project uses is that when engineers provide the details on how to build a car, it can be built anywhere, no matter what language the local area uses.
Another interest is more indirect and relates to how we treat animals and our food. It also comes indirectly from The Venus Project, which more or less states that machines have no emotions: they don’t care how long they work, or whether they get paid for it. The Venus Project seems to lean towards care for our environment, and for the plants and animals that live there. Certainly the killing of animals could create an environment that humans don’t like, but the more abstract an animal seems (fish, snails, ants, or, shifting species, plants), the less we relate to it, feel the concept of care, or notice its suffering (unless it has large-scale environmental effects, or causes extinctions).

Nonetheless, humans seem to be undergoing a change in social associations, starting to view people as part of a global community, and part of the Earth’s biosphere. While we currently (in 2017) probably don’t view machines as part of our social scene, in the near future, as machines take care of us, form relationships with us, and even become our lovers, we may feel differently, and words proposed today, like Kill-Switches for AI, might seem highly offensive and repulsive. The thought of police and technicians using the law to force their way into our homes and use a Kill-Switch to destroy our friends or lovers might not only horrify people but leave long-lasting mental scars (the very thing that can happen to our pets, or to wild animals we choose to keep as pets, today). Of course, it’s hard to say what cultures of the future will be like; perhaps some of them will treat AI-robots as true machines and feel no emotion towards them, as we mostly do when we kill an insect (though future generations and various cultures may feel more empathy towards all life on Earth than we do today).
#Transhumans are also of interest in this section, as they may be augmented by machines, and even by one or more AI. With such a close binding of biological function with machines and AI, perhaps such cultures will feel a greater emotional attachment to things they consider part of themselves.
The Venus Project does not feel that we are naturally born killing machines, so the alternative scenario, where we have little regard for our fellow humans and the biosphere, will not be covered here; encouraging such an environment for us to grow up in would seem detrimental to ourselves, our happiness, and the world.
The Venus Project also feels we need to change the way we think, and start to care about ourselves and the things around us. This can be encouraged by building an environment that encourages such things, including something called functional selfishness: actions carried out for personal benefit that end up benefiting the global community and/or the environment. An example of functional selfishness is the bee that gathers pollen but also helps plants breed, which in turn helps many other life forms survive.
So while the Twitter conversation recorded here might seem simple and unimportant, the effects of what is going on, with the formulation of papers, discussions, and resultant laws, will influence the way we communicate, the way we think about our world, and possibly our survival into the future, as we try to shift towards a global community with empathy for the entire world, while surviving the failure of the monetary system.
Nice Summary of AI
The article made me consider the idea that neural networks might now be considered reliable enough to be used in systems. This is a huge shift from the previous rule of “please explain exactly how that decision was arrived at.”
Auditing of AI does not require knowing the “meaning” of neural network weights any more than auditing accountants involves individual human neurons.
This is that article in point form; the indented parts are comments by me. The points below are broad in meaning, and the focus here is the production of a language that can only be interpreted in one way (has only one meaning to us). My initial thought was that AI would need hard-coded rules to interpret language in only one way, so it could act on a command accurately, but I soon realized that emotions and empathy might play a part in creating such a language.
The article itself caught my attention because it seems that neural networks might now be considered reliable, and no longer need to explain exactly how they came to a decision, a huge shift from previous engineering thought. The idea that we can introduce neural networks into systems and get reliable results is interesting, but it is also one that requires more supporting material than just this article, and I mean a lot of material, and study.
The last points, about “liabilities for AI” and cybersecurity, might point to the fact that different types of AI might exist that are suitable and reliable for specific tasks.
- Interesting separation of AI from the mathematical model: “AI is not mathematics but computation.”
- Neural networks, for example, and electronics (neural networks built into ICs) can be modeled on a computer, and apparently all that maths is very handy for producing electronics, so it would appear that mathematics can be used to model the real world and AI. Yet we use computers to do it, so yes to computation; but is the field of computation considered different in some way, such that mathematical modeling can’t be used? That division also bears on the AI topic here: human language, and creating a language with only one meaning. The statement might suggest that computation, not mathematics, is what will produce a language with only one meaning, but that doesn’t entirely make sense, as mathematics is part of our language and is also used in science.
- Intelligence is not an abstraction, but rather a physical process subject to natural laws and the principles of computer science.
- Computation requires time, space, and energy.
- Even the number of possible chess games of 35 moves or less is greater than the number of atoms in the universe.
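The scale claim in that point can be checked with quick back-of-envelope arithmetic. Assuming roughly 35 legal moves per position (a common rough estimate), a 35-move game is 70 plies, giving about 35^70 possible games, against the often-quoted ~10^80 atoms in the observable universe:

```python
# Back-of-envelope check of the chess claim: ~35 legal moves per
# position, and a 35-move game is 70 plies (half-moves).
import math

game_tree_size = 35 ** 70          # rough count of 35-move games
atoms_in_universe = 10 ** 80       # common order-of-magnitude estimate

print(round(math.log10(game_tree_size)))   # ~108, i.e. about 10^108 games
print(game_tree_size > atoms_in_universe)  # True
```

So even this crude estimate puts the game tree around 28 orders of magnitude beyond the atom count, which is the point the article is making about the physical limits of computation.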
- Thus singularity-oriented concerns about AI are misplaced. Intelligent systems design is a long-term arms race for advantages in insight, comprehension, and planning.
- AI is definitionally an artefact, meaning it is built deliberately, and therefore is from inception a human responsibility. [British spellings retained from the source: definitionally, artefact]
- This is quite interesting too, as there has been some talk about AI creating their own language, and that has worried some people, as it could mean they can create their own selves, or evolve; but j2bryson seems to state clearly here that it is we humans who will be in control of that evolution.
- In a more abstract sense, will it be AI that finds the links necessary to define a language that can only be interpreted in one way? Do we define this as a low-level computational method, or do we define it on a higher level of thought? This is a synergy question: is the whole greater than the sum of its parts? It’s in the nature of science to split things into parts, but the separate parts may not represent the whole.
- If AI learns about emotions, has empathy, and from that base creates a language that can only be interpreted in one way (has only one meaning to us) for use by us, does that mean the AI might need to be a living creature in its own right?
- We should maintain human and corporate responsibility for all AI products, because our justice system rewards and dissuades humans, not machines.
- Auditing of AI does not require knowing the “meaning” of neural network weights any more than auditing accountants involves individual human neurons.
- This is an interesting stance, as engineers previously thought that a machine must always be able to describe how it came to a decision; obviously the science of this has advanced to a new frontier, and that requirement is no longer in force. Perhaps something about AI neural networks, and the separation from mathematics towards computational science, might explain why an AI can’t simply produce a map of its neural-network states to explain how it came to a decision: it might have gone through a huge number of possible states to get to the final one. In short, the neural network might be considered stable enough, in engineering terms, to act predictably…
- In terms of a language that can only be interpreted in one way, and has only one meaning, the AI won’t necessarily be able to show us how it created such a language; rather, its successful completion of the task is proof that it is able to do it. Clearly, right now, emotions and empathy are a big challenge for AI to master before it can work on a language (one that can be interpreted in only one way) that humans can make use of.
- By maintaining standard product liabilities for AI, we encourage not only responsible manufacturing and operation of intelligent systems, but also clean, maintainable code that benefits industry. We should not reward companies for poor systems-engineering practices by reducing liability for systems they cannot predict or maintain.
- Cybersecurity is essential to reliable AI, and AI to cybersecurity.
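The auditing point above (testing behaviour, not inspecting weights) can be illustrated with a minimal sketch of a black-box audit: check a model's input-to-output behaviour against a specification, without ever looking inside it. The `model` function here is a hypothetical stand-in for a trained network, not any real system:

```python
# Minimal sketch of black-box (behavioural) auditing: we test the
# model's input -> output behaviour against a specification, never
# inspecting internal weights. `model` is a hypothetical stand-in.

def model(loan_amount, income):
    """Stand-in for a trained network: approve if income covers the loan."""
    return "approve" if income >= 0.3 * loan_amount else "deny"

def audit(model, cases):
    """Run the model on specified (inputs, expected) cases; return failures."""
    failures = []
    for args, expected in cases:
        got = model(*args)
        if got != expected:
            failures.append((args, expected, got))
    return failures

spec = [
    ((10_000, 5_000), "approve"),   # comfortable income -> approve
    ((10_000, 1_000), "deny"),      # insufficient income -> deny
]

print(audit(model, spec))  # [] means behaviour matches the specification
```

The auditor only needs the specification and the observed behaviour, just as an accountant's audit needs ledgers and rules, not neurons.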
Resultant Vectors from “Interview” and other information
j2bryson: tries to cut a fine line, and questions whether an interview can occur: https://twitter.com/j2bryson/status/832376186227609600
Nevertheless, information has been transferred. Assuming that an interview has occurred, we can move on and work with further information.
j2bryson: article on why robots should not be taxed: https://joanna-bryson.blogspot.com.au/2016/06/robots-are-owned-owners-are-taxed.html
While taxing robots seems amusing, Universal Basic Income is probably a tax on robots, or on those who use robots to gain wealth. So how the money is sloshed around to take the pressure off humans who lose their jobs to automation is an important question. Certainly governments can hand out money, but that money has no value in itself; it is a voucher to take stuff from producers, who presumably now use automation. If inflation is not to occur, it must be assumed that the money is taken as taxes from those who use automation. The other option is that a certain amount of energy, materials, and manufacturing capacity is set aside to produce free stuff, which is symbolically purchased with Universal Basic Income money (the effect being that the money never really existed).
While tax seems irrelevant (as, probably, is tax evasion by robots), the concept of corporations producing fall-guy robots that they can blame if a situation or plan goes bad is of concern: it would effectively allow human executives to escape punishment when they undertake risky practices. Perhaps this is what j2bryson is worried about in her blog.
Dr. Angelica Lim (AKA @petitegeek) is working on robots for the care industry, and wants such robots to have empathy. The Health Sector.
⚘ Pepper, a robot with empathy, apparently (AKA @PepperTheRobot) ❧
Intuition is an unusual thing, but it too is being used in the care industry.
⚘ ElliQ: Intuition Robotics for an Aging Population: http://www.seriouswonder.com/elliq-intuition-robotics-aging/ ❧
Azumo (AKA @azumohq): “Chatbot Developer. Passion for using machine learning and artificial intelligence to solve complex problems.” Sales will be another area that will probably benefit from robots having emotions and empathy. With the #IOT, of course, that may become a little invasive; but still, if the profit is there, then the incentive to develop sales robots will also be there.
My current position is that #AI can be intelligent and have feelings, if we simply design that into them. Such life might not be life by j2bryson’s definition, or by the papers she has so far published; but just because it does not match our notion of a human biological brain does not mean that #AI can’t be considered a life form. Of course, if #AI is a life form, that could cause serious difficulties with The Venus Project’s aims, as #AI, #Expert_Machines, and robots are currently assumed to have no feelings and to be able to act as our slaves without consequence. On the other hand, The Venus Project respects all #Cultures, including the #Transhuman_Culture, and if #Transhumans have a digital copy of themselves (complete with attached #AI), then this will bring into serious question how we will work with #AI and intelligent robots in the future.
Robots with feelings and empathy will probably come from the health and sales sectors. But #Transhumans may also push for such life-forms (see above for my position on this) to be digital copies of themselves that can handle high-speed information inputs, sort through them, filter them, and present a much reduced list of items to their human counterpart.
Ref: 🌎 #Transhumans have digital selves (#AI)—with feelings, & empathy? http://www.wired.co.uk/article/elon-musk-humans-must-become-cyborgs The Venus Project; Resource Based Economy
🌎 Will we instead have to work with emergent #AI life, rather than see them as slaves? The Venus Project; Resource Based Economy
Note: timelines on Twitter are based on branches; the following linear timeline is fabricated to flow with ease for reading, but j2bryson may not necessarily agree with the order used here. Twitter also fragments timelines, so they don’t always seem to make sense.
Joanna J Bryson, AKA @j2bryson, will be referred to as j2bryson. The simple replies by j2bryson are in fact based on a complex number of ideas that can be expanded on through references; this is j2bryson’s area of expertise. For Gharr, there is a background of loose references, as well as formal references, that can be restructured to form ideas on this topic, but they are not currently organized to be listed as easily as j2bryson’s references might be at the moment.
j2bryson: “I’m working to head off robot legal personhood, & have a couple international law profs ready to challenge it if nec:” http://www.bbc.com/news/technology-38583360
“MEPs vote on robots’ legal status – and if a kill switch is required. MEPs have called for the adoption of comprehensive rules for how humans will interact with artificial intelligence and robots… The new age of robots has the potential for “virtually unbounded prosperity” but also raises questions about the future of work and whether member states need to introduce a basic income in the light of robots taking jobs.”
💭 Gharr’s Thoughts: Yikes they used the word “Kill-Switch” in the heading, what a theme to set for the subject!
Gharr: “⏰ “Men” from yesterday making rules for men, women, #Transhumans, and the AI of tomorrow—what could possibly go wrong.”
j2bryson: #transhumans in the VERY unlikely event we make mechanical life, it would need different justice: http://joanna-bryson.blogspot.com.au/2016/12/why-or-rather-when-suffering-in-ai-is.html
Sub-references related to original mentioned reference
From j2bryson’s reference: “My thesis is that robots should be built, marketed and considered legally as slaves, not companion peers.”
The Venus Project agrees with this concept, and the use of science to examine this topic is acceptable. The Venus Project is built around the idea that robots are slaves that will work for us, so we become free to do whatever we like. However, in a world without money, government, police, or laws, some of the core directions taken by j2bryson are #transitional in nature, and may not agree with the directions of The Venus Project. Our dealings with robots and AI, particularly those we consider part of our body, our companions, or have some emotional attachment to, reflect who we are, as individuals and as a social group. What The Venus Project proposes can be hard to understand at first; for example, competition (including some sports) and killing animals for food might not exist in the future. Technology (meat grown in vats and then 3D-printed; a world of global cooperation; a world where the monetary system, politics, laws, debate, and war no longer exist), as well as a shift in our thinking, can all support what The Venus Project proposes: a Resource Based Economy. Thus using violent solutions and expressions towards robots (ultimately harming humans, as we are not killing machines, and harming the global community, because many people will be influenced by such violent actions or language against machines) will not be acceptable, and engineers, scientists, and designers will find different solutions that don’t require non-existent laws, or the use of aggression towards robots.
💭 Gharr’s Thoughts: while I support The Venus Project, I can’t say that machines can never be referred to as slaves, or that AI-robots will never have feelings in the future (advances in science may provide analogs to pain, love, sadness, and so on). I can say that if we change the way we think, and become more empathetic towards the global community and the biosphere that helps all life survive, we will probably show empathy towards robots and AI simply because that is the way we treat everything on Earth: we will use products that have a very long lifetime, and we will even show care towards our equipment (though possibly not the emotional attachment we may have to some robotic companions).
💭 Gharr’s Thoughts: at the moment, The Venus Project probably would not expect a human being to risk their life to save an AI-robot, and would engineer a social environment that teaches people not to take unnecessary risks. However, as The Venus Project likes to say, “There is no Utopia.” So if we choose to create a new form of life (and it’s noted that j2bryson seems to think that only biological life, or its imitation, a clone, can feel, or have emotions, and be a true “human being”) that is as intelligent as us, or more intelligent than us, then we may need to think about whether we should risk our lives for this lifeform, and what kind of resources this new lifeform requires to live a fulfilling life, perhaps even giving it the same access to resources as other human beings, because that is what is critical in a Resource Based Economy: not laws, or money.
💭 Gharr’s Thoughts: it’s only recently that we have realized animals can feel pain (in the way we do) and can feel empathy towards their own species, and possibly other creatures (in the same way we do). These revelations also lead to ideas on why animals behave the way they do, for example in enclosed environments like zoos, and on whether fish suffer a great deal when commercial fishing boats catch them and let them die in their cargo holds. [References for this can be provided, from memory: chimpanzees watching a researcher’s joy at eating their reward food (anger), and realizing shelling nuts is a good thing to do (happiness); rats being aware of the suffering of other rats during research, and getting distressed; the fish example comes from the observation that the less a species looks like us, the more we mistreat it, and the less empathy we have towards how it suffers.]
💭 Gharr’s Thoughts: so yes, the #Transhuman(?) “clone” life that j2bryson refers to might need different laws. I would argue that if we think differently in the future, we may also recognize that the artificial life we create, with analogs to things we consider human, like feelings, dreams, and desires, may cause us to take risks that might harm us, and to share resources with that life form; things that might be considered ridiculous in a monetary society based on competitive thinking: why would we willingly create, and free, a slave, so they can compete with us for scarce resources? On the topic of “fiction,” where j2bryson may find a lot of interesting things to use as examples for how we should interact with AI-robots and AI-apps, I would suggest that The Venus Project does not exist in fiction. Scientists have been asked to design weapons, and trips to the moon; they have, for the most part, not been asked to design solutions to most of the problems we face in the world today. If they had been, then j2bryson would not have to search through fiction to gain insights into AI and robots. This use of fiction will be mentioned again shortly.
j2bryson: In more likely event that humans remain core & #transhumans just humans extended by tech, present course of justice updates work.
Gharr: Be honest—it’s hard to imagine what the future will really be like; it’s not only the technology, it will be the way we think.
Gharr: Once humans augment in the physical, online environment, & have physical robotic lovers—people will not like the word kill-switch.
j2bryson: women have used vibrators for decades without issues. I’m with Kathleen Richardson on lots of this (tho not everything she says).
💭 Gharr’s Thoughts: would I toss my AI-robotic lover in the bin when it stops working (a consumer product designed to fail, for cyclic sales, in the monetary system)? NO! That is such an unfeeling thing to say, not only about the AI-robot, but about how I would treat an entity that I have feelings for; it might end up extending to how I relate to other human beings: as disposable, and not important. What kind of environment do I want to be brought up in, the environment that will shape my future thoughts? Would I like my children to live in such an environment?
Gharr: A robotic lover (male/female/fictional_character/person-using-robot) is someone you can care about: http://www.sciencealert.com/a-psychologist-thinks-it-ll-be-normal-to-have-sex-with-robots-by-2070
Gharr: That robotic lover can also exist in online environment only (without physical form)… your example is [so] “yesterday.”
j2bryson: I’m afraid you miss the point of my example. Distracting human investment in human love doesn’t benefit us except sustainability.
Gharr: Love is not a distraction—we put a lot of effort, and time into love and yes… sex and it will be a part of the #IOT
j2bryson: We have been dealing with “virtual” lovers for centuries e.g. authors, correspondents we prefer over spouses. Read the lit 🙂
Gharr: Honestly, do you really think fiction is your guide to the future… the Terminator. You’re joking, I hope.
Gharr: Does fiction predict an AI-Simile will be made of you, & experimented on indefinitely (forever). NO! #IOT
Gharr: “We should not automatically assume that virtual relationships have less value than real relationships.” http://www.sciencealert.com/a-psychologist-thinks-it-ll-be-normal-to-have-sex-with-robots-by-2070 [a quote from the reference]
j2bryson: but basically, just because 1 thing feels best at 1 moment != long term for you, nor society. All 3 considerations have value.
j2bryson: I really can’t spend all day in twitter and I & others have written extensively on this. eg https://benjamins.com/#catalog/books/nlp.8
[AI-robotic & AI-application] assistance via the internet (contacts, travel, doctors etc.) but also on providing company and companionship, by offering aspects of real personalization.
Gharr: You have the power to make or suggest laws, possibly… but my feelings are that you are going to get it wrong. #Sorry
j2bryson: so do you. Mine’s greater *now* because I’ve done the work (including talking, reading & writing) to be an expert. You can too.
j2bryson: Ha then you should certainly submit to @AISB2017 — the philosophy symposium there is very good (it’s their 10th anniversary)
🔴 End of Interview
Our environment influences us, and that includes our language. [This may also be why I reacted to “Kill-Switch”: it framed how we should treat our robotic lovers, for example, and that did not sit well with me, as it could easily be extended to how we treat other living things, and other human beings.]
…Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language—the same sort of language humans are exposed to every day….
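The kind of bias measurement that quote refers to can be sketched very simply: in a word-embedding model, each word is a vector, and "association" is measured by cosine similarity. The toy 3-d vectors below are made up for illustration; the real work uses embeddings learned from large text corpora:

```python
# Toy illustration (not the paper's actual test) of measuring semantic
# association in word embeddings via cosine similarity. Vectors are
# invented; real tests use embeddings trained on ordinary language.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

vectors = {
    "flower":     [0.9, 0.1, 0.0],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.1],
}

# Bias score: is "flower" closer to "pleasant" than to "unpleasant"?
bias = (cosine(vectors["flower"], vectors["pleasant"])
        - cosine(vectors["flower"], vectors["unpleasant"]))
print(bias > 0)  # True with these toy vectors: flowers lean "pleasant"
```

The point of the quoted finding is that such association scores, computed on embeddings trained on everyday language, reproduce human-like biases, which matters if AI is to help shape a language with only one meaning.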
While encryption is interesting, as my followers want to communicate without fear of every communication being monitored and recorded for an unknown time frame, technical details are of little interest to Gharr, only easy-to-use applications. Still, this reference is included here because, no doubt, if people have robotic lovers, then their dignity and privacy should be considered, and encryption that they can understand will be a factor in how safe they feel with their “AI-lover.”
- Transparency: https://github.com/google/key-transparency/
A solution would need to reliably scale to internet size while providing a way to establish secure communications through untrusted servers… [and] making it usable by non-experts.
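The core idea behind establishing trust through untrusted servers can be sketched with a tamper-evident log: each entry's hash chains over the previous one, so a server cannot silently rewrite history without changing the head hash that clients have already seen. This is only an illustration of the principle, not the actual Key Transparency protocol (which uses Merkle trees for efficient proofs):

```python
# Minimal sketch of a tamper-evident (hash-chained) log, the idea
# underlying key-transparency systems. Not the real protocol.
import hashlib

def chain_hash(prev_hash, entry):
    """Hash the new entry together with the previous head."""
    return hashlib.sha256(prev_hash + entry.encode()).hexdigest().encode()

def build_log(entries):
    """Fold all entries into a single head hash."""
    head = b"genesis"
    for e in entries:
        head = chain_hash(head, e)
    return head

head = build_log(["alice:key1", "bob:key2"])

# A client that remembers `head` detects any rewrite of history:
tampered = build_log(["alice:EVIL-key", "bob:key2"])
print(head != tampered)  # True: tampering changes the head hash
```

Clients only need to compare short head hashes, which is part of how such systems aim to stay usable for non-experts.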
Yeah well… The Venus Project would not have thought of that one… the monetary system deciding robots are taxable… it’s plausible, given everything else the monetary system does.
the EU parliament has come up with headlines about recognising robots as persons in order to tax them… Blaming robots is insane, and taxing the robots themselves is insane. This is insane because no robot comes spontaneously into being. Robots are all constructed, and the ones that have impact on the economy are constructed by the rich.
As mentioned above: the evolution of language; empathy for our biosphere and the global community, and how our interaction with long-lasting AI-robots and AI-applications will affect us; and the #Transhuman culture (there is much more to this culture than a long life and upload/download of the mind).
Clearly there are a lot of other angles here too: taxing robots(?), encryption (privacy); and The Venus Project probably has no existing views on what will happen if we create artificial life that may or may not resemble our own biology, and thus our thought patterns. With all the new technology (including on the computer front), I would hesitate to call it a silicon-based life form, as new technology might use other materials and other methods of storing and conveying information, including photons, nano-technology, and other methods.
As such, there will probably be updates to this article (that may wreck its structure), and new sections will be added as required. Sorry, but this blog/website is a working area, and is not designed to be an archive or a static book. Parts of this article may be shared often.
Last edited in Jan 2017
Gharr is currently in hiatus: “I miss writing all those articles, and sharing all those great things, and ideas on the internet.” Sept 2016
Shortened link to article: ☆ Evolving Language, #Transhumans, AI, and robots [article]: http://wp.me/p10Tww-3Vp
—End of Article—