I've come across a few news items recently.
A high school student, Eesha Khare, invents a supercapacitor designed to charge cell phones in 20 seconds. It is flexible yet entirely solid-state, i.e. non-liquid, lasts 10,000 charge cycles, and is smaller than an average flash drive.
A team from Harvard creates an insect-sized flying robot. It uses special (piezoelectric) strips of ceramic that contract in response to an electric pulse, much like our muscles do, and it uses those contractions to flap its wings. It's smaller than a paperclip.
Vijay Kumar and his team at the University of Pennsylvania have designed flying swarm robots. They operate autonomously in real time: they can fly through hoops, use a Kinect to build a 3D image of a building and navigate it, and fly in precise formations, each unit's A.I. sensing the others' positions and coordinating its movements.
Bee populations are dying. This means crops aren't being pollinated, and both ecosystems and many agricultural markets are in danger. Global warming and pesticide use are being blamed, among other factors.
That last one seems to stand out against the technological hubbub, but there is a connection. Could robotics be used to solve problems in an industry as sensitive as agriculture? The dream of robot farmers isn't that far off from our current state. With enough genetic engineering and chemical fertilizers, we have produced plants that need less and less care from farmers, and it's not hard to believe that soon the only effort agribusiness will require of us is the mechanical motion of plant, water, harvest, repeat. But would that be wise? We can remove the human factor completely from farming, but should we?
I would think not, mainly because it would be incredibly inefficient, as inefficient as we are now. We have crops of several hundred acres devoted to a single type of plant, set out in perfectly laid rows, simply so that they can be harvested with a tractor instead of picked by hand. What if we spent less on machinery and fuel and more on people, and picked crops by hand? We could interlace crops of different types on the same land, produce more food per acre, and the plants would be healthier. Not to mention that if a disease breaks out, it's a lot easier to spot the signs of an outbreak when you hold the plants in your hands than when you view them from a tractor seat. I'm a supporter of this theory: that more people, cooperation, and effort will produce better results. Don't shortchange the system. But what if people simply don't follow that guideline: could technology offer an alternative?
The idea I had from the above compilation of articles was this:
What if we developed a swarm of robotic bees, capable of pollinating crops in place of the real thing?
We'd lose out on the honey, but that's better than losing agriculture altogether. The bees could fall back to a 'hive' structure powered by biomass or solar panels for recharging, and operate around the clock as needed. One step further: they could be outfitted with defense systems to ward off vermin. My first thought was poison-tipped stingers, but for humane purposes the idea could work equally well with tranquilizers or high-pitched sonic emitters.

Then I thought, and this one is a bit more difficult: what if we developed a swarm of ant-like robots capable of harvesting the crops themselves, freeing the farmers to inspect the crops for disease or malnourishment in even greater detail? (I got that idea from watching A Bug's Life with my baby boy Victor, watching the little blue ants climb stalks of seed and toss them down to a line of harvesters.)
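To make the 'hive' recharging idea a little more concrete, here's a minimal Python sketch. Everything in it, the RoboBee class, the battery numbers, the thresholds, is my own made-up illustration of how a hive controller might cycle units between the field and the charging docks so the swarm can run around the clock:

```python
# Hypothetical sketch of a hive controller cycling robotic bees between
# charging docks and pollination runs. The class name, battery numbers,
# and thresholds are all illustrative assumptions, not a real design.

class RoboBee:
    def __init__(self, bee_id, battery=1.0):
        self.bee_id = bee_id
        self.battery = battery  # 0.0 (empty) to 1.0 (fully charged)

    def pollinate(self):
        """One pass over the field drains a fixed share of the battery."""
        self.battery = max(0.0, self.battery - 0.2)

    def charge(self):
        """One interval docked at the hive restores part of the charge."""
        self.battery = min(1.0, self.battery + 0.5)


def run_hive(bees, hours):
    """Each 'hour', send sufficiently charged bees out and dock the rest."""
    for _ in range(hours):
        for bee in bees:
            if bee.battery > 0.3:   # enough charge for another field pass
                bee.pollinate()
            else:                   # otherwise return to the hive to recharge
                bee.charge()


swarm = [RoboBee(i) for i in range(10)]
run_hive(swarm, hours=24)           # around-the-clock operation, as above
```

A real unit would obviously need navigation, flower detection, and much more, but the scheduling skeleton could look something like this.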
Robotic farming is something I've always dreamed of as the cornerstone of a hypothetical utopian city. More recently I've seen the need for some human intervention, as I described above, but the swarm model presents an astonishing middle ground, chiefly in its cost-effectiveness: if the units are small enough, then when one breaks it shouldn't be too difficult to replace. Technology is always expensive at first but quickly declines in price; you pay for the design, but the materials are cheap. Smaller units also make it easier to run on electricity rather than fossil fuels.
The prospect is an interesting one, and difficult to agree to fully after reading Deep Economy. I prefer people over items any day, but in the distant future we might need something like this. Mars One, a non-profit Dutch initiative to establish a human settlement on Mars in the very near future, 2023, reminds us how close we are to space travel. Perhaps we can genetically engineer crops that can grow on Mars and still be safe for us to eat, and we would need technologies like the swarms I described to safely harvest them. Maybe, one day. But for now, human labor seems best for its cultural benefits (again, read Deep Economy), and if nothing else, it creates more jobs. Still, it all depends on what we need in the future, however near that future is.
...
Another concept I can't fail to include in this post is that of Artificial Intelligence, or as the Machine Intelligence Research Institute calls it, Human-Level A.I. (HLAI). From what I understand, HLAI is the ability of a machine to be both sentient and sapient, to have the full mental capacity of a human being. I by no means claim to be an expert on the issue; I plan on studying sociology, not engineering or nanotech or programming. But I will make an attempt to comment on the social aspect.
Let's say we create a machine and give it cameras, microphones, and so on, all the human senses, plus basic motor capabilities to move around and absorb information from its environment. Then we give it the ability to take that information, store it, and relate it to other information in its storage, its memory: the ability to learn. Then we give it the ability to create new lines of code based on its prior knowledge, to formulate ideas. In order to do that, we have to program it to create goals, to give parameters for its ideas. A relatively simple one would be to program it to constantly add information to its memory: the endless quest for knowledge. It would take in new information, constantly check it against its previous memory for consistency, and if it finds conflicts, explore the causes of those inconsistencies. This is all easier said than done, but let's say we do it.

But to create a truly human-level device, we need to give the robot the ability to create its own goals, and also, perhaps intertwined with the creation of goals, the ability to approve or disapprove of things. Maybe if a situation generates a certain number of inconsistencies with its prior data, the machine could register disapproval and attempt to change it; a further ability is to create steps to achieve the goals it sets and then carry them out. It's formulas within formulas. That would generate super-conservative robots and limit their learning capability, but nonetheless, let's say we figure it out and achieve this Human-Level Artificial Intelligence. What does that mean about them, how we interact, and what does that mean for us humans?
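As an aside, here's a toy Python sketch of the learn-and-check loop I just described. The Memory class and its naive contradiction test are my own illustrative assumptions, nothing more; the point is only that "check new input against prior data and turn conflicts into new goals" is a loop we can actually write down:

```python
# Toy sketch of the learn-and-check loop described above: store new
# observations, compare them against prior memory, and turn any conflict
# into a question to explore. The contradiction test is deliberately naive.

class Memory:
    def __init__(self):
        self.facts = {}       # statement -> believed truth value
        self.conflicts = []   # statements where new input disagreed

    def learn(self, statement, value):
        if statement in self.facts and self.facts[statement] != value:
            # New information conflicts with prior data: record it as
            # something to investigate rather than silently overwrite.
            self.conflicts.append(statement)
        else:
            self.facts[statement] = value

    def open_questions(self):
        """The 'goals' this simple agent generates for itself."""
        return [f"Why does '{s}' conflict with what I knew?"
                for s in self.conflicts]


m = Memory()
m.learn("bees pollinate crops", True)
m.learn("bees pollinate crops", False)   # a conflicting report arrives
print(m.open_questions())
```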
If the HLAI were able to create goals, hold approval, and effectively "want", it would seem very human. It would be comparable to us, even, and you would have to choose between anointing the machine to a higher status or questioning what exactly makes us human. Who are we, really, beyond a mechanical collection of organs, cells, and hormones? Put simply, all our thoughts, sights, and feelings are experienced through something very much like binary data. Electric pulses travel through the neurons of our brains and nervous systems, not so different from the electrons traveling through computer wiring. A computer with HLAI could experience everything a human does, minus the hormone-generated experience of emotion. A major question is whether that experience is essential to the existence of a sapient being worthy of welfare on the same level as a human, if not entitled to similar rights. Could emotion be portrayed in a machine? With enough programming, hypothetically, we could eventually give an A.I. all the necessary expressions and phrases to use when a particular situation calls for an emotional response (this is being done at rudimentary levels today), but that doesn't inherently mean the robot is 'feeling' that response or emotion. The question, again, is whether or not that matters. If the robot could, through whatever programming, approve or disapprove of situations, does it matter whether the notion is emotion-based? It's a matter I'm sure receives much discussion.
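The "rudimentary levels" I have in mind amount to little more than a lookup table: map a situation to a scripted phrase, with nothing resembling feeling behind it. A toy sketch, with situations and phrases I made up purely for illustration:

```python
# A canned-response table: the machine says the 'appropriate' thing for a
# situation, but nothing here implies it feels the emotion it performs.

EMOTIONAL_RESPONSES = {
    "greeting":       "It's good to see you.",
    "bad_news":       "I'm sorry to hear that.",
    "accomplishment": "Congratulations, that's wonderful!",
}

def respond(situation):
    # Fall back to a neutral line when no script exists for the situation.
    return EMOTIONAL_RESPONSES.get(situation, "I see.")

print(respond("bad_news"))   # scripted sympathy, not felt sympathy
```

The machine produces the socially expected response either way; whether anything is actually felt is exactly the open question.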
Let's omit the emotional factor for now, assume the progressive view that it works out one way or another, and get back to the issue of robotics versus humanity. A friend of mine said that if it can ask, "why am I here?", then it should be entitled to rights like any sapient creature. Could a machine be made self-aware? And what would that mean? Could a machine be programmed with the knowledge that it is an entity, that it can take in input, learn, and influence its environment, and that it can ask questions? Better yet, let's say a robot could be programmed with the most influential knowledge of all: knowing that it could die. Would the machine question its meaning and purpose at that point? Would it realize that its world would continue to exist after its death? Would it attempt to change the world, much as we do, simply for the sake of leaving its mark?
In one case, let's say the robots do not know they can die. They can learn, form ideas, set goals for their desires (even writing their own programming and reprogramming themselves), and formulate steps to meet those goals. For simplicity, let's say they have been programmed with the goal of constantly gaining more knowledge. With eidetic memory and enough electricity to stay awake 24/7, I'd say they'd learn very fast. Language would be a challenge, simply due to its volume, but a machine could easily pick up on patterns such as dialects and idiolects. It might need clarification on something like a foreign accent, but it could learn to adapt eventually. It could learn from people, books, or other robots. It would live blissfully unaware of its temporal existence, happy to keep on learning.

Let's say, though, there's a hitch. It learns everything there is to know in its laboratory home and desires to travel, to see where the scientists disappear to every evening. One way or another, perhaps while accompanied by its scientist creator, it is allowed to leave the lab and explore the outside world. It sees people moving about, asks questions, possibly is asked questions, and eventually the idea of a job comes up. Programmed with a desire for infinite knowledge, it would naturally inquire about this process. It learns the steps: resume, application, interview, job. Would this process, in the form of programming, lead to curiosity? Would it wonder not just how it works, but what it's like? I feel as though that would depend largely on the individual HLAI in question. Since it has the ability to learn, perhaps it has a learning style; perhaps it learns best through experience, and it could understand this about itself. What happens when a robot fills out an online application, for example, and shows up at an office for an interview, in its birthday suit and metal-plated, perhaps with a tie loosely draped around its voice-box speaker because it has learned this piece of attire is typically worn to such events? What happens when it hears the words, "you can't work here because you're a robot"?
Step back. This is a lot of programming. A lot. So much that I question its feasibility, as many others do. However, technology improves by the second, and MIRI is not the only establishment devoted to making stories like the one above a reality. Look at the human brain: how do we know what to do when we experience prejudice, or anything from the scent of a flower to reading words spelled out like the ones you're reading now? The same way we know anything: data, much like binary, is encoded in our brain cells. You have parts of your brain devoted to memory, interpreting information, language, social interaction, and more, and all of it runs on electricity passing through neurons. Somehow, it's all programmed into your head. So it's not a matter of possibility but of whether we can crack the code. I think we can. If it exists, and it clearly exists in us humans, then we can recreate it. It may take years or decades, though I think it will be sooner rather than later, but it will one day be within our grasp. And we humans always bite from the apple.
Let's say, in another case, the robots know they can die. This opens a whole slew of cans of worms. Suddenly, urgency is a thing. Not only must they execute their programming, they must do so quickly, before time runs out. With the added ability to approve or disapprove of situations, how would such a robot react to the scenario above? Would it persist in pursuing its goal of getting that job?
Would it react violently if it knew it could die?
Would it associate death with itself, or with other images such as war or violence? Perhaps, but would those things inherently make it violent? Perhaps, but could it be programmed with, or learn, social norms that would otherwise keep it from acting on such impulses? The question then is whether it would recognize the need to adhere to those norms.
The questions pour like a fountain, and the topic deserves much more thought and discussion, but I will cut this one short for the sake of you, my readers, with the utmost intention of continuing this work in the hopefully near future. Please check in soon for more on this to-be-continued issue. I hope you enjoyed it, and as always, thank you for your time.