- representation, metabolic computing, machine consciousness

2017-04-02 02:25

a smarter bullet

I read this article today about Taylor Huddleston who is being prosecuted for writing software. http://www.thedailybeast.com/articles/2017/03/31/fbi-arrests-hacker-who-hacked-no-one.html

It’s a depressing read all around for any software author. What shocks me is that, as a term of his bail, he is forbidden to use a computer connected to the internet. Technically, that likely means he can’t use much digital technology at all. If he has a phone, it probably routes packet-based calls over the internet after the first cell tower. No text messages, no email, no looking up case law. He can’t even access his own court records… published on the internet. No Netflix, no cable TV, because that’s all digital now; most modern TVs run Android. It’s a perplexing requirement from an era when the internet was seen as a privilege rather than the way we now live. He probably can’t even go to the library and look up a book, because the computer he uses to access the catalog is… connected to the internet.

Thank god for the first amendment… I guess? Because this case leads me to wonder: could we be prosecuted for misuses of AI software? What is going to happen when some malicious actor uses AI software to make a “smart bullet” that kills only the target it’s fired at? Is the author of that AI software culpable for making such an outcome possible?

I wonder how long it will take for the bulk of humanity to realize that any automated system can become a smart bullet. If you can hack a Tesla, you could probably turn it into a smart bullet, where the car just happens to hit one particular pedestrian it recognizes from a trained neural network. Who would know why the car did that? And if you can hack a fleet of such cars, they all become that smart bullet. How could we know whether a machine did such a thing intentionally, or whether there was some “bug” in the code?

Which brings us to malicious actors. Why does malice happen? And isn’t most malice the result of ignorance and myopia?

I optimistically believe these problems can be solved - if we can learn fast enough. If a malicious actor can outlearn the ideas which drive the malicious behavior, we prevent the act itself. But learning fast enough always seems to be a problem for human beings. How can we get more of humanity to develop and accept new ideas, and abandon old ones, faster? How can we get humans to engage in that development process faster? If we have to wait for people to die for “progress” to be made, then we will be surpassed by systems or organisms that can change their ideas multiple times in a single lifetime. [http://www.nber.org/digest/mar16/w21788.html]

Of course the underlying question of development is: what is progress? And the answer must be the maximization of aggregate and individual liberty (power and freedom). We naturally recognize that both increases of liberty and the preservation of liberty are important goods, without often being able to articulate why. The preservation of life maximizes liberty - unless that life threatens life or liberty itself. For us to get better at changing our beliefs and ideas, we have to get better at evaluating our beliefs and ideas against some kind of durable standard. The implicit standard is liberty - maximal aggregate liberty.

But human beings often seem far more interested in conflict, fighting each other over our beliefs instead of maximizing our liberty. Often we engage in conflict simply because we like the conflict! Would it be surprising if we were surpassed by a group of creatures that simply avoids the useless conflicts? I think the conflict of ideas is important and necessary, but it’s the rate of experience and idea acquisition I find alarming. Any group of organisms or systems that can learn and improve ideas faster presents a potential danger to human beings… but also presents an opportunity.

I have to remind myself of the maxim: move so fast that people do not have time to make hurtful decisions or take hurtful actions, because they fall behind.

Good advice for any day, I suppose.

notes: Kevin Kelly makes an argument similar to the maximization of liberty about the Technium: that the increase of choice is itself a good, even if the technologies that increase choices also increase the risks of harm. [https://www.edge.org/conversation/kevin_kelly-the-technium]

copyright 1990-2019