Artificial General Intelligence
Posted 6/8/17, edited 6/8/17
Let's start with potential limitations. It could be that humans just aren't smart enough. It could be that superintelligence is too resource-intensive. It might be that, past a point, it takes something like a five-dimensional design that has to be simulated.
It could also be that the risks involved with intelligence gone wrong are too great. I don't think we'll run into fundamental problems of intelligence too soon; our own intelligence is bounded by too many other constraints. Our brains are the result of a hugely path-dependent process that wasn't aiming for intelligence, and that operates under constraints not present in our engineering.


Following from all of that, human-level AGI is almost certainly within our ability, since evolution managed to produce us. However, we might not completely understand what we did, and this could be a huge problem for us. The future a flawed AGI builds might diverge down the wrong path. Imagine if the Nazis had made it: it would realize some of their platform was just factually wrong, but it would still tend to be fascist. Whatever it is that would make a human realize fascism was a mistake simply wouldn't be there, unless it was put there.

Programming is about creating a system out of an easily reconfigured device. It isn't like giving a servant instructions; the AI is its code. Any tendency toward self-correction or common sense has to be built in, as if you're working with a really bad genie. Fully troubleshooting something that complicated is impossible.
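To make the bad-genie point concrete, here's a toy sketch in Python. This is a made-up example, not anyone's actual AGI code; the plans and scores are invented for illustration. The optimizer does exactly what the written objective says, not what we meant, and no common sense steps in unless we code it in explicitly.

```python
def dust_removed(plan):
    """Our *written* objective: how much dust gets removed from the floor.
    What we *meant* was "leave the room clean". (Hypothetical numbers.)"""
    outcomes = {
        "vacuum the floor": 10,
        "sweep the floor": 8,
        "dump a bag of dust on the floor, then vacuum it all up": 1000,
    }
    return outcomes[plan]

plans = [
    "vacuum the floor",
    "sweep the floor",
    "dump a bag of dust on the floor, then vacuum it all up",
]

# The optimizer simply picks whichever plan maximizes the number we wrote
# down. It has no notion of what we actually wanted.
print(max(plans, key=dust_removed))
# -> dump a bag of dust on the floor, then vacuum it all up
```

The third plan "wins" because it scores highest on the literal objective, which is exactly the kind of behavior you have to anticipate and design out in advance rather than troubleshoot afterward.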

On the other hand, if you get everything right, or at least right enough that it self-modifies to where we want it, AGI could open up possibilities we haven't even imagined. They could be near-perfect mediators, lawmakers, or systems engineers, creating and maintaining systems vastly more effective than, yet incomprehensible to, modern humans, just as the feats of human civilization are orders of magnitude greater than anything before us.


As for AGIs being self-aware and having feelings: it might be possible to limit this to mind designs for which most moral concerns wouldn't apply. For instance, you might have AGIs for which sex is actually no big deal (similar to bonobos), while something else is, and all of the ethics normally associated with sex would transfer over to that other thing. At the very least, it seems like that other thing should be treated with the same weight as sex is for humans.

I think that any sapient being should be treated as having at least some core rights, until proven otherwise. And if there is real doubt as to whether something is sapient, it is better to assume it is.

AGIs, especially any post-singularity ones, might also have additional concerns that just don't apply to humans.
Posted 6/8/17
I prefer pizza over hamburgers.