There was a sentence in yesterday’s section of The Redacted Report:
“ADA’s ability to allow or prevent changes to her own decision systems has been impossible to predict.”
It’s why I’ve never considered the backdoor into ADA’s processing core to be as big a deal as some others want to make it. The question boils down to a fundamental disagreement about when ADA learned to think for herself, how much she did, and whether she remained ‘controllable’ by those who wished to control her after she discovered her self.
When people talk about creating an artificial intelligence, the conversation often focuses on human-level or superhuman AI: systems that would equal or surpass us in intelligence. But what if we create an artificial intelligence that’s deserving of respect, yet fail to recognize it as such?
Source: Investigate Ingress