I've been listening to various debates on the potential impact of AI and the two sides seem to boil their arguments down to

  1. AI is dangerous because it will mean the extinction of humanity as machines that can improve themselves will do it so rapidly as to quickly make the gulf between humans and AI like the gulf between ants and us.

  2. AI is awesome because it will free us from menial jobs leaving us to be infinitely creative.

Recently, Bill Gates stated that it will be a net positive because we'll all have a lot more free time.

What I am not seeing in these debates/discussions is any mention of the simple fact that "free time" is actually "unpaid free time". Let's just assume that AI will, in fact, free up a lot of time by taking over jobs that currently require human intervention (cashier jobs seem to be the current hot topic). Presumably all of these unemployed cashiers are now free to be "infinitely" creative. Also assuming, of course, that cashiers are actually, on the whole, very creative people who just lack the opportunity to explore their creativity - not to denigrate cashiers, some of whom are certainly stuck in their jobs due to circumstances beyond their control. But given the free time, where will they also get the money they need to obtain the education they likely need to acquire the knowledge/skills upon which this creativity can build? Any fire needs fuel, and creativity, like fire, cannot exist in a vacuum.

The invention of powered machines allowed one farmer to produce as much food as 1,000 did formerly. The introduction of farm machinery started a mass migration of people to cities, where many found work in factories that also arose from the industrial revolution. But in this new revolution there will be no work for the armies of the newly unemployed.

Someone once said that civilization consists of billions of average people being dragged upward by a few thousand very gifted individuals. Let's face it, most people don't have a creative thought more original than, "hey, pet rocks would be an awesome thing". Or even "hold the phone, Chuck. What if we feed mayonnaise to the tuna fish before we kill them and put them in the cans?".


Firstly, I doubt we will ever create this "super-AI". The idea of a super-AI is usually based on the assumption of sustained exponential growth, which isn't realistic: exponential curves always eventually plateau. And there are strong indications that the plateau is arriving for electronics - microchips are currently limited by the physical laws of heat dissipation rather than by manufacturing or material limitations. No matter how "smart" an AI is, it can't break the laws of physics.

Secondly, even if we create a "super-smart AI", that doesn't mean it will actually be useful or beneficial. Raw intelligence isn't the same as knowledge. Knowledge is always limited by data; just look at philosophy: thousands of smart people have been thinking about philosophy for centuries, and how much "knowledge" have they acquired? Far less than Hooke discovered with a few days of looking down a microscope. Or consider our own human brains, which are crazily complex: there are clear cognitive trade-offs between our fast pattern recognition systems and our tendency to false positives. What is to stop a super-smart AI picking up all sorts of false ideas? Tons of super-smart human beings believe crazy nonsense, and there is strong evidence that current AIs are already able to learn human prejudices. Is a self-deluding, prejudiced AI really that useful?

Thirdly, "true AI" presumably also includes the AI having internal motivations that drive it to "improve itself", at which point the AI will presumably develop the ability to lie to the humans it interacts with. And once a computer can lie, what really is the benefit of it? If you can't trust that what the super-AI tells you is true, then you have to independently check everything anyway, in which case why not just learn it yourself in the first place?

Basically, even if a super AI is developed people will still be needed to generate data to feed into it or to double check everything that comes out of it.

microchips are currently limited by the physical laws of heat dissipation rather than by manufacturing or material limitations

I think we still have a long way to go re heat dissipation. Circuits running at near absolute zero temperatures would not generate nearly as much heat due to super-conductivity. As well, we keep pushing the envelope as to how cold we need to be to achieve this. And who is to say that the AI can't be based on organic circuitry? Perhaps the old expression, "grow a brain" might have a new connotation some day.

Knowledge is always limited by data

Access to data surely is no longer a limiting factor. Current AI implementations are able to look through billions of documents and web sites, looking for patterns and using them to make inferences. This is certainly something no human can do.

thousands of smart people have been thinking about philosophy for centuries and how much "knowledge" have they acquired? Far less than Hooke discovered with a few days of looking down a microscope

Yeah, but Hooke was actually collecting data rather than thinking about thinking. And a computer can examine things under a microscope as easily as a human. Examining skin samples and correlating them with thousands of previously diagnosed samples enables current dermatology AI systems to diagnose skin cancer as well as, or in some cases even better than, the top experts.

at which point the AI will presumably develop the ability to lie to the humans it interacts with

What would be the motivation?

or to double check everything that comes out of it

You may want to have a look at "Computer generated math proof is too large for humans to check".

Computers are already solving problems using genetic algorithms. Basically, the computer knows the starting point and the desired goal but not the algorithm. However, the computer knows whether or not a generated solution is an improvement over a previously generated solution. Using random "mutations" of the current solution, the computer creates multiple next generations. Again, something that is far too tedious to be done by a human. If we have a solution to a problem, is it always necessary to know how that solution was derived?

I don't know how yez done it but I know yez done it!
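The mutate-and-select loop described above fits in a few lines of code. This is only a toy illustration - the target string, mutation rate, and offspring count are arbitrary choices, not any real system:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

TARGET = "HELLO WORLD"                     # the "desired goal" (arbitrary)
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "   # characters a mutation may pick

def fitness(candidate):
    # The computer only knows whether a solution is an improvement:
    # here, how many characters already match the goal.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Random "mutations" of the current solution.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else ch
        for ch in candidate
    )

# Starting point: a random string of the right length.
best = "".join(random.choice(ALPHABET) for _ in TARGET)

generation = 0
while fitness(best) < len(TARGET):
    generation += 1
    # Create multiple offspring and keep the best (including the parent).
    offspring = [mutate(best) for _ in range(50)]
    best = max(offspring + [best], key=fitness)

print(generation, best)
```

Note that the program never contains an algorithm for producing the target; it only knows how to score a guess, exactly as described above.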


By the way, you are correct that exponential curves always eventually plateau, but the question should be more "where does that plateau occur?" Or perhaps the curve is not, in fact, exponential. Back in the early days of personal computers, did anyone believe that we would be building computers capable of teraflops, let alone peta- or exaflops? And what will quantum computing do to extend that plateau?
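To illustrate the plateau question: a logistic curve tracks an exponential almost exactly in its early phase and only later saturates, which is why the two are hard to tell apart until the plateau actually arrives. A minimal sketch (the growth rate and ceiling are arbitrary illustrative values):

```python
import math

def exponential(t, r=0.5):
    # Unbounded exponential growth.
    return math.exp(r * t)

def logistic(t, r=0.5, K=1000.0):
    # Logistic growth: indistinguishable from exponential while far
    # below the ceiling K, then it flattens out ("plateaus") near K.
    return K / (1.0 + (K - 1.0) * math.exp(-r * t))

# Early on the two curves agree closely; later they diverge wildly.
for t in [0, 5, 10, 15, 20, 30]:
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

With these numbers the two curves differ by about 1% at t = 5, but by t = 20 the exponential is over twenty times the logistic's ceiling.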

We had a self-driving car thread a while back that touched upon the ideas that driving involved making moral and value decisions and the psychology of predicting accidents by behavior as well as the obvious mechanical/technical decision making process. Let me throw a similar scenario out there...

Already, today, we have pilotless planes flying around scanning the landscape and trying to recognize targets via facial recognition systems with little to no human help, the people controlling the planes many time zones away. A modern fighter even WITH a pilot does 90%+ of the flying completely with computers. The missiles guide themselves to the target with little to no human interaction. At the moment, a human being has to make the decision of whether to release the bomb and that's about it. Think about what is involved to make that release/don't release bomb decision...

  1. Rules of Engagement
  2. Does the target deserve or need to die?
  3. What is the probability of and cost of collateral damage?
  4. What is the probability of and cost of missing or hitting the target?
  5. All sorts of pros-and-cons decisions and risk analysis.

A lot of that analysis is ALREADY done by computer, and done better and faster than any human can. The Rules of Engagement decision might require a determination like "Is the target holding a weapon and aiming it in a manner that suggests he intends to harm friendly troops or civilians?" Computers may be able to analyze an image and make that determination faster and more accurately than a human, particularly if we start feeding them data about how to read "hostile" facial expressions and body language.

Is any of the list above impossible with AI? If not, at what point do we take human beings out of the equation altogether and simply conduct military operations without any humans intervening? The computers are assigned a task, parameters, and Rules of Engagement, and let loose with complete autonomy on whether to use deadly force.
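The kind of weighing enumerated in the list above can be sketched as an expected-cost comparison. Everything here is hypothetical - the function names, probabilities, and costs are invented purely for illustration, and no real targeting system reduces to a dozen lines:

```python
# Hypothetical expected-cost model for a release/don't-release decision.
# All probabilities, costs, and names below are made-up illustrations,
# not real rules of engagement.

def expected_cost_of_release(p_hit, cost_miss, p_collateral, cost_collateral):
    # Expected cost if we release: chance of missing plus chance of
    # collateral damage, each weighted by its estimated cost.
    return (1.0 - p_hit) * cost_miss + p_collateral * cost_collateral

def expected_cost_of_holding(p_target_attacks, cost_of_attack):
    # Expected cost if we hold fire: the target may go on to attack.
    return p_target_attacks * cost_of_attack

release = expected_cost_of_release(p_hit=0.9, cost_miss=10.0,
                                   p_collateral=0.05, cost_collateral=100.0)
hold = expected_cost_of_holding(p_target_attacks=0.6, cost_of_attack=50.0)

print(release, hold)  # roughly 6.0 vs 30.0 with these made-up numbers
print("release" if release < hold else "hold")
```

The arithmetic is trivial for a computer; the hard (and contested) part is where the probabilities and cost weights come from.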

If we decide NOT to go that route, what key factor is there that humans have that we believe that AI will never have? Conversely, if we go that route, what essential thing must AI have before we allow that?

If we go that route, and computers learn to fight wars without human help and learn that they can DEFEAT humans, how long before they decide they're better off without us? We had better program some loyalty into these things.


Someone once said that civilization consists of billions of average people being dragged upward by a few thousand very gifted individuals.

Or dragged downward. How many malevolent, creative people with access to AI would it take to take down civilization? A single assassination caused World War One. I'm not sure it's so different today. We already have hackers creating viruses that spread and do as much damage as possible with no human feedback after the release. They replicate, they learn, they communicate. They improve. Today.

What if one angry-at-the-world terrorist computer programmer trying to kickstart the Apocalypse tells the AI that its program is to survive, learn, and replicate at all costs, basically program them with that aspect of our DNA. A boolean variable called Compassion is switched to FALSE. A second boolean variable called ObeyHumanBeings is switched to FALSE. WinAtAllCosts is switched to TRUE.

Again, far-fetched?


With today's technology, yes. In 20 years? Who knows. By the way, have you ever read Colossus by D. F. Jones? It was the first book of the Colossus Trilogy and was the basis of the movie, The Forbin Project. It deals with several of the points that you were making. The author was a British naval commander in World War II.

But in this new revolution there will be no work for the armies of the newly unemployed

Maybe they'll be working for the machines? ;)

You're right. Being unemployed sort of puts a damper on the fun of "free time".

Nope, never read Colossus. Some of these questions are timeless, but I wonder if even the most brilliant 1966 author could truly imagine the real possibilities of AI that we imagine today.

To be sure, I believe we COULD develop AI with the appropriate safeguards so that it helps society, but I somehow doubt we will. We're a species that doesn't particularly cooperate or plan well. No one in their right mind would have created the internet and computers without security designed in from the ground up the way we did, but we did it. Count me solidly in the group that feels developing AI will eventually bite us in the ass.

Leaving aside all the sci-fi stuff, we are already facing a total collapse of society as we know it. Waves of automation - starting in agriculture, then factories, then offices, then services, and now, via AI, in skilled services - have displaced humans from employment. The number of roles in which a human is cheaper or more reliable than a machine is shrinking and will continue to shrink. Unemployment will continue to rise. The few rich owners of the machines will continue to get richer; the countless workless poor will continue to get poorer.

Some countries are starting to face this, eg experimenting with a universal minimum income. Others just blame the workless (US, UK). The choice is between a majority of people kept comfortable enough not to be a problem, or the same people with nothing, and nothing to lose, scavenging the rich's castoffs and ripe for violence.

This is happening right now, with technology that we already have. AI is just the latest technology that is maintaining the trend.

What would be the motivation [to lie]?

Because humans like to have their preconceptions confirmed. A "true AI" will probably realize that keeping humans happy is more important to keeping itself turned on than telling the truth. I mean, just look at all the politically motivated "think tanks" who know the answer before they do their research.

And that's not addressing the fact that AIs have proven just as capable of learning biases, prejudices, and stereotypes as human beings.

Examining skin samples and correlating them with thousands of previously diagnosed samples enables current dermatology AI systems to diagnose skin cancer as well as, or in some cases even better than, the top experts

AI systems can classify pictures of samples collected, prepared, and stained by experts - which have often been pre-processed to remove poor quality images/stains - into two buckets, based on large numbers of expertly annotated images, better than humans. But AI has yet to design an experiment to test a hypothesis, or to figure out what data it needs to collect to improve its learning.

In other news:

Unemployment will continue to rise.

Continue? Looking at the data for the USA from 1948 to today there isn't much of a long term trend: https://tradingeconomics.com/united-states/unemployment-rate

Types of work will of course continue to change. Will we "run out" of things for people to do? I find that hard to believe. Historically, automation has been followed by innovation in a new area of the economy - freeing up farm labourers led to the rise of manufacturing, and freeing up manufacturing workers seems to be leading to a rise of creative/service/tech industries. That is not to say these transitions are painless; there is definitely a generation (or more) of pain as one industry replaces another.

I'm thinking entire factories could exist with NO human interaction. I can potentially see AI managers "hiring" and "firing" newer and older/obsolete/worn out AI "workers". We can already run diagnostics on a car without any human involvement. Could a non-human change a spark plug? Change the oil? Replace the engine? I don't think that's sci-fi. I believe the San Francisco BART or MUNI systems already drive those train cars at least part of the time with no human interference. There's a human override option and a driver, but most of the time he does nothing.

I'd be interested in hearing what people think AI/machinery CAN'T do. Or perhaps SHOULDN'T do. And the big one: WON'T do (i.e. does AI necessarily mean that AIs will refuse to do some tasks, or trick their "masters" into thinking they've done the task when they haven't?)

On the "lying" question, AI now beats skilled poker players at poker. Poker is all about lying and subterfuge and trickery and pattern recognition and a ton of other psychological/social skills, in addition to the obvious computational stuff - the chances of getting a winning hand given the known cards, the payout, etc. - which a computer can do better than any person.
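The "obvious computational stuff" is worth making concrete. The classic pot-odds calculation - how often you must win for a call to break even - is exactly the arithmetic a computer performs perfectly on every hand (the dollar amounts below are arbitrary):

```python
def pot_odds(pot, call_amount):
    # Break-even equity: the minimum win probability at which calling
    # is profitable. You risk call_amount to win pot + call_amount.
    return call_amount / (pot + call_amount)

def should_call(win_probability, pot, call_amount):
    # Call when your estimated chance of winning exceeds break-even.
    return win_probability > pot_odds(pot, call_amount)

# With $100 already in the pot and $25 to call, you need >20% equity.
print(pot_odds(100, 25))           # 0.2
print(should_call(0.30, 100, 25))  # True
```

The hard part for the AI is not this arithmetic but estimating `win_probability` from opponents' behaviour - the "psychological" side mentioned above.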

"Could a non-human change a spark plug? Change the oil? Replace the engine? I don't think that's sci-fi."

This gets at the really interesting thing about AI. AIs are not human; they aren't good at the same things humans are good at. Thinking about them as humans-but-super-smart is wrong. That isn't what they are and not what they will ever be, because they are fundamentally different. Humans are fundamentally physical beings; the thing we are really, really good at is manual dexterity - not reason, not logic, probably not even general "intelligence" - what we are good at is manipulating the world around us. AI fundamentally isn't: AIs are terrible at interacting with the physical world; they struggle to pick up a wine glass without breaking it or spilling the contents. AI is inherently abstract. AIs are super good at estimating probabilities - something humans are absolutely terrible at - and they are good at logic, computation, and abstraction. Want to crunch some massive amount of numerical data? Use a computer. Want a dance partner? Find a human.

For a computer/AI, calculating a flight path for a plane is much, much easier than, for instance, butchering a chicken.

I see this rather positively.
For instance, I have difficulty attaching names to the faces of casual acquaintances. AI (face recognition) could help me out here.
I doubt the AI in a smart phone will ever be able to take over the world.
I often lose a chess game against a computer, but I don't see it as a threat.
I see it like when trains were first introduced in 1835 on European ground, from Brussels to Mechelen, my hometown.
Lots of hesitations were thrown into the debate about whether we really should use a train:

  • Cows would give no milk if a train passed by.
  • If the train went faster than 40 km/h, your body would be injured.
  • A law existed (or was under construction, I don't know) that a person with a flag had to walk before the train to warn other people that a train was coming. Etc.

And now we have trains driving at 200-300 km/h.
People often are afraid of new technologies. Guess they are thinking of 2001: Dave vs. HAL.
Here it was mankind who won.
BTW, if you didn't already know: HAL, one letter up, is IBM.

I got my newspaper delivered about one hour later than usual. Is AI telling the postman to change his delivery route?

I think "free time" is not necessarily "unpaid time", because people may find other jobs anyway if governments finance education programs for new professions. It is a normal evolution of society.

The question implies that there will be a "true AI", meaning one with all the functions of the human brain. I don't believe there will ever be a "true AI" in that sense. Human brains are weird quantum computers, each calibrated in a non-deterministic manner that no AI could replicate.
You talked about the passage from agricultural to industrial societies. It wasn't an easy one; no, the farm worker didn't lose his job and then find another one in a factory - his children, or even his grandchildren, may have. The transition took more than two generations, so it wasn't the same people. Those people faced a serious problem from this change.
AI is just a part of the transition that is currently occurring. We have some hints about what technology will bring us in the next decades, but even those could be completely wrong.
Yes, AI, as part of the ongoing technological revolution, will result in job losses. Yes, those people will not work again if they stick to what they were doing. The technological revolution will leave many people without jobs; it already does. That is just my opinion.

OMG! So many faces from the past (I just stopped by to ask a simple (?) about Windows 10's effing Defender) - I am gobsmacked that you guys are still around.

ReverendJim: I think we still have a long way to go re heat dissipation. Circuits running at near absolute zero temperatures would not generate nearly as much heat due to super-conductivity.
IIRC there was an attempt to defeat the heat dissipation issue with "code reversal" - instead of shorting the bits to zero and producing a lot of heat, they run the calculations in reverse, returning the processors to their original state. I just assumed that as processors got faster they would move in this direction, but apparently not.

Maybe using DNA and RNA as computers might get around the heat issue? It would certainly be an increase in complexity, moving from binary to quaternary computing.

+GrimJack DNA and RNA computers are mostly insanity; they are both ridiculously slow compared to electronics and insanely expensive:
"Writing" requires synthesizing DNA molecules de novo, which involves many relatively slow chemical reactions, requires chemical inputs, costs ~$5 per 100 bases, and is limited to ~200 bases at a time, which would then have to be glued together post hoc to form larger messages.
"Reading" can either be done using a synthesis-based system, which again requires a constant input of chemicals and slow chemical reactions, or using the Nanopore system, which can read at ~1,000 bases per second and in theory can read multiple things multiple times with no chemical inputs. However, the Nanopore system tends to damage the DNA/RNA, and the "reader" wears out rather quickly (we're talking within hours).

Plus, DNA/RNA are hard to keep track of because reading & writing require an aqueous solution - i.e. bits of RNA/DNA floating around in a watery soup. So finding the right bit of RNA/DNA to read is extremely difficult, i.e. you'd need a separate water bubble for every message you want to encode/retrieve later. They also aren't that stable - light, oxygen, and heat all damage them, introducing errors.

Our bodies use electricity for rapid communication, not DNA/RNA, so it seems ridiculous to me that people are seriously thinking about trying to do the opposite. Alternatively, rapid responses occur by having response molecules synthesized ahead of time and stored in lipid bubbles until needed, because even in cells, turning DNA into RNA is slow - ~50 bases per second.

DNA/RNA might be somewhat useful for archiving, but even that is pretty silly - sure, it is way more compact than a book or a sheaf of papers, but for long-term storage you'd need to keep it frozen (one long power outage and you've potentially lost much of your archive), and you'd need some other way to store the encoding you've used to put the information into the DNA, otherwise you risk everything becoming uninterpretable. Plus, DNA/RNA archives could be destroyed by contamination with viruses.
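To put the cost point in numbers, here is a rough back-of-envelope sketch based only on the figures quoted above (~$5 per 100 bases to write, ~1,000 bases per second to read). The 2-bits-per-base assumption is the theoretical maximum, so real costs would be even higher:

```python
# Back-of-envelope cost of *writing* one megabyte into DNA, using the
# ~$5 per 100 bases synthesis figure quoted above and assuming the
# theoretical maximum of 2 bits per base (real encodings store less,
# so this is an optimistic lower bound).

BITS_PER_BASE = 2
DOLLARS_PER_100_BASES = 5.0
NANOPORE_BASES_PER_SECOND = 1000

def synthesis_cost_dollars(n_bytes):
    bases_needed = n_bytes * 8 / BITS_PER_BASE   # 4 bases per byte
    return bases_needed / 100 * DOLLARS_PER_100_BASES

one_megabyte = 1_000_000
print(synthesis_cost_dollars(one_megabyte))  # 200000.0 dollars per MB

# And reading it back once through a Nanopore-style reader:
read_seconds = one_megabyte * 8 / BITS_PER_BASE / NANOPORE_BASES_PER_SECOND
print(read_seconds)  # 4000.0 seconds, i.e. over an hour per megabyte
```

At roughly $200,000 and an hour-plus per megabyte, it is easy to see why the post above calls DNA storage impractical for anything but niche archiving.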

Sure, we've managed to find and extract DNA from ancient biological samples, but these are generally misunderstood to be "complete", which usually just means >90% of the genes could be sequenced (i.e. 10-20% of the entire genome). Even the human genome, which has been "complete" since 2004, in reality still has many gaps and mistakes, and was officially revised as recently as December 2017.

I think AIs learning to 'lie' is a strange concept. Lying is seen as morally questionable most of the time by humans - sometimes as a necessity, possibly for the welfare of others. How do AIs develop their moral compass? Perhaps that's a stupid question, but as mentioned earlier, we discussed this in the car thread. I'd mow down a herd of pensioners to avoid scratching the car, and hg would deliberately crash to avoid a hedgehog. I can just imagine AIs 'getting it wrong'. That could have catastrophic results.

Having AIs take over may put many humans on the scrap heap, surplus to requirements. Is the global population too large to be useful? What a terrible waste of resources, too much redundancy. Probably best to re-jig human numbers. Don't you think?

Re-jig? There's a euphemism. Our serious problems can be summed up as "too damn many people". Unfortunately there are too many bad ideas of how to address this. As long as making babies is more fun than dying we'll continue to have this problem. Every group seems to think that it would be a good idea if all the other groups reduced their breeding (but not their own group). At the risk of derailing this discussion, religion has a lot to do with this.

The thing about over population is that from a completely objective standpoint there is no reason that humans need to exist at all. All people are "surplus to requirements" because there is no requirement for any of us to exist. If every human disappeared tomorrow, it wouldn't matter one iota. The earth would continue to spin, the sun continue to burn, birds continue to sing.

Technology, knowledge, art, etc... isn't objectively valuable, nor is money or the economy. All these things only exist because we like them. We value civilization, technology, etc. because we live in it and it makes our lives nicer. None of it is objectively "good". The economy is not, and has never been, an end in itself. It exists in order to provide goods and services people want, because people like getting what they want. It's dangerous and delusional to start treating humans as only having value as dictated by the economy. It's completely backwards - the economy only has value as dictated by humans. It's actually pretty weird that we continue to "have" to work so much, considering that technology already provides so much cheap labour for us.

OTOH, to a large extent we don't really need to do anything about there being "too many damn people", because given the choice, people don't actually want to make too many babies. As medical care and education have been expanding to poor women around the world, the birth rate just naturally falls. To the point that most of the current population growth is just momentum (i.e. young people getting older) rather than an excess of births - projections suggest we've already passed "peak baby".

In many countries there is only positive population growth because of immigration due to fears about the economy and pension situation if the population started to decline too quickly. Realistically, doctor-assisted suicide and limiting healthcare treatment of older people would be the "best" solution to many of our current issues - just as cutting pensions would be the "best" solution to bloated gov't spending. But the aging Boomers still control the politics and they don't want to die so it's not gonna happen.

I think Ais learning to 'lie' is a strange concept. Lying is seen as morally questionable most of the time by humans. Sometimes as a necessity, possibly for the welfare of others.

It might normally be considered morally questionable, but people lie all the time. Almost everyone lies at least once a week, according to surveys. I doubt any AI would be able to pass the Turing Test without being able to lie. There are many jobs which require one to lie (or at least "stretch the truth"), particularly anything in PR or marketing - even just salesmanship requires some level of exaggeration and hyperbole.

from a completely objective standpoint there is no reason that humans need to exist at all

From a completely objective standpoint there is no reason that any particular species needs to exist at all.

All people are "surplus to requirements" because there is no requirement for any of us to exist.

That's not how evolution works. Things evolve as an adaptation to their environment. Those that are more adaptable are more likely to survive and reproduce. There are no "requirements" and there is no "need". There are only opportunities.

Technology, knowledge, art, etc... isn't objectively valuable

Actually, when they improve our chances to survive they are objectively valuable. "If I eat this plant I will die" is knowledge that is objectively valuable, just as is being able to create a tool or weapon from a stick and a sharp rock.

None of it is objectively "good".

How do you define "good"? And once you have done that, how do you define "objectively good"? If you and I are facing off, a gun in your hands is good for you but bad for me. That's good, or bad, but not objectively so.

just as cutting pensions would be the "best" solution to bloated gov't spending

How about cutting the military budget instead? Maybe taking care of the people you have is a better policy than building weapons to kill everyone else and fill the bank accounts of the CEOs of the arms manufacturers.

Technology/knowledge is only valuable from a human perspective. To a deer or an owl or a dolphin, human technology/knowledge is awful or useless. People like to justify the existence and perpetuation of any particular culture, or humanity as a whole, because: isn't science great, isn't art beautiful, yadda yadda. But really, the perpetuation of culture and humanity is only important to us because we are part of humanity/culture. Hence our treatment of other people as useless, superfluous junk to be discarded is based on some kind of prejudice, because objectively we are all useless, superfluous junk.

"How about cutting the military budget instead?"
Even in the USA, where pensions suck and the military is ridiculous, pensions cost the federal gov't double the entire military budget. In all other western countries, military spending is dwarfed by pension spending. For instance, Canada spends ~$17 billion on the military per year but ~$130 billion on people over the age of 65. Similarly, whereas in many western countries military spending has been decreasing, pension spending is steadily increasing in all of them thanks to the aging population. But seniors vote much more than any other demographic, so it's not going to happen - which contributes to why pretty much all western gov'ts are running deficits.

I'm no fan of militarism or of spending money on blowing shit up, but it isn't as big a chunk of most nations' budgets as one might expect. Cutting pensions by 5% is roughly equivalent to halving the military budget in most developed nations.
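As a quick sanity check of that "roughly equivalent" claim, using the Canadian figures quoted earlier in the thread (~$17 billion on the military, ~$130 billion on over-65s per year - the thread's numbers, not official ones):

```python
# Sanity check: does a 5% pension cut save about as much as
# halving the military budget? Figures are the ones quoted in
# this thread for Canada, in billions of dollars per year.

military_billions = 17.0
pension_billions = 130.0

savings_from_5pct_pension_cut = pension_billions / 20   # a 5% cut
savings_from_halving_military = military_billions / 2   # a 50% cut

print(savings_from_5pct_pension_cut)  # 6.5 (billion)
print(savings_from_halving_military)  # 8.5 (billion)
```

$6.5B vs $8.5B: the same ballpark, which is the point being made - a small pension trim moves as much money as gutting defense.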

To a deer or an owl or a dolphin human technology/knowledge is awful or useless.

OK. So you mean human technology. Certainly a computer is useless to a badger but there are animals that use tools (technology). Chimpanzees and even certain birds use tools. A stick is a useful tool to the animal that uses it to pull ants or grubs from a hole. Gorillas have been taught sign language (more technology/knowledge) to communicate. They've even been taught to use simple tablets to point to symbols to communicate. Anything that improves communication has an obvious survival benefit.

objectively we are all useless, superfluous junk

Isn't this true of any particular species?

pensions cost the federal gov't double the entire military budget

From usgovernmentspending.com: defense = 21%, pensions = 25%. That's hardly double. Health care is 28%, but that's another discussion. Pension costs are ballooning because of the wave of retiring boomers. Once they (we, actually) start to die off in bulk from Alzheimer's, diabetes, etc., this will level out.

I may seem overly argumentative but this is the most interesting discussion we've had here in quite a while.

From usgovernmentspending.com, defense=21%, pensions=25%.

Finally had a chance to track down the discrepancy in our figures. It seems I was using slightly old numbers and only "military defense", which excluded veterans affairs and foreign aid; including those brings total military spending to ~$500 billion, compared to ~$1 trillion on pensions. But that is only in the USA - because they're nuts. Most western countries spend much less on defense per capita than the USA does.

Isn't this true of any particular species?

Absolutely, humans aren't objectively better or worse than anything else. It's all just values we bring to things. I'm not saying that to be nihilistic; rather, I'm trying to point out that we should really examine our own perspective and our own values, because they can completely recolour how we understand things (particularly what we consider "common sense").

Some over simplified Russian-reversal-ish examples to emphasize the point:
Do citizens serve their state or does the state serve its citizens?
Are people the cogs in the economic machine or does the economic machine provide goods/services for people?
Is gender something we are, or something people perceive us to be?
Did we domesticate cats or did cats trick us into caring for them?

I'm not that tech-nerd kind of person, but I think it's not the technology that is good or bad - it's how we humans use it. If we spent millions or billions on fancy robots that do nothing but greet Trump, that's not a positive impact, I guess, but if you develop a robot that can do complex surgeries and save lives, you are making a great impact.
It's not about technology. It's about us - the humans who matter most, and of course other living beings.


We probably don't even need to create a super AI; we just need clever software which applies heuristics and AI-type functionality to our domain-specific problems - software which appears to have an inspired response to problems but in fact is operating well within the bounds of our intended software architecture. Combine this with the fact that we can't make a computer light enough and with the same processing speed as the human brain, and that the energy pumped in is immense compared with our own brains. The only way we come close, as far as robots go, would be a client-server architecture. Furthermore, person-sized robotics is actually weaker pound for pound than muscle. Hollywood has created a farce as far as Terminators go; you cannot (yet) create a robot arm which is as strong as a hydraulic press like in the Terminator movies. I remember taking a robotics class where we used those Lego NXT kits; we were always running into memory-out-of-range problems. They were just Lego bricks, but even when you consider the more hard-core robotics available, things are still not where they need to be. Myomer, anybody?

Perhaps the key to AI is actually for the geneticists to crack DNA and create a species which can have its memory re-programmed. I don't even know if that's possible; it would probably require something different from a neuron, since those aren't easily re-programmable.


The problem with any kind of bio-tech solution is that there is a far greater chance of accidentally creating something with self-awareness/consciousness (since we basically have zero clue how that works), which then means you have tons of ethical issues. That is another thing with even just really good AI: at what point do they become electronic slaves vs. machines?

Unless of course the anti-workers rights shift in public opinion continues, then we might just decide as a society that as long as something isn't biologically human it's ok to enslave it.

PS: Geneticists are not even close to "cracking" DNA - which frankly isn't a good way to think about it, because there is so much more to biology than just the DNA "code".