
I've been listening to various debates on the potential impact of AI, and the two sides seem to boil their arguments down to:

  1. AI is dangerous because it will mean the extinction of humanity, as machines that can improve themselves will do so at a pace that quickly makes the gulf between humans and AI like the gulf between ants and us.

  2. AI is awesome because it will free us from menial jobs, leaving us to be infinitely creative.

Recently, Bill Gates stated that it will be a net positive because we'll all have a lot more free time.

What I am not seeing in these debates/discussions is any acknowledgement of the simple fact that "free time" is actually "unpaid free time". Let's just assume that AI will, in fact, free up a lot of time by taking over jobs that currently require human intervention (cashier jobs seem to be the current hot topic). Presumably all of these unemployed cashiers are now free to be "infinitely" creative. Also assuming, of course, that cashiers are actually, on the whole, very creative people who just lack the opportunity to explore their creativity - not to denigrate cashiers, some of whom are certainly stuck in their jobs due to circumstances beyond their control. But given the free time, where will they get the money they need to obtain the education, knowledge, and skills upon which this creativity can build? Any fire needs fuel, and creativity, like fire, cannot exist in a vacuum.

The invention of powered machines allowed one farmer to produce as much food as 1000 did formerly. The introduction of farm machinery started a mass migration of people to cities where many found work in factories that also arose from the industrial revolution. But in this new revolution there will be no work for the armies of the newly unemployed.

Someone once said that civilization consists of billions of average people being dragged upward by a few thousand very gifted individuals. Let's face it, most people don't have a creative thought more original than, "Hey, pet rocks would be an awesome thing." Or even, "Hold the phone, Chuck. What if we feed mayonnaise to the tuna fish before we kill them and put them in the cans?"

Thoughts?


Firstly, I doubt we will ever create this "super-AI", because the idea of a super-AI is usually based on the assumption of sustained exponential growth, which isn't realistic: growth curves that look exponential always eventually plateau. And there are strong indications that the plateau is arriving for electronics - microchips are currently limited by the physical laws of heat dissipation rather than by manufacturing or material limitations. No matter how "smart" an AI is, it can't break the laws of physics.
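
For illustration, here is a minimal sketch of that distinction (the constants are arbitrary): a growth process that looks exponential early on but is actually logistic flattens out as it approaches a physical ceiling.

    import math

    CEILING = 1000.0   # stand-in for a physical limit, e.g. heat dissipation

    def exponential(t):
        # Pure exponential growth: doubles forever, never plateaus.
        return math.exp(t)

    def logistic(t):
        # Looks exponential at first, then saturates at the ceiling.
        return CEILING / (1 + math.exp(8 - t))

    for t in range(0, 17, 4):
        print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):8.1f}")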

Secondly, even if we create a "super-smart AI", that doesn't mean it will actually be useful/beneficial. Raw intelligence isn't the same as knowledge. Knowledge is always limited by data; just look at philosophy: thousands of smart people have been thinking about philosophy for centuries and how much "knowledge" have they acquired? Far less than Hooke discovered with a few days of looking down a microscope. Or consider our own human brains, which are crazily complex: there are clear cognitive trade-offs between our fast pattern-recognition systems and our tendency toward false positives. What is to stop a super-smart AI picking up all sorts of false ideas? Tons of super-smart human beings believe crazy nonsense, and there is strong evidence that current AIs are already able to learn human prejudices. Is a self-deluding, prejudiced AI really that useful?

Third, "true AI" presumably also includes the AI having internal motivations to drive it to "improve itself", at which point the AI will presumably develop the ability to lie to the humans it interacts with. And once a computer can lie really what is the benefit of it? If you can't trust what the super-AI tells you is true then you have to independently check everything anyway in which case why not just learn it yourself in the first place?

Basically, even if a super AI is developed, people will still be needed to generate data to feed into it or to double check everything that comes out of it.


microchips are currently limited by the physical laws of heat dissipation rather than by manufacturing or material limitations

I think we still have a long way to go re heat dissipation. Circuits running at near absolute zero temperatures would not generate nearly as much heat, due to superconductivity. As well, we keep pushing the envelope as to how cold we need to be to achieve this. And who is to say that the AI can't be based on organic circuitry? Perhaps the old expression "grow a brain" might have a new connotation some day.

Knowledge is always limited by data

Access to data surely is no longer a limiting factor. Current AI implementations are able to look through billions of documents and web sites, looking for patterns and using them to make inferences. This is certainly something no human can do.

thousands of smart people have been thinking about philosophy for centuries and how much "knowledge" have they acquired? Far less than Hooke discovered with a few days of looking down a microscope

Yeah, but Hooke was actually collecting data rather than just thinking about thinking. And a computer can examine things under a microscope as easily as a human. Examining skin samples and correlating them with thousands of previously diagnosed samples enables current dermatology AI systems to diagnose skin cancer as well as, or in some cases even better than, the top experts.

at which point the AI will presumably develop the ability to lie to the humans it interacts with

What would be the motivation?

or to double check everything that comes out of it

You may want to have a look at "Computer generated math proof is too large for humans to check".

Computers are already solving problems using genetic algorithms. Basically, the computer knows the starting point and the desired goal but not the algorithm. However, the computer knows whether or not a generated solution is an improvement over a previously generated solution. Using random "mutations" of the current solution, the computer creates multiple next generations. Again, this is something far too tedious to be done by a human. If we have a solution to a problem, is it always necessary to know how that solution was derived?

I don't know how yez done it but I know yez done it!

-Bugsy
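
For anyone who hasn't seen one, here is a minimal sketch of the idea (Dawkins' classic "weasel" example; the target string and mutation rate are arbitrary choices for illustration). The program knows only the starting point, the goal, and whether a mutation is an improvement - it is never given an algorithm for producing the answer.

    import random
    import string

    TARGET = "METHINKS IT IS LIKE A WEASEL"   # the desired goal
    CHARS = string.ascii_uppercase + " "

    def mutate(parent, rate=0.05):
        # Copy the parent, randomly changing each character with some probability.
        return "".join(c if random.random() > rate else random.choice(CHARS) for c in parent)

    def fitness(candidate):
        # Score a candidate: how many characters already match the target?
        return sum(a == b for a, b in zip(candidate, TARGET))

    parent = "".join(random.choice(CHARS) for _ in TARGET)   # random starting point
    generation = 0
    while parent != TARGET:
        generation += 1
        # Breed a batch of mutated offspring and keep the fittest survivor.
        offspring = [mutate(parent) for _ in range(100)]
        parent = max(offspring + [parent], key=fitness)

    print(f"Reached the target in {generation} generations.")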


By the way, you are correct that exponential curves always eventually plateau, but the question should be more "where does that plateau occur?" Or perhaps the curve is not, in fact, exponential. Back in the early days of personal computers, did anyone believe that we would be building computers capable of teraflops, let alone peta- or exaflops? And what will quantum computing do to extend that plateau?


We had a self-driving car thread a while back that touched upon the ideas that driving involves making moral and value decisions and the psychology of predicting accidents from behavior, as well as the obvious mechanical/technical decision-making process. Let me throw a similar scenario out there...

Already, today, we have pilotless planes flying around scanning the landscape and trying to recognize targets via facial recognition systems with little to no human help, the people controlling the planes many time zones away. A modern fighter, even WITH a pilot, does 90%+ of the flying completely with computers. The missiles guide themselves to the target with little to no human interaction. At the moment, a human being has to make the decision of whether to release the bomb, and that's about it. Think about what is involved in making that release/don't-release decision...

  1. Rules of Engagement
  2. Does the target deserve or need to die?
  3. What is the probability of and cost of collateral damage?
  4. What is the probability of and cost of missing or hitting the target?
  5. All sorts of pros-and-cons decisions and risk analysis

A lot of that analysis is ALREADY done by computer, and done better and faster than any human can. The Rules of Engagement decision might require a determination like "Is the target holding a weapon and aiming it in a manner that suggests he intends to harm friendly troops or civilians?" Computers may be able to analyze an image and make that determination faster and more accurately than a human, particularly if we start feeding them data about how to read "hostile" facial expressions and body language.

Is any of the list above impossible with AI? If not, at what point do we take human beings out of the equation altogether and simply conduct military operations without any humans intervening? The computers are assigned a task, parameters, and Rules of Engagement, and are let loose with complete autonomy on whether to use deadly force.

If we decide NOT to go that route, what key factor do humans have that we believe AI will never have? Conversely, if we do go that route, what essential thing must AI have before we allow it?

If we go that route, and computers learn to fight wars without human help and learn that they can DEFEAT humans, how long before they decide they're better off without us? We had better program some loyalty into these things.

Far-fetched?


Someone once said that civilization consists of billions of average people being dragged upward by a few thousand very gifted individuals.

Or dragged downward. How many malevolent, creative people with access to AI would it take to take down civilization? A single assassination sparked World War One. I'm not sure it's so different today. We already have hackers creating viruses that spread and do as much damage as possible with no human feedback after release. They replicate, they learn, they communicate. They improve. Today.

What if one angry-at-the-world terrorist computer programmer trying to kickstart the Apocalypse tells the AI that its program is to survive, learn, and replicate at all costs, basically programming it with that aspect of our DNA? A boolean variable called Compassion is switched to FALSE. A second boolean variable called ObeyHumanBeings is switched to FALSE. WinAtAllCosts is switched to TRUE.

Again, far-fetched?


far-fetched?

With today's technology, yes. In 20 years? Who knows. By the way, have you ever read Colossus by D. F. Jones? It was the first book of the Colossus Trilogy and was the basis of the movie Colossus: The Forbin Project. It deals with several of the points that you were making. The author was a British naval commander in World War II.


But in this new revolution there will be no work for the armies of the newly unemployed

Maybe they'll be working for the machines? ;)

You're right. Being unemployed sort of puts a damper on the fun of "free time".

Nope, never read Colossus. Some of these questions are timeless, but I wonder if even the most brilliant 1966 author could truly imagine the real possibilities of AI that we imagine today.

To be sure, I believe we COULD develop AI, with the appropriate safeguards, that helps society, but I somehow doubt we will. We're a species that doesn't particularly cooperate or plan well. No one in their right mind would have created the internet and computers with security so lacking from the ground up, but we did it. Count me solidly in the group that feels developing AI will eventually bite us in the ass.


Leaving aside all the sci-fi stuff, we are already facing a total collapse of society as we know it. Waves of automation - starting in agriculture, then factories, then offices, then services, and now, via AI, in skilled services - have displaced humans from employment. The number of roles in which a human is cheaper or more reliable than a machine is shrinking and will continue to shrink. Unemployment will continue to rise. The few rich owners of the machines will continue to get richer; the countless workless poor will continue to get poorer.

Some countries are starting to face this, e.g. by experimenting with a universal minimum income. Others just blame the workless (US, UK). The choice is between a majority of people kept comfortable enough not to be a problem, or the same people with nothing, and nothing to lose, scavenging the rich's castoffs and ripe for violence.

This is happening right now, with technology that we already have. AI is just the latest technology maintaining the trend.


What would be the motivation [to lie]?

Because humans like to have their preconceptions confirmed. A "true AI" will probably realize that keeping humans happy is more important to keeping itself turned on than telling the truth. I mean, just look at all the politically motivated "think tanks" who know the answer before they do their research.

And that's not even addressing the fact that AIs have proven just as capable of learning biases, prejudices, and stereotypes as human beings.

Examining skin samples and correlating them with thousands of previously diagnosed samples enables current dermatology AI systems to diagnose skin cancer as well as, or in some cases even better than, the top experts

AI systems can classify pictures of samples (collected, prepared, and stained by experts, and often pre-processed to remove poor-quality images/stains) into two buckets, based on large numbers of expertly annotated images, better than humans can. But AI has yet to design an experiment to test a hypothesis, or to figure out what data it needs to collect to improve its learning.
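
To make that dependency on expert labels concrete, here is a minimal sketch of the supervised setup being described, using scikit-learn's built-in breast-cancer dataset as a stand-in for the dermatology images (the dataset and model choices are illustrative assumptions, not the actual systems in question):

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # X: features measured from expertly prepared samples.
    # y: the two "buckets" - every label was assigned by a human expert.
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
    print(f"Accuracy on held-out samples: {model.score(X_test, y_test):.2%}")

    # The model only interpolates within the expert-labelled data; nothing
    # here designs an experiment or decides what new data to collect.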

In other news:

Unemployment will continue to rise.

Continue? Looking at the data for the USA from 1948 to today, there isn't much of a long-term trend: https://tradingeconomics.com/united-states/unemployment-rate

Types of work will of course continue to change. Will we "run out" of things for people to do? I find that hard to believe. Historically, automation has been followed by innovation in a new area of the economy: freeing up farm labourers led to the rise of manufacturing, and freeing up manufacturing workers seems to be leading to a rise in the creative/service/tech industries. That is not to say these transitions are painless; there is definitely a generation (or more) of pain as one industry replaces another.


I'm thinking entire factories could exist with NO human interaction. I can potentially see AI managers "hiring" and "firing" newer and older/obsolete/worn-out AI "workers". We can already run diagnostics on a car without any human involvement. Could a non-human change a spark plug? Change the oil? Replace the engine? I don't think that's sci-fi. I believe the San Francisco BART or MUNI systems already drive those train cars at least part of the time with no human interference. There's a human override option and a driver, but most of the time he does nothing.

I'd be interested in hearing what people think AI/machinery CAN'T do. Or perhaps SHOULDN'T do. And the big one: WON'T do (i.e., does true AI necessarily imply that an AI will refuse to do some tasks, or trick its "masters" into thinking it has done a task when it hasn't?)

On the "lying" question, AI now beats skilled poker players in poker. Poker is all about lying and subterfuge and trickery and pattern recognition and a ton of other psychological/social skills in addition to the obvious computational stuff: the chances of getting a winning hand given the known cards, the payout, etc., which a computer can do better than any person.


" Could a non-human change a spark plug? Changle the oil? Replace the engine? I don't think that's sci-fi."

This gets at sort of the really interesting thing about AI. AIs are not human; they aren't good at the same things humans are good at. Thinking about them as super-smart humans is wrong. That isn't what they are and not what they will ever be, because they are fundamentally different. Humans are fundamentally physical beings; the thing we are really, really good at is manual dexterity - not reason, not logic, probably not even general "intelligence" - what we are good at is manipulating the world around us. AI fundamentally isn't physical: AIs are terrible at interacting with the physical world, and they struggle to pick up a wine glass without breaking it or spilling the contents. AI is inherently abstract: AIs are super good at estimating probabilities - something humans are absolutely terrible at - and good at logic, computation, and abstraction. Want to crunch a massive amount of numerical data? Use a computer. Want a dance partner? Find a human.

For a computer/AI, calculating a flight path for a plane is much, much easier than, for instance, butchering a chicken.


I see this rather positively.
For instance, I have difficulty attaching a name to the face of a casual acquaintance. AI (face recognition) could help me out here.
I doubt the AI in a smartphone will ever be able to take over the world.
I often lose chess games against a computer, but I don't see it as a threat.
I see it like when trains were first introduced on European soil in 1835, from Brussels to Mechelen, my hometown.
Lots of objections were thrown into the debate about whether we really should use trains:

  • Cows would give no milk if a train passed by.
  • If the train went faster than 40 km/h, your body would be injured.
  • A law existed (or was in the works, I don't know) that a person with a flag had to walk in front of the train to warn other people that a train was coming. Etc.

And now we have trains driving at 200-300 km/h.
People often are afraid of new technologies. Guess they are thinking of 2001: Dave vs. HAL.
There it was mankind who won.
BTW, if you didn't already know: HAL, with each letter shifted one up, is IBM.


My newspaper now gets delivered about an hour later than usual. Is AI telling the postman to change his delivery route?
